
It’s not just “cute” art—it’s corporate surveillance and environmental extraction. Here’s why you should skip the latest AI bandwagon.
“Create a caricature of me and my job based on everything you know about me.” This is the latest ChatGPT-fueled social media trend. You upload a photo, let AI do its thing, and out comes a cartoon of you—surrounded by symbols of your career, hobbies, interests, personality traits, and lifestyle. It can be eerily accurate. When it’s not, users simply feed it more details to get things “right.”
By now, you’ve probably joined the bandwagon. It’s cute and validating. Everyone’s doing it, and FOMO is real. Hate to be a killjoy, but instead of feeling flattered, you should feel uneasy. Alarmed, even.
The high price of accuracy
Ask yourself: Why does ChatGPT know that much about you? Memory feature aside, the accuracy of its outputs reflects how much information people have willingly—and unwittingly—shared with it. Lives become data, which corporations harvest and monetize. As Data Ethics PH pointed out, the more accurate the output, the deeper the surveillance. You and thousands of others become easy marks.
Not to sound alarmist, but we’ve seen how cybercriminals exploit sensitive information. Faces plus professional and personal details equal deepfakes, identity theft, and fraud. All it takes is a selfie and some “harmless” details shared for clout.
Media as an enabler
What’s more disturbing is how some local media organizations have hopped onto the trend without hesitation. One digital publication with over a million Facebook followers published a how-to guide using its own writers’ faces. Its parent newspaper, with 11 million followers, amplified the story, effectively urging the public to follow suit.
To their credit, many readers pushed back. “This is definitely not the right thing to advertise,” a commenter said. A sharer didn’t mince words: “If you still participate in dogsh*t AI trends like this in 2026, you are CRINGE.”
When newsrooms normalize AI gimmicks, they legitimize them. A how-to guide isn’t a neutral report. It’s an endorsement that signals fun and harmlessness, brushing the real risks aside. In chasing engagement, the media forgets its duties: to serve the public interest and hold institutions accountable.
And joining the trend costs not only data privacy but also the environment.
The era of “Water Bankruptcy”
Generative AI systems run on massive data centers, and training and running these models require continuous cooling—mostly through millions of liters of water. Last January, the United Nations University Institute for Water, Environment, and Health reported that the world has entered an era of global “water bankruptcy.”
Every prompt and image also consumes enormous amounts of electricity, further fueling the climate crisis that has long plagued the planet. According to a Massachusetts Institute of Technology review, data centers accounted for about 4.4% of US electricity use in 2024, and by 2028, AI alone could consume as much power annually as 22% of all US households.
So this supposedly cute trend is telling. It exposes how casually we surrender our data, how media enables AI corporations for clicks, and how little we question systems that extract not just information but water, energy, and accountability.
Beyond the “clout,” this trend exposes your personal data to surveillance and accelerates the global water crisis. It’s time to rethink the “cute” side of AI.
