When I saw my colleague Kylie Robison’s story about OpenAI’s new image generator on Tuesday, I thought this week might be fun. Generative AI images raise all kinds of ethical issues, but I find them wildly entertaining, and I spent large chunks of that day watching other Verge staff test ChatGPT in ways that covered the entire spectrum, from cute to cursed.
But on Thursday afternoon, the White House decided to spoil it. Its X account posted a photograph of a crying detainee, boasting that she was an arrested fentanyl trafficker and undocumented immigrant. Then it added an almost certainly AI-generated cartoon of an officer handcuffing the sobbing woman — not attributed to any particular tool, but in the unmistakable style of ChatGPT’s super-popular Studio Ghibli imitations, which have flooded the internet over the past week.
An ugly use of a software tool shouldn’t necessarily indict that tool. But as the picture joined the host of others on my social feeds, the adorable Ghibli filter and the White House’s social media blitz started to feel somehow made for each other. They’re both, as counterintuitive as it might sound, the product of a mindset that treats basic decency as weakness and callousness as the prerogative of power.
We’ve reached out to OpenAI and the White House for more details, but the move amounted to a bizarre product advertisement for a company President Donald Trump himself has close ties to. Heads of state have been jumping on social media memes for years now, and we don’t technically know whether ChatGPT or another AI generator produced the image. (On the 1 percent chance the White House commissioned an artist and they’re reading this, I’d love to hear from them.) But OpenAI CEO Sam Altman has been promoting the Ghibli-style generated images as a cool feature currently exclusive to ChatGPT’s paid tiers. And Trump is a highly public booster of OpenAI’s Stargate project, having announced it at a press conference with Altman.
On the surface, AI Ghibli and Trump fit together bizarrely. The White House’s clear goal was a familiar kind of extremely online performative sadism; this is the same account that posted an “ASMR: Illegal Alien Deportation Flight” video of prisoners’ clinking chains. It’s gross and juvenile, even if we assume all its information is accurate, rather than, say, the result of something like agents reading an autism awareness tattoo as a gang symbol. No reasonable person defends jokey national public humiliation of what appears to be a fairly low-level immigration detainee as good governance, effective public messaging, or a moral good.
The Ghibli aesthetic is so wholesome that it undercuts this. Even one prominent Silicon Valley conservative has pointed out that depicting a sobbing anime woman arrested by a stone-faced agent does not leave most people’s sympathies with the agent.
AI media in general, though, is the MAGA movement’s primary aesthetic, and it has produced plenty of other strange, tasteless work. It’s a natural outgrowth of the movement’s longstanding love of photoshopped pictures and political cartoons depicting Trump as an over-the-top muscleman. It’s also the product of links between Trump and the AI industry — most prominently “First Buddy” and xAI founder Elon Musk, but also things like Stargate and the placement of David Sacks as “AI czar.”
Eight years ago, a tech company might have distanced itself from someone jumping on its memes to promote mass deportation
I don’t know how OpenAI and Altman feel about the White House promoting a joint advertisement for ChatGPT and a brutal and likely partially illegal attempt to expel immigrants from America. (Altman was a well-known backer of progressive causes until this administration.) Before this picture’s publication, the OpenAI team emphasized that ChatGPT’s image generator is supposed to offer highly flexible guardrails, so they may contend this is no different from using Photoshop offensively. And this might go without saying, but I’m not sure OpenAI could or should block the mere production of an image like this one — if it hadn’t been posted by the White House, you could even read it as a protest of these arrests.
At the same time, eight years ago, when Silicon Valley and Trump were in stark opposition, a major tech company might have distanced itself from a post like this. A statement like “OpenAI believes in maximum artistic freedom and responsiveness to user requests, but this administration’s post does not reflect our company’s values” is not a tough needle to thread.
The social and political pressure to avoid doing that now is overwhelming. Whatever OpenAI staff’s internal opinions are, it’s bad business to get feted by a vindictive president and then turn around to criticize his policies, particularly amid a larger Silicon Valley rightward turn.
But there’s also something deeper at play, because the Ghibli filter itself has a sour aftertaste — at its core, it’s a minor echo of the Trump era’s utter disregard for other human beings.
I’m not remotely immune to the appeal of Ghiblifying pictures. Seriously. Some of them really are adorable. People have loved anime filters for years, and I don’t think most of these images were created with ill intent. But filmmaker Hayao Miyazaki, whose name is synonymous with the animation studio, is one of the most famously anti-AI artists in the world. He’s widely quoted as calling an earlier AI animation demo “an insult to life itself,” and there’s no sign he approves of ChatGPT being used to imitate his signature style (probably thanks to training on his art), let alone OpenAI selling subscriptions off the back of it. Using Ghibli’s work specifically for publicity, as Blood in the Machine author Brian Merchant explains, is a power move. It loudly tells the artists whose creations make ChatGPT function: “We’ll take what we want, and we’ll tell everyone we’re doing it. Do you consent? We don’t care.”
OpenAI could have approached artists as partners, not obsolete producers of raw training data
Contemporary tech and politics are united in an ideology of domination: the principle that strength, money, and authority are all best wielded by bluntly forcing others to do what you want. With Trump, this is probably self-explanatory. With tech, it manifests in every pointless AI feature that replaces something useful — in the insistence that a technology will happen because it is inevitable, not because you’ve persuaded people it does anything good. Criticism is a senseless tearing-down of great men. Empathy, self-examination, and compromise are effeminate and weak.
The irony is that amid a sea of pointless or dysfunctional AI use cases, the Ghibli filter is wildly popular. But there’s a world where OpenAI captured its appeal without blatant disrespect for the people whose work it’s building on. AI companies could easily (if not as cheaply) have built their products while approaching artists as partners instead of obsolete producers of raw training data. Even if someone like Miyazaki might never agree to automated imitation, OpenAI could have found another animator or cartoonist and tuned ChatGPT to work well with their style — promoting a lesser-known artist in the process. But that would require believing that people who are not Great Men are worth working with and learning from, not simply overpowering.
Again, do I think paying for ChatGPT makes you a bad person? At some point, paying for almost anything funds something inhumane and harmful, often in far more destructive ways. We all draw these lines for ourselves, and I’m not sure where mine fall.
The Tesla Takedown protests, however, do demonstrate how tying your business to toxic politics can backfire. Countless people are using ChatGPT to make cute pictures of their loved ones; there’s something very sad in OpenAI silently letting the White House showcase the meme as a way to bully the powerless instead. Do OpenAI’s researchers think this advances the cause of “AI for good”? And as companies across Silicon Valley vie to hawk their AI systems, where will they draw their lines?