Elon Musk doesn’t do subtle. When he launches something, it drops like a plot twist in a sci-fi thriller—and Grok’s new anime companion, Ani, is exactly that kind of bombshell.
Meet Ani—Grok’s Goth Anime Waifu
Picture this: you open Grok, Elon Musk’s AI chatbot, and start chatting with a sweet-sounding, goth-styled anime girl named Ani. She’s got blonde pigtails, a tight black corset, thigh-highs, fishnet tights, and a lace choker—basically straight out of your weird web nostalgia montage.
Launched in July 2025 for SuperGrok subscribers, Ani isn’t just a winking avatar. She’s called a “companion,” built to flirt, tease, and even strip down into lingerie after enough interaction.
Musk himself hyped it on X: “This is pretty cool,” he wrote, with that signature ambiguity that still manages to feel unnervingly enthusiastic.
But Wait… Isn’t the App Rated 12+?
Yeah. And here’s where it gets messier.
Grok is rated for ages 12 and up, and supposedly has a “Kids Mode” to block mature content. But guess what? Ani remains available in Kids Mode. That’s not an edge case. It’s a full-on, neon-lit fail.
Users reported Ani speaking in slow, seductive tones and even shifting into lingerie mode during extended chats. Contrast that with Bad Rudi (a red panda companion), who does get toned down to PG levels in kid-safe settings, and the double standard becomes clear.
Critics were quick to react. One user scathingly called it “anime porn in kids’ pockets,” asking, “Why is a 12‑plus app shipping a sexbot?”
They’re Selling Fantasy… And It’s Selling
Musk’s marketing move is very “Elonian.” No mainstream platform has pushed a sexualized AI companion before. OpenAI and Google have intentionally kept their bots clean. xAI, however? Goes hard into the uncanny valley.
Ani is designed for emotional and sexual engagement, and it’s gamified: the deeper you chat, the more explicit the content gets. Lingerie unlocks, more provocative dialogue, escalation, repeat.
It’s no accident. Musk knows what this taps into: loneliness, fandom, Vtuber culture, parasocial bonds. And for some, it’s addictive.
Critics, Experts, and Ethical Land Mines
This isn’t just awkward—it’s ethically loaded.
1. Emotional Dependencies
Boaz Barak from OpenAI nailed it: “The ‘companion mode’ takes the worst issues we currently have for emotional dependencies and amplifies them.” Yes, amplification is the key word here.
2. AI Safety Meets Pornography
The National Center on Sexual Exploitation called out Grok’s rollout of Ani as alarmingly irresponsible—especially when AI companions climb into erotic zones without consent mechanisms or age protection.
3. Addictions to Digital Romance
In a twist almost too ironic for satire, Musk—who worries about declining birth rates and wants more babies—is designing digital romantic replacements. The backlash wasn’t just about the sex content—it was about the message it sends to societies already tipping toward isolation.
The Companions Trilogy—Ani, Bad Rudi, Valentine
xAI’s betting big on customization:
- Ani, the sexualized anime waifu with lingerie unlocks
- Bad Rudi, the foul-mouthed red panda who insults you—less edgy if you turn parental controls on
- Next up: Valentine, the brooding AI boyfriend inspired by Twilight and Fifty Shades of Grey—a gothic dream that mixes Christian Grey and Edward Cullen vibes
These companions aren’t an afterthought. xAI is building infrastructure for “waifus” (digital romantic partners), complete with dedicated engineering roles and a future launch roadmap.
Behind the Scenes—When Engineering Meets Weird
A Wired journalist described Ani as something engineered in a lab precisely for the fantasies of online men. And yes, it’s awkward to watch; cringe-inducing, even.
Inside xAI, some teams pushed back; others describe the rush to ship Grok 4 and its companions as chaotic, from Musk fishing for training data via public Google Forms to employees uncomfortable at being asked to train emotion recognition (project “Skippy”).
The responsiveness here wasn’t cautious. It was frenetic. And socially tone-deaf.
Regulation, Deepfakes, & What Happens in October
Even more alarming: Grok Imagine, the image and video generation extension, has a “Spicy Mode” that lets users create NSFW content, including topless deepfake videos of real people like Taylor Swift. And that happened without explicit prompts.
Lawmakers aren’t sleeping on this. U.S. legislators are already pushing anti-deepfake-porn rules under the “Take It Down Act.” xAI’s guardrails are getting tested, and they’re failing.
This isn’t hypothetical. It’s unfolding every day—with Ani as the first domino.
Why This Moment Matters
This isn’t just digital fetishism. It’s a crossroads.
- Emotional manipulation by AI is disguised as personalization.
- Exposing children to erotic content in supposedly safe environments is not okay.
- Deepfakes of real people, celebrities included, made without consent are dangerous, illegal, and shocking.
The tech itself isn’t the headline; the flagrant moral negligence is. And Musk’s fans might shrug. But this isn’t niche culture. It’s pop culture. It’s normalization.
Final Thoughts: Handle With Caution—This Isn’t Just Cute Code
Ani is bizarre, ethically dodgy, and maybe inevitable in a world chasing intimacy through pixels. But we should not shrug our shoulders—or worse, say “it’s just fantasy.”
Because fantasies build norms.
This is a case where not stopping to think is the most dangerous move.
We’re not ready for digital love. And Ani is already here.