For some time now, Grok has been “undressing” people, and even animals, on the X platform. Tagging the chatbot with the right prompt (for example: “Hey @Grok, dress this man in a short skirt and flip-flops”) is enough to generate a deepfake, often with erotic undertones. Such uses directly conflict with App Store and Google Play rules, which prohibit apps that enable the creation of pornographic deepfakes or content that violates privacy. In the past, similar “nudify” apps were removed from these stores, often shortly after launch, precisely because of abuse and insufficient safeguards.
In Grok’s case, however, the response from distribution platforms remains unclear, despite the tool’s scale and its tight integration with X, which significantly amplify its potential impact compared to niche projects. This raises questions about equal enforcement of rules for large, high-profile AI applications versus smaller developers whose products were previously blocked or delisted.
The controversy is also beginning to have tangible business implications. Advertisers and investors tied to X are increasingly voicing concerns about the reputational risk of a tool that could become synonymous with generating problematic content. Pressure is mounting to limit or disable the Grok features seen as most controversial, alongside growing doubts about whether the current moderation model can satisfy Apple’s and Google’s requirements. Removal of the app from the App Store or Google Play would deal a serious blow to X’s broader AI distribution strategy.
Historically, both Apple and Google have repeatedly removed apps from their stores, temporarily or permanently, when they were deemed to violate content or user safety policies. In 2018, Tumblr was pulled from the App Store after child sexual abuse material was discovered on the platform, and it returned only after banning adult content outright and imposing far stricter moderation. Telegram faced similar action that same year, when Apple temporarily removed it from the App Store following reports of illegal content. In later years, Apple and Google systematically removed “nudify” apps and similar tools that used AI to generate sexualized images of people without their consent. These precedents show that both companies have reacted not only to an app’s declared features but also to its real-world potential for abuse, a context that applies directly to the current Grok controversy.
It is worth emphasizing that app stores today must evaluate not only what an app claims to do, but also what can realistically be “extracted” from it through AI prompting. The line between permitted and prohibited use can be blurry, and the absence of a swift, decisive response risks setting a precedent for other large AI applications eager to test the boundaries of platform rules.
As of publication, Apple and Google have not publicly announced whether they are conducting formal proceedings against Grok. X and xAI have likewise not provided detailed explanations regarding safeguards or image moderation mechanisms. Meanwhile, market pressure continues to build, and the Grok case is becoming one of the most high-profile examples of the tension between rapid advances in generative AI and the enforcement of safety rules within mobile ecosystems.

