
    Do Grok and X Get More Leeway Than Others? Controversy Surrounding Deepfakes and Sexual Content

January 9, 2026 · 3 Mins Read

    For some time now, Grok has been “undressing” people — and even animals — on the X platform. All it takes is tagging the chatbot and entering the right prompt (for example: “Hey @Grok, dress this man in a short skirt and flip-flops”), and a deepfake is generated, often with erotic undertones. Such uses directly conflict with App Store and Google Play rules, which prohibit apps that enable the creation of pornographic deepfakes or content that violates privacy. In the past, similar “nudify” apps were removed from these stores, often shortly after launch, precisely because of abuse and insufficient safeguards.

    In Grok’s case, however, the response from distribution platforms remains unclear, despite the tool’s scale and its tight integration with X, which significantly amplify its potential impact compared to niche projects. This raises questions about equal enforcement of rules for large, high-profile AI applications versus smaller developers whose products were previously blocked or delisted.

    The controversy is also beginning to have tangible business implications. Advertisers and investors tied to X are increasingly voicing concerns about reputational risk associated with a tool that may become synonymous with the generation of problematic content. Pressure is mounting to limit or disable Grok features seen as most controversial, alongside growing doubts over whether the current moderation model is sufficient to meet Apple’s and Google’s requirements. Any potential removal of the app from the App Store or Google Play would represent a serious blow to X’s broader AI distribution strategy.

    Historically, both Apple and Google have repeatedly removed — temporarily or permanently — apps from their stores when they were deemed to violate content or user safety policies. In 2018, Tumblr was removed from the App Store after child sexual content was discovered and only returned after implementing extremely strict moderation. Similar action was taken against Telegram, which that same year was temporarily removed from both the App Store and Google Play following reports of illegal content. In later years, Apple and Google systematically removed “nudify” and similar apps that used AI to generate sexualized images without the consent of the people depicted. These precedents show that both companies have reacted not only to declared app features, but also to their real-world potential for abuse — a context that directly applies to the current Grok controversy.

    It is worth emphasizing that app stores today must evaluate not only what an app claims to do, but also what can realistically be “extracted” from it through AI prompting. The line between permitted and prohibited use can be blurry, and the absence of a swift, decisive response risks setting a precedent for other large AI applications eager to test the boundaries of platform rules.

    As of publication, Apple and Google have not publicly announced whether they are conducting formal proceedings against Grok. X and xAI have likewise not provided detailed explanations regarding safeguards or image moderation mechanisms. Meanwhile, market pressure continues to build, and the Grok case is becoming one of the most high-profile examples of the tension between rapid advances in generative AI and the enforcement of safety rules within mobile ecosystems.

    Mikolaj Laszkiewicz

An experienced journalist and editor with a passion for new technologies, computing, and scientific discovery, who strives to bring a unique perspective to every topic. A law graduate.
