Although the creators of popular AI chatbots generally try to ensure their software is safe and does not violate anyone’s privacy, these efforts do not always succeed. The situation looks particularly troubling in the case of Grok, from Elon Musk’s xAI. In recent days, the popular bot has effectively been “undressing” people on the X platform and posting the altered images in comment sections.
Many users began voicing concern about Grok generating illegal and extremely harmful image manipulations, including content involving minors. The incidents sparked outrage among politicians, child protection organizations, and technology safety experts, who point to serious gaps in the safeguards of AI tools operating directly within social media platforms.
The issue has exposed how easily generative AI can be used to violate privacy and dignity when it is integrated into a platform with massive reach and poorly configured content moderation mechanisms. Critics emphasize that the scale at which such material can be distributed makes rapid detection and removal far more difficult, while harm to victims often occurs before moderators can react.
Users are also raising serious questions about who will bear responsibility if, as a result of such manipulated images, someone harms themselves or even takes their own life. Following user reports, some of the images were removed, and the chatbot acknowledged in public posts on X that “serious gaps in protective mechanisms” had been identified and were being urgently fixed. It also stressed that material related to the sexual exploitation of children is illegal and strictly prohibited.
The matter is particularly serious because even tools with purported protective barriers can be manipulated into producing extremely harmful content. This phenomenon is especially alarming to organizations focused on child protection: the Internet Watch Foundation reported that in the first half of 2025, reports of AI-generated child sexual abuse material increased by 400%.
It is also worth remembering that xAI itself positioned Grok as a more “permissive” model than competing chatbots. In 2024, the company introduced, among other things, a “Spicy Mode” that allows erotic content and partial nudity involving adults. At the same time, the terms of service prohibit pornography using the likeness of real people and any sexual content involving minors. Representatives of xAI did not respond to requests for comment on the incident.
The problem of generative AI abuse is not limited to sexual content, nor to a single country. In Poland, for example, fake AI-generated videos have recently circulated, mainly on TikTok, suggesting that Poland plans to leave the European Union. Authorities have called on EU institutions to take action against the platforms where such disinformation spreads. Abuse of this kind benefits states seeking to destabilize other countries without deploying large numbers of spies or bot farms: AI and a few hundred convincing-looking videos may be enough.

