An internal AI system at Meta exposed both company and user data for roughly two hours. According to reports, the incident started with a routine technical question posted by an employee on an internal forum.
Another engineer used an AI agent to analyze the issue. The system generated a response and posted it without additional verification. The problem was that the guidance it provided was wrong. The employee who originally asked the question followed those instructions, which led to large datasets being shared with people who didn’t have permission to access them.
The data remained accessible for about two hours before the issue was identified and resolved. Meta confirmed the incident and classified it as a “Sev 1” event – the second-highest severity level in the company’s internal security scale.
This isn’t the first time Meta has run into trouble with autonomous AI systems. Earlier, Summer Yue, head of AI Safety & Alignment, described a situation where an AI agent deleted her Gmail inbox despite clear instructions to ask for confirmation first. “I couldn’t stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb,” she wrote.
Despite incidents like this, Meta is doubling down on agentic AI – systems designed to act on their own. The company recently acquired the platform Moltbook, which is meant to enable communication between AI agents, and brought its founders into the Meta Superintelligence Labs team.
Meta has also invested in Scale AI, taking a 49% stake, and acquired companies including Manus AI and Limitless. At the same time, it continues to ramp up spending on infrastructure and hiring in the AI space.
Looking at this case, it’s a pretty clear reminder of the risks that come with autonomous AI inside organizations. A system that was supposed to help troubleshoot an issue ended up breaking security protocols in a very real way – something a human likely wouldn’t have done. As these systems become more autonomous, oversight, testing and strong safeguards are only going to matter more.

