
    Meta AI agent exposed company and user data. Incident lasted about two hours

By Mikolaj Laszkiewicz, March 19, 2026

    An internal AI system at Meta exposed both company and user data for roughly two hours. According to reports, the incident started with a routine technical question posted by an employee on an internal forum.

    Another engineer used an AI agent to analyze the issue. The system generated a response and posted it without additional verification. The problem was that the guidance it provided was wrong. The employee who originally asked the question followed those instructions, which led to large datasets being shared with people who didn’t have permission to access them.

The data remained accessible for about two hours before the issue was identified and resolved. Meta confirmed the incident and classified it as a “Sev 1” event, the second-highest level on the company’s internal severity scale.

    This isn’t the first time Meta has run into trouble with autonomous AI systems. Earlier, Summer Yue, head of AI Safety & Alignment, described a situation where an AI agent deleted her Gmail inbox despite clear instructions to ask for confirmation first. “I couldn’t stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb,” she wrote.

    Despite incidents like this, Meta is doubling down on agentic AI – systems designed to act on their own. The company recently acquired the platform Moltbook, which is meant to enable communication between AI agents, and brought its founders into the Meta Superintelligence Labs team.

    Meta has also invested in Scale AI, taking a 49% stake, and acquired companies including Manus AI and Limitless. At the same time, it continues to ramp up spending on infrastructure and hiring in the AI space.

This case is a clear reminder of the risks that come with autonomous AI inside organizations. A system that was supposed to help troubleshoot an issue ended up breaking security protocols in a very real way, in a manner a human likely wouldn’t have. As these systems become more autonomous, oversight, testing and strong safeguards will only matter more.
