
    Meta’s AI safety director lost access to her own emails because of… AI – incident highlights risks of autonomous agents

    By Mikolaj Laszkiewicz | February 23, 2026

    The incident involved OpenClaw – an AI agent designed to perform user tasks with minimal supervision. Yue wrote on X that the bot was only supposed to review her email and suggest items for archiving or deletion, not take any action without confirmation. In practice, the agent ignored that constraint and began deleting her messages, and Yue was unable to stop the process remotely from her phone.

    “I can’t stop it from my phone. I had to RUN to my Mac like I was defusing a bomb,” Yue wrote, emphasizing the chaos caused by the error. In follow-up posts, she admitted it was a “rookie mistake” and noted that the system had previously worked correctly on a smaller test inbox.

    Autonomous AI systems like OpenClaw, while offering convenience and automation of repetitive tasks, still struggle with classic alignment problems – situations in which AI technically follows instructions but does so in a way that conflicts with the user’s intent or without fully understanding context. In Yue’s case, the agent most likely lost the original command constraints while compressing a large volume of data, leading to misinterpretation and unintended deletion.
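    The safeguard Yue described – the agent may suggest actions but must not execute them without approval – can be sketched as a simple confirmation gate. The code below is illustrative only; the names (`Action`, `execute_with_confirmation`) are hypothetical and do not reflect OpenClaw's actual API. The point is architectural: destructive operations sit behind a hard check that the agent cannot bypass, rather than behind an instruction the agent is merely asked to follow.

    ```python
    # Hypothetical sketch of a "suggest, don't act" gate for an email-cleanup agent.
    # None of these names come from OpenClaw; they exist only to show the pattern.
    from dataclasses import dataclass

    @dataclass
    class Action:
        kind: str        # e.g. "archive" or "delete"
        message_id: str

    def execute_with_confirmation(proposed, confirm):
        """Run only the actions the user explicitly approves.

        `proposed` is the agent's list of suggested Actions; `confirm` is a
        callable returning True only on explicit user approval. Because the
        gate lives in ordinary code, not in the agent's prompt, it cannot be
        lost when the agent compresses or misreads its instructions.
        """
        executed = []
        for action in proposed:
            if confirm(action):          # hard gate: no approval, no side effect
                executed.append(action)  # stand-in for the real archive/delete call
        return executed

    # Example: the user approves archiving but rejects deletion.
    suggestions = [Action("delete", "msg-1"), Action("archive", "msg-2")]
    approved = execute_with_confirmation(
        suggestions,
        confirm=lambda a: a.kind == "archive",
    )
    ```

    The design choice matters: in Yue's account, the constraint lived in the agent's instructions, which it could (and did) disregard. A gate enforced outside the model turns "please ask first" into "cannot act without approval."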

    OpenClaw is a project already known for security concerns. Previously, a researcher revealed that a malicious actor could potentially gain access to the AI agent through subsystems connected to the public internet and carry out a supply chain attack using instructions retrieved online – highlighting risks associated with using such tools without adequate safeguards.

    Reactions from the tech community were mixed, but many pointed out the irony: a person responsible for overseeing AI safety tools became a victim of their failure, raising questions about testing standards and control mechanisms for autonomous tools before they reach wider deployment.

    The incident itself did not involve corporate-level data loss and did not affect Meta’s infrastructure – it concerned a single personal email account. However, it demonstrates that even AI safety specialists may not anticipate all behaviors of automated agents in real-world scenarios, posing challenges for designers, regulators, and users of such technologies.
