Close Menu
    2digital.news2digital.news
    • News
    • Analytics
    • Interviews
    • About us
    • Editorial board
    • Events
    2digital.news2digital.news
    Home»News»AI-Powered Browsers Aren’t Safe — and Almost No One Realizes It
    News

    AI-Powered Browsers Aren’t Safe — and Almost No One Realizes It

    October 31, 20252 Mins Read
    LinkedIn Twitter Threads

    Web browsers have evolved far beyond their original purpose of simply displaying websites. Today, they’re becoming the core interface for AI, capable of summarizing content, managing user accounts, and even learning behavioral patterns — all with just a few clicks through built-in assistants.

    However, while this may be convenient, it also creates a massive attack surface for cybercriminals. As Prof. Hamed Haddadi, Professor of Human-Centred Systems at the Department of Computing at Imperial College London, told The Verge, early versions of AI browsers have a “broad attack surface,” and despite built-in safeguards, they are “far more powerful than traditional browsers” because they maintain contextual memory that learns from every user action.

    Search history, emails, and AI chat data are stored in memory and can be accessed by both the system and potential attackers. Prof. Yash Vekaria from the University of California, Davis, warns that such browsers can track and profile users more deeply than ever before — creating “a more invasive profile than ever before.”

    AI assistants also lack human intuition for distinguishing between legitimate and malicious software on suspicious sites. When asked to find a specific file, program, or obscure piece of information, the assistant might inadvertently guide users to unsafe websites or unreliable sources, potentially leading to malware infections on their devices.

    Browsers like ChatGPT Atlas are still effectively in beta, with many features not yet fully secured. One major risk is prompt injection — an attack in which malicious text, images, or links embedded on a webpage are read and executed by the AI agent as if they were user instructions. Researchers at Palo Alto Networks have demonstrated that once a user visits such a page, the AI may store the malicious command in its memory and later act on it — for example, by sharing sensitive data or performing actions without the user’s consent.

    It’s worth exercising real caution when using any AI-driven tools. Remember that they are still just tools — and every system eventually develops new vulnerabilities that hackers can exploit. To stay safe, never share sensitive or personally identifiable information in AI chats, no matter how trustworthy the platform may seem.

    Share. Twitter LinkedIn Threads
    Avatar photo
    Mikolaj Laszkiewicz

    Related Posts

    News

    YouTube blocks background playback in mobile browsers — users must switch to the app or Premium

    February 3, 2026
    News

    Texas begins releasing “glowing” flies to stop the dangerous screwworm from entering the United States

    February 3, 2026
    News

    AI in mammography reduces detection of advanced breast cancer – results from the first randomized controlled trial

    February 2, 2026
    Read more

    “One can use AI to predict hurricanes, cyberattacks, and disease, but not financial panics.” We spoke with an economist about where AI can actually help them

    January 21, 2026

    The Health Data Gold Rush. How OpenAI and Anthropic Are Competing for Medical Records

    January 20, 2026

    GPUs, Budgets, and API Grey Zones: The Hidden Cost of External Models in Pharma

    January 16, 2026
    Stay in touch
    • Twitter
    • Instagram
    • LinkedIn
    • Threads
    Demo
    X (Twitter) Instagram Threads LinkedIn
    • NEWS
    • ANALYTICS
    • INTERVIEWS
    • ABOUT US
    • EDITORIAL BOARD
    • EVENTS
    • CONTACT US
    • ©2026 2Digital. All rights reserved.
    • Privacy policy.

    Type above and press Enter to search. Press Esc to cancel.