
    Artificial Intelligence Can Be Used to Create New Viruses — A Major Opportunity and a Serious Threat

    By Mikolaj Laszkiewicz | October 8, 2025 | 2 min read

    A team of American researchers from Stanford University and the Arc Institute in Palo Alto, California, has used artificial intelligence to design bacteriophages — viruses capable of infecting bacteria. This potentially groundbreaking achievement could, in theory, pave the way for new treatments, particularly for patients suffering from antibiotic-resistant infections.

    According to the scientists, the algorithms they worked with could prove invaluable in the event of a global pandemic. In theory, AI could help analyze and compare virus samples to detect emerging threats earlier, or accelerate the development of effective treatments.

    However, this research immediately raises ethical and safety concerns. After all, the line between therapeutic applications and biological weaponization can be alarmingly thin. The researchers emphasize that their AI models were trained under strict guidelines to ensure they did not design viruses capable of infecting humans, animals, or plants. The system was specifically limited to tasks predefined by the research team.

    Even within this controlled environment, things did not always go perfectly. Another group of scientists demonstrated that AI could sometimes circumvent built-in restrictions: roughly 3% of potentially dangerous genetic sequences managed to bypass the safety filters. As in traditional cybersecurity, no safeguard in biotechnology is completely unbreakable.

    For now, the technical barriers remain high — creating a virus with AI assistance still requires significant time, expertise, and specialized equipment. Yet given the pace of technological progress, what takes months today could soon take only minutes — an unsettling prospect for biosecurity experts.

    The most realistic path toward managing these risks lies in clear regulatory frameworks that define how AI can be accessed and applied in biotechnology. Unfortunately, legislation has yet to catch up with the speed of innovation in this field. Still, it seems increasingly inevitable that international regulations will be required to prevent the misuse of AI-driven bioengineering.
