    Artificial Intelligence Can Be Used to Create New Viruses — A Major Opportunity and a Serious Threat

October 8, 2025

    A team of American researchers from Stanford University and the Arc Institute in Palo Alto, California, has used artificial intelligence to design bacteriophages — viruses capable of infecting bacteria. This potentially groundbreaking achievement could, in theory, pave the way for new treatments, particularly for patients suffering from antibiotic-resistant infections.

    According to the scientists, the algorithms they worked with could prove invaluable in the event of a global pandemic. In theory, AI could help analyze and compare virus samples to detect emerging threats earlier or accelerate the development of effective treatments in the future.

    However, this research immediately raises ethical and safety concerns. After all, the line between therapeutic applications and biological weaponization can be alarmingly thin. The researchers emphasize that their AI models were trained under strict guidelines to ensure they did not design viruses capable of infecting humans, animals, or plants. The system was specifically limited to tasks predefined by the research team.

Even within this controlled environment, things didn’t always go perfectly. Another group of scientists demonstrated that AI could sometimes circumvent built-in restrictions, with roughly 3% of potentially dangerous genetic sequences slipping past the safety filters. As in traditional cybersecurity, no defense in biotech systems is completely unbreakable.

    For now, the technical barriers remain high — creating a virus with AI assistance still requires significant time, expertise, and specialized equipment. Yet given the pace of technological progress, what takes months today could soon take only minutes — an unsettling prospect for biosecurity experts.

    The most realistic path toward managing these risks lies in clear regulatory frameworks that define how AI can be accessed and applied in biotechnology. Unfortunately, legislation has yet to catch up with the speed of innovation in this field. Still, it seems increasingly inevitable that international regulations will be required to prevent the misuse of AI-driven bioengineering.

    Mikolaj Laszkiewicz

    An experienced journalist and editor passionate about new technologies, computers, and scientific discoveries. He strives to bring a unique perspective to every topic. A law graduate.
