
    Artificial Intelligence Can Be Used to Create New Viruses — A Major Opportunity and a Serious Threat

October 8, 2025 · 2 Mins Read

A team of American researchers from Stanford University and the Arc Institute in Palo Alto, California, has used artificial intelligence to design bacteriophages — viruses capable of infecting bacteria. This achievement could, in theory, pave the way for new treatments, particularly for patients suffering from antibiotic-resistant infections.

    According to the scientists, the algorithms they worked with could prove invaluable in the event of a global pandemic. In theory, AI could help analyze and compare virus samples to detect emerging threats earlier or accelerate the development of effective treatments in the future.

    However, this research immediately raises ethical and safety concerns. After all, the line between therapeutic applications and biological weaponization can be alarmingly thin. The researchers emphasize that their AI models were trained under strict guidelines to ensure they did not design viruses capable of infecting humans, animals, or plants. The system was specifically limited to tasks predefined by the research team.

Even within this controlled environment, things didn’t always go perfectly. Another group of scientists demonstrated that AI could sometimes circumvent built-in restrictions, with roughly 3% of potentially dangerous genetic sequences bypassing the safety filters. Much as in traditional cybersecurity, biotech systems have no completely unbreakable defenses.

    For now, the technical barriers remain high — creating a virus with AI assistance still requires significant time, expertise, and specialized equipment. Yet given the pace of technological progress, what takes months today could soon take only minutes — an unsettling prospect for biosecurity experts.

    The most realistic path toward managing these risks lies in clear regulatory frameworks that define how AI can be accessed and applied in biotechnology. Unfortunately, legislation has yet to catch up with the speed of innovation in this field. Still, it seems increasingly inevitable that international regulations will be required to prevent the misuse of AI-driven bioengineering.

    Mikolaj Laszkiewicz
    An experienced journalist and editor passionate about new technologies, computers, and scientific discoveries. He strives to bring a unique perspective to every topic. A law graduate.
