
    China Expands Its Surveillance State — AI Now Predicts Protests, Filters Content, and Identifies “Social Threats”

By Mikolaj Laszkiewicz, December 5, 2025

    China has long been known for monitoring its citizens on a scale unmatched anywhere else in the world — but until recently, AI was not deeply embedded in these systems. That is now changing. A new report described by CNN reveals that the technologies used by Chinese authorities include facial recognition, analysis of location data, social-media monitoring, and integration of government records.

All of this feeds AI systems that generate alerts about "high-risk" individuals — even before they take any action. According to the report's authors, government documents indicate that the algorithms are designed to detect "signs of disloyalty" and supply local authorities with lists of citizens who require surveillance.

    The system places particular emphasis on the internet. AI decides in real time which posts are deleted or hidden and identifies users who regularly consume politically sensitive content. According to data cited by CNN, algorithms can detect things like contact with foreign individuals, participation in religious gatherings, or even neutral posts containing “keywords” that the system interprets as potential indicators of opposition activity.

    Experts warn that algorithmic profiles of citizens are being integrated with data from public institutions, schools, hospitals, and transportation systems — creating one of the most advanced surveillance ecosystems in the world. Citizens often do not know why they have been flagged, and there are no practical mechanisms to appeal decisions made by AI systems, leading to even deeper self-censorship and reinforcing an atmosphere of fear.

It is also important to note that these are not experimental or pilot systems — many people report encountering Chinese surveillance in everyday situations. For example, there are publicly displayed "walls of shame" showing photos of citizens who crossed the street against a red light. Camera footage is presented in visible locations to send a clear message: anonymity does not exist, and "Big Brother is always watching."

    This system may become a model for other authoritarian states such as Russia or Belarus. Predictive algorithms could allow governments not only to react to dissent but also to preemptively neutralize behaviors that fall outside desired norms.
