When people talk about the Great Firewall, they often picture a giant blocklist. That picture is outdated. China’s internet controls have shifted from simple yes/no filters to AI-assisted prediction and instant response: a system that does not just delete posts after they go viral, but tries to stop virality before it happens. Researchers have shown that the Firewall can detect new encrypted traffic patterns, guess what they are, and shut them down in near real time. In 2023, for example, a USENIX Security paper documented how the Great Firewall began detecting and blocking fully encrypted circumvention protocols on the fly, a clear sign that real-time traffic fingerprinting, and increasingly machine-learning classification, now sit in the censorship toolkit.
Beijing calls the broader project the Golden Shield. In practice, what many analysts call “Golden Shield 2.0” is the union of old administrative controls with new machine-learning classifiers. The result is speed and scale. By April 2024, measurement labs observed the Firewall targeting QUIC (the UDP-based transport pioneered at Google and now an internet standard) by decrypting QUIC Initial packets at scale and applying heuristic rules, then rolling those rules out across the country. A 2025 study further warned that these QUIC upgrades, meant to harden censorship, also introduced a design flaw that could disrupt large chunks of China’s DNS and UDP traffic if abused, a reminder of how far-reaching these experiments have become.
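Why can a middlebox read QUIC Initial packets at all? Because QUIC version 1 protects them with keys derived purely from public values: a fixed salt published in RFC 9001 and the Destination Connection ID the client sends in cleartext. The sketch below is an illustrative reconstruction from the RFC, not code from the cited measurements; the connection ID is the RFC’s own test-vector value. It derives the Initial keys the same way any on-path observer could.

```python
# Illustrative sketch: deriving QUIC v1 Initial-packet keys from public values
# (RFC 9001, Section 5.2). Nothing secret is needed, which is why an on-path
# censor can decrypt Initial packets and read the TLS ClientHello, SNI included.
import hashlib
import hmac

QUIC_V1_INITIAL_SALT = bytes.fromhex("38762cf7f55934b34d179ae6a4c80cadccbb7f0a")

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand_label(secret: bytes, label: str, length: int) -> bytes:
    # TLS 1.3 HKDF-Expand-Label (RFC 8446, Section 7.1) with an empty context.
    full_label = b"tls13 " + label.encode()
    info = length.to_bytes(2, "big") + bytes([len(full_label)]) + full_label + b"\x00"
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(secret, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# The Destination Connection ID travels in cleartext in the Initial header.
# This value is the example from RFC 9001, Appendix A.
client_dcid = bytes.fromhex("8394c8f03e515708")

initial_secret = hkdf_extract(QUIC_V1_INITIAL_SALT, client_dcid)
client_secret = hkdf_expand_label(initial_secret, "client in", 32)
aead_key = hkdf_expand_label(client_secret, "quic key", 16)  # AES-128-GCM key
aead_iv = hkdf_expand_label(client_secret, "quic iv", 12)    # AEAD IV

print("Initial AEAD key:", aead_key.hex())
print("Initial AEAD IV: ", aead_iv.hex())
```

This exposure is by design: Initial protection only guards against blind, off-path spoofing, and real confidentiality begins once the TLS handshake completes. The practical consequence is that the server name in the ClientHello is visible to whoever sits on the path, which is exactly what heuristic blocking keys on.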
AI helps at every step. First, predictive filtering: models trained on past takedowns and “sensitivity” labels can score a post, hashtag, or stream within seconds and throttle it before users notice. Second, traffic inference: even when content is encrypted, classifiers can spot the “shape” of disallowed tools (VPN tunnels, Tor bridges, Shadowsocks variants) and kill connections mid-flow. That is why circumvention communities report a cat-and-mouse cycle measured in days and weeks rather than months. Freedom House notes that VPN interference typically spikes around political events and that China keeps upgrading the technical playbook against unlicensed VPNs.
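To make the “shape of traffic” idea concrete, here is a deliberately toy sketch of flow fingerprinting. It is a hypothetical illustration, not the Firewall’s actual pipeline; the flows, labels and feature choices below are invented, and scikit-learn is assumed to be available. An encrypted connection is reduced to side-channel features (packet sizes, timing, the share of large packets) and scored by an off-the-shelf classifier.

```python
# Hypothetical sketch of traffic-shape classification: no payload is decrypted;
# only metadata from the first packets of a flow is used to score it.
from dataclasses import dataclass
from sklearn.ensemble import RandomForestClassifier

@dataclass
class Flow:
    sizes: list   # payload sizes (bytes) of the first packets
    gaps: list    # inter-arrival times (seconds) between those packets
    label: int    # 1 = circumvention tool, 0 = ordinary traffic (training only)

def features(flow):
    # Shape-only features, invented for illustration.
    n = len(flow.sizes)
    return [
        n,
        sum(flow.sizes) / n,                         # mean packet size
        max(flow.sizes),
        sum(flow.gaps) / len(flow.gaps),             # mean inter-arrival time
        sum(1 for s in flow.sizes if s > 1200) / n,  # share of near-MTU packets
    ]

# Toy training data standing in for labelled packet captures.
training = [
    Flow([1350, 1350, 90, 1350], [0.01, 0.02, 0.01], 1),
    Flow([517, 1400, 150, 80],   [0.05, 0.30, 0.12], 0),
    Flow([1300, 1300, 1300, 60], [0.01, 0.01, 0.02], 1),
    Flow([200, 900, 120, 1400],  [0.20, 0.08, 0.40], 0),
]
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit([features(f) for f in training], [f.label for f in training])

# Scoring a new, unlabeled flow mid-connection; a high score could trigger a reset.
suspect = Flow([1350, 1340, 100, 1350], [0.01, 0.02, 0.01], label=-1)
print("P(circumvention tool):", clf.predict_proba([features(suspect)])[0][1])
```

Real systems would use far richer features and vastly more data, but the core point stands: nothing needs to be decrypted for a connection to be flagged and reset mid-flow.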
This brings us to the VPN crackdown of 2024–2025. Alongside long-standing pressure on app stores, researchers picked up stronger, smarter disruptions: new blocks on tunneling protocols, faster connection resets, and targeted interference with Telegram and other “offshore” platforms. Provincial studies even found regional censorship surges; Henan, for instance, blocked several times more domains than the national firewall between late 2023 and early 2025, suggesting a mix of central upgrades and local zeal supported by AI tools.
A second front is export. What used to be bespoke, in-house control is turning into a package that can be shipped. In autumn 2025, a large leak about Geedge Networks, a company linked to architects of the Firewall, showed sales of “secure gateways” that can filter traffic, block VPNs, track users and inject code, with deployments or pilots reported in Kazakhstan, Pakistan, Ethiopia and Myanmar. Investigations by major outlets and rights groups called it “digital authoritarianism as a service.” This is not just influence. It is infrastructure traveling along Belt and Road ties.
Supporters of the system argue it protects social stability and reduces harmful content. But there is a price tag. Studies of China’s online economy find that hard walls split Chinese developers and firms from global tools, talent and customers. One 2023 analysis linked higher censorship intensity to lower online labor productivity and output, implying deadweight losses that grow as controls tighten. The QUIC episode shows another cost: fragility. When you tinker with foundational protocols to chase new blocks, you risk collateral damage that can break legitimate services and hurt domestic businesses.
There is also the global spillover. As Chinese platforms, vendors and state media expand abroad, pieces of the Firewall model travel with them. The domestic AI that sorts “harmful” from “healthy” speech informs moderation and recommendation logic. The network gear sold to partner states arrives pre-tuned for keyword blocking, deep packet inspection (DPI) and behavior scoring; when crises hit, the same toolchain can flip from content management to surveillance and arrests. The export files suggest that some clients are explicitly shopping for that fusion of filter, identify and punish, a triple play that moves far beyond ordinary platform moderation.
All of this adds up to a Great Firewall 2.0 that is less a wall and more a living system. It learns from traffic, updates its models and deploys rules at machine speed. It reaches into the stack—from domain names and transport layers to app stores and creator economies—so that a problem can be solved at whichever layer is most efficient. And it increasingly scales beyond China’s borders, through vendor deals and policy training.
The hard question for the world is not whether China censors (that is settled) but what it means when an AI-powered censorship stack becomes a standard export. For democracies, it complicates open internet efforts: the protocol layer can be subverted, the platform layer saturated, and the legal layer borrowed by imitators. For Chinese users and firms, it narrows horizons and raises operational risk, even as it delivers the state what it wants most: stability on demand.
The Great Firewall began as a set of filters. In its 2.0 form, it is a predictive, AI-driven nervous system for political control—faster than human censors, harder to route around, and now, increasingly, for sale.
(Ashu Mann is an Associate Fellow at the Centre for Land Warfare Studies. He was awarded the Vice Chief of the Army Staff Commendation card on Army Day 2025. He is pursuing a PhD from Amity University, Noida, in Defence and Strategic Studies. His research focuses include the India-China territorial dispute, great power rivalry, and Chinese foreign policy.)