There is a useful exercise for understanding how authoritarian control actually works: follow the infrastructure, not the ideology. The machinery matters more than the stated purpose, because stated purposes change while machinery does not. In Russia's case, the machinery was built openly, in plain view, justified at every step by one of the most unchallengeable arguments a government can make: protecting children from predators on the internet.

That machine is now capable of monitoring the online activity of 145 million people, severing Russia's internet from the global web, and reaching inside encrypted communications without a court order. This is not an accident of mission creep. It was an architecture, assembled incrementally across a decade, each layer politically legitimized before the next was added.

The Architecture of Control

2012: Federal Law 139-FZ, the "Blacklist Law." Child protection framing. No judicial review.
2013: Amendments extend blocking to "extremism" and "unsanctioned gatherings."
2014: Infrastructure first used for systematic political censorship post-Crimea.
2016: Yarovaya Law: mass data retention and mandatory FSB encryption backdoors.
2019: "Sovereign Internet" Law: DPI hardware installed at every ISP; Runet isolation capability.
2022+: War in Ukraine accelerates total censorship; 150+ media outlets banned.

2012: The Innocent Registry

Federal Law 139-FZ · The "Blacklist Law"

In the summer of 2012, the Russian State Duma passed Federal Law 139-FZ unanimously. The law created what its architects called a "unified register," a centralized blacklist of URLs to be blocked across all Russian internet service providers. The stated targets were unambiguous: child sexual abuse material, drug promotion, and content encouraging suicide among young people. Few politicians anywhere in the world are willing to vote against such measures on the record, and Russian legislators were no different. The bill passed without dissent from any of the four parties represented in the Duma.

The agency charged with administering the register was Roskomnadzor, the Federal Service for Supervision of Communications, Information Technology, and Mass Media, a body that had existed since 2008 but had previously held a modest regulatory remit over broadcast licensing. With the blacklist law, it became something else: a censorship organ with the authority to order ISPs to block any listed domain without seeking a court order.

Key Mechanism: The blacklist law established a three-day window: once a site was added to the register, its hosting provider had 72 hours to remove the offending content. If it didn't, the entire domain (not just the specific URL) could be blocked. The speed and extrajudicial nature of this process were features, not oversights.

The procedural structure was telling from the start. Roskomnadzor could add sites to the register on the basis of determinations by three categories of authority: the courts, state regulatory bodies, and, critically, direct citizen submissions, which the agency was permitted to verify and act upon itself. There was no independent judicial review of the agency's blocking decisions. Once a site landed on the list, Russian ISPs were legally required to block it. Non-compliance carried fines.

Critics pointed to an immediate logical flaw: the mechanism blocked by domain or IP address, not by specific URL. This meant that if a prohibited piece of content appeared on a shared hosting platform, the entire platform could be blocked. In the first days of the register's operation, the satirical wiki Lurkmore appeared on the list, not for child exploitation material, but because of how the block was technically implemented. The bluntness of the tool was apparent immediately. Nobody in government seemed particularly troubled by it.
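The overblocking problem is easy to model. In the toy sketch below (all domain names and addresses are hypothetical), several unrelated sites share one hosting IP; the regulator lists a single offender, but because the block is applied at the IP level, every co-hosted site goes dark with it:

```python
# Toy model of IP-level blocking on shared hosting.
# All names and addresses here are hypothetical illustrations.
dns = {
    "banned-site.example":    "203.0.113.10",
    "innocent-wiki.example":  "203.0.113.10",  # same shared-hosting address
    "unrelated-blog.example": "203.0.113.10",
}

# The regulator lists one offending site, but enforcement happens by IP.
blocked_ips = {dns["banned-site.example"]}

def reachable(domain: str) -> bool:
    """A domain is reachable only if its resolved address is not blacklisted."""
    return dns[domain] not in blocked_ips

collateral = [d for d in dns if not reachable(d)]
print(collateral)  # all three domains are blocked, not just the listed one
```

This is the mechanism by which Lurkmore and other uninvolved sites became collateral damage: blocking granularity was coarser than the prohibition it enforced.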

What the law had established, beyond the specific blocking mechanism, was something more durable: a precedent that the Russian state had the right to administer a secret list of unacceptable internet content, to act on that list without judicial authorization, and to compel private infrastructure operators to enforce it. The register's contents were not fully public. The reasoning behind specific additions was not reviewable by independent courts. The architecture of unaccountable, extrajudicial content control was now embedded in Russian law, justified entirely by the protection of children.

2013: Expanding the Target List

The Pivot from Harm to Heresy

The Blacklist Law was barely a year old when it was amended. In 2013, the categories of blockable content were expanded to include material "suspected in extremism," content "calling for illegal meetings," content "inciting hatred," and anything else that constituted "actions violating the established order." The child protection justification remained in place. These new categories were layered on top of it.

The legal language was, deliberately, elastic. "Extremism" under Russian law has no fixed technical meaning equivalent to western legal standards. It is broadly construed to include statements that "undermine state security," "incite discord," or "discredit" government officials. "Calling for illegal meetings" can encompass a Facebook post encouraging people to attend a protest the authorities have declined to permit. "Violating the established order" is essentially a catch-all.

By the Numbers: According to data published by the Russian Society for Internet Users, instances of censorship using the blacklist infrastructure increased by a factor of 1.5 between 2013 and 2014 alone, the single year following the "extremism" amendments and the annexation of Crimea.

This was the legal pivot point. The infrastructure, the register, the ISP compliance obligations, the extrajudicial blocking authority, remained identical. Only the permitted targets had changed. The same technical mechanism that blocked a website showing child abuse images could now block a website showing footage from a protest in Moscow. No new apparatus needed to be built. No new agency needed to be created. The plumbing was already in place.

2014: The Infrastructure Goes Political

Crimea, Opposition, and the First Wave of Targeted Censorship

In March 2014, days after Russia's annexation of Crimea, Roskomnadzor executed a move that removed all remaining ambiguity about the register's purpose. On March 13th, the agency blocked Alexei Navalny's LiveJournal blog, along with the independent political news sites Grani.ru, Kasparov.ru, and Yozh.ru. None of these blocks had anything to do with child protection, drug promotion, or suicide content, the original stated justifications for the register's existence. They were blocked because their content was politically inconvenient.

The same week, the independent news outlet Lenta.ru received an official warning from Roskomnadzor for including a link to an interview with a Ukrainian nationalist leader. The editorial staff deleted the link, but the outlet's editor-in-chief, Galina Timchenko, was fired regardless. A large portion of the publication's journalists quit in protest. The man who replaced her was, within four years, working in the Russian presidential administration.

The technical infrastructure was identical. Only the target list changed.

Freedom House's annual assessments documented the systematic nature of what was happening. The blocking infrastructure constructed under the justification of protecting children was being used to suppress political opposition, independent journalism, civil society websites, and eventually the online presence of international human rights organizations. Between 2015 and 2020, Roskomnadzor blocked 22 organizations deemed "undesirable," including Open Russia, the National Endowment for Democracy, the Open Society Foundations, and the Atlantic Council.

Russia's courts consistently sided with the executive. The European Court of Human Rights was another matter: in 2020, the court ruled in three separate cases that Russia's blocking actions were clear violations of Articles 10 and 13 of the European Convention on Human Rights, protecting freedom of expression and the right to an effective remedy. Russia, by that point, was past caring about European Court rulings.

2016: The Surveillance Backbone

The Yarovaya Law and the End of Private Communication

If the 2012 Blacklist Law built the mechanism for controlling what Russians could see, the 2016 Yarovaya Law, formally Federal Laws 374-FZ and 375-FZ, built the mechanism for monitoring what they said. The laws were framed as an anti-terrorism package, introduced in the shadow of the October 2015 bombing of Metrojet Flight 9268, which killed all 224 people aboard, most of them Russian citizens, over the Sinai Peninsula.

Yarovaya Law, Core Provisions:

Data retention: All telecom providers must store the full content of voice calls, text messages, video, images, and other data for six months. Metadata must be retained for three years.

Encryption backdoors: Any service that uses encrypted communications must provide the FSB with the means to decrypt those communications, including encryption keys.

No judicial oversight: Providers must disclose communications and metadata to the FSB and other investigative authorities on request, without a court order.

Service termination: Providers can be ordered to cut service to any user whose identity cannot be confirmed.

Human Rights Watch described the law as taking "Big Brother surveillance to a whole new level," noting that "no digital communication would be safe from government snooping, no matter how innocuous or unrelated to terrorism." The Electronic Frontier Foundation identified what may be the law's most insidious feature: it was deliberately designed to be impossible to fully comply with.

Russia's largest telecom operators estimated that storing six months of full communications content would require building data infrastructure costing approximately 2.2 trillion rubles, roughly $33 billion. MTS, Russia's biggest mobile operator, calculated that full compliance would require every ruble of the company's profits for the next hundred years.
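The scale of the retention mandate is easy to sanity-check with a back-of-envelope calculation. Every input below is an illustrative assumption, not a figure from the law or from any operator; the point is only the order of magnitude:

```python
# Back-of-envelope storage estimate for six months of full-content retention.
# All inputs are illustrative assumptions, not official figures.
subscribers = 80_000_000        # assumed subscriber base of one large operator
gb_per_user_per_day = 1.5       # assumed average traffic per subscriber
retention_days = 183            # six months of content, per the Yarovaya mandate
usd_per_tb_installed = 1_000    # assumed all-in datacenter cost per stored terabyte

total_tb = subscribers * gb_per_user_per_day * retention_days / 1_000
cost_usd = total_tb * usd_per_tb_installed

print(f"storage: {total_tb / 1e6:.0f} million TB, cost: ${cost_usd / 1e9:.0f} billion")
```

Even with generous assumptions, the result lands in the tens of billions of dollars for a single operator, consistent in scale with the industry's 2.2-trillion-ruble estimate.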

Russia's ISPs, messaging services, and social media platforms cannot reasonably comply with all the demands of the Yarovaya package, so they become de facto criminals whatever their actions. And that, in turn, gives the Russian state the leverage to extract from them any other concession it desires. The impossibility of full compliance is not a bug; it's an essential feature.

Electronic Frontier Foundation, July 2016

The law passed anyway. A telecom company that cannot legally fully comply with state requirements is permanently at the mercy of selective enforcement. The state can always find a reason to fine, pressure, or shut down any company that becomes inconvenient, because every company is, by definition, out of compliance with some provision of the law. The legal impossibility is a mechanism of leverage.

2017 to 2019: Sealing the Exits

VPN Bans, Sovereign Internet, and Deep Packet Inspection

As the blocking and surveillance infrastructure matured, Russian authorities turned their attention to the tools people were using to escape it. In November 2017, Russia banned VPN services and anonymizers that did not integrate with and enforce the state blacklist; in other words, any privacy tool that actually provided privacy.

In 2019, the "sovereign internet" law took the infrastructure to its logical conclusion. Telecom operators were required to install Deep Packet Inspection hardware at every network node, technology that allows the state to inspect, filter, redirect, or block internet traffic at the packet level, in real time, invisibly to the user. A national domain name system was constructed to function independently from global DNS infrastructure. Roskomnadzor was granted the authority, under emergency conditions, to disconnect Russia's internet segment from the global web entirely.

What Deep Packet Inspection Does: DPI technology doesn't just block specific addresses. It reads the content of network traffic as it moves through infrastructure, enabling the state to identify what protocols are being used, what services are being accessed, and what information is being communicated, even across encrypted connections, through traffic analysis techniques that don't require decryption. In 2021, Roskomnadzor used DPI to throttle Twitter as a political pressure tactic. In 2024, the same infrastructure was used to throttle YouTube.
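One concrete reason encrypted traffic remains legible to DPI is that the TLS handshake leaks the destination hostname in cleartext, in the Server Name Indication (SNI) field of the ClientHello. The sketch below is a simplified illustration, not production DPI code: it builds a minimal ClientHello and reads the hostname straight out of the unencrypted bytes, exactly the information a middlebox needs to throttle a specific service:

```python
import struct

def build_client_hello(hostname: str) -> bytes:
    """Construct a minimal TLS 1.2 ClientHello carrying only an SNI extension."""
    name = hostname.encode()
    sni_entry = b"\x00" + struct.pack(">H", len(name)) + name     # type 0 = host_name
    sni_list = struct.pack(">H", len(sni_entry)) + sni_entry
    sni_ext = struct.pack(">HH", 0, len(sni_list)) + sni_list     # extension type 0 = SNI
    exts = struct.pack(">H", len(sni_ext)) + sni_ext
    body = (
        b"\x03\x03" + bytes(32)   # client_version + (zeroed) random
        + b"\x00"                 # empty session_id
        + b"\x00\x02\x00\x2f"     # one cipher suite
        + b"\x01\x00"             # null compression
        + exts
    )
    handshake = b"\x01" + len(body).to_bytes(3, "big") + body
    return b"\x16\x03\x01" + struct.pack(">H", len(handshake)) + handshake

def extract_sni(packet: bytes):
    """Pull the plaintext server name out of a ClientHello, as a DPI box would."""
    if packet[0] != 0x16:                                  # not a TLS handshake record
        return None
    pos = 5 + 4 + 2 + 32                                   # record hdr + hs hdr + version + random
    pos += 1 + packet[pos]                                 # skip session_id
    pos += 2 + int.from_bytes(packet[pos:pos + 2], "big")  # skip cipher_suites
    pos += 1 + packet[pos]                                 # skip compression_methods
    end = pos + 2 + int.from_bytes(packet[pos:pos + 2], "big")
    pos += 2
    while pos < end:                                       # walk the extension list
        ext_type = int.from_bytes(packet[pos:pos + 2], "big")
        ext_len = int.from_bytes(packet[pos + 2:pos + 4], "big")
        if ext_type == 0:                                  # server_name extension
            name_len = int.from_bytes(packet[pos + 7:pos + 9], "big")
            return packet[pos + 9:pos + 9 + name_len].decode()
        pos += 4 + ext_len
    return None

hello = build_client_hello("twitter.com")
print(extract_sni(hello))  # hostname is readable before any encryption begins
```

This is why a middlebox can throttle or block traffic by service name without decrypting anything; newer extensions such as Encrypted Client Hello aim to close this particular gap.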

By 2024, the practical result of this layered infrastructure was visible in daily life. A young man in the Ural region who accidentally opened a browser tab containing content related to a designated Ukrainian organization, without sharing, downloading, or posting anything, found that his mobile provider had automatically logged the access and passed the data to the FSB. He had not done anything that would register as an action in any conventional sense. He had simply been browsing the internet.

The Architecture of Creep

The Russian case offers a clear anatomy of how surveillance states are assembled in democratic or semi-democratic contexts. The process is rarely announced as what it is. It proceeds through a series of defensible steps, each one building on the last, each one justified by the most recent crisis or the most sympathetic category of victim.

First, establish the precedent that the state may maintain a secret list of unacceptable content and compel private infrastructure to enforce it, without judicial oversight. Do this in the name of protecting children, a purpose so legitimate that objections seem monstrous. Second, expand the categories using vague language: "extremism," "illegal gatherings," "violating the established order." Do this in the name of stability. Third, use the infrastructure politically, systematically, against opponents, journalists, and civil society organizations. Fourth, mandate that all communications be stored and made accessible to security services without court orders. Do this in the name of counterterrorism. Fifth, install surveillance hardware at every network node. Do this in the name of sovereignty.

At no point in this sequence does the government announce: "We are building a total surveillance state." At every point, the specific measure being implemented can be described in terms of legitimate state interests. The child protection framing of 2012 was not purely cynical; the blacklist did target genuinely harmful content. The terrorism framing of the Yarovaya Law was not purely cynical; Russia has faced genuine terrorist attacks. But in both cases, the infrastructure built to address the stated problem was far larger, more flexible, and more durable than the stated problem required. That excess capacity is not wasted. It is the point.

A system able to monitor the internet activity of millions of citizens and ready to ban content considered undesirable by the Russian government.

European Digital Rights initiative (EDRi)

By 2025, nearly one million websites had been blocked in Russia. More than 150 media organizations had been designated illegal. Every search query, every connection, every post was logged. The scaffolding was always the same. Only the signs had changed.


This is part of a series on how governments worldwide use protective legislation to build surveillance infrastructure. See also: Child Safety as a Backdoor to the Surveillance State and The Global Surveillance Playbook.