An oral history of the inapplicability of laws in cyberspace

A recent opinion piece of mine hasn’t gone down well with a clutch of lawyers at the helm of the privacy debate in India. I experienced a backlash of sorts in a WhatsApp group operating under the Chatham House Rule, so I’m not in a position to share much. Apart from the fact that the article wasn’t even written with them in mind, what disturbed me was the clutch imagining itself to be the sole torchbearer on the issue.

Aspersions were cast: I was told I don’t know the meaning of “Hobbesian” and “Libertarian” – loaded words for someone like me to use, no doubt. What followed was a minor showdown of sorts. The comment did pinch me a little, not because I’ve invested in formal education to hone my legal knowledge, but because I’ve always known code and law to be the realm of autodidacts. Anyone can cook, code and interpret the law. I also feared that the group could be an echo chamber, and echo chambers kill republics.

Both code and law are exercises in semantics – abstract in definition, yet precise in application. But the similarities go only that far. Hackers are the new lawyers: they test the enforceability of the laws of cyberspace by violating them. Unlike lawyers, hackers never needed institutional support to do it. We’ve no Ivy League or International Court of Justice. Nevertheless, we’ve managed to do well for ourselves. And when the laws of meatspace are misapplied to the territory we inhabit, we make sure the contemporary justice system creaks at its very foundations, undermining nation states in the process.

Here’s a simple axiom for everyone to test: if your law can’t be validated with code, it’ll remain inapplicable at best and violable at worst. You’ll realise that most of the proposed laws fall into one of the two categories, risking our present by underestimating the future.

The secure foundations of the internet were forged in the crucible of illegality. Reverse engineering is how we fixed the leaking roof of cyberspace, saving it from many an impending apocalypse by finding the vulnerabilities that plague proprietary software. Not only has it flouted Microsoft’s EULA with impunity for decades, it is now the de facto way the company itself undertakes vulnerability research. Dozens of posts laying bare the innards of proprietary software get published every day. The vulnerabilities are then weaponised as proof-of-concept exploits, generously shared within the security community. By this definition, being a hacker is itself a thoughtcrime. Microsoft realised many years ago that it’s better to cajole and pamper these outlaws – by hosting conferences like BlueHat – than to let them lasso the company into submission. Cisco, Oracle, Adobe and Apple eventually followed suit.

Let’s bring the discussion to the present context – say, the eagerness of activists to enforce the Right to be Forgotten in India. By lacking nuance, the legal community has given away its power to intervene progressively – it has been relegated to the strict for-and-against binary of the argument. If this continues, these self-proclaimed arbiters of privacy may eventually get sidelined by civil society.

Let’s imagine some teenage activist like Sean Parker – who, by the way, was a member of the elite hacking group w00w00 in the halcyon days – wakes up pissed one day and decides that the Right to be Forgotten impedes political transparency…which is true. The kid can cook up a clone of the Wayback Machine, hosted on Pirate Bay-like infrastructure, that crawls or crowdsources content via a Tor-like network, recording everything on a blockchain. You’ve a machine that’ll never forget what you did last summer, even if it wants to – and all of it could be just a summer project executed in a basement. Parker, after all, helped break the copyright industry with Napster.
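The property that makes the kid’s machine legally unassailable is trivial to build. The crawling, Tor-like transport and crowdsourcing are hand-waved away here; this is only a minimal sketch of the append-only hash chain at the core, showing why a takedown request can’t quietly remove a record – deleting or editing any entry breaks every link after it:

```python
import hashlib
import json
import time

def record_entry(chain, content):
    """Append an entry whose hash commits to the previous entry.

    Altering or deleting any earlier entry changes its hash, which
    breaks every link that follows -- the archive cannot quietly forget.
    """
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {"content": content, "timestamp": time.time(), "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)
    return entry

def verify(chain):
    """Return True only if every entry still matches its recorded hash
    and its link to the predecessor."""
    for i, entry in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        if entry["prev"] != expected_prev:
            return False
        body = {k: v for k, v in entry.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if digest != entry["hash"]:
            return False
    return True

chain = []
record_entry(chain, "Politician X: deleted tweet")
record_entry(chain, "Politician X: deleted blog post")
assert verify(chain)

# A takedown request "removes" the first record...
chain[0]["content"] = "[REDACTED]"
# ...and the tampering is immediately detectable.
assert not verify(chain)
```

Once such a chain is replicated across nodes nobody controls, the only way to make it forget is to overpower or corrupt the whole network – which is precisely the point made below.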

Check out Politwoops, an archive of politicians’ deleted tweets. Does the GDPR have anything to say about it? You may not have noticed, but a variety of websites that used to talk to the Twitter API and archive tweets, for good or bad reasons, have recently shut down. Twitter sent legal notices and revoked their API access to comply with the GDPR – but can this work at scale and speed?

The only option against the kid’s blockchain would be to corrupt it using cryptanalytic or other kinds of cyber offence – which is selective and wilful and, hence, extra-judicial. I’m not even mentioning the kind of computational power and cyber capabilities that would require. It’s for such reasons that all international tech policy is merely the Clausewitzian continuation of cyberwar by other means. Nation states have long tried cloistering the mathematics of cyber offence, as they’ve too much to lose, but not for much longer – the non-state actors are catching up fast. Mathematics has never been equitable and remains unfazed by legality. The U.S. couldn’t bring down WikiLeaks despite years of a global witch-hunt, forget enforcing a Right to be Forgotten. In fact, as I postulate in my upcoming book, ‘availability’ is the most potent cyberweapon.

Look at how the Arms Export Control Act has been rendered useless and inapplicable to cryptography. The browsers you use have been violating it for 25 years. The apparatchiks and lawyers fell flat on their faces when they tried to bring cyberweapons into the Wassenaar Arrangement in 2017.

Look at the Tallinn Manual, which sought to apply the Law of Armed Conflict to cyberattacks. The following excerpt from a paper in the Temple International & Comparative Law Journal sums up the whole effort:

For cyberspace, however, how international law applies is currently much less clear. Efforts like the Tallinn Manual (both the original and 2.0 versions) may be celebrated for highlighting the extent to which various international law prohibitions and requirements apply in cyberspace. Yet, a close reading of the text of both editions evidences extensive and substantial interpretative disagreements even among its Independent Group of Expert authors (e.g., on defining an armed attack under the jus ad bellum). Moreover, outside the Tallinn process, others have questioned the very existence in cyberspace of some of the international law rules identified in the Tallinn Manual (e.g., self-defence, sovereignty, due diligence).

To further quote from an upcoming book of mine:

The only logical explanation for the persistence of NATO’s strategic posturing around the Tallinn Manual could be that it serves as a ready geopolitical instrument for the wilful enforcement of decrees; a retrospective justification for any punitive measures or mobilisation against adversarial cyber powers like Russia; and a saving grace after the abject failure of the GGE dialogue.

All cyberlaw has become subservient to geopolitics. Look at how the completely outdated, 30-year-old Computer Fraud and Abuse Act is used by the U.S. Department of Justice to chase foreign hackers – without even bothering to hide the fact that the subpoena or warrant relies on possibly illegal intercepts by the NSA.

Even domestically, look at the fate of the Digital Millennium Copyright Act.

Recently, Rule 41 (Search and Seizure) of the U.S. Federal Rules of Criminal Procedure was amended to allow the FBI to break into the networks of other countries for something as trivial as evidence gathering. You’ve to wonder whether this is law or merely picket-fencing.

Has the legal framework for intellectual property protection done anything to curb Chinese cyber espionage, dubbed the greatest transfer of wealth in [American] history?

Beyond a certain level of complexity, ascertaining the intent of a piece of software – say, malware – becomes mathematically intractable, and the overlap between intent and impact becomes tenuous. Lawyers would have a hard time figuring out whether a purported cyber action was a crime, espionage or an act of war. Laws rely on precision – impossible when causality and proportionality fail. And what if the perpetrator was a non-state actor residing on foreign soil?

Even while Moore’s law holds, the amount of data collected will always exceed the computational power available to process it. That’s why we need algorithms. And as Dan Geer prophesies:

The more data [an autonomous system] is given, the more its data utilisation efficiency matters. The more its data utilisation efficiency matters, the more its algorithms will evolve to opaque operation. Above some threshold of dependence on such an algorithm in practice, there can be no going back.

Algorithms used in machine learning, Artificial Intelligence and Big Data will sacrifice accountability for performance: as the opaqueness of an algorithm increases, its interrogability decreases. Geer also adds that “the self-driving car will choose between killing its solo passenger or fifteen people on the sidewalk” – and we may never know why, in any legally meaningful way. That’s when autonomous code starts writing its own law, as hinted at by Lawrence Lessig 20 years ago (albeit in a different context).

I can’t, even if I wanted to, draw past precedents from the Indian legal system: thanks to the terrible enforcement of existing laws, we, as a nation, have resorted to piling the system with populist legislation on rape, lynching, cybersecurity, privacy and what not.

This is not the conclusion but merely the opening argument. “Hobbesian”, to me, means collisions and “Libertarian”, cryptography. Go figure.

Part I of this essay: All roads of data sovereignty lead to a dystopia.