I’m breaking two of my own rules for this post: I’m getting a wee bit personal and expressing outrage. For talk is cheap and outrage comes wholesale over the net.
The Centre for Internet & Society has just released the policy brief, “Leveraging the Coordinated Vulnerability Disclosure Process to Improve the State of Information Security in India.”
The equities process or ‘balance of equities’ is a patently Western philosophical concept largely applied to the American legal process, sometimes specifically to the US’s national security imperatives. Its underlying premise is roughly this:
[The] violations of federal statutes can be enjoined only if an injunction is supported by the balance of equities, a test also known as the “balance of hardships,” the “balance of interests,” and the “balance of conveniences.”— Jared Goldstein, Equitable Balancing in the Age of Statutes
Equitability has established legal precedent and norms within the US national security establishment, but at the strategic level it is an entirely alien concept for India.
I tried hard to find a credible definition of an equities process in the CIS policy brief. It jumps straight to the Vulnerabilities Equities Process without explaining the overarching legal and procedural nuances, or whether the concept even translates to the domestic context.
Yet again, an Indian think tank has simply aped its Western counterparts and brazenly imported a taxonomy that is inapplicable to our national security processes. I mean, this is like a disease here in India. I had earlier written how the ORF has waylaid Indian cybersecurity policy by simply ripping off the American discourse (when American policy promotes American hegemony!).
Even in this case, such an ill-thought-out and shoddy manoeuvre can do more damage to our national security than good — and it may eventually end up undermining our civil liberties. I don’t get why they do it. Is it because all these buzzwords are oh-so-cool, so CIS can score some left-liberal brownie points?
General, attention-deficit readers can go back to Twitter and share this post because I have already debunked the paper; the argumentative types may carry on.
The brief has three authors. I’m clueless as to why the budding hacker Karan Saini hangs out with jholawala think tankers. I can sense some of his highlighted concerns and will address them later in the post. Pranesh Prakash reminds me of the aboriginal headmen of Australia who create their own mythos. Elonnai Hickok? I’m sorry, but why is a foreigner advising us on national security?
When American transparency activists started challenging their government’s VEP in 2017, Dave Aitel and Matt Tait ended up proving that what the activists were doing might in fact haemorrhage their own cause: freedom and civil liberties in cyberspace. Aitel is a celebrated exploit engineer formerly of the NSA, and Tait his contemporary from Britain’s GCHQ. Like me, they come from the other side.
With a trail of opinion pieces spanning two years, Aitel nearly forced the venerable Electronic Frontier Foundation to recant its perilous beliefs. The security community, too, panned EFF. Mind you, Aitel, Tait and others are not warmongering nerds but the products of the hacker counterculture.
The hallowed Belfer Centre of Harvard had to embarrassingly retract its paper on ‘vulnerability discovery’ when Aitel pointed out its obvious misassumptions.
The CIS brief falls under the same category of disrepute. Let me give you an example: its title mentions “Coordinated Vulnerability Disclosure” but its executive summary suddenly starts talking about “formalising a Vulnerabilities Equities Process (‘VEP’) framework for the Indian context.” As if the former is all there is to the latter.
Responsible disclosure is possibly the fag end of a long-drawn-out process of equities, which could start with: doctrine, policy, national security imperatives, national intelligence estimates, operational mandates, institutional capabilities, tactical and strategic cost-benefit analyses, operating protocols, targeting, adversarial profiling, threat perception, vulnerability assessment, vulnerability research, offensive frameworks, operational security, vulnerability sourcing, and so on. How can an outsider interfere without being apprised of this evolutionary process?
And vulnerability discovery is as challenging and ambiguous as quantum physics. Firstly, the definition of a vulnerability could range from a simple misconfiguration, a backdoor, supply-chain chaos, a vague logical loophole or an esoteric system complexity to something more relatable and contemporary, like the memory corruption that VEP generally assumes. We have no settled interpretation of the phenomenon, and the response is derived on a case-by-case basis. Generic, thou-shalt-not statutes don’t work.
Edit: Here’s how Sergey Bratus, a pioneer of the science of exploitation, defines this crippling ambiguity:
Designers, vendors, and programmers themselves have trouble describing whether a software or hardware feature is operating as intended or is, in fact, a security flaw. Advanced exploitation techniques are rapidly moving towards “the boundary between bug and expected behavior”, “almost the fringe of what can be classified as an explicit hole or flaw.” Advanced exploitation is rapidly becoming synonymous with the system operating exactly as designed—and yet getting manipulated by attackers.— Sergey Bratus, The Wassenaar Arrangement’s intent fallacy
As Aitel says, “Regulations are hard because every cyberweapon is different.” While the CIS paper doesn’t mention the term explicitly, we often believe the myth that most offensive programmes rely on acquiring zero-days, when in reality that is a rare exception. Compare the total budget of the NSA’s TAO with its zero-day acquisition fund: it’s a mere fraction (link to be appended).
What nation states do is build capabilities and offensive toolchains over decades. They birth a mathematical lineage of exploitation and, as Dave Aitel beautifully puts it, mathematics is blind to equities.
I am questioning the very qualifier of what a vulnerability is. Are new techniques of reconnaissance, antivirus evasion, persistence or exfiltration to be deemed vulnerabilities (or even zero-days)? If so, the ambit of a hypothetical VEP becomes hopelessly vague, and you could easily bully a well-meaning security researcher into silence. An offensive toolchain is like a rocket with multiple stages from launch to reentry: stealth or uniqueness may be a quality of one or two stages put together, but not of the others, or of all the stages. And what applies to us also applies to our adversaries.
Let me also tell you: even if two vulnerabilities look alike, there’s no certain proof that they actually are the same, which undercuts a shaky assumption called vulnerability collision on which VEPs rely (the CIS paper doesn’t even delve into such intricacies). Patching or mitigation is another mind-bending maze, one that CIS has simply skirted. Sure, let’s fix vulnerabilities and make the world a safer place, but how about the mind-boggling equities of patching (more on that later)?
Let me quote Aitel again:
Every team who has ever had an 0day has seen an advisory come out, and said “Oh, that’s our bug” and then when the patch came out, you realized that was NOT your bug at all, just another bug that looked very similar and was even maybe in the same function. Or you’ve seen patches come out and your exploit stopped working and you thought “I’m patched out” but the underlying root cause was never handled or was handled improperly.— Dave Aitel, The proxy problem to VEP
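Aitel’s point can be reproduced with a toy crash-triage script. This is a minimal sketch of my own, not anything from the CIS brief or from Aitel’s tooling: the function names and stack frames are hypothetical, and the hashing heuristic merely stands in for the deduplication logic many fuzzing pipelines use.

```python
import hashlib

def bucket(frames, depth=3):
    """Naive crash triage: hash the top few stack frames.
    Many fuzzing pipelines deduplicate crashes this way."""
    return hashlib.sha256("|".join(frames[:depth]).encode()).hexdigest()[:12]

# Two hypothetical crashes surfacing in the same function...
crash_a = ["parse_header", "read_packet", "main"]  # root cause: off-by-one
crash_b = ["parse_header", "read_packet", "main"]  # root cause: type confusion

same_bucket = bucket(crash_a) == bucket(crash_b)
print(same_bucket)  # True: the heuristic calls them "the same bug"
# ...yet a patch for the off-by-one leaves the type confusion alive,
# producing exactly the false "I'm patched out" signal Aitel describes.
```

The heuristic collapses two distinct root causes into one bucket, which is why “our bug got patched” is never a safe inference from surface similarity alone.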
In fact, go read everything Aitel has written about VEP, like I did, and then start a deliberation process rather than shoving a policy brief in our faces.
At this juncture, when the Indian cyber offence apparatus is painfully insignificant, such over-zealousness could set our national security back by decades. It’s exactly the kind of myopia that antagonises, and creates a trust deficit between the government and the activist community.
And a word about CIS’s big mission statement of improving the “State of Information Security in India” with VEP. In 2014, cryptologist Bruce Schneier wrote, “Should U.S. Hackers Fix Cybersecurity Holes or Exploit Them?”
It was premised on a simple question: are vulnerabilities dense or sparse? If they are sparse, then swatting each one would have already improved the global cybersecurity posture by now — but we all know that it has only gotten worse. If they are dense, then fixing them one by one would be inconsequential. Schneier calls it an “impossible puzzle.” In fact, security prophet Dan Geer offered an intricately complex solution to the problem in 2015 that still hasn’t been satisfactorily proven. Placing an unfair emphasis on patching could divert precious resources, delay projects and add hundreds of thousands of man-hours to the actual developmental work. And what about the equities of fixing bug classes rather than individual bugs?
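Schneier’s dense-versus-sparse question can be put in back-of-the-envelope terms. The numbers and the `residual_bugs` helper below are my own illustrative assumptions, not Schneier’s or Geer’s actual models:

```python
# Toy model: an attacker needs any one surviving bug; defenders patch a
# fixed number of bugs out of however many latently exist in a codebase.

def residual_bugs(n_latent, patched):
    """Bugs still available to an attacker after patching."""
    return max(n_latent - patched, 0)

# Sparse world: a handful of bugs, so the same patching effort wins quickly.
print(residual_bugs(10, 8))       # 2 left: posture measurably improves

# Dense world: a vast pool of latent bugs, so that effort is mere noise.
print(residual_bugs(100_000, 8))  # 99992 left: one-by-one fixing is inconsequential
```

The asymmetry is the whole argument: identical defensive effort yields a transformed posture in one world and a rounding error in the other, and we don’t know which world we live in.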
Lastly, on Saini’s apprehension about finding loopholes in government software with good intentions and without getting burnt. Let me tell you, there’s no easy way out; you may, in fact, get into trouble. That’s just how the system is, internationally. It happened to Michael Lynn in 2005, when the NSA decided that router exploitation was solely its prerogative. Many security companies have tried to throw researchers in jail for breaking their software. It’s a very common phenomenon, and caution is fully advised.
I am not countering CIS for the heck of it, nor am I a government stooge. My previous opinion piece toed an explicitly anti-establishment line. I’m just basing my critique, trenchant as it may be, on the merits of the argument — or shall I say, the equities of the argument. Like it did with VEP, CIS could just blindly copy EFF and say: we’re not going to let Dave Aitel or Pukhraj Singh tell us to ‘slow our roll.’😁
Well played Dave, well played.
On a more serious note, I did actually integrate several learnings from our discussion (those I didn’t disagree with) in my presentations, “lobbying” efforts and way of thinking.
— Sven Herpig (@z_edian) January 24, 2019
I think on the top of the list goes the defensive value of retention bundled with operational risk of disclosure for previously used or still operationalized vulns.
— Sven Herpig (@z_edian) January 24, 2019