Arindrajit Basu and Karan Saini wrote a detailed critique of my essay on cyber norms for the Modern War Institute: Setting International Norms of Cyber Conflict Is Hard, but That Doesn’t Mean We Should Stop Trying.
Here’s my rejoinder to their rejoinder:
I am honoured by the authors’ interest in my essay. Without detaining the reader for long, I would like to point out a fundamental technical anomaly in Basu and Saini’s central argument.
A reader who mulls over it long enough may realise that, because of this very anomaly, the presupposition that international law applies to cyber operations also crumbles.
Had I known that a specific discursive approach of my essay would become the opening salvo, I would have made it clearer that cyber operations, offensive toolchains and exploitation, while overlapping, have their own distinct emergent properties, which feed into an overarching improbability, uncertainty and ambiguity.
My ambiguity argument began with the first of these and ended with the last. I was trying to apprise the reader of their cumulative effect, but perhaps I should have been even more explicit.
I will take specific real-world examples:
Former NSA deputy director Rick Ledgett conceded that while the NSA had complete access to the Iranian bot-herding system, it still could not rely on that access to support an executive decision. Mind you, a DDoS attack is generally considered low-grade.
It is clear from the DoJ indictments that the NSA was getting a livestream of intelligence on APT28 and APT29 from its Dutch partner AIVD. Obama still had to rely on highly placed human sources like Oleg Smolenkov, Sergei Mikhailov and Ruslan Stoyanov to complete the picture.
Despite substantial TECHINT on Bureau 121, it was HUMINT again that came to the USG’s rescue during the Sony Pictures escalation.
And these are trivial attacks that we are talking about!
If the topmost tier of your threat intelligence framework is plagued by so much uncertainty and such warped cost-benefit calculations (a mole in the Kremlin just for cyber intelligence, seriously?), how can the chain of command function?
We still don’t fully understand the functionality of all modules of the Equation Group (NSA’s TAO), Slingshot (USSOCOM) and the Lamberts (CIA’s Vault 7).
We don’t understand their targeting criteria, their geopolitical imperatives, or their CONOPS just from the reverse-engineered code.
We don’t understand how they all map to the larger national security portfolio or remit – which is a must for any norm-setting exercise. This, even though a major portion of these toolchains has been leaked.
Reverse engineering is not a panacea. The intent of an operation doesn’t reside in the code. One need only read JA Guerrero-Saade’s paper on false flags to realise that, even though the whole industry was picking apart the technical evidence, we almost got carried away by the deception. We DO get carried away by false flags despite seemingly sure-shot technical evidence.
If you can’t even rely on technical evidence, then what?
Companies like Google and Microsoft have invested a LOT OF MONEY in building high-level ontological frameworks that fuse techniques like tactical cyber intelligence, code similarity engines, heuristics and telemetry.
Only now is it possible to undertake basic STRATEGIC intelligence case management – profiling threat actors to understand their limits, the boundaries of their knowledge, and their incentives.
Talking about money, I remember Costin Raiu mentioning how his proposal for building a 5000-node code similarity engine (like that of Google or Intezer) was summarily rejected by Kaspersky’s CFO.
How can this be considered standard? How can cyber norms exercises expect nation states to have such cost-prohibitive capabilities? Forget about the costs, how would governments get the required telemetry? And would this ever be fully declassified for normative frameworks?
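For readers wondering what a code similarity engine actually computes, here is a deliberately minimal sketch. It is my own illustration, not Kaspersky’s, Google’s or Intezer’s actual design: production engines extract disassembly-level features (“genes”) across millions of samples on large clusters, while this toy merely compares two binary blobs by the Jaccard overlap of their byte n-grams.

```python
# Toy code-similarity sketch: Jaccard overlap of byte n-grams.
# Illustrative only -- real engines work on disassembled features at
# cluster scale; this just conveys the core idea of shared-code scoring.

def ngrams(data: bytes, n: int = 4) -> set:
    """Extract the set of n-byte sliding-window chunks from a blob."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def similarity(a: bytes, b: bytes, n: int = 4) -> float:
    """Jaccard similarity of the two blobs' n-gram sets, in [0.0, 1.0]."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga and not gb:
        return 1.0
    return len(ga & gb) / len(ga | gb)

# Two hypothetical 'samples' sharing a code fragment should score higher
# against each other than against an unrelated blob.
shared = b"\x55\x48\x89\xe5\x48\x83\xec\x20" * 8
sample1 = shared + b"\x90" * 16
sample2 = shared + b"\xcc" * 16
unrelated = b"\x00\x11\x22\x33\x44\x66\x77\x88" * 10

print(similarity(sample1, sample2) > similarity(sample1, unrelated))  # True
```

Even this toy hints at the cost argument: pairwise comparison explodes combinatorially as the corpus grows, which is exactly why production-grade systems need clusters and telemetry, not laptops.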
Guerrero-Saade’s blog post on GossipGirl, supra threat actors and Flame 2.0 is telling in terms of the amount of analytical horsepower required to derive such strategic conclusions. How can this be considered normal or expected?
And I am not even talking about the imbalance of power due to the politics of access. Can cyber norms be built on such an inequitable foundation?
From an operator’s standpoint, generally only a small portion of the toolchain manifests over the adversarial infrastructure. In essence, it is like a rocket programme with many launch stages (cf. the Kill Chain or ATT&CK) and a massive mission control (90 per cent of the effort is spent on the targeting framework).
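The launch-stage analogy can be put in a toy model. The stage labels and effort shares below are my own illustrative assumptions, anchored only to the 90 per cent figure above; they are not empirical data:

```python
# Toy model of the 'rocket programme' analogy: operation stages with
# ASSUMED effort shares and a flag for whether the stage ever manifests
# on adversary-facing infrastructure. Numbers are illustrative only.

STAGES = [
    # (stage, effort_share, manifests_on_wire)
    ("targeting framework / mission control", 0.90, False),
    ("toolchain development & weaponisation", 0.06, False),
    ("delivery / exploitation",               0.02, True),
    ("installation / command-and-control",    0.02, True),
]

# Sanity check: the assumed shares form a complete breakdown.
assert abs(sum(share for _, share, _ in STAGES) - 1.0) < 1e-9

on_wire = sum(share for _, share, visible in STAGES if visible)
print(f"Effort visible on adversarial infrastructure: {on_wire:.0%}")  # 4%
```

The point the model makes is structural, not numerical: what a defender can observe on the wire is a sliver of the programme behind it.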
As Grugq says, “You need a lot of people to have a small number of hackers hacking.” I don’t even want to explain how many things can and DO go wrong.
Do you think Turla was having fun while exfiltrating gigabytes of stolen data via crappy satellite uplinks? You can’t rely on mere access.
Unfortunately, the authors mostly focused on the lowermost tier of my argument – exploitation.
Sure, within a very narrow adversarial environment, it does look like a predictable exercise.
But operators spend the majority of their time and resources keeping the adversarial infrastructure primed for a militaristic cyber operation. That’s like a bunch of river-rafters trying to keep the raft STILL in a torrent of water.
Chris Inglis calls the preparatory aspects of exploitation the cyber-ISR framework. He feels that battling this uncertainty and fluidity should be the main aim of a cyber intelligence agency – it’s THAT crucial and expensive.
On a lighter note, despite the hype around the WhatsApp zero-day, the NSO Group having to give multiple missed calls to its targets to activate the exploit is a case in point.
We haven’t even talked about Aaron Adams’s paper on the layers of exploit mitigations that the operator may encounter.
The legal argument is fine as far as it goes. My problem is that a part of this essay confuses operations, toolchains and exploitation. I am back to square one, as the emerging geo-strategic taxonomy relies on THIS VERY delineation.
Cyber operations, offensive toolchains and exploitation are three different things. Geo-strategy lives in the first, mechanised warfare in the second, and the political economy of proliferation in the third.