When Code Is Law – The Indian Express

Published by The Indian Express: http://indianexpress.com/article/opinion/columns/when-code-is-law-digitised-data-leak-5168760/.

With the debate spurred by the revelations of Cambridge Analytica’s dealings with Facebook — and, closer to home, by Aadhaar — we may have to revisit the very foundations of the individual’s social contract with the state when it comes to privacy. Those familiar with the hacker counter-culture of the Nineties knew one thing — the most potent weapon of information warfare is availability.

Julian Assange wrote an informal manifesto for Wikileaks in 2006, stating that “where leaking is easy, secretive or unjust systems are nonlinearly hit, relative to open just systems”. Dave Aitel, a cyber-offence expert recruited by the National Security Agency at the age of 18, concedes that Assange’s document “was way ahead of its time”. Back then, regimes around the world were still honing the dark art of extending the militaristic domain of information warfare to cyberspace, barring a few exceptions like the United States and its anglophone allies.

We crossed the Rubicon when Russia allegedly influenced the 2016 US presidential elections by weaponising the availability of information, proving the Wikileaks hypothesis that keeping secrets would become costly. Billions of digital identities are for sale at ridiculously cheap prices. Emin Gün Sirer, a self-proclaimed hacker and associate professor of computer science at Cornell University, writes that “our laws were written for a time and place where giant data collections and intersections were difficult to perform, so we’ve erred on the side of forcing the government to release whatever it knows”.

Every interaction is a leak, says Sirer. The digitisation of social interfaces has made leaking so rapid that it outpaces the human ability to comprehend it. He postulates that eventually “everyone will have access to all the data related to everyone who is alive during their lifetime”. The research-grade problem is to strip the data of its value, which requires a fundamental shift in how privacy is perceived.

Access to hitherto forbidden information spawns unexpected formations, which is what we are witnessing with the ideological echo chambers aggravated by social media. Despite being bound by perceived commonalities, these groups are ad-hoc, unpredictable and probably their own worst enemies.

When such unpredictability, intensified by the deluge of information, becomes the norm, we will inevitably rely on Artificial Intelligence (AI), much to our peril. In 2015, reports appeared that Google’s artificial neural network had started spouting “Dali-esque” images when processing common worldly objects. It took a while to figure out that the system’s “brain” was over-amplifying the very patterns it had been trained to recognise.

There is a Gordian knot in the pursuit of objectivity. Dan Geer, a cybersecurity expert at the CIA’s venture capital fund, In-Q-Tel, explains the paradox: “The more data [an AI system] is given, the more its data utilisation efficiency matters. The more its data utilisation efficiency matters, the more its algorithms will evolve to opaque operation. Above some threshold of dependence on such an algorithm in practice, there can be no going back”. The quality that this evolving opaqueness erodes is what he terms “interrogatability”: our ability to ask an algorithm why it decided what it did.

Analytical systems will obviously become less interrogatable as the data deluge continues. Domains like cyber-defence, where one can only fight algorithms with algorithms, are already reaching those thresholds. It is scary to imagine the impact this may have on the real world, on nations, societies and individuals making crucial decisions while relying merely on esoteric computations. That is almost an eerie allusion to the technological singularity (the point at which AI would surpass human intelligence) speculated by futurists like Ray Kurzweil.

Democracies would be susceptible to its pitfalls. Life and liberty would be etched on semiconductors. Lawrence Lessig, a professor at Harvard Law School, divined in 2000 that code, the language in which computational logic is expressed, would act as the enforcer of law, and might even become the law.

The writer is a cyber-intelligence specialist and has worked with the Indian government and security response teams of global companies.

The next war in cybersecurity would be between CapEx and OpEx – LinkedIn

Published on LinkedIn: https://www.linkedin.com/pulse/next-war-cybersecurity-between-capex-opex-pukhraj-singh/.

Anton Chuvakin, formerly a log ninja and currently a VP at Gartner, has been whipping up some emphatic commentary on SIEMs. You know, those ugly, inflexible monoliths which have dominated the decision layer of security for a decade now, just refusing to go away.

He has driven home a couple of points about the absolute operational fragmentation of the security architecture. For instance, that there are more security boxes within an enterprise than there are people to manage them [1]. Or that there could actually be a thing called “SaaS SIEM”, though I vehemently disagree with that term (more on that later) [2].

Sitting and building platforms in India gives you a very different perspective. You exist in a market where the deployment rates of technologies like the SIEM could be less than 1%. You realise, having investigated dozens of APTs, that the relative immaturity of the security architecture has no direct bearing on the perceived threat landscape, or even on the economics of cybersecurity. A stupid little irritant like WannaCry would have pretty much the same impact here as in any other part of the world, and I am not talking about mere numbers.

Probably because the American cybersecurity vendor landscape is pacing ahead by a few generations, Chuvakin may have missed emphasising something else. The secondary catalyst of this domain, after offensive and defensive technological disruption, is economics. The moment you mull over it, the rabbit hole begins to reveal itself.

Richard Stiennon, another Gartner alumnus, rightly puts it: this is the only field of IT whose biggest driver is external, namely the threat actor [3]. Now imagine how complex the econometrics describing this larger ecosystem would be.

Let’s factor in some parts of the equation. I believe that ‘Western’ security products are generally exorbitantly priced. Those price tags just don’t make sense, especially in a market like India. In fact, I see a bubble there.

I had the realisation that this is because of our over-reliance on product engineering. The vendors bet on the high CApital EXpenditure associated with acquiring the security architecture, hoping to make huge profits, while the customers struggle with ballooning, mostly hidden OPerating EXpenditure. So grim is the situation that curbing the OpEx has become an existential challenge for enterprises, a do-or-die situation much like cybersecurity itself.
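
To make that asymmetry concrete, here is a deliberately crude back-of-the-envelope sketch; all the figures are hypothetical, invented purely for illustration:

```python
# All figures are hypothetical, for illustration only.
capex = 500_000          # upfront licences and appliances
opex_per_year = 300_000  # analysts, tuning, integrations, renewals
years = 5

tco = capex + opex_per_year * years  # five-year total cost of ownership

print(f"5-year TCO : {tco:,}")                            # 2,000,000
print(f"CapEx share: {capex / tco:.0%}")                  # 25%
print(f"OpEx share : {opex_per_year * years / tco:.0%}")  # 75%
```

The sticker price the vendor quotes is the smallest part of the bill; the hidden, recurring OpEx dwarfs it.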

The developmental paradigm of most American security companies assumes that the customer needs product engineering. In a pure-play services market like India, it’s easy to call their bluff. What the customers really want is solutions engineering, the ability to merge and control both CapEx and OpEx transparently and seamlessly, and those vendors simply cannot scale up horizontally for that.

Moreover, this is happening at a time when skeletal Big Data stacks have become mature enough to serve most of the platform requirements: decision analytics, orchestration, event and intelligence correlation, risk quantification, threat hunting and so on.
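
As a toy illustration of why those skeletal stacks suffice: stripped to its core, event-and-intelligence correlation is just a keyed join between an event stream and an indicator set, which any modern data stack performs natively. The indicators and events below are hypothetical.

```python
# Hypothetical indicators, as might be ingested from a threat feed.
BAD_IPS = {"203.0.113.9", "198.51.100.77"}

# Hypothetical connection events, as might stream in from a firewall.
events = [
    {"host": "db-01", "dst_ip": "203.0.113.9", "bytes": 48112},
    {"host": "ws-14", "dst_ip": "192.0.2.10", "bytes": 1204},
]

# Correlation reduces to a join/filter on the shared key (the IP).
for event in events:
    if event["dst_ip"] in BAD_IPS:
        print(f"ALERT: {event['host']} talked to known-bad {event['dst_ip']}")
```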

In fact, we at Bhujang realised this two years ago. “We do solutions engineering, not products” became our mission statement.

So, let me return to my objection to the term “SaaS SIEM”. I think it force-fits the round peg of solutions into the square hole of products, although I fully agree with Chuvakin’s premise that shared or managed analytics are more actionable and cost-effective.

I will digress a bit now to another lingering problem. I see that many vendors are also mis-selling enterprise-grade products to the homeland security market. That undermines a nation’s economic resilience against cyber threats.

The reason I am pointing it out in this article is that the vendors haven’t made an honest effort to demarcate these two very different territories. I have written a lot about the extreme lack of interoperability within the enterprise-centric security architecture [4]. Somewhere down the line, we need to start acknowledging the ‘emergent’ nature of cybersecurity, to better understand why enterprises will keep getting hacked. I think, beyond a certain point, what we will really need are universal, interoperable, machine-to-machine layers of abstraction, and the enterprise vendors need to stay the hell away from them!


1. Security Without Security People: A [Sad] Way Forward?
http://blogs.gartner.com/anton-chuvakin/2017/06/29/security-without-security-people-a-sad-way-forward/

2. Action Item: SaaS SIEM Users Sought!
http://blogs.gartner.com/anton-chuvakin/2017/07/19/action-item-saas-siem-users-sought/

3. The Entire IT Security Landscape
https://www.youtube.com/watch?v=YYNM2VRmncE

4. For Enterprises Giving Up on Cybersecurity Vendors: Abstraction Is the Future
https://www.linkedin.com/pulse/enterprises-giving-up-cybersecurity-vendors-future-pukhraj-singh

For Enterprises Giving Up on Cybersecurity Vendors: Abstraction Is the Future – LinkedIn

Published on LinkedIn: https://www.linkedin.com/pulse/enterprises-giving-up-cybersecurity-vendors-future-pukhraj-singh/.

An interesting development marked the conclusion of the Borderless Cyber USA 2017 conference last week. A representative from the National Security Agency (NSA) announced the launch of OpenC2 – a “standardised computer language” that creates a layer of abstraction to facilitate cyber response across product and organisational boundaries at machine speed.

The future of cyber security, and of homeland security in general, lies in such layers of abstraction, which introduce machine-to-machine interoperability and seamlessness into a highly fragmented ecosystem. This is probably only the second such strategic initiative driven not by vendors but by standardisation bodies. The first layer of abstraction, which paved the way for OpenC2, was STIX-TAXII.
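
To give a flavour of what such a layer looks like, here is a minimal sketch of a vendor-neutral “deny” command, modelled loosely on the shape of the early OpenC2 drafts (an action, a target, optional arguments); the exact field names and the endpoint URL are assumptions for illustration:

```python
import json
import urllib.request

# A vendor-neutral command: any conforming actuator (a firewall,
# a router, an EDR agent) translates it into its own native rule.
command = {
    "action": "deny",
    "target": {"ipv4_connection": {"src_addr": "198.51.100.7"}},
    "args": {"duration": 3600},
}

def dispatch(cmd, endpoint="https://orchestrator.example/openc2"):
    """POST the same JSON command to any conforming endpoint (hypothetical URL)."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(cmd).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

print(json.dumps(command, indent=2))  # inspect the command offline
```

The point is not the syntax; it is that response logic written once can cross every product and organisational boundary at machine speed.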

But these standards really don’t matter. No, they don’t. What matters is our understanding of how we got here in the first place.

The contemporary enterprise security architecture is dying a death by a thousand cuts, even though the attack surfaces have remained consistent for the last two decades.

We productised the security controls into what I call the Detection, Prevention & Response Layer, the irony being that if the controls act like little kernels of governance, the products would of course create their own layers and silos. From this incompatibility stemmed the need for a Decision Layer, which has remained stagnant for the last decade since the onset of SIEMs.
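
A minimal sketch of the chore the Decision Layer is stuck with, using hypothetical events in three invented product dialects: before any decision can be made, every silo’s dialect has to be flattened into one schema.

```python
# Hypothetical raw events, each in its own product's dialect.
raw_events = [
    {"vendor": "fw",    "src": "10.0.0.5",    "verdict": "blocked"},
    {"vendor": "edr",   "machine": "ws-14",   "alert": "dropper.exe"},
    {"vendor": "proxy", "client": "10.0.0.5", "category": "malware"},
]

# The Decision Layer's chore: flatten every dialect into one schema
# before any correlation or decision is possible.
def normalise(event):
    if event["vendor"] == "fw":
        return {"entity": event["src"], "signal": event["verdict"]}
    if event["vendor"] == "edr":
        return {"entity": event["machine"], "signal": event["alert"]}
    if event["vendor"] == "proxy":
        return {"entity": event["client"], "signal": event["category"]}

for event in raw_events:
    print(normalise(event))
```

Every new product adds another branch to that translation logic; that is the incompatibility compounding.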

The enterprises created a one-way street of security governance, one which only gets narrower as we approach the destination. You can’t turn back, and there are no milestones or metrics to quantify the progress.

The products were heavily focused on detection, detection and detection. From the very bottom of the attack surfaces to the top of the Decision Layer, we lost almost 70% of our telemetry, context, intelligence and situational awareness. SIEMs became ugly, inflexible, monolithic monstrosities.

Right now, any enterprise worth its salt has around a dozen layers of incompatibility to deal with.

Then arose the question of the motivated state actors. Bet your millions, but the enterprises would always lose out to them: there is the foundational insecurity of the internet, the routing edge that is literally a no man’s land, the sneaky technology vendors, and the self-defeating complexity of the security architecture itself.

During my talks, I never fail to highlight this paradox with the example of Dirty Cow: a zero-day vulnerability that remained potent for almost a decade, detected by a security administrator not with point tools but by capturing all the traffic to his datacentre. After twenty years of evolution of the cybersecurity vendor landscape, we have come full circle.

A similar case also drew a lot of media attention last week. An enterprise running Tanium, Cylance and McAfee, and ingesting 138 threat intelligence feeds, was struck by an attack so precise that nothing tripped. The saviour was the good ol’ traffic recorder.
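
A rough sketch of why the traffic recorder wins: full capture lets you ask questions after the fact that no point tool was configured to ask in advance. This assumes the third-party scapy library and a hypothetical capture file; the “rare destination” heuristic is just one illustrative question.

```python
from collections import Counter

from scapy.all import IP, rdpcap  # third-party: pip install scapy

packets = rdpcap("datacentre_capture.pcap")  # hypothetical full capture

# Count how often each destination address appears in the capture.
dst_counts = Counter(pkt[IP].dst for pkt in packets if IP in pkt)

# Retrospective hunting: very-low-volume destinations are exactly the
# kind of precise activity that trips no signature or feed.
for dst, hits in dst_counts.items():
    if hits <= 2:
        print(f"rare destination {dst}: {hits} packet(s), worth a look")
```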

The economics of cybersecurity has become completely bogus because of these glitches in the formulae.

To quote Dave Aitel quoting Frank Heidt, “The emergent property of an avalanche is a grain of sand”. It is for this reason that MITRE, DHS, FS-ISAC and the NSA have started acknowledging the emergent nature of cybersecurity. Complex biological systems showcase such behaviour in defence against foreign bodies: the individual components of the architecture don’t make much sense, but the system as a whole does. One cell is breached so that the symptoms are relayed and the larger organism survives the intrusion.

And that’s why OpenC2 and STIX mark just the beginning of a new economic incentive, one which encourages the distribution of risk across organisations to save the economy and the nation. The selfish herd survives at the cost of a small sacrifice. It would also de-layer the enterprise architecture by a factor of six.