
A Brief Cybersecurity History

Published July 02, 2020 by Benny Lakunishok

From Prevention to Detection and Back Again!

History has a way of repeating itself - we see it time and time again (pun intended!), particularly in cybersecurity. That’s why we think a quick look back at what we did to protect our networks, and why we did it that way, can shed some light on what we should do to protect them going forward.

So what can we learn from yesterday’s strategies and tactics to improve the success of today’s approaches? I’ll give you a hint: everything comes full circle - prevention is not only in our past, but also in our future.

Lost in Translation: From the Physical to the Digital

On November 2nd, 1988, Robert Tappan Morris unleashed the first computer worm to spread via the Internet. This “innocent” worm caused a massive slowdown of the Internet, with damages estimated at anywhere between $100,000 and $10 million USD. Taking into account the small number of computers connected to the Internet at the time (about 60,000 worldwide), that’s sizeable damage (in today’s terms, close to $22 million).

The Morris worm was a harbinger of things to come, demonstrating how infectious and destructive a cyberattack can be; however, it took a while for the danger to be truly recognized. In the early days, everyone was just trying to figure out how to use these new technologies. It was less about reimagining the world and more about transferring the physical world to the digital one.

We saw this with the first viruses, which were designed to mimic the behavior of biological viruses: spreading and infecting as many hosts (aka “healthy” files) as quickly as possible so they could incubate and survive. We also saw this in the first cybersecurity defenses, which mimicked strategies that had worked in the physical world, namely building up immunities and creating strong, defensible perimeters to keep the ‘bad’ at bay.

Anti-virus solutions, which ‘inoculated’ systems and networks against known infections, and firewalls, which put up walls and gates to stop dangerous things from getting in, soon became the “bread and butter” of security. While admirable in their intent - preventing attacks before they could happen - they were limited in scope and effectiveness. They could only protect against what they already knew, and they failed to account for all the different risks (internal as well as external) and threat vectors being created by the very digital world they were trying to protect.

Struggling to Keep Up

As more and more companies, users, and devices connected to the Internet, more and more bad actors looked for ways to profit and gain an advantage. Whether it was a nation state, an organized criminal group, or someone in a basement, they could typically get past whatever defenses were in place. Even with the addition of new layers of protection, such as intrusion prevention systems (IPS), web application firewalls (WAF), and network access control (NAC) systems, attackers kept succeeding.

As the first decade of the 2000s came to an end, tens of thousands of new malware samples that antivirus could not detect were appearing, and attackers continued to infiltrate perimeters and carry out their objectives unhindered. Verizon’s annual Data Breach Investigations Report (DBIR), which began collecting and analyzing information about data breaches back in 2008, painted a clear picture: “Most breaches go undetected for quite a while and are discovered by a third party rather than the victim organization. Attacks tend to be of low to moderate difficulty and largely opportunistic in nature rather than targeted.”

On January 12, 2010, Google published a blog post disclosing that it had suffered a breach of its infrastructure. According to Google, the attack originated from China and targeted the email accounts of human rights activists. Several other U.S.-based companies were hit by the same campaign, dubbed Operation Aurora. Later that year, the Stuxnet worm, which targeted Iranian industrial facilities, was discovered.

It was clear that what was in place was not working - perimeter prevention alone was not delivering the goods. Mandiant’s M-Trends 2011 report said as much in its title: "When Prevention Fails". It was failing because, while the digital world was changing everything, our defenses were still stuck in the past. They assumed digital threats could be mitigated the same way as physical ones; that it was easy to tell the difference between what is good and what is bad; and that people and machines, once inside, must be good. But, as we all know, very bad people and things can be everywhere, taking on all different shapes and sizes, often hiding in plain sight.

Detection: You Can’t Fight What You Can’t See

The reality that perimeter security was failing could no longer be ignored, and the security industry shifted its focus away from attack prevention and toward attack detection. The new mantra was “be prepared for ‘when’, not ‘if’ you are attacked,” and it led to new strategies based on the premise that attackers were already in the network.

To battle this foe, CISOs armed themselves with technology designed to give them visibility and context into the threats in their environment, coupled with the ability to investigate and respond to those threats - hence the rapid growth of the security information and event management (SIEM), user and entity behavior analytics (UEBA), and endpoint detection and response (EDR) markets. CISOs also boosted hiring to get their security operations centers (SOCs) up and running and able to respond to and defeat these inevitable attacks.

This methodology of combating digital threats was a radical change from the old one: where the old approach relied heavily on prevention, much like combating physical attacks, the new approach favored detection. Detection relied on “good” humans (aka the SOC) defending against “bad” humans (aka hackers). The problem is that humans are hard to scale, and they are a poor match for automated attacks - hackers keep using whatever tools are available, from bots to AI and machine learning, to expand the speed and reach of their attacks.

Enter ransomware. The first (arguably) modern ransomware attack began in 2013 with CryptoLocker, followed by the infamous WannaCry, which infected roughly 200,000 computers across 150 countries. The damage from these attacks was (and still is) great, with ransomware damages estimated to exceed $11.5 billion in 2019.

Having to rely on detection technologies to uncover a threat in the network before it can be addressed means SOC teams are typically too late. By the time they are able to respond, the damage has, in large part, already been done. Plus, SOC teams that receive hundreds, even thousands, of alerts every day can easily miss the ones that matter and fail to take swift measures to contain them in time. (Attacks often remain undetected for days, weeks, even months, and many end up being discovered by an external source.)

It is clear that another shift is needed to improve the effectiveness of our defenses. The time is ripe for a new approach. But let me suggest one that comes with a nostalgic twist, one that takes the best of past prevention and detection strategies and modernizes them...

Back to Prevention - What’s Old is New Again

In isolation, prevention at the perimeter and detection inside the perimeter fail to address the realities of today’s digital world: there is no network perimeter. We need to assume every asset can be compromised, whether it sits “inside” or “outside” the perimeter. Once an asset is compromised, every packet coming out of it is potentially dangerous, hence there is no safety inside the perimeter. A packet traveling from point A to point B is guilty until proven innocent.

This new approach is colloquially referred to as the zero trust model and is being heralded as the way forward for security. Zero trust is based on the assumption that nothing should be trusted or allowed, inside or outside the network, until it can be verified. In essence, it takes the old perimeter concept and narrows its scope to an individual user or machine, rather than an entire network, to make it work for the modern digital world.
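To make that idea a little more concrete, here is a minimal sketch (in Python, purely for illustration) of what a default-deny, verify-first policy check might look like. The rule names, identities, and the is_connection_allowed function are hypothetical and not drawn from any particular product or standard; real zero trust enforcement also involves strong identity, continuous verification, and enforcement points throughout the network.

```python
# Illustrative sketch only: a default-deny ("zero trust") connection check.
# Nothing is allowed unless the source identity is verified AND an explicit
# rule permits that exact connection. All names below are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class AllowRule:
    source: str       # verified identity of the connecting user or machine
    destination: str  # asset being accessed
    port: int         # service port the rule permits

# The allowlist *is* the policy: anything not listed is denied, regardless
# of whether the source sits "inside" or "outside" the network.
ALLOW_RULES = {
    AllowRule(source="hr-laptop-042", destination="payroll-db", port=5432),
    AllowRule(source="build-server", destination="artifact-store", port=443),
}

def is_connection_allowed(source: str, destination: str, port: int,
                          identity_verified: bool) -> bool:
    """Default deny: permit traffic only when identity has been verified
    and an explicit rule covers this exact source/destination/port."""
    if not identity_verified:
        return False
    return AllowRule(source, destination, port) in ALLOW_RULES

# A compromised workstation trying to reach the payroll database is denied,
# even though it is "inside" the perimeter; the approved laptop is allowed.
print(is_connection_allowed("random-workstation", "payroll-db", 5432, True))  # False
print(is_connection_allowed("hr-laptop-042", "payroll-db", 5432, True))       # True
```

The point of the sketch is the default: instead of asking “is this known to be bad?”, the policy asks “is this explicitly known to be good?” and denies everything else.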

Summary

Our journey over the past three decades has been fun, but also perilous! As attacks grew in sophistication, we’ve seen the security industry change tactics to try to protect what we hold precious: going from prevention to detection, and back again!

As time goes on, attacks are becoming more dangerous. Whether it’s commodity malware, ransomware, or targeted threats - attacks are moving too fast, and doing too much damage, to be dealt with by traditional “detect and respond” technology.

What we see now is the need to reinvent ourselves (without having to totally reinvest in our infrastructure) to get an edge on attackers. We, as an industry, have to make a new kind of prevention possible, one that flips our assumptions and bases our actions on the premise that no one can be trusted. This zero trust approach will make prevention more effective, and even make detection and response easier, because it narrows the scope of what is allowed in the first place. We will finally be able to take control of the narrative and change cybersecurity outcomes going forward.