For anyone who has been in the cybersecurity or tech industries for any amount of time, a year or two can feel like an eternity, and a decade tends to leave behind enough techno-fossils to fill countless warehouses. It is therefore hard to believe that the conversation surrounding zero trust has now been taking place for almost 30 years. The goal of this blog series is to give a cursory overview of how the Zero Trust Architecture evolved into what it is today, highlight some specific technological and industry sentiment shifts along the way, and ultimately draw parallels between the much more recent evolution of Attack Surface Management and the challenges of zero trust adoption.
The start of zero trust
As with many practical applications of technology, zero trust first began as a conceptual model, proposed by academic Stephen Paul Marsh in his 1994 doctoral thesis. Technologically, the firewall reigned supreme at this point, and the prevailing security philosophy for enterprise organizations was that everything of value sat behind the firewall – and that was all the protection required. To use a more colloquial visual, an enterprise at this time was like a home with no doors separating its rooms: once someone got inside, they had access to everything and were trusted absolutely.
Over the next decade, the conceptual model for zero trust gained enough traction to warrant inclusion in the Jericho Forum in 2003. By this point, technology had begun enabling remote work like never before, and the ever-increasing need to accommodate a remote workforce drove changes to corporate architectures that moved further and further from the traditional perimeter model. The Jericho Forum recognized these trends early on, but it wasn't until Google suffered a significant and very public breach in 2009 (Operation Aurora) that the zero trust model began to develop more extensively, with the creation of BeyondCorp.
Confusion around what is – and what isn’t – ‘zero trust’
In principle, zero trust is extremely simple and is exactly what it sounds like: Don’t. Trust. Anything. However, it quickly became clear that the practical execution led to a great deal of confusion in the 2010s.
Companies and marketing teams latched onto the term and slapped "zero trust" onto their product marketing and advertising. In the absence of an official framework or authority, "buzzword bingo" polluted the market with advertisements for products that weren't truly zero trust – and no one could authoritatively say otherwise. The result was a conflation of ideas and technologies that confused customers and the market as a whole: no single vendor or technology had a corner on the market, and both the problem and the solution were poorly understood. The category became so muddled that the model had to be broken up into sub-categories such as Workforce (users), Workplace (networking), and Workloads (applications).
To add to the ever-expanding complexity of the challenge, around this same time Apple and other companies completely changed the game by debuting their various app stores, and cloud adoption gained traction so quickly that many would say we are already in the "late adopters" stage of its lifecycle. Suddenly, the scope of security became exponentially more complex as hundreds to thousands of applications and compute resources utterly blew apart the traditional concept of a perimeter. More tooling became necessary and available, and dozens of companies spun into existence to meet the need, further confusing the landscape for consumers and producers alike.
The NIST Zero Trust Architecture
Thankfully, the culmination of this movement led to the NIST SP 800-207 Zero Trust Architecture publication, finalized in 2020. This framework formalized zero trust principles into three primary components:
- Enhanced identity governance and policy-based access controls
- Micro-segmentation
- Overlay networks and software-defined perimeters
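The first of these components – policy-based access control – is the heart of the model: every request is denied by default and granted only when identity, device posture, and resource policy all check out. A minimal sketch of such a policy decision might look like the following; the users, roles, resources, and posture checks here are all hypothetical, and a real deployment would delegate each check to dedicated identity and device-management systems.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    device_compliant: bool   # device posture check passed
    mfa_verified: bool       # identity strongly verified this session
    resource: str

# Hypothetical policy table: which role may reach which resource.
POLICY = {
    "payroll-db": {"finance"},
    "build-server": {"engineering"},
}

# Hypothetical identity store mapping users to roles.
USER_ROLES = {"alice": "finance", "bob": "engineering"}

def evaluate(req: AccessRequest) -> bool:
    """Deny by default; grant only when identity, device posture,
    and resource policy all pass -- the zero trust stance."""
    role = USER_ROLES.get(req.user)
    if role is None:
        return False  # unknown identity: no implicit trust
    if not (req.device_compliant and req.mfa_verified):
        return False  # untrusted device or session
    # Default deny: unknown resources grant access to no one.
    return role in POLICY.get(req.resource, set())

# Alice (finance) reaches payroll but not the build server,
# and a non-compliant device is refused regardless of role.
print(evaluate(AccessRequest("alice", True, True, "payroll-db")))    # True
print(evaluate(AccessRequest("alice", True, True, "build-server")))  # False
print(evaluate(AccessRequest("bob", False, True, "build-server")))   # False
```

Note the contrast with the "house with no doors" model described earlier: here, passing the perimeter once buys nothing, because every request is re-evaluated against identity, device, and policy.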
Reference: Zero Trust Security Model, summarized from NIST SP 800-207
Now of course you’re probably wondering what this has to do with Attack Surface Management. Stay tuned – that’s coming in the next installment.