Never Fight on Terrain of Your Opponent's Choosing
Applying the indirect approach to defensive cybersecurity
All warfare is psychological warfare. The element of surprise, of the indirect approach, has proven over thousands of years of conflict to be one of the deciding factors in the bloody struggle between raw forces of power.
How can we translate this principle into defensive cybersecurity against real-world adversaries (especially when the military concept of “defense in depth” does not apply)?
First, as civilian defenders we are forbidden from engaging in offense, so we can never play the “surprise attack” card.
This drastically limits our ability to employ the principle of surprise and indirection in warfare.
However, there are still some obvious things we can do, and we can even discuss them publicly, because describing the indirection in broad terms does not reveal how or where it is actually applied.
You want your adversary confused, doubting, wondering what is real and what is not, wondering where they are, what they have access to.
You want to decrease their confidence and force them to spend time and money.
(This is the true measure of our job performance as defenders: how much money and time can we force our adversaries to spend?)
And you want to do so asymmetrically, spending as little as possible yourself while forcing your adversary to spend as much as possible.
In such a scenario, honeypots are incredibly effective.
A properly executed honeypot strategy can effectively gaslight your adversaries for days or weeks, getting them lost in plausibly real fake networks while giving you time to study their TTPs and early warning of their initial foothold in your systems.
Gaslighting is generally a bad thing, but if you have active nation-state adversaries trying to cause your employer harm, I think we can make an exception in this case. ;-)
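To make that a little more concrete, here is a minimal sketch of the low-interaction end of that spectrum: a decoy listener that presents a plausible SSH banner and logs every connection attempt as an early-warning signal. The port, banner, and log file are illustrative assumptions on my part, not a prescription for a real deployment (real honeypot frameworks do far more to look convincing).

```python
# Minimal low-interaction honeypot sketch: listen on a decoy port, present a
# plausible SSH banner, and log every connection attempt as an early-warning
# signal. Port, banner, and log path are illustrative, not a real setup.
import logging
import socketserver

LISTEN_PORT = 2222  # decoy port; a real deployment might use 22 on an otherwise unused host
BANNER = b"SSH-2.0-OpenSSH_8.9p1 Ubuntu-3ubuntu0.6\r\n"  # looks real enough for a first probe

logging.basicConfig(
    filename="honeypot.log",
    level=logging.INFO,
    format="%(asctime)s %(message)s",
)

class DecoySSHHandler(socketserver.BaseRequestHandler):
    def handle(self):
        src_ip, src_port = self.client_address
        logging.info("connection from %s:%d", src_ip, src_port)
        try:
            self.request.sendall(BANNER)
            # Capture whatever the client sends first (its own banner, a probe, etc.)
            data = self.request.recv(1024)
            if data:
                logging.info("first bytes from %s: %r", src_ip, data[:200])
        except OSError:
            pass  # the scanner hung up; the log entry is the point

if __name__ == "__main__":
    with socketserver.ThreadingTCPServer(("0.0.0.0", LISTEN_PORT), DecoySSHHandler) as srv:
        srv.serve_forever()
```

Even something this simple pays asymmetrically: it costs almost nothing to run, and every scanner or playbook that touches it burns its operator's time and tips you off.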
Front-line attackers are usually mouse jockeys running a playbook. The smart people are building tools and developing exploits, not clicky-clicky in a victim’s networks.
Befuzzle and discombobulate the mouse jockeys. “Where am I? What am I doing? Am I lost? Is any of this real?”
Again, we aren’t looking for perfect security; we want to force adversaries to spend time and money. And forcing the smart people to deal with befuddled mouse jockeys is an excellent, asymmetric use of our time as defenders.
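One cheap way to sow that kind of doubt (a sketch of one possible tactic, not a recipe) is a canary: a decoy credential, document, or URL that no legitimate user would ever touch, wired to alert the moment someone uses it. The path and alert mechanism below are hypothetical placeholders.

```python
# Hypothetical canary endpoint: an internal-looking URL planted inside decoy
# documents or fake credentials. No legitimate user ever requests it, so any
# hit is a high-confidence signal that someone is rummaging where they
# shouldn't be. Path and alerting are placeholders, not a real deployment.
import logging
from http.server import BaseHTTPRequestHandler, HTTPServer

CANARY_PATH = "/backup/finance-q3.xlsx"  # bait path referenced only in decoy material

logging.basicConfig(filename="canary.log", level=logging.WARNING,
                    format="%(asctime)s %(message)s")

class CanaryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == CANARY_PATH:
            # In practice this would page the SOC; here we just log loudly.
            logging.warning("CANARY TRIPPED by %s requesting %s",
                            self.client_address[0], self.path)
        self.send_response(404)  # look like nothing interesting lives here
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # silence default stderr access logging

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), CanaryHandler).serve_forever()
```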
Are there other ways to apply the indirect approach to pragmatic blue team defense? Maybe. Maybe not. And if I knew, would I tell you in this blog post? ;-)