[60] When it Rains it Pours
GM Readers!
Last week was a crazy week. Silicon Valley Bank, a top-20 bank, collapsed, marking the second-largest banking failure in US history. Over the last few days, this event has been covered widely; this essay brings the conversation up one level with some observations on complex systems.
Building Blocks:
🎨 Life in Color Relevant Essays: [Progress Mechanisms], [Why is Good Governance so Hard?], [The First Trough]
Over the last week, Silicon Valley Bank (SVB), one of the largest banks in the US, faced a bank run. SVB is one of the top banking partners to startups, VCs and the tech industry. Its collapse marks the second-largest bank failure in US history.
Given the size and position of SVB, some now worry about contagion effects, even in light of the government stepping in to guarantee deposits.
A lot of people have written about what happened (so I won’t focus much on that front). To get up to speed, here are some articles:
I will focus this piece on pulling the conversation up one level and covering some observations on the various forces and systems at play.
🕸 Complex Systems
Modern banking systems are complex systems. They have many moving parts and require different parts to work together in unison. In an interlinked system, when one part breaks, there are trickle-down effects on other parts of the system.
These trickle-down effects are not always easy to see.
In moments of crisis and failure, the human tendency is to isolate the failure to one or a few definitive causes. This kind of reasoning might work in a simple system, where things are more deterministic and the cause-and-effect relationships are clearer.
In complex systems, it’s not so clear cut and most of the cause-and-effect relationships exist in the messy middle. In SVB’s case, there were many things taking place all at once. (This article does a good job of highlighting a multitude of factors).
SVB’s context:
A low interest rate environment flipped suddenly through a series of interest rate hikes.
A lack of risk management practices (hedging) for the $90Bn+ of exposure SVB had to long-dated securities (see the sketch after this list).
The system did not address the risk that ~96% of deposits were uninsured, at a bank that indexed itself to industries (tech, startups, VC) that were already experiencing a bear market.
Markets were already spooked by the failure of Silvergate, a bank focused on the crypto industry.
And much more…
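On the interest rate point, the mechanics are simple bond math: when rates rise, the market value of long-dated, fixed-rate securities falls, and the longer the duration, the bigger the hit. Here is a minimal sketch of that first-order relationship. The portfolio size echoes the $90Bn figure above, but the duration and rate move are illustrative assumptions, not SVB's actual book:

```python
def mark_to_market_change(portfolio_value, duration_years, rate_change):
    """First-order bond math: dP ~= -D * dy * P (modified duration).
    Ignores convexity; good enough to see the direction and scale."""
    return -duration_years * rate_change * portfolio_value

# Illustrative assumptions -- not SVB's actual positions.
portfolio = 90e9    # $90Bn+ of long-dated securities (from the list above)
duration = 6.0      # assumed average duration, in years
rate_hike = 0.02    # a 200 bps rise in rates

change = mark_to_market_change(portfolio, duration, rate_hike)
print(f"Approximate paper loss: ${-change / 1e9:.1f}Bn")
# -> Approximate paper loss: $10.8Bn
```

Unhedged, a paper loss on that scale only becomes a realized loss when you are forced to sell, which is exactly what a deposit outflow forces.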
Any of the above causes risk in a system, but in a complex system, each element of risk has the potential to compound onto itself.
The combination of forces, situations, contexts and variables that underpin the “macro environment” created a tipping point that led to a downward spiral.
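A toy illustration of why stacked risks are so dangerous (the numbers here are made up): even if each risk were independent, the odds that at least one fires add up quickly, and in a complex system they are not independent; one firing raises the odds of the others.

```python
# Toy numbers: five independent risks, each with a 10% chance per year.
risks = [0.10] * 5

p_none = 1.0
for p in risks:
    p_none *= (1 - p)

print(f"P(at least one risk fires): {1 - p_none:.0%}")  # -> 41%

# And this is the optimistic case: in a complex system the risks are
# coupled (a rate shock spooks depositors, which forces asset sales,
# which deepens losses), so the real cascade probability is higher.
```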
The failure of SVB wasn’t caused by any one thing.
If just one thing could have caused a top 20 bank to fail in a matter of days, one would hope that there would have been a guardrail to prevent that one thing.
If everyone knows the one thing that could have caused this failure and the system didn’t have a guardrail to prevent it… then the system is more fragile than we’d all like to admit.
🗡 Double Edged Sword: Everything Happens Faster
We live in a world where everything happens faster and at increasing velocity. The world is more global, more digital, and more connected than ever.
Both the rate and the velocity of information flow are increasing. People can share “breaking news” on social media without needing to wait for traditional media outlets. And on social media, people follow up with commentary and analysis, giving all of us a live play-by-play of what is going on.
This knowledge is instantly shared for all of us to consume. In general, this is a good thing that brings more transparency into complex systems without having to depend on top-down information, analysis and thinking.
But not all narratives that form on social media are high quality and founded on facts. Some people take advantage of the situation to spread FUD (fear, uncertainty and doubt). In times of crises, there is a lot of misinformation. And we don’t all have high quality information diets.
Many complex systems are not structured to operate at a pace that mirrors how quickly things happen in the world today.
🤼 Individually Rational, Collectively Irrational
Bank runs illustrate how humans and individual companies operate. As humans, we might think in the macro but we operate in the micro.
We can all agree that a healthy and functioning system is good for everyone, but when push comes to shove, everyone protects themselves.
At the individual level, if you hear that others are withdrawing their money, the rational thing for you to do is also to go withdraw your money. But at the collective level, the best way to maintain stability in the system is for everyone to not withdraw their money.
What is rational at the individual level is irrational at the collective level.
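This dynamic can be made concrete with a toy threshold cascade (in the style of Granovetter’s threshold models; all parameters here are made up, and this is a sketch, not a model of SVB): each depositor withdraws once the share of withdrawals they observe crosses their personal panic threshold. A small spark of individually rational withdrawals can empty the whole bank.

```python
import random

def bank_run(n=1_000, panic_seed=0.03, mean_threshold=0.25, seed=42):
    """Toy threshold cascade: depositor i withdraws once the observed
    share of withdrawals exceeds their personal threshold t_i."""
    rng = random.Random(seed)
    # Heterogeneous panic thresholds -- some depositors spook easily.
    thresholds = [rng.uniform(0, 2 * mean_threshold) for _ in range(n)]
    out = set(rng.sample(range(n), int(panic_seed * n)))  # first movers

    changed = True
    while changed:
        changed = False
        share = len(out) / n
        for i in range(n):
            if i not in out and thresholds[i] <= share:
                out.add(i)  # individually rational: others are leaving
                changed = True
    return len(out) / n

print(f"No spark: {bank_run(panic_seed=0.00):.0%} withdrawn")  # -> 0%
print(f"3% spark: {bank_run(panic_seed=0.03):.0%} withdrawn")  # -> 100%
```

The flip side is visible in the same sketch: if no one sparks the run, the system is stable. That is why credible deposit guarantees work; they raise everyone’s panic threshold so the spark never catches.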
At heart this is a tragedy of the commons problem. To solve a tragedy of the commons problem, we need to agree on what collective action looks like during times of crisis.
Collective action requires bottom-up buy-in, but most of our world runs on top-down decision making.
This is about coordination (at scale).
🫂 Everything is a Coordination Problem
In a complex world, which is just a series of complex systems interacting with each other at increasing speed and velocity, coordination is the most important part of problem solving at scale.
The interconnectedness of our world needs to be critically factored into system level decision making, otherwise we risk unintended consequences.
On unintended consequences, a good friend once said: “if we ask an all-powerful AI to solve climate change, the most efficient thing for the AI to do is to get rid of all the humans. We forgot to define the constraints. And the broader question is: can we ever define all the constraints to prevent all the unintended consequences?”
In general, we are really bad at managing complex systems and solving complex problems at scale.
⛈ When it rains it pours
What it takes to prevent and solve these large-scale complex problems is often known.
The best practices and timeless wisdom all look the same.
They are things like: risk management, incentives to defend the system, healthy recurring maintenance of systems, etc.
None of these things are “rocket science” or especially original. They are fairly simple concepts, but simple is hard to implement in complex systems.
The old adage “when it rains, it pours” describes how, when one thing goes wrong, other things go wrong too. The metaphor is apt for downward spirals in systems.
Yes, when it rains, it pours.
You cannot control the rain, but you can have an umbrella ready.
All complex systems need umbrellas.