Digging Into Security Cost
As I talk through the practical application of the security cost framework with our members, we are consistently running into a common stumbling block I’d like to discuss.
As a reminder from my previous post on the topic, the cyber cost model considers four components.
The idea is that we seek to manage the entire pie, not just one slice; the optimal solution is the one that results in the smallest total. The challenge in evaluating this in a forward-looking manner, however, is that two of the components, the cost of incidents and security friction, carry a risk component. How do we know how much security incidents will cost next year? How do we know how much a change to control “x” will affect that cost? How do we know what friction a control, say an EDR solution, will impose on the business next year? How do we account for surprises like the CrowdStrike incident?
Often, the statement from the CIO or CISO I’m working with is, “There is no way I can accurately tell you how much friction or incidents will cost me next year.” I would argue the statement they are really making is, “There is no way I can PRECISELY tell you how much friction or incidents will cost me next year.” The challenge is coming up with a single precise number, and you’re right: you can’t. What you can do, however, is come up with an ACCURATE range of values that will represent that cost.
So many of us want to give absolute answers, but risk, in fact, defies absolutes. We feel there is an expectation that we can produce a single number to represent the incident or friction loss. In these situations, however, company executives do not expect precision; they expect accuracy. Suppose we estimate that cyber incident costs will most likely fall between $2M and $6.5M next year as things stand, and that a particular new control under evaluation would drop that to between $1M and $6M. That may well be an accurate prediction; it is simply not precise. It gives an accurate range of costs the business might experience next year, and that range can be evaluated against risk tolerance statements to make decisions.
This is exactly the approach public companies already use, and one that boards and executive teams are already comfortable with. Public companies that need to issue earnings, profit, and expense guidance already use ranges. You’ll never see a public company issue guidance like “We expect to earn $6,453,453,342.32 in revenue next year.” Why? Because that precise number is almost assuredly precisely wrong. Instead, a company’s guidance will state, “We expect to earn between $5 Billion and $8 Billion in revenue next year.” This statement represents an accurate range to the market, with the size of the range reflecting the confidence in the quality of the data that goes into the assessment.
We can do the same thing by identifying the factors that contribute to our incident or friction loss and applying the practice of calibrated estimation to come up with ranges for those factors. From there, a model such as FAIR can be used to estimate the expected range of overall security loss. The same approach used for security losses, be it FAIR or another model, can then be applied to the expected friction losses.
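To make that concrete, here is a minimal sketch of the idea in Python. It is not a full FAIR implementation: it simply draws incident frequency uniformly from a calibrated 90% interval, fits a lognormal distribution to a calibrated per-incident loss interval, and runs a Monte Carlo simulation of annual totals. Every input number here is hypothetical, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_annual_loss(freq_ci, loss_ci, years=1.0, trials=100_000):
    """FAIR-style Monte Carlo sketch: draw incident frequency and
    per-incident loss from calibrated 90% confidence intervals and
    return simulated annual loss totals."""
    z90 = 1.6449  # z-score bounding a two-sided 90% interval
    lo, hi = loss_ci
    # Fit a lognormal so its 5th/95th percentiles match the loss CI.
    mu = (np.log(lo) + np.log(hi)) / 2
    sigma = (np.log(hi) - np.log(lo)) / (2 * z90)
    totals = np.empty(trials)
    for i in range(trials):
        # This trial's underlying incident rate, then the incidents
        # actually experienced in the simulated period.
        lam = rng.uniform(*freq_ci) * years
        n = rng.poisson(lam)
        # Sum an independent lognormal loss for each incident.
        totals[i] = rng.lognormal(mu, sigma, n).sum()
    return totals

# Hypothetical calibrated inputs: 2-8 incidents/year, $250K-$2M each.
losses = simulate_annual_loss(freq_ci=(2, 8), loss_ci=(250_000, 2_000_000))
p5, p95 = np.percentile(losses, [5, 95])
print(f"Expected annual incident loss: ${p5:,.0f} to ${p95:,.0f}")
```

The output is exactly the kind of statement described above: a defensible range rather than a false point estimate, ready to be compared against risk tolerance.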
I will make a nod to precision: in this case, precision is reflected in the size of the range you provide and in the quality of the data that feeds your calibrated estimation process. The first time you run this analysis, you might find the range is unreasonably large. “We expect security loss between $0 and $1 Trillion” might be accurate, but it will not be seen as sufficiently precise. An overly wide range simply means that you must go back and find additional data or inputs to tighten the estimates. Odds are those data points exist somewhere; you might just need to talk to people you don’t normally include to get them (call that a happy side effect), and doing so will let you refine the estimate down to an acceptable range.
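Continuing the sketch above, you can see that effect directly: feed the simulation the very wide intervals a first pass might produce, then the tighter intervals you get after mining ticket history or insurer data. Both sets of numbers are, again, hypothetical.

```python
# Reuses simulate_annual_loss from the sketch above.
# First pass: SMEs will only commit to very wide intervals.
first_pass = simulate_annual_loss(freq_ci=(0, 25),
                                  loss_ci=(10_000, 50_000_000))
# After gathering data: the same model, tighter calibrated inputs.
refined = simulate_annual_loss(freq_ci=(2, 8),
                               loss_ci=(250_000, 2_000_000))
for label, runs in (("First pass", first_pass), ("Refined", refined)):
    p5, p95 = np.percentile(runs, [5, 95])
    print(f"{label}: ${p5:,.0f} to ${p95:,.0f}")
```

Nothing about the model changed between the two runs; only the quality of the inputs did, and the output range narrows accordingly.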
This process is well within the ability of most teams, perhaps with some business analysis work up front to locate the needed data and the SMEs who can generate good, calibrated estimates. Like all new skills and practices, it will be challenging and perhaps a bit slow at first, but it does not need to consume a great deal of time. A number of good risk quantification tools and models are available that can take those estimates and turn them into a high-quality risk prediction you can use. It is also critical to remember that you can (and should) refine those estimates as new data comes in or time passes. Much like a company will, in its Q2 earnings report, refine its forecast to say something like “We expect revenue to come in at the top end of our forecasted range,” you can update those numbers and statements as the year progresses.
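As a final illustration, the same sketch handles that mid-year refresh: keep the realized year-to-date losses as fact, simulate only the remaining months, and the full-year range narrows on its own. The figures here are, once more, purely hypothetical.

```python
# Continuing the sketch above. Suppose two incidents totalling $900K
# (hypothetical) have been realized by the end of Q2. Simulate only the
# remaining half-year and add the known losses: actuals progressively
# replace estimates, narrowing the full-year forecast.
observed_ytd = 900_000
h2 = simulate_annual_loss(freq_ci=(2, 8), loss_ci=(250_000, 2_000_000),
                          years=0.5)
p5, p95 = np.percentile(observed_ytd + h2, [5, 95])
print(f"Updated full-year range: ${p5:,.0f} to ${p95:,.0f}")
```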