Episode 32 — Use metrics to prioritize work and prove security program value (Task 18)
In this episode, we’re going to take the idea of metrics one step further and make them useful for decision making, not just for watching dashboards. When people first hear about security metrics, they often assume the purpose is to create reports that look impressive, but the real value is much more practical. Metrics help you decide what to do next when there are more problems than time, and they help you explain why your choices made sense to someone who was not in the room. They also help you prove that a security program is producing real outcomes, rather than just consuming effort and budget. By the end, you should be able to describe how to choose meaningful measurements, how to use them to prioritize work, and how to communicate value without exaggeration or fear.
Before we continue, a quick note: this audio course is a companion to our course companion books. The first book covers the exam and provides detailed guidance on how best to pass it. The second book is a Kindle-only eBook that contains 1,000 flashcards that can be used on your mobile device or Kindle. Check them both out at Cyber Author dot me, in the Bare Metal Study Guides Series.
A security program is the collection of policies, practices, people, and routines that keep systems safer over time. Beginners sometimes picture security as a set of locks you install once and never touch again, but security is more like maintaining a healthy body, where you track indicators, adjust behavior, and respond to changes. Metrics are those indicators, and they turn opinions into evidence. Without metrics, prioritization often becomes a loudest voice contest, where whoever is most worried or most senior gets attention first. With metrics, you can compare different risks using a shared scale, even if the risks are not identical. That does not make decisions automatic, but it makes them explainable, which is a huge step toward trust and consistency.
To use metrics to prioritize work, you need to understand what kind of work you are prioritizing. Some work is preventive, like improving access controls or tightening data handling rules. Some work is detective, like improving monitoring to catch misuse early. Some work is corrective, like fixing a vulnerability or cleaning up a risky configuration. Some work is responsive, like handling an incident. The trap for beginners is to treat all work as equally urgent, which leads to scattered effort and burnout. Metrics help you separate urgent from important, and they help you identify which investments will reduce the most risk per unit of time. In an A I security context, the same principle applies, because there are always more possible improvements than a team can realistically complete.
A useful starting point is to connect metrics to goals that matter, because measuring something that does not connect to a goal creates busywork. A common high level goal is reducing the chance of harm, such as data exposure, unsafe outputs, or unauthorized actions. Another goal is reducing the time it takes to detect and respond to problems, because speed often limits damage. A third goal is improving reliability and trust, because unreliable A I systems encourage people to bypass rules and create shadow usage. You can measure progress toward these goals with different kinds of metrics, and each kind answers a different question. Outcome metrics ask what changed in real risk or harm, output metrics ask what you produced or improved, and process metrics ask how efficiently you work and how consistently you follow good practices.
Outcome metrics are the most persuasive when you are proving value, but they are also the hardest to measure well. An outcome metric could be the reduction in confirmed security incidents tied to A I usage, or the reduction in the amount of sensitive data exposed through A I outputs. Another outcome might be fewer high severity policy violations or fewer repeated misuse attempts that reach risky stages. The challenge is that incidents are not always frequent, and a quiet quarter does not automatically mean the program worked. That is why outcome metrics should be interpreted with caution, and they should be supported by other evidence. Still, when you can measure outcomes honestly, they help everyone understand that security is about reducing real harm, not chasing perfect scores.
Output metrics describe what the security program delivered, and they are helpful for prioritizing because they show where effort is being spent. Examples include the number of A I systems brought under monitoring, the number of high risk data sources that received stronger controls, or the number of models that went through a security review before being deployed. Output metrics can also cover improvements like adding new detection rules, improving access approval steps, or updating policies to reduce risky use. These metrics are easier to gather than outcomes, but they can be misleading if you treat quantity as quality. If a team boasts about reviewing one hundred systems but the reviews were shallow, the number is not meaningful. The best output metrics include a quality indicator, like the percentage of reviews that found and fixed high impact issues, which links output back to risk reduction.
Process metrics describe how the program operates, and they often become the engine for prioritization. Two classic examples are Mean Time To Detect (M T T D), which measures how long it takes to notice a problem after it begins, and Mean Time To Respond (M T T R), which measures how long it takes to contain and resolve a problem after detection. From here on, we will refer to these as M T T D and M T T R. If you see that M T T D is high, you might prioritize better monitoring and alerting, because slow detection increases harm. If M T T R is high, you might prioritize clearer incident procedures, better coordination, or simpler containment steps. Process metrics also include things like how long approvals take, how often security reviews delay launches, and how many issues remain open beyond their target fix dates.
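If you want to picture how these two numbers might be computed in practice, here is a minimal sketch in Python. The incident records and field names are hypothetical, and real data would come from your ticketing or incident tracking system; the idea is simply that detection time runs from when a problem starts to when it is noticed, and response time runs from detection to resolution.

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records: when the problem began, when it was
# detected, and when it was resolved.
incidents = [
    {"started": "2024-03-01 08:00", "detected": "2024-03-02 10:00", "resolved": "2024-03-02 16:00"},
    {"started": "2024-03-10 14:00", "detected": "2024-03-10 15:30", "resolved": "2024-03-11 09:00"},
]

def hours_between(earlier, later):
    fmt = "%Y-%m-%d %H:%M"
    return (datetime.strptime(later, fmt) - datetime.strptime(earlier, fmt)).total_seconds() / 3600

# Mean Time To Detect: average hours from start of problem to detection.
mttd = mean(hours_between(i["started"], i["detected"]) for i in incidents)

# Mean Time To Respond: average hours from detection to resolution.
mttr = mean(hours_between(i["detected"], i["resolved"]) for i in incidents)

print(f"MTTD: {mttd:.1f} hours, MTTR: {mttr:.1f} hours")
```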
A key concept for beginners is that prioritization is not just ranking tasks; it is choosing tradeoffs using a consistent method. One simple method is to combine likelihood and impact, where likelihood is how probable a problem is, and impact is how bad it would be if it happened. Metrics help you estimate both sides more objectively. For likelihood, you might look at how often a certain type of misuse attempt occurs, how frequently a control fails, or how often a model produces policy violations. For impact, you might measure how much sensitive data is involved, how many users are affected, or whether the system can take actions that change real world outcomes. When you use metrics this way, you stop arguing about feelings and start discussing evidence, which helps teams make better choices under pressure.
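As a rough illustration of that likelihood times impact idea, here is a small sketch with invented risks scored on a one-to-five scale. The names and numbers are made up; the point is that a consistent scoring method lets you rank very different risks on a single shared scale.

```python
# Hypothetical risks scored 1-5 for likelihood and impact. Likelihood might
# come from observed frequency of misuse attempts; impact from data
# sensitivity, users affected, and whether real-world actions are possible.
risks = [
    {"name": "Sensitive data pasted into prompts", "likelihood": 4, "impact": 3},
    {"name": "Unauthorized access to the A I system", "likelihood": 2, "impact": 5},
    {"name": "Policy-violating outputs reaching users", "likelihood": 3, "impact": 2},
]

# A simple consistent method: risk score = likelihood times impact.
for risk in sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
    score = risk["likelihood"] * risk["impact"]
    print(f"{score:>2}  {risk['name']}")
```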
It is also important to recognize that not all metrics are equally useful for proving value, because some can be gamed or misunderstood. A vanity metric is a number that looks impressive but does not meaningfully connect to safety, such as the total number of alerts generated without showing whether they were relevant. Another risky metric is one that encourages the wrong behavior, like rewarding teams for closing tickets quickly even if they close them incorrectly. Good metrics encourage good decisions, like rewarding reduction in repeated issues or rewarding improvements that prevent incidents. In an A I context, a good metric might track the percentage of high risk use cases that have strong guardrails and monitoring, rather than counting how many documents were written. The goal is to select measurements that make it easier to do the right thing, not measurements that create pressure to look good.
To prove security program value, you have to translate technical indicators into outcomes that non-specialists care about. Many leaders do not want to hear a long list of vulnerabilities, but they do want to know whether the organization is safer, whether risk is under control, and whether the business can keep moving. Metrics let you tell that story clearly. For example, instead of saying the team improved monitoring, you can say detection time dropped from days to hours, which reduces the window in which harm can occur. Instead of saying policies were updated, you can show that policy violations dropped, or that high risk data exposure events became rarer. The story becomes: we invested effort in specific improvements, and here is the evidence that risk decreased or resilience increased.
A helpful way to frame value is to connect metrics to the life cycle of a security problem: prevent, detect, respond, and learn. Preventive value shows up when risky behavior becomes harder, such as fewer unauthorized access attempts succeeding or fewer sensitive inputs reaching the model. Detective value shows up when early signals become clearer, such as more accurate alerts and fewer false alarms that waste time. Responsive value shows up when containment and recovery happen faster, reducing user harm and operational disruption. Learning value shows up when repeated issues decrease over time, which is one of the strongest signs that a program is maturing. If you can show improvement across these stages, you demonstrate that security is not just reacting, it is becoming more effective and more efficient.
Metrics also help you prioritize by revealing bottlenecks, which are points where work gets stuck and risk accumulates. Suppose you monitor how many security issues remain open past their target fix date, and you notice a growing backlog in A I related issues. That might indicate the team needs clearer ownership, better tooling, or simpler remediation pathways. If you track how often A I deployments happen without review, that might indicate the process is too slow or too unclear, so people bypass it. If you track how often alerts lead to real action, you might discover that alerts are noisy and not trusted, meaning you should invest in tuning and clarity rather than adding more detections. Bottleneck metrics are powerful because they show where a small improvement can unlock a big gain.
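To make the backlog idea concrete, here is a minimal sketch that counts issues open past their target fix date. The issue records and field names are hypothetical, and a real program would pull this from its issue tracker, but the calculation itself is just a comparison against today's date.

```python
from datetime import date

# Hypothetical open issues with target fix dates, as might be exported
# from an issue tracker.
open_issues = [
    {"id": "AI-101", "target_fix": date(2024, 4, 1), "area": "A I"},
    {"id": "AI-117", "target_fix": date(2024, 5, 15), "area": "A I"},
    {"id": "NET-042", "target_fix": date(2024, 6, 30), "area": "network"},
]

today = date(2024, 6, 1)

# Bottleneck metric: issues still open past their target fix date,
# overall and in the A I area specifically.
overdue = [i for i in open_issues if i["target_fix"] < today]
overdue_ai = [i for i in overdue if i["area"] == "A I"]

print(f"Overdue issues: {len(overdue)} total, {len(overdue_ai)} in A I")
```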
Another beginner friendly insight is that metrics must be comparable over time to be meaningful. If you change the definition of a metric every month, you cannot tell whether you improved or simply changed counting rules. This is why consistent definitions matter, like what counts as a policy violation, what counts as a confirmed incident, or what counts as a high severity issue. It is also why you often track both absolute numbers and normalized numbers, such as per one thousand requests, because the system may be growing. If request volume doubles, you do not want to panic just because alerts doubled, and you also do not want to relax just because incident counts stayed flat. Normalized metrics help you see whether risk per unit of activity is increasing or decreasing.
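Here is a short worked example of normalization, with invented numbers. If alerts double but request volume also doubles, the rate per one thousand requests is unchanged, which is the signal you actually care about.

```python
# Hypothetical monthly figures: raw alert counts alongside request volume.
months = [
    {"month": "April", "alerts": 120, "requests": 400_000},
    {"month": "May",   "alerts": 240, "requests": 800_000},
]

# Normalized metric: alerts per one thousand requests.
for m in months:
    rate = m["alerts"] / (m["requests"] / 1000)
    print(f"{m['month']}: {m['alerts']} alerts, {rate:.2f} per 1,000 requests")
# Both months come out at 0.30 per 1,000 requests, so risk per unit of
# activity has not increased even though the raw alert count doubled.
```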
It is worth addressing the fear that metrics will be used to punish people, because that fear can make teams avoid measurement. A healthy security program uses metrics to improve systems, not to blame individuals for every spike. When metrics are used as a weapon, people will hide problems, redefine categories, or avoid reporting, which makes security weaker. A healthier approach is to treat metrics as feedback, like a coach uses statistics to improve a team. You can still hold teams accountable, but the focus is on fixing root causes, removing repeated failure patterns, and improving reliability. In A I security, this matters even more because many problems are shared across teams, like data quality, integration complexity, and unclear usage rules.
To make this concrete, imagine you have three possible projects: improving detection of sensitive data in prompts, tightening access approvals for the A I system, and updating training so students stop pasting private data into the tool. Without metrics, these might all feel equally urgent. With metrics, you might discover that sensitive data inputs are common, making detection improvements high priority, or you might discover that most issues come from a small set of accounts, making access control improvements more urgent. You might also discover that violations are mostly accidental, making training and clearer guidance a quick win that reduces risk immediately. The point is not that metrics magically decide for you, but that they give you a clear reason for your choice, and that reason can be shared and defended.
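One way to turn that comparison into a defensible ranking is to estimate, for each candidate project, how much of the observed problem it would address and how much effort it would take. The sketch below uses entirely invented figures; in practice the estimates would come from your own incident and violation metrics.

```python
# Hypothetical candidate projects, each with an estimated number of incidents
# prevented per quarter and a rough effort estimate in weeks.
projects = [
    {"name": "Detect sensitive data in prompts", "incidents_prevented": 30, "effort_weeks": 6},
    {"name": "Tighten access approvals",         "incidents_prevented": 12, "effort_weeks": 2},
    {"name": "Update training and guidance",     "incidents_prevented": 20, "effort_weeks": 1},
]

# Rank by estimated risk reduction per week of effort, so limited time goes
# where it buys the most improvement.
for p in sorted(projects, key=lambda p: p["incidents_prevented"] / p["effort_weeks"], reverse=True):
    value = p["incidents_prevented"] / p["effort_weeks"]
    print(f"{value:5.1f} incidents prevented per week of effort  {p['name']}")
```

Under these made-up numbers, the training update ranks first because it is the cheapest improvement relative to the risk it removes, which matches the quick-win reasoning described above.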
By the end of this lesson, you should see metrics as both a compass and a receipt. They guide you toward the work that reduces the most risk and removes the most friction, which is how you prioritize in a world of limited time. They also document what changed, which is how you prove value without hype. The most effective security programs measure outcomes when possible, support them with strong output and process metrics, and keep definitions consistent so progress is real. When you can say we reduced detection time, reduced policy violations, and reduced repeated issues in A I systems, you are not just describing activity, you are demonstrating maturity. That is what metrics are for: smarter choices now, and credible proof later.