Episode 88 — Final rapid recap: remember the three domains and all 22 tasks (Tasks 1–22)
In this episode, we’re going to bring the whole certification map back into one clear mental picture so you can recall it under pressure without feeling like you’re juggling twenty-two separate facts. The trick to remembering the three domains and all the tasks is to stop thinking of them as a pile of requirements and start thinking of them as one continuous story. That story begins with governance, because someone has to decide what safe and acceptable use looks like in the first place. It moves into risk, because once you know what matters, you must identify what can go wrong and decide how to treat it. Then it lands in secure architecture and operations, because your decisions only matter if the system is built, controlled, and monitored in a way that makes safety repeatable. If you can hold that story in your head, you can usually place any task where it belongs without memorizing a rigid list.
Before we continue, a quick note: this audio course is a companion to our course companion books. The first book covers the exam itself and explains in detail how best to pass it. The second is a Kindle-only eBook containing 1,000 flashcards you can use on your mobile device or Kindle. Check them both out at Cyber Author dot me, in the Bare Metal Study Guides Series.
Domain 1 is where you create the conditions for trust by making A I a governed capability instead of a collection of experiments. The most important memory here is that governance is not paperwork; it is decision-making with accountability. Domain 1 tasks focus on setting direction, defining roles, creating policies that actually map to reality, and ensuring privacy and ethics are not treated as afterthoughts. When you see a scenario that mentions leaders, auditors, accountability, scope, or acceptable use, your mind should move to Domain 1 first, because the system is being defined before it is built. Domain 1 is also where you decide how oversight works, who approves changes, and what evidence must exist. If the organization cannot answer who owns the system and what rules it follows, everything else becomes reactive and fragile. That is why Domain 1 anchors the entire exam.
A critical part of Domain 1 memory is how privacy and ethics live inside governance rather than sitting off to the side. Privacy is about managing personal information across inputs, outputs, and user access, which means you must think about what people will type, what the model might reveal, and who can see stored prompts and logs. Ethical guardrails are about reducing harm while still meeting business goals, which means you define boundaries, fairness expectations, and safe behavior rather than assuming the model will act responsibly on its own. These topics can feel soft to beginners, but they are operational, because they turn into rules about data minimization, retention, access restrictions, and review processes. When you see scenarios involving customer trust, sensitive information, or harmful outputs, remember that privacy and ethics are not just compliance concerns. They are design requirements that shape architecture, controls, and oversight across the entire life cycle.
Domain 2 is where you take the governance boundaries and translate them into risk language that can drive real decisions. The easiest way to remember Domain 2 is to keep the risk lifecycle loop in your head: identify risk, assess it, treat it, and monitor it as conditions change. This domain is where you learn to recognize threats, evaluate what could go wrong, and choose controls that reduce the most important risks first. Domain 2 is also where you practice thinking in evidence instead of assumptions, because risk management is only meaningful when you can show why you believe a control is working. When you see scenarios mentioning threats, testing, assessments, or tradeoffs, you are standing inside Domain 2. The exam often tests whether you can choose the right risk action, such as when to accept risk, when to reduce it, and when to avoid a risky use entirely.
Threat thinking in Domain 2 becomes easier when you stop trying to memorize threat names and instead anchor on assets and pathways. If the asset is training data, you ask how it could be leaked, poisoned, or accessed improperly. If the asset is a model service, you ask how it could be misused, overwhelmed, or manipulated through inputs. If the asset is user interaction, you ask what people might reveal in prompts, what the system might expose in outputs, and how logs might quietly become a sensitive dataset. Domain 2 expects you to connect these assets to realistic threat behavior, including accidents and misuse, not only classic attackers. It also expects you to understand that threats evolve, so monitoring and review are built into the risk lifecycle. When you remember this asset-to-threat relationship, you can reason through scenarios even when the language is unfamiliar.
Testing and evidence are also central to Domain 2, and you can remember them as the bridge between risk theory and operational truth. Testing is how you find weaknesses before users do, and evidence is how you prove controls are operating rather than merely claimed. This is where audits, assessments, and structured validation methods show up, not as one-time hurdles, but as recurring checks that keep risk decisions grounded. Domain 2 also emphasizes vendors because vendors extend your system beyond your walls. Monitoring vendor controls through evidence, updates, and incident notifications keeps you aware, while verifying vendor security through audits, tests, and contract enforcement gives you leverage to demand fixes. When you see scenarios involving third parties, procurement, external services, or incident notifications from a provider, think Domain 2 immediately, because vendor risk is a risk lifecycle problem that never goes away after onboarding.
Domain 3 is where the system becomes tangible, because it focuses on secure A I technologies through architecture and controls. The best way to remember Domain 3 is that architecture is the map, and controls are the guardrails placed where the map shows risk is concentrated. This domain includes designing clear trust boundaries and data flows so you can see where information crosses from trusted to untrusted zones. It includes reducing attack surface by making smart deployment and integration choices so you do not create unnecessary pathways for misuse. It includes implementing protections for identity, secrets, and isolation so access is controlled, keys are protected, and compromises are contained. Domain 3 also includes integrating A I into enterprise architecture without shadow systems, which means aligning with identity, network, and data standards rather than creating an A I island that bypasses normal controls. When you see scenarios about system design, data paths, integrations, or enterprise standards, Domain 3 is the home base.
A powerful memory hook for Domain 3 is to picture the system as a set of boundaries where data moves and decisions happen. Every boundary is a place where you ask, who is allowed to cross, what must be true before crossing, and what evidence will show crossing happened safely. Trust boundaries and data flows make those questions explicit, which is why they come first. Attack surface reduction is the next natural idea, because the fewer boundaries you expose to untrusted users and the fewer integrations you allow without need, the easier it is to protect what remains. Identity, secrets, and isolation then become the practical protections that hold the system together under stress, because identity decides who can do what, secrets prove identity, and isolation prevents one failure from spreading. If you can narrate those concepts in order, you can usually place any Domain 3 task correctly without forcing memorization.
Now, to remember all 22 tasks without listing them, think of them as a journey through time and responsibility. The early tasks are about deciding and defining, which is governance, ethics, privacy, and accountability. The middle tasks are about measuring and proving, which is risk lifecycle thinking, threat awareness, testing, and vendor verification. The later tasks are about building and operating, which is architecture, data pipeline controls, model lifecycle controls, monitoring, and incident response. This is not a strict domain boundary, because tasks overlap, but it is a reliable mental flow. When you read a scenario, ask yourself what phase of the journey is most urgent. Is the organization still deciding what is acceptable, meaning governance and privacy choices must be made? Is the organization deciding what could go wrong and what to prioritize, meaning risk and testing are the right focus? Or is the organization already running the system, meaning architecture, monitoring, and response must carry the load?
Task numbers can still help, but only if you treat them as signposts rather than as a memorized list. Tasks 1 through 22 are not twenty-two separate worlds; they are recurring responsibilities that reappear as the system grows. A project may start with governance tasks, move into risk tasks, and then land in architecture tasks, but it will loop back when changes occur or incidents happen. This is why the exam often uses scenarios that blend domains, because real life blends domains. If you practice identifying the primary need, you will usually pick the correct task even when multiple tasks are relevant. That primary need is often revealed by the strongest risk signal, such as a privacy risk, a vendor incident, a sudden model behavior shift, or an audit request. Your goal is to recognize the signal and choose the task that stabilizes the situation first.
A very practical cross-task memory is to link controls to ownership and evidence, because that relationship connects nearly every domain. A control that cannot be owned will decay, and a control that cannot be evidenced cannot be trusted. This applies to identity restrictions, vendor oversight, data pipeline protections, monitoring rules, and oversight workflows. It also applies to explainability and audit alignment, because defensibility is evidence plus a coherent story. When you are unsure which task is being tested, ask whether the scenario is really about building a control, proving a control, or responding when a control failed. Building points toward architecture and lifecycle controls. Proving points toward validation, audits, and evidence collection. Responding points toward incident response coordination and operational monitoring. That simple three-way question often snaps the scenario into focus quickly.
Another memory pattern that helps under exam pressure is to separate prevention, detection, and response, because Tasks 1 through 22 cover all three even when they do not use those words. Prevention includes governance boundaries, safe architecture, minimized attack surface, and controlled data pipelines. Detection includes continuous monitoring of system behavior, control health, and security signals. Response includes connecting alerts to incident response so containment, investigation, and recovery happen quickly and consistently. Many scenarios are designed to test whether you jump to prevention when the situation already requires response, or whether you try to respond when the real issue is missing prevention. When an alert is already firing or harm has already occurred, response tasks should lead. When the system is being planned or expanded, prevention and governance tasks should lead. Detection tasks are the glue that keeps prevention and response connected over time.
Finally, remember that the exam is not asking you to be a tool operator; it is asking you to be a manager of safety, risk, and controls across a living A I system. That means you must be comfortable with tradeoffs, because safety cannot be absolute and usefulness cannot be unlimited. Risk-based human oversight exists because reviewing everything is impossible, but reviewing nothing is reckless. Explainability exists because decisions must be defensible, but explanations must also respect privacy and access boundaries. Monitoring exists because drift and misuse will happen, but monitoring must be tuned so people do not drown in noise. Vendor oversight exists because third parties will be involved, but oversight must be enforceable through evidence and contracts. If you keep these tradeoff pairs in mind, you will sound like someone who understands the field rather than someone reciting definitions.
To close, the fastest way to remember the three domains and all 22 tasks is to hold a single story: govern first so purpose and boundaries are clear, manage risk next so threats and priorities are evidence-driven, and secure the technology last so architecture and controls make safety repeatable in the real world. Domain 1 is about accountability, privacy, and ethics that set the rules of use. Domain 2 is about the risk lifecycle, threats, testing, and vendors that turn rules into prioritized safeguards. Domain 3 is about architecture, data protection, and operational discipline that make safeguards real and sustainable. Across Tasks 1 through 22, your job is to choose the right responsibility at the right moment, then back it with ownership and evidence so it survives change. When you can do that, you are not merely recalling tasks; you are thinking like the role the certification is trying to validate.