Monitoring and Detection
Monitoring is the ongoing observation of an automated system in operation. A system that runs without human attention runs without meaningful oversight. Monitoring must be active: capable of detecting problems and connected to people with the authority to act on what they observe. Passive monitoring (logging without review) is not a governance control.
Escalation procedures are the defined paths through which concerns about an automated system are raised, reviewed and resolved. Having a path that people know about and are willing to use is different from assuming concerns will surface naturally. Escalation procedures must be documented, communicated and tested to constitute a governance control.
Human review means a person examining an automated decision with sufficient information, authority and independence to reach a different conclusion. A person who receives an output and approves it without scrutiny is not reviewing it. The standard requires that the reviewer can meaningfully engage with the decision, not merely observe it.
Intervention authority is the formally assigned power to act on what monitoring reveals — to override a decision, halt a process, change a system's parameters or stop it entirely. Oversight without the authority to intervene is observation. Governance requires both.
Operational monitoring is the continuous observation of a live system during normal operation — distinct from periodic audit or review. It detects problems in real time rather than after the fact. The people doing the monitoring must have both the information to identify problems and the authority to respond to them.
Automated systems will encounter cases they were not designed for. Exception handling is the defined process for identifying, routing and resolving those cases. Without a functioning exception handling process, edge cases either fall through governance or are processed by a system operating outside its reliable range.
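A minimal sketch of what that routing can look like, assuming the system exposes a confidence score and that a confidence floor marks the edge of its reliable range. The threshold, field names and queue are illustrative assumptions, not drawn from any particular system:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Decision:
    case_id: str
    outcome: str
    confidence: float  # the model's own confidence score, assumed available

@dataclass
class ExceptionQueue:
    """Holds cases the automated system should not decide on its own."""
    items: list = field(default_factory=list)

    def route(self, decision: Decision, reason: str) -> None:
        # Record why the case left the automated path, so the
        # exception itself is auditable.
        self.items.append({"case": decision.case_id, "reason": reason})

CONFIDENCE_FLOOR = 0.85  # illustrative edge of the system's reliable range

def handle(decision: Decision, queue: ExceptionQueue) -> Optional[str]:
    """Return the automated outcome, or None if routed to a human."""
    if decision.confidence < CONFIDENCE_FLOOR:
        queue.route(decision, f"confidence {decision.confidence:.2f} below floor")
        return None  # a named human owner resolves the case from the queue
    return decision.outcome
```

The design point is that the exception path is explicit and recorded, not a silent default: every case either receives an automated outcome or leaves an auditable trace of why it did not.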
Alert systems are the mechanisms that trigger human attention when an automated system behaves unexpectedly or breaches defined thresholds. An alert that nobody receives, nobody acts on, or that fires so frequently it is ignored is not a governance control. Alert systems require defined response procedures and named responders.
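As an illustration, a threshold alert with a named responder and a cooldown against alert fatigue might look like the following sketch. The responder address, alert type and cooldown period are assumptions for the example:

```python
import time

# Illustrative assumptions: one named responder per alert type, and a
# cooldown so the same alert cannot fire so often it gets ignored.
RESPONDERS = {"error_rate": "duty-officer@example.org"}
COOLDOWN_SECONDS = 600
_last_fired: dict[str, float] = {}

def raise_alert(alert_type: str, value: float, threshold: float) -> bool:
    """Fire an alert to its named responder; return True if delivered."""
    if value <= threshold:
        return False
    now = time.monotonic()
    if now - _last_fired.get(alert_type, float("-inf")) < COOLDOWN_SECONDS:
        return False  # suppressed: repeats inside the cooldown add noise, not signal
    _last_fired[alert_type] = now
    responder = RESPONDERS[alert_type]  # an alert without a recipient is not a control
    print(f"ALERT {alert_type}: {value} breached {threshold}; notifying {responder}")
    return True
```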
System testing is the structured verification that an automated system performs as expected — before deployment and after any significant change. Testing governance requires that tests are documented, results are retained, failures are investigated and the people who approved deployment can demonstrate what was tested and what was found.
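A sketch of a deployment gate under these requirements, assuming a pytest suite and a JSON-lines file for retained results; both are illustrative choices, not a prescribed toolchain:

```python
import json
import subprocess
import sys
from datetime import datetime, timezone

def gated_deploy(results_path: str) -> None:
    """Run the test suite and retain the results before deployment proceeds."""
    run = subprocess.run(
        [sys.executable, "-m", "pytest", "--tb=short"],
        capture_output=True, text=True,
    )
    record = {
        "ran_at": datetime.now(timezone.utc).isoformat(),
        "passed": run.returncode == 0,
        # Retained so the people who approved deployment can later
        # demonstrate what was tested and what was found.
        "output": run.stdout[-2000:],
    }
    with open(results_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    if run.returncode != 0:
        raise RuntimeError("tests failed; deployment blocked pending investigation")
```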
Operational reviews are structured, periodic examinations of a live system's performance, governance and risk profile. They are distinct from continuous monitoring and from one-time deployment testing. Regular review with documented findings and traceable responses is a standard expectation in regulated sectors.
Changes to automated systems (new data, updated models, changed parameters, modified integrations) all alter what the system does. Change management requires that those alterations are governed with the same scrutiny as the original deployment. Unrecorded changes are one of the most common sources of accountability failure in AI systems.
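One way to make changes recordable is an append-only change log written before the change goes live. The field names below are illustrative:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass(frozen=True)
class ChangeRecord:
    system: str
    change: str       # what was altered: data, model, parameter, integration
    approved_by: str  # a named person, not a team alias
    timestamp: str

def record_change(log_path: str, system: str, change: str, approved_by: str) -> None:
    """Append a change record before the change takes effect."""
    entry = ChangeRecord(
        system=system,
        change=change,
        approved_by=approved_by,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    # Append-only: the log grows, it is never rewritten.
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(asdict(entry)) + "\n")
```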
The ability to stop an automated system quickly, safely and completely is a governance requirement. System shutdown must be technically possible, operationally tested and within the authority of a named person to execute without needing approval from the system's own chain of command. A system that cannot be stopped is not under human control.
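A minimal sketch of such a stop mechanism, assuming a single process whose work loop checks the switch; the names are illustrative:

```python
import threading

class KillSwitch:
    """A stop mechanism outside the system's own approval chain."""

    def __init__(self) -> None:
        self._stopped = threading.Event()

    def stop(self, authorised_by: str) -> None:
        # No sign-off loop by design: the named holder of shutdown
        # authority acts directly, and the act itself is recorded.
        print(f"shutdown executed by {authorised_by}")
        self._stopped.set()

    @property
    def stopped(self) -> bool:
        return self._stopped.is_set()

switch = KillSwitch()

def process_case(case: str) -> None:
    # Every unit of work checks the switch before proceeding.
    if switch.stopped:
        raise RuntimeError("system halted; route work to the manual fallback")
    ...  # normal automated processing
```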
Incident escalation is the path a significant system failure travels from first identification to formal organisational response. It requires a defined trigger, a named recipient, a documented process and a timeframe. Systems that rely on informal escalation (someone telling someone else) have no meaningful incident governance.
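Those four elements can be held as explicit data rather than left to memory. A sketch, with an illustrative trigger, recipient, procedure reference and timeframe:

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass(frozen=True)
class EscalationRule:
    trigger: str               # what counts as a significant failure
    recipient: str             # a named person, not "the team"
    procedure_ref: str         # pointer to the documented process
    respond_within: timedelta  # the defined timeframe

RULES = [
    EscalationRule(
        trigger="decision_error_rate > 5%",
        recipient="head.of.operations@example.org",
        procedure_ref="IR-PROC-012",
        respond_within=timedelta(hours=1),
    ),
]
```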
Human override is the ability of a named person with appropriate authority to reverse or change a specific decision made by an automated system. Three conditions make it real: the person must have the authority to override, the information to evaluate the decision and the practical ability to act before the decision takes effect.
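A sketch of those three conditions as explicit checks, assuming the decision record carries its disclosed inputs and a timezone-aware effective time; the field names are illustrative:

```python
from datetime import datetime, timezone

def override(decision: dict, reviewer: dict, new_outcome: str) -> dict:
    """Apply a human override only if all three conditions hold."""
    # 1. Authority: the reviewer formally holds the override role.
    if "override" not in reviewer["roles"]:
        raise PermissionError(f"{reviewer['name']} lacks override authority")
    # 2. Information: the inputs the system decided on are available.
    if not decision.get("inputs_disclosed"):
        raise ValueError("decision inputs not available to the reviewer")
    # 3. Practical ability: the decision has not yet taken effect.
    if datetime.now(timezone.utc) >= decision["effective_at"]:
        raise ValueError("decision already in effect; override window closed")
    decision["outcome"] = new_outcome
    decision["overridden_by"] = reviewer["name"]  # attributed, not anonymous
    return decision
```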
Manual fallback is the defined, tested capability to continue operating by human means when an automated system fails, is suspended or must be stopped. Without it, system shutdown is not a governance control but an operational crisis: an authority that exists on paper and cannot be exercised in practice.
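A minimal sketch of such a fallback path, assuming a manual queue that trained staff already work; the names and the simulated failure are illustrative:

```python
from queue import Queue

manual_queue: Queue = Queue()  # worked by trained staff, assumed to exist

def automated_decision(case: dict) -> str:
    raise RuntimeError("model unavailable")  # simulate an automated failure

def decide(case: dict) -> str:
    """Prefer the automated path; degrade to the defined manual process."""
    try:
        return automated_decision(case)
    except Exception as reason:
        # Fallback is a routing decision, not an improvisation: the case
        # goes to a queue that a tested manual process already serves.
        manual_queue.put({"case": case, "reason": str(reason)})
        return "queued_for_manual_processing"
```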
Access control governs who can interact with an automated system — who can read its outputs, change its parameters, retrain its model, or override its decisions. Without defined and enforced access controls, actions cannot be attributed to individuals and accountability cannot be established.
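One common pattern is a deny-by-default permission check that also writes an attribution record. The role map and action names below are assumptions for the example:

```python
import functools

# Illustrative role map; the actions and roles are assumptions.
PERMISSIONS = {
    "read_outputs": {"analyst", "operator"},
    "change_parameters": {"system_owner"},
    "override_decision": {"reviewer"},
}

def requires(action: str):
    """Deny by default, and log who did what so actions stay attributable."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(user: dict, *args, **kwargs):
            if not PERMISSIONS.get(action, set()) & set(user["roles"]):
                raise PermissionError(f"{user['name']} may not {action}")
            print(f"audit: {user['name']} performed {action}")  # attribution record
            return fn(user, *args, **kwargs)
        return wrapper
    return decorator

@requires("change_parameters")
def set_threshold(user: dict, value: float) -> None:
    ...  # apply the new parameter

# Example use: set_threshold({"name": "a.khan", "roles": ["system_owner"]}, 0.85)
```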
When a system produces harmful outputs or fails, the organisation must reconstruct what happened, limit the harm, meet reporting obligations and preserve evidence, all at the same time and under pressure. Incident response is the prepared capability that makes that possible.
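A sketch of the evidence-preservation piece of that capability, assuming incident artefacts are files that can be copied aside and fingerprinted before anything changes; the paths and field names are illustrative:

```python
import hashlib
import json
import shutil
from datetime import datetime, timezone
from pathlib import Path

def preserve_evidence(incident_id: str, artefacts: list[str], vault: str) -> dict:
    """Copy incident artefacts aside and fingerprint them immediately."""
    dest = Path(vault) / incident_id
    dest.mkdir(parents=True, exist_ok=True)
    manifest = {
        "incident": incident_id,
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "files": {},
    }
    for path in artefacts:
        copied = shutil.copy2(path, dest)
        digest = hashlib.sha256(Path(copied).read_bytes()).hexdigest()
        manifest["files"][path] = digest  # the hash shows the copy was not altered
    (dest / "manifest.json").write_text(json.dumps(manifest, indent=2))
    return manifest
```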