Scope & Limits

Every automated system has a defined purpose, a set of inputs, and a range of outputs. Before any governance conversation can begin, the system itself must be understood. That means understanding what it was designed to do, what data it relies on, how it learns, where its limits lie and what it produces. Without that foundation, accountability has nothing to attach to.

System Properties explains the vocabulary that makes that description possible. Concepts such as intended use, training data, dataset bias, system scope, model performance and explainability are not technical abstractions. They are the terms that appear in regulatory guidance, board discussions and legal proceedings when automated decisions are examined. Understanding them is the first step in governing the systems they describe.

System Scope, Purpose and Limits

SP-01 - Automated Decision Making

Automated decision making occurs when a system produces an output that affects a person — a credit decision, a risk score, a content removal — without a human making that specific choice. Understanding what constitutes an automated decision is the starting point for every governance, legal and accountability obligation that follows.

SP-02 - Training Data

Training data is what a system learned from. The patterns, gaps, errors and biases in that data are built into the system's outputs. Governance of automated decisions cannot begin without understanding what the system was trained on, where that data came from, and what it may have embedded.

SP-03 - System Limits

Every automated system has a range within which it performs reliably and conditions under which it should not be used. System limits define those boundaries. Deploying a system outside its limits — whether through scope creep, commercial pressure or simple inattention — is a governance failure that regulators and courts examine closely.

SP-04 - Data Sources

The origin of the data a system uses determines the reliability and legitimacy of the decisions it produces. Data sources must be identified, documented and assessed for quality, legality and potential bias. An organisation that cannot account for where its system's data came from cannot account for what the system does.

SP-05 - Data Quality

Poor quality data produces unreliable decisions regardless of the sophistication of the system processing it. Data quality covers accuracy, completeness, consistency and currency. It is a governance obligation — not just a technical concern — because decisions made on poor data expose organisations to legal and regulatory challenge.
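
The four dimensions above can be enforced mechanically at the point of ingestion. The sketch below is illustrative only: the field names, the one-year currency window and the issue codes are hypothetical policy choices, not prescribed values.

```python
from datetime import date

# Hypothetical schema for a credit-decision record.
REQUIRED_FIELDS = {"applicant_id", "income", "postcode", "record_date"}

def quality_issues(record: dict, today: date, max_age_days: int = 365) -> list[str]:
    """Return a list of quality problems; an empty list means the record passes."""
    issues = []
    # Completeness: every required field must be present and non-empty.
    for field in REQUIRED_FIELDS:
        if record.get(field) in (None, ""):
            issues.append(f"missing:{field}")
    # Accuracy: values must be plausible on their face.
    income = record.get("income")
    if isinstance(income, (int, float)) and income < 0:
        issues.append("implausible:income")
    # Currency: stale records should not feed live decisions.
    recorded = record.get("record_date")
    if isinstance(recorded, date) and (today - recorded).days > max_age_days:
        issues.append("stale:record_date")
    return issues
```

A gate like this makes the governance position auditable: a record either passed a documented check or it did not, and the failure reason is recorded rather than silently tolerated.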

SP-06 - Dataset Bias

Dataset bias is a systematic pattern in training data that causes a system to treat people differently based on characteristics such as race, gender, age or socioeconomic status. Because the bias is structural rather than intentional, it persists in outputs regardless of what the organisation intended. Identifying and documenting dataset bias is a governance prerequisite.
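
One way such a pattern can be surfaced is a simple disparity check on the labelled data itself. This is a minimal sketch under stated assumptions: the "four-fifths" threshold is a common screening heuristic, not a legal test, and the field names are hypothetical.

```python
from collections import defaultdict

def positive_rates(rows: list[dict], group_key: str, label_key: str) -> dict:
    """Positive-label rate per group, e.g. approval rate by gender."""
    totals, positives = defaultdict(int), defaultdict(int)
    for row in rows:
        group = row[group_key]
        totals[group] += 1
        positives[group] += 1 if row[label_key] else 0
    return {g: positives[g] / totals[g] for g in totals}

def flags_disparity(rates: dict, threshold: float = 0.8) -> bool:
    """True if any group's rate falls below `threshold` x the highest rate."""
    if not rates:
        return False
    best = max(rates.values())
    return any(rate < threshold * best for rate in rates.values())
```

A flagged disparity is not proof of unlawful bias, but documenting the check, its threshold and its result is exactly the kind of prerequisite the entry above describes.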

SP-07 - Model Updates

When a system is updated — whether through retraining, new data or changed parameters — its behaviour changes. Those changes require the same governance scrutiny as the original deployment decision. Organisations that govern initial deployment but not subsequent updates have a structural governance gap.

SP-08 - System Scope

System scope is the defined boundary of what a system is authorised to do and the decisions it is authorised to make. Deployment outside that boundary — to new populations, use cases or decision types — is not a technical matter. It is an accountability and legal matter that requires formal approval and documented governance.

SP-09 - Intended Use

Intended use is the specific purpose for which a system was designed, tested and approved. It defines the limits of legitimate deployment. When a system is used for purposes beyond its intended use — even by the organisation that built it — the governance protections designed for the intended use no longer apply.

SP-10 - Unintended Use

Unintended use occurs when a system is applied to purposes, populations or contexts it was not designed for. It is one of the most common sources of AI harm and governance failure. Accountability for unintended use rests with the deploying organisation, not the system's developer.

SP-11 - Third-Party Systems

Third-party systems are components — data feeds, models, APIs, platforms — built and maintained by external parties but incorporated into an organisation's AI deployment. Using components built by others does not transfer responsibility for what those components do. The deploying organisation remains accountable for the full system's outputs.

SP-12 - Vendor Models

A vendor model is an AI model built by an external supplier and deployed by the organisation using it. The governance obligations attached to that model's outputs — explainability, fairness, audit trails, human oversight — belong to the deploying organisation. Vendor contracts cannot contract those obligations away.

SP-13 - System Inputs

System inputs are the data fed into a system at the point of making a decision. Input quality, provenance and completeness directly determine output quality. Governance requires that inputs are documented, validated and retained — because without the inputs, the decision cannot be reconstructed or challenged.
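
In practice, "documented, validated and retained" means capturing the exact inputs alongside each output at decision time. The sketch below is illustrative only: the field names are hypothetical, and a real system would write to durable, append-only storage rather than an in-memory list.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(store: list, system_id: str, inputs: dict, output) -> str:
    """Append a decision record; returns a content hash usable as its ID."""
    payload = json.dumps({"system": system_id, "inputs": inputs,
                          "output": output}, sort_keys=True)
    record_id = hashlib.sha256(payload.encode()).hexdigest()[:16]
    store.append({
        "id": record_id,
        "logged_at": datetime.now(timezone.utc).isoformat(),
        "system": system_id,
        "inputs": inputs,   # retained verbatim so the decision can be reconstructed
        "output": output,
    })
    return record_id
```

Hashing the canonicalised payload gives each record a stable identifier, so a challenged decision can be matched to the precise inputs that produced it.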

SP-14 - System Outputs

System outputs are the decisions, scores, classifications and recommendations an automated system produces. They are the point at which governance obligations become concrete — the point at which a person is affected, a right is engaged, and accountability becomes real. Output governance begins with knowing exactly what the system produces and when.

SP-15 - Transparency

Transparency in AI governance means being open — in specific, accessible terms — about what a system does, what data it uses, and how it reaches its outputs. Publishing a general description of a system is not transparency. Transparency requires specificity, timing and accessibility: the right information must reach the right person at the right moment.

SP-16 - Explainability

Explainability is the ability to account for a specific decision in terms the affected person can understand and act on. A technical description of how a model works is not an explanation. An explanation must address the specific decision, the specific person, and the specific factors that produced that outcome. Regulators increasingly treat it as a legal obligation.

SP-17 - Model Performance

Model performance measures how accurately a system does what it is supposed to do. Performance at deployment is not performance over time. Systems degrade, data drifts, contexts change. Ongoing performance monitoring is a governance requirement — not because the system may fail suddenly, but because it may fail gradually and invisibly.
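
"Gradually and invisibly" is exactly what a rolling comparison against the deployment baseline is designed to catch. A minimal sketch, assuming a classification system; the window size and tolerance are hypothetical policy parameters.

```python
from collections import deque

class PerformanceMonitor:
    """Track live accuracy over a rolling window against a recorded baseline."""

    def __init__(self, baseline_accuracy: float, window: int = 500,
                 tolerance: float = 0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, prediction, actual) -> None:
        self.outcomes.append(1 if prediction == actual else 0)

    def current_accuracy(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    def degraded(self) -> bool:
        """True once live accuracy drops more than `tolerance` below baseline."""
        return self.current_accuracy() < self.baseline - self.tolerance
```

The governance point is the defined baseline and tolerance: without them, "the system still seems to work" substitutes for measurement.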

SP-18 - Third-Party Risk

Third-party risk is the governance exposure created when parts of an AI system are built, maintained or operated by external suppliers. Outsourcing a component does not outsource accountability for what it does. Organisations that cannot demonstrate control over their third-party components cannot demonstrate control over their systems.

SP-19 - Fairness

Fairness in automated decision making has three distinct dimensions: equal treatment, equal outcomes and procedural fairness. Meeting one does not satisfy the others. Organisations must assess fairness across all three dimensions and document that assessment — because discrimination law, consumer protection law and financial regulation each address a different dimension.
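
Two of the three dimensions can be measured directly, and the same dataset can pass one while failing the other. Illustrative only: field names are hypothetical, and procedural fairness (notice, reasons, a route to challenge) is not a statistic at all.

```python
def outcome_gap(rows, group_key, decision_key):
    """Equal outcomes: gap in favourable-decision rate between groups."""
    rates = {}
    for g in {r[group_key] for r in rows}:
        members = [r for r in rows if r[group_key] == g]
        rates[g] = sum(r[decision_key] for r in members) / len(members)
    return max(rates.values()) - min(rates.values())

def error_gap(rows, group_key, decision_key, truth_key):
    """Equal treatment: gap in error rate between groups."""
    errors = {}
    for g in {r[group_key] for r in rows}:
        members = [r for r in rows if r[group_key] == g]
        wrong = sum(r[decision_key] != r[truth_key] for r in members)
        errors[g] = wrong / len(members)
    return max(errors.values()) - min(errors.values())
```

A dataset can show a zero outcome gap while one group absorbs all of the errors, which is why assessing and documenting each dimension separately matters.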

SP-20 - Data Drift

Data drift occurs when the real-world data a system encounters in production diverges from its training data. The system continues operating while its outputs become less reliable. Detecting and responding to drift is a monitoring obligation, not a technical afterthought.
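
One common drift heuristic is the Population Stability Index (PSI), which compares the binned distribution of a feature in production against the training data. A minimal sketch; the ten equal-width bins and the 0.2 alert threshold are conventional rules of thumb, not regulatory requirements.

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index of `actual` relative to `expected`."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def proportions(values):
        counts = [0] * bins
        for v in values:
            idx = sum(v > e for e in edges)  # bin index for this value
            counts[idx] += 1
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(values), 1e-4) for c in counts]

    p, q = proportions(expected), proportions(actual)
    return sum((qi - pi) * math.log(qi / pi) for pi, qi in zip(p, q))

def drift_alert(score: float, threshold: float = 0.2) -> bool:
    return score > threshold
```

The value of a scalar score like this is operational: it turns "the data feels different" into a number that can be monitored, thresholded and escalated.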

SP-21 - Model Drift

Model drift is the progressive deterioration of a deployed system's accuracy over time — not because inputs have changed, but because the relationships the model learned no longer hold. Without ongoing performance monitoring, model drift is invisible until the harm it causes is not.

SP-22 - Provenance

Provenance is the documented history of data — where it came from, who created it, what was done to it before it reached the system. Without provenance, the reliability and legitimacy of a system's outputs cannot be established or defended under regulatory scrutiny.
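
At its simplest, provenance is a record of origin, custody and transformation steps. The sketch below is illustrative only: field names are hypothetical, and real lineage systems (and standards such as W3C PROV) are considerably richer.

```python
from dataclasses import dataclass, field

@dataclass
class ProvenanceRecord:
    dataset: str
    source: str                  # where the data came from
    collected_by: str            # who created or collected it
    transformations: list = field(default_factory=list)  # what was done to it

    def add_step(self, description: str, performed_by: str) -> None:
        self.transformations.append({"step": description, "by": performed_by})

    def lineage(self) -> str:
        """Human-readable chain from origin through each transformation."""
        steps = " -> ".join(t["step"] for t in self.transformations)
        return f"{self.source} -> {steps}" if steps else self.source
```

Even a record this small answers the questions regulators ask first: where did this data originate, who handled it, and what happened to it before the system saw it.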

SP-23 - Robustness

Robustness is a system's ability to perform reliably under conditions that deviate from the expected — adversarial inputs, edge cases, distributional shift, partial failures. A system that functions well in testing but degrades under real-world stress has not been governed for the conditions it actually faces.

SP-24 - Deployment Context

Deployment context is the specific environment in which an automated system operates — the population it affects, the decisions it influences, and the regulatory environment that applies. Governance obligations are not abstract. They attach to the specific context in which a system is deployed.

SP-25 - System Architecture

The governance implications of an automated system depend on its architecture. Explainability, reconstruction, oversight and third-party risk each operate differently across rules-based systems, machine learning models, large language models, RAG systems, fine-tuned models and API-accessed models.

SP-26 - Distribution Shift

Distribution shift occurs when the real-world environment a system operates in differs materially from its training environment. Unlike data drift, it can occur from the moment of deployment. A system trained on one population or context may produce unreliable outputs when applied to another.

SP-27 - Feedback Loops

Feedback loops arise when a system's outputs influence the data used to evaluate or retrain it. The system begins to reinforce its own past decisions, potentially entrenching bias and distorting performance metrics. Standard evaluation metrics will not detect this — the loop makes the bias appear validated.
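
The mechanism can be seen in a toy lending example: repayment is only ever observed for approved applicants, so accuracy measured on the system's own approvals improves as the system rejects more people. All numbers below are invented for illustration.

```python
def evaluate(applicants, threshold):
    """Accuracy as the feedback loop sees it: measured only on approvals."""
    approved = [a for a in applicants if a["score"] >= threshold]
    # Repayment is only observed for approved applicants; rejected
    # applicants who would have repaid never enter the metric.
    observed_correct = sum(a["repays"] for a in approved)
    return {
        "approval_rate": len(approved) / len(applicants),
        "observed_accuracy": (observed_correct / len(approved)
                              if approved else None),
    }
```

A stricter threshold can raise observed accuracy while silently excluding creditworthy applicants, which is precisely how the loop makes bias appear validated by the metrics.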

SP-28 - Automation Bias

Automation bias occurs when human operators defer to system outputs without adequate scrutiny. The presence of a human reviewer does not automatically provide meaningful oversight — it may simply record that a human was present. Governance must design for genuine oversight, not nominal compliance.

SP-29 - Silent Degradation

Silent degradation occurs when system performance declines gradually without producing obvious failures. The system continues producing plausible outputs while accuracy deteriorates. No alarm sounds. The failure is only visible through active performance measurement against a defined baseline.

SP-30 - Deployment Authorisation

Deployment authorisation is the formal recorded decision to put an automated system into operation — documenting who approved it, on what basis, and under what conditions. Without it, no one can demonstrate who started the system running or whether the organisation exercised meaningful governance over that decision.

SP-31 - System Scope and Boundaries

System scope defines what an automated system acts on, what it connects to, and where its influence ends. Governance is only as good as the organisation's understanding of its system's actual scope — which is frequently narrower than assumed and rarely static.