Accountability and Assignment of Responsibility
Accountability is the obligation of a named person to answer for what an automated system does. It must be assigned explicitly, documented formally, and understood by the person named. Accountability that exists only in an organisation chart or a vendor contract — without genuine authority and information — is not accountability in any regulatory or legal sense.
Approval processes are the formal mechanisms through which decisions about an automated system — to build it, deploy it, change it or decommission it — are authorised. Formal sign-off distinguishes a governed decision from an assumed one. Without documented approval, accountability for deployment decisions cannot be established.
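The distinction between a governed and an assumed decision can be made concrete as a record check. The sketch below is illustrative only — the field names (`system_id`, `approver`, and so on) are assumptions, not a standard schema; the point is that deployment is authorised only when a documented approval by a named individual exists.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical approval record; fields are illustrative, not a standard.
@dataclass(frozen=True)
class Approval:
    system_id: str
    decision: str    # e.g. "build", "deploy", "change", "decommission"
    approver: str    # a named individual, not a team or role
    approved_on: date
    rationale: str

def may_deploy(approvals: list[Approval], system_id: str) -> bool:
    """Deployment is governed only if a documented approval exists."""
    return any(a.system_id == system_id and a.decision == "deploy"
               for a in approvals)

record = Approval("credit-scoring-v2", "deploy", "J. Patel",
                  date(2024, 3, 1), "Risk assessment reviewed and accepted")
print(may_deploy([record], "credit-scoring-v2"))  # True: documented approval
print(may_deploy([], "credit-scoring-v2"))        # False: assumed, not governed
```

An empty approval list here models the "assumed" decision: without the record, accountability for the deployment cannot be established.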
Human oversight requires that a person with authority and information is actively engaged with what an automated system does — not merely present near it. A person who receives a system's outputs, approves them routinely, or monitors without the power to intervene is not providing oversight. The standard requires genuine capability to act.
Decision authority is the formal, documented power to make or approve specific decisions about an automated system. Authority that is assumed, shared informally or undocumented does not exist in governance terms. When something goes wrong, the absence of documented decision authority is the absence of accountability.
Role definition in AI governance means specifying the responsibilities, authorities and accountabilities attached to each position involved in a system's operation. General job descriptions are not sufficient. Governance requires roles defined specifically for each system, with named individuals who understand what is required of them.
Boards that approve AI deployment without understanding the risks, the system's capabilities, or the governance arrangements in place have not discharged their legal obligations as directors. Board responsibility for AI governance is not delegable. It requires informed decision making, documented scrutiny and ongoing oversight.
Management responsibility connects board-level accountability to the operational governance of individual systems. It requires that managers with day-to-day oversight of systems have the authority, information and resources to govern them — and that their accountability is documented rather than assumed from their general role.
An organisation that deploys an automated system is accountable for what that system does — regardless of whether it built the system itself or purchased it from a vendor. Vendor responsibility means the vendor may share some obligations, but the deploying organisation cannot contract its accountability away.
Delegated authority is the formal transfer of specific decision-making powers from one person to another. The person who delegates retains ultimate accountability. Delegation that is informal, undocumented or broader than intended creates responsibility gaps that become governance failures when examined under scrutiny.
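The rule that the delegator retains ultimate accountability can be sketched as a documented delegation record. This is a minimal illustration under assumed names (`Delegation`, the power strings), not a formal register design; it shows that a documented delegation names both parties, scopes the powers transferred, and keeps the delegator on the record.

```python
from dataclasses import dataclass

# Illustrative delegation record; field names are assumptions.
@dataclass(frozen=True)
class Delegation:
    delegator: str           # retains ultimate accountability
    delegate: str
    powers: frozenset[str]   # explicit scope of transferred powers

def accountable_for(delegations: list[Delegation], power: str) -> list[str]:
    """Both delegate and delegator answer for a documented power."""
    names = []
    for d in delegations:
        if power in d.powers:
            names.extend([d.delegate, d.delegator])
    return names

d = Delegation("CTO", "ML Ops Lead", frozenset({"approve_model_update"}))
print(accountable_for([d], "approve_model_update"))
# ['ML Ops Lead', 'CTO']: the delegator stays accountable
print(accountable_for([d], "decommission_system"))
# []: a power nobody documented is a responsibility gap
```

The empty result for an undocumented power is the structural point: informal or broader-than-intended delegation leaves exactly this kind of gap.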
Internal governance is the structure — committees, policies, reporting lines, review processes — through which an organisation directs and controls its automated systems. Effective internal governance requires that the structure is operational, not merely documented, and that the people within it have genuine authority over the systems they are responsible for.
The gap between policy and practice is where most governance failures actually occur. A written policy that staff do not follow, cannot access, or were never trained on is evidence of a governance gap rather than evidence of governance. Regulators assess whether policies are operational, not merely whether they exist.
Responsibility gaps are situations where no named person is clearly accountable for a specific aspect of an automated system's governance. They are the most common structural cause of AI governance failure. They arise most often at the boundaries between teams, between organisations and between the vendor relationship and internal oversight.
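Responsibility gaps become visible when governance aspects are mapped to named owners. The aspect names below are invented for illustration; the technique is simply to enumerate what must be owned and surface anything unassigned, rather than discovering the gap after an incident.

```python
# Illustrative aspect list; the entries are assumptions, not a standard.
ASPECTS = [
    "deployment approval",
    "output monitoring",
    "vendor contract oversight",
    "incident response",
    "model updates",
]

# Named owners for each governed aspect (hypothetical assignments).
assignments = {
    "deployment approval": "A. Okafor",
    "output monitoring": "L. Chen",
    "model updates": "L. Chen",
}

# Any aspect with no named owner is a responsibility gap.
gaps = [a for a in ASPECTS if a not in assignments]
print(gaps)  # ['vendor contract oversight', 'incident response']
```

Note that the two unassigned aspects sit exactly where the text predicts: at the vendor boundary and between teams.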
Most automated systems enter organisations through procurement rather than internal development. Vendors supply capability but not accountability. The governance question is how responsibility is defined, allocated and maintained when systems are purchased rather than built.
When an automated system produces an output, the system did not decide — a person delegated authority to it. Governing delegated decision authority means documenting how that delegation works and what limits apply; undocumented delegation is a structural governance failure.
An algorithmic impact assessment is a structured pre-deployment review that identifies the risks an automated system creates before it goes live. It documents what the system does, who it affects, what could go wrong and what controls are in place to prevent or limit harm. It is the formal evidence that a deployment decision was made with understood risk.
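The elements named above — what the system does, who it affects, what could go wrong, what controls exist — can be treated as required sections of the assessment record. This sketch is a hypothetical structure, not a template from any jurisdiction; it shows that an assessment with an empty section is not evidence that a deployment decision was made with understood risk.

```python
from dataclasses import dataclass, fields

# Hypothetical impact-assessment record; the field set mirrors the
# elements described in the text and is not a formal template.
@dataclass
class ImpactAssessment:
    system_purpose: str     # what the system does
    affected_groups: str    # who it affects
    identified_risks: str   # what could go wrong
    controls_in_place: str  # what limits or prevents harm

def is_complete(aia: ImpactAssessment) -> bool:
    """Every section must be filled in before it counts as evidence."""
    return all(getattr(aia, f.name).strip() for f in fields(aia))

draft = ImpactAssessment("Automated CV screening", "Job applicants",
                         "", "Human review of rejections")
print(is_complete(draft))  # False: the risks section is empty
```

The check is deliberately shallow: completeness of sections is the minimum bar, not an assessment of whether the content is adequate.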
A conflict of interest arises when the person responsible for governing or overseeing an automated system has a personal, financial or organisational interest in a particular outcome. That interest distorts judgment: the governance mechanism may function correctly on paper, yet the decisions it produces serve the wrong purpose.
Where AI systems make or substantially influence decisions affecting employees, many jurisdictions require consultation with employee representatives before deployment. Deploying such systems without that consultation is a legal breach. The consultation requirement is not discharged by informing employees after deployment.