AI Governance Policy

Project: Sable AI Ltd — AI Governance Framework
Stage: Stage 2 — Governance Foundation
Status: Draft
Version: v1
Date: 2026-03-01
Assumptions: Built on outline assumptions — not verified against real Sable AI Ltd data


1. Scope and Purpose

This policy governs the development, deployment, and operation of artificial intelligence systems by Sable AI Ltd. It applies to all staff, contractors, and third-party suppliers who develop, maintain, or use AI systems on behalf of Sable AI Ltd.

Current scope: This policy currently applies to Scout, Sable AI Ltd's AI-powered CV screening and candidate shortlisting tool. [ASSUMPTION A-002] Where additional AI systems are adopted, their governance requirements must be assessed against this policy before deployment.

Purpose: To ensure that Sable AI Ltd operates its AI systems:

  • In compliance with applicable UK law, including UK GDPR, the Data Protection Act 2018, the Data (Use and Access) Act 2025, and the Equality Act 2010 [ASSUMPTION A-006]
  • In accordance with ICO guidance on AI and data protection and the DSIT Responsible AI in Recruitment guide (March 2024)
  • In a manner that is fair, transparent, and accountable to the candidates and customers it affects

This policy supports Sable AI Ltd's obligations as a data processor acting on behalf of its customers. [ASSUMPTION A-008] It does not replace customers' own data governance obligations.


2. Governing Principles

Sable AI Ltd operates its AI systems in accordance with the following principles, drawn from the UK GDPR accountability principle (Article 5(2)), ICO AI and data protection guidance, and the DSIT AI regulatory principles.

| Principle | Regulatory basis | Meaning for Scout |
|---|---|---|
| Lawfulness, fairness and transparency | UK GDPR Art. 5(1)(a) | Scout processes candidate personal data only where a lawful basis exists; candidates are informed that AI is used in screening |
| Data minimisation | UK GDPR Art. 5(1)(c) | Only CV text and job description text are passed to the underlying model [ASSUMPTION A-011]; no unnecessary personal data fields are ingested |
| Accuracy | UK GDPR Art. 5(1)(d) | Scout outputs are treated as indicative, not definitive; errors are addressed through human review [ASSUMPTION A-007] |
| Accountability | UK GDPR Art. 5(2) | Sable AI Ltd documents its compliance decisions and can demonstrate compliance to regulators |
| Fairness | Equality Act 2010 s.19; ICO audit Nov 2024 | Scout is monitored for bias and discriminatory outputs; inferred protected characteristics are not used as screening criteria |
| Human oversight | UK GDPR Art. 22A(1)(a); DUA 2025 s.80 | Scout outputs are subject to mandatory human review before any candidate is progressed, rejected, or contacted [ASSUMPTION A-007] |
| Contestability | UK GDPR Art. 22C(2)(d) | Candidates have the right to request human review of and to contest AI-assisted decisions affecting them |
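The data-minimisation row above can be sketched as a simple allow-list filter on the payload sent to the underlying model. This is an illustrative sketch only: the field names (`cv_text`, `job_description`) and the helper function are assumptions, not Scout's actual schema or implementation.

```python
# Hypothetical allow-list filter illustrating the data-minimisation control:
# only CV text and job description text ever reach the underlying model.
# Field names are illustrative assumptions, not Scout's real schema.

ALLOWED_FIELDS = {"cv_text", "job_description"}

def build_model_payload(candidate_record: dict) -> dict:
    """Return only the fields the policy permits to be sent to the model."""
    payload = {k: v for k, v in candidate_record.items() if k in ALLOWED_FIELDS}
    dropped = sorted(set(candidate_record) - ALLOWED_FIELDS)
    if dropped:
        # Fields such as date of birth or email are logged as excluded,
        # never transmitted to the sub-processor.
        print(f"Excluded per data-minimisation policy: {dropped}")
    return payload
```

An allow-list (rather than a block-list) fails safe: any new field added upstream is excluded by default until the policy explicitly approves it.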

3. Accountability Structure

[ASSUMPTION] The following structure is based on assumed roles within Sable AI Ltd. Actual role titles and reporting lines must be verified before operational use.

| Role | AI Governance Responsibility |
|---|---|
| Founder / CEO | Ultimate accountability for AI governance; approves this policy and any material changes to Scout's processing activities; approves DPIA sign-off |
| CTO [ASSUMPTION A-015 — acting DPO equivalent at this stage] | Operational AI lead; maintains DPIA; oversees technical compliance with UK GDPR; acts as data protection contact; reviews bias monitoring outputs |
| Engineering Lead | Implements technical controls specified in this policy; maintains Scout's data flow documentation; manages Anthropic sub-processor relationship at a technical level |
| Customer Success Lead | Manages customer-facing compliance obligations; ensures customers receive required candidate transparency notices; handles data subject rights requests from customers |

DPO position: [ASSUMPTION A-015] Sable AI Ltd does not currently employ a dedicated Data Protection Officer. The CTO is assumed to carry DPO-equivalent responsibilities at this stage. This must be reviewed as the company scales or if Scout's candidate processing volume increases materially. [LEGAL REVIEW REQUIRED — whether a dedicated DPO appointment is required under UK GDPR Article 37(1)(b) given the scale and nature of candidate data processing]

A full RACI breakdown for each responsibility area is provided in L1-2.5-Roles-and-Responsibilities-v1.md.


4. Approved Use Cases

The following uses of AI are approved under this policy, subject to the controls specified:

| Use case | System | Required controls |
|---|---|---|
| CV screening against job description criteria | Scout | Mandatory human review of outputs before candidate contact [ASSUMPTION A-007]; customer to provide candidate transparency notice |
| Candidate shortlist generation | Scout | Shortlist treated as advisory, not binding; human reviewer exercises independent judgment [ASSUMPTION A-012] |
| Job description analysis (criteria extraction) | Scout | No candidate personal data involved; no special controls required beyond standard security measures |

All approved uses are documented in L1-2.1-AI-System-Inventory-v1.md.
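The human-review control required for both screening and shortlisting could be enforced in code as a gate: the model's recommendation is stored as advisory only, and no candidate-facing action is permitted until a named reviewer records an independent decision. The types and field names below are hypothetical, not Scout's actual data model.

```python
# Hypothetical sketch of the mandatory human-review gate. The class and
# field names are illustrative assumptions, not Scout's actual data model.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ScreeningResult:
    candidate_id: str
    ai_recommendation: str            # advisory only, e.g. "shortlist"
    human_decision: Optional[str] = None
    reviewer_id: Optional[str] = None

def can_action_candidate(result: ScreeningResult) -> bool:
    """No candidate may be progressed, rejected, or contacted until a named
    human reviewer has recorded their own decision (the Article 22A safeguard
    described in this policy)."""
    return result.human_decision is not None and result.reviewer_id is not None
```

Recording the reviewer's identity alongside the decision also supports the accountability principle: the audit trail shows that review was genuine, not a rubber stamp.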


5. Prohibited Use Cases

The following uses are prohibited without prior written approval from the CTO and, where required, legal review:

Solely automated significant decisions: Using Scout's output as the sole basis for rejecting or progressing a candidate without genuine human review. This would constitute a decision based solely on automated processing under UK GDPR Article 22A(1)(a) (as inserted by the Data (Use and Access) Act 2025, section 80, in force from 5 February 2026). Where Scout's output constitutes a significant decision under Article 22A(1)(b), safeguards under Article 22C must be in place. [LEGAL REVIEW REQUIRED on whether Scout's shortlisting output constitutes a significant decision in any given deployment context]

Inferred protected characteristics — operational prohibition: Scout, or any other AI system, must not be used to infer, estimate, or derive candidates' protected characteristics under Equality Act 2010 section 4 (including race, ethnicity, disability, religion, sex, and age) from names, CV content, employment history, education history, location, language style, or other proxy indicators, for any screening, scoring, filtering, ranking, profiling, or bias-monitoring purpose. This prohibition applies whether the inference is intended or incidental, and regardless of the apparent purpose. The ICO has confirmed that inferred protected-characteristic data is not accurate enough to support lawful bias monitoring and will typically lack a valid lawful basis and additional condition (ICO AI in Recruitment Outcomes Report, November 2024).

Special category data without lawful basis: Processing candidates' special category personal data (UK GDPR Article 9(1)) without a valid Article 6 lawful basis and an additional condition under DPA 2018 Schedule 1.

Model training on candidate data: Using candidate personal data processed through Scout to train, fine-tune, or improve any AI model without a valid lawful basis, explicit customer authorisation, and candidate notice. [ASSUMPTION A-005 — the Anthropic DPA is assumed to prohibit training use of API-submitted data]

Repurposing candidate data: Using candidate CVs or screening outputs for any purpose incompatible with the original purpose for which they were collected, contrary to UK GDPR Article 5(1)(b).

Unassessed new AI systems: Deploying any new AI system that processes personal data without first completing a DPIA under UK GDPR Article 35 and obtaining CEO approval.


6. Third-Party AI Vendor Obligations

Scout currently operates using the Anthropic Claude API. [ASSUMPTION A-002, A-005]

Anthropic is a sub-processor of candidate personal data. The following requirements apply:

  • A valid Data Processing Agreement compliant with UK GDPR Article 28(3) must be in place before any candidate personal data is transmitted to Anthropic. [ASSUMPTION A-005]
  • The DPA must confirm that Anthropic does not use candidate personal data submitted via the API to train its models. [ASSUMPTION A-005]
  • Anthropic's processing of personal data outside the UK must be covered by an appropriate international transfer mechanism under UK GDPR Chapter V. [ASSUMPTION A-014]
  • Any change to Anthropic's sub-processing terms, or any proposal to change AI model provider, must be reviewed by the CTO before implementation.
  • The Engineering Lead must monitor Anthropic's service status, security advisories, and terms changes, and notify the CTO of any material changes.
  • Anthropic's obligations as a sub-processor must be flowed down in accordance with UK GDPR Article 28(3)(d).
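One way to operationalise the requirements above is a pre-flight check that blocks transmission of candidate data unless the vendor register confirms the contractual safeguards are in place. The register structure and flag names below are illustrative assumptions, not a record of Anthropic's actual contractual terms.

```python
# Hypothetical sub-processor register; flag names and values are illustrative
# assumptions, not a statement of Anthropic's actual contractual terms.
SUBPROCESSOR_REGISTER = {
    "anthropic": {
        "dpa_in_place": True,          # UK GDPR Art. 28(3) DPA signed
        "no_training_clause": True,    # API data not used for model training
        "transfer_mechanism": "IDTA",  # UK GDPR Chapter V safeguard
    },
}

def preflight_check(vendor: str) -> None:
    """Raise before any candidate personal data is sent to an unapproved
    or non-compliant sub-processor."""
    entry = SUBPROCESSOR_REGISTER.get(vendor)
    if entry is None:
        raise RuntimeError(f"{vendor!r} is not an approved sub-processor")
    if not (entry["dpa_in_place"]
            and entry["no_training_clause"]
            and entry["transfer_mechanism"]):
        raise RuntimeError(f"{vendor!r} fails sub-processor compliance checks")
```

In practice the register would be maintained by the CTO alongside the DPIA, so a change to sub-processing terms forces a code-visible review before data flows resume.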

Customer DPA obligations — including the flow-down of Article 28(3) terms to sub-processors and the question of joint controller arrangements — are addressed in L4-5.1-Data-Processing-Agreement-Template-v1.md (forthcoming).


7. Review Cycle

| Trigger | Action required | Owner |
|---|---|---|
| Annual (minimum) | Full policy review against current ICO guidance, regulatory developments, and Scout's processing activities | CTO |
| Material change to Scout's processing (new data types, new customer segments, new model version) | DPIA review; policy update if required | CTO + Engineering Lead |
| ICO enforcement action or significant regulatory development in recruitment AI | Ad hoc policy review | CEO + CTO |
| Candidate complaint or identified bias incident | Review of relevant policy sections; update if control gap identified | CTO |
| Company growth beyond 25 employees or Series A funding | Review of DPO requirement, governance structure, and resource allocation | CEO |
| Addition of any new AI system processing personal data | New DPIA; policy update to Approved Use Cases | CTO + CEO |

8. Regulatory Basis

This policy is designed to support compliance with:

  • UK GDPR (retained EU law), in particular: Articles 5 (data protection principles and accountability); 6 and 9 (lawful basis and special category conditions); 22A–22C (automated decision-making safeguards, as inserted by the Data (Use and Access) Act 2025); 28 (processor obligations); 35 (data protection impact assessment)
  • Data Protection Act 2018, in particular section 10 and Schedule 1 (special category conditions for employment and equality purposes)
  • Data (Use and Access) Act 2025, section 80 (ADM provisions in force from 5 February 2026; saving provision in SI 2026/82 regulation 5 for decisions taken before that date)
  • Equality Act 2010, sections 4 (protected characteristics), 19 (indirect discrimination), 39 (employer duties in recruitment)
  • ICO guidance on AI and data protection (2023, updated)
  • ICO AI in Recruitment Outcomes Report (November 2024) — primary enforcement benchmark for Scout's use case
  • DSIT Responsible AI in Recruitment guide (March 2024)

This policy requires review by a qualified UK data protection lawyer before operational use. It is a framework proposal built on assumed characteristics of Sable AI Ltd and has not been verified against real company data. See ASSUMPTIONS-LOG.md for a full register of unverified assumptions.