ICO AI in Recruitment Audit — Gap Analysis
Project: Sable AI Ltd — AI Governance Framework
Stage: Stage 3 — Regulatory Alignment
Status: Draft
Version: v1
Date: 2026-03-01
Assumptions: Built on outline assumptions — not verified against real Sable AI Ltd data
Purpose
This document maps the findings and recommendations from the ICO's AI and Recruitment: Outcomes of our AI Audits report (November 2024) against the Sable AI Ltd governance framework. Its function is to confirm what the framework covers, identify where coverage is partial, and flag genuine gaps requiring additional action.
The ICO's November 2024 report is the most directly applicable enforcement benchmark for Scout: it reflects live audit findings from companies operating CV screening and candidate shortlisting AI tools — the exact use case for which Sable AI Ltd builds and sells Scout. Any recruitment AI provider that cannot demonstrate a response to this report's recommendations faces material regulatory exposure.
Source document: ICO, AI and Recruitment: Outcomes of our AI Audits, November 2024 (hereafter: ICO Recruitment Outcomes Report).
How to use this document: Sable AI Ltd and any organisation adapting this framework should work through the gap column and confirm, for each "Gap — action required" and "Partially addressed" row, what specific actions are planned and by when. This document is intended to be a living instrument, updated as framework documents are completed and operational controls are implemented.
About the ICO Audit Programme
The ICO conducted a programme of audits of AI recruitment tool providers and their customers. Key quantitative outcomes:
- 296 recommendations made across all audit engagements
- 42 advisory notes issued
- 97% of recommendations accepted and actioned by recipients
- Audits covered CV screening, candidate scoring, candidate sourcing, and related AI-enabled recruitment tools
The ICO identified seven headline recommendation themes applicable to all organisations designing and using AI recruitment tools. These form the primary structure of this gap analysis. Additional findings are addressed in the supplementary section.
Part 1 — Gap Analysis: ICO Seven Key Recommendation Themes
1. Fair Processing in AI — Bias and Accuracy Monitoring
ICO finding: Many providers monitored the accuracy and bias of their AI tools, but the ICO also observed instances where accuracy testing was absent. The ICO stated that being "better than random" is insufficient to demonstrate that AI is processing personal information fairly. Providers and recruiters must monitor for potential or actual fairness, accuracy, or bias issues in AI and its outputs, and take appropriate action.
| ICO recommendation | Framework document(s) | Gap status |
|---|---|---|
| Monitor AI outputs for bias and accuracy issues | L3-4.2-Bias-Monitoring-Protocol-v1.md | Addressed in framework design — protocol defines monitoring methodology, thresholds, and remediation workflow; operational implementation remains customer- and deployment-dependent |
| Establish alert thresholds and escalation for bias signals | L3-4.2-Bias-Monitoring-Protocol-v1.md, L3-4.1-Monitoring-Framework-v1.md | Addressed in framework design — thresholds and escalation paths are documented; live calibration and resourcing remain implementation tasks |
| Take corrective action when bias or accuracy issues are detected | L3-4.2-Bias-Monitoring-Protocol-v1.md | Addressed in framework design — remediation and investigation workflow is defined in the protocol |
| Demonstrate fairness beyond accuracy alone | L1-2.2-Risk-Classification-Framework-v1.md, L3-4.2-Bias-Monitoring-Protocol-v1.md | Addressed in framework design — risk classification establishes the fairness obligation and the protocol operationalises monitoring beyond accuracy-only measures |
Overall status for this theme: Addressed at framework design level. Operational implementation — including calibration of thresholds, demographic monitoring data collection, and ongoing reporting — remains an implementation task for Sable AI Ltd.
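As an illustration of the kind of disparity check the monitoring protocol describes, the sketch below computes per-group shortlisting rates and flags any group whose rate falls below a configurable fraction of the highest group's rate (the "four-fifths" heuristic commonly used as a starting point in adverse-impact analysis). The function names, data shape, and 0.8 threshold are illustrative assumptions, not taken from L3-4.2-Bias-Monitoring-Protocol-v1.md; actual thresholds and escalation logic must follow the protocol and legal advice.

```python
from collections import Counter

def selection_rates(outcomes):
    """Compute per-group shortlisting rates from (group, shortlisted) pairs."""
    totals, selected = Counter(), Counter()
    for group, shortlisted in outcomes:
        totals[group] += 1
        if shortlisted:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparity_alerts(outcomes, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    rate of the highest-selected group (four-fifths heuristic).

    Returns {group: ratio} for each group breaching the threshold, so the
    alert can be routed into the protocol's escalation workflow.
    """
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}
```

A breach (a non-empty return value) would trigger the investigation and remediation workflow the protocol defines; the statistical test used in production (and any minimum sample-size rule) should be specified by the protocol itself rather than this sketch.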
2. Adequate and Accurate Bias Monitoring Data
ICO finding: AI providers and recruiters must ensure that any special category data processed to monitor for bias and discriminatory outputs is adequate and accurate enough to effectively fulfil this purpose. Inferred or estimated data will not be adequate and accurate enough, and will therefore not comply with data protection law. The ICO found that some providers inferred gender, ethnicity, and other protected characteristics from CV content or candidate names — this processing often occurred without a lawful basis and without the candidate's knowledge.
| ICO recommendation | Framework document(s) | Gap status |
|---|---|---|
| Do not use inferred or estimated protected characteristic data for bias monitoring | L2-3.2-Equality-Act-2010-Compliance-Map-v1.md (flags inference risk), L2-3.1-UK-GDPR-Mapping-Matrix-v1.md (Art. 9 controls) | Partially addressed — the compliance map and GDPR matrix flag the risk and the legal constraint; a specific prohibition on inference-based monitoring requires operational implementation |
| Use only adequate and accurate data sources for bias monitoring | L3-4.2-Bias-Monitoring-Protocol-v1.md | Addressed in framework design — protocol Section 3 explicitly prohibits inferred demographic data, specifies lawful collection routes (Option A voluntary monitoring), and prohibits Options C and D; operational implementation of the approved data collection mechanism remains an implementation task |
| Ensure a lawful basis exists before processing special category data for monitoring | L2-3.1-UK-GDPR-Mapping-Matrix-v1.md (Art. 9 analysis), L2-3.4-DPIA-Template-v1.md | Partially addressed — lawful basis question is identified and flagged [LEGAL REVIEW REQUIRED]; no confirmed lawful basis route is yet implemented; legal advice required before live demographic monitoring commences |
Overall status for this theme: Partially addressed / Gap. This is the most legally complex recommendation in the report. The framework identifies the tension between bias monitoring (which requires demographic data) and UK GDPR Art. 9 (which prohibits processing of most special category data without an explicit lawful route). Resolution requires legal advice before the monitoring protocol can be finalised. [LEGAL REVIEW REQUIRED]
3. Transparency — Informing Candidates About AI Processing
ICO finding: Recruiters must ensure that they inform candidates how their personal information will be processed by AI. The ICO found that detailed privacy information for candidates was consistently absent or inadequate across many providers audited. Providers should either supply this information themselves or ensure the recruiting organisation provides it.
| ICO recommendation | Framework document(s) | Gap status |
|---|---|---|
| Provide candidates with detailed privacy information about AI processing | L4-5.2-Candidate-Transparency-Notice-v1.md | Addressed in framework design — template provides full candidate-facing notice in plain English covering AI use, data types, rights, and complaint routes; deployment depends on customer implementation |
| Transparency notice must be provided before or at the point of CV processing | L4-5.2-Candidate-Transparency-Notice-v1.md | Addressed in framework design — template includes timing guidance in the Adaptation Notes; delivery timing obligation is on the customer to operationalise |
| Disclose that AI is used in candidate screening (signposting obligation per DSIT guidance) | L4-5.2-Candidate-Transparency-Notice-v1.md, L1-2.4-Governance-Policy-v1.md | Addressed in framework design — governance policy states the obligation; candidate notice provides the operational wording for disclosure |
| Address Art. 13/14 UK GDPR disclosure requirements for automated decision-making | L2-3.1-UK-GDPR-Mapping-Matrix-v1.md (Arts. 13, 14, 22C disclosure obligations) | Addressed — GDPR matrix specifies disclosure obligations; L4-5.2 delivers the candidate-facing template |
Overall status for this theme: Addressed at framework design level. The ICO audit found candidate transparency to be a near-universal failing. The template addresses this gap; operational deployment requires customers to publish and serve the notice before processing commences.
4. Data Minimisation — Avoiding Excess Collection and Unlawful Repurposing
ICO finding: The ICO was concerned to find tools that collected far more personal information than was necessary. In some cases, personal information was scraped and combined with data from millions of profiles on job networking sites and social media, then used to build searchable candidate databases. Recruiters and candidates were rarely aware of this repurposing. The ICO confirmed this practice was unlawful.
| ICO recommendation | Framework document(s) | Gap status |
|---|---|---|
| Limit data collection to what is necessary for the stated purpose | L2-3.1-UK-GDPR-Mapping-Matrix-v1.md (Art. 5(1)(c) data minimisation control), L1-2.3-Data-Flow-Map-v1.md (data intake controls) | Addressed — the GDPR matrix specifies data minimisation obligations; the data flow map defines what data enters Scout's processing pipeline |
| Do not scrape or aggregate candidate data from social media without a lawful basis | L1-2.3-Data-Flow-Map-v1.md (defines permissible data sources), L1-2.4-Governance-Policy-v1.md (prohibited use cases) | Addressed — Scout's defined data inputs are CV documents submitted via customer workflows; social media aggregation is not part of Scout's defined scope [ASSUMPTION A-002] |
| Do not repurpose candidate data for uses incompatible with the original collection purpose | L2-3.1-UK-GDPR-Mapping-Matrix-v1.md (Art. 5(1)(b) purpose limitation control) | Addressed — purpose limitation is addressed in the GDPR matrix |
| Ensure candidates are aware of how their data is being used and by whom | L4-5.2-Candidate-Transparency-Notice-v1.md | Addressed in framework design — as per Theme 3 above |
Overall status for this theme: Addressed (within the scope of Scout's assumed architecture). The ICO's concern about social media aggregation applies to sourcing tools rather than inbound-application screeners. Scout's assumed data model — processing CVs submitted by candidates through customer workflows — does not include social media scraping. However, this assumption must be verified against the actual product [ASSUMPTION].
5. Risk Assessments — Completing DPIAs Adequately and Proactively
ICO finding: The majority of AI providers had completed a DPIA before processing personal information. However, in some cases DPIAs were completed retrospectively or just prior to the audit. The ICO found that in many cases DPIAs were not sufficiently detailed. Common deficiencies included: absence of a detailed data flow map; failure to consider how data protection principles would be met; inadequate necessity and proportionality assessment; and no consideration of alternative approaches that might achieve the same outcome with less personal information.
| ICO recommendation | Framework document(s) | Gap status |
|---|---|---|
| Complete a DPIA before commencing processing, not retrospectively | L2-3.4-DPIA-Template-v1.md (Section 7 sign-off requirement) | Addressed — DPIA template includes a sign-off requirement confirming completion must precede live processing; Sable AI Ltd must populate the template and obtain sign-off before Scout processes production candidate data [ASSUMPTION] |
| DPIA must include a detailed map of data flows through the AI system | L2-3.4-DPIA-Template-v1.md (Section 1.7), L1-2.3-Data-Flow-Map-v1.md | Addressed — data flow map is a dedicated Stage 2 document; DPIA Section 1.7 references it |
| DPIA must include meaningful assessment of necessity and proportionality | L2-3.4-DPIA-Template-v1.md (Section 2) | Addressed — DPIA template Section 2 provides a full necessity and proportionality assessment framework |
| DPIA must consider alternative approaches using less personal information | L2-3.4-DPIA-Template-v1.md (Section 2.3) | Addressed — alternatives assessed in DPIA Section 2.3 |
| DPIA must assess risks to data subjects' rights and freedoms | L2-3.4-DPIA-Template-v1.md (Section 3) | Addressed — Section 3 identifies six risk categories with likelihood, severity, and mitigation analysis |
| DPIA must be kept current and reviewed when processing changes | L1-2.4-Governance-Policy-v1.md (review cycle obligation), L3-4.1-Monitoring-Framework-v1.md (M-18 DPIA currency metric) | Addressed — governance policy specifies review cycle; monitoring framework metric M-18 tracks DPIA currency and triggers review on material change |
Overall status for this theme: Addressed (template level). The DPIA template provides a sufficiently detailed structure. Gap risk: Sable AI Ltd must ensure the DPIA is populated with real operational data (not assumed values) and completed before Scout processes live candidate data. [ASSUMPTION]
6. Role Allocation and Contracting
ICO finding: Several AI providers incorrectly defined themselves as processors rather than controllers, and had not complied with data protection principles as a result. Some attempted to pass all compliance responsibility to recruiters through deliberately broad or unclear contracts. Contracts were often too vague and failed to specify: what personal information would be processed and how; the responsibilities of each party; technical and organisational measures; or how data would be handled if the contract ended.
| ICO recommendation | Framework document(s) | Gap status |
|---|---|---|
| Correctly identify whether the AI provider is a controller, processor, or joint controller | L1-2.2-Risk-Classification-Framework-v1.md (controller/processor analysis), L1-2.3-Data-Flow-Map-v1.md (role identification in each data path) | Addressed — risk classification framework includes controller/processor determination for each customer type; the data flow map addresses agency and in-house HR customer paths separately |
| Contracts must specify what personal data is processed and how | L4-5.1-Data-Processing-Agreement-Template-v1.md (Appendix A and B) | Addressed in framework design — DPA template Appendices A and B specify data types, processing purposes, and sub-processor chain; legal tailoring required before execution |
| Contracts must clearly set out responsibilities of each party | L4-5.1-Data-Processing-Agreement-Template-v1.md, L1-2.5-Roles-and-Responsibilities-v1.md | Addressed in framework design — internal RACI is complete; DPA template sets out controller/processor obligations for each customer type |
| Contracts must specify technical and organisational security measures | L4-5.1-Data-Processing-Agreement-Template-v1.md | Addressed in framework design — DPA template includes security obligations clause; Sable AI Ltd must specify its actual technical and organisational measures before execution |
| Contracts must address what happens to personal data in AI models at contract end | L4-5.1-Data-Processing-Agreement-Template-v1.md | Addressed in framework design — end-of-contract deletion and return provisions are included in both appendices |
| Address the joint controller question for agency customers | L1-2.3-Data-Flow-Map-v1.md (agency customer path and joint controller note), L4-5.1-Data-Processing-Agreement-Template-v1.md (Appendix A joint controller clauses) | Addressed in framework design — data flow map identifies the joint controller risk; DPA Appendix A includes both a controller-processor structure and an Art. 26 Joint Controller Addendum; [LEGAL REVIEW REQUIRED] on which structure applies in each case |
Overall status for this theme: Addressed at framework design level. Legal tailoring and execution of customer-specific DPAs remain implementation tasks. The joint controller determination in each agency relationship requires legal advice.
7. Human Review of AI Outputs
ICO finding: The ICO recommended that organisations introduce robust and meaningful human reviews or quality checks of AI outputs so that issues are addressed at an early stage. The ICO also recommended implementing a feedback mechanism for recruiters to report errors. The ICO stated that recruiters should not use AI tools to make automated recruitment decisions where the AI is not designed for this purpose.
| ICO recommendation | Framework document(s) | Gap status |
|---|---|---|
| Introduce robust and meaningful human review of AI outputs | L1-2.2-Risk-Classification-Framework-v1.md (Art. 22 safeguards; mandatory human review as a control), L1-2.5-Roles-and-Responsibilities-v1.md (human review responsibility assigned) | Addressed — mandatory human review before any candidate contact is a core assumption of Scout's design [ASSUMPTION A-007] and is embedded in the risk framework |
| Implement a feedback mechanism for recruiters to report AI errors | L3-4.1-Monitoring-Framework-v1.md (M-06 reviewer challenge rate; M-03 human override rate), L3-4.3-Incident-Response-Plan-v1.md (escalation pathway) | Addressed in framework design — monitoring framework defines the reviewer challenge rate metric and escalation path; operational deployment requires customer onboarding and recruiter training |
| Do not allow AI to make autonomous recruitment decisions beyond its designed scope | L1-2.2-Risk-Classification-Framework-v1.md (use-case boundary controls), L1-2.4-Governance-Policy-v1.md (approved and prohibited use cases) | Addressed — governance policy defines approved use cases; Scout is scoped as a shortlisting and screening support tool, not an autonomous decision-maker |
Overall status for this theme: Addressed (at policy and framework level). Operational implementation of the error-reporting mechanism requires Stage 4 monitoring documents.
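A minimal sketch of how the two feedback metrics named above (a human override rate such as M-03 and a reviewer challenge rate such as M-06) might be computed from review-event logs. The event schema and field names here are assumptions for illustration only; the authoritative metric definitions live in L3-4.1-Monitoring-Framework-v1.md.

```python
def review_metrics(events):
    """Compute human-review feedback metrics from a list of review events.

    Each event is a dict with:
      'ai_recommendation' - the tool's suggested outcome (e.g. 'shortlist')
      'human_decision'    - the reviewer's final decision
      'challenged'        - True if the reviewer formally reported an AI error
    (Field names are illustrative, not taken from the monitoring framework.)
    """
    total = len(events)
    if total == 0:
        return {"override_rate": 0.0, "challenge_rate": 0.0}
    overrides = sum(1 for e in events
                    if e["human_decision"] != e["ai_recommendation"])
    challenges = sum(1 for e in events if e.get("challenged", False))
    return {"override_rate": overrides / total,
            "challenge_rate": challenges / total}
```

An override rate near zero can itself be a warning sign (rubber-stamping rather than meaningful review), so the monitoring framework's thresholds should bound the metric from both directions.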
Part 2 — Supplementary ICO Findings
Beyond the seven key recommendations, the ICO's report identified additional specific findings. These are addressed below.
2.1 Discriminatory Search Functionality
ICO finding: Features in some tools could lead to discrimination by having search functionality that allowed recruiters to filter out candidates with certain protected characteristics.
| ICO finding | Framework document(s) | Gap status |
|---|---|---|
| Do not build or offer filtering functionality that enables exclusion of candidates by protected characteristic | L1-2.4-Governance-Policy-v1.md (prohibited use cases), L2-3.2-Equality-Act-2010-Compliance-Map-v1.md (indirect discrimination risk) | Addressed at policy level. Scout's assumed scope does not include explicit protected-characteristic filter controls [ASSUMPTION]. The governance policy must explicitly prohibit adding such features; the equality compliance map identifies the legal risk |
| Audit existing UI and API for unintended protected-characteristic filtering routes | L3-4.1-Monitoring-Framework-v1.md (M-23 — Protected-characteristic filtering route review) | Addressed at framework design level — M-23 requires a minimum annual review of Scout's UI filters, API parameters, admin tools, search operators, and export logic, plus a pre-release review triggered by any major UI change or new filter/API feature |
2.2 Inferred Protected Characteristic Data
ICO finding: Some providers estimated or inferred gender, ethnicity, and other characteristics from CV content or candidate names. This inferred data was not accurate enough to monitor bias effectively and was often processed without a lawful basis or the candidate's knowledge.
| ICO finding | Framework document(s) | Gap status |
|---|---|---|
| Do not infer protected characteristics from name, CV content, or demographic proxies | L2-3.1-UK-GDPR-Mapping-Matrix-v1.md (Art. 9 prohibition), L2-3.2-Equality-Act-2010-Compliance-Map-v1.md (proxy discrimination risk), L1-2.4-Governance-Policy-v1.md (Section 5 — explicit operational prohibition) | Addressed — governance policy explicitly prohibits inferring or estimating protected characteristics from candidate names, CV content, employment history, education history, location, language style, and other proxy sources, across all purposes including bias monitoring |
| Explicit prohibition on name-based ethnicity inference or age inference from career timeline | L3-4.2-Bias-Monitoring-Protocol-v1.md (Section 3 — prohibition on inferred demographic data), P2-Scout-System-Profile-v1.md (Section 8 — system prompt constraint) | Addressed — the bias monitoring protocol prohibits inferred demographic data; the Scout system prompt, documented in the System Profile Section 8, explicitly prohibits the model from considering, referencing, or inferring from information that could be a proxy for a protected characteristic |
2.3 DPIA Completeness
ICO finding: DPIAs in many cases did not include: a detailed data flow map; consideration of data protection principles; meaningful necessity and proportionality assessment; or consideration of alternatives using less personal information.
| ICO finding | Framework document(s) | Gap status |
|---|---|---|
| DPIA must map data flows in detail | L2-3.4-DPIA-Template-v1.md (Section 1.7), L1-2.3-Data-Flow-Map-v1.md | Addressed |
| DPIA must address necessity and proportionality | L2-3.4-DPIA-Template-v1.md (Section 2) | Addressed |
| DPIA must consider alternatives to current approach | L2-3.4-DPIA-Template-v1.md (Section 2.3) | Addressed |
| DPIA must not be completed retrospectively — must precede live processing | L2-3.4-DPIA-Template-v1.md (completion requirement) | Addressed — explicitly noted in the DPIA template |
Part 3 — Summary Table
| ICO Theme | Gap Status | Remaining action |
|---|---|---|
| 1. Fair processing / bias monitoring | Addressed at framework level | Operational calibration and reporting — implementation task |
| 2. Adequate bias monitoring data | Partially addressed | Legal advice on Art. 9 lawful basis for voluntary monitoring data required before live demographic monitoring commences |
| 3. Transparency to candidates | Addressed at framework level | Customer deployment of notice — implementation task |
| 4. Data minimisation | Addressed | No further action at framework level |
| 5. DPIAs | Addressed | Sable AI Ltd must populate and sign off the DPIA before live processing |
| 6. Role allocation and contracting | Addressed at framework level | Legal tailoring and execution of customer DPAs — implementation task |
| 7. Human review | Addressed at framework level | Operational verification of recruiter review quality — implementation task |
| 2.1 Discriminatory search functionality | Addressed at framework level | M-23 periodic design review added to L3-4.1-Monitoring-Framework-v1.md; covers UI, API parameters, admin tools, and export logic |
| 2.2 Inferred protected characteristics | Addressed at framework level | Operational prohibition in L1-2.4-Governance-Policy-v1.md (Section 5); system-prompt constraint in P2-Scout-System-Profile-v1.md (Section 8) |
| 2.3 DPIA completeness | Addressed | No further action at framework level |
Part 4 — Framework Coverage Assessment
Addressed at framework design level (9 of 10 themes):
- Fair processing / bias monitoring (Theme 1)
- Transparency (Theme 3)
- Data minimisation (Theme 4)
- DPIA obligation and completeness (Theme 5 and Supplementary 2.3)
- Role allocation and contracting (Theme 6)
- Human review (Theme 7)
- Discriminatory search functionality (Supplementary 2.1 — M-23 added to L3-4.1-Monitoring-Framework-v1.md)
- Inference of protected characteristics (Supplementary 2.2 — addressed by L1-2.4-Governance-Policy-v1.md Section 5 and P2-Scout-System-Profile-v1.md Section 8)

All of the above are addressed within the framework documents. Operational implementation, calibration, and customer deployment remain tasks for Sable AI Ltd.
Partially addressed — residual implementation gap (1 theme):
Theme 2 — Adequate bias monitoring data and lawful basis. The framework identifies the legal risk, establishes the approved approach (voluntary demographic monitoring with Art. 6 + Art. 9 conditions), and prohibits inferred data. However, no confirmed lawful basis route is yet implemented at an operational level. Legal advice required before live demographic monitoring commences. [LEGAL REVIEW REQUIRED]
Items Requiring Human Review Before Completion
- [LEGAL REVIEW REQUIRED] — Confirm the lawful basis under UK GDPR Art. 9 (and DPA 2018 Schedule 1) for processing special category data to monitor for bias in Scout's outputs. Inferred data has been confirmed as unlawful by the ICO. The permitted route (if any) for voluntary diversity monitoring data supplied by candidates requires legal advice.
- [ASSUMPTION] — Scout's data model is assumed not to include social media scraping or profile aggregation from external sources. This assumption must be verified against the actual product architecture before relying on the data minimisation "Addressed" conclusion.
- [ASSUMPTION A-007] — Mandatory human review before candidate contact is a core design assumption. This must be confirmed as a live operational control, not merely a policy commitment.
This document should be updated each time a new framework document is completed, as gap statuses will change. It is not a static record.