Risk Classification Framework — Scout
Project: Sable AI Ltd — AI Governance Framework
Stage: Stage 2 — Governance Foundation
Status: Draft
Version: v1
Date: 2026-03-01
Assumptions: Built on outline assumptions — not verified against real Sable AI Ltd data
1. Purpose of This Document
This document assesses Scout's risk profile across three regulatory dimensions and assigns an overall risk tier. It informs the DPIA requirement (L2-3.4-DPIA-Template-v1.md), the monitoring and bias controls framework (L3-4.1-Monitoring-Framework-v1.md), and all downstream operational controls.
The classification draws on the system profile in L1-2.1-AI-System-Inventory-v1.md and the regulatory extracts in _input/stage-1-regulatory-extracts.md.
Important note on automated decision-making: The original UK GDPR Art. 22 (automated individual decision-making) has been replaced in UK law by Arts. 22A–22D, inserted/substituted by section 80 of the Data (Use and Access) Act 2025 (DUAA 2025). These provisions are in force for relevant decisions from 5 February 2026. All references in this document use the operative UK GDPR Arts. 22A–22D regime.
2. Risk Dimension 1 — Automated Decision-Making (Art. 22A–22C, DUAA 2025)
2.1 The Statutory Test
Under UK GDPR Article 22A (as substituted by section 80 of the Data (Use and Access) Act 2025), a controller must ensure safeguards are in place where a significant decision is based solely on automated processing.
Two thresholds must both be met before Art. 22A–22C obligations apply:
| Threshold | Statutory test | Assessment for Scout |
|---|---|---|
| Significant decision | A decision "produces a legal effect for the data subject, or it has a similarly significant effect" (Art. 22A(1)(b)) | Candidate rejection, exclusion from shortlist, or failure to progress to interview has a "similarly significant effect" on access to employment. This threshold is likely met for shortlisting and rejection decisions. [LEGAL REVIEW REQUIRED] |
| Solely automated | "No meaningful human involvement in the taking of the decision" (Art. 22A(1)(a)); when assessing this, a person must consider "the extent to which the decision is reached by means of profiling" (Art. 22A(2)) | This threshold is conditional — see analysis below |
2.2 The "Solely Automated" Analysis
[ASSUMPTION — A-007] states that Scout outputs are subject to mandatory human review before any candidate contact. If this review is genuinely meaningful, the "solely automated" threshold is not met and Art. 22A–22C obligations are not triggered.
However, the ICO has established that the quality of human involvement is determinative:
"The human involvement has to be active and not just a token gesture... A process won't be considered solely automated if someone weighs up and interprets the result of an automated decision before applying it to the individual." — ICO, Automated Decision-Making and Profiling guidance
The ICO's November 2024 AI in Recruitment Outcomes Report also examined "how AI outputs or decisions have been meaningfully reviewed and quality checked" and found that in many recruitment AI deployments, human review did not meet this standard.
Operative question for Scout: Is the mandatory human review (A-007) designed such that reviewers exercise genuine independent judgment — including the ability and inclination to override Scout's output — or do Scout's rankings effectively determine outcomes in practice?
This question cannot be answered from assumed company characteristics alone.
2.3 Art. 22A Decision Tree
Is Scout's output used to make or significantly influence a recruitment decision?
│
└─ YES
│
└─ Does that decision have a similarly significant effect on the candidate?
(e.g., rejection from shortlist, non-progression to interview)
│
└─ YES — this threshold is likely met for shortlisting / rejection
│ [LEGAL REVIEW REQUIRED]
│
└─ Is there meaningful human involvement before the decision takes effect?
(Reviewer exercises genuine independent judgment; can and does override
Scout's output; review is not a token gesture)
│
├─ YES — Art. 22A NOT triggered for this processing
│ However:
│ → Art. 35 DPIA still required (high-risk processing — see §3)
│ → EA 2010 indirect discrimination obligations remain (see §4)
│ → ICO monitoring and bias audit expectations remain
│ → Governance controls in this framework remain applicable
│
└─ NO — Art. 22A IS triggered
→ Art. 22C safeguards are MANDATORY:
• Provide data subject with information about the processing
• Enable data subject to make representations
• Enable human intervention on request
• Enable data subject to contest the decision
→ Art. 22B applies if special category data involved:
additional restrictions on solely automated significant
decisions based on special category data
→ Art. 35 DPIA required
→ EA 2010 obligations remain
[LEGAL REVIEW REQUIRED] — This determination has significant legal consequences.
It must be made by a qualified UK lawyer with direct knowledge of Scout's
technical operation and the actual conduct of the human review workflow.
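The decision tree above can be expressed as a small triage helper, for instance inside an internal governance checklist tool. This is a minimal sketch only: the class and field names are illustrative, the inputs are assumed facts, and the output is a preliminary flag, not a legal determination (which, as noted, requires a qualified UK lawyer).

```python
from dataclasses import dataclass


@dataclass
class ScreeningDecision:
    """Facts needed for the Art. 22A triage (field names are illustrative)."""
    influences_recruitment_outcome: bool  # Is Scout's output used in the decision?
    similarly_significant_effect: bool    # e.g. rejection from shortlist
    meaningful_human_review: bool         # genuine independent judgment + real overrides


def art_22a_triage(d: ScreeningDecision) -> str:
    """Preliminary triage only -- the actual determination needs legal review."""
    if not d.influences_recruitment_outcome:
        return "Art. 22A not engaged: output does not influence the decision"
    if not d.similarly_significant_effect:
        return "Art. 22A not engaged: no similarly significant effect [LEGAL REVIEW REQUIRED]"
    if d.meaningful_human_review:
        return ("Art. 22A NOT triggered -- but Art. 35 DPIA, EA 2010 and "
                "ICO monitoring obligations remain")
    return ("Art. 22A TRIGGERED -- Art. 22C safeguards mandatory; "
            "Art. 22B if special category data; Art. 35 DPIA required")
```

Note that even the "not triggered" branch returns the residual obligations, mirroring the tree: a meaningful-review finding does not remove the DPIA, EA 2010, or monitoring requirements.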
2.4 Practical Design Implication
Whether or not Art. 22A–22C are triggered, Sable AI Ltd should design Scout's human review step to meet the "meaningful involvement" standard. This provides:
- A defensible position on Art. 22A
- Compliance with ICO expectations for AI in recruitment (Nov 2024 audit findings)
- A meaningful safeguard against EA 2010 discrimination claims
- A stronger position in any ICO investigation or EHRC enforcement action
The ICO recommends: "Complete both random and risk-based human reviews of AI outputs" including reviews triggered by "uncertain or ambiguous inputs", "unexpected or ungraded outputs", or where "performance metrics highlight potential bias."
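The ICO's recommendation of combined random and risk-based review can be sketched as a selection routine over Scout's outputs. The thresholds, field names, and review rate below are illustrative assumptions, not ICO-specified values; the actual trigger conditions would be set in the monitoring framework.

```python
import random

# Illustrative parameters -- not ICO-specified values.
RANDOM_REVIEW_RATE = 0.05   # e.g. review 5% of outputs at random
CONFIDENCE_FLOOR = 0.6      # below this, treat the output as uncertain/ambiguous


def select_for_human_review(outputs, rng=random.random):
    """Return the outputs selected for human review.

    Combines a random sample with risk-based triggers mirroring the ICO's
    examples: uncertain/ambiguous outputs, unexpected outputs, and cases
    flagged by bias performance metrics.
    """
    selected = []
    for o in outputs:
        risk_triggered = (
            o["confidence"] < CONFIDENCE_FLOOR        # uncertain or ambiguous
            or o.get("unexpected_output", False)      # unexpected or ungraded output
            or o.get("bias_metric_alert", False)      # performance metrics flag bias
        )
        if risk_triggered or rng() < RANDOM_REVIEW_RATE:
            selected.append(o)
    return selected
```

Injecting `rng` keeps the random-sampling leg testable and auditable, which matters if the review workflow itself is ever examined in an ICO audit.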
Risk dimension 1 rating: CONDITIONAL HIGH — High risk if human review is not genuinely meaningful; lower (but still significant) risk if it is.
3. Risk Dimension 2 — ICO High-Risk Processing and DPIA Trigger (Art. 35)
Under UK GDPR Art. 35, a DPIA is required before processing "likely to result in a high risk to the rights and freedoms of natural persons," particularly where new technologies are used.
3.1 ICO High-Risk Criteria — Assessment for Scout
| ICO high-risk criterion | Applies to Scout? | Basis |
|---|---|---|
| Systematic evaluation or scoring of natural persons, including profiling | Yes | Scout systematically scores and ranks candidates based on CV content assessed against job criteria. The ICO's AI and Data Protection guidance identifies scoring candidates in hiring contexts as profiling. |
| Decisions with legal or similarly significant effects | Yes | Shortlisting and rejection decisions significantly affect access to employment — this criterion is met regardless of Art. 22A threshold. |
| Use of new or novel technologies | Yes | LLM-based CV screening is a new technology. The ICO's November 2024 audit examined this exact technology category. |
| Large-scale processing of personal data | Unknown | Data volume not confirmed [ASSUMPTION]. Relevant to scale assessment — may elevate risk further as Sable AI Ltd grows. |
| Processing involving special category data | Conditional | Scout does not intentionally process special category data, but CV content may reveal or allow inference of race, disability, religion, and other protected characteristics. The ICO has confirmed inferred characteristics are still special category data. |
3.2 DPIA Conclusion
The ICO has specifically confirmed that AI systems used in recruitment trigger the DPIA requirement. At least three of the five standard high-risk criteria are clearly met for Scout. A fourth (special category data) is conditional on how Scout handles inferred attributes.
A DPIA must be completed before Scout processes candidate personal data. The DPIA template is at L2-3.4-DPIA-Template-v1.md (forthcoming).
The DPIA must be:
- Completed early in AI development or before any processing begins
- Updated as the system develops or when processing changes
- Documented and available for ICO inspection
Risk dimension 2 rating: HIGH — DPIA required; at minimum three high-risk criteria confirmed.
4. Risk Dimension 3 — Equality Act 2010 Discrimination Risk
4.1 The Discrimination Risk Mechanism
Equality Act 2010 s. 19 prohibits indirect discrimination: applying a provision, criterion or practice (PCP) that puts persons sharing a protected characteristic at a particular disadvantage, where that PCP cannot be shown to be a proportionate means of achieving a legitimate aim.
In the context of recruitment AI, the "PCP" is not a deliberate policy but an AI screening methodology that may produce systematically different outcomes for candidates sharing a protected characteristic — even where the methodology is facially neutral.
Protected characteristics at risk in CV screening AI (EA 2010 s. 4):
| Protected characteristic | Mechanism of risk | Source |
|---|---|---|
| Race / ethnic origin | Name-based or language-based inference; historical training data reflecting biased past hiring; CV formatting conventions associated with particular backgrounds | ICO Nov 2024 audit: systems found to estimate ethnicity from names without lawful basis |
| Disability | Employment gap inference; disclosure of health conditions in CV; timed or structured assessments disadvantaging neurodivergent candidates | EA 2010 Sch. 1 para. 5A(2): ability to participate in working life on equal basis; reasonable adjustments under s. 39(5) |
| Age | Graduation year, career length, and professional experience patterns used as screening proxies | DSIT Responsible AI in Recruitment (2024): age evaluated in worked bias audit example |
| Sex | Career break patterns; gendered language in CVs or job descriptions | ICO Nov 2024 audit: gender estimated or inferred from application content |
| Pregnancy / maternity | Maternity leave periods appearing as employment gaps in screening criteria | s. 39(1): employers must not discriminate in arrangements for deciding to whom to offer employment |
4.2 EA 2010 Obligations for Scout
| Obligation | EA 2010 provision | Application to Scout |
|---|---|---|
| Non-discrimination in recruitment arrangements | s. 39(1)(a) | Scout's screening criteria and outputs must not place candidates with protected characteristics at a particular disadvantage |
| Proportionate means / legitimate aim test | s. 19(2)(d) | Any screening criterion that produces disparate impact must be justified as a proportionate means of achieving a legitimate aim |
| Reasonable adjustments for disabled candidates | s. 39(5) | Where Scout's process disadvantages a disabled candidate, the employer must consider whether a reasonable adjustment can be made |
| Victimisation | ss. 39(3)–(4) | Scout must not be used in ways that identify or penalise candidates who have previously raised discrimination complaints or exercised protected acts |
4.3 Bias Monitoring Obligation
The ICO's November 2024 audit requires that AI providers and recruiters "test regularly for fairness, accuracy, or bias issues in AI tools or outputs" and "consider a wide range of characteristics when monitoring fairness and bias, including: gender and gender identity; racial or ethnic origin; disability; and other characteristics listed in UK GDPR recital 71 or protected characteristics in the UK Equality Act 2010."
Bias monitoring protocol is at L3-4.2-Bias-Monitoring-Protocol-v1.md (forthcoming).
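Pending that protocol, the core disparity check can be sketched as a selection-rate comparison across monitored groups. The "four-fifths" threshold used here is a widely cited heuristic (originating in US EEOC guidance), not a UK statutory test: under EA 2010 s. 19 any observed disparity still has to be assessed against the proportionate-means/legitimate-aim standard. Group labels and data shapes are illustrative.

```python
def selection_rates(outcomes):
    """outcomes: mapping of group label -> (shortlisted, total_applicants)."""
    return {g: s / t for g, (s, t) in outcomes.items() if t > 0}


def disparity_alerts(outcomes, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (the 'four-fifths' heuristic -- an illustrative
    screening metric, not a legal test of indirect discrimination)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r < threshold * best}


# Hypothetical monitoring data: group label -> (shortlisted, applicants)
sample = {"group_a": (40, 100), "group_b": (22, 100)}
# disparity_alerts(sample) flags group_b (impact ratio 0.22 / 0.40 = 0.55)
```

Any alert from a check like this would feed the risk-based human review triggers in §2.4 and, where a disparity persists, the EA 2010 justification analysis in §4.2.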
Risk dimension 3 rating: HIGH — Multiple protected characteristics at risk across both customer types; ICO audit enforcement precedent directly applicable to Scout's use case.
5. Overall Risk Classification
5.1 Risk Matrix
| Risk dimension | Tier | Key basis |
|---|---|---|
| Art. 22A ADM threshold (DUAA 2025) | Conditional HIGH | Significant decision threshold likely met; solely automated test depends on human review quality [LEGAL REVIEW REQUIRED] |
| High-risk processing — Art. 35 DPIA trigger | HIGH | Profiling + significant employment decisions + new technology → DPIA required |
| Special category data — Art. 9 / DPA 2018 Sch. 1 | HIGH | Inferred race, disability, and other characteristics from CV content; no confirmed lawful basis or Sch. 1 condition |
| Equality Act 2010 discrimination risk | HIGH | Automated screening risks indirect discrimination across multiple protected characteristics; ICO enforcement precedent applies |
| Overall risk tier | HIGH | All confirmed risk dimensions are HIGH; no dimension is Low or Medium |
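The aggregation rule implicit in the matrix (the overall tier is the highest tier among the dimensions) can be made explicit. The tier ordering below is an assumption drawn from this document's own vocabulary; a real implementation would use whatever tier scale the governance framework formally adopts.

```python
# Assumed ordering, lowest to highest, using this document's tier vocabulary.
TIER_ORDER = ["LOW", "MEDIUM", "CONDITIONAL HIGH", "HIGH"]


def overall_tier(dimension_tiers):
    """Overall risk tier = the highest tier among the assessed dimensions."""
    return max(dimension_tiers, key=TIER_ORDER.index)
```

Applied to the matrix above, `overall_tier(["CONDITIONAL HIGH", "HIGH", "HIGH", "HIGH"])` yields `"HIGH"`, matching the stated classification.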
5.2 Rationale
Scout's overall HIGH risk tier is driven by three compounding factors:
1. The use case is directly within ICO enforcement scope. The November 2024 AI in Recruitment Outcomes Report reviewed tools performing exactly Scout's function and identified material non-compliance across the sector.

2. Candidate data includes implicit special category data. CV content routinely allows inference of race, disability, religion, and age — regardless of whether Scout is designed to use those attributes. This creates Art. 9 / Sch. 1 exposure that cannot be designed away without fundamental changes to the input data.

3. Employment decisions have significant real-world consequences. Incorrect or biased AI screening directly affects candidates' economic opportunities and livelihood — the stakes of failure are high for both candidates and Sable AI Ltd (ICO enforcement, EHRC action, litigation risk).
6. Consequences of HIGH Risk Classification
| Consequence | Document | Status |
|---|---|---|
| DPIA must be completed before processing | L2-3.4-DPIA-Template-v1.md | Forthcoming |
| UK GDPR Art. 22C safeguards must be implemented (if Art. 22A triggered) | L2-3.1-UK-GDPR-Mapping-Matrix-v1.md | Forthcoming |
| Bias monitoring protocol required | L3-4.2-Bias-Monitoring-Protocol-v1.md | Forthcoming |
| Incident response plan required | L3-4.3-Incident-Response-Plan-v1.md | Forthcoming |
| Candidate transparency notice required | L4-5.2-Candidate-Transparency-Notice-v1.md | Forthcoming |
| Legal review required before operational use | — | [LEGAL REVIEW REQUIRED] throughout |
7. Assumptions Flagged in This Document
| Assumption ID | Statement | Status |
|---|---|---|
| A-007 | Scout outputs are subject to mandatory human review before any candidate contact | 🔴 Unverified |
| A-012 | The human review step is designed such that reviewers exercise genuine independent judgment and can override Scout's outputs | 🔴 Unverified |
This document is a draft built on assumed company characteristics. The Art. 22A threshold analysis is a preliminary assessment only — it must be reviewed by a qualified UK lawyer with direct knowledge of Scout's technical operation and human review workflow before any compliance conclusions are drawn from it.