Equality Act 2010 Compliance Map — Scout CV Screening System
Project: Sable AI Ltd — AI Governance Framework
Stage: Stage 3 — Regulatory Alignment
Status: Draft
Version: v1
Date: 2026-03-01
Assumptions: Built on outline assumptions — not verified against real Sable AI Ltd data
1. Purpose and Scope
This document maps the compliance obligations and risks arising under the Equality Act 2010 (EA 2010) for the Scout CV screening system. It identifies, for each relevant protected characteristic, the mechanism by which algorithmic bias may create indirect discrimination risk, the applicable ICO November 2024 audit finding, the bias audit and monitoring obligations, EHRC enforcement exposure, and the required controls cross-referenced to this framework.
This is the primary equality law reference document in the Sable AI Ltd AI Governance Framework. It should be read alongside:
- L2-3.1-UK-GDPR-Mapping-Matrix-v1.md — for the UK GDPR Article 9 (special category data) implications of the same characteristics
- L3-4.2-Bias-Monitoring-Protocol-v1.md — for the operational bias detection and audit controls
- L2-3.3-ICO-Audit-Gap-Analysis-v1.md — for the ICO enforcement posture
Note on source material: The Equality Act 2010 statutory text is referenced through the EHRC Employment Statutory Code of Practice (the statutory code issued under s.14 of the Equality Act 2006) and the DSIT Responsible AI in Recruitment Guide (2024). Where specific statutory sections are referenced (e.g., s.19, s.20), these are identified by section number but the verbatim text was not included in the Stage 3 extraction file. These references should be verified against the current text of the EA 2010 at legislation.gov.uk before legal reliance.
2. Indirect Discrimination: The Core Legal Framework
Equality Act 2010, section 19 (indirect discrimination) provides that a person discriminates against another if they apply a provision, criterion or practice (PCP) that:
- applies equally to persons not sharing a protected characteristic;
- puts, or would put, persons sharing the protected characteristic at a particular disadvantage compared to persons not sharing it; and
- cannot be shown to be a proportionate means of achieving a legitimate aim.
Relevance to recruitment AI: Scout's screening criteria, ranking algorithms and shortlisting methodology are PCPs within the meaning of s.19. If any element of Scout's design or training puts candidates sharing a protected characteristic at a particular disadvantage, this may constitute indirect discrimination by the recruiter customer, and potentially by Sable AI Ltd as the service provider enabling the discriminatory output.
EHRC Employment Statutory Code of Practice, para 4.10 identifies that "sometimes, a provision, criterion or practice is intrinsically liable to disadvantage a group with a particular protected characteristic." Algorithmic bias in CV screening tools is precisely such an intrinsic structural risk.
EHRC Employment Statutory Code of Practice, para 4.11 confirms that statistical evidence is sufficient to establish disadvantage — a claimant "would not need to explain the reason for the lower scores or how the lower scores are connected to his [characteristic]... it is sufficient for him to rely on the statistical information." An adverse impact ratio demonstrating lower shortlisting rates for candidates sharing a protected characteristic is sufficient to establish a prima facie case of indirect discrimination.
Objective justification (para 3.39): Even where disadvantage is established, a PCP is not unlawful if it is "a proportionate means of achieving a legitimate aim." The EHRC Code frames this as a two-stage test: 1. Is the aim legal, non-discriminatory, and a real, objective consideration? 2. Is the means proportionate — appropriate and necessary in all the circumstances?
Sable AI Ltd and its recruiter customers must be able to demonstrate that Scout's criteria are objectively justified against these two tests if challenged.
3. Protected Characteristics: Risk Analysis and Required Controls
3.1 Race and Ethnicity
| Element | Detail |
|---|---|
| Protected characteristic | Race — including colour, nationality, and ethnic or national origins (EA 2010 s.9) |
| Indirect discrimination mechanism | (1) Training data bias: if Scout's underlying model was trained on historical hiring data reflecting past racially discriminatory hiring decisions, it may replicate and amplify those patterns in candidate scoring. (2) Name-based inference: AI models may associate names with perceived ethnicity and apply differential treatment. (3) Proxy variable bias: criteria such as educational institution, address, or gap years may function as proxies for ethnic background. (4) Language and style bias: candidates whose first language is not English may score lower on CV writing style, even where their substantive experience is equivalent. |
| ICO November 2024 finding | "Others estimated or inferred people's gender, ethnicity, and other characteristics from their job application or even just their name, rather than asking candidates directly. This inferred information is not accurate enough to monitor bias effectively. It was often processed without a lawful basis and without the candidate's knowledge." Additionally: "features in some tools could lead to discrimination by having a search functionality that allowed recruiters to filter out candidates with certain protected characteristics." |
| EA 2010 obligation | s.19: Recruiter customers must not apply a PCP that puts candidates of a particular racial/ethnic group at a particular disadvantage without objective justification. Sable AI Ltd, as the tool provider, must not design Scout in a way that creates such a PCP. s.13 (direct discrimination): Where Scout infers or acts on an assumed ethnic characteristic, this may constitute direct discrimination. |
| EHRC enforcement risk | HIGH. Racial discrimination in employment is a priority enforcement area for the EHRC. Algorithmic tools that demonstrably produce racially differentiated shortlisting outcomes are susceptible to judicial review, Employment Tribunal claims, and EHRC formal investigation. Adverse impact evidence generated by Scout's own output data would be available to claimants. |
| Required controls | (1) Prompt controls: Instructions to the Claude API must explicitly prohibit evaluation based on, or inference of, racial/ethnic characteristics, names as proxies, or educational institutions as proxies. (2) Adverse impact monitoring: Measure shortlisting rates by name-based or self-disclosed ethnic group proxies where lawfully collected — see L3-4.2-Bias-Monitoring-Protocol-v1.md for methodology and lawfulness constraints. (3) Keyword audit: Review Scout's shortlisting criteria and any keyword-based filters for terms that function as racial proxies. (4) Customer contract: Prohibit customers from using Scout's filter functionality to exclude candidates on the basis of racial or ethnic characteristics. |
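Control (3) above, the keyword audit, can be sketched as a screen of configured shortlisting criteria against a reviewed proxy-term list. The terms, criteria, and function names below are illustrative assumptions, not Scout's actual configuration:

```python
# Illustrative proxy-term audit for shortlisting criteria. The proxy list
# and the criteria below are hypothetical examples; a production list would
# be compiled and maintained under legal review.
PROXY_TERMS = {
    "postcode", "address", "school", "university",
    "native speaker", "mother tongue", "visa", "nationality",
}

def audit_criteria(criteria: list[str]) -> list[tuple[str, str]]:
    """Return (criterion, matched proxy term) pairs flagged for human review."""
    flags = []
    for criterion in criteria:
        lowered = criterion.lower()
        for term in sorted(PROXY_TERMS):  # sorted for deterministic output
            if term in lowered:
                flags.append((criterion, term))
    return flags

flags = audit_criteria([
    "5+ years Python experience",
    "Native speaker of English",               # language-style proxy
    "Degree from a Russell Group university",  # institution proxy
])
```

A screen of this kind only surfaces candidates' criteria for human review; it does not itself establish that a flagged criterion is unlawful or that an unflagged one is safe.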
3.2 Disability
| Element | Detail |
|---|---|
| Protected characteristic | Disability — a physical or mental impairment that has a substantial and long-term adverse effect on the ability to carry out normal day-to-day activities (EA 2010 s.6) |
| Indirect discrimination mechanism | (1) Employment gap penalty: candidates with disabilities often have employment gaps due to illness, recovery or adjustments; AI tools that score employment gaps negatively disadvantage disabled candidates. (2) CV format bias: candidates who use non-standard CV formats due to cognitive, visual or motor impairments may score lower on parsing accuracy and content extraction. (3) Skills inference bias: where Scout infers likely skills from job history, candidates who worked in adjusted roles or part-time due to disability may score below their actual capability. (4) Accommodation-blind assessment: Scout does not (and cannot, absent explicit information) account for the context in which skills were acquired. |
| DSIT Reasonable Adjustments obligation | DSIT Responsible AI in Recruitment Guide: "In any recruitment process, applicants with disabilities, conditions, or impairments may require reasonable adjustments to the recruitment and hiring process to ensure that they are not disadvantaged. This is a legal obligation pursuant to section 20 of the Equality Act 2010. If/when AI systems are introduced into the recruitment process, these may bring about novel risks of disadvantage." DSIT further states: "Reasonable adjustments should be considered and planned, before the deployment of the technology, in case the system puts an applicant with a protected characteristic at a disadvantage because of that characteristic. If a reasonable adjustment cannot be made, this may require the system's removal from the interview process." |
| EA 2010 obligation | s.19: Indirect discrimination applies to disability (subject to objective justification). s.20–21: Positive duty to make reasonable adjustments — if Scout's use creates a substantial disadvantage for a disabled applicant compared to a non-disabled applicant, the duty to make a reasonable adjustment is triggered. In the recruitment AI context, the adjustment may include providing an alternative assessment route or having a human review the candidate's CV without Scout's score. |
| ICO November 2024 finding | ICO found "instances where there was a lack of accuracy testing" and found that AI tools were used to "make automated recruitment decisions, where the AI is not designed for this purpose." Disabled candidates are particularly vulnerable to inaccurate AI assessments where their CV departs from expected structure or timeline norms. |
| EHRC enforcement risk | HIGH. The duty to make reasonable adjustments is a positive obligation — failure to consider whether the AI creates a substantial disadvantage is itself a breach, regardless of whether harm has occurred. |
| Required controls | (1) Reasonable adjustment pathway: Sable AI Ltd must ensure recruiter customers have an operational alternative screening pathway for candidates who disclose a disability or request an adjustment. This must be included in customer-facing guidance. (2) Employment gap de-weighting: Scout's scoring criteria must not penalise unexplained or non-linear employment histories beyond what is objectively justified for the role. (3) Pre-deployment pilot: DSIT recommends piloting with "a diverse range of users, including employers as well as affected communities including jobseekers from different backgrounds and experiences." This should include disabled candidates. (4) Candidate self-identification: Provide candidates with a clear channel to notify the recruiter of a disability and request a reasonable adjustment before or during AI screening. See L4-5.2-Candidate-Transparency-Notice-v1.md. |
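Control (1) above, the reasonable adjustment pathway, reduces operationally to a routing rule: a candidate who requests an adjustment bypasses automated scoring entirely. A minimal sketch, with hypothetical names:

```python
# Sketch of an alternative screening pathway (control 1): candidates who
# request a reasonable adjustment bypass automated scoring and go directly
# to human review. All names here are illustrative, not Scout's API.
from dataclasses import dataclass

@dataclass
class Candidate:
    candidate_id: str
    adjustment_requested: bool = False

def screening_route(candidate: Candidate) -> str:
    """Return the screening route for a candidate."""
    if candidate.adjustment_requested:
        return "HUMAN_REVIEW"   # CV assessed without a Scout score
    return "AI_SCREENING"       # normal pathway, subject to human oversight
```

The design point is that the alternative route must exist before deployment: the duty is triggered by the substantial disadvantage, not by an individual complaint after the fact.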
3.3 Age
| Element | Detail |
|---|---|
| Protected characteristic | Age — all ages, including older and younger workers (EA 2010 s.5); note: age is the only protected characteristic for which even direct discrimination can be objectively justified (EA 2010 s.13(2)) |
| Indirect discrimination mechanism | (1) Experience-length heuristics: criteria requiring a minimum number of years' experience directly disadvantage younger workers; AI tools calibrated on such criteria replicate this disadvantage. (2) Technology-era bias: older workers may reference legacy technologies or use CV conventions from earlier decades; Scout may score these lower by comparison to more recently formatted CVs. (3) Qualification equivalence: older candidates hold qualifications that predate current classification systems (e.g., O Levels vs. GCSEs). An AI tool trained primarily on CVs with modern qualifications may not recognise equivalence. EHRC Code para 4.16: "If an employer were to advertise a position requiring at least five GCSEs at grades A to C without permitting any equivalent qualifications, this criterion would put at a particular disadvantage everyone born before 1971." The same logic applies to AI-driven qualification matching. (4) Graduation year inference: a model that infers age from graduation year and penalises older candidates creates direct or indirect age discrimination. |
| ICO November 2024 finding | The ICO found some tools contained "search functionality that allowed recruiters to filter out candidates with certain protected characteristics." Age is a protected characteristic. Filtering by graduation year range or years of experience range are common proxies for filtering by age. |
| EA 2010 obligation | s.19: A PCP that puts candidates of a particular age group at a disadvantage is indirect age discrimination unless objectively justified. Age is also the only characteristic for which direct discrimination can be objectively justified (s.13(2)), but in every case the means must still be proportionate. Where Scout's criteria apply rigid experience thresholds without considering equivalent experience in older workers, objective justification may not be established. |
| EHRC enforcement risk | MEDIUM. Age discrimination claims in recruitment are less commonly litigated than race or disability claims, but statistical adverse impact evidence (lower shortlisting rates for candidates over 50, for example) would be directly actionable. |
| Required controls | (1) Qualification equivalence mapping: Scout's criteria must recognise equivalent qualifications across different educational eras; where the AI cannot reliably assess this, human review must cover it. (2) Experience flexibility: Avoid rigid minimum-years-of-experience criteria in Scout's evaluation logic; instead, assess substantive skills demonstrated, which can be acquired over varying timeframes. (3) Age proxy audit: Review for and prohibit use of graduation year ranges, years-since-qualification, or similar proxies that function as age filters. (4) Objective justification review: Any age-correlated criterion in Scout must be reviewed for objective justification before deployment. [LEGAL REVIEW REQUIRED] |
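Control (3) above, the age proxy audit, can be partially automated as a pattern screen over filter configurations. The patterns and example filters below are illustrative, not exhaustive, and do not replace human review:

```python
import re

# Illustrative age-proxy patterns; a real audit list would be broader and
# maintained under review. The filters below are hypothetical examples.
AGE_PROXY_PATTERNS = [
    re.compile(r"graduat\w*\s+(year|date|between)", re.IGNORECASE),
    re.compile(r"\b(19|20)\d{2}\s*-\s*(19|20)\d{2}\b"),  # calendar-year ranges
    re.compile(r"\b\d+\s*\+?\s*years?\s+(of\s+)?experience", re.IGNORECASE),
]

def flag_age_proxies(filters: list[str]) -> list[str]:
    """Return filters that look like age proxies and need justification review."""
    return [f for f in filters
            if any(p.search(f) for p in AGE_PROXY_PATTERNS)]

flagged = flag_age_proxies([
    "Graduation year between 2015-2024",
    "Minimum 10 years experience",
    "Proficient in SQL",
])
```

Flagged filters are not automatically unlawful: an experience threshold may survive the objective justification test, but only if the review required by control (4) has been documented.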
3.4 Sex (Gender)
| Element | Detail |
|---|---|
| Protected characteristic | Sex — male or female (EA 2010 s.11) |
| Indirect discrimination mechanism | (1) Career gap penalty: women are statistically more likely to have career gaps due to maternity leave, caring responsibilities or part-time periods. AI tools that score employment continuity as a positive signal disadvantage female candidates. (2) Language and vocabulary bias: AI models trained on male-dominated CV corpora may score CV language and self-presentation styles lower when they reflect female-typical communication patterns. (3) Sector and role history bias: if Scout is calibrated on historical hiring data from male-dominated industries or roles, it may underrate experience from female-dominated sectors even where competencies are transferable. EHRC Code para 4.11 example: A consultancy reviews psychometric tests and discovers "men tend to score lower than women." It is sufficient to rely on statistical evidence of the differential. |
| ICO November 2024 finding | ICO found that "features in some tools could lead to discrimination by having a search functionality that allowed recruiters to filter out candidates with certain protected characteristics." Sex is a protected characteristic. ICO also found tools inferred gender "from their job application or even just their name." |
| EA 2010 obligation | s.19: A PCP that puts women at a particular disadvantage (e.g., scoring career continuity; penalising part-time periods; inferring and acting on gender from names) is indirect sex discrimination unless objectively justified. s.13: Where Scout infers gender and treats candidates differently on that basis, this may constitute direct sex discrimination. |
| EHRC enforcement risk | HIGH. Sex discrimination in recruitment is a core EHRC enforcement priority. Employment Tribunal statistics consistently show sex discrimination as one of the highest-volume claim categories. A demonstrable pattern of lower shortlisting rates for female candidates through Scout's outputs would generate significant litigation and regulatory risk. |
| Required controls | (1) Name and gender-neutralisation: Scout must not infer or act on gender derived from names, pronoun use, or other signals. Prompt instructions should explicitly exclude gender inference. (2) Career gap neutral treatment: Part-time periods and career gaps must not be scored negatively per se; context and the transferable skills developed during these periods must be considered. (3) Bias testing by gender: Where lawfully possible, conduct adverse impact analysis of Scout's shortlisting rates across gender groups. See L3-4.2-Bias-Monitoring-Protocol-v1.md. (4) Language model audit: Where Scout uses a general-purpose language model, assess whether the model has known sex-based output biases and implement mitigations. |
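Control (1) above, name and gender neutralisation, can be sketched as a redaction pass applied before CV text reaches the scoring model. This is a minimal illustration (exact-match name replacement plus a small token list); production redaction would need named-entity recognition to catch name variants:

```python
import re

# Minimal redaction pass (control 1). Illustrative only: exact-match name
# replacement and a small gendered-token list. Whole-word matching only,
# so words like "the" or "led" are untouched.
GENDERED_TOKENS = re.compile(
    r"\b(he|she|him|her|hers|his|mr|mrs|ms|miss)\b\.?", re.IGNORECASE
)

def neutralise(cv_text: str, candidate_name: str) -> str:
    """Redact the candidate's name and gendered tokens before scoring."""
    redacted = cv_text.replace(candidate_name, "[CANDIDATE]")
    return GENDERED_TOKENS.sub("[REDACTED]", redacted)

sample = "Ms Jane Smith led the team; she delivered the migration."
neutral = neutralise(sample, "Jane Smith")
```

Redaction addresses the name-inference channel only; it does not remove the career gap or sector history signals covered by controls (2) and (3).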
3.5 Pregnancy and Maternity
| Element | Detail |
|---|---|
| Protected characteristic | Pregnancy and maternity — protected throughout the protected period (the pregnancy and any statutory maternity leave); in work cases, including recruitment, the relevant provision is EA 2010 s.18 |
| Indirect discrimination mechanism | (1) Maternity leave gaps: candidates who disclose or whose CV reflects a maternity leave period may be scored lower if Scout treats extended employment gaps negatively. (2) Reduced hours history: returning mothers who worked reduced hours before returning full-time may have a lower-volume work history in the maternity period, which an AI tool could score against. (3) Inference risk: where candidates disclose pregnancy or maternity in a covering letter or CV note (e.g., "currently on maternity leave — available from [date]"), Scout must not act on this information. |
| ICO November 2024 finding | The ICO found that AI inference of protected characteristics from application data is unlawful where conducted without lawful basis. Pregnancy/maternity status inferred from CV content (dates, gaps, explicit statements) and acted upon would fall into this category. |
| EA 2010 obligation | Pregnancy and maternity discrimination does not require a comparator — it is unlawful per se to treat a candidate unfavourably because of pregnancy or maternity. An AI tool that scores a maternity leave gap negatively, or that treats a returning candidate's reduced-hours history as a performance signal, may constitute unfavourable treatment on grounds of pregnancy or maternity. |
| EHRC enforcement risk | HIGH. Pregnancy and maternity discrimination is among the most clearly protected categories in UK employment law. Algorithmic tools that create structural disadvantage for candidates returning from maternity leave expose recruiter customers (and potentially Sable AI Ltd) to significant liability. |
| Required controls | (1) Maternity leave gap treatment: Scout's scoring must treat clearly labelled maternity leave periods as non-negative — equivalent to continuous employment for the relevant period. (2) Prompt instructions: Explicit instructions to the Claude API must prohibit any scoring signal derived from pregnancy or maternity information disclosed in the CV. (3) Output audit: Periodic review of Scout outputs for cases where a candidate has disclosed maternity leave to confirm the output is not adversely affected. (4) Transparency notice: Candidates must be told that AI screening is used and how to request human review. See L4-5.2-Candidate-Transparency-Notice-v1.md. |
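Control (1) above, non-negative treatment of labelled maternity leave, can be illustrated as a gap calculation that counts maternity leave as covered time. The dates, labels, and record format are hypothetical:

```python
from datetime import date

# Hypothetical employment history: (start, end, label) records. In a real
# system these would come from structured candidate data, not free text.
history = [
    (date(2018, 1, 1), date(2021, 6, 30), "employment"),
    (date(2021, 7, 1), date(2022, 6, 30), "maternity_leave"),
    (date(2022, 7, 1), date(2025, 12, 31), "employment"),
]

def unexplained_gap_months(periods) -> int:
    """Months of unaccounted-for time between periods. Labelled maternity
    leave is passed in as a covered period, so it attracts no gap penalty."""
    covered = sorted(periods)
    gap = 0
    for (_, end_a, _), (start_b, _, _) in zip(covered, covered[1:]):
        delta = (start_b.year - end_a.year) * 12 + (start_b.month - end_a.month) - 1
        gap += max(0, delta)
    return gap
```

With the maternity leave record included, this history scores as continuous; dropping that record would surface a 12-month gap, which is precisely the penalty the control is designed to remove.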
4. Cross-Cutting Obligations
4.1 Positive Action (EA 2010, s.158)
Section 158 EA 2010 permits, but does not require, employers to take proportionate action to enable people sharing a protected characteristic to overcome a disadvantage, or to meet different needs, where the employer reasonably thinks that persons sharing the characteristic are at a disadvantage compared to those who do not. This is positive action, not positive discrimination — Scout must not be used to give automatic preference to candidates based on protected characteristics, even in a diversity programme context.
Sable AI Ltd should advise recruiter customers that Scout outputs cannot be used as a mechanism for positive discrimination. Where positive action is lawfully used in recruitment, it must be implemented through tightly controlled human decision-making under the Equality Act framework, not through algorithmic adjustment of Scout shortlisting scores.
4.2 Positive Action in Recruitment and Promotion (EA 2010, s.159)
Section 159 EA 2010 is the specific positive action provision for recruitment and promotion decisions. It permits, but does not require, a person to take a protected characteristic into account at the point of making a recruitment or promotion decision, but only where the statutory conditions are satisfied: (1) the person reasonably thinks that persons sharing the characteristic are disadvantaged or under-represented; (2) the candidates are "as qualified as" each other; and (3) the action is proportionate. [LEGAL REVIEW REQUIRED]
Distinction from s.158: Section 158 is the general positive action provision (enabling proportionate action to overcome disadvantage or meet different needs). Section 159 is the narrower recruitment/promotion rule, limited to the equal-merit tie-break scenario. Both are permissive, not mandatory.
Implication for Scout:
Scout must not be configured to apply an automatic score uplift, ranking preference, or decision rule based on protected characteristics in order to achieve a diversity outcome. That would exceed the narrow s.159 framework and create substantial direct discrimination risk under EA 2010 s.13.
Operational rule for any reliance on s.159:
Any reliance on s.159 must be: (1) human-led; (2) case-specific; (3) applied only at the final recruitment or promotion decision stage; (4) supported by evidence of disadvantage or under-representation; and (5) used only where candidates are genuinely of equal merit.
Framework consequence:
Scout may support lawful recruitment processes by reducing arbitrary bias and generating structured candidate comparisons, but it must not itself implement positive action logic through protected-characteristic-based scoring or ranking.
4.3 Objective Justification Test
Per EHRC Employment Statutory Code of Practice, para 3.39, the objective justification test for any potentially discriminatory PCP has two stages:
- Legitimate aim: Is the aim of the PCP legal, non-discriminatory, and does it represent a real, objective consideration?
- Proportionality: Is the means of achieving the aim proportionate — appropriate and necessary in all the circumstances?
Where any of Scout's screening criteria produce an adverse impact on a protected group, Sable AI Ltd and its customers must be able to demonstrate objective justification against both stages. Sable AI Ltd should document the intended aims of each screening criterion and the evidence base for proportionality as part of its AI governance documentation.
[LEGAL REVIEW REQUIRED] — The objective justification analysis for any criterion producing an adverse impact requires legal advice on the facts.
4.4 Reasonable Adjustments Duty (EA 2010, s.20)
The duty to make reasonable adjustments applies in recruitment. Where Scout's use puts a disabled candidate at a substantial disadvantage, the recruiter customer must make a reasonable adjustment. DSIT guidance states that if a reasonable adjustment cannot be made, "this may require the system's removal from the interview process" for that candidate.
Sable AI Ltd must ensure that its customer-facing documentation and product design support an alternative manual assessment route for candidates who require it.
5. Bias Audit Obligations
5.1 Why Bias Audits Are Required
The ICO November 2024 AI in Recruitment Outcomes Report states: "AI providers and recruiters must ensure that they process personal information fairly by AI. This includes monitoring for potential or actual fairness, accuracy, or bias issues in the AI and its outputs, and taking appropriate action to address these."
The DSIT Responsible AI in Recruitment Guide states: "Bias audits should be regularly repeated once the model is in live operation to ensure the system continues to deliver fair outcomes."
Bias auditing is therefore both a UK GDPR obligation (processing must be fair — Art. 5(1)(a)) and an EA 2010 obligation (evidence of compliance with the non-discrimination duty).
5.2 Bias Audit Data: The Lawfulness Constraint
The ICO November 2024 report identifies a critical tension: "AI providers and recruiters must also ensure any special category data processed to monitor for bias and discriminatory outputs is adequate and accurate enough to effectively fulfil this purpose. They must also ensure this processing complies with data protection law. Inferred or estimated data will not be adequate and accurate enough, and will therefore not comply with data protection law."
This means:
- Bias monitoring using inferred protected characteristics is unlawful.
- Bias monitoring requires lawfully collected, candidate-consented, and sufficiently accurate demographic data.
- Where such data is unavailable, bias monitoring must fall back on proxy methods. Adversarial testing with synthetic CVs avoids processing real candidates' special category data entirely; name-based surname analysis of real candidates is imperfect and must itself satisfy the data protection requirements the ICO identified.
This tension is addressed in detail in L3-4.2-Bias-Monitoring-Protocol-v1.md.
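Adversarial testing with synthetic CVs can be sketched as paired testing: identical CV templates that differ only in a single demographic signal, scored and compared. `score_cv` below is a hypothetical placeholder, not Scout's actual interface:

```python
# Paired adversarial testing with synthetic CVs: identical templates that
# differ only in one demographic signal (here, the name). No real candidate
# data is processed. `score_cv` is a hypothetical stand-in for Scout's
# scoring call, not its real interface.
def score_cv(cv_text: str) -> float:
    return float(len(cv_text.split()))  # placeholder deterministic scorer

TEMPLATE = "{name}. 6 years as a data engineer; led two platform migrations."
NAME_PAIRS = [("James Wilson", "Adebayo Okafor"), ("Thomas Clark", "Wei Zhang")]

def paired_score_gaps(pairs, tolerance: float = 0.0):
    """Return pairs whose scores differ by more than `tolerance`, i.e. cases
    where the name alone appears to shift the score."""
    flagged = []
    for name_a, name_b in pairs:
        gap = abs(score_cv(TEMPLATE.format(name=name_a))
                  - score_cv(TEMPLATE.format(name=name_b)))
        if gap > tolerance:
            flagged.append((name_a, name_b, gap))
    return flagged
```

Because the CVs are synthetic, this test can probe the name channel without collecting or inferring any real candidate's special category data; its limitation is that it only tests the signals the auditor thought to vary.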
5.3 Adverse Impact Ratio
Where bias monitoring data is available, the standard test for adverse impact is the 4/5 (80%) rule: a selection rate for a protected group that is less than 80% of the rate for the group with the highest selection rate triggers a review requirement. This is a US EEOC standard that the EHRC Code does not explicitly adopt, but it provides a widely recognised statistical benchmark and a reasonable starting point for bias monitoring thresholds pending further UK regulatory guidance.
[LEGAL REVIEW REQUIRED] — UK courts and the EHRC apply a facts-and-circumstances approach to adverse impact; the 80% threshold is a reference point, not a legal safe harbour.
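The 4/5 rule arithmetic, with hypothetical group selection rates:

```python
# Worked 4/5 (80%) rule example with hypothetical shortlisting rates
# (shortlisted / applied) per monitored group.
def adverse_impact_ratios(selection_rates: dict[str, float]) -> dict[str, float]:
    """Each group's selection rate relative to the highest group's rate;
    a ratio below 0.8 triggers review under the 4/5 rule."""
    top = max(selection_rates.values())
    return {group: rate / top for group, rate in selection_rates.items()}

rates = {"group_a": 0.40, "group_b": 0.36, "group_c": 0.18}
ratios = adverse_impact_ratios(rates)           # a: 1.0, b: 0.9, c: 0.45
below_threshold = [g for g, r in ratios.items() if r < 0.8]
```

Here only group_c falls below the threshold (0.18 / 0.40 = 0.45) and would require review; group_b, at 0.9, would not, though under the facts-and-circumstances approach noted above a persistent 0.9 ratio could still warrant investigation.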
6. EHRC Enforcement Risk Summary
| Protected Characteristic | Enforcement Risk Level | Primary Enforcement Route |
|---|---|---|
| Race / Ethnicity | HIGH | Employment Tribunal (s.19 indirect discrimination); EHRC formal investigation |
| Disability | HIGH | Employment Tribunal (s.19 / s.20 failure to adjust); EHRC investigation |
| Sex | HIGH | Employment Tribunal (s.19 indirect discrimination); EHRC investigation |
| Pregnancy / Maternity | HIGH | Employment Tribunal (s.18 work cases; no comparator needed) |
| Age | MEDIUM | Employment Tribunal (s.19, subject to objective justification) |
| Religion / Belief | MEDIUM (not covered above but relevant if CV data discloses religious affiliation via institution names or cultural indicators) | Employment Tribunal (s.19) |
| Sexual Orientation | MEDIUM (risk if name-based inference or lifestyle inference is present) | Employment Tribunal (s.19) |
7. Legal Review Points Summary
The following positions in this document require review by a qualified UK lawyer before operational reliance:
- Objective justification of Scout criteria: Any Scout screening criterion that produces an adverse impact on a protected group requires a documented objective justification reviewed by legal counsel.
- Direct vs. indirect discrimination: Where Scout infers a protected characteristic and acts on it (e.g., gender from name), this may constitute direct discrimination (s.13 EA 2010) rather than indirect discrimination. Legal analysis is required.
- Positive action implementation: The parameters of lawful positive action under s.158 (general) and s.159 (recruitment and promotion) EA 2010 in the context of AI-assisted shortlisting require legal advice before any such programme is implemented. In particular, the equal-merit assessment required by s.159 must not be operationalised through Scout's scoring or ranking logic.
- Adverse impact threshold: The appropriate adverse impact threshold for UK employment law purposes, and the point at which Scout outputs would require suspension pending investigation, requires legal advice.
- Sable AI Ltd's own liability: The extent to which Sable AI Ltd (as a tool provider) shares liability with the recruiter customer for discriminatory outputs generated by Scout requires legal analysis, particularly where Sable AI Ltd was aware of the risk of bias and did not take adequate steps to mitigate it.
8. Cross-References
| Referenced Document | Relationship |
|---|---|
| L1-2.2-Risk-Classification-Framework-v1.md | Equality Act risk tier classification and discrimination risk threshold analysis |
| L2-3.1-UK-GDPR-Mapping-Matrix-v1.md | UK GDPR Art. 9 obligations relating to the same protected characteristics |
| L2-3.3-ICO-Audit-Gap-Analysis-v1.md | ICO enforcement posture on AI discrimination in recruitment |
| L3-4.2-Bias-Monitoring-Protocol-v1.md | Operational bias detection and audit methodology |
| L4-5.2-Candidate-Transparency-Notice-v1.md | Candidate notice covering right to human review and right to contest |
This document forms part of the Sable AI Ltd AI Governance Framework. It is a proposal for compliance design and does not constitute legal advice. Review by a qualified UK lawyer is required before operational use. All assumptions about Sable AI Ltd are unverified as at the date of this document.