
Data Protection Impact Assessment — Template and Scout Draft

Project: Sable AI Ltd — AI Governance Framework
Stage: Stage 3 — Regulatory Alignment
Status: Draft
Version: v1
Date: 2026-03-01
Assumptions: Built on outline assumptions — not verified against real Sable AI Ltd data


Important Notes Before Use

This document has two purposes:

  1. As a template — it provides the full ICO-compliant DPIA structure for any organisation using AI in recruitment. Sections marked [TO BE COMPLETED BY OPERATOR] require real operational data before the DPIA is valid.

  2. As a Scout draft — it is pre-populated for Sable AI Ltd's Scout system wherever the assumed product architecture permits. All pre-populated values are marked [ASSUMPTION] and must be verified against actual company data before this DPIA is relied upon.

The DPIA must be completed before live processing commences. The ICO's November 2024 recruitment AI audit found that completing DPIAs retrospectively, or only just before an audit, is a significant compliance failing. This document must be finalised, signed off, and on record before Scout processes production candidate personal data.

[LEGAL REVIEW REQUIRED] applies throughout this document, in particular to the lawful basis assessment (Section 1.6), the Articles 22A–22D ADM threshold analysis (Section 1.9), the special category data assessment (Section 3.2), and the residual risk conclusion (Section 5).

DPIA trigger confirmed: The ICO has stated that AI used in recruitment triggers a mandatory DPIA. The DSIT Responsible AI in Recruitment Guide (2024) confirms: "Completing a DPIA is required for all development and deployment of AI systems that involve personal data." UK GDPR Article 35(3)(a) applies to systematic and extensive evaluation of personal aspects using automated processing, including profiling, on which decisions are based that produce legal effects or similarly significantly affect the individual. Scout's CV screening and shortlisting workflow falls within this description.


Section 1 — Description of the Processing

1.1 Controller and Processor Identification

Role Organisation Contact
Data controller Customer organisation (recruitment agency or in-house HR team using Scout) [ASSUMPTION — see note below] [TO BE COMPLETED BY OPERATOR]
Data processor Sable AI Ltd [ASSUMPTION A-001] [TO BE COMPLETED: registered address, DPO contact]
Sub-processor Anthropic (Claude API) [ASSUMPTION A-005] See Anthropic Data Processing Agreement
Hosting provider AWS UK (eu-west-2) [ASSUMPTION A-004] See AWS Data Processing Agreement

Controller/processor determination note: Sable AI Ltd's role as processor (rather than controller or joint controller) is an assumed position. The ICO's November 2024 recruitment AI audit found that some providers incorrectly identified themselves as processors when they were in fact controllers or joint controllers. The role determination must be reviewed by a qualified UK lawyer before relying on this structure. [LEGAL REVIEW REQUIRED] See also L1-2.2-Risk-Classification-Framework-v1.md and L1-2.3-Data-Flow-Map-v1.md.

1.2 Name and Description of the Processing Activity

System name: Scout

Description: Scout is an AI-powered CV screening and candidate shortlisting tool. [ASSUMPTION A-002] It accepts candidate CVs as input, analyses CV content against a job description and specified shortlisting criteria, and produces a structured output indicating candidate suitability and ranking relative to the role criteria. Scout is designed to support human decision-making in recruitment, not to make autonomous final decisions.

Intended user: Recruiters and HR professionals employed by, or contracted to, the controller organisation. [ASSUMPTION A-003]

1.3 Purpose of the Processing

Purpose Description Lawful basis (see Section 1.6)
CV screening and shortlisting Automated analysis of candidate CVs against job criteria to produce a ranked shortlist Legitimate interests or contract performance [ASSUMPTION — see Section 1.6]
Structured assessment output Generation of a structured suitability score or ranking to assist recruiter review As above
Human review support Provision of AI-generated shortlist to recruiter for review and decision As above

1.4 Data Subjects

Category Description
Primary data subjects Job applicants and candidates whose CVs are processed by Scout
Secondary data subjects Referees or third parties named in CV content (incidental processing only)
Customer end users Recruiters and HR professionals who interact with Scout outputs — their usage data may be processed as a separate category [ASSUMPTION]

1.5 Categories of Personal Data Processed

Data category Specific data types Special category?
Identity data Full name, contact details (email, phone, address) [ASSUMPTION] No
Professional history Work experience, employer names, job titles, employment dates, responsibilities No
Educational history Qualifications, institutions, dates, grades No
Skills and competencies Stated skills, professional certifications, languages No
Free text content Personal statement, covering letter content, any free-form CV sections Potentially — free text may contain or imply health conditions, disability, religion, ethnicity or other special category data. See Section 3.2.
AI-generated output Suitability score, shortlisting recommendation, extracted skill matches No (derived data, but linked to the candidate)
Technical / system data API request logs, processing timestamps, error logs No

Special category data risk: Candidate CVs regularly contain or imply information about protected characteristics — for example, employment gaps (suggesting a potential health condition or carer role), nationality, mentions of religious observance, or disability-related adjustments. Scout does not intentionally collect or target special category data, but its inputs (free-text CVs) may contain such data incidentally. See the risk assessment in Section 3.2. [ASSUMPTION A-002]

1.6 Lawful Basis for Processing

[LEGAL REVIEW REQUIRED] — The lawful basis analysis below is assumed and has not been confirmed by a qualified UK lawyer. The legal basis must be confirmed before Scout processes live candidate data.

Primary lawful basis considered: UK GDPR Article 6(1)(f) — Legitimate interests.

Legitimate interests assessment (LIA) — summary: [ASSUMPTION A-016]

LIA element Analysis
Purpose test — Is there a legitimate interest? Candidates have applied for a role. The customer organisation has a legitimate interest in efficiently and consistently evaluating applications. Sable AI Ltd has a legitimate interest in providing this service commercially.
Necessity test — Is processing necessary for the purpose? Automated CV screening is a proportionate method of evaluating a large volume of applications consistently and efficiently. It supports (rather than replaces) human judgment. However, necessity requires that less privacy-intrusive alternatives are considered — see Section 2.3.
Balancing test — Do interests override data subject rights? Candidates applying for roles reasonably expect their CVs to be reviewed. However, they may not reasonably expect AI-assisted analysis without disclosure. The balancing test is qualified by: the requirement for human review before any candidate decision (A-007); the availability of rights to explanation and contestation (Art. 22C); and candidate transparency. The balance is tentatively assessed as not overriding data subject interests, subject to: (i) candidate transparency notice being provided; (ii) human review being robustly implemented; (iii) the Art. 22A threshold analysis below.

Alternative lawful basis considered: UK GDPR Article 6(1)(b) — Contract performance (pre-contractual steps). This basis may apply to the extent that CV processing is necessary to take steps at the request of the candidate prior to entering a contract of employment. However, this basis is contested in the context of third-party AI processing. [LEGAL REVIEW REQUIRED]

Special category data lawful basis: UK GDPR Article 9 — No intentional special category data is collected. To the extent that CVs incidentally contain special category data, this must be addressed through a DPA 2018 Schedule 1 condition (if any applies) or through technical controls to prevent inadvertent processing of such data. [LEGAL REVIEW REQUIRED — no confirmed lawful basis route exists for special category data at the time of drafting]

1.7 Data Flows

The following describes the end-to-end data flow for Scout's CV processing workflow. For full detail see L1-2.3-Data-Flow-Map-v1.md.

[Candidate] --> submits CV --> [Customer's recruitment platform / ATS]
      |
      v
[Scout API] <-- CV document + job description + shortlisting criteria
      |
      v
[Anthropic Claude API] <-- CV content sent for AI analysis [ASSUMPTION A-005]
      |
      v
[Scout API] <-- structured output (suitability score, extracted criteria matches)
      |
      v
[Scout UI] <-- ranked shortlist presented to recruiter
      |
      v
[Recruiter] <-- mandatory human review --> [Recruitment decision / candidate contact]

Data flow summary table:

Stage Data transmitted Recipient Basis for transmission
CV ingestion CV document, job description, criteria Scout API (Sable AI Ltd systems) Processor acting on controller's instructions
AI analysis CV content, job description Anthropic Claude API [ASSUMPTION A-005] Sub-processing under DPA with Anthropic
Output delivery Structured shortlist and scores Scout UI, recruiter Processor returning results to controller
Human review Shortlist output Recruiter Internal to controller organisation
Retention CV documents, AI outputs AWS eu-west-2 [ASSUMPTION A-004] Storage under controller's retention policy

Data not transmitted: Candidate CVs are not used to train Anthropic's models. [ASSUMPTION A-005 — this must be verified against the current Anthropic API terms and DPA.]
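The flow above can be sketched as a minimal pipeline. This is purely illustrative: every type, field, and function name below is hypothetical, and the real Scout API shape is itself an assumption (A-002). The `analyse` step stands in for the AI analysis call (Anthropic Claude API under assumption A-005); here it performs only a naive keyword match so the sketch is self-contained.

```python
from dataclasses import dataclass, field

@dataclass
class ScreeningRequest:
    cv_text: str             # candidate CV content (may incidentally contain Art. 9 data)
    job_description: str     # role being recruited for
    criteria: list[str]      # shortlisting criteria set by the controller

@dataclass
class ScreeningResult:
    suitability_score: float        # AI-generated; pending human review
    matched_criteria: list[str] = field(default_factory=list)
    human_reviewed: bool = False    # must become True before any candidate decision

def analyse(req: ScreeningRequest) -> ScreeningResult:
    """Placeholder for the AI analysis stage of the data flow diagram.
    Real contextual analysis is replaced by a naive criterion-in-text check."""
    matches = [c for c in req.criteria if c.lower() in req.cv_text.lower()]
    score = len(matches) / len(req.criteria) if req.criteria else 0.0
    return ScreeningResult(suitability_score=score, matched_criteria=matches)
```

Note that `human_reviewed` defaults to `False`: in this sketch, the output of the AI stage is never a decision in itself, mirroring the mandatory human review step at the end of the diagram.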

1.8 Retention Period

[ASSUMPTION] Retention periods have not been confirmed for Sable AI Ltd. The following are indicative values pending legal and operational confirmation.

Data type Assumed retention period Basis
CV documents Duration of recruitment campaign + [TO BE CONFIRMED] months Legitimate interests, storage limitation principle
AI-generated shortlist outputs Duration of campaign + [TO BE CONFIRMED] months As above
API/system logs [TO BE CONFIRMED] months Operational necessity, security
Unsuccessful candidate records [TO BE CONFIRMED] — typically 6–12 months after role close ICO guidance on unsuccessful applicant retention

1.9 Automated Decision-Making Assessment (UK GDPR Articles 22A–22D, as substituted by Data (Use and Access) Act 2025 s.80)

[LEGAL REVIEW REQUIRED]

Key threshold question: Does Scout make decisions that "produce legal effects" or "similarly significantly affect" candidates within the meaning of UK GDPR Article 22A?

Factor Assessment
Is processing automated? Yes — Scout uses AI to analyse and rank CVs without real-time human input during processing
Does Scout make the final recruitment decision? No [ASSUMPTION A-007] — Scout's output is subject to mandatory human review before any candidate contact or decision is made
Does Scout's output significantly affect candidates? Potentially yes — a candidate ranked low by Scout may not be progressed. Whether this constitutes a "similarly significant effect" under Art. 22A is a legal question [LEGAL REVIEW REQUIRED]
Art. 22A safeguards required? If the threshold is met, the controller must provide: the right to obtain human review of the AI's assessment; the right to express a point of view; the right to contest the outcome; and meaningful information about the logic involved (Art. 22C)

Current position (assumed): Scout's mandatory human review step is designed to prevent the system from making solely automated decisions. If human review is robustly implemented in practice — not a rubber-stamp — the Art. 22A threshold may not be met. However, this conclusion requires legal confirmation and operational verification. [LEGAL REVIEW REQUIRED] The safeguards required by Art. 22C should be implemented regardless, as a matter of good practice.


Section 2 — Necessity and Proportionality Assessment

2.1 Necessity

Question Assessment
Is the processing necessary to achieve the stated purpose? AI-assisted screening addresses a genuine operational need: consistent, efficient evaluation of large application volumes against defined criteria. Manual review of every CV in a high-volume campaign would be impractical for the customer.
Is the scale of data processing proportionate to the purpose? Scout processes CV content in full. The necessity of processing the full CV (rather than a subset) should be reviewed. Where job criteria can be evaluated against specific CV sections, limiting processing to those sections would be more proportionate. [ASSUMPTION]
Is the processing limited to what is necessary? Scout is assumed not to aggregate data from external sources or build candidate profiles beyond the current application. [ASSUMPTION A-002] This assumption must be verified.

2.2 Proportionality

Principle Application to Scout
Data minimisation (Art. 5(1)(c)) Only CV content, job description, and shortlisting criteria are processed. No social media data, inferred characteristics, or external enrichment data is included. [ASSUMPTION]
Purpose limitation (Art. 5(1)(b)) CV data is used only for the shortlisting purpose for which it was submitted. Data is not used to train Anthropic models. [ASSUMPTION A-005]
Accuracy (Art. 5(1)(d)) Scout's outputs are AI-generated and may contain errors. Human review is the control to catch inaccuracies before they affect candidates. See L3-4.1-Monitoring-Framework-v1.md (M-01 shortlisting accuracy rate; M-11 canary accuracy test) for accuracy monitoring controls.
Storage limitation (Art. 5(1)(e)) Retention periods are to be defined. [ASSUMPTION — see Section 1.8]

2.3 Alternatives Considered

Per ICO guidance, a DPIA must consider whether alternative approaches could achieve the same outcomes with less impact on data subjects.

Alternative Assessment Decision
Manual-only review Feasible for low-volume campaigns but not scalable. Removes AI risk but introduces human bias and inconsistency. Does not address the proportionality question because manual review also processes CVs in full. Not adopted as default; remains available as fallback for candidates requiring reasonable adjustments [ASSUMPTION A-018]
Keyword-only filtering Less data-intensive than full AI analysis. Does not consider context or suitability holistically. Risk of excluding strong candidates on keyword absence. Not adopted — Scout's value proposition is contextual analysis beyond keyword matching
Structured application forms Reduces free-text personal data in favour of structured fields, reducing inadvertent special category data exposure. Possible supplementary control, but requires customer workflow changes [ASSUMPTION]
Anonymised screening Removing name and contact details before AI processing could reduce name-based bias risk. [ASSUMPTION] Not currently implemented; recommend considering as a technical control — see L3-4.2-Bias-Monitoring-Protocol-v1.md Section 6 (keyword and language bias controls)

Section 3 — Risk Identification

3.1 Risk Assessment Framework

For each identified risk: likelihood is rated Low / Medium / High before mitigation; severity is rated Low / Medium / High; risk score is Likelihood × Severity (both on a 1–3 scale, where Low = 1, Medium = 2, High = 3).
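The scoring rule above can be written out as a small helper, which also makes the arithmetic behind the scores in Sections 3.2 and 5 checkable (the rating names and 1–3 mapping follow this section; nothing else is added):

```python
# Risk scoring per the framework above: Low = 1, Medium = 2, High = 3;
# the score is likelihood x severity, reported out of a maximum of 9.
RATING = {"Low": 1, "Medium": 2, "High": 3}

def risk_score(likelihood: str, severity: str) -> int:
    """Pre- or post-mitigation risk score on a 1-9 scale."""
    return RATING[likelihood] * RATING[severity]

# Risk 1 pre-mitigation: High likelihood x High severity -> 9/9
assert risk_score("High", "High") == 9
# Risk 3 pre-mitigation: Medium likelihood x High severity -> 6/9
assert risk_score("Medium", "High") == 6
```

Where a likelihood is given as a range (e.g. Low–Medium), the resulting score is the corresponding range of products (e.g. Low–Medium × High gives 3–6/9).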

3.2 Identified Risks

Risk 1 — Discriminatory Shortlisting Output (Algorithmic Bias)

Field Detail
Risk description Scout's AI model generates shortlisting outputs that systematically disadvantage candidates sharing a protected characteristic under the Equality Act 2010 (e.g., race, sex, disability, age, religion)
Mechanism The underlying model may reflect historical hiring biases present in training data. Shortlisting criteria specified by customers may encode indirect discrimination. Free-text CV analysis may assign lower scores based on name, vocabulary style, or employment pattern proxies for protected characteristics.
Data subjects affected All candidates whose CVs are processed by Scout
Rights and freedoms at risk Right to equal treatment (EA 2010); right to fair processing (UK GDPR Art. 5(1)(a)); right to human review (Art. 22C); right to contest AI output
Likelihood (pre-mitigation) High — the ICO found discriminatory outputs in AI recruitment tools across its audit programme
Severity High — discrimination in hiring has serious, potentially irreversible consequences for candidates
Pre-mitigation risk score 9/9
Mitigations Mandatory human review (A-007); bias monitoring protocol (L3-4.2); customer training on criteria specification (L1-2.4); equality compliance controls (L2-3.2); incident response plan (L3-4.3)
Post-mitigation risk Medium — bias risk cannot be fully eliminated; it is managed through monitoring and human oversight

Risk 2 — Inadvertent Processing of Special Category Data

Field Detail
Risk description Candidate CVs contain or imply special category data (health conditions, disability, religion, ethnicity, political views) that Scout processes without a lawful basis under UK GDPR Art. 9
Mechanism Free-text CV sections routinely contain personal disclosures (e.g., disability-related career gaps, religious observance, union membership). Scout processes the full CV content.
Data subjects affected Candidates whose CVs contain or imply special category data
Rights and freedoms at risk Right to lawful processing (UK GDPR Art. 9); right to dignity and non-discrimination
Likelihood (pre-mitigation) High — inadvertent special category data in CVs is common and foreseeable
Severity High — unlawful processing of special category data
Pre-mitigation risk score 9/9
Mitigations Technical controls to detect and flag potential special category data in CV inputs; customer guidance on managing CV submissions; policy to exclude Art. 9 data from scoring criteria; [LEGAL REVIEW REQUIRED] on lawful basis route or avoidance strategy
Post-mitigation risk Medium — technical controls can reduce but not eliminate risk; legal basis must be confirmed
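One possible shape for the "technical controls to detect and flag potential special category data" mitigation is a pre-screening pass over CV text before scoring. The sketch below is illustrative only: the indicator term list is a hypothetical starting point, would not catch implied or indirect disclosures (e.g. employment gaps), and would itself need legal review before deployment — it is not an adequate production control.

```python
# Hypothetical indicator terms for UK GDPR Art. 9 categories. A real control
# would need a far richer approach and legal review of what is flagged.
SPECIAL_CATEGORY_TERMS = {
    "health": ["disability", "medical condition", "sick leave"],
    "religion": ["church", "mosque", "synagogue", "religious"],
    "trade union": ["union member", "trade union"],
}

def flag_special_category(cv_text: str) -> dict[str, list[str]]:
    """Return the Art. 9 categories whose indicator terms appear in the CV
    text, so the content can be excluded from scoring criteria and routed
    for human handling per the mitigation above."""
    lowered = cv_text.lower()
    hits: dict[str, list[str]] = {}
    for category, terms in SPECIAL_CATEGORY_TERMS.items():
        found = [t for t in terms if t in lowered]
        if found:
            hits[category] = found
    return hits
```

Flagging is deliberately the only action taken here: the design choice is that the control surfaces potential Art. 9 content for exclusion and human handling rather than attempting automated redaction, which would itself be processing of special category data.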

Risk 3 — Unlawful Automated Decision-Making (Art. 22A)

Field Detail
Risk description Scout's output is treated as a final decision without meaningful human review, constituting a solely automated decision with significant effect under UK GDPR Art. 22A
Mechanism Human review steps exist as policy but may become rubber-stamp processes in practice. High-volume campaigns may reduce the time recruiters spend reviewing AI outputs.
Data subjects affected All candidates processed by Scout
Rights and freedoms at risk Right to human review and to contest AI decisions (Art. 22C); right to explanation of logic
Likelihood (pre-mitigation) Medium — human review is a policy requirement, but operational compliance is unverified [ASSUMPTION A-007]
Severity High — unlawful ADM with significant effect on employment prospects
Pre-mitigation risk score 6/9
Mitigations Mandatory human review as operational control, not just policy; monitoring of human review compliance rate (L3-4.1 M-02); customer onboarding training; Arts. 22A–22C rights surfaced in candidate transparency notice (L4-5.2)
Post-mitigation risk Low–Medium — dependent on operational implementation of human review controls

Risk 4 — Candidate Transparency Failure

Field Detail
Risk description Candidates are not informed that AI is being used to screen their applications, depriving them of rights under UK GDPR Arts. 13/14 and Art. 22C and preventing them from contesting adverse AI assessments
Mechanism Customer organisations may not provide candidates with adequate privacy notices disclosing Scout's use. The ICO's audit found this to be a near-universal failing.
Data subjects affected All candidates processed by Scout
Rights and freedoms at risk Right to transparency (Art. 5(1)(a)); right to information (Arts. 13, 14); right to explanation (Art. 22C)
Likelihood (pre-mitigation) High — the ICO found transparency failures consistently across audited providers [ASSUMPTION A-019]
Severity Medium — serious rights failure but generally reversible through notice and retrospective disclosure
Pre-mitigation risk score 6/9
Mitigations Candidate-facing transparency notice template (L4-5.2); contractual obligation on customers to deploy notice before processing (L4-5.1); audit of customer notice deployment (L3-4.1 M-09)
Post-mitigation risk Low — if notice is contractually required and customers are audited for compliance

Risk 5 — Personal Data Breach Involving Candidate CVs

Field Detail
Risk description Unauthorised access to, or exfiltration of, candidate CV data held by Sable AI Ltd (e.g., via API vulnerability, misconfiguration, or Anthropic sub-processor breach)
Data subjects affected All candidates whose CVs are stored or in-flight in the Scout processing pipeline
Rights and freedoms at risk Right to security of personal data (Art. 5(1)(f), Art. 32); right to breach notification (Art. 34)
Likelihood (pre-mitigation) Low–Medium — hosting on AWS UK with standard security controls reduces likelihood; the Anthropic sub-processor adds an additional node to the processing chain
Severity High — CVs contain significant personal data including contact details, employment and education history
Pre-mitigation risk score 3–6/9
Mitigations AWS security controls and encryption in transit/at rest [ASSUMPTION A-004]; Anthropic DPA [ASSUMPTION A-005]; incident response plan (L3-4.3); 72-hour ICO notification under Art. 33
Post-mitigation risk Low–Medium

Risk 6 — Role Misclassification (Controller vs Processor Confusion)

Field Detail
Risk description Sable AI Ltd acts as a controller or joint controller in practice but contractually positions itself as a processor, resulting in failure to comply with controller obligations
Mechanism If Sable AI Ltd determines the purposes or means of processing (e.g., by setting shortlisting criteria, training the model on customer data, or building candidate profiles), it may be a controller or joint controller regardless of contractual position. The ICO found this to be a significant failing in its audit.
Rights and freedoms at risk Full UK GDPR compliance obligations for controller; failure to provide data subject rights
Likelihood (pre-mitigation) Medium — the joint controller question for agency customers is live and unresolved [LEGAL REVIEW REQUIRED]
Severity High — incorrect role classification vitiates the entire compliance structure
Pre-mitigation risk score 6/9
Mitigations Controller/processor analysis in L1-2.3-Data-Flow-Map-v1.md; legal review of role determination; DPA templates covering both customer types (L4-5.1)
Post-mitigation risk Low–Medium — legal review is the critical control

Section 4 — Mitigation Controls

4.1 Controls Summary Table

Risk Control Owner [ASSUMPTION] Status Framework document
R1 — Algorithmic bias Mandatory human review before candidate contact CTO / Engineering Lead Policy — operational verification pending L1-2.2, L1-2.5
R1 — Algorithmic bias Bias monitoring protocol with alert thresholds CTO Defined in framework; operational calibration required L3-4.2
R1 — Algorithmic bias Customer training on non-discriminatory criteria Customer Success Lead Defined in governance policy; operational delivery required L1-2.4
R2 — Special category data Technical detection of special category data in inputs Engineering Lead Not yet implemented [ASSUMPTION] L3-4.2
R2 — Special category data Legal advice on lawful basis route CTO / legal counsel [LEGAL REVIEW REQUIRED]
R3 — Unlawful ADM Human review compliance monitoring CTO Defined in framework; operational verification required L3-4.1
R3 — Unlawful ADM Arts. 22A–22C rights in candidate notice Customer Success Lead Defined in template; customer completion and deployment required L4-5.2
R4 — Transparency failure Candidate transparency notice template Customer Success Lead Defined in template; customer completion and deployment required L4-5.2
R4 — Transparency failure Contractual obligation on customers to deploy notice CTO / legal counsel Present in Stage 5 contract pack; legal tailoring required L4-5.1
R5 — Data breach AWS encryption and security controls Engineering Lead [ASSUMPTION — verify] L1-2.3
R5 — Data breach Incident response plan with 72h ICO notification CTO Defined in framework L3-4.3
R6 — Role misclassification Legal review of controller/processor determination CTO / legal counsel [LEGAL REVIEW REQUIRED] L1-2.2, L4-5.1

4.2 Residual Risk Controls — Art. 22C Safeguards

Regardless of the Art. 22A threshold conclusion, the following safeguards should be implemented as standard:

  • Candidates must be informed that AI is used in their application process
  • Candidates must have the right to request human review of any AI-generated assessment
  • Candidates must have the right to express a point of view about the assessment
  • Candidates must have the right to contest the outcome
  • The logical basis of Scout's assessment must be explainable in plain language to any candidate who requests it

Implementation: L4-5.2-Candidate-Transparency-Notice-v1.md must surface these rights. Customer contracts (L4-5.1) must impose an obligation to provide access to these rights on request.


Section 5 — Residual Risk Assessment

[LEGAL REVIEW REQUIRED] — The residual risk assessment below reflects assumed controls. A qualified UK lawyer and, where appropriate, an external privacy auditor should review this assessment before Scout processes live candidate data.

Risk Pre-mitigation score Post-mitigation score Residual risk level Accepted?
R1 — Algorithmic bias 9/9 4–6/9 Medium [TO BE COMPLETED BY OPERATOR]
R2 — Special category data 9/9 4–6/9 Medium [TO BE COMPLETED BY OPERATOR]
R3 — Unlawful ADM 6/9 2–3/9 Low–Medium [TO BE COMPLETED BY OPERATOR]
R4 — Transparency failure 6/9 1–2/9 Low [TO BE COMPLETED BY OPERATOR]
R5 — Data breach 4–6/9 2–3/9 Low [TO BE COMPLETED BY OPERATOR]
R6 — Role misclassification 6/9 2–3/9 Low–Medium [TO BE COMPLETED BY OPERATOR]

Overall residual risk assessment: On the basis of assumed controls, the overall residual risk is assessed as Medium. This assessment is contingent on:

  1. Legal review confirming the Art. 22A threshold conclusion and lawful basis for processing
  2. Operational implementation of mandatory human review as a live control (not just a policy)
  3. Completion and deployment of the candidate transparency notice
  4. Resolution of the special category data lawful basis question
  5. Finalisation of customer DPA terms

ICO prior consultation: If, following this DPIA, residual risk cannot be reduced to an acceptable level, the controller must consult the ICO before commencing processing under UK GDPR Article 36. [LEGAL REVIEW REQUIRED] — The controller must assess whether any remaining high-risk items trigger this consultation obligation.


Section 6 — Consultation Record

6.1 Internal Consultation

Stakeholder Role Consulted? Date Notes
CTO DPO-equivalent / AI system owner [ASSUMPTION A-015] [TO BE COMPLETED] [DATE]
Engineering Lead System architecture, technical controls [TO BE COMPLETED] [DATE]
Customer Success Lead Customer-facing obligations, transparency notice [TO BE COMPLETED] [DATE]
Founder/CEO Ultimate accountability sign-off [TO BE COMPLETED] [DATE]

6.2 External Consultation

Stakeholder Reason for consultation Date Outcome
UK-qualified data protection lawyer Lawful basis confirmation; Art. 22A threshold; Art. 9 special category route; joint controller determination [TO BE COMPLETED] [TO BE COMPLETED]
Anthropic (sub-processor) Confirm no candidate data used for model training; confirm DPA is current and adequate [TO BE COMPLETED] [TO BE COMPLETED]

6.3 ICO Consultation (if required)

If residual high risk remains after mitigations, prior ICO consultation is required under UK GDPR Article 36 before processing commences. [LEGAL REVIEW REQUIRED — confirm whether any residual high-risk item triggers this obligation.]

ICO consultation Required? Date submitted ICO reference Outcome
Art. 36 prior consultation [TO BE CONFIRMED BY LEGAL ADVISER]

Section 7 — Sign-Off and Review

7.1 DPIA Sign-Off

Role Name Signature Date
DPO / DPO-equivalent [CTO name] [ASSUMPTION A-015] [SIGNATURE] [DATE]
Founder/CEO [CEO name] [SIGNATURE] [DATE]

This DPIA must be signed off before Scout processes live candidate personal data.

7.2 DPIA Review Triggers

This DPIA must be reviewed and, if necessary, updated when any of the following occur:

Trigger Review required?
Material change to Scout's architecture or data flows Yes — full review
Addition of a new data source or sub-processor Yes — Section 1 and risk assessment update
Change to the lawful basis for processing Yes — full review
Change to the customer base (e.g., new sector or international customers) Yes — Section 1 and risk assessment update
Regulatory change (new ICO guidance, legislation amendment) Yes — targeted review
Bias monitoring alert indicating systematic discriminatory output Yes — risk assessment update and mitigation review
Personal data breach Yes — Risk 5 in Section 3.2 (breach risk) and controls update
Annual scheduled review Yes — full review

7.3 Version History

Version Date Author Change description
v1 2026-03-01 Sable AI Ltd (framework draft) Initial DPIA template — assumed values only

Appendix A — Regulatory References

Provision Relevance
UK GDPR Article 5(1)(a) Lawfulness, fairness and transparency
UK GDPR Article 5(1)(b) Purpose limitation
UK GDPR Article 5(1)(c) Data minimisation
UK GDPR Article 5(1)(d) Accuracy
UK GDPR Article 5(1)(e) Storage limitation
UK GDPR Article 5(1)(f) Integrity and confidentiality
UK GDPR Article 6(1)(b) and (f) Lawful basis candidates
UK GDPR Article 9 Special category data prohibition and exceptions
UK GDPR Articles 13–14 Transparency information to data subjects
UK GDPR Article 22A–22D (as substituted by Data (Use and Access) Act 2025 s.80, in force 5 February 2026) Automated decision-making obligations
UK GDPR Article 28 Controller-processor contract requirements
UK GDPR Article 32 Security of processing
UK GDPR Article 33 72-hour breach notification to ICO
UK GDPR Article 35 DPIA obligation
UK GDPR Article 36 Prior consultation with ICO if residual high risk
DPA 2018 Schedule 1 Conditions for special category data processing
Equality Act 2010 sections 4, 19 Protected characteristics and indirect discrimination
ICO AI in Recruitment Outcomes Report, November 2024 Enforcement benchmark and DPIA adequacy standard
DSIT Responsible AI in Recruitment Guide, March 2024 Good practice standard for DPIA completion

Appendix B — Assumptions Relied On In This Document

All of the following assumptions are unverified and marked [ASSUMPTION] throughout the document. They must be verified before this DPIA is relied upon.

# Assumption
A-001 Sable AI Ltd is a UK-incorporated early-stage company
A-002 Scout processes CVs submitted via customer workflows; no social media scraping or external aggregation
A-003 Customers are UK recruitment agencies and in-house HR teams
A-004 Hosting is on AWS UK (eu-west-2)
A-005 Anthropic is a sub-processor; candidate data is not used for model training; a valid DPA with Anthropic is in place
A-007 Scout outputs are subject to mandatory human review before candidate contact
A-015 CTO acts as DPO-equivalent at this stage
A-016 Art. 6(1)(f) legitimate interests is the assumed primary lawful basis; a Legitimate Interests Assessment has not been documented
A-018 No alternative manual assessment pathway currently exists for candidates requiring reasonable adjustments
A-019 Customers are not currently contractually required to provide candidates with a transparency notice