Data Protection Impact Assessment
Project: Pickles GmbH — AI Governance Framework
Stage: Stage 3 — Regulatory Alignment
Status: Draft
Version: v1
Date: 2026-02-26
Assumptions: Built on outline assumptions — not verified against real Pickles GmbH data
Purpose
This document contains:
- Part A — DPIA Threshold Assessment: An evaluation of all four assumed Pickles GmbH AI systems against the GDPR Article 35 threshold criteria. Determines which systems require a full DPIA.
- Part B — Full DPIA for SYS-04 (Legal Analysis Tool): A complete Data Protection Impact Assessment for SYS-04, which is treated as high-risk for the purposes of this framework following the project owner's direction. Covers processing description, necessity and proportionality, risk identification, mitigation controls, and residual risk.
[ASSUMPTION] This DPIA is based entirely on assumed system architecture, assumed data flows (L2-5.1), and assumed processing activities. Before this document has any compliance effect, it must be reviewed and completed using verified data about real Pickles GmbH systems.
[LEGAL REVIEW REQUIRED] A DPIA is a legal compliance document. This draft constitutes a working framework. It must be completed and reviewed by a qualified data protection practitioner, and the DPO must be consulted in accordance with GDPR Article 35(2) and BDSG Section 67(3) before the document is finalised.
Regulatory Basis
| Instrument | Provision | Topic |
|---|---|---|
| GDPR | Article 35 | Data Protection Impact Assessment |
| GDPR | Article 36 | Prior consultation with supervisory authority |
| GDPR | Article 9 | Special categories of personal data |
| GDPR | Article 22 | Automated individual decision-making |
| BDSG | Section 67 | DPIA requirements under German law |
| EDPB WP251 | Section VI; Annex 2 | DPIA and automated decision-making guidance |
| EU AI Act | Article 26(9) | High-risk AI systems and GDPR Article 35 linkage |
PART A — DPIA THRESHOLD ASSESSMENT
A.1 When Is a DPIA Required?
GDPR Article 35(1) requires a DPIA where a type of processing, "in particular using new technologies, and taking into account the nature, scope, context and purposes of the processing, is likely to result in a high risk to the rights and freedoms of natural persons."
Article 35(3) lists three categories of processing for which a DPIA is in particular required:
| Category | Description | Reference |
|---|---|---|
| 35(3)(a) | Systematic and extensive evaluation of personal aspects based on automated processing (including profiling) producing legal or similarly significant effects | Article 35(3)(a) |
| 35(3)(b) | Processing on a large scale of special categories of data (Article 9) or criminal conviction data (Article 10) | Article 35(3)(b) |
| 35(3)(c) | Systematic monitoring of a publicly accessible area on a large scale | Article 35(3)(c) |
EU AI Act Article 26(9) linkage:
"Where applicable, deployers of high-risk AI systems shall use the information provided under Article 13 of this Regulation to comply with their obligation to carry out a data protection impact assessment under Article 35 of Regulation (EU) 2016/679."
SYS-04 has been confirmed as a high-risk AI system for the purposes of this framework. Article 26(9) requires deployers of high-risk AI systems to use the Article 13 technical information provided by the system provider when complying with any applicable DPIA obligation under GDPR Article 35. The DPIA obligation itself arises from GDPR Article 35, assessed against its own threshold criteria (Section A.1 above); Article 26(9) does not independently create that obligation — it specifies what information the deployer must draw on when the Article 35 threshold is already met.
EDPB WP251 note on Article 35(3)(a):
The EDPB Guidelines on automated decision-making (WP251) confirm that Article 35(3)(a) applies to evaluations "based on automated processing" — not only "solely automated" processing. Where automated processing significantly shapes an output that a human then endorses, this may still trigger DPIA requirements even if final decisions are made by a lawyer.
A.2 System-by-System DPIA Threshold Assessment
SYS-01 — Legal Research Assistant
| Threshold Factor | Assessment | Notes |
|---|---|---|
| New technology? | Yes — LLM-based legal research is a new technology | Article 35(1) factor |
| Systematic evaluation of personal aspects? | Low — primarily retrieves legal texts; personal data in queries is incidental | Article 35(3)(a) |
| Legal or similarly significant effects on data subjects? | Low — outputs are legal research, not decisions about individuals | Article 35(3)(a) |
| Large-scale processing of special categories? | Possible — legal documents submitted may contain special category data [ASSUMPTION] | Article 35(3)(b) |
| Systematic monitoring? | No | Article 35(3)(c) |
| DPIA threshold met? | Borderline — depends on volume of queries and whether special category data routinely appears in submitted documents [ASSUMPTION] | Recommend conducting lightweight DPIA |
| Immediate action | Assess query volume and data categories handled; if special category data processed at scale, full DPIA required | [LEGAL REVIEW REQUIRED] |
SYS-02 — Document Drafting Tool
| Threshold Factor | Assessment | Notes |
|---|---|---|
| New technology? | Yes | Article 35(1) factor |
| Systematic evaluation of personal aspects? | Low — generates draft text; does not evaluate individual profiles | Article 35(3)(a) |
| Legal or similarly significant effects on data subjects? | Moderate — drafted documents may affect legal rights of parties [ASSUMPTION] | Article 35(3)(a) |
| Large-scale processing of special categories? | Possible — contract drafting may involve personal data of natural persons [ASSUMPTION] | Article 35(3)(b) |
| Systematic monitoring? | No | Article 35(3)(c) |
| DPIA threshold met? | Borderline to Low — depends on processing scale and data content [ASSUMPTION] | Recommend screening assessment |
| Immediate action | Confirm whether special category data routinely appears in drafting tasks; if yes, DPIA required | [LEGAL REVIEW REQUIRED] |
SYS-03 — Document Summarisation Tool
| Threshold Factor | Assessment | Notes |
|---|---|---|
| New technology? | Yes | Article 35(1) factor |
| Systematic evaluation of personal aspects? | Low-to-Moderate — summaries of legal documents; may synthesise personal data from source documents | Article 35(3)(a) |
| Legal or similarly significant effects on data subjects? | Moderate — summaries of judgments, contracts, or case files may affect legal positions [ASSUMPTION] | Article 35(3)(a) |
| Large-scale processing of special categories? | Possible — same reasoning as SYS-01/02 | Article 35(3)(b) |
| Systematic monitoring? | No | Article 35(3)(c) |
| DPIA threshold met? | Borderline | Recommend screening assessment |
| Immediate action | Same as SYS-01 | [LEGAL REVIEW REQUIRED] |
SYS-04 — Legal Analysis Tool (Confirmed High-Risk)
| Threshold Factor | Assessment | Notes |
|---|---|---|
| New technology? | Yes — LLM-based legal analysis is a new technology | Article 35(1) factor |
| Systematic evaluation of personal aspects? | Yes — analyses legal facts about persons; produces risk assessments and legal interpretations affecting individuals [ASSUMPTION] | Article 35(3)(a) triggered |
| Legal or similarly significant effects on data subjects? | Yes — legal risk analysis directly informs legal decisions affecting parties' rights [ASSUMPTION]; WP251: effects need not be solely automated | Article 35(3)(a) triggered |
| Large-scale processing of special categories? | Very likely — legal analysis routinely involves health, financial, criminal, or other special category data [ASSUMPTION] | Article 35(3)(b) triggered |
| Automated decision-making (Article 22)? | Possible — if analysis outputs are routinely followed without meaningful independent review | Article 22; WP251 guidance |
| EU AI Act high-risk linkage? | Yes (qualified) — Article 26(9) requires deployers of high-risk AI systems to use Article 13 information "where applicable" to comply with their existing GDPR Article 35 DPIA obligation; Article 26(9) supports compliance with an existing obligation but does not itself create the DPIA duty, which is determined solely by the GDPR Article 35 threshold | EU AI Act Article 26(9) |
| DPIA threshold met? | Yes — full DPIA required | Proceed to Part B |
PART B — FULL DPIA: SYS-04 LEGAL ANALYSIS TOOL
B.1 System Identification
| Field | Value |
|---|---|
| System name | Legal Analysis Tool (SYS-04) |
| Provider | Pickles GmbH [ASSUMPTION] |
| Classification | High-risk AI system (confirmed by project owner for this framework) |
| DPIA prepared by | [PLACEHOLDER — name and role] |
| DPO consulted | [PLACEHOLDER — name, date — mandatory per Article 35(2)] |
| Date of DPIA | 2026-02-26 (framework draft) |
| Next review date | [PLACEHOLDER — at minimum when processing changes; annually recommended] |
| Supervisory authority consultation required? | [See Section B.7] |
B.2 Processing Description
B.2.1 Purpose of Processing
[ASSUMPTION] SYS-04 is an AI system used by lawyers to analyse legal risks, interpret legal provisions, and identify legal issues in documents or fact patterns submitted by the user. Outputs are intended to assist (not replace) lawyer professional judgment.
Intended purposes of processing personal data:
- Analysing factual circumstances in legal matters to identify applicable legal issues [ASSUMPTION]
- Interpreting contractual provisions and identifying risks for parties [ASSUMPTION]
- Summarising legal risk profiles based on document content [ASSUMPTION]
- Supporting lawyers in due diligence, litigation preparation, or advisory work [ASSUMPTION]
B.2.2 Categories of Data Subjects
[ASSUMPTION] Personal data processed through SYS-04 may relate to:
| Data Subject Category | Examples | Notes |
|---|---|---|
| Clients of lawyer users | Natural persons whose legal matters are analysed | Primary data subjects — no direct relationship with Pickles GmbH |
| Counterparties in legal proceedings | Opposing parties, witnesses, relevant third parties | May have no knowledge of AI processing |
| Employees (in employment law matters) | Individuals named in employment disputes | May include special category data |
| Patients (in medical or personal injury matters) | Health information [ASSUMPTION] | Special category data — Article 9 |
| Individuals in family law matters | Family circumstances, financial situations [ASSUMPTION] | Potentially highly sensitive |
B.2.3 Categories of Personal Data
[ASSUMPTION] SYS-04 may process the following categories of personal data, depending on the legal matter submitted:
| Category | GDPR Classification | Sensitivity |
|---|---|---|
| Names and identifying information | Standard personal data | Moderate |
| Professional and financial information | Standard personal data | Moderate-High |
| Health and medical data | Special category (Article 9(1)) | Very high |
| Data relating to criminal convictions or offences | Article 10 data | Very high |
| Sexual orientation or gender identity | Special category (Article 9(1)) | Very high |
| Racial or ethnic origin (in discrimination matters) | Special category (Article 9(1)) | Very high |
| Trade union membership (in employment matters) | Special category (Article 9(1)) | Very high |
| Biometric or genetic data (if included in documents) | Special category (Article 9(1)) | Very high |
B.2.4 Operational Description
[ASSUMPTION]
- Input: Lawyer submits document(s) or fact pattern to SYS-04 platform, which may contain personal data about third parties
- Processing: System analyses input; constructs prompt; sends to AI inference layer (in-house model or third-party model API [ASSUMPTION A-004])
- Output: System generates legal analysis — risk assessment, legal issue identification, or interpretation — returned to lawyer
- Human action: Lawyer reviews output and applies professional judgment; no automatic submission to third parties [ASSUMPTION]
- Logging: System logs session data and output per EU AI Act Article 12 [ASSUMPTION]
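The assumed flow above can be sketched as a stateless, in-memory pipeline. This is an illustrative sketch only: the names (`run_analysis`, `AnalysisResult`, `call_model`) are hypothetical and the real SYS-04 architecture must be confirmed against L2-5.1. The point it illustrates is the separation assumed throughout this DPIA: the output is labelled as AI-generated, and the session log records metadata only, never document content.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class AnalysisResult:
    text: str           # AI-generated legal analysis, returned to the lawyer
    ai_generated: bool  # supports AI-output labelling (cf. EU AI Act Article 50(2))


def run_analysis(document: str, call_model) -> tuple[AnalysisResult, dict]:
    """Process a submitted document entirely in memory.

    The document is never persisted; only session metadata
    (timestamp, input/output sizes) is returned for logging.
    """
    prompt = f"Identify legal issues and risks in the following text:\n{document}"
    output = call_model(prompt)  # in-house model or third-party model API
    result = AnalysisResult(text=output, ai_generated=True)
    # Log metadata only: no document content, no personal data.
    session_log = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input_chars": len(document),
        "output_chars": len(output),
    }
    return result, session_log
```

Under this sketch, a retention or breach incident affecting the log store would expose usage metadata, not client documents, which is the property the risk register (R2, R5) depends on.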
B.2.5 Lawful Basis for Processing
[ASSUMPTION] Pickles GmbH acts as a data processor for its lawyer clients when processing data contained in submitted documents. Pickles GmbH does not independently determine the purposes or means of processing that data — the lawyer client (controller) does.
[LEGAL REVIEW REQUIRED] However, the following processing activities may involve Pickles GmbH acting as an independent controller:
- Logging and retaining query data for operational monitoring
- Use of query data for model improvement or fine-tuning [ASSUMPTION — if this occurs, it requires independent lawful basis and client consent or contractual authorisation]
For lawyer clients acting as data controllers, the following lawful bases may apply to their use of SYS-04:
| Processing Activity | Potential Lawful Basis | Notes |
|---|---|---|
| Processing client personal data for legal advice | Article 6(1)(b) — performance of contract (with their client); or Article 6(1)(f) — legitimate interests | Depends on client engagement terms |
| Processing special category data | Article 9(2)(f) — establishment, exercise or defence of legal claims | Most appropriate for legal proceedings context |
| Processing criminal conviction data | Article 10 — requires authorisation under Member State law | [LEGAL REVIEW REQUIRED — confirm BDSG basis] |
B.3 Assessment of Necessity and Proportionality
B.3.1 Necessity
| Question | Assessment |
|---|---|
| Is processing necessary to achieve the stated purpose? | Yes — AI-assisted legal analysis requires processing of the personal data in submitted legal documents to produce relevant outputs [ASSUMPTION] |
| Could the purpose be achieved with less personal data (data minimisation)? | Partially — lawyer users should be trained to submit only essential information; anonymisation/pseudonymisation of documents before submission should be promoted [ASSUMPTION] |
| Is the processing adequate for the purpose? | Yes — the AI system analyses only what is submitted [ASSUMPTION] |
| Is the processing excessive relative to the purpose? | Risk: if system retains submitted documents or query logs containing personal data beyond operational necessity, this is likely excessive per Article 5(1)(c) GDPR |
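The data-minimisation point above (submit only essential information; pseudonymise before submission) can be illustrated with a minimal redaction sketch. The `redact` helper and its patterns are hypothetical and deliberately simplistic: regex redaction alone does not achieve GDPR-grade anonymisation, and any real pre-submission tooling would need legal and technical validation before reliance.

```python
import re

# Illustrative patterns only. Real anonymisation requires far more than
# regex matching and must be validated by a qualified reviewer.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "DATE": re.compile(r"\b\d{1,2}\.\d{1,2}\.\d{4}\b"),  # German date format
}


def redact(text: str) -> str:
    """Replace obvious direct identifiers with typed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

A typed placeholder (rather than simple deletion) preserves enough structure for the legal analysis to remain useful while keeping the direct identifier out of the submitted prompt.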
B.3.2 Proportionality
| Factor | Assessment |
|---|---|
| Do benefits justify the risk? | Yes — AI-assisted legal analysis has legitimate efficiency and accuracy benefits for legal services delivery [ASSUMPTION] |
| Are safeguards proportionate to the risk? | Partially — safeguards must be confirmed and implemented; this DPIA identifies the required safeguards (Section B.5) |
| Are data subjects' interests adequately protected? | Risk: data subjects whose personal data is in submitted documents receive no direct notice of AI processing; this is inherent to the processor model but must be addressed in client contracts and DPAs |
| Is the processing transparent? | Risk: transparency to data subjects (GDPR Articles 13/14) is the lawyer client's responsibility as controller, but Pickles GmbH should contractually require clients to meet these obligations |
B.4 Risk Identification
B.4.1 Risk Register
| # | Risk Description | Likelihood [ASSUMPTION] | Severity [ASSUMPTION] | Overall Risk |
|---|---|---|---|---|
| R1 | AI hallucination / inaccurate legal analysis leading to flawed legal advice and harm to client's legal interests | High (inherent in LLM technology) | High (legal consequences for data subjects) | HIGH |
| R2 | Confidential information disclosure — client data transmitted to third-party model provider without adequate contractual controls | Medium (depends on architecture) | Very High (professional liability; criminal exposure under §203 StGB) | HIGH |
| R3 | International data transfer without adequate safeguards — personal data transferred to non-EEA model provider | Medium (depends on provider) | High (GDPR violation; supervisory authority fine) | HIGH |
| R4 | Special category data processed without Article 9 basis — health, criminal, or other sensitive data in submitted documents | Medium (inherent in legal practice) | Very High | HIGH |
| R5 | Excessive data retention — query logs or submitted documents retained longer than necessary | Medium | High | HIGH |
| R6 | Automation bias — lawyer over-relies on AI analysis without adequate independent review; errors not caught | Medium-High (well-documented risk) | High | HIGH |
| R7 | Unauthorised access to platform — interception of queries or outputs containing sensitive legal data | Low-Medium | Very High | MEDIUM-HIGH |
| R8 | AI system used beyond intended purpose — lawyer submits data for use cases outside the system's tested and validated scope | Medium | Medium-High | MEDIUM |
| R9 | Model training on client data without consent or lawful basis — if Pickles GmbH uses submitted data to improve its models | Low (if policy controls in place) | Very High | HIGH (if it occurs) |
| R10 | Data subject unable to exercise rights — third-party individuals whose data is in submitted documents have no knowledge of or access to the processing | High (structural — inherent to processor model) | Medium | MEDIUM-HIGH |
B.4.2 Automated Decision-Making Risk (Article 22 and WP251)
[ASSUMPTION] SYS-04 produces legal analysis outputs for lawyer review. The key question is whether this constitutes "automated processing" under Article 22.
Per EDPB WP251 guidance: Article 22 applies where decisions are "based solely" on automated processing. However, DPIA obligations under Article 35(3)(a) apply more broadly to evaluations "based on" automated processing. The fact that a lawyer reviews the output does not automatically eliminate the DPIA obligation.
Article 22 assessment:
- If lawyers routinely endorse AI analysis without meaningful independent verification → may constitute de facto "solely automated" processing (WP251 warns against "fabricating human involvement")
- If lawyers conduct genuine independent review with authority to change the conclusion → Article 22(1) prohibition does not apply
- [LEGAL REVIEW REQUIRED] Pickles GmbH's human oversight policy (L1-3.4) must be designed to ensure genuine, meaningful human oversight rather than token review
B.5 Mitigation Controls
B.5.1 Technical Controls
| Control | Risk Addressed | Implementation [ASSUMPTION] | Priority |
|---|---|---|---|
| Prompt minimisation / anonymisation guidance | R1, R2, R4 | Train users to anonymise submitted documents; implement in-UI guidance prompting minimal data submission | High |
| No-training guarantee — contractual prohibition on using submitted data for model training | R9 | Include in DPA with lawyer clients; include in contract with third-party model provider | Critical |
| End-to-end encryption — data in transit and at rest | R2, R7 | TLS 1.3+ in transit; AES-256 at rest; implement before market placement | High |
| Short log retention — query logs deleted after minimum operational period | R5 | Implement automated deletion; default to 90-day retention maximum [ASSUMPTION] | High |
| No document retention — submitted documents not stored beyond session | R5 | Architecture: process in-memory only; no persistent storage of submitted documents [ASSUMPTION] | Critical |
| AI output labelling — all SYS-04 outputs marked as AI-generated (Article 50(2)) | R6 | In-product labelling; machine-readable marking | High |
| Mandatory review gate — UI prevents direct use/forwarding of output without user action | R6 | UX design: require explicit user review confirmation before output can be exported or shared | Medium-High |
| Access controls — role-based access; MFA; principle of least privilege | R7 | Implement at infrastructure and application layer | High |
| Separation of production and training data | R9 | Architectural separation; DBA/engineer access controls | Critical |
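The short-log-retention control above can be sketched as a scheduled purge job. The schema (`session_logs`, `created_at`) and the 90-day figure are assumptions carried over from the table; an actual implementation would follow the confirmed retention policy and run automatically (e.g. as a daily scheduled task) so deletion does not depend on manual housekeeping.

```python
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 90  # assumed maximum, per the control above


def purge_expired_logs(conn: sqlite3.Connection) -> int:
    """Delete session-log rows older than the retention period.

    Returns the number of purged rows, which can be recorded
    for audit purposes without retaining the log content itself.
    """
    cutoff = (datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)).isoformat()
    # ISO-8601 timestamps in a single timezone compare correctly as strings.
    cur = conn.execute("DELETE FROM session_logs WHERE created_at < ?", (cutoff,))
    conn.commit()
    return cur.rowcount
```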
B.5.2 Contractual Controls
| Control | Risk Addressed | Implementation |
|---|---|---|
| Data Processing Agreement (DPA) with lawyer clients — specifying Pickles GmbH's processor obligations, no-training guarantee, sub-processor disclosure | R2, R9, R10 | Include in standard client contract terms; see L2-5.3 |
| Sub-processor DPA with third-party model provider — including GDPR Article 28 terms and §43e BRAO minimum content | R2, R3 | Required before any client data is transmitted to model provider; see L2-5.3 |
| Standard Contractual Clauses (SCCs) — if model provider is non-EEA | R3 | Execute EU SCCs per Commission Decision 2021/914; conduct Transfer Impact Assessment |
| Lawyer client obligations — contract terms requiring lawyer clients to inform their own data subjects about AI processing (GDPR Articles 13/14 obligation) | R10 | Include in DPA / Terms of Service |
| §43e BRAO-compliant service agreement — with third-party model provider where attorney-client confidential information may be transmitted | R2 | Confirm provider will execute §43e-compliant terms |
B.5.3 Organisational Controls
| Control | Risk Addressed | Implementation [ASSUMPTION] |
|---|---|---|
| Human oversight policy (L1-3.4) — mandatory independent review of SYS-04 outputs before reliance | R6 | Implemented at governance level; reinforce through client-facing documentation |
| AI competence training — lawyer users trained on SYS-04 limitations, hallucination risk, and review requirements (EU AI Act Article 4) | R1, R6 | Include in onboarding and annual training [ASSUMPTION] |
| DPO involvement — DPO consulted on this DPIA and monitors ongoing compliance | R1–R10 | Mandatory per Article 35(2) and BDSG §67(3) |
| Periodic DPIA review — DPIA reviewed annually or when processing changes materially | All risks | Schedule next review — [PLACEHOLDER date] |
| Incident response procedure — procedures for data breach notification if personal data in queries is compromised (L3-6.2) | R7 | Cross-reference L3-6.2 Incident Response Playbook |
| User guidance — guidance to lawyer users on not submitting unnecessary personal data, and on when to anonymise before submission | R2, R4 | Include in onboarding documentation and in-product help |
B.6 Residual Risk Assessment
After applying the mitigation controls in Section B.5:
| # | Risk | Residual Likelihood | Residual Severity | Residual Risk | Accepted? |
|---|---|---|---|---|---|
| R1 | AI hallucination / inaccurate analysis | Medium (LLMs inherently hallucinate) | Medium (mitigated by mandatory lawyer review) | MEDIUM | Acceptable with continued monitoring |
| R2 | Confidential data disclosure to third-party provider | Low (if DPA and §43e controls implemented) | High (liability does not change) | MEDIUM | Acceptable with robust contractual controls |
| R3 | International transfer without adequate safeguards | Low (if SCCs and TIA completed) | High | MEDIUM | Acceptable if SCCs executed |
| R4 | Special category data without Article 9 basis | Low (managed via processor model and client DPA obligations) | High | MEDIUM | Acceptable — controller (lawyer client) carries primary obligation |
| R5 | Excessive data retention | Low (if no-document-retention architecture implemented) | Medium | LOW | Acceptable |
| R6 | Automation bias | Medium (cultural/behavioural risk) | Medium (mitigated by oversight requirements) | MEDIUM | Acceptable with ongoing monitoring and training |
| R7 | Unauthorised access | Low (if security controls implemented) | High | MEDIUM | Acceptable with penetration testing programme |
| R8 | Use outside intended purpose | Low-Medium | Medium | LOW-MEDIUM | Acceptable with terms enforcement |
| R9 | Model training on client data | Very Low (if no-training guarantee implemented) | Very High | LOW | Acceptable if guarantee contractually enforced |
| R10 | Data subject unable to exercise rights | Medium (structural; inherent to processor model) | Medium | MEDIUM | Acceptable — controller (lawyer client) responsible; requires DPA clause |
Overall residual risk assessment: MEDIUM — acceptable for deployment subject to implementation of all mitigation controls in Section B.5 and ongoing monitoring. [LEGAL REVIEW REQUIRED]
B.7 Prior Consultation — GDPR Article 36
GDPR Article 36 requires the controller to consult the supervisory authority (in Germany: the Bundesbeauftragte für den Datenschutz und die Informationsfreiheit or, for most private-sector controllers, the competent Landesbeauftragte for data protection) prior to processing where the DPIA indicates that the processing would result in a high risk in the absence of measures taken by the controller to mitigate that risk.
Assessment: The residual risk for SYS-04 is assessed as MEDIUM after mitigation controls. Prior consultation under Article 36 is therefore not currently indicated, provided the mitigation controls in Section B.5 are implemented before deployment.
[LEGAL REVIEW REQUIRED] If any mitigation control in Section B.5 cannot be implemented, or if additional risks are identified during architecture review, the residual risk may be higher and prior consultation may become required. DPO must confirm this assessment.
Note: BDSG Section 67(3) requires the data protection officer to be involved in carrying out the DPIA under the BDSG framework. [LEGAL REVIEW REQUIRED] BDSG §67 sits in Part 3 of the BDSG (implementing Directive (EU) 2016/680); confirm whether it applies to this processing in addition to GDPR Article 35.
B.8 DPO Consultation Record
| Field | Value |
|---|---|
| DPO name | [PLACEHOLDER — must be designated per BDSG §38 [ASSUMPTION A-008]] |
| Date of consultation | [PLACEHOLDER] |
| DPO opinion | [PLACEHOLDER — record DPO opinion on processing risks and mitigation adequacy] |
| DPO recommendations | [PLACEHOLDER] |
| Action taken on recommendations | [PLACEHOLDER] |
B.9 DPIA Sign-Off
| Role | Name | Date | Signature |
|---|---|---|---|
| DPIA Author | [PLACEHOLDER] | [PLACEHOLDER] | [PLACEHOLDER] |
| DPO | [PLACEHOLDER — mandatory] | [PLACEHOLDER] | [PLACEHOLDER] |
| AIRO / Head of Product | [PLACEHOLDER] | [PLACEHOLDER] | [PLACEHOLDER] |
| CEO (if prior consultation required) | [PLACEHOLDER — if Article 36 triggered] | [PLACEHOLDER] | [PLACEHOLDER] |
Document Control
| Field | Detail |
|---|---|
| Document ID | L2-5.2 |
| Applies to | SYS-04 (full DPIA); SYS-01, SYS-02, SYS-03 (threshold assessment only — full DPIAs to be conducted per system) |
| Next review | Before SYS-04 deployment; annually thereafter; when processing changes materially |
| Cross-references | L2-5.1 (Data Flow Map), L2-5.3 (Vendor Risk Assessment), L1-3.4 (Human Oversight Policy), L3-6.2 (Incident Response) |
| Regulatory basis | GDPR Articles 35, 36, 9, 22; BDSG §67; EDPB WP251; EU AI Act Article 26(9) |
| Assumptions relied upon | A-001, A-002, A-004, A-005, A-007, A-008 |