Human Oversight Policy
Project: Pickles GmbH — AI Governance Framework
Stage: Stage 2 — Governance Foundation
Status: Draft
Version: v1
Date: 2026-02-22
Assumptions: Built on outline assumptions — not verified against real Pickles GmbH data
Purpose
This policy defines the minimum human oversight requirements for all AI systems operated or deployed by Pickles GmbH [ASSUMPTION — A-001, A-003]. It establishes that no AI output produced by any Pickles GmbH system may constitute final legal advice delivered to a client without review and authorisation by a qualified human lawyer.
The policy exists because:
- The EU AI Act (Article 14) requires that high-risk AI systems be designed and operated to allow effective human oversight throughout their lifecycle.
- BRAK professional standards (Position Paper, December 2024) require that AI outputs in legal practice be subject to human review and make clear that responsibility for legal advice cannot be delegated to an AI system.
- GDPR Article 22 prohibits decisions based solely on automated processing that produce legal or similarly significant effects on individuals, without appropriate safeguards.
- Professional liability under German law rests with the qualified lawyer (Rechtsanwalt), not with the AI system or its provider.
Regulatory basis:
- EU AI Act Article 14 — Human oversight obligations for high-risk AI systems
- EU AI Act Article 13 — Transparency obligations including instructions for human oversight
- EU AI Act Article 50 — Disclosure when interacting with natural persons via AI; AI-generated content labelling obligations
- GDPR Article 22 — Right not to be subject to solely automated decisions with legal or significant effects
- BRAK Position Paper (December 2024) Section 2.1 — Lawyer responsibility; human review requirements
- BRAO Section 43a(1) — Lawyer's duty of professional independence
- BRAO Section 43e — IT service obligations for German law practices [ASSUMPTION — A-002]
- ISO/IEC 42001 Clause A.6 — Human oversight in AI system design
Section 1: Core Principle
Pickles GmbH AI systems are tools to support qualified human legal professionals. They do not replace human legal judgement. No AI output from any Pickles GmbH system may be delivered to a client as final legal advice or as the basis for a decision with legal or significant effects on an individual without a competent human lawyer having reviewed, verified, and taken professional responsibility for that output.
This principle applies regardless of:
- The quality or apparent accuracy of the AI output
- The risk tier of the system
- Time pressure or operational convenience
- Client expectations or requests
Regulatory basis summary:
| Principle | Regulatory Basis |
|---|---|
| No solely automated legal advice | GDPR Article 22(1); BRAK Position Paper Section 2.1 |
| Lawyer retains professional responsibility | BRAO Section 43a(1); BRAK Position Paper |
| Human oversight designed into system | EU AI Act Article 14; ISO/IEC 42001 Clause A.6 |
| Transparency about AI involvement | EU AI Act Articles 13, 50; BRAK Position Paper Section 3 |
Section 2: Prohibited Practices
The following practices are prohibited across all Pickles GmbH systems and must be technically and contractually prevented where feasible.
| Prohibited Practice | Basis |
|---|---|
| Delivering AI output directly to a client as legal advice without lawyer review | GDPR Article 22(1); BRAK Position Paper Section 2.1; BRAO Section 43a(1) |
| Designing a system workflow that permits AI output to bypass a human review step before client delivery | EU AI Act Article 14; BRAK Position Paper Section 2.1 |
| Representing AI output as the independent opinion of a qualified lawyer without disclosure that AI was used in its preparation | EU AI Act Article 50; BRAK Position Paper Section 3 |
| Using AI to make decisions on client matters — including decisions to proceed, settle, or advise — without lawyer oversight | GDPR Article 22(1); BRAO Section 43a(1) |
| Disabling or bypassing AI output labelling or transparency disclosures for customer-facing systems | EU AI Act Article 50 [ASSUMPTION — A-006] |
| Deploying a Tier 1 (High Risk) system without documented human oversight design reviewed by Legal | EU AI Act Article 14; L1-3.3 Gate 4 |
[LEGAL REVIEW REQUIRED] The precise boundaries of GDPR Article 22 in the context of AI-assisted legal work require legal analysis in specific deployment contexts. The obligation not to make decisions based solely on automated processing applies where the decision has legal or similarly significant effects — the application of this threshold to AI-assisted legal research or drafting outputs must be assessed per use case.
Section 3: Permitted Uses of AI Output
The following uses of AI output are permitted, subject to the review requirements in Section 4:
| Permitted Use | Conditions | Regulatory Note |
|---|---|---|
| Legal drafting assistance — generating draft clauses, letters, or documents | Mandatory lawyer review and amendment before use; AI involvement disclosed | BRAK Position Paper; EU AI Act Article 14 |
| Legal research — retrieving case law, legislation, commentary | Lawyer verifies all citations before reliance; output not delivered directly to client as research | BRAK Position Paper Section 2.2 |
| Document summarisation — condensing contracts, judgments, or regulatory texts | Lawyer reviews summary and confirms accuracy before relying on it | EU AI Act Article 14 |
| Internal analysis — pattern identification, administrative automation | Standard professional judgement applies; no client-facing output without review | EU AI Act Article 14 |
| Client-facing AI interaction — chatbots or research portals interacting with natural persons [ASSUMPTION — A-006] | Must comply with EU AI Act Article 50 disclosure; must not produce specific legal advice for specific facts | EU AI Act Articles 13, 50 |
Section 4: Review Requirements by Risk Tier
Tier 1 — High Risk Systems
For every AI output produced by a Tier 1 system that is to be relied upon in a legal matter or delivered to a client:
- Mandatory documented review. The reviewing lawyer must record: (a) that they have reviewed the AI output; (b) the date of review; (c) any amendments made; and (d) that they take professional responsibility for the output as reviewed and amended.
- Competence requirement. The reviewing lawyer must have sufficient competence in the relevant area of law to evaluate the AI output critically. Reliance on AI output in an area where the reviewing lawyer lacks competence does not satisfy this requirement. [LEGAL REVIEW REQUIRED — professional competence obligations under BRAO; BRAK Position Paper Section 2.3]
- Override capability. The system must permit the reviewing lawyer to reject, amend, or override any AI output at any stage. No workflow design may prevent override.
- Hallucination and citation check. For legal research or drafting outputs, the reviewing lawyer must verify all cited cases, statutes, and legal propositions independently before reliance. AI-generated citations must not be assumed accurate.
- No automated client delivery. Tier 1 system outputs must not be automatically forwarded to clients. A human step must intervene between AI output generation and any client delivery.
Minimum review record format (Tier 1):
| Field | Content |
|---|---|
| System ID | e.g., SYS-001 |
| Date of AI output | [date] |
| Reviewing lawyer | [name] |
| Date of review | [date] |
| Amendments made | [describe amendments, or "none — output used as generated after verification"] |
| Professional responsibility accepted | Yes / No |
| Notes | [any concerns flagged for Compliance Lead] |
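For illustration only, the review record above and the no-automated-client-delivery rule can be expressed directly in system design. The Python sketch below uses hypothetical names (Tier1ReviewRecord, may_deliver_to_client) that do not correspond to any existing Pickles GmbH component; it simply mirrors the fields of the table and refuses client delivery until a documented review with accepted professional responsibility exists.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Tier1ReviewRecord:
    """Hypothetical record mirroring the Tier 1 review table above."""
    system_id: str                         # e.g. "SYS-001"
    output_date: date                      # date of the AI output
    reviewing_lawyer: str                  # name of the reviewing lawyer
    review_date: Optional[date] = None     # date of review; None until reviewed
    amendments: str = ""                   # amendments made, or "none ..."
    responsibility_accepted: bool = False  # professional responsibility accepted
    notes: str = ""                        # concerns flagged for the Compliance Lead

def may_deliver_to_client(record: Optional[Tier1ReviewRecord]) -> bool:
    """Block client delivery unless a documented review exists and
    professional responsibility has been accepted."""
    return (
        record is not None
        and record.review_date is not None
        and record.responsibility_accepted
    )
```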
Tier 2 — Medium Risk Systems
For Tier 2 systems, human review is required before any output is relied upon, but the documentation requirement is less intensive than for Tier 1:
- Review before reliance. Outputs must be reviewed by a qualified professional before being relied upon in legal work or delivered to a client.
- Documentation. No formal review record is required unless the output is used in a Tier 1 context (in which case Tier 1 requirements apply).
- Error reporting. Any suspected error, hallucination, or unreliable output must be reported to the system owner or Compliance Lead using the incident pathway in L3-6.2.
- Disclosure. AI involvement must be disclosed where required under EU AI Act Article 50 (customer-facing systems).
Tier 3 — Low Risk Systems
Standard professional judgement applies. No additional review requirements beyond those applicable to any professional tool. Errors should be reported to Engineering Lead.
Section 5: Mandatory Disclaimer Requirements
5.1 Output Disclaimers
All AI outputs produced by Pickles GmbH systems that are made available to lawyer clients or their end clients must carry a disclaimer meeting the following minimum requirements [ASSUMPTION — A-002]:
Minimum required disclaimer content:
1. The output was generated or assisted by an artificial intelligence system
2. Whether or not the output has been independently verified by a qualified lawyer
3. The output does not constitute legal advice
4. The recipient should seek qualified legal advice before relying on the output for any legal matter
Suggested standard disclaimer text (to be reviewed by Legal before operational use) [LEGAL REVIEW REQUIRED]:
This document was produced with the assistance of an artificial intelligence system operated by Pickles GmbH. It has not been independently verified by a qualified lawyer unless explicitly stated. It does not constitute legal advice and should not be relied upon as such. Recipients should seek advice from a qualified lawyer before acting on any content in this document.
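As a purely illustrative sketch of how the suggested text could be applied in practice, the snippet below appends the disclaimer to an output and states verification explicitly only when it has occurred. The function name attach_disclaimer and the verification line are assumptions, and the disclaimer wording itself remains subject to the legal review noted above.

```python
DISCLAIMER = (
    "This document was produced with the assistance of an artificial intelligence "
    "system operated by Pickles GmbH. It has not been independently verified by a "
    "qualified lawyer unless explicitly stated. It does not constitute legal advice "
    "and should not be relied upon as such. Recipients should seek advice from a "
    "qualified lawyer before acting on any content in this document."
)

def attach_disclaimer(output_text: str, verified_by: str | None = None) -> str:
    """Append the standard disclaimer and, where verification has occurred,
    state it explicitly as the disclaimer text requires."""
    verification = f"\nReviewed and verified by: {verified_by}" if verified_by else ""
    return f"{output_text}\n\n---\n{DISCLAIMER}{verification}"
```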
5.2 AI Output Labelling (EU AI Act)
Under EU AI Act Article 50(1), any AI system that interacts directly with natural persons must inform those persons that they are interacting with an AI system, unless this is obvious from the context [ASSUMPTION — A-006].
Under EU AI Act Article 50(2) and (5), AI-generated content must be labelled in a machine-readable format.
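Article 50 does not prescribe a particular label format. As an assumption for illustration, the sketch below attaches a small JSON metadata record to each generated artefact; the function name and field names (ai_generated, system_id, model_version, and so on) are hypothetical and not a prescribed schema.

```python
import json
from datetime import datetime, timezone

def build_content_label(system_id: str, model_version: str) -> str:
    """Return a machine-readable label for an AI-generated artefact.
    Field names are illustrative assumptions, not a prescribed schema."""
    label = {
        "ai_generated": True,
        "operator": "Pickles GmbH",
        "system_id": system_id,          # e.g. "SYS-001"
        "model_version": model_version,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "regulatory_basis": "EU AI Act Article 50",
    }
    return json.dumps(label)
```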
Required labelling for customer-facing systems:
| Obligation | Trigger | Required Action | Regulatory Basis |
|---|---|---|---|
| Disclosure of AI interaction | Any system interacting with natural persons | Disclosure at or before first interaction | EU AI Act Article 50(1) |
| AI-generated content label | Any AI-generated output including text, audio, image | Machine-readable label from August 2026 | EU AI Act Article 50 |
| Instructions for use | High-risk systems | User-facing documentation explaining purpose, limitations, and oversight requirements | EU AI Act Article 13 |
| Transparency to lawyer clients | All customer-facing systems [ASSUMPTION — A-006] | Contractual disclosure in service agreement; BRAK Position Paper alignment | BRAK Position Paper Section 3 |
Section 6: Responsibility Allocation
6.1 Summary Table
| Responsibility | Pickles GmbH | Lawyer Client [ASSUMPTION — A-002] |
|---|---|---|
| AI system design and operation | ✓ | — |
| Accuracy of AI model outputs (best efforts) | ✓ | — |
| Disclosure that AI is used in service delivery | ✓ | — |
| Compliance with EU AI Act obligations as provider/deployer | ✓ | — |
| Data processing obligations as processor [ASSUMPTION — A-007] | ✓ | — |
| Review and verification of AI output before reliance | — | ✓ |
| Professional responsibility for legal advice given to clients | — | ✓ |
| Compliance with BRAO/BRAK professional obligations | — | ✓ |
| Decision to use AI output in a specific client matter | — | ✓ |
| Disclosure to end clients that AI was used | Provides disclosure tools and templates | ✓ (final responsibility) |
6.2 Pickles GmbH Obligations
Pickles GmbH, as the operator of AI systems used by German legal professionals, accepts the following responsibilities:
- System design: Design AI systems with mandatory human oversight steps that cannot be bypassed (a minimal workflow sketch follows this list)
- Transparency: Disclose clearly in service agreements and product interfaces that AI systems are used and their limitations
- Documentation: Maintain technical documentation, risk assessments, and monitoring records as required under EU AI Act and GDPR
- Incident response: Maintain and activate an Incident Response Playbook (L3-6.2) for AI system failures, errors, and data breaches
- Limitation disclosure: Proactively communicate known model limitations, hallucination risks, and known error types to lawyer clients
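One way to make the review step structurally unavoidable is to model output handling as a state machine with no path from generation to delivery that skips review. The sketch below is a minimal illustration under that assumption; the state names and transition table are hypothetical, not an existing Pickles GmbH design.

```python
from enum import Enum, auto

class OutputState(Enum):
    GENERATED = auto()
    UNDER_REVIEW = auto()
    APPROVED = auto()    # reviewing lawyer has accepted responsibility
    REJECTED = auto()
    DELIVERED = auto()

# There is deliberately no transition from GENERATED to DELIVERED:
# client delivery cannot bypass the human review step.
ALLOWED_TRANSITIONS = {
    OutputState.GENERATED: {OutputState.UNDER_REVIEW},
    OutputState.UNDER_REVIEW: {OutputState.APPROVED, OutputState.REJECTED},
    OutputState.APPROVED: {OutputState.DELIVERED},
    OutputState.REJECTED: set(),
    OutputState.DELIVERED: set(),
}

def transition(current: OutputState, target: OutputState) -> OutputState:
    """Refuse any transition that is not explicitly allowed."""
    if target not in ALLOWED_TRANSITIONS[current]:
        raise ValueError(f"{current.name} -> {target.name} is not permitted")
    return target
```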
6.3 Lawyer Client Obligations [ASSUMPTION — A-002]
Pickles GmbH's service agreements must require that lawyer clients using Pickles GmbH AI systems accept the following obligations [ASSUMPTION — A-007] [LEGAL REVIEW REQUIRED]:
- Mandatory review: All AI outputs used in legal matters are reviewed by a competent qualified lawyer before reliance
- Professional responsibility: The lawyer accepts professional responsibility for any output they rely upon or deliver to their client
- Disclosure: The lawyer discloses to their end clients that AI was used in the preparation of any advice or document, where required by BRAK guidelines or professional rules
- Training: Lawyers using Pickles GmbH systems have sufficient competence to evaluate AI outputs critically in the relevant area of law
- Anwaltsgeheimnis compliance: Lawyers comply with their professional secrecy obligations when inputting client data into Pickles GmbH systems
Section 7: Special Considerations
7.1 Client-Facing AI Interaction [ASSUMPTION — A-006]
Where a Pickles GmbH system allows direct AI-to-human interaction with lawyer clients or their end clients (e.g., a legal research portal or chatbot interface):
- The system must disclose at the start of every interaction that the user is communicating with an AI (EU AI Act Article 50(1))
- The system must be designed to refuse to answer questions that require specific legal advice for specific facts unless a qualified human lawyer is in the loop (a minimal guard is sketched after this list)
- Client-facing systems must be classified as Tier 2 or Tier 1 (not Tier 3) and must comply with all relevant transparency requirements
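A minimal guard for such an interface could disclose the AI interaction on the first turn and refuse fact-specific advice unless a qualified human lawyer is in the loop. The sketch below is an assumption for illustration; the keyword heuristic in needs_lawyer is only a placeholder, and a production system would need far more robust routing.

```python
AI_DISCLOSURE = (
    "You are interacting with an AI system operated by Pickles GmbH. "
    "This service does not provide legal advice."
)

# Placeholder heuristic; a production system would need more robust routing.
_ADVICE_MARKERS = ("my case", "my contract", "should i", "can i sue", "my employer")

def needs_lawyer(message: str) -> bool:
    """Crude placeholder check for requests amounting to specific legal advice."""
    text = message.lower()
    return any(marker in text for marker in _ADVICE_MARKERS)

def handle_user_message(message: str, first_turn: bool, lawyer_in_loop: bool) -> str:
    """Disclose the AI interaction up front and refuse fact-specific advice
    unless a qualified human lawyer is in the loop."""
    prefix = AI_DISCLOSURE + "\n\n" if first_turn else ""
    if needs_lawyer(message) and not lawyer_in_loop:
        return prefix + (
            "I cannot answer questions that require legal advice on specific facts. "
            "Please consult a qualified lawyer."
        )
    return prefix + "General information response goes here."  # placeholder
```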
7.2 AI in Criminal Proceedings [LEGAL REVIEW REQUIRED]
The use of AI in criminal proceedings warrants heightened caution. BRAK notes that the stakes in criminal matters (personal liberty) are particularly high. Any Pickles GmbH system used in criminal proceedings must be classified as Tier 1 (High Risk) regardless of other criteria, and the human oversight requirements in Section 4 (Tier 1) must be applied with particular rigour.
7.3 Third-Party Model Limitations [ASSUMPTION — A-004]
Where Pickles GmbH uses a third-party AI model provider, Pickles GmbH cannot fully control model behaviour, training data, or model updates. Lawyer clients must be informed that:
- The underlying model may change over time
- Pickles GmbH monitors for performance degradation per L3-6.1 but cannot guarantee consistent output quality across all model versions
- Human oversight is the primary safeguard against model error or degradation
7.4 Override and Refusal
No system designed or operated by Pickles GmbH may prevent a human reviewer from:
- Refusing to use an AI output
- Amending an AI output before use
- Raising concerns about an AI output through the incident pathway (L3-6.2)
- Switching off or suspending their use of the system pending clarification
Regulatory Basis Appendix
| Policy Section | Requirement | Regulatory Basis |
|---|---|---|
| Section 1 (Core Principle) | No solely automated legal decisions | GDPR Article 22(1) |
| Section 1 (Core Principle) | Human oversight designed in | EU AI Act Article 14; ISO/IEC 42001 Clause A.6 |
| Section 1 (Core Principle) | Lawyer retains professional responsibility | BRAO Section 43a(1); BRAK Position Paper Section 2.1 |
| Section 2 (Prohibited Practices) | No bypass of review step | EU AI Act Article 14 |
| Section 2 (Prohibited Practices) | No solely automated decisions with legal effects | GDPR Article 22(1) |
| Section 2 (Prohibited Practices) | No suppression of AI disclosure | EU AI Act Article 50 |
| Section 4 (Tier 1 Review) | Documented human oversight | EU AI Act Article 14; BRAK Position Paper |
| Section 4 (Tier 1 Review) | Override capability | EU AI Act Article 14(4) |
| Section 5 (Disclaimers) | Disclosure of AI use | EU AI Act Article 50; BRAK Position Paper Section 3 |
| Section 5 (Disclaimers) | AI-generated content labelling | EU AI Act Article 50 (from August 2026) |
| Section 5 (Disclaimers) | Instructions for use | EU AI Act Article 13 |
| Section 6 (Responsibility) | Provider obligations before deployment | EU AI Act Articles 16-23 |
| Section 6 (Responsibility) | Processor obligations | GDPR Article 28 [ASSUMPTION — A-007] |
| Section 6 (Responsibility) | IT service obligations | BRAO Section 43e [ASSUMPTION — A-002] |
This policy is a governance control document. It is binding on all Pickles GmbH staff and must be incorporated by reference into service agreements with lawyer clients [ASSUMPTION — A-002].
[LEGAL REVIEW REQUIRED] This policy requires review by a qualified German lawyer with expertise in EU AI Act, GDPR, and BRAO/BRAK professional obligations before operational implementation. In particular: the application of GDPR Article 22 to AI-assisted legal tools; the scope of Section 43e BRAO obligations where third-party model providers are used; and the appropriate disclaimer text for client-facing AI outputs.