
Transparency Disclosure Framework

Project: Pickles GmbH — AI Governance Framework
Stage: Stage 3 — Regulatory Alignment
Status: Draft
Version: v1
Date: 2026-02-26
Assumptions: Built on outline assumptions — not verified against real Pickles GmbH data


Purpose

This framework defines Pickles GmbH's transparency obligations under the EU AI Act and related regulatory instruments, and sets out how those obligations are operationalised across Pickles GmbH's products, client communications, and website.

Transparency in this context operates at three levels:

1. System-level transparency — what Pickles GmbH must disclose to deployers (its lawyer clients) about how its AI systems work (EU AI Act, Article 13)
2. User-level transparency — what must be disclosed to persons interacting with AI systems (EU AI Act, Article 50)
3. Professional transparency — what lawyers using Pickles GmbH tools must disclose to their own clients under BRAK professional rules (BRAK AI Position Paper, Section 4)

[ASSUMPTION] This framework is built on the assumed product portfolio (A-001) and assumed client base of legal professionals (A-002). All disclosure obligations must be reviewed against the actual product architecture and actual client contracts.

[LEGAL REVIEW REQUIRED] This framework constitutes a planning document, not legal advice. Transparency obligations under the EU AI Act are subject to phased implementation (see Section 2.1). A qualified practitioner must confirm applicability to each Pickles GmbH product before reliance.


1. Regulatory Basis

| Instrument | Relevant Provisions | Topic |
| --- | --- | --- |
| EU AI Act (Regulation (EU) 2024/1689) | Article 13 | Transparency to deployers of high-risk systems |
| EU AI Act | Article 50(1) | Disclosure that user is interacting with an AI system |
| EU AI Act | Article 50(2) | Machine-readable marking of AI-generated synthetic content |
| EU AI Act | Article 50(5) | Timing of disclosure (first interaction or exposure) |
| EU AI Act | Article 4 | AI competence — providers and deployers must ensure staff have sufficient AI literacy |
| EU AI Act | Article 6(4) | Provider's self-assessment documentation (non-high-risk Annex III systems) |
| GDPR (Regulation (EU) 2016/679) | Articles 13, 14 | Data subject information at collection |
| GDPR | Article 22 | Disclosure of automated decision-making (if applicable) |
| BRAK AI Position Paper (December 2024) | Section 4 | Professional transparency obligations for lawyers using AI |
| BRAK AI Position Paper | Section 3 | Attorney-client confidentiality — indirect transparency obligation |

2. Application Timeline

2.1 EU AI Act Implementation Dates

Transparency obligations are being phased in. The following dates are relevant to Pickles GmbH's planning:

| Date | Obligation | Basis |
| --- | --- | --- |
| 2 February 2025 (active) | Article 4 AI competence obligations apply | Article 113, point (a); Article 4 |
| 2 August 2026 | Article 50 transparency obligations fully apply (AI interaction disclosure, synthetic content marking) | Article 113 (general application date) |
| 2 August 2026 | Full Chapter III high-risk obligations apply (incl. Article 13 deployer information) | Article 113 (general application date) |

Practical implication: Pickles GmbH should implement Article 50 disclosures by 2 August 2026 at the latest, and should treat earlier voluntary implementation as best practice demonstrating good governance. Article 4 (AI competence) obligations apply now.


3. System-Level Transparency — Instructions to Deployers (Article 13)

3.1 When Article 13 Applies

Article 13 of the EU AI Act applies to high-risk AI systems only. It requires providers to accompany their systems with instructions for use in digital format, containing specific information for deployers.

[ASSUMPTION] Based on the provisional risk classification in L2-4.1, SYS-01, SYS-02, and SYS-03 are provisionally classified as limited-risk and are not subject to Article 13 in its full form. SYS-04 (Legal Analysis Tool) carries a classification uncertainty and may be subject to Article 13 if determined high-risk [LEGAL REVIEW REQUIRED].

For limited-risk systems, Pickles GmbH should nonetheless provide equivalent information as a matter of good practice and to satisfy client due diligence expectations.

3.2 Required Content — Article 13(3) Information for Deployers

The following information must be provided in deployer-facing documentation (user manuals, onboarding materials, technical documentation) for any system classified as high-risk, and is recommended for all systems:

| Article 13(3) Element | Requirement | Pickles GmbH Implementation |
| --- | --- | --- |
| 13(3)(a) — Provider identity | Name, contact details of provider | Include in all product documentation and Terms of Service [ASSUMPTION] |
| 13(3)(b)(i) — Intended purpose | Clear description of what the system is for | Included in product Terms of Service and Technical Documentation (L2-4.2 Section 1.1) [ASSUMPTION] |
| 13(3)(b)(ii) — Accuracy metrics | Level of accuracy, robustness, cybersecurity declared; known circumstances affecting performance | Publish accuracy and limitation disclosures per system — see Section 5 of this document |
| 13(3)(b)(iii) — Known risks | Known/foreseeable circumstances that may lead to health, safety or fundamental rights risks | Include in deployer-facing documentation — see Section 5 |
| 13(3)(b)(iv) — Explainability | Technical capabilities to explain output (if applicable) | Describe output interpretation guidance in user documentation [ASSUMPTION] |
| 13(3)(b)(v) — Performance by group | Performance for specific persons/groups (if applicable) | Disclose if performance varies by language, jurisdiction, document type [ASSUMPTION] |
| 13(3)(b)(vi) — Input data specs | Training/validation/testing data information | Reference Technical Documentation Pack (L2-4.2 Section 2.4) |
| 13(3)(b)(vii) — Output interpretation | Information enabling deployers to interpret outputs appropriately | Include in user guides and in-product help [ASSUMPTION] |
| 13(3)(c) — Pre-determined changes | Changes to the system and its performance pre-determined at the initial conformity assessment, if any | Document in change-management records and release notes [ASSUMPTION] |
| 13(3)(d) — Human oversight measures | Technical measures facilitating output interpretation and human oversight | Describe in user documentation; cross-reference L1-3.4 Human Oversight Policy |
| 13(3)(e) — Computational needs and lifecycle | Hardware requirements; expected system lifetime; maintenance schedule | Include in technical specifications and Service Level Agreement [ASSUMPTION] |
| 13(3)(f) — Log collection | Description of logging mechanisms for deployers to collect and interpret logs | Include in technical documentation and onboarding materials [ASSUMPTION] |
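As a documentation-control aid, the completeness of a deployer information pack against these elements can be checked mechanically. The following sketch is hypothetical: the element keys and the dictionary-based record are illustrative assumptions, not a real Pickles GmbH schema.

```python
# Hypothetical sketch: check that a deployer documentation record covers every
# Article 13(3) element tracked above. Keys and record shape are assumptions.

REQUIRED_ELEMENTS = {
    "13(3)(a)": "provider identity and contact details",
    "13(3)(b)(i)": "intended purpose",
    "13(3)(b)(ii)": "accuracy, robustness and cybersecurity levels",
    "13(3)(b)(iii)": "known or foreseeable risks",
    "13(3)(b)(iv)": "explainability capabilities",
    "13(3)(b)(v)": "performance for specific persons or groups",
    "13(3)(b)(vi)": "input data specifications",
    "13(3)(b)(vii)": "output interpretation information",
    "13(3)(d)": "human oversight measures",
    "13(3)(e)": "computational needs and expected lifetime",
    "13(3)(f)": "log collection mechanisms",
}

def missing_elements(documentation: dict) -> list:
    """Return Article 13(3) elements not yet covered by the documentation record."""
    return [
        f"{ref}: {desc}"
        for ref, desc in REQUIRED_ELEMENTS.items()
        if not documentation.get(ref)  # absent or empty section text counts as a gap
    ]

# Example: a partially completed record for a hypothetical system.
draft = {"13(3)(a)": "Pickles GmbH, Berlin", "13(3)(b)(i)": "Legal research assistant"}
gaps = missing_elements(draft)
```

A check like this could run in a documentation pipeline so that a release is flagged whenever a mandatory section is empty.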

3.3 Format and Delivery of Deployer Information

[ASSUMPTION] Pickles GmbH delivers client-facing documentation via:

- Onboarding documentation package (digital format per Article 13(2))
- In-product help centre and knowledge base
- Product-specific Terms of Service and Acceptable Use Policy

Required minimum format: Article 13(2) requires "concise, complete, correct and clear information that is relevant, accessible and comprehensible to deployers" in "an appropriate digital format or otherwise."

[LEGAL REVIEW REQUIRED] Accessibility requirements under Directives (EU) 2016/2102 and (EU) 2019/882 apply to information provided to deployers. Confirm documentation meets accessibility standards.


4. User-Level Transparency — Article 50 Obligations

4.1 Article 50(1) — Disclosure That a Person is Interacting with an AI System

Trigger: The system is "intended to interact directly with natural persons."

[ASSUMPTION] Pickles GmbH's AI systems interact directly with lawyers and paralegals as users. Article 50(1) applies to all four assumed AI systems.

Obligation: Persons must be informed that they are interacting with an AI system, at the latest at the time of the first interaction.

Exemption: Disclosure is not required where "it is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use."

[LEGAL REVIEW REQUIRED] Whether the AI nature of a legal AI tool is "obvious" to lawyer users requires legal assessment. Given that (i) lawyers are sophisticated users, (ii) the product is marketed as an AI tool, and (iii) the EU AI Act's "reasonable person" test applies, it is possible that the exemption applies. However, Pickles GmbH should implement disclosure regardless as a conservative compliance approach.

Implementation Requirements for Article 50(1):

| Implementation Point | Requirement | Action |
| --- | --- | --- |
| First login / onboarding | Disclosure must be made at first interaction | Display AI disclosure during account setup / first session [ASSUMPTION] |
| Disclosure language | Clear, distinguishable, accessible | Use plain language — see Section 4.4 for model wording |
| Accessibility | Must conform to accessibility requirements (Article 50(5)) | Ensure disclosure is accessible to users with disabilities [ASSUMPTION] |
| Timing | At the latest at first interaction | Do not delay until after initial engagement with outputs |
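The "at the latest at first interaction" requirement reduces to a simple gating rule in product terms: show the disclosure before any AI output is delivered in a user's first session, and record the acknowledgement. The sketch below is a hypothetical illustration; the session model, in-memory storage, and disclosure text are assumptions, not the actual Pickles GmbH implementation.

```python
# Hypothetical sketch of Article 50(1) disclosure gating: surface the AI
# disclosure no later than the first interaction and record acknowledgement.
from typing import Optional

AI_DISCLOSURE = (
    "You are interacting with an AI system. Outputs are AI-generated "
    "and must be independently verified before professional use."
)

class DisclosureGate:
    def __init__(self):
        self._acknowledged = set()  # user IDs that have already seen the disclosure

    def start_session(self, user_id: str) -> Optional[str]:
        """Return the disclosure text on a user's first session, None afterwards."""
        if user_id in self._acknowledged:
            return None
        # Record acknowledgement before any AI output is shown to the user.
        self._acknowledged.add(user_id)
        return AI_DISCLOSURE

gate = DisclosureGate()
first = gate.start_session("user-42")   # first session: disclosure is returned
second = gate.start_session("user-42")  # subsequent sessions: no disclosure needed
```

In production the acknowledgement record would live in persistent storage so the disclosure timestamp is auditable, not in process memory.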

4.2 Article 50(2) — Marking of AI-Generated Synthetic Content

Trigger: The AI system "generates synthetic audio, image, video or text content."

[ASSUMPTION] The following Pickles GmbH systems generate synthetic text content:

- SYS-02 (Document Drafting Tool) — generates draft text
- SYS-03 (Document Summarisation Tool) — generates summary text
- SYS-04 (Legal Analysis Tool) — generates analysis text

Obligation: Outputs must be "marked in a machine-readable format and detectable as artificially generated or manipulated." Technical solutions must be effective, interoperable, robust, and reliable.

Exemption — Article 50(2) assistive function carve-out: The marking obligation does not apply where the AI system "performs an assistive function for standard editing or does not substantially alter the input data provided by the deployer or the semantics thereof."

[LEGAL REVIEW REQUIRED] For SYS-02 (Document Drafting): where the system completes or reformats text provided by the user, this exemption may partially apply. Where the system generates substantively new text, it does not. The boundary must be determined per system and per use case.

Implementation Requirements for Article 50(2):

| Implementation Point | Requirement | Action |
| --- | --- | --- |
| Machine-readable marking | AI-generated outputs marked in machine-readable format | Implement metadata tagging on AI-generated content [ASSUMPTION — technical architecture review required] |
| User-visible marking | Good practice: human-readable indicator alongside machine-readable marking | Display "AI-generated" label on outputs in UI [ASSUMPTION] |
| Technical standard | Must be "interoperable, robust, and reliable" | Monitor for published EU technical standards and codes of practice (Article 50(7)) |
| Scope | Applies per output, not per session | Implement marking at output level, not session level |
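One common approach to per-output machine-readable marking is to attach structured provenance metadata that travels with each generated artefact. The sketch below is a hypothetical illustration only: the field names are assumptions, and a real implementation should track the technical standards and codes of practice anticipated by Article 50(7) rather than invent an ad-hoc format.

```python
# Hypothetical sketch of per-output marking under Article 50(2): wrap each
# generated text in a machine-readable provenance envelope. Field names are
# illustrative assumptions, not a standardised marking format.
import json
from datetime import datetime, timezone

def mark_output(text: str, system_id: str) -> dict:
    """Attach machine-readable provenance metadata to one generated output."""
    return {
        "content": text,
        "provenance": {
            "ai_generated": True,           # explicit machine-readable flag
            "generator": system_id,         # e.g. "SYS-02"
            "provider": "Pickles GmbH",
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }

# Marking happens at output level, matching the per-output scope noted above.
marked = mark_output("Draft clause: ...", "SYS-02")
payload = json.dumps(marked)  # serialised form travels with the output
```

The same envelope can drive the user-visible "AI-generated" label, so the human-readable and machine-readable markings never drift apart.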

4.3 Article 50(3) and 50(4) — Other Transparency Obligations

These provisions impose transparency obligations in respect of emotion recognition and biometric categorisation systems (Article 50(3)), and of deep fakes and AI-generated text published to inform the public on matters of public interest (Article 50(4)). [ASSUMPTION] These do not apply to the assumed Pickles GmbH product portfolio. If any system is modified to include these functions, this assessment must be updated.

4.4 Model Disclosure Wording

The following model wording is suggested for use in Pickles GmbH's user interfaces and documentation. [LEGAL REVIEW REQUIRED] — wording should be reviewed by a legal practitioner before finalisation.

For AI interaction disclosure (Article 50(1)):

"You are interacting with an AI system. The responses you receive are generated by artificial intelligence and may contain errors or inaccuracies. You should always independently verify any legal information before relying on it in professional advice or submissions."

For AI-generated content label (Article 50(2)):

"AI-generated content. This output was produced by an AI system. Please review carefully before use."

For account onboarding (combining both):

"Pickles GmbH uses artificial intelligence to provide legal research, drafting, and analysis assistance. All outputs are AI-generated and must be independently reviewed by a qualified lawyer before use in professional advice, submissions, or client-facing materials. By using this service, you confirm that you understand the role of AI in generating outputs and will apply appropriate professional judgement."


5. Client-Facing Limitation Disclosures

5.1 Purpose

Beyond the Article 50 trigger-based disclosures, Pickles GmbH should publish structured limitation disclosures for each product. These serve multiple purposes:

- Satisfy Article 13(3)(b)(ii) and (iii) requirements for deployer information
- Support Pickles GmbH clients' own professional obligations under BRAK Section 2.1 (lawyer duty of independent review)
- Manage professional liability expectations
- Demonstrate good governance for enterprise procurement

5.2 Standard Limitation Disclosure Structure

Each AI system should have a published limitation disclosure covering:

| Disclosure Element | Description |
| --- | --- |
| Hallucination risk | AI systems can generate plausible but factually incorrect content. Frequency and circumstances to be disclosed per system. |
| Jurisdictional scope | Whether the system covers German law only, EU law, or other jurisdictions; and known gaps in coverage [ASSUMPTION] |
| Temporal currency | Training data cut-off date; how frequently the system is updated with new case law or legislation [ASSUMPTION] |
| Language limitations | Whether the system operates in German, English, or other languages; quality differences by language [ASSUMPTION] |
| Output type limitations | What the system does and does not produce; e.g., "does not provide legal advice; does not predict outcomes" |
| Human review requirement | Explicit statement that outputs require independent lawyer review before use |
| Known failure modes | Documented cases where the system performs below expected accuracy (e.g., regional courts, minority languages, specialist practice areas) [ASSUMPTION] |
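Keeping these per-system disclosures in a structured record, rather than free text, makes it easy to render them consistently in product documentation and to spot incomplete entries. The sketch below is hypothetical: the field names mirror the table above, and the SYS-03 example values are placeholder assumptions pending real performance data.

```python
# Hypothetical sketch: a structured per-system limitation disclosure record
# mirroring the disclosure elements above. All example values are assumptions.
from dataclasses import dataclass, field

@dataclass
class LimitationDisclosure:
    system_id: str
    hallucination_risk: str
    jurisdictional_scope: str
    training_cutoff: str
    languages: list = field(default_factory=list)
    output_limitations: str = "Does not provide legal advice; does not predict outcomes."
    human_review_required: bool = True  # always stated explicitly per the table above
    known_failure_modes: list = field(default_factory=list)

# Placeholder entry for the assumed Document Summarisation Tool.
sys03 = LimitationDisclosure(
    system_id="SYS-03",
    hallucination_risk="Summaries may omit or misstate holdings; verify against source.",
    jurisdictional_scope="German law only [ASSUMPTION]",
    training_cutoff="To be completed from actual system records",
    languages=["de", "en"],
)
```

A record like this can be validated (no empty mandatory fields) before the disclosure page for a system is published.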

[ASSUMPTION] The specific limitation disclosures for each system must be completed based on actual system performance data, which is not available at the time of this document.

5.3 BRAK Transparency Context for Lawyer Clients

The BRAK AI Position Paper (Section 4) notes that while neither the BRAO nor BORA currently impose a professional obligation on lawyers to inform clients about their use of AI tools, transparency obligations may arise from:

- Contract law (duty to inform clients about essential aspects of case handling)
- The Unfair Competition Act (UWG)
- Best practice and risk management

The BRAK paper advises: "it is advisable to use AI tools transparently and, in case of doubt, to include a contractual provision with clients."

Implication for Pickles GmbH: Pickles GmbH's client-facing materials should support its lawyer clients in meeting their own evolving transparency obligations. This may include:

- Template client disclosure language for law firms using Pickles GmbH tools [ASSUMPTION]
- Guidance on how to explain AI-assisted services to clients [ASSUMPTION]
- Client-facing summaries suitable for lawyer websites or engagement letters [ASSUMPTION]

[ASSUMPTION] This support has not been confirmed as part of the Pickles GmbH product or service offering.


6. Website and Public-Facing Disclosures

[ASSUMPTION] Pickles GmbH operates a website visible to prospective and existing clients.

6.1 Recommended Disclosures

The following disclosures are recommended as a matter of good practice and to support client due diligence:

| Disclosure | Location | Content |
| --- | --- | --- |
| AI system description | Product pages | Description of AI components in each product, intended purpose, and human oversight model |
| Data handling summary | Privacy Policy | How client and end-user data is processed (feeds into L2-5.1 and L2-5.2) |
| EU AI Act compliance statement | Governance / Trust page | Statement of Pickles GmbH's approach to EU AI Act compliance |
| Limitation disclosures | Product documentation / Help centre | Per-system limitation disclosures (Section 5.2) |
| Sub-processor list | Privacy Policy | List of third-party AI model providers and other sub-processors (GDPR Article 13/14 and Article 28 requirement) |

6.2 EU AI Act Compliance Statement — Suggested Content

[ASSUMPTION] The following suggested content for a public governance or trust page is a starting point only and must be verified against Pickles GmbH's actual compliance posture before publication:

"Pickles GmbH is committed to responsible AI development and compliance with the EU AI Act (Regulation (EU) 2024/1689). Our AI systems have been assessed against the EU AI Act's risk classification framework. We operate transparency practices including [AI disclosure at user interaction / AI-generated content labelling]. Our governance framework includes an AI Risk and Information Officer, human oversight policies, and a risk management process. For further information, please contact [contact details]."

[LEGAL REVIEW REQUIRED] A public compliance statement must be accurate and not misleading. It should only be published after legal review and after verifying that the stated practices are in place.


7. Article 4 — AI Competence Obligations and Transparency

Article 4 requires Pickles GmbH to ensure that its personnel and persons operating AI systems on its behalf have sufficient AI competence. This includes:

Internal: Pickles GmbH staff working with AI systems must be trained on AI capabilities, risks, and limitations. See L1-3.4 Human Oversight Policy.

External (client-facing): Pickles GmbH has a commercial and governance interest in supporting its lawyer clients' own AI competence. BRAK Section 5.1 describes the Article 4 competence obligation as applying to lawyers as operators. Pickles GmbH's products and documentation should:

- Clearly explain what the AI system does and does not do
- Highlight the need for qualified review
- Not imply a level of accuracy or reliability that the system cannot sustain

[ASSUMPTION] Training materials or AI competence resources for clients have not been confirmed as part of the Pickles GmbH product offering.


8. Compliance Checklist — Transparency Obligations

| Obligation | Regulatory Basis | Implementation Status | Target Date | Owner [ASSUMPTION] |
| --- | --- | --- | --- | --- |
| Ensure Pickles GmbH staff have AI competence | EU AI Act Article 4 (active from 2 Feb 2025) | ☐ Not confirmed | Immediate | CEO / Head of HR [ASSUMPTION] |
| Disclose to users that they are interacting with AI | EU AI Act Article 50(1) (applies from 2 Aug 2026) | ☐ Not yet implemented | By 2 Aug 2026 | Head of Product [ASSUMPTION] |
| Mark AI-generated text outputs in machine-readable format | EU AI Act Article 50(2) (applies from 2 Aug 2026) | ☐ Not yet implemented | By 2 Aug 2026 | Head of Engineering [ASSUMPTION] |
| Assess Article 50(2) assistive exemption per system | EU AI Act Article 50(2) | ☐ Not yet assessed | Before 2 Aug 2026 | Legal / Head of Product [ASSUMPTION] |
| Publish deployer information per Article 13(3) (if high-risk) | EU AI Act Article 13 (applies from 2 Aug 2026) | ☐ Pending SYS-04 risk classification | After risk classification | Legal / Head of Product [ASSUMPTION] |
| Complete Article 6(4) self-assessment documentation per system | EU AI Act Article 6(4) | ☐ Not yet completed | Before market placement | Legal [ASSUMPTION] |
| Publish website AI governance statement | Good practice / BRAK | ☐ Not yet published | Before next product launch | CEO / Marketing [ASSUMPTION] |
| Publish per-system limitation disclosures | Good practice / Article 13(3) | ☐ Not yet completed | Rolling — per system | Head of Product [ASSUMPTION] |
| Provide lawyer clients with BRAK-aligned AI use guidance | Good practice / BRAK Section 4 | ☐ Not yet implemented | [PLACEHOLDER] | Legal / Client Success [ASSUMPTION] |
| Implement GDPR transparency information (Articles 13/14) | GDPR Articles 13, 14 | ☐ See L2-5.2 DPIA Assessment | See L2-5.2 | DPO [ASSUMPTION] |

9. Cross-References

| Document | Relevance |
| --- | --- |
| L2-4.1-EU-AI-Act-Risk-Mapping-Matrix-v1.md | Risk classification that determines Article 13 and Article 50 applicability |
| L2-4.2-Technical-Documentation-Pack-Template-v1.md | Contains Article 13(3) disclosure elements (Section 3 above) |
| L2-5.1-Data-Flow-Map-v1.md | GDPR data transparency context |
| L2-5.2-DPIA-Assessment-v1.md | Article 35 GDPR — DPIA transparency implications |
| L1-3.4-Human-Oversight-Policy-v1.md | Human oversight — referenced in Article 13(3)(d) disclosures |
| L3-6.2-Incident-Response-Playbook-v1.md | Transparency to clients and regulators in incident situations |

Document Control

| Field | Detail |
| --- | --- |
| Document ID | L2-4.3 |
| Next review | After SYS-04 risk classification; before 2 August 2026 implementation deadline |
| Assumptions relied upon | A-001, A-002, A-003 |