
Technical Documentation Pack — Template

Project: Pickles GmbH — AI Governance Framework
Stage: Stage 3 — Regulatory Alignment
Status: Draft
Version: v1
Date: 2026-02-26
Assumptions: Built on outline assumptions — not verified against real Pickles GmbH data


How to Use This Template

This is a reusable template for producing EU AI Act-compliant technical documentation for each AI system operated by Pickles GmbH. It is structured to satisfy the minimum content requirements of Annex IV of the EU AI Act (Regulation (EU) 2024/1689), as mandated by Article 11.

When to complete this template:

- Before any new AI system is placed on the market or put into service (Article 11(1))
- Whenever an existing system undergoes a substantial modification that affects its classification or compliance status (Article 6(2))
- During conformity assessment procedures (Article 43) — if a system is or becomes high-risk

How to use it:

1. Complete one copy of this template per AI system
2. Replace all [PLACEHOLDER] fields with real system data
3. Mark any section where data is unavailable with [DATA REQUIRED — reason]
4. Store the completed document in the Pickles GmbH document management system with version control
5. Update the document whenever a change occurs that affects any of the declared fields

Template ID convention: TECH-DOC-[SYSTEM-ID]-v[VERSION]
Example: TECH-DOC-SYS-01-v1
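Where the document management system supports automated checks, the ID convention can be validated mechanically. The sketch below is a hypothetical Python validator; the two-digit SYSTEM-ID pattern is an assumption generalised from the SYS-01 example and should be adjusted to the real inventory convention.

```python
import re

# Hypothetical validator for the TECH-DOC-[SYSTEM-ID]-v[VERSION] convention.
# The "SYS-" prefix and two-digit system number are assumptions based on the
# example "TECH-DOC-SYS-01-v1"; adjust to the real system inventory convention.
DOC_ID_PATTERN = re.compile(r"^TECH-DOC-SYS-\d{2}-v\d+$")


def is_valid_doc_id(doc_id: str) -> bool:
    """Return True if doc_id follows the template ID convention."""
    return DOC_ID_PATTERN.fullmatch(doc_id) is not None
```

Such a check could run as a pre-commit or upload hook in the document management system, so malformed IDs are rejected before filing.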

[LEGAL REVIEW REQUIRED] Technical documentation for high-risk systems is a legal compliance requirement under Article 11. Incomplete or inaccurate documentation constitutes non-compliance. A qualified EU AI Act practitioner must review any completed technical documentation before it is relied upon for conformity assessment.


SECTION 0 — Document Header

| Field | Value |
| --- | --- |
| System Name | [PLACEHOLDER — e.g., Legal Research Assistant] |
| System ID | [PLACEHOLDER — e.g., SYS-01] |
| Document ID | [PLACEHOLDER — e.g., TECH-DOC-SYS-01-v1] |
| Version | [PLACEHOLDER — e.g., v1] |
| Date of Initial Documentation | [PLACEHOLDER — DD-MM-YYYY] |
| Date of Last Update | [PLACEHOLDER — DD-MM-YYYY] |
| Prepared by | [PLACEHOLDER — name and role] |
| Reviewed by | [PLACEHOLDER — name, role, and date] |
| EU AI Act Risk Classification | [PLACEHOLDER — Tier 1 (Prohibited) / Tier 2 (High-risk) / Tier 3 (Limited-risk) / Tier 4 (Minimal-risk)] |
| Annex III Category (if high-risk) | [PLACEHOLDER — e.g., Annex III Point 8(a) — or "Not applicable"] |
| Provider Name | Pickles GmbH |
| Provider Legal Address | [PLACEHOLDER — registered address] |
| Provider Contact | [PLACEHOLDER — contact email] |
| Authorised Representative (if applicable) | [PLACEHOLDER — or "Not applicable"] |

SECTION 1 — General Description of the AI System

Basis: Annex IV, Point 1

1.1 Intended Purpose

Instructions: Describe what the AI system is designed to do, for whom, and in what context. Be specific — this description defines the legal scope of the system's obligations. Reference Article 13(3)(b)(i).

[PLACEHOLDER — Example: "This system is designed to assist qualified lawyers and paralegals at German law firms and in-house legal departments in searching, retrieving, and summarising publicly available case law, legislative texts, and legal commentary. It is not designed to provide legal advice and is not intended for use as the sole basis for legal decisions."]

Intended users: [PLACEHOLDER — e.g., qualified lawyers, trainees, paralegals]
Intended deployment context: [PLACEHOLDER — e.g., law firm intranet, SaaS platform, API integration]
Use cases explicitly excluded from intended purpose: [PLACEHOLDER — e.g., judicial decision-making, automated legal advice without lawyer review]

1.2 System Identification and Version

Instructions: Provide sufficient detail for a competent authority to uniquely identify the system and distinguish it from other versions.

| Field | Value |
| --- | --- |
| System name | [PLACEHOLDER] |
| Version number | [PLACEHOLDER] |
| Relationship to previous versions | [PLACEHOLDER — e.g., "First release" or "Replaces v1.2 — changes: updated model weights, new document type support"] |
| Date of version | [PLACEHOLDER] |

1.3 Software and Hardware Interfaces

Basis: Annex IV, Points 1(b) and (c)

Software integrations: [PLACEHOLDER — list other software, APIs, or AI systems this system interacts with, e.g., "Integrates with client document management system via REST API; uses [third-party model provider] API for language model inference [ASSUMPTION]"]

Hardware requirements for deployment: [PLACEHOLDER — e.g., "Runs on cloud infrastructure; client requires minimum browser version [X]; no dedicated hardware"]

Relevant software version requirements: [PLACEHOLDER]

Update requirements: [PLACEHOLDER — e.g., "System auto-updates; major version changes require client notification under Service Agreement"]

1.4 Forms in Which the System Is Placed on the Market or Put into Service

Basis: Annex IV, Point 1(d)

[PLACEHOLDER — e.g., "Delivered as SaaS platform via web browser interface and REST API. No embedded hardware component."]

1.5 User Interface Description

Basis: Annex IV, Points 1(g) and (h)

User interface summary: [PLACEHOLDER — brief description of the interface provided to users/deployers]

Instructions for use provided to deployers: [PLACEHOLDER — confirm format: e.g., "Digital documentation provided at onboarding; in-application help system; dedicated knowledge base at [URL]"]


SECTION 2 — Development and Technical Architecture

Basis: Annex IV, Point 2

2.1 Development Methods and Steps

Instructions: Describe how the AI system was built. If third-party models were used, specify how they were integrated, modified, or fine-tuned. This is particularly important for Pickles GmbH's obligations under Article 25(4) regarding value chain responsibilities.

[PLACEHOLDER — Example: "The system was developed using a combination of [in-house developed components] and a pre-trained large language model provided by [third-party provider, ASSUMPTION]. The third-party model was [fine-tuned / integrated via API without modification / used as a base for retrieval-augmented generation (RAG)]. Development methodology followed [Agile / specified framework]. Testing was conducted in [specify environment]."]

Third-party models or tools used:

| Component | Provider | Integration Method | Modification Made |
| --- | --- | --- | --- |
| [PLACEHOLDER — e.g., Language model] | [PLACEHOLDER — e.g., Third-party provider name [ASSUMPTION]] | [PLACEHOLDER — e.g., API, fine-tuning, RAG] | [PLACEHOLDER — e.g., None / prompt engineering / domain fine-tuning] |

[ASSUMPTION] The use of third-party AI model providers has not been confirmed (A-004). This section must be completed based on actual architecture.

2.2 Design Specifications

Basis: Annex IV, Point 2(b)

General logic of the AI system: [PLACEHOLDER — e.g., "The system uses a retrieval-augmented generation (RAG) architecture: user queries are embedded and matched against a legal document index; retrieved documents are passed to an LLM which generates a response grounded in those documents."]

Key design choices and rationale: [PLACEHOLDER — explain key architecture decisions, e.g., "RAG architecture was selected to reduce hallucination risk by grounding outputs in verified source documents. This reflects the legal context where citation accuracy is critical."]

Persons/groups the system is intended for: [PLACEHOLDER — see Section 1.1]

What the system is designed to optimise for: [PLACEHOLDER — e.g., "Accuracy of legal citation, relevance of retrieved documents, coherence of summaries"]

Expected output description and quality: [PLACEHOLDER — e.g., "Output: natural language summaries with citations to source documents. Quality target: [X]% citation accuracy against source corpus."]

Trade-offs made in design to meet compliance requirements: [PLACEHOLDER — e.g., "Output length is constrained to reduce risk of unsupported legal conclusions; system does not generate case outcome predictions."]

2.3 System Architecture

Basis: Annex IV, Point 2(c)

[PLACEHOLDER — Describe how software components interact. Include a high-level architecture diagram or reference to one. Example: "User query → front-end UI → API gateway → retrieval engine (semantic search over legal document index) → LLM inference engine → response formatter → user. Components are hosted on [cloud provider, ASSUMPTION A-005]."]
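As an illustration only, the query flow in the example placeholder can be sketched in code. The hypothetical Python sketch below assumes a RAG design and stands in naive keyword overlap for the embedding index and a stub for the LLM inference step; all identifiers and the toy corpus are invented, not real system components.

```python
# Illustrative RAG flow: query -> retrieval over a document index -> grounded
# response. Keyword overlap replaces the real semantic search, and generate()
# is a stub for the LLM call; both are assumptions for illustration.
def retrieve(query: str, corpus: dict[str, str], top_k: int = 2) -> list[str]:
    """Rank document IDs by word overlap with the query (stand-in for semantic search)."""
    q_words = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda doc_id: len(q_words & set(corpus[doc_id].lower().split())),
        reverse=True,
    )
    return ranked[:top_k]


def generate(query: str, doc_ids: list[str]) -> str:
    """Stub for LLM inference: a response grounded in the retrieved documents."""
    return f"Answer to '{query}' grounded in sources: {'; '.join(doc_ids)}"


corpus = {
    "BGH-2021-001": "liability of the seller for material defects",
    "BVerfG-2019-042": "constitutional review of data retention",
}
retrieved = retrieve("seller liability for defects", corpus)
response = generate("seller liability for defects", retrieved)
```

The point of the sketch is the data flow to document, not the algorithms: each stage (retrieval, inference, formatting) is a component that Annex IV Point 2(c) expects to be described individually.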

Computational resources used for development, training, testing, and validation: [PLACEHOLDER]

2.4 Training Data (where applicable)

Basis: Annex IV, Point 2(d); Article 10

If the system uses a pre-trained third-party model with no further training by Pickles GmbH, indicate this and describe what is known about the third-party model's training data. If Pickles GmbH performs fine-tuning, this section must be completed in full.

| Element | Description |
| --- | --- |
| Training data sources | [PLACEHOLDER — e.g., "Base model trained by [third-party provider] on [published training corpus]. Pickles GmbH fine-tuning used [legal document corpus — ASSUMPTION]."] |
| Data provenance and collection method | [PLACEHOLDER] |
| Data selection and filtering criteria | [PLACEHOLDER] |
| Annotation / labelling procedures (if applicable) | [PLACEHOLDER] |
| Bias examination performed | [PLACEHOLDER — describe what bias checks were conducted per Article 10(2)(f)] |
| Bias mitigation measures | [PLACEHOLDER — per Article 10(2)(g)] |
| Data gaps identified | [PLACEHOLDER — per Article 10(2)(h)] |
| Special categories of personal data processed in training | [PLACEHOLDER — if applicable, justify under Article 10(5)] |
| Validation and testing data sets | [PLACEHOLDER — describe sets used, distinct from training data] |

2.5 Human Oversight Assessment

Basis: Annex IV, Point 2(e); Article 14

[PLACEHOLDER — Describe the human oversight mechanisms built into the system at design stage. Example: "The system produces outputs labelled as AI-generated. Users are required to independently verify any legal citations before use in submissions. The system does not support automated submission of outputs to external parties without user confirmation. Override is always possible — users can ignore, edit, or discard any output."]

Reference: Full human oversight policy is documented in L1-3.4-Human-Oversight-Policy-v1.md.

2.6 Pre-Determined Changes

Basis: Annex IV, Point 2(f)

[PLACEHOLDER — Describe any changes to the system that have been pre-approved as part of the conformity assessment (e.g., model weight updates within defined performance bounds). If none: "No pre-determined changes have been defined for this version. All updates will be subject to change management procedures under L3-6.3."]

2.7 Validation and Testing Procedures

Basis: Annex IV, Point 2(g); Article 9(6) and (8)

| Element | Description |
| --- | --- |
| Validation methodology | [PLACEHOLDER — e.g., "Internal evaluation against [X] benchmark legal queries; expert review by practising lawyers [ASSUMPTION]"] |
| Testing data sets used | [PLACEHOLDER] |
| Accuracy metrics | [PLACEHOLDER — per Article 15(3), declare accuracy metrics, e.g., citation accuracy rate, factual error rate] |
| Robustness testing | [PLACEHOLDER — e.g., adversarial input testing, edge case testing] |
| Testing against prohibited outputs | [PLACEHOLDER — e.g., tested for generation of content violating Article 5 prohibitions] |
| Test logs and reports | [PLACEHOLDER — reference to test logs. Test logs and reports must be dated and signed by the responsible person per Annex IV Point 2(g)] |
| Frequency of testing | [PLACEHOLDER — per Article 9(8), testing is performed as appropriate throughout the development process and, in any event, before the system is placed on the market or put into service] |

2.8 Cybersecurity Measures

Basis: Annex IV, Point 2(h); Article 15

[PLACEHOLDER — Describe technical cybersecurity measures implemented. Should cover at minimum: data poisoning prevention (Article 15(5)), adversarial example resistance, model evasion countermeasures, access controls, encryption. Example: "Input validation and prompt injection defences implemented at API layer. Model inference is isolated from training environment. API access requires authenticated credentials. Data in transit and at rest encrypted using AES-256 / TLS 1.3."]
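To make the "prompt injection defences" item concrete, the following hypothetical Python sketch shows a first-pass input filter at the API layer. Pattern matching alone is not an adequate prompt-injection defence and is shown only as one layer alongside environment isolation, authentication, and output checks; the patterns, length limit, and function names are all assumptions.

```python
import re

# Hypothetical first-pass input filter. Blocklists are easily bypassed and
# must be combined with isolation, least-privilege API design, and output
# review; this sketch only illustrates the kind of control to document.
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal .*system prompt", re.IGNORECASE),
]
MAX_QUERY_LENGTH = 4000  # assumed limit; set from the real API contract


def validate_query(query: str) -> tuple[bool, str]:
    """Return (accepted, reason) for a user query before it reaches the model."""
    if len(query) > MAX_QUERY_LENGTH:
        return False, "query exceeds maximum length"
    for pattern in SUSPICIOUS_PATTERNS:
        if pattern.search(query):
            return False, "query matches a blocked pattern"
    return True, "ok"
```

Rejected queries would be logged (see Section 10) rather than silently dropped, so that attempted misuse is visible to post-market monitoring.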


SECTION 3 — Monitoring, Functioning, and Control

Basis: Annex IV, Point 3; Articles 12, 13, 14, 15

3.1 Capabilities and Limitations

[PLACEHOLDER — Declare the system's known performance capabilities and limitations. Example: "The system achieves [X]% citation accuracy on [benchmark]. Accuracy decreases for pre-2010 legal sources [ASSUMPTION]. The system does not interpret ambiguous legislative provisions — it retrieves and presents options. The system cannot predict judicial outcomes and is not designed to do so. Known limitations: hallucination risk on queries outside the training corpus; limited coverage of regional court decisions [ASSUMPTION]."]

3.2 Foreseeable Unintended Outcomes and Risk Sources

[PLACEHOLDER — List foreseeable risks per Article 9(2)(a). Example: "Foreseeable risks include: (1) citation of outdated law (mitigation: source currency metadata displayed); (2) lawyer over-reliance on AI output without independent verification (mitigation: outputs labelled as AI-generated; independent review required per L1-3.4); (3) inclusion of confidential client data in prompts (mitigation: data handling training; privacy by design architecture)."]

3.3 Human Oversight Mechanisms (Operational)

[PLACEHOLDER — Describe the human oversight measures that deployers can use in practice, per Article 14(4). Example: "Users are able to: (a) review all output before use; (b) edit or discard any output; (c) flag outputs as inaccurate via in-product reporting tool; (d) halt any query session at any time. No output is automatically transmitted to third parties."]

3.4 Input Data Specifications

[PLACEHOLDER — Describe what input data the system accepts and any constraints. Example: "System accepts natural language text queries in German and English. Document upload accepts PDF and DOCX formats up to [X]MB. System does not accept audio, image, or biometric data."]


SECTION 4 — Performance Metrics Appropriateness

Basis: Annex IV, Point 4; Article 15

[PLACEHOLDER — Explain why the chosen accuracy and performance metrics are appropriate for this AI system's intended purpose. Example: "Citation accuracy rate is the primary metric because the system's intended purpose is to surface legally accurate references. Factual error rate measures hallucination frequency. Both metrics are appropriate because the system is used for legal research where source accuracy is the primary client requirement. Metrics are calculated against a curated benchmark of [X] annotated legal queries reviewed by qualified lawyers [ASSUMPTION]."]
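If citation accuracy rate and factual error rate are adopted as the declared metrics, their computation over an annotated benchmark can be sketched as below. This is a hypothetical Python illustration; the field names and toy benchmark records are invented, not real Pickles GmbH evaluation data.

```python
# Illustrative computation of the two example metrics over an annotated
# benchmark. Record fields are assumed: per-query totals of cited sources,
# reviewer-confirmed correct citations, and a count of factual errors.
def citation_accuracy(results: list[dict]) -> float:
    """Share of cited sources that reviewers confirmed as correct."""
    cited = sum(r["citations_total"] for r in results)
    correct = sum(r["citations_correct"] for r in results)
    return correct / cited if cited else 0.0


def factual_error_rate(results: list[dict]) -> float:
    """Share of benchmark answers containing at least one factual error."""
    return sum(1 for r in results if r["factual_errors"] > 0) / len(results)


benchmark = [
    {"citations_total": 4, "citations_correct": 4, "factual_errors": 0},
    {"citations_total": 3, "citations_correct": 2, "factual_errors": 1},
]
```

Declaring the exact formula alongside the metric value makes the Section 4 appropriateness argument auditable: a competent authority can recompute the figure from the signed test logs in Annex C.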


SECTION 5 — Risk Management System

Basis: Annex IV, Point 5; Article 9

[PLACEHOLDER — Provide a summary reference to the risk management system. For high-risk systems, a full description is required. Example: "The risk management system for this AI system is documented in [risk management system document]. It follows Article 9 requirements: continuous iterative process; covers identification, estimation, evaluation, and mitigation of known and foreseeable risks throughout the system lifecycle. Risk management measures address risks identified in Section 3.2 above."]

Reference to risk management documentation: [PLACEHOLDER — document reference]


SECTION 6 — Changes Made Through the Lifecycle

Basis: Annex IV, Point 6; Article 6(2)

Instructions: This section is updated throughout the system's operational life. Maintain a log of all substantive changes.

| Date | Change Description | Version | Substantial Modification? | Responsible Person |
| --- | --- | --- | --- | --- |
| [DD-MM-YYYY] | [PLACEHOLDER — e.g., "Initial release"] | v1 | N/A | [PLACEHOLDER] |

Note on substantial modifications: A change that affects the system's risk classification (Article 6), its intended purpose, or its performance in ways that would affect Annex IV documentation triggers re-assessment. This is addressed in the Model Change Management Protocol (L3-6.3).


SECTION 7 — Standards and Technical Specifications

Basis: Annex IV, Point 7

| Standard / Specification | Status | Notes |
| --- | --- | --- |
| ISO/IEC 42001:2023 — AI Management System | [PLACEHOLDER — Applied / Partially applied / Under assessment] | Relevant to AI management system governance [ASSUMPTION] |
| ISO/IEC 27001 — Information Security Management | [PLACEHOLDER] | Relevant to Article 15 cybersecurity requirements [ASSUMPTION] |
| ENISA AI Cybersecurity Guidelines | [PLACEHOLDER] | Non-binding but referenced for Article 15 implementation [ASSUMPTION] |
| Harmonised EU AI Act standards (pending CEN/CENELEC publication) | Not yet published as of 2026-02-26 | [LEGAL REVIEW REQUIRED] — monitor for published harmonised standards under Article 40 |

SECTION 8 — EU Declaration of Conformity

Basis: Annex IV, Point 8; Article 47 (high-risk systems only)

This section applies only to high-risk AI systems. For limited-risk systems, this section is marked not applicable.

[PLACEHOLDER — For high-risk systems: "A copy of the EU declaration of conformity as required by Article 47 is appended to this document as Annex A." For limited-risk systems: "Not applicable — system classified as limited-risk."]


SECTION 9 — Post-Market Monitoring Plan

Basis: Annex IV, Point 9; Article 72 (high-risk systems only)

For high-risk systems, Article 72 requires a post-market monitoring plan. For limited-risk systems, good practice is to maintain operational monitoring — this is addressed in L3-6.1 (AI Monitoring Framework).

[PLACEHOLDER — For high-risk systems: "Post-market monitoring plan is appended as Annex B. It covers: monitoring metrics (see L3-6.1), reporting frequency, trigger events for remedial action, and serious incident reporting to competent authorities under Article 73." For limited-risk systems: "Post-market monitoring follows the operational monitoring framework in L3-6.1. Incident response follows L3-6.2."]


SECTION 10 — Logging and Record-Keeping

Basis: Article 12; Article 26(6) (deployer obligations)

[PLACEHOLDER — Describe logging capabilities. For high-risk systems, Article 12 requires automatic event logging. For all systems, logging supports governance and incident response. Example: "The system automatically logs: session start and end timestamps; query inputs and system outputs (retained for [X] days [ASSUMPTION]); error events; override/discard events by users. Logs are stored on [EU-based infrastructure, ASSUMPTION A-005]. Log retention period is [X] days, compliant with applicable data protection law [LEGAL REVIEW REQUIRED — confirm against GDPR Article 5(1)(e) storage limitation]."]
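A minimal sketch of the structured log records described above, assuming a JSON-lines format. The field names and event vocabulary are hypothetical and must be aligned with the deployed system's actual logging schema before this section is finalised.

```python
import json
from datetime import datetime, timezone

# Hypothetical audit-log record shape for the events listed above (session
# start/end, queries and outputs, errors, user override/discard events).
# Field names are illustrative assumptions, not the real schema.
def log_event(event_type: str, session_id: str, detail: dict) -> str:
    """Serialise one audit-log record as a JSON line with a UTC timestamp."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,  # e.g. session_start, query, error, override
        "session_id": session_id,
        "detail": detail,
    }
    return json.dumps(record, ensure_ascii=False)


line = log_event("override", "sess-001", {"action": "output_discarded_by_user"})
```

Machine-readable, timestamped records of this kind support both the Article 12 traceability objective and the retention-period controls flagged for GDPR review above.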


Appendices to Complete When Finalising This Template

| Appendix | Description | Required For |
| --- | --- | --- |
| Annex A | EU Declaration of Conformity | High-risk systems only |
| Annex B | Post-Market Monitoring Plan | High-risk systems only |
| Annex C | Test Logs and Signed Test Reports | All systems (good practice) |
| Annex D | Data Processing Agreement with Third-Party Model Provider | Where applicable (A-004) |
| Annex E | Bias Assessment Report | Where applicable |

Document Control

| Field | Detail |
| --- | --- |
| Template ID | L2-4.2 |
| Template applies to | All Pickles GmbH AI systems |
| Cross-references | L2-4.1 (Risk Mapping Matrix), L1-3.1 (AI System Inventory), L1-3.4 (Human Oversight Policy), L3-6.1 (Monitoring Framework), L3-6.2 (Incident Response), L3-6.3 (Model Change Management) |
| Regulatory basis | EU AI Act Article 11, Annex IV |
| Assumptions relied upon | A-001, A-002, A-004, A-005 |