
EU AI Act Risk Mapping Matrix

Project: Pickles GmbH — AI Governance Framework
Stage: Stage 3 — Regulatory Alignment
Status: Draft
Version: v1
Date: 2026-02-26
Assumptions: Built on outline assumptions — not verified against real Pickles GmbH data


Purpose

This document maps Pickles GmbH's assumed AI systems against the EU AI Act (Regulation (EU) 2024/1689) risk classification framework. For each system, it identifies: the applicable risk tier, relevant articles, documentation requirements, and transparency obligations.

Scope of this document: EU AI Act obligations only. GDPR/BDSG data protection obligations are addressed in L2-5.1 through L2-5.3.

[ASSUMPTION] This matrix is built entirely on the assumed product line described in CLAUDE.md Section 2 (A-001). The actual Pickles GmbH product portfolio has not been verified. Before this matrix is used operationally, each system must be reviewed against the real product architecture.

[LEGAL REVIEW REQUIRED] Risk classification under the EU AI Act is a legal determination. This matrix constitutes a preliminary analytical framework, not a compliance certification. A qualified legal practitioner familiar with EU AI Act implementation must review classifications before reliance.


1. EU AI Act Risk Tier Framework

The EU AI Act uses a four-tier risk model. The applicable tier determines which obligations apply.

Tier Risk Level Primary Trigger Applicable Articles
1 Prohibited Systems posing unacceptable risks to fundamental rights Article 5
2 High-risk Systems listed in Annex III, or safety components under Annex I legislation Articles 8–27, 49
3 Limited-risk (transparency obligations) AI systems interacting with humans, generating synthetic content, or processing biometric/emotional data Article 50
4 Minimal-risk All other AI systems Voluntary codes only

2. Annex III High-Risk Classification Assessment

Article 6(2) designates systems in Annex III as high-risk. For legal AI providers, the most relevant Annex III entry is:

Annex III, Point 8(a): "AI systems intended to be used by a judicial authority or on their behalf to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts, or to be used in a similar way in alternative dispute resolution."

Critical interpretation for Pickles GmbH:

The note at Annex III Point 8(a) in the stage-3 regulatory extracts confirms that this classification applies to judicial authorities — courts, tribunals, and arbitrators acting in a judicial capacity. It does not automatically extend to AI systems used by lawyers advising clients, even where those lawyers subsequently appear before judicial authorities.

[ASSUMPTION] Pickles GmbH's clients are law firms and in-house legal departments (A-002), not judicial authorities. Subject to verification, Pickles GmbH's systems provisionally do not fall under Annex III Point 8(a).

However, the following circumstances could change this assessment:

- If Pickles GmbH markets to, or sells to, judicial bodies or court-appointed experts [LEGAL REVIEW REQUIRED]
- If output from Pickles GmbH systems is directly integrated into judicial decision workflows without intermediate lawyer review [LEGAL REVIEW REQUIRED]

Article 6(3) Derogation Check:

Even where a system falls within Annex III, Article 6(3) provides that it shall not be considered high-risk where it:

- Performs a narrow procedural task (Article 6(3)(a))
- Improves the result of a previously completed human activity (Article 6(3)(b))
- Detects decision-making patterns or deviations from prior patterns without replacing or influencing the previously completed human assessment, absent proper human review (Article 6(3)(c))
- Performs a preparatory task to an assessment relevant for the purposes of the Annex III use cases (Article 6(3)(d))

Note: Article 6(3) does not apply if the system performs profiling of natural persons (Article 6(3) final subparagraph). Legal research and drafting systems are unlikely to perform profiling, but this must be confirmed per system.
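The derogation logic above can be sketched as a screening checklist. This is a hypothetical illustration only — the condition names and function shape are assumptions, and the actual determination is a legal one requiring qualified review, not something a boolean check can decide.

```python
# Illustrative screening sketch of the Article 6(3) derogation check.
# Field names are assumptions; output is a provisional label, not a ruling.
from dataclasses import dataclass


@dataclass
class DerogationScreen:
    in_annex_iii: bool                # system falls within an Annex III category
    narrow_procedural_task: bool      # Article 6(3)(a)
    improves_prior_human_work: bool   # Article 6(3)(b)
    detects_patterns_only: bool       # Article 6(3)(c), with proper human review
    preparatory_task_only: bool       # Article 6(3)(d)
    profiles_natural_persons: bool    # final subparagraph — blocks the derogation


def provisional_tier(s: DerogationScreen) -> str:
    """Return a provisional, non-binding risk-tier label."""
    if not s.in_annex_iii:
        return "not high-risk via Annex III"  # Article 50 may still apply
    if s.profiles_natural_persons:
        return "high-risk"  # derogation unavailable (Art. 6(3) final subpara.)
    if any([s.narrow_procedural_task, s.improves_prior_human_work,
            s.detects_patterns_only, s.preparatory_task_only]):
        return "derogation candidate — Article 6(4) assessment required"
    return "high-risk"
```

A screen of this kind is only useful as a triage aid for routing systems to formal legal assessment; it must never substitute for that assessment.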


3. AI System Risk Classification Matrix

3.1 Assumed Pickles GmbH AI Systems

[ASSUMPTION] The following four AI system categories are assumed to represent the Pickles GmbH product portfolio (A-001). Each must be validated against the actual product architecture.

System ID System Name Description Primary Users
SYS-01 Legal Research Assistant AI-powered case law, legislation, and commentary search and summarisation Lawyers, paralegals [ASSUMPTION]
SYS-02 Document Drafting Tool AI-assisted generation and auto-completion of legal documents, contracts, and briefs Lawyers [ASSUMPTION]
SYS-03 Document Summarisation Tool AI summarisation of lengthy legal documents, judgments, and contracts Lawyers, in-house legal [ASSUMPTION]
SYS-04 Legal Analysis Tool AI analysis of legal risk, interpretation, and issue identification in documents or fact patterns Lawyers [ASSUMPTION]

3.2 Full Risk Matrix

SYS-01 — Legal Research Assistant

Dimension Assessment Regulatory Basis
Prohibited (Tier 1)? No — does not meet any Article 5 prohibition criteria Article 5
High-risk (Tier 2)? Provisionally No — Annex III Point 8(a) applies to judicial authorities; Pickles GmbH clients are lawyers [ASSUMPTION] Article 6(2), Annex III Point 8(a)
Article 6(3) derogation applicable? Likely yes — performs preparatory task (Article 6(3)(d)); improves result of human activity (Article 6(3)(b)) [LEGAL REVIEW REQUIRED] Article 6(3)
Profiling of natural persons? Provisionally No — research tool, not user-profiling [ASSUMPTION — confirm against real system] Article 6(3) final subparagraph
Provisional risk tier Tier 3 — Limited-risk Article 50
Article 6(4) self-assessment required? No — Article 6(4) applies only where a provider concludes that an Annex III system is not high-risk by reason of the Article 6(3) derogation; SYS-01 is assessed as not falling within Annex III Point 8(a) at all (clients are lawyers, not judicial authorities), so Article 6(4) documentation is not triggered Article 6(4)
Article 50 triggers Article 50(1): if system interacts directly with end users (lawyers querying the system) — disclosure required that they are interacting with an AI [ASSUMPTION] Article 50(1)
Documentation requirements Article 50 compliance documentation Article 50
Transparency to users Must inform users they are interacting with AI system unless obvious Article 50(1)
AI competence requirement Yes — Article 4 requires Pickles GmbH to ensure staff and users have sufficient AI competence Article 4

SYS-02 — Document Drafting Tool

Dimension Assessment Regulatory Basis
Prohibited (Tier 1)? No Article 5
High-risk (Tier 2)? Provisionally No — not within Annex III categories for lawyer-facing use [ASSUMPTION] Article 6(2), Annex III
Article 6(3) derogation applicable? Likely yes — assistive function improving result of human activity (Article 6(3)(b)) [LEGAL REVIEW REQUIRED] Article 6(3)
Profiling of natural persons? Provisionally No [ASSUMPTION — confirm against real system] Article 6(3) final subparagraph
Provisional risk tier Tier 3 — Limited-risk Article 50
Article 6(4) self-assessment required? No — SYS-02 is assessed as not falling within Annex III; Article 6(4) applies only where an Annex III system is assessed as not high-risk under the Article 6(3) derogation and is not a blanket requirement for limited-risk systems Article 6(4)
Article 50 triggers Article 50(1): direct user interaction — AI disclosure required; Article 50(2): if generating synthetic text content (drafts, clauses) — machine-readable marking required Articles 50(1), 50(2)
Article 50(2) exemption? Possible — Article 50(2) exempts systems performing "assistive function for standard editing" or not "substantially altering input data" [LEGAL REVIEW REQUIRED] Article 50(2)
Documentation requirements Article 50(2) technical marking records Article 50
Transparency to users AI interaction disclosure (Article 50(1)); AI-generated content marking unless assistive exemption applies (Article 50(2)) Article 50
AI competence requirement Yes — Article 4 Article 4

SYS-03 — Document Summarisation Tool

Dimension Assessment Regulatory Basis
Prohibited (Tier 1)? No Article 5
High-risk (Tier 2)? Provisionally No Article 6(2), Annex III
Article 6(3) derogation applicable? Strongly likely — narrow procedural task (Article 6(3)(a)); preparatory task (Article 6(3)(d)) Article 6(3)
Profiling of natural persons? Provisionally No [ASSUMPTION] Article 6(3) final subparagraph
Provisional risk tier Tier 3 — Limited-risk (possibly Tier 4 — Minimal-risk) [LEGAL REVIEW REQUIRED] Article 50
Article 6(4) self-assessment required? No — SYS-03 is assessed as not falling within Annex III; Article 6(4) applies only where an Annex III system is assessed as not high-risk under the Article 6(3) derogation Article 6(4)
Article 50 triggers Article 50(1): if direct user interaction — AI disclosure; Article 50(2): if generating synthetic text (summaries) — marking required unless assistive exemption Articles 50(1), 50(2)
Article 50(2) exemption? Summaries may be considered synthetic text generation — not clearly an "assistive function for standard editing" [LEGAL REVIEW REQUIRED] Article 50(2)
Documentation requirements Article 50 compliance records Article 50
Transparency to users AI disclosure at first interaction; summary output marked as AI-generated unless exemption applies Article 50
AI competence requirement Yes — Article 4 Article 4

SYS-04 — Legal Analysis Tool

Dimension Assessment Regulatory Basis
Prohibited (Tier 1)? No Article 5
High-risk (Tier 2)? Uncertain — requires formal legal assessment [LEGAL REVIEW REQUIRED] Article 6(2), Annex III Point 8(a)
Annex III Point 8(a) risk If outputs are used directly in judicial proceedings or as basis for consequential legal decisions affecting persons, Annex III Point 8(a) may be engaged [ASSUMPTION] Annex III Point 8(a)
Article 6(3) derogation applicable? Possible — if tool performs preparatory task only (Article 6(3)(d)) and human lawyer makes final legal assessment. Does not apply if profiling of natural persons occurs. [LEGAL REVIEW REQUIRED] Article 6(3)
Profiling of natural persons? Risk is higher for analysis tools — must be confirmed against actual system design [ASSUMPTION] Article 6(3) final subparagraph
Provisional risk tier Tier 2 (High-risk) or Tier 3 (Limited-risk) — classification contingent on product architecture and client use patterns [LEGAL REVIEW REQUIRED] Articles 6, 50
If High-risk: applicable obligations Full Chapter III obligations: risk management system (Art. 9), data governance (Art. 10), technical documentation (Art. 11, Annex IV), logging (Art. 12), transparency to deployers (Art. 13), human oversight (Art. 14), accuracy/robustness/cybersecurity (Art. 15), quality management (Art. 17), conformity assessment (Art. 43), EU declaration of conformity (Art. 47), registration (Art. 49) Articles 8–27, 43, 47, 49
If Limited-risk: applicable obligations Article 6(4) self-assessment (required if provider concludes that SYS-04 falls within Annex III but invokes the Article 6(3) derogation to conclude it is not high-risk — must be documented before market placement); Article 50 transparency Articles 6(4), 50
Documentation requirements If high-risk: full Annex IV technical documentation; if limited-risk via Article 6(3) derogation: Article 6(4) assessment record + Article 50 compliance records; if Annex III not engaged: Article 50 compliance records only Articles 11, Annex IV, 6(4), 50
AI competence requirement Yes — Article 4 Article 4

4. Article Obligation Matrix — High-Risk Systems

If SYS-04 is determined to be high-risk [LEGAL REVIEW REQUIRED], the following obligations apply in full. These obligations are also documented here for reference should any other system be reclassified.

Article Obligation Owner [ASSUMPTION] Status
Article 9 Establish and maintain a documented risk management system throughout system lifecycle Head of Product / AIRO [ASSUMPTION] ☐ Not yet implemented
Article 10 Data governance: training/validation/testing data quality, bias examination Head of Engineering [ASSUMPTION] ☐ Not yet implemented
Article 11 + Annex IV Technical documentation drawn up before market placement and kept up to date Head of Engineering [ASSUMPTION] ☐ Not yet implemented
Article 12 Automatic event logging capabilities; retain logs as specified Head of Engineering [ASSUMPTION] ☐ Not yet implemented
Article 13 Instructions for use: system characteristics, limitations, human oversight design, log interpretation Product / Legal [ASSUMPTION] ☐ Not yet implemented
Article 14 Human oversight mechanisms embedded in system design; users must be able to override, disregard, or halt outputs Head of Product [ASSUMPTION] ☐ Not yet implemented
Article 15 Accuracy, robustness, and cybersecurity performance targets declared; resilience against adversarial inputs Head of Engineering [ASSUMPTION] ☐ Not yet implemented
Article 16(c) Quality management system (Article 17) CEO / AIRO [ASSUMPTION] ☐ Not yet implemented
Article 17 Documented QMS including strategy, testing, data management, risk management, incident reporting CEO / AIRO [ASSUMPTION] ☐ Not yet implemented
Article 25(4) Written agreements with third-party AI model providers specifying information and capabilities required for compliance CEO / Legal [ASSUMPTION] ☐ Not yet implemented
Article 26 Deployer (Pickles GmbH's clients) obligations — relevant to inform clients of their duties when using high-risk AI Legal / Client Success [ASSUMPTION] ☐ Not yet implemented
Article 43 Conformity assessment before market placement CEO / Legal [ASSUMPTION] ☐ Not yet implemented
Article 47 EU declaration of conformity CEO / Legal [ASSUMPTION] ☐ Not yet implemented
Article 49(1) Registration in EU AI Act database CEO / Legal [ASSUMPTION] ☐ Not yet implemented
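The Article 12 row above requires automatic event logging over the system lifetime but the Act does not prescribe a log schema. The sketch below is one minimal, assumed shape for an append-only JSON-lines log entry; field names and event types are illustrative assumptions, not regulatory requirements.

```python
# Minimal sketch of an Article 12-style event log entry. The Act mandates
# automatic logging for high-risk systems; this schema is an assumption.
import json
from datetime import datetime, timezone


def log_event(event_type: str, system_id: str, detail: dict) -> str:
    """Serialise one append-only log entry as a JSON line."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,       # e.g. "SYS-04"
        "event_type": event_type,     # e.g. "inference", "override", "error"
        "detail": detail,
    }
    return json.dumps(entry, sort_keys=True)


line = log_event("inference", "SYS-04", {"user_role": "lawyer"})
```

Retention periods and the exact events to capture must follow Article 12 and the instructions-for-use design under Article 13, determined per system.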

5. Article 50 Transparency Obligations — All Systems

Regardless of high-risk classification, Article 50 applies to all Pickles GmbH AI systems in the following circumstances:

Obligation Trigger Systems Affected Action Required
Article 50(1) — AI interaction disclosure System "intended to interact directly with natural persons" SYS-01, SYS-02, SYS-03, SYS-04 [ASSUMPTION] Inform users at first interaction that they are interacting with an AI system
Article 50(1) — Exemption Disclosure not required if AI nature is "obvious" from context to a reasonably well-informed user All systems — assess per system [LEGAL REVIEW REQUIRED] whether lawyer-facing AI tool interaction is "obvious"
Article 50(2) — Synthetic content marking System generates synthetic audio, image, video or text content SYS-02 (drafts), SYS-03 (summaries), SYS-04 (analysis outputs) [ASSUMPTION] Outputs must be marked in machine-readable format as AI-generated
Article 50(2) — Assistive exemption Does not apply where system performs "assistive function for standard editing" SYS-02 — partial exemption possible; SYS-03, SYS-04 — less clear [LEGAL REVIEW REQUIRED] Legal review required per system
Article 50(5) — Timing Disclosure must be provided "at the latest at the time of the first interaction or exposure" All systems Implement at onboarding / first login
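Article 50(2) requires that synthetic outputs be marked in a machine-readable format, but it does not prescribe a specific format (industry approaches such as C2PA-style provenance metadata are one option). The sketch below wraps generated text in an assumed provenance record; all field names are illustrative assumptions.

```python
# Illustrative only: Article 50(2) requires machine-readable marking of
# AI-generated content but prescribes no format. This schema is an assumption.
from datetime import datetime, timezone


def mark_ai_generated(text: str, system_id: str) -> dict:
    """Wrap generated text with a machine-readable provenance record."""
    return {
        "content": text,
        "provenance": {
            "ai_generated": True,
            "system_id": system_id,  # e.g. "SYS-02"
            "marked_at": datetime.now(timezone.utc).isoformat(),
            "legal_basis": "EU AI Act Article 50(2)",
        },
    }


record = mark_ai_generated("Draft clause ...", "SYS-02")
```

Whether a given system can instead rely on the "assistive function for standard editing" exemption remains a per-system legal question, as noted in the table above.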

6. Value Chain Obligations

Article 25 imposes obligations on the AI value chain, relevant where Pickles GmbH uses third-party AI model providers:

[ASSUMPTION] Pickles GmbH may use third-party AI model providers (A-004). If so:

Obligation Basis Action Required
Pickles GmbH is treated as a provider under Article 16 if it places its name or trademark on a third-party AI system or substantially modifies it Article 25(1)(a)–(b) Confirm modification scope with model providers [ASSUMPTION]
Written agreement with third-party providers must specify information, technical access, and capabilities needed for Pickles GmbH to comply with EU AI Act Article 25(4) Include in vendor contracts — see L2-5.3
If third-party model is a general-purpose AI model (GPAI), additional Chapter V obligations may apply to the GPAI provider Articles 51–56 Review model provider's compliance status [ASSUMPTION]

7. Article 4 — AI Competence Obligation

Article 4 applies from 2 February 2025 and is not contingent on high-risk classification. It requires that Pickles GmbH:

Take measures to ensure, to the best of their ability, that their personnel and other persons involved in the operation and use of AI systems on their behalf have a sufficient level of AI competence.

Per the BRAK AI Position Paper (December 2024), AI competence requirements also apply to lawyers as operators of AI systems under Article 3(4) EU AI Act. Pickles GmbH's client-facing materials should therefore support clients' own Article 4 compliance.

[ASSUMPTION] Pickles GmbH's internal AI competence training programme has not been verified. See L1-3.1 and L1-3.4 for related governance structures.


8. Compliance Readiness Summary

System Provisional Tier High-Risk Uncertainty Article 50 Applies Article 6(4) Required AI Competence (Art. 4)
SYS-01 Legal Research Assistant Tier 3 — Limited-risk Low [ASSUMPTION] Yes No — Annex III not engaged Yes
SYS-02 Document Drafting Tool Tier 3 — Limited-risk Low [ASSUMPTION] Yes (incl. Art. 50(2)) No — Annex III not engaged Yes
SYS-03 Document Summarisation Tier 3 or Tier 4 [LEGAL REVIEW REQUIRED] Low [ASSUMPTION] Likely yes No — Annex III not engaged Yes
SYS-04 Legal Analysis Tool Tier 2 or Tier 3 [LEGAL REVIEW REQUIRED] High Yes Conditional — required if provider invokes Article 6(3) derogation after Annex III assessment [LEGAL REVIEW REQUIRED] Yes

Priority action: SYS-04 requires a formal Article 6(4) risk classification assessment by a qualified EU AI Act practitioner before any market placement or operational deployment. [LEGAL REVIEW REQUIRED]


9. Key Dates — EU AI Act Application Timeline

Date Event Impact on Pickles GmbH
1 August 2024 Regulation entered into force General application deferred; the Regulation applies from 2 August 2026 except for the earlier-application provisions listed in Article 113
2 February 2025 Chapters I and II apply (including Article 4 — AI competence) AI competence obligations now active
2 August 2025 Chapter V (GPAI models) and Chapter III Section 4 (notified bodies) apply Relevant if using GPAI model providers
2 August 2026 Full application including Article 50 (transparency obligations) and Chapter III high-risk obligations Transparency obligations and high-risk obligations fully active
2 August 2027 High-risk AI systems as safety components under existing products (Annex I) — delayed application Not directly applicable to Pickles GmbH [ASSUMPTION]

Note: Article 50 transparency obligations for limited-risk systems apply from 2 August 2026 per Article 113. Article 4 (AI competence) already applies from 2 February 2025.
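The Article 113 timeline above can be expressed as a small lookup for internal planning purposes — a hypothetical helper, with milestone labels paraphrased from this document rather than taken from the Regulation's text.

```python
# Hypothetical helper mapping a date to the EU AI Act provision groups
# already applicable, per the Article 113 timeline summarised above.
from datetime import date

MILESTONES = [
    (date(2025, 2, 2), "Chapters I-II (incl. Article 4 AI competence)"),
    (date(2025, 8, 2), "Chapter V (GPAI) and notified-body provisions"),
    (date(2026, 8, 2), "General application (incl. Article 50 and high-risk)"),
    (date(2027, 8, 2), "Annex I safety-component high-risk systems"),
]


def provisions_in_force(on: date) -> list[str]:
    """Return provision groups already applicable on a given date."""
    return [label for d, label in MILESTONES if on >= d]
```

For example, on the date of this draft (2026-02-26) only the first two milestone groups apply; Article 50 obligations are not yet in force.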


Document Control

Field Detail
Document ID L2-4.1
Next review After SYS-04 legal classification assessment [LEGAL REVIEW REQUIRED]
Cross-references L1-3.1 (AI System Inventory), L1-3.2 (Risk Classification), L2-4.2 (Technical Documentation Template), L2-4.3 (Transparency Framework)
Assumptions relied upon A-001, A-002, A-003, A-004, A-005