AI Act Annex III — Every High-Risk AI Use Case Explained

Annex III of the EU AI Act (Regulation 2024/1689) defines the specific AI use cases classified as high-risk. It covers 8 areas: biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and administration of justice. If your AI system falls within these categories, you face the full set of high-risk obligations under Articles 6-49 — conformity assessment, risk management, human oversight, transparency, and more. Gibs indexes the full AI Act across 836 chunks covering all 113 articles, 13 annexes, and 180 recitals with 88% accuracy on an expert-curated evaluation dataset.

The Problem

Annex III is the most referenced part of the AI Act but also the most misunderstood. Developers need to know: "Is my specific use case listed?" The annex uses legal language that doesn't map cleanly to product descriptions. A "recommendation engine" might or might not be high-risk depending on context. And crucially, if your system is NOT in Annex III and NOT a safety component under Article 6(1), it's likely minimal risk — but most guides don't state that clearly.

The 8 Areas of Annex III

Area 1: Biometrics

AI systems intended to be used for:

- Remote biometric identification (excluding biometric verification whose sole purpose is to confirm that a person is who they claim to be)
- Biometric categorisation according to sensitive or protected attributes or characteristics
- Emotion recognition

The distinction matters: real-time remote biometric identification in publicly accessible spaces for law enforcement purposes is a prohibited practice under Article 5, subject to narrowly defined exceptions (for example, targeted searches for victims of abduction or trafficking). Post-remote biometric identification and other biometric AI systems are high-risk under Annex III.

Area 2: Critical Infrastructure

AI systems intended to be used as safety components in the management and operation of:

- Critical digital infrastructure
- Road traffic
- The supply of water, gas, heating, or electricity

The key qualifier is "safety component." An AI system that monitors energy grid performance for reporting purposes is different from one that controls electricity distribution. Only the latter — where the AI is a safety component in the operation — is high-risk.

Area 3: Education and Vocational Training

AI systems intended to be used for:

- Determining access or admission, or assigning natural persons to educational and vocational training institutions
- Evaluating learning outcomes, including where those outcomes are used to steer the learning process
- Assessing the appropriate level of education an individual will receive or be able to access
- Monitoring and detecting prohibited student behaviour during tests

This covers AI used in admissions decisions, exam grading, and proctoring. It does not cover AI used for general teaching assistance, content recommendation within a course, or administrative scheduling — those are typically minimal risk.

Area 4: Employment, Workers Management, and Access to Self-Employment

AI systems intended to be used for:

- Recruitment or selection, in particular placing targeted job advertisements, analysing and filtering applications, and evaluating candidates
- Decisions affecting the terms of work-related relationships, promotion, or termination
- Allocating tasks based on individual behaviour or personal traits and characteristics
- Monitoring and evaluating the performance and behaviour of workers

This is one of the broadest Annex III areas. Any AI system that filters CVs, ranks applicants, automates interview scoring, determines task allocation, or monitors employee productivity falls here. Note that this includes AI systems used to make decisions about access to self-employment — not just traditional employment.

Area 5: Access to and Enjoyment of Essential Private Services and Essential Public Services and Benefits

AI systems intended to be used for:

- Evaluating eligibility for essential public assistance benefits and services, or granting, reducing, revoking, or reclaiming them
- Evaluating creditworthiness or establishing credit scores (except for detecting financial fraud)
- Risk assessment and pricing for natural persons in life and health insurance
- Classifying emergency calls, or dispatching and prioritising emergency first response services, including emergency healthcare patient triage

Credit scoring is the most common use case here. If your AI system evaluates whether an individual qualifies for a loan or determines their interest rate, it is high-risk. Similarly, AI used to determine eligibility for social welfare benefits or to triage emergency calls falls under this area.

Area 6: Law Enforcement

AI systems intended to be used by or on behalf of law enforcement authorities for:

- Assessing the risk of a natural person becoming the victim of criminal offences
- Polygraphs and similar tools
- Evaluating the reliability of evidence in the course of investigation or prosecution
- Assessing the risk of a person offending or re-offending, where the assessment is not based solely on profiling, or assessing personality traits or past criminal behaviour
- Profiling of natural persons in the course of detecting, investigating, or prosecuting criminal offences

Note the distinction from Article 5: predictive policing based solely on profiling or personality assessment is prohibited. AI systems that support law enforcement in broader crime analytics with human oversight are high-risk under Annex III.

Area 7: Migration, Asylum, and Border Control Management

AI systems intended to be used by or on behalf of competent public authorities for:

- Polygraphs and similar tools
- Assessing security risks, risks of irregular migration, or health risks posed by a person who intends to enter or has entered a Member State
- Assisting in the examination of applications for asylum, visas, and residence permits, and associated complaints, including assessing the reliability of evidence
- Detecting, recognising, or identifying natural persons in the context of migration, asylum, or border control (except verification of travel documents)

Area 8: Administration of Justice and Democratic Processes

AI systems intended to be used for:

- Assisting a judicial authority in researching and interpreting facts and the law, and in applying the law to a concrete set of facts (or used in a similar way in alternative dispute resolution)
- Influencing the outcome of an election or referendum, or the voting behaviour of natural persons (excluding tools that do not directly target people, such as campaign logistics and organisation tools)

This does not cover purely administrative court tasks (scheduling, case management). It covers AI that assists in legal reasoning, sentencing analysis, or fact-finding in judicial proceedings.

What If Your System Is NOT in Annex III?

This is where most guides stop. But negative classification matters just as much as positive classification.

If your AI system is:

- Not listed in any of the eight Annex III areas
- Not a safety component of a product covered by the Annex I harmonisation legislation (Article 6(1))
- Not a prohibited practice under Article 5

Then it is likely minimal risk with no mandatory obligations — only voluntary codes of conduct under Article 95. Some systems may still have transparency obligations under Article 50:

- Systems that interact directly with people (such as chatbots) must disclose that the user is interacting with AI
- Synthetic audio, image, video, or text content, including deepfakes, must be labelled as artificially generated or manipulated
- Emotion recognition and biometric categorisation systems must inform the persons exposed to them

Knowing your system is NOT high-risk provides legal certainty. It means you do not need conformity assessment, risk management systems, or EU database registration. That is a significant reduction in compliance effort and cost.
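The triage above can be sketched as a small decision function. This is a simplified illustration of the classification order, not legal advice; the real analysis turns on the system's intended purpose and context:

```python
def classify_risk(prohibited_art5: bool,
                  in_annex_iii: bool,
                  safety_component_annex_i: bool,
                  art50_transparency: bool) -> str:
    """Simplified EU AI Act risk triage (illustrative only)."""
    if prohibited_art5:
        return "prohibited"                 # Article 5 practices
    if in_annex_iii or safety_component_annex_i:
        return "high_risk"                  # Article 6(1) and 6(2)
    if art50_transparency:
        return "limited_risk_transparency"  # Article 50 duties only
    return "minimal_risk"                   # voluntary codes, Article 95

# An e-commerce recommendation engine: none of the triggers apply
print(classify_risk(False, False, False, False))  # minimal_risk
```

Note that the checks are ordered: a prohibited practice is never merely high-risk, and a high-risk system carries Article 50 duties on top of the full regime rather than instead of it.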

Check Your System With Gibs

```python
import gibs

client = gibs.Client(api_key="sk-gibs-...")

# Gibs matches against Annex III and tells you the result
result = client.classify(
    system_description="AI system that generates credit scores for consumer loan applications",
    regulations=["ai_act"]
)

print(result.classification)    # "high_risk"
print(result.risk_basis)        # "Annex III, Area 5(b): creditworthiness assessment"
print(result.articles)          # ["Article 6(2)", "Article 9", "Article 10", ...]

# Not high-risk — Gibs tells you explicitly
result = client.classify(
    system_description="AI-powered product recommendation engine for an e-commerce website",
    regulations=["ai_act"]
)

print(result.classification)    # "minimal_risk"
print(result.risk_basis)        # "Not listed in Annex III, not a safety component under Article 6(1)"
```

The same check is available over the REST API:

```bash
curl -X POST https://api.gibs.dev/v1/classify \
  -H "Authorization: Bearer sk-gibs-..." \
  -H "Content-Type: application/json" \
  -d '{"system_description": "AI proctoring system that monitors student behavior during university exams", "regulations": ["ai_act"]}'
```

High-Risk Obligations (What Applies if You're Listed)

If your system falls under any Annex III area, the full high-risk compliance regime applies. These are the key requirements:

| Obligation | Article | Summary |
|-----------|---------|---------|
| Risk management system | Article 9 | Continuous identification, analysis, estimation, and evaluation of risks throughout the system lifecycle |
| Data governance | Article 10 | Training, validation, and testing datasets must meet quality criteria — relevance, representativeness, freedom from errors, completeness |
| Technical documentation | Article 11 | Comprehensive documentation demonstrating compliance, drawn up before the system is placed on the market |
| Record-keeping | Article 12 | Automatic logging of events during system operation, with traceability throughout the lifecycle |
| Transparency | Article 13 | Instructions for use provided to deployers — capabilities, limitations, intended purpose, performance characteristics |
| Human oversight | Article 14 | Designed to allow effective oversight by natural persons during the period of use, including ability to override or interrupt |
| Accuracy, robustness, security | Article 15 | Appropriate and consistent levels of accuracy, robustness, and cybersecurity throughout the lifecycle |
| Quality management system | Article 17 | Systematic procedures for compliance management, risk management, post-market monitoring, and documentation |
| EU database registration | Article 49 | Register the system in the EU database before placing it on the market or putting it into service |
| Conformity assessment | Article 43 | Internal conformity assessment for most Annex III systems; third-party assessment required for real-time remote biometric identification |

Providers bear the primary obligations. Deployers (companies using high-risk AI systems developed by others) have a lighter but still significant set of obligations under Article 26 — human oversight, input data quality, logging, and cooperation with authorities.
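For internal compliance tooling, the obligation-to-article mapping above can be kept as a simple lookup. This is a sketch; the dictionary and function names are illustrative, not part of any official schema:

```python
# Core high-risk obligations for Annex III systems (provider side)
HIGH_RISK_OBLIGATIONS = {
    "Article 9":  "Risk management system",
    "Article 10": "Data governance",
    "Article 11": "Technical documentation",
    "Article 12": "Record-keeping",
    "Article 13": "Transparency and instructions for use",
    "Article 14": "Human oversight",
    "Article 15": "Accuracy, robustness, and cybersecurity",
    "Article 17": "Quality management system",
    "Article 43": "Conformity assessment",
    "Article 49": "EU database registration",
}

def pending_obligations(completed: set) -> list:
    """Articles from the high-risk regime not yet addressed."""
    return [a for a in HIGH_RISK_OBLIGATIONS if a not in completed]

# Track compliance progress against the full list
print(pending_obligations({"Article 9", "Article 10"}))
```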

Who This Is For

Try It Now

Free tier: 50 requests/month, no credit card required.

Get your API key | Read the docs | Python SDK | npm package

FAQ

Is Annex III the only way a system becomes high-risk?

No. Article 6(1) also classifies AI systems as high-risk if they are safety components of products covered by EU harmonisation legislation listed in Annex I (e.g., medical devices, machinery, toys, lifts, radio equipment). Annex III covers standalone AI systems used in specific high-risk contexts; Article 6(1) covers AI embedded in or used as a safety component of already-regulated products.

Can the Annex III list be updated?

Yes. Article 7 gives the European Commission the power to amend Annex III through delegated acts, adding new high-risk use cases or modifying existing ones. The criteria for additions include severity of harm, probability of occurrence, reversibility of harm, extent of existing EU regulation, and number of potentially affected persons. The Commission must also consider the degree of asymmetry of power between the AI system user and the affected persons.

Is a chatbot high-risk under Annex III?

Generally no. A customer service chatbot is not listed in any Annex III area. It has transparency obligations under Article 50 — users must be informed they are interacting with an AI system — but is otherwise minimal risk. A chatbot only becomes high-risk if deployed in an Annex III context, for example a chatbot that screens job applicants (Area 4), assesses creditworthiness (Area 5), or assists judicial authorities in legal reasoning (Area 8).

What about AI in healthcare?

AI in healthcare can be high-risk through two routes: (1) if it is a safety component of a medical device under Article 6(1) and Annex I (which lists the Medical Devices Regulation), or (2) if it falls under Annex III Area 5 (access to essential services, such as health insurance risk assessment). AI-powered medical device software (Software as a Medical Device, SaMD) is typically regulated under both the AI Act and the Medical Devices Regulation (EU 2017/745).

Does emotion recognition always make a system high-risk?

Emotion recognition in the workplace and in educational institutions is prohibited under Article 5(1)(f), except where the system is intended for medical or safety reasons. In other contexts, for example marketing research or entertainment, emotion recognition systems are high-risk under Annex III Area 1 and carry transparency obligations under Article 50: the persons exposed must be informed. Note that the Act defines emotion recognition as inferring emotions or intentions from biometric data (Article 3(39)), so sentiment analysis of written text alone falls outside the definition.

How does Gibs classify edge cases?

Gibs matches system descriptions against the full Annex III text, cross-references Article 6(1) safety component criteria, and checks Article 50 transparency obligations. The AI Act corpus covers all 113 articles, 13 annexes, and 180 recitals across 836 chunks. For ambiguous cases, Gibs explains the reasoning with specific article citations, identifying which Annex III area applies (or explicitly stating that none apply), allowing you or your legal team to make the final determination.

What is the exception under Article 6(3)?

Article 6(3) provides that a high-risk AI system listed in Annex III is not considered high-risk if it does not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons. Specifically, the exception applies if the AI system: (a) performs a narrow procedural task, (b) improves the result of a previously completed human activity, (c) detects decision-making patterns without replacing or influencing human assessment, or (d) performs a preparatory task to an assessment relevant to the Annex III use cases. However, AI systems that profile natural persons are always considered high-risk, regardless of this exception.
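The exception logic described above can be sketched as a predicate. This is a simplified illustration of Article 6(3); it omits the provider's duty to document the assessment and register the system under Article 6(4) and Article 49(2):

```python
def annex_iii_exception_applies(narrow_procedural_task: bool,
                                improves_prior_human_activity: bool,
                                detects_patterns_only: bool,
                                preparatory_task: bool,
                                profiles_natural_persons: bool) -> bool:
    """Article 6(3) derogation, simplified: an Annex III system escapes
    high-risk status only via conditions (a)-(d), and never if it
    performs profiling of natural persons."""
    if profiles_natural_persons:
        return False  # profiling systems are always high-risk
    return (narrow_procedural_task
            or improves_prior_human_activity
            or detects_patterns_only
            or preparatory_task)

# A CV-ranking tool profiles applicants, so the exception never applies,
# even if it only performs a preparatory task
print(annex_iii_exception_applies(False, False, False, True, True))  # False
```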

Does Gibs cover other regulations besides the AI Act?

Yes. Gibs currently covers the EU AI Act, DORA (Digital Operational Resilience Act), and GDPR. Cross-regulation queries are supported — for example, AI systems processing personal data may trigger both AI Act obligations (risk management under Article 9, data governance under Article 10) and GDPR requirements (data protection impact assessment under Article 35, lawful basis under Article 6). Gibs returns cited answers from both regulations in a single response.

Last updated: 2026-02-19