EU AI Act for Developers — Classification, Obligations, and Deadlines

The EU AI Act (Regulation 2024/1689) requires developers to classify AI systems by risk level and meet specific obligations based on that classification. This guide covers what you actually need to do — risk categories, the difference between provider and deployer, key deadlines, and how to automate compliance checking with the Gibs API. The full regulation spans 113 articles, 13 annexes, and 180 recitals. Gibs indexes all of them across 836 chunks with 88% accuracy on an expert-curated evaluation dataset.

The Problem

Most AI Act resources are written for lawyers. Developers get blog posts with vague summaries and no actionable tooling. You need to know: Is my system high-risk? What do I have to do? When is the deadline? And ideally, you want to check this programmatically — not fill out a 40-question web form every time you ship a feature.

The regulation itself is 144 pages of legal text. Cross-references between articles, annexes, and recitals are dense. Figuring out whether Article 6(1) or Article 6(2) applies to your system requires reading Annex I, Annex III, and several recitals. That is not a reasonable workflow for an engineering team shipping on two-week sprints.

How the AI Act Classifies Systems

The AI Act uses a four-tier risk classification. Every AI system falls into one of these categories, and the category determines what obligations apply.

Prohibited (Article 5)

These systems are banned outright as of February 2, 2025:

- Subliminal or purposefully manipulative techniques that distort behavior and cause significant harm
- Exploitation of vulnerabilities due to age, disability, or social or economic situation
- Social scoring that leads to detrimental or unjustified treatment
- Predictive policing based solely on profiling or personality traits
- Untargeted scraping of facial images from the internet or CCTV to build facial recognition databases
- Emotion recognition in workplaces and educational institutions (except for medical or safety reasons)
- Biometric categorization to infer sensitive attributes such as race, political opinions, or sexual orientation
- Real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions)

If your system does any of these, stop building it. Penalties for prohibited practices reach up to 35 million EUR or 7% of global annual turnover (Article 99).

High-Risk (Article 6 + Annex III)

A system is high-risk if it falls under Article 6(1) — a safety component of a product covered by existing EU harmonization legislation listed in Annex I — or Article 6(2) — a use case listed in Annex III. The Annex III categories that matter most for developers:

- Point 1: biometric identification and categorization
- Point 2: safety components of critical infrastructure (road traffic, water, gas, electricity)
- Point 3: education and vocational training (admissions, grading, exam proctoring)
- Point 4: employment and worker management (recruitment, CV screening, promotion and termination decisions)
- Point 5: access to essential services (credit scoring, insurance pricing, emergency dispatch)

High-risk obligations include: conformity assessment (Article 43), risk management system (Article 9), data governance (Article 10), technical documentation (Article 11), record-keeping (Article 12), transparency (Article 13), human oversight (Article 14), accuracy and robustness (Article 15), and quality management system (Article 17).
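As a programmatic checklist, the obligations above can be captured in a simple mapping. This is an illustrative sketch — the dictionary below is hand-written from the article list, not something returned by the Gibs API:

```python
# High-risk provider obligations from the AI Act, keyed by article.
# Illustrative mapping; the articles themselves are the authoritative text.
HIGH_RISK_OBLIGATIONS = {
    "Article 9": "risk management system",
    "Article 10": "data governance",
    "Article 11": "technical documentation",
    "Article 12": "record-keeping",
    "Article 13": "transparency and provision of information to deployers",
    "Article 14": "human oversight",
    "Article 15": "accuracy, robustness and cybersecurity",
    "Article 17": "quality management system",
    "Article 43": "conformity assessment",
}

def checklist() -> list[str]:
    """Render the obligations as a reviewable checklist."""
    return [f"[ ] {article}: {obligation}"
            for article, obligation in HIGH_RISK_OBLIGATIONS.items()]

for item in checklist():
    print(item)
```

A mapping like this is handy as the source of truth for an internal compliance tracker: tick items off per system and fail CI if any remain open before a release.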

Limited Risk (Article 50)

These systems have transparency obligations only:

- Chatbots and other systems that interact with people must disclose that the user is dealing with an AI system (unless it is obvious)
- Synthetic audio, image, video, or text content must be marked as artificially generated, and deepfakes must be labelled
- Emotion recognition and biometric categorization systems must inform the people exposed to them

Minimal Risk

Everything else — spam filters, recommendation engines, game AI, search ranking, autocomplete, code linters. No mandatory obligations under the AI Act. You can build and deploy these freely, though voluntary codes of conduct are encouraged (Article 95).
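The four-tier logic above can be sketched as a severity-ordered decision function. The tier names and ordering come straight from the regulation; the boolean inputs are a simplification for illustration (real classification depends on the specific Annex III points and Article 50 conditions):

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited (Article 5)"
    HIGH = "high-risk (Article 6 + Annex III)"
    LIMITED = "limited risk (Article 50)"
    MINIMAL = "minimal risk"

def classify(prohibited_practice: bool,
             annex_i_or_iii_use: bool,
             user_facing_or_generative: bool) -> RiskTier:
    """Walk the tiers from most to least severe; the first match wins."""
    if prohibited_practice:
        return RiskTier.PROHIBITED
    if annex_i_or_iii_use:
        return RiskTier.HIGH
    if user_facing_or_generative:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# A spam filter: no prohibited practice, no Annex III use,
# no user-facing disclosure needed — minimal risk.
print(classify(False, False, False))  # RiskTier.MINIMAL
```

Note the ordering matters: a chatbot used for recruitment decisions matches both the limited-risk and high-risk conditions, and the high-risk tier wins.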

Provider vs Deployer — Which Are You?

The AI Act assigns different obligations depending on your role. Most developers are providers.

| | Provider | Deployer |
|---|---|---|
| Definition | Develops or commissions an AI system and places it on the market or puts it into service under their own name or trademark | Uses an AI system under their authority, except for personal non-professional activity |
| Key Articles | Articles 16-25 | Articles 26-27 |
| Obligations (high-risk) | Full: conformity assessment, risk management system, technical documentation, quality management system, post-market monitoring, serious incident reporting | Use under provider instructions, human oversight, input data quality, logging, cooperation with authorities |
| Example | Company building a CV screening tool, company training a credit scoring model | Company using a third-party CV screening tool for hiring |

When you become a provider even if you didn't build it from scratch: Under Article 25, you are treated as a provider if you put your name or trademark on a high-risk AI system, make a substantial modification to a high-risk system, or modify the intended purpose of a system so that it becomes high-risk. Fine-tuning a third-party model for a high-risk use case can trigger this.
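The three triggers above reduce to a simple any-of check. A minimal sketch (the function and parameter names are illustrative, not part of any SDK):

```python
def deemed_provider(puts_name_on_system: bool,
                    substantially_modifies_system: bool,
                    repurposes_to_high_risk: bool) -> bool:
    """True if any of the three triggers makes you a provider of a
    high-risk AI system, carrying full provider obligations."""
    return any([puts_name_on_system,
                substantially_modifies_system,
                repurposes_to_high_risk])

# Fine-tuning a foundation model into a CV screening tool repurposes
# it for an Annex III use, so the third trigger fires:
print(deemed_provider(False, False, True))  # True
```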

Key Deadlines

The AI Act has a phased application schedule. Not everything applies at once.

| Date | What Applies |
|------|-------------|
| August 1, 2024 | AI Act enters into force |
| February 2, 2025 | Prohibited practices (Article 5) and the AI literacy obligation (Article 4) — already in effect |
| August 2, 2025 | General-purpose AI model rules (Articles 51-56), governance and penalties provisions |
| August 2, 2026 | High-risk AI system obligations (Articles 6-49) — the big one for most developers |
| August 2, 2027 | High-risk systems in Annex I (existing EU product safety legislation, e.g., medical devices, machinery) |

August 2, 2026 is the deadline that affects the majority of AI developers. If you are building or deploying a high-risk AI system under Annex III, you have until that date to achieve full compliance — conformity assessment, risk management, documentation, and all the rest.

Automate It With Gibs

Gibs is a REST API that answers AI Act compliance questions with article-level EUR-Lex citations. Instead of manually reading 144 pages, you make an API call and get structured, cited answers.

Classify your system:

```python
import gibs

client = gibs.Client(api_key="sk-gibs-...")

# Check if your system is high-risk
result = client.check(
    question="Is a CV screening system that ranks job applicants high-risk under the AI Act?",
    regulations=["ai_act"]
)

print(result.answer)
# "Yes. A CV screening system that ranks job applicants is classified as
#  high-risk under Annex III, point 4(a), which covers AI systems intended
#  to be used for recruitment, specifically for screening or filtering
#  applications and evaluating candidates..."

print(result.sources)
# ["Article 6(2)", "Annex III point 4(a)", "Article 9", "Article 14"]
```

Check specific obligations:

```typescript
import { Gibs } from '@gibs-dev/sdk'

const gibs = new Gibs({ apiKey: 'sk-gibs-...' })

// Full compliance check
const result = await gibs.check({
  question: 'What documentation does a provider of high-risk AI need under the AI Act?',
  regulations: ['ai_act']
})
// Returns cited answer referencing Article 11 (technical documentation),
// Article 12 (record-keeping), Article 13 (transparency and provision
// of information to deployers), Article 17 (quality management system)
```

cURL:

```shell
curl -X POST https://api.gibs.dev/v1/check \
  -H "Authorization: Bearer sk-gibs-..." \
  -H "Content-Type: application/json" \
  -d '{"question": "What are the transparency obligations for chatbots under the AI Act?", "regulations": ["ai_act"]}'
```

Gibs also provides a native MCP (Model Context Protocol) server at mcp.gibs.dev. AI coding assistants like Cursor, Claude Desktop, and Windsurf can call Gibs directly to answer regulatory questions within your development environment.

Who This Is For

This is for technical teams. If you need legal advice on specific situations, consult a lawyer. Gibs gives you the regulatory knowledge base — structured, cited, and programmable. Every answer traces to specific articles, not vague summaries.

Try It Now

Free tier: 50 requests per month, no credit card required.

Get your API key | Read the docs | Python SDK | npm package

FAQ

Do I need to comply with the AI Act if I'm outside the EU?

Yes, if your AI system is placed on the market or put into service in the EU, or if its output is used in the EU. The AI Act has extraterritorial scope under Article 2. A company headquartered in the US that offers a credit scoring AI to European banks must comply.

Is a chatbot high-risk under the AI Act?

Generally no. Most chatbots are limited risk under Article 50, requiring only transparency disclosure — users must know they are interacting with an AI system. A chatbot only becomes high-risk if it is used in an Annex III context, for example a chatbot that makes employment decisions or determines access to essential services.

What's the penalty for non-compliance?

Up to 35 million EUR or 7% of global annual turnover (whichever is higher) for prohibited practices. Up to 15 million EUR or 3% for violations of other obligations. Up to 7.5 million EUR or 1% for supplying incorrect information to authorities. SMEs and startups get proportionally lower caps (Article 99).
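The "whichever is higher" rule is just a max over a fixed cap and a turnover percentage. A small sketch of the standard (non-SME) caps, with illustrative tier names:

```python
def max_fine(turnover_eur: float, tier: str = "prohibited") -> float:
    """Upper bound of the fine under Article 99: the higher of a
    fixed cap and a percentage of global annual turnover."""
    caps = {
        "prohibited": (35_000_000, 0.07),      # prohibited practices
        "other": (15_000_000, 0.03),           # other obligations
        "incorrect_info": (7_500_000, 0.01),   # incorrect info to authorities
    }
    fixed_cap, pct = caps[tier]
    return max(fixed_cap, pct * turnover_eur)

# A company with 1 billion EUR turnover: 7% (70M) exceeds the 35M cap.
print(max_fine(1_000_000_000))  # 70000000.0
```

For SMEs and startups the regulation inverts this to the *lower* of the two amounts, so the function above overstates their exposure.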

Do I need a conformity assessment for my AI system?

Only if your system is classified as high-risk. Most high-risk systems under Annex III can use internal conformity assessment — you assess yourself against the requirements. Real-time remote biometric identification systems and certain critical infrastructure AI require third-party conformity assessment by a notified body (Article 43).

What is AI literacy under Article 4?

Article 4 requires providers and deployers to ensure their staff and other persons dealing with AI systems on their behalf have a sufficient level of AI literacy. This means an understanding of AI system capabilities, limitations, and potential risks appropriate to the context of use. This obligation applies from February 2, 2025.

What about general-purpose AI models?

General-purpose AI models (GPAI) — foundation models and large language models — have their own set of obligations under Articles 51-56. All GPAI providers must provide technical documentation, comply with EU copyright law, and publish a summary of training data. GPAI models with systemic risk (based on compute thresholds) have additional obligations including adversarial testing, incident monitoring, and cybersecurity measures. These rules apply from August 2, 2025.

Can Gibs tell me which Annex III category my system falls under?

Yes. Describe your AI system in a query and Gibs will identify the applicable Annex III category (if any), cite the specific point, and list the resulting obligations with article references. The AI Act corpus in Gibs covers all 113 articles, 13 annexes, and 180 recitals.

Does Gibs cover other regulations besides the AI Act?

Yes. Gibs currently covers the EU AI Act, DORA (Digital Operational Resilience Act), and GDPR. Cross-regulation queries are supported — a question like "How do AI Act data governance requirements interact with GDPR?" returns cited answers from both regulations with clear attribution. Additional regulations are planned.

Last updated: 2026-02-19