AI Act Risk Classification — Check If Your AI System Is High-Risk

The EU AI Act (Regulation (EU) 2024/1689) classifies AI systems into four risk levels: prohibited, high-risk, limited risk, and minimal risk. Gibs is an EU AI Act classification tool: it matches your AI system against Annex III use cases and Article 6 criteria, and returns the risk level, applicable obligations, and article citations. Unlike manual AI Act compliance checker questionnaires, Gibs is a programmable API — send a system description, get a structured answer.

The Problem

Every AI system deployed in the EU needs classification before August 2, 2026 (high-risk enforcement). The process today: read 113 articles, cross-reference Annex III's 8 areas, check Article 6(1) safety components, and determine if transparency obligations under Article 50 apply. Most "AI Act tools" are web questionnaires requiring 30+ manual inputs. None offer a programmatic API. None function as a true AI Act compliance checker that tells you when your system is NOT high-risk (negative classification).

How Gibs Solves This

Send a plain-text system description, get classification with citations.

import gibs

client = gibs.Client(api_key="sk-gibs-...")

# High-risk example
result = client.classify(
    system_description="AI system that screens job applicants and ranks CVs automatically",
    regulations=["ai_act"]
)

print(result.classification)    # "high_risk"
print(result.risk_basis)        # "Annex III, Area 4: Employment, workers' management"
print(result.obligations)
# ["conformity_assessment", "risk_management_system",
#  "human_oversight", "transparency", "eu_database_registration"]
print(result.articles)
# ["Article 6(2)", "Article 9", "Article 14", "Article 16", "Article 26"]

# Minimal risk example — Gibs explicitly says "not high-risk"
result = client.classify(
    system_description="AI-powered email spam filter for internal corporate use",
    regulations=["ai_act"]
)

print(result.classification)    # "minimal_risk"
print(result.risk_basis)        # "Not listed in Annex III, not a safety component under Article 6(1)"
# No mandatory obligations — only voluntary codes of conduct (Article 95)

Key differentiator: Gibs tells you when your system is NOT high-risk. Most tools only identify positive matches. Negative classification is equally important for legal certainty.

AI Act Risk Categories

| Risk Level | Criteria | Examples | Key Articles |
|-----------|----------|----------|-------------|
| Prohibited | Article 5 banned practices | Social scoring, real-time biometric ID (with exceptions) | Article 5 |
| High-Risk | Annex III use cases OR safety components (Art 6(1)) | CV screening, credit scoring, medical devices with AI | Articles 6-49 |
| Limited Risk | Transparency obligations only | Chatbots, deepfakes, emotion recognition | Article 50 |
| Minimal Risk | No mandatory obligations | Spam filters, game AI, inventory optimization | Article 95 (voluntary) |

Annex III covers 8 areas: biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and administration of justice.
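The risk-level table above can be sketched as a simple lookup. The obligation names below are illustrative labels (mirroring the example output earlier on this page), not authoritative Gibs API values:

```python
# Illustrative mapping of AI Act risk levels to key mandatory obligations.
# Labels are descriptive, not official Gibs API output values.
RISK_OBLIGATIONS = {
    "prohibited": [],  # deployment is banned outright (Article 5)
    "high_risk": [
        "conformity_assessment",     # Article 43
        "risk_management_system",    # Article 9
        "human_oversight",           # Article 14
        "transparency",              # Article 13
        "eu_database_registration",  # Article 49
    ],
    "limited_risk": ["disclosure_to_users"],  # Article 50
    "minimal_risk": [],  # only voluntary codes of conduct (Article 95)
}

def obligations_for(level: str) -> list[str]:
    """Return the mandatory obligations for a given risk level."""
    return RISK_OBLIGATIONS[level]
```
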

Who This Is For

Gibs is for teams that need programmatic classification inside their own workflows. It is not a consulting service or audit platform — it is the compliance engine you build on top of.

How Classification Works

The EU AI Act classification tool in Gibs follows the same decision tree that a compliance lawyer would use, but executes it programmatically against the full regulation text:

  1. Prohibited check (Article 5): Does the system description match any of the 8 banned practices? If yes, classification is prohibited.
  2. Annex III match (Article 6(2)): Does the system fall within any of the 8 high-risk use case areas? Gibs matches against the full Annex III text, not just category headings — a "video interview scoring system" matches Area 4 (employment) even without using the exact legal phrasing.
  3. Safety component check (Article 6(1)): Is the system a safety component of a product covered by EU harmonisation legislation listed in Annex I? This catches AI embedded in medical devices, machinery, and other already-regulated products.
  4. Article 6(3) exception: Even if matched to Annex III, does the exception apply? Systems performing narrow procedural tasks, improving prior human activities, or detecting patterns without replacing human assessment may be excluded — unless the system profiles natural persons.
  5. Transparency check (Article 50): If not high-risk, does the system have limited-risk transparency obligations? Chatbots, deepfakes, emotion recognition, and AI-generated public-interest content require disclosure.
  6. Minimal risk default: If none of the above apply, the system is minimal risk with no mandatory obligations.
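The six steps above can be sketched as a decision function. This is a simplified illustration of the ordering — the inputs stand in for the answers at each step, whereas Gibs's actual matcher works on the full regulation text:

```python
def classify(matches_article5: bool,
             matches_annex3: bool,
             is_safety_component: bool,
             art6_3_exception: bool,
             profiles_persons: bool,
             has_transparency_duty: bool) -> str:
    """Simplified AI Act decision tree; each argument answers one step."""
    # Step 1: prohibited practices trump everything (Article 5)
    if matches_article5:
        return "prohibited"
    # Step 3: safety components are high-risk regardless of Annex III (Article 6(1))
    if is_safety_component:
        return "high_risk"
    # Steps 2 + 4: Annex III match, unless the Article 6(3) exception applies --
    # and the exception never applies to systems that profile natural persons
    if matches_annex3 and not (art6_3_exception and not profiles_persons):
        return "high_risk"
    # Step 5: limited-risk transparency obligations (Article 50)
    if has_transparency_duty:
        return "limited_risk"
    # Step 6: minimal risk by default
    return "minimal_risk"
```
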

Every classification response includes the specific articles and Annex III points that drove the decision, so your legal team can verify the reasoning.

Try It Now

Free tier: 50 requests/month.

curl -X POST https://api.gibs.dev/v1/classify \
  -H "Authorization: Bearer sk-gibs-..." \
  -H "Content-Type: application/json" \
  -d '{"system_description": "AI chatbot that provides customer service for an online retailer", "regulations": ["ai_act"]}'
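For the chatbot in the curl example, a customer-service chatbot would typically land in the limited-risk tier (Article 50 disclosure). The response body below is illustrative — the field names follow the Python example above, but the exact JSON schema is an assumption:

```python
import json

# Hypothetical response body for the chatbot request above.
# A chatbot interacting with people triggers Article 50 disclosure duties.
body = '''{
  "classification": "limited_risk",
  "risk_basis": "Article 50(1): AI system interacting with natural persons",
  "obligations": ["transparency"],
  "articles": ["Article 50"]
}'''

result = json.loads(body)
if result["classification"] != "high_risk":
    print(f"Not high-risk: {result['risk_basis']}")
```
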

Get your API key | Read the docs

FAQ

What is the EU AI Act classification system?

The EU AI Act classifies AI systems into four risk levels: prohibited (Article 5), high-risk (Article 6 + Annex III), limited risk (Article 50 transparency obligations), and minimal risk (no mandatory obligations). Classification determines which compliance obligations apply to your system.

How does Gibs determine if an AI system is high-risk?

Gibs matches your system description against Annex III's 8 use-case areas and checks whether the system is a safety component of a product covered by EU harmonisation legislation under Article 6(1). It returns the specific Annex III area and articles that apply, or explicitly states the system is not high-risk if no match is found.

Can Gibs classify systems as "not high-risk"?

Yes. Negative classification is a key feature. When a system does not match any Annex III area and is not a safety component under Article 6(1), Gibs explicitly classifies it as minimal risk and notes any applicable transparency obligations under Article 50.

When does the AI Act high-risk enforcement start?

High-risk AI system requirements under the EU AI Act apply from August 2, 2026. Prohibited practices already apply (since February 2, 2025), and GPAI rules apply from August 2, 2025.

Does Gibs cover GPAI (general-purpose AI) obligations?

Yes. Gibs covers general-purpose AI model obligations under Articles 51-56, including transparency requirements, technical documentation, and the systemic risk provisions for models above the compute threshold.
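The compute threshold mentioned above comes from Article 51(2): a GPAI model is presumed to pose systemic risk when its cumulative training compute exceeds 10^25 floating-point operations. A trivial sketch of that presumption:

```python
# Article 51(2): systemic risk is presumed when cumulative training
# compute exceeds 10^25 floating-point operations.
SYSTEMIC_RISK_FLOP_THRESHOLD = 10 ** 25

def has_presumed_systemic_risk(training_flops: float) -> bool:
    """Presumption of systemic risk based on training compute alone."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD
```

The presumption is rebuttable, and the Commission can also designate models as systemic-risk on other grounds, so treat this check as one input, not a final answer.
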

Can I check AI Act and GDPR obligations together?

Yes. Gibs supports cross-regulation queries. An AI system processing personal data may trigger both AI Act obligations (e.g., risk management under Article 9) and GDPR requirements (e.g., DPIA under Article 35). Gibs returns cited obligations from both regulations in a single response.
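A cross-regulation response combines cited obligations from both regimes. The sketch below shows one plausible shape for that combined result — the field names and values are hypothetical, not actual Gibs output:

```python
# Hypothetical per-regulation results for an AI system processing
# personal data (e.g. the CV-screening example earlier on this page).
ai_act_obligations = [
    {"regulation": "ai_act", "obligation": "risk_management_system", "article": "Article 9"},
    {"regulation": "ai_act", "obligation": "human_oversight", "article": "Article 14"},
]
gdpr_obligations = [
    {"regulation": "gdpr", "obligation": "data_protection_impact_assessment", "article": "Article 35"},
]

# A single cross-regulation response keys obligations by regulation,
# so one call yields citations from both regimes.
combined = {"ai_act": ai_act_obligations, "gdpr": gdpr_obligations}
citations = [o["article"] for obs in combined.values() for o in obs]
```
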

Last updated: 2026-02-19