AI Act Risk Classification — Check If Your AI System Is High-Risk

The EU AI Act (Regulation 2024/1689) classifies AI systems into four risk levels: prohibited, high-risk, limited risk, and minimal risk. Gibs is an API that classifies your AI system by matching it against Annex III use cases and Article 6 criteria, returning the risk level, applicable obligations, and article citations. No web forms or manual research — send a system description, get a structured answer.

The Problem

Every AI system deployed in the EU needs classification before August 2, 2026 (high-risk enforcement). The process today: read 113 articles, cross-reference Annex III's 8 areas, check Article 6(1) safety components, and determine if transparency obligations under Article 50 apply. Most "AI Act tools" are web questionnaires requiring 30+ manual inputs. None offer an API. None tell you when your system is NOT high-risk (negative classification).

How Gibs Solves This

Send a plain-text system description, get classification with citations.

import gibs

client = gibs.Client(api_key="sk-gibs-...")

# High-risk example
result = client.classify(
    system_description="AI system that screens job applicants and ranks CVs automatically",
    regulations=["ai_act"]
)

print(result.classification)    # "high_risk"
print(result.risk_basis)        # "Annex III, Area 4: Employment, workers' management"
print(result.obligations)
# ["conformity_assessment", "risk_management_system",
#  "human_oversight", "transparency", "eu_database_registration"]
print(result.articles)
# ["Article 6(2)", "Article 9", "Article 14", "Article 16", "Article 26"]

# Minimal risk example — Gibs explicitly says "not high-risk"
result = client.classify(
    system_description="AI-powered email spam filter for internal corporate use",
    regulations=["ai_act"]
)

print(result.classification)    # "minimal_risk"
print(result.risk_basis)        # "Not listed in Annex III, not a safety component under Article 6(1)"
# No mandatory obligations — only voluntary codes of conduct (Article 95)

Key differentiator: Gibs tells you when your system is NOT high-risk. Most tools only identify positive matches. Negative classification is equally important for legal certainty.
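In practice, both outcomes feed the same deployment decision: downstream code can key off the classification field to pick the required compliance action. A minimal sketch — the function and action names here are hypothetical, not part of the Gibs SDK:

```python
def deployment_gate(classification: str) -> str:
    """Map an AI Act risk level to a hypothetical internal action.

    Tiers follow the four risk levels of Regulation 2024/1689.
    """
    actions = {
        "prohibited": "block",                         # Article 5 practices may not be deployed
        "high_risk": "require_conformity_assessment",  # Articles 6-49 obligations apply
        "limited_risk": "add_transparency_notice",     # Article 50
        "minimal_risk": "deploy",                      # no mandatory obligations
    }
    if classification not in actions:
        raise ValueError(f"unknown classification: {classification}")
    return actions[classification]
```

A negative (minimal-risk) result is what lets this gate deploy with confidence instead of defaulting to the most conservative path.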

AI Act Risk Categories

| Risk Level | Criteria | Examples | Key Articles |
|-----------|----------|----------|-------------|
| Prohibited | Article 5 banned practices | Social scoring, real-time biometric ID (with exceptions) | Article 5 |
| High-Risk | Annex III use cases OR safety components (Art 6(1)) | CV screening, credit scoring, medical devices with AI | Articles 6-49 |
| Limited Risk | Transparency obligations only | Chatbots, deepfakes, emotion recognition | Article 50 |
| Minimal Risk | No mandatory obligations | Spam filters, game AI, inventory optimization | Article 95 (voluntary) |

Annex III covers 8 areas: biometrics, critical infrastructure, education and vocational training, employment and workers' management, access to essential private and public services, law enforcement, migration and border control, and administration of justice and democratic processes.
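To make the matching idea concrete, here is a toy keyword matcher over a few of the Annex III areas. This is an illustrative simplification only — Gibs's actual classification engine is not keyword-based, and the area names and keyword lists below are abridged:

```python
# Toy illustration of Annex III matching — NOT Gibs's actual engine.
ANNEX_III_KEYWORDS = {
    "Area 1: Biometrics": ["biometric", "face recognition"],
    "Area 3: Education": ["exam scoring", "student admission"],
    "Area 4: Employment": ["job applicant", "cv", "recruitment"],
    "Area 5: Essential services": ["credit scoring", "insurance pricing"],
}

def match_annex_iii(description: str):
    """Return the first Annex III area whose keywords appear in the
    system description, or None if nothing matches (not high-risk)."""
    text = description.lower()
    for area, keywords in ANNEX_III_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return area
    return None
```

The `None` branch is the negative classification discussed above: no Annex III match (and no Article 6(1) safety-component role) means the system is not high-risk.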

Who This Is For

Gibs is for programmatic classification. It is not a consulting service or audit platform — it is the compliance engine you build on top of.

Try It Now

Free tier: 50 requests/month.

curl -X POST https://api.gibs.dev/v1/classify \
  -H "Authorization: Bearer sk-gibs-..." \
  -H "Content-Type: application/json" \
  -d '{"system_description": "AI chatbot that provides customer service for an online retailer", "regulations": ["ai_act"]}'

Get your API key | Read the docs

FAQ

What is the EU AI Act classification system?

The EU AI Act classifies AI systems into four risk levels: prohibited (Article 5), high-risk (Article 6 + Annex III), limited risk (Article 50 transparency obligations), and minimal risk (no mandatory obligations). Classification determines which compliance obligations apply to your system.

How does Gibs determine if an AI system is high-risk?

Gibs matches your system description against Annex III's 8 use-case areas and checks whether the system is a safety component of a product covered by EU harmonisation legislation under Article 6(1). It returns the specific Annex III area and articles that apply, or explicitly states the system is not high-risk if no match is found.

Can Gibs classify systems as "not high-risk"?

Yes. Negative classification is a key feature. When a system does not match any Annex III area and is not a safety component under Article 6(1), Gibs explicitly classifies it as minimal risk and notes any applicable transparency obligations under Article 50.

When does the AI Act high-risk enforcement start?

Requirements for Annex III high-risk systems apply from August 2, 2026 (August 2, 2027 for high-risk AI that is a safety component of products regulated under Annex I). Prohibited practices have applied since February 2, 2025, and GPAI rules since August 2, 2025.
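The main application dates can be captured in a small lookup; a sketch (the names `MILESTONES` and `rules_in_force` are illustrative, not part of the Gibs SDK):

```python
from datetime import date

# Key application dates of the EU AI Act (Regulation 2024/1689)
MILESTONES = [
    (date(2025, 2, 2), "prohibited_practices"),    # Article 5 bans
    (date(2025, 8, 2), "gpai_obligations"),        # Articles 51-56
    (date(2026, 8, 2), "high_risk_requirements"),  # Annex III high-risk systems
]

def rules_in_force(on: date) -> list[str]:
    """Return which AI Act rule sets already apply on a given date."""
    return [name for start, name in MILESTONES if on >= start]
```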

Does Gibs cover GPAI (general-purpose AI) obligations?

Yes. Gibs covers general-purpose AI model obligations under Articles 51-56, including transparency requirements, technical documentation, and the systemic-risk provisions for models presumed to have high-impact capabilities (cumulative training compute above 10^25 FLOPs, Article 51(2)).

Can I check AI Act and GDPR obligations together?

Yes. Gibs supports cross-regulation queries. An AI system processing personal data may trigger both AI Act obligations (e.g., risk management under Article 9) and GDPR requirements (e.g., DPIA under Article 35). Gibs returns cited obligations from both regulations in a single response.

Last updated: 2026-02-19