EU AI Act · February 2026

EU AI Act: Risk Classification Guide

A practical starting point for organisations operating in Turkey. Four risk tiers, common misunderstandings, and three questions to classify any AI system.

The EU AI Act entered into force in August 2024, and it reaches organisations based in Turkey: if your system touches the EU market, affects EU citizens, or sits within an EU company's supply chain, the regulation applies to you.

So where do you start?

The first step is always the same: understand which risk category your system falls into.

Four Tiers, One Question

The EU AI Act classifies AI systems into four risk tiers, and this classification directly determines your compliance obligations.

Tier 1
Prohibited Systems
Cannot be deployed under any circumstances. Social scoring systems, real-time biometric surveillance in public spaces (outside narrow exceptions), and subliminal manipulation tools fall into this category. If your system meets this definition, the question is not compliance — it is prohibition. Article 5, Regulation (EU) 2024/1689. In force since 2 February 2025.
Tier 2
High-Risk Systems
The most comprehensive obligation tier. Full enforcement begins 2 August 2026 for Annex III systems. Hiring and employee evaluation systems, credit scoring tools, educational assessment systems, and critical infrastructure management fall into this category. High-risk operators must conduct risk assessments, implement human oversight, maintain technical documentation, and keep logs. Article 6 + Annexes I and III.
Tier 3
Limited Risk Systems
Transparency obligations apply, but the burden is lighter than high-risk. You must inform users that they are interacting with an AI system. Customer-facing chatbots and content recommendation systems typically fall here. Article 50.
Tier 4
Minimal Risk Systems
Spam filters and AI features in games. No mandatory legal obligations — voluntary codes of conduct are sufficient. Article 95.
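The four tiers above can be summarised as a simple lookup. The sketch below is illustrative only: the type and variable names are invented for this example, and the obligation summaries paraphrase the tier descriptions in this article rather than the regulation's full text.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class RiskTier:
    """One EU AI Act risk tier, as summarised in this article."""
    name: str
    article: str        # primary article reference in Regulation (EU) 2024/1689
    obligations: str    # paraphrased headline obligation


# Hypothetical mapping for illustration; not a substitute for the regulation text.
TIERS = {
    1: RiskTier("Prohibited", "Article 5",
                "Cannot be deployed under any circumstances"),
    2: RiskTier("High-Risk", "Article 6 + Annexes I and III",
                "Risk assessment, human oversight, technical documentation, logging"),
    3: RiskTier("Limited Risk", "Article 50",
                "Transparency: inform users they are interacting with an AI system"),
    4: RiskTier("Minimal Risk", "Article 95",
                "No mandatory obligations; voluntary codes of conduct"),
}

print(TIERS[2].article)  # → Article 6 + Annexes I and III
```

A lookup like this is useful for internal inventories: tag each AI system in your organisation with a tier, and the headline obligations follow mechanically.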

Where Turkish Companies Most Often Get Stuck

In practice, Turkish organisations tend to encounter difficulties in two areas.

First, they don't know the risk category of the SaaS tools they use. Recruitment platforms, automated CV screening features, and performance management software can fall within the high-risk category. Not having developed the tool yourself does not exempt you from obligations — deployers are liable too.

Second, the "we're not in the EU" assumption. A company based in Turkey that serves EU customers or evaluates EU employees is covered by the regulation. Geographic location is not the deciding factor — scope of impact is.

Three Practical Questions to Classify Any System

Use these three questions to position your system quickly.

  1. Does this system make decisions about an individual? If it produces outputs about specific people in hiring, credit, education, healthcare, or justice contexts, you are close to the high-risk category.
  2. How binding is the output? Is the system offering a recommendation or directly shaping a decision? How real is human oversight — genuine review or nominal rubber-stamping?
  3. What data does it process? If the system handles biometric, behavioural, or sensitive personal data, the risk level increases automatically.

The answers to these three questions will point you to the right category. If the answers are ambiguous — as they usually are at the start — that is itself a signal: your system needs better documentation.
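The three questions can be folded into a rough triage function. This is a sketch of the heuristic described above, not a legal determination: the function and parameter names are invented for this example, and a real classification must be checked against Article 6 and Annex III.

```python
def triage(decides_about_individuals: bool,
           output_is_binding: bool,
           processes_sensitive_data: bool,
           user_facing_ai: bool = False) -> str:
    """Rough first-pass triage following the three questions in this article.

    Illustrative only -- an ambiguous or borderline result means the system
    needs proper documentation and a formal assessment, not a guess.
    """
    # Q1 + Q2/Q3: decisions about individuals, with binding outputs or
    # sensitive data, point toward the high-risk tier.
    if decides_about_individuals and (output_is_binding or processes_sensitive_data):
        return "likely high-risk (Article 6 + Annex III): full obligations apply"
    # User-facing AI (e.g. chatbots) carries transparency duties at minimum.
    if user_facing_ai:
        return "limited risk (Article 50): transparency obligations"
    return "review further; possibly minimal risk (Article 95)"


# A CV-screening tool whose output directly shapes hiring decisions:
print(triage(decides_about_individuals=True,
             output_is_binding=True,
             processes_sensitive_data=False))
# → likely high-risk (Article 6 + Annex III): full obligations apply
```

Note that the function only ever narrows the question; it cannot clear a system. That mirrors the point above: ambiguity is itself a signal that documentation is missing.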

Classification Is Not a One-Time Event

The most important mindset shift the EU AI Act introduces is this: risk classification is not a one-off checklist exercise but an ongoing evaluation practice. As systems are updated, deployment contexts change, and data flows evolve, the category can shift.

The starting point is straightforward: know where your system stands today.

Use the Hexis Risk Classifier to determine the EU AI Act risk level of your AI system — step by step, with article references and obligation mapping.
Risk Classifier → Compliance Checklist →

Note: This article is based on information available as of February 2026. Article references are to Regulation (EU) 2024/1689. For information on proposed amendments, see our article on the Digital Omnibus. This article does not constitute legal advice.