EU AI Act Risk Classification and AI Governance Matrix — A Free Compliance Tool for Turkish Companies

Hexis · ORIENT Framework · Stage 1
Observe — Define Your AI System
Compliance begins with knowing what your system actually does.
Every obligation under the EU AI Act traces back to one question: what is this system designed to do? Your system's intended purpose determines its risk classification, your legal obligations, and the measures you need to take. An undefined system cannot be classified. An unclassified system cannot be governed.
You will give your AI system a formal identity — document what it does, who it affects, and how it could be misused. Foreseeable misuse assessment is part of this stage. Most organizations have never done this formally.
Article 3(12): Intended purpose must be defined. Risk classification is based on this definition.
Article 6: Risk category is determined by intended purpose. Observe is the prerequisite for classification.
What you will produce
An AI System Card — a documented, single-page identity record of your system. This card is the input for the Risk stage and every subsequent ORIENT phase, and it aligns with the intended-purpose definition in EU AI Act Article 3(12).
Estimated time: 10–15 minutes
System Identity
Rule-based automation systems without AI components generally fall outside the scope of the EU AI Act (Article 3(1)). You may still continue the assessment to gauge governance maturity.
The EU AI Act applies specifically to AI systems as defined in Article 3(1). Pure rule-based automation that does not involve machine learning, logic-based, or statistical approaches may fall outside the regulation's scope. Identifying the AI component correctly is essential for determining whether the Act applies to your system.
Context
Article 3(12) of the EU AI Act requires intended purpose to be explicitly defined before a system is placed on the market or put into service. Your risk classification under Article 6 — and every obligation that follows — is determined by this definition. Without a documented intended purpose, risk assessment has no foundation.
Examples:
• "Screens job applications and ranks candidates by fit score"
• "Calculates credit risk score for loan applications"
• "Recommends products based on customer purchase history"
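The identity record this stage produces can be sketched as a minimal data structure. All field names below are illustrative assumptions; the Act requires that the intended purpose be documented but prescribes no schema:

```python
from dataclasses import dataclass

@dataclass
class AISystemCard:
    """Illustrative single-page identity record for an AI system.

    Field names are hypothetical; the EU AI Act does not define a
    card format, only that intended purpose be documented (Art. 3(12)).
    """
    name: str
    intended_purpose: str          # what the system is designed to do
    affected_groups: list[str]     # who its outputs affect
    foreseeable_misuse: list[str]  # predictable unintended uses
    processes_personal_data: bool = False

# Example card using the first intended-purpose sample above
card = AISystemCard(
    name="CV Screener",
    intended_purpose="Screens job applications and ranks candidates by fit score",
    affected_groups=["job applicants"],
    foreseeable_misuse=["ranking candidates by protected characteristics"],
    processes_personal_data=True,
)
```

A single dataclass keeps the record auditable: every later ORIENT stage can consume the same object rather than re-collecting the facts.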
The EU AI Act assigns different obligations based on your role in the AI value chain. Providers (Article 16) bear the heaviest obligations: conformity assessment, technical documentation, post-market monitoring. Deployers (Article 26) must ensure proper use, human oversight, and input data relevance. Importers and distributors have verification duties (Articles 23–24). Misidentifying your role means misidentifying your obligations.
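The role-to-obligation split described above can be expressed as a simple lookup. The duty labels are illustrative shorthand, not the Act's wording; only the article numbers come from the text:

```python
def obligations_for(role: str) -> set[str]:
    """Return headline duties for an AI value-chain role (a sketch).

    Labels are hypothetical shorthand for the obligations named in
    the prose; unknown roles return an empty set.
    """
    duties = {
        "provider": {          # Article 16
            "conformity_assessment",
            "technical_documentation",
            "post_market_monitoring",
        },
        "deployer": {          # Article 26
            "ensure_proper_use",
            "human_oversight",
            "input_data_relevance",
        },
        "importer": {"verify_conformity"},
        "distributor": {"verify_conformity"},
    }
    return duties.get(role, set())
```

Note that one organization can hold two roles at once (e.g. a deployer that substantially modifies a system may become a provider), so a real tool would return the union of duties.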
Is this system used in the EU market, or does its output affect individuals in the EU?
Article 2 of the EU AI Act defines its territorial scope. The Act applies to providers placing AI systems on the EU market, deployers established in the EU, and providers/deployers in third countries where AI system output is used in the EU. If your system has no connection to the EU market, the Act may not apply — though other jurisdictions may have their own requirements.
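A first-pass territorial screen following the Article 2 summary above might look like this. It is deliberately rough: the Act's scope has further cases and exemptions (e.g. military and pure research use) that this check ignores:

```python
def act_may_apply(placed_on_eu_market: bool,
                  deployer_established_in_eu: bool,
                  output_used_in_eu: bool) -> bool:
    """Rough territorial-scope screen for the EU AI Act (a sketch).

    True if any of the Article 2 connection points summarized in
    the text applies; a False result does not rule out other
    jurisdictions' requirements.
    """
    return (placed_on_eu_market
            or deployer_established_in_eu
            or output_used_in_eu)
```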
Impact
The EU AI Act pays particular attention to systems affecting vulnerable groups — minors, persons with disabilities, or those in situations of vulnerability. If your system affects any of these groups, additional obligations may apply, and a Fundamental Rights Impact Assessment (FRIA) under Article 27 may be required.
Article 14 of the EU AI Act requires high-risk AI systems to have effective human oversight measures. The level of autonomy directly determines the type and extent of human oversight required. Fully automated decision-making also triggers rights under KVKK Article 11(1)(g), which gives individuals the right to object to decisions made solely by automated systems.
Does this system process personal data as input or output?
Risk Signals
Article 3(13) of the EU AI Act defines "reasonably foreseeable misuse" directly alongside the intended-purpose definition in Article 3(12). Your risk classification must account not only for what the system is designed to do, but also for predictable unintended uses. If foreseeable misuse is not assessed at this stage, the Risk stage begins on an incomplete foundation.
Hexis · EU AI Act Risk Classifier
Risk — Determine Your System's Legal Position
An unclassified system cannot be governed. This stage establishes which obligations apply to you — and which do not.
Your system information from the Observe stage has been used to pre-fill parts of this assessment. Review each step and confirm — or correct where needed. Classification is your decision.
The EU AI Act organizes AI systems into four risk categories. Which category your system falls into depends on one question: what kind of decision does this system make or support, and about whom? Misclassification carries risk in both directions. Classifying a high-risk system as low-risk means missing mandatory obligations. Classifying a low-risk system as high-risk means misallocating resources and time. Accurate classification is the foundation of your compliance strategy.
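The four-tier logic described above can be sketched as a toy classifier. It is highly simplified: Article 6 carve-outs, the Annex III use-case list, and transparency nuances are all collapsed into single booleans, and the parameter names are assumptions:

```python
def classify(practice_prohibited: bool,
             annex_iii_use_case: bool,
             interacts_with_people: bool) -> str:
    """Toy four-tier EU AI Act classifier (a sketch, not legal logic).

    Checks run from most to least severe, mirroring the ordering of
    the Act's categories.
    """
    if practice_prohibited:     # Article 5 prohibited practices
        return "unacceptable"
    if annex_iii_use_case:      # Article 6 / Annex III high-risk areas
        return "high"
    if interacts_with_people:   # transparency-only obligations
        return "limited"
    return "minimal"
```

The ordering matters: a system matching an Annex III use case is high-risk regardless of whether it also has people-facing transparency duties, which is why the checks short-circuit from most to least severe.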
Hexis · Contextual Matrix Generator
System Name
Risk Exposure
Oversight
Human review mechanisms and intervention capabilities during AI operation
Monitoring
Ongoing performance tracking — can you detect if the system's outputs change or degrade over time?
Documentation
Technical records, explainability and disclosure to affected individuals
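The three matrix dimensions listed above (oversight, monitoring, documentation) can be represented as a small scoring structure. The 0–3 scale and the maturity labels are assumptions for illustration, not the tool's actual internals:

```python
# Hypothetical dimensions matching the matrix sections above
DIMENSIONS = ("oversight", "monitoring", "documentation")

def governance_matrix(scores: dict[str, int]) -> dict[str, str]:
    """Map 0-3 maturity scores to coarse labels per dimension (a sketch).

    Missing dimensions default to 0; scores above 3 are capped.
    """
    labels = ["absent", "ad hoc", "defined", "managed"]
    return {d: labels[min(scores.get(d, 0), 3)] for d in DIMENSIONS}
```

Defaulting missing dimensions to "absent" rather than skipping them keeps the matrix complete, so gaps in the assessment are visible instead of silently dropped.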

What Is EU AI Act Risk Classification?

The EU AI Act sorts every AI system into four risk levels according to its potential for harm: unacceptable risk, high risk, limited risk, and minimal risk. Which level you fall into directly determines which legal obligations you carry.

The obligations for high-risk systems, entering into force in August 2026, cover technical documentation, human oversight, and conformity assessment. Misclassification creates both a compliance gap and unnecessary cost.

How Does This Tool Work?

The Risk Classifier asks questions about your system in six steps and determines your risk profile against the Annex III and Article 6 criteria of the EU AI Act.

The tool evaluates the provider and deployer roles separately. The results are indicative; they do not replace legal advice.

Who Uses It?

For SME owners and technical teams: learn which risk category your system falls into and which obligations take priority.

For compliance consultants: run client systems through a quick preliminary assessment and structure your findings. Any company operating in Türkiye or serving the EU market may fall within the scope of the EU AI Act.