EU AI Act Compliance Checklist
Track your compliance progress across all EU AI Act obligations. Which requirements apply, and in what priority, depends on your risk classification and your role (provider or deployer).
Unacceptable Risk
PROHIBITED
AI systems in this category cannot be placed on the EU market. Prohibited AI practices are banned under Article 5 of the EU AI Act, in force since 2 February 2025. Penalties may reach €35 million or 7% of global annual turnover, whichever is higher (Art. 99(3)).
Section 1: AI System Inventory & Registration
Foundational for all compliance levels
Have all AI systems in your organisation been identified and inventoried?
Art. 49
Is the purpose, scope, and intended use of each AI system documented?
Art. 9(2)(a)
Has each AI system been classified by risk level under the EU AI Act?
Art. 6, Annex I/III
Are AI components from third-party suppliers included in the inventory?
Art. 25
Have high-risk AI systems been registered in the EU database?
Art. 49(1)
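
None of the items above prescribe a format, but an inventory is easiest to audit when kept as structured data. Below is a minimal Python sketch of what an inventory record might capture; the field names and the unregistered_high_risk helper are illustrative assumptions, not anything mandated by the Act.

from dataclasses import dataclass, field

# Hypothetical inventory record -- fields are illustrative, not prescribed by the Act.
@dataclass
class AISystemRecord:
    name: str
    purpose: str                      # documented purpose and intended use
    risk_class: str                   # e.g. "prohibited", "high", "limited", "minimal"
    third_party_components: list[str] = field(default_factory=list)  # Art. 25 value chain
    eu_db_registered: bool = False    # Art. 49 registration status

def unregistered_high_risk(inventory: list[AISystemRecord]) -> list[str]:
    """Flag high-risk systems missing the Art. 49(1) EU database registration."""
    return [s.name for s in inventory if s.risk_class == "high" and not s.eu_db_registered]

inventory = [
    AISystemRecord("cv-screening", "rank job applicants", "high", ["vendor-llm"]),
    AISystemRecord("support-chatbot", "answer customer questions", "limited"),
]
print(unregistered_high_risk(inventory))  # -> ['cv-screening']

Keeping the registration status inside the record turns the Art. 49(1) gap check into a one-line query.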
Section 2: Prohibited Practices Screening
In force since 2 February 2025 — €35M or 7% turnover penalty
Have all AI systems been screened against the prohibited practices list?
Art. 5(1)(a)–(h) IN FORCE
Is social scoring or behaviour-based classification absent from all systems?
Art. 5(1)(c)
Are emotion recognition systems absent from workplace and education settings?
Art. 5(1)(f)
Is real-time remote biometric identification absent from public spaces?
Art. 5(1)(h)
Are subliminal manipulation and vulnerability exploitation techniques absent?
Art. 5(1)(a), (b)
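
Screening against Article 5 is ultimately a legal judgment, but a first pass can be automated by tagging each system's declared capabilities and intersecting them with the prohibited categories. A hypothetical sketch; the tag vocabulary is invented for illustration, and any hit still needs legal review:

# Simplified Art. 5 categories keyed by capability tags -- an illustrative
# vocabulary, not an official taxonomy; legal review must confirm any hit.
PROHIBITED_TAGS = {
    "social-scoring": "Art. 5(1)(c)",
    "workplace-emotion-recognition": "Art. 5(1)(f)",
    "realtime-public-biometric-id": "Art. 5(1)(h)",
    "subliminal-manipulation": "Art. 5(1)(a)",
    "vulnerability-exploitation": "Art. 5(1)(b)",
}

def screen(system_name: str, capability_tags: set[str]) -> list[str]:
    """Return Art. 5 provisions potentially triggered by a system's declared capabilities."""
    hits = sorted(capability_tags & PROHIBITED_TAGS.keys())
    return [f"{system_name}: {PROHIBITED_TAGS[t]}" for t in hits]

print(screen("hr-analytics", {"workplace-emotion-recognition", "cv-parsing"}))
# -> ['hr-analytics: Art. 5(1)(f)']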
Section 3: Risk Management System
Required for high-risk systems — Art. 9
Has a risk management system been established and maintained throughout the AI system lifecycle?
Art. 9(1)
Are known and reasonably foreseeable risks identified and analysed?
Art. 9(2)(a)
Are risks evaluated under the intended purpose and under conditions of reasonably foreseeable misuse?
Art. 9(2)(b)
Have appropriate and targeted risk management measures been adopted?
Art. 9(4)
Has residual risk been assessed, judged acceptable, and communicated to deployers?
Art. 9(4)
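
A common way to operationalise Art. 9 is a risk register with a documented acceptance threshold for residual risk. The sketch below assumes a simple severity-times-likelihood scale and an invented policy threshold; neither is prescribed by the Act.

from dataclasses import dataclass

@dataclass
class RiskEntry:
    description: str
    severity: int        # 1 (negligible) .. 5 (critical) -- scale is illustrative
    likelihood: int      # 1 (rare) .. 5 (frequent)
    mitigation: str
    residual_score: int  # severity x likelihood after mitigation

ACCEPTABLE_RESIDUAL = 6  # acceptance threshold -- an assumption set by your own risk policy

def unacceptable(register: list[RiskEntry]) -> list[str]:
    """Residual risks above the policy threshold that still need treatment or sign-off."""
    return [r.description for r in register if r.residual_score > ACCEPTABLE_RESIDUAL]

register = [
    RiskEntry("biased ranking of applicants", 4, 3, "bias testing + human review", 4),
    RiskEntry("model drift degrades accuracy", 3, 4, "monthly re-evaluation", 9),
]
print(unacceptable(register))  # -> ['model drift degrades accuracy']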
Section 4: Data Governance
Training data quality and bias prevention — Art. 10
Are training, validation, and testing datasets subject to appropriate data governance practices?
Art. 10(2)
Have datasets been examined for possible biases that could lead to discrimination? (see the sketch at the end of this section)
Art. 10(2)(f)
Are data quality criteria defined (relevance, representativeness, accuracy, completeness)?
Art. 10(3)
Has the origin and collection method of training data been documented?
Art. 10(2)(b)
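
As a concrete example of the bias examination under Art. 10(2)(f), one crude first-pass check is to compare positive-outcome rates across groups of a sensitive attribute. The field names below are assumptions; a real audit would go considerably further.

from collections import Counter

def selection_rates(records: list[dict]) -> dict[str, float]:
    """Positive-outcome rate per group for one sensitive attribute.
    A first-pass screen, not a full fairness audit."""
    totals, positives = Counter(), Counter()
    for r in records:
        g = r["group"]           # "group" and "label" are illustrative field names
        totals[g] += 1
        positives[g] += r["label"]
    return {g: positives[g] / totals[g] for g in totals}

data = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1}, {"group": "A", "label": 0},
    {"group": "B", "label": 0}, {"group": "B", "label": 0}, {"group": "B", "label": 1},
]
print(selection_rates(data))  # A ~0.67, B ~0.33: a gap worth investigating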
Section 5: Transparency & Documentation
Technical documentation for high-risk systems; disclosure obligations for all AI interactions
Has technical documentation been drawn up before the system is placed on the market?
Art. 11 + Annex IV
Is automatic logging of events enabled for the AI system's lifetime? (see the sketch at the end of this section)
Art. 12
Are instructions for use provided to deployers, including capabilities and limitations?
Art. 13
Are users informed when they interact with an AI system (chatbot disclosure)?
Art. 50(1) AUG 2026
Is AI-generated content (text, audio, image, video) clearly labelled as such?
Art. 50(2)
Are deep fakes and synthetic media disclosed to recipients?
Art. 50(4)
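
For the Art. 12 logging item, one minimal pattern is an append-only stream of timestamped, structured events. Art. 12 requires automatic recording of events but does not fix a format, so the event schema below is an illustrative assumption:

import json, logging, sys
from datetime import datetime, timezone

# Minimal structured event log -- one JSON object per line, timestamped in UTC.
logging.basicConfig(stream=sys.stdout, level=logging.INFO, format="%(message)s")
log = logging.getLogger("ai-system-events")

def record_event(event_type: str, **details):
    log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "event": event_type,
        **details,
    }))

record_event("inference", model="cv-ranker-v2", input_id="req-001", decision="shortlist")
record_event("override", operator="analyst-7", reason="manual review")  # human intervention is itself an event

In production the stream would go to an append-only store rather than stdout, so that the lifetime record cannot be edited after the fact.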
Section 6: Human Oversight & Robustness
Human-in-the-loop requirements — Art. 14–15
Is the AI system designed to allow effective human oversight during use?
Art. 14(1)
Can the human overseer fully understand the system's capabilities and limitations?
Art. 14(4)(a)
Can the human overseer decide not to use, override, or reverse the AI output?
Art. 14(4)(d)
Is there an ability to interrupt or stop the AI system (stop/kill switch)?
Art. 14(4)(e)
Does the system achieve appropriate levels of accuracy, robustness, and cybersecurity?
Art. 15(1)
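
The Art. 14 items translate naturally into an oversight wrapper around the model: the human can override the output (Art. 14(4)(d)) and halt the system entirely (Art. 14(4)(e)). The interface below is invented for illustration; the Act does not mandate a specific design.

# Sketch of an oversight wrapper: the overseer can approve, override, or stop.
class StopRequested(Exception):
    pass

class OverseenSystem:
    def __init__(self, model):
        self.model = model
        self.stopped = False   # Art. 14(4)(e): interrupt/stop capability

    def stop(self):
        self.stopped = True

    def decide(self, inputs, overseer_review):
        if self.stopped:
            raise StopRequested("system halted by human overseer")
        proposal = self.model(inputs)
        # Art. 14(4)(d): the human may disregard, override, or reverse the output.
        verdict = overseer_review(proposal)
        return verdict if verdict is not None else proposal

system = OverseenSystem(lambda x: "reject")
print(system.decide({"applicant": 42}, overseer_review=lambda p: "accept"))  # human override wins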
Section 7: Governance, Accountability & AI Literacy
Organisational governance and mandatory AI literacy
Has AI literacy training been provided to staff involved in AI operations?
Art. 4 IN FORCE
Has an AI governance owner or team been designated?
Best Practice — ISO/IEC 42001
Is there a documented AI policy covering acceptable use, ethics, and compliance?
ISO 42001
Has a quality management system been established?
Art. 17
Has a fundamental rights impact assessment (FRIA) been conducted where required?
Art. 27
Is a post-market monitoring system in place?
Art. 72
Is there a process for reporting serious incidents to authorities? (see the sketch at the end of this section)
Art. 73
Has a conformity assessment been carried out before the system is placed on the market or put into service?
Art. 43
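
For the incident-reporting item, it helps to record when the organisation became aware of an incident, since Art. 73 deadlines run from that moment and vary by incident type. The record below is a sketch; REPORTING_DAYS is a placeholder to verify against Art. 73 for your specific case.

from dataclasses import dataclass
from datetime import date, timedelta

# Art. 73 deadlines differ by incident type; this constant is an assumption to
# replace with the value that applies to your incident category.
REPORTING_DAYS = 15

@dataclass
class IncidentReport:
    system: str
    description: str
    became_aware: date

    def report_due(self) -> date:
        return self.became_aware + timedelta(days=REPORTING_DAYS)

incident = IncidentReport("cv-ranker-v2", "discriminatory outcome affecting applicants", date(2025, 3, 3))
print(incident.report_due())  # 2025-03-18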
Section 8: Deployer Obligations
Specific duties for organisations deploying high-risk AI systems — Art. 26
Is the high-risk AI system used in accordance with the provider's instructions for use?
Art. 26(1)
Is input data relevant and sufficiently representative for the system's intended purpose?
Art. 26(4)
Is the operation of the high-risk AI system monitored on the basis of instructions for use?
Art. 26(5)
Have affected employees and workers' representatives been informed about the AI system's use?
Art. 26(7)
Are logs generated by the high-risk AI system kept for a period appropriate to the intended purpose (minimum 6 months)? (see the sketch at the end of this section)
Art. 26(6)
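
The Art. 26(6) retention floor is easy to enforce mechanically: refuse to purge any log entry younger than six months. A minimal sketch, approximating six months as 183 days:

from datetime import datetime, timedelta, timezone
from typing import Optional

# Art. 26(6): deployers keep automatically generated logs for at least six months
# (longer if other law requires). 183 days is a conservative approximation.
RETENTION_FLOOR = timedelta(days=183)

def safe_to_delete(log_ts: datetime, now: Optional[datetime] = None) -> bool:
    """A log entry may be purged only once it is older than the retention floor."""
    now = now or datetime.now(timezone.utc)
    return now - log_ts >= RETENTION_FLOOR

ts = datetime(2025, 1, 10, tzinfo=timezone.utc)
print(safe_to_delete(ts, now=datetime(2025, 4, 1, tzinfo=timezone.utc)))  # False -- still inside the floor
print(safe_to_delete(ts, now=datetime(2025, 9, 1, tzinfo=timezone.utc)))  # True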

How to Use This Checklist

Mark each item with one of three statuses: completed, in progress, or not applicable. The list takes into account the eight Annex III use-case areas and the Act's staged enforcement timeline.

Haven't determined your risk level yet? Use the Risk Classifier tool first.

If you are deploying a high-risk system, a fundamental rights impact assessment may also be required. See the FRIA tool.
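
If you track the checklist in code rather than on this page, the three statuses above map onto a small enum; treating not-applicable items as outside the denominator keeps the completion figure honest. A sketch, with example item labels:

from enum import Enum

class Status(Enum):
    COMPLETED = "completed"
    IN_PROGRESS = "in progress"
    NOT_APPLICABLE = "not applicable"

def completion(items: dict[str, Status]) -> float:
    """Share of applicable items completed; N/A items drop out of the denominator."""
    applicable = [s for s in items.values() if s is not Status.NOT_APPLICABLE]
    if not applicable:
        return 1.0
    return sum(s is Status.COMPLETED for s in applicable) / len(applicable)

items = {
    "Art. 49 registration": Status.COMPLETED,
    "Art. 12 logging": Status.IN_PROGRESS,
    "Art. 26 deployer duties": Status.NOT_APPLICABLE,  # e.g. you are a provider only
}
print(f"{completion(items):.0%}")  # -> 50%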