August 2025. The most debated chapter of the EU AI Act entered into force: Articles 51–56 introduce rules for general-purpose AI (GPAI) models. Who do these rules cover, what do they require, and what do they mean for organisations operating in Turkey?
What Is GPAI and Why Does It Matter?
GPAI — General Purpose AI — refers to models like ChatGPT, Claude, Gemini, and Llama. Their defining characteristic is that they are designed for a wide range of applications, not a single task.
The EU AI Act treats these models as a distinct category because assessing their risk is far more complex. The risk profile of a recruitment system is relatively fixed; the risk profile of a language model that can operate in any context varies with deployment.
What Do Articles 51–56 Require?
These articles establish a two-tier obligation structure: baseline requirements for all GPAI models, and additional requirements for models that pose systemic risk.
For all GPAI models:
- Technical documentation is mandatory: how the model was trained, what data it uses, and its capabilities and limitations must be documented in writing.
- A copyright policy must be in place: compliance with EU copyright law regarding training data must be demonstrated.
- A model card must be published: transparent information that allows users to evaluate the model properly.
For models with systemic risk:
- Threshold: models trained with more than 10²⁵ FLOPs of cumulative compute fall into this category, a level that today only the most capable models reach.
- Regular risk assessments and cybersecurity testing are mandatory.
- Serious incidents must be reported to the European AI Office.
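The systemic-risk threshold above can be sanity-checked with the widely used rule of thumb that training a dense transformer costs roughly 6 × parameters × training tokens FLOPs. A minimal sketch, with entirely hypothetical model figures:

```python
# Rough check of whether a training run crosses the EU AI Act's
# 10^25 FLOP systemic-risk threshold (Art. 51, Regulation (EU) 2024/1689),
# using the common "compute ~ 6 * params * tokens" approximation.
# The model sizes below are hypothetical examples, not real models.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25


def estimated_training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * params * tokens


def exceeds_threshold(params: float, tokens: float) -> bool:
    """True if the estimated compute is above the systemic-risk threshold."""
    return estimated_training_flops(params, tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS


# A hypothetical 70B-parameter model trained on 15T tokens:
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e}", exceeds_threshold(70e9, 15e12))  # ~6.3e24, below threshold
```

The estimate is deliberately coarse: the Act counts cumulative compute, and the European Commission can revise the threshold over time, so any such calculation is a first screen rather than a legal determination.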
Which Organisations Are Affected?
A critical distinction applies here: model developers versus model users. Providers of GPAI models bear the documentation and transparency obligations directly. But organisations that build products on top of these models are not off the hook: they carry their own obligations for the systems they deploy, and one that substantially modifies or fine-tunes a model can itself come to be treated as a provider.
In short: even if you did not develop the model, what you build on top of it remains your responsibility.
What Changed in August 2025, and What Remains Unclear?
What has settled:
- Technical documentation and model card requirements are active
- The European AI Office has begun exercising its supervisory authority
- Major model providers have started disclosing their compliance processes
What remains open:
- How the systemic risk threshold will be updated over time has not been settled
- How open-source models will be assessed remains contested
- Third-party audit mechanisms are not yet mature
Hexis Perspective
Articles 51–56 are typically read as a technical compliance exercise: produce documentation, publish a model card, clear the threshold or don't. That reading is incomplete.
These articles are asking a more fundamental question: what is underneath your system, and do you actually know?
If a company uses a GPAI-based product, it cannot establish real human oversight without understanding the model's limitations, training data policy, and potential failure modes. Documentation is not the end state — it is the starting point.
Hexis treats these obligations as orientation, not endpoints. The rules will change; the underlying question will not: do you understand your system well enough to govern it?
Conclusion
The GPAI rules demonstrate why AI governance cannot be limited to high-risk systems alone. The foundation models you build on also require a governance framework.
The starting point: request documentation for the GPAI-based tools you deploy, and document what your own system does on top of that model.
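That starting point can be made concrete with a simple internal record per GPAI-based tool: what documentation you have received from the provider, and what your own system adds on top. A minimal sketch; the field names here are our own illustrative choices, not terms from the Act:

```python
from dataclasses import dataclass, field

# Illustrative sketch of an internal inventory record for a GPAI-based
# tool: tracks which provider documents have been received and what the
# deploying organisation has documented about its own use.


@dataclass
class GpaiToolRecord:
    tool_name: str
    provider: str
    model_card_received: bool = False
    technical_docs_received: bool = False
    copyright_policy_received: bool = False
    our_use_case: str = ""
    known_limitations: list[str] = field(default_factory=list)

    def documentation_gaps(self) -> list[str]:
        """List the provider documents still missing for this tool."""
        gaps = []
        if not self.model_card_received:
            gaps.append("model card")
        if not self.technical_docs_received:
            gaps.append("technical documentation")
        if not self.copyright_policy_received:
            gaps.append("copyright policy")
        return gaps


record = GpaiToolRecord(
    tool_name="support-chatbot",
    provider="ExampleAI",  # hypothetical provider name
    model_card_received=True,
)
print(record.documentation_gaps())  # ['technical documentation', 'copyright policy']
```

Even a record this small forces the right questions: it is immediately visible which tools are running without a model card or copyright policy on file.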
Once you have determined whether you fall within the GPAI scope, use the EU AI Act Compliance Checklist to track your obligations step by step.
Note: This article is based on information available as of February 2026. Article references are to Regulation (EU) 2024/1689 (EU AI Act). This article does not constitute legal advice.