Artificial intelligence (AI) is taking the world by storm, prompting new legislation and classification schemes, including the EU AI Act risk categories. AI has gone from a novel technology to an everyday tool, making it important to address, particularly in the context of corporate operations.
Additionally, AI’s capacity for self-learning and its borderless nature challenge governments, businesses and regulators to contain it. In March 2024, the European Union approved a first-of-its-kind act to regulate AI systems according to the risks they pose. Many nations around the world, including South Africa, are expected to follow its example. To stay ahead of the curve, it’s worth exploring what South African organisations can learn from this landmark act and the risk categories it introduces:
- Risk categories in the EU AI Act
- Details of the EU AI Act risk categories
- The distinction between AI systems deployers and providers
- Why it’s important in South Africa
The state of the EU AI Act
At its core, the EU AI Act takes a risk-based approach, categorising AI systems according to the harm they may cause, hence the importance of the EU AI Act risk categories. The Act passed in early 2024 and came into force in August 2024, with the first obligations, including the ban on prohibited practices, applying from February 2025 and further requirements phased in over the following years. That makes the final months of 2024 a high-stakes period for those who provide and deploy AI. Not only does this have implications for South African firms that offer AI services within the EU, but it is also likely to shape South Africa’s own approach to legislation.
The EU AI Act applies to many different entities but is of particular concern for:
- Providers: Also called developers, these entities place AI systems or general-purpose AI models on the EU market, whether the provider is based in the EU or elsewhere.
- Deployers: Also called users, these are entities located or operating in the EU that utilise AI systems as part of their business offering.
The 4 EU AI Act risk categories
The EU AI Act risk categories classify AI systems into four risk levels based on their intended purpose and potential for harm. Understanding what each level entails and which systems fall under it is essential to compliance and risk management:
- Unacceptable risk: AI systems such as social scoring and biometric categorisation pose a clear threat to people’s safety, livelihoods and rights. The Act prohibits such systems outright, including systems that manipulate human behaviour through subliminal techniques and emotion recognition in workplaces and educational settings.
- High risk: AI is also used in critical infrastructure such as transport, as well as in education and vocational training, safety components of products, employment and worker management, and more. Systems in this category have the greatest potential for good, but also a significant risk of misuse. The Act imposes its most rigorous rules on high-risk systems. (Read more about these systems below.)
- Limited risk: Many AI systems interact with individuals or generate content that can be deceptive without context or appropriate notice of AI involvement. Considered low risk, these systems are subject to information and transparency requirements so that people know when they are interacting with AI. Providers must enable the marking and detection of the system’s output, while deployers must disclose content that has been artificially generated or manipulated (a brief sketch of what such labelling might look like follows this list).
- Minimal or no risk: This EU AI Act risk category covers systems such as AI-enabled video games and spam filters, which are considered essentially risk-free. Providers and deployers can use them freely but are encouraged to adopt a voluntary code of conduct. Existing data protection regulations still apply.
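To make the limited-risk transparency idea concrete, here is a minimal, purely illustrative sketch of how a team might attach a machine-readable disclosure label to AI-generated text before it is published. The function and field names are assumptions for illustration only; they are not taken from the Act or from any specific library, and real implementations will vary.

```python
# Illustrative sketch only: attach a disclosure label to AI-generated text.
# Names (DisclosedContent, label_ai_output) are hypothetical, not from the Act.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class DisclosedContent:
    text: str
    ai_generated: bool
    model_name: str
    generated_at: str


def label_ai_output(text: str, model_name: str) -> DisclosedContent:
    """Wrap model output with machine-readable disclosure metadata."""
    return DisclosedContent(
        text=text,
        ai_generated=True,
        model_name=model_name,
        generated_at=datetime.now(timezone.utc).isoformat(),
    )


if __name__ == "__main__":
    draft = label_ai_output("Welcome to our service!", model_name="example-chatbot")
    # Downstream systems can check the flag before publishing or displaying content.
    print(draft.ai_generated, draft.model_name)
```

The point of the sketch is simply that disclosure works best when it travels with the content as structured metadata, rather than relying on people to remember to add a notice manually.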
General-purpose AI models and systems
Tools like ChatGPT are among the best-known AI systems on the market. Which risk category applies to these general-purpose AI (GPAI) systems?
The Act recognises that GPAI models operate more broadly and aren’t designed for the specific purposes outlined in each risk category. This makes them applicable to a wide range of use cases, but it also makes them hard to classify.
Instead of pinning GPAI to a single risk category, the Act applies transparency requirements to all GPAI providers and designates the most capable models as posing systemic risk, meaning their use carries far-reaching risks. Such models face additional risk management, reporting and surveillance obligations. It is also worth noting that GPAI providers can be subject to some of the same obligations as high-risk systems if their models are used in high-risk applications.

Key obligations of the EU AI Act high-risk category
High risk is the most heavily regulated of the EU AI Act risk categories. As such, providers and deployers handling high-risk systems must understand and comply with the requirements of the Act, specifically those detailed in Chapter III, Section 2.
For providers
- Risk management: Providers must establish, document and maintain a continuous risk management system. The system must identify the risks of using the AI systems as intended, develop and test risk mitigation measures and ensure that any remaining risk is acceptable. These approaches should pay particular attention to the impact of the AI system on persons under 18 or other vulnerable groups.
- Data and data governance: AI systems cannot exist without data, so data protection and management must be built in to avoid risk. Entities developing high-risk AI systems must implement data governance practices covering system design, data collection and preparation, data suitability, measures to prevent bias, data gap analysis and safeguards for data privacy.
- Technical documentation: National authorities need specific documentation to evaluate a system’s compliance with the Act. Providers should create and continuously update technical documentation detailing the system’s elements and development process, the training data sets used, how the system is monitored and controlled, and the cybersecurity measures in place.
- Record keeping: Providers should design high-risk AI systems to automatically record events (logs) throughout their lifecycle to enable post-market surveillance (a minimal logging sketch follows this list).
- Transparency: The Act requires providers to avoid black-box AI and design systems whose outputs and operation can be interpreted. This includes providing detailed instructions about the system’s intended use, its accuracy and any known or anticipated risks to human health or safety.
- Human oversight: Mandating human oversight is central to the regulation. Providers must design high-risk systems to permit human intervention while minimising risks to health and safety. The level of oversight should match the system’s degree of autonomy and its context of use: the riskier it is for the system to act on its own, the more human oversight it requires.
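As a concrete illustration of the record-keeping obligation above, the following sketch shows one simple way an AI system could append timestamped events to a structured log for later review. The event fields, event types and the JSON Lines file destination are assumptions chosen for illustration; they are not requirements quoted from the Act, and production systems would typically use more robust storage and retention controls.

```python
# Illustrative sketch only: a minimal structured event log for an AI system.
# Field names, event types and the log destination are assumptions, not
# requirements taken verbatim from the EU AI Act.
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_system_events.jsonl")  # hypothetical log destination


def record_event(event_type: str, details: dict) -> None:
    """Append a timestamped event record in JSON Lines format."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,  # e.g. "inference", "override", "error"
        "details": details,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


if __name__ == "__main__":
    record_event("inference", {"input_id": "req-001", "risk_flag": False})
    record_event("override", {"input_id": "req-001", "operator": "human-reviewer"})
```

An append-only, timestamped format like this makes it easier to reconstruct how a system behaved over its lifecycle, which is the practical purpose behind the post-market surveillance requirement.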
For deployers
Deployers’ obligations under the Act are tied to the instructions of providers. The provider of an AI system must equip deployers with comprehensive instructions for using the system safely and responsibly. Deployers must adopt appropriate measures following those instructions — a burden general counsel can help navigate.
Any deployer that deviates from the provider’s instructions, whether through a new use case or a change to the system, can then be classified as a provider, which comes with steeper regulatory requirements.
Under the Act, deployers are responsible for:
- Ensuring adequate AI awareness
- Due diligence
- Performing a fundamental rights impact assessment (FRIA)
- Ensuring compliance and surveillance according to the provider’s instructions
- Human oversight by natural persons
- Transparency and information
- Record-keeping and logs
- Incident reporting
- Cooperation with authorities
While using AI systems, deployers must still comply with existing EU and member state laws, such as the GDPR. That includes completing a GDPR data protection impact assessment (DPIA) where required, recognising that AI can involve automated decision-making and high volumes of personal data.
What does this mean for South Africa?
In recent years, risk and compliance have shifted from a departmental obligation to an organisation-wide responsibility, and AI is no different. The EU AI Act risk categories highlight the importance of a proactive approach to protecting human health, safety and fundamental rights, one that keeps pace with AI’s rapid growth.
With AI now ubiquitous and regulation growing around it, South African legislation is bound to ramp up, and quickly. So what lessons can South African organisations take from this development?
Compliance with legislation like the EU AI Act isn’t about cutting corners or acting only to avoid risk. With the right toolkit, providers, deployers and other entities can stay ahead.