How does the EU AI Act affect my business?
A guide to this important piece of European legislation, specifically from a corporate governance perspective.
You might have read a lot about the EU’s AI Act, and perhaps a few guides on how it will affect businesses. Here we zoom in exclusively on the boardroom, because the decisions you have to make there are unique and far-reaching, especially when it comes to something like AI.
To review: the AI Act entered into force on 1st August 2024 and will be fully applicable by 2nd August 2026, with certain obligations for high-risk systems phased in until 2027.
Its provisions apply to:
- Providers: Companies that develop an AI system or a general-purpose AI model and place it on the market under their own name.
- Deployers: Organisations using AI systems under their authority (you and your business are most likely to fall into this category).
- Importers and distributors: Any entity bringing AI systems into the EU market.
- Extra-territorial entities: Yes, you could be on the hook even if your company is headquartered outside the EU. All that is needed is for your company’s AI output to be used inside the EU in some way.
5 Ways the AI Act reshapes the boardroom
There’s no question that directors have a crucial role to play in their company’s use of AI. Anyone who thought they might have been able to leave it to the IT team is mistaken.
The problem is finding answers that aren’t aimed primarily at IT teams, or even at management. You need dedicated guidance just for directors.
So, here are the 5 ways the AI Act will reshape boardrooms that fall under its remit.
Can you explain?
Perhaps the most important aspect for boards: the Act requires you to be able to explain how the AI systems you use operate. You don’t need endless technical detail, but you do need to know the basics: where the system gets its training data, how it is used, and whether any flaws have surfaced. The board is the last stop for decision-making, and you simply cannot approve an AI system that you and your boardroom colleagues don’t understand.
The risk list
Under the Act, AI risk is divided into four categories.
- Minimal risk – Like AI-enabled video games or spam filters
- Limited risk – Like generative AI, chatbots and similar tools
- High risk – Like AI embedded in critical infrastructure, public services, education or insurance
- Unacceptable risk – Like social scoring, real-time facial recognition in public spaces, or manipulative systems
Much like the requirement to explain, the Act also requires you to know where every AI system you use fits on this list. If it’s high risk, for example, your board has to ensure there is a robust Quality Management System (QMS) and a continuous risk assessment process. Like other elements of risk, failing to oversee these systems could be seen as a breach of your fiduciary duty, and land you in deep trouble with stakeholders as well as regulators.
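To make that concrete, here is a minimal sketch of the kind of internal AI register a governance team might keep. The four tier names come from the Act; everything else (the example systems, the fields, the `needs_qms` helper) is hypothetical:

```python
# Hypothetical AI-system register: the four tier names follow the Act,
# but the systems, fields and helper below are illustrative only.
RISK_TIERS = ("minimal", "limited", "high", "unacceptable")

ai_register = [
    {"system": "spam filter",       "tier": "minimal"},
    {"system": "customer chatbot",  "tier": "limited"},
    {"system": "CV-screening tool", "tier": "high"},
]

def needs_qms(entry: dict) -> bool:
    """High-risk systems require a Quality Management System (QMS)
    and a continuous risk assessment process under the Act."""
    assert entry["tier"] in RISK_TIERS
    return entry["tier"] == "high"

for entry in ai_register:
    if entry["tier"] == "unacceptable":
        print(f"{entry['system']}: prohibited – must not be deployed")
    elif needs_qms(entry):
        print(f"{entry['system']}: high risk – needs QMS and ongoing risk assessment")
```

However your company records it, the point stands: classification is a board-level input, not an IT footnote.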
Your supply chain
You may have noticed how much new regulation focuses not just on your company’s activities, but on those of its supply chain too. In the eyes of the law, what your providers do increasingly counts as part of your own conduct.
AI is no different.
In general, the Act places significant “deployer” obligations on you and your board. Directors must oversee due diligence on any AI vendors they interact with. If a third-party tool you use for hiring is found to be biased, your company (and its reputation) will be on the front line of the fallout.
The financial stakes
You’ve probably seen it with GDPR: the EU likes to impose hefty fines for non-compliance. It might take a while to reach a final verdict, but once it arrives, and it’s bad news, your company could pay dearly.
The penalties for non-compliance are eye-watering, designed to be even more punitive than GDPR. Fines can reach up to €35 million or 7% of total worldwide annual turnover, whichever is higher. As a director, you must weigh the cost of compliance against a potential fine that could wipe out a year’s profit or cause a catastrophic drop in share price.
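To put that arithmetic in perspective, here is a minimal sketch of the headline cap for the most serious infringements (illustrative figures only, not legal advice):

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Headline cap for the most serious AI Act infringements:
    EUR 35 million or 7% of total worldwide annual turnover,
    whichever is higher (illustrative, not legal advice)."""
    return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

# Illustrative: a company with EUR 2 billion in worldwide turnover
print(f"EUR {max_fine_eur(2_000_000_000):,.0f}")  # EUR 140,000,000 – the 7% limb dominates
```

Note that for any business turning over more than EUR 500 million, the percentage limb, not the fixed amount, sets the ceiling.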
Maintaining the ethics
The board has a strong role to play in setting the tone for the whole company. Its principles become staff principles, which filter through to shape the business’s internal and external priorities.
AI can challenge that norm because its potential is so broad, but you still need to hold to your principles in its use. That means considering all of the above in the context of ethics. What are you okay with? What aren’t you okay with? What’s your reasoning? How will you apply your principles in real-world AI scenarios?
Unfortunately, there’s no easy answer to any of these questions, especially as directors continue to learn about AI’s inner workings and applicability. However, these questions remain vital if you are to harness AI properly.
The AI literacy gap: Why training is your first line of defence
Article 4 of the AI Act explicitly requires organisations to ensure their people possess a sufficient level of AI literacy.
While you can’t close the gap overnight, it is imperative that you put the work in now to remain in control of what will be one of the most defining innovations in business.
For boards, dedicated AI governance training means training focused on the risks and opportunities, and on the questions to ask around the boardroom table that make a real difference to day-to-day operations.
Without this foundational knowledge, the board cannot provide the “effective oversight” that regulators now expect.