News analysis

The EU’s AI Act: Boards should feel the urgency

by Dan Byrne

The EU’s AI Act is a first-of-its-kind regulation governing one of the decade’s biggest changes in business, and directors should feel the urgency.

AI is rapidly transforming the way many companies do business. If you want proof, just look at the EU’s new law governing the subject. 

The bloc was slow to move on this legislation just a few years ago, but that was before ChatGPT exploded onto the scene. Now the pace of AI-driven change is staggering, and lawmakers are rushing to bring some structure to this evolving landscape. 

European directors should feel that urgency, and fast. They face much stricter responsibilities around AI, and the latest data suggests they aren’t yet fully prepared for the challenge.

Quick recap: what are the basics of the EU’s AI Act?

“We have the first regulation in the world that clearly sets a path for a safe and human-centric development of AI,” said Brando Benifei, co-rapporteur for the new legislation, which was passed by the European Parliament this week and is likely to enter into force in May. 

The act takes a risk-based approach, concentrating compliance obligations where the risks are highest. Through this lens, AI tools are divided into four groups.

  • Minimal-risk systems (like spam filters or AI-based recommender systems) will remain unregulated. 
  • High-risk systems (such as medical devices and systems used in immigration, education access, and law enforcement) will require a heavy degree of control and reporting. The companies responsible must document their datasets, define human oversight measures, and set out their cybersecurity safeguards. 
  • Specific transparency risk systems will need to be flagged as such. In other words, people must be told when they are talking to a machine (like a chatbot), are subject to biometric readings, or are consuming AI-generated content. 
  • Unacceptable-risk systems will be banned outright. These include predictive policing and social scoring systems. Police will not be allowed to use real-time facial recognition unless they are searching for an individual in connection with a serious crime.

Most companies that use AI or generate AI content will be chiefly concerned with the “high-risk” and “specific transparency risk” categories.


Where do ChatGPT and other chatbots/language models sit?

ChatGPT will not be classified as high-risk, according to the European Parliament, but it must comply with crucial parts of the legislation: 

  • It needs to state that its content is AI-generated.
  • It cannot generate illegal content.
  • It must respect copyright in the content it generates.

What are the penalties?

In the most severe cases, penalties can reach €35 million or 7% of a company’s global turnover (not just EU turnover), whichever is higher. For a company with €1 billion in worldwide turnover, for example, that would cap fines at €70 million.

Is this a problem for directors?

In the short term, it could be.

Recent research from The Corporate Governance Institute and Board Intelligence has revealed a notable lack of readiness among boards regarding digital tools and AI. 

The data from mid-2023 suggested that only a fifth of directors were examining the potential and impact of AI tools on their work. Among the other main findings: 60% of directors felt they had received insufficient cyber resilience training in the past year, and 82% agreed they needed to make greater use of technology to improve boardroom performance. 

This is concerning, especially when far-reaching laws like the AI Act are passed, because it suggests directors have serious gaps in the readiness and confidence needed to oversee compliance.

Is compliance with this act really a director’s responsibility?

Every aspect of governance and compliance is a director’s responsibility. It’s not limited to AI. 

While directors might not be involved in day-to-day AI work, they will undoubtedly be responsible for strategising and making decisions about AI. 

If your company uses, or plans to use, AI, it’s the board’s job to ensure this is done right. The EU law adds a raft of tangible tasks to that job. 

You need to ensure that your company is complying with the act and integrating it into long-term strategy while still acting in the best interests of company stakeholders.

It’s a tricky balance to achieve and impossible without the proper knowledge and expertise in the boardroom. 

That’s why your board should feel the urgency now. Seek dedicated governance training on the subject or the right external advice. At the very least, ensure that key members around the boardroom table have enough information to manage AI reporting and decision-making. 

This is what will separate strong AI strategies from weak ones, especially now that governments are getting serious about specifics, reporting, and metrics.

How much time does my company have?

Not a lot, and that’s the other significant factor. 

Bans on unacceptable-risk systems will come into force this year. Rules for general-purpose chatbots and similar systems will apply from mid-2025. Everything else will begin to apply in 2026.

In summary

The EU’s AI Act is a major step in regulating artificial intelligence. In some ways, it’s remarkable that the law was passed with such speed, but given the rapid surge in AI adoption this decade, swift legislation was inevitable. 

For directors who haven’t already started, the time to prepare is now. Any European company that uses AI has a brand-new, thorough rulebook to live by, and if your board doesn’t have the expertise to oversee it, don’t wait.
