When AI reached the board agenda
When AI reached the board agenda: expert insights from Boglarka Radi on how we’ve seen artificial intelligence make the rapid leap to centre stage in the boardroom.
As a company secretary, I think the most telling developments around artificial intelligence are no longer found in technology briefings. In my experience, they are now surfacing in board papers, risk registers, audit committee discussions and questions about assurance. AI is beginning to shape everyday decisions across businesses, often in ways that are not immediately visible at board level. That shift changes the nature of the conversation. The board is rarely asked whether AI is interesting; it is asked whether AI is controlled.
No more technical isolation for AI
For company secretaries, this places the function at the centre of the story. It doesn’t mean secretaries need to be technical experts, but they do need to be architects of assurance: the ones ensuring that AI shows up on agendas, is reflected in control frameworks, is captured in minutes and is owned clearly across committees. In that sense, the arrival of AI is not rewriting governance so much as testing whether governance is doing what it has always claimed to do.
This is one of the areas where the governance lens becomes decisive. When AI begins to influence material decisions, the question is no longer how innovative the tool is but whether the organisation can explain who owns it, how it is monitored and what happens when it fails. For company secretaries, this means ensuring AI is visible in the right places rather than living in technical isolation. AI belongs in enterprise risk discussions when it affects judgment, in internal control conversations when it touches reporting or compliance, and in culture debates when fairness, accountability and transparency are at stake.
Until those governance foundations are properly established, regulatory exposure will continue to grow.
AI and UK boards in 2026
In the United Kingdom, accountability is being delivered through corporate governance reform rather than a single AI statute. The Financial Reporting Council’s UK Corporate Governance Code 2024 strengthened expectations around risk management and internal controls. Provision 29 requires boards to make a declaration on the effectiveness of material controls and places clear responsibility for that framework at board level. For premium listed companies, this becomes operational through the 2026 reporting cycle. If AI plays a role in financial processes, decision support or regulatory compliance, the question becomes unavoidable: does it fall within the control environment, and can the board evidence assurance over it?

Alongside this, the Data Use and Access Act 2025 received Royal Assent on 19 June 2025, with phased implementation running into June 2026. For boards, this reinforces an uncomfortable truth: many AI failures stem from basic governance weaknesses, such as poor data lineage, unclear purpose, inadequate access controls and monitoring gaps, not from the algorithms themselves. These are traditional company secretariat concerns now playing out in a new technical setting.
AI and the EU
In Europe, the EU Artificial Intelligence Act has shifted the discussion from voluntary principles to statutory obligations. The European Commission has confirmed that the legislation is now in force, with prohibitions and AI literacy duties applying from 2 February 2025 and the core regime for high-risk systems scheduled to apply from 2 August 2026. That timetable matters because it effectively makes 2026 a board readiness deadline. It is no longer enough to know where AI sits in the business; boards will need to demonstrate how it is governed.
AI across the Atlantic
Across the Atlantic, the same accountability narrative is emerging through disclosure rather than prescriptive design rules. In December 2025, a US Securities and Exchange Commission speech highlighted calls for companies to clarify how they define AI and whether boards oversee its deployment. Different jurisdictions are choosing different regulatory tools, but the direction is consistent: AI is being pulled into the familiar territory of oversight, disclosure and demonstrable governance.
By 2026, many boards are likely to discover that AI oversight has become less about experimentation and more about evidence. Regulators are converging on a straightforward expectation. If AI shapes outcomes, the board should be able to explain how it is controlled, monitored and challenged.
