
AI: Not just another IT project

The myth that misframes AI governance

When a disruptive technology arrives, the reflex question in many boardrooms is: “Isn’t this just another IT challenge?”

It’s the wrong question, and a dangerous one. It’s the equivalent of asking whether the internet was just another phone line. Every such revolution began as a technical curiosity and ended up redrawing the map of business, regulation, and trust.

Artificial intelligence now faces the same risk of misframing. 

Beyond technology management

It is tempting to delegate AI oversight to the IT department. After all, AI systems are technical in nature, and CIOs and CTOs have long managed infrastructure, data, and cybersecurity. Yet AI is different. No IT system has ever been able to flood markets with convincing fakes, embed invisible discrimination into everyday decisions, or corrode trust between a company and its regulators – all before the board has even had time to notice.

AI is not simply a tool; it is a strategic capability that is changing how organisations discover value, make decisions, and evidence accountability. Boards that see it only as a technical upgrade are narrowing their field of vision precisely when it needs to widen. The challenge is not how to install AI, but how to govern it – ethically, strategically, and at speed.


A lesson from the internet

In the 1990s, the internet was dismissed as a sideshow – a toy for hobbyists, or a tool for back-office efficiency. Within a decade, it rewired the global economy. Today, it underpins every aspect of business: supply chains, marketing, compliance, workforce management, financial reporting. Transformation came in waves, often faster and more profoundly than expected.

AI is following a similar arc, but at an accelerated pace. Unlike the internet, whose adoption stretched over decades, AI is embedding itself across industries in real time. From automated lending in finance to generative design in manufacturing and personalised retail, the same pattern is emerging – only this time the adoption curve is measured in months, not years.

Those who mistake AI for an IT project will spend the next decade chasing technology, trust, and competitive relevance, rather than seizing the opportunities AI presents.

A governance challenge, not an IT one

AI carries risks that cannot be firewalled within technology teams. Consider:

  • Legal and regulatory exposure: the EU AI Act carries fines of up to €35 million or 7% of global annual turnover – sums that can run into the billions for the largest firms – with liability pointing directly at boards.
  • Ethical and reputational stakes: AI is already shaping how people are hired, assessed, insured, and even judged. Boards that govern its use wisely protect not just their company’s reputation but the dignity and fairness of the systems they unleash.
  • Financial implications: poorly governed AI can misallocate capital, misprice risk, or mislead investors. But good governance is not just a defensive shield – it’s an enabler of smarter, faster, and more credible decision-making. Boards that set clear oversight for data integrity, model accountability, and ethical deployment often see stronger ROI: capital directed where it matters most, risk priced with greater precision, and innovation aligned with strategy rather than accident.
  • Strategic risk: competitors who govern AI responsibly will move faster into new markets and revenue streams. Governance, in this sense, is not bureaucracy; it’s the mechanism that keeps ambition tethered to reality and turns experimentation into enterprise value.

How these risks are playing out

In 2024, the U.S. Securities and Exchange Commission fined two investment advisers for “AI washing” – exaggerating their use of artificial intelligence in marketing to clients. The penalties were modest ($175,000 and $225,000), but the signal was not: boards are accountable when claims about AI mislead investors or regulators. These were the SEC’s first enforcement actions over AI washing, and they will not be the last.

The difference is not academic. A board that leaves AI to the CIO risks overlooking how it reshapes workforce planning, investor relations, customer trust, and corporate reporting. Delegating oversight solely to IT is not prudence; it is abdication. Boards must mandate a cross-functional AI control plane spanning Legal, Risk, Product, Data, Security, HR, and Operations. This ensures that governance, ethics, and accountability are embedded across every dimension of AI deployment – not left to technology teams alone.

Looking ahead

If the first myth was that AI is irrelevant, the second is that AI is simply an IT project. Both invite complacency. Boards that see AI only through the lens of systems management are missing the wider reality: it is a systemic shift with profound legal, ethical, financial, and strategic implications.

Boards that act early, however, can seize opportunity: strengthening investor confidence, differentiating with trusted products, and shaping markets before rules harden.

In the next article, we will examine a third and equally dangerous myth: that boards can afford to wait until regulation provides clarity. By then, catching up won’t just be costly – it will be too late.

About the Authors:

Paul Johnston | Researcher, Centre for AI in Board Effectiveness and Associate Director, One Advisory

Jamie Bykov-Brett | AI Consultant and Co-Founder of the Executive AI Institute

Alan Hewitt | AI Consultant and former IBM Partner

Insights on leadership

Want more insights like this? Sign up for our newsletter and receive weekly insights into the vibrant worlds of corporate governance and business leadership. Stay relevant. Keep informed.
