Navigating the AI frontier: what Boards must know right now


Since 2022 we’ve been campaigning for artificial intelligence to be treated as a central boardroom issue, one demanding the attention of directors and executives. For too long, the conversation around AI has been siloed in IT departments, but its profound impact on business strategy and operations makes it a core component of corporate governance. As a recent Nasdaq article highlights, boards are increasingly expected to oversee the technologies shaping their business, making AI fluency a crucial competency.

The push to get AI on the board agenda is more than a passing trend. A Deloitte survey, “Governance of AI: A Critical Imperative for Today’s Boards,” found that 40% of respondents are rethinking their board’s makeup because of AI. This is a significant signal that organisations recognise the need for directors with a deeper understanding of technology and data.

The opportunities AI presents are vast and tangible, capable of transforming a business from the inside out. It can start with something as simple as improving operational efficiency by automating mundane administrative tasks, freeing up valuable time for directors to focus on what truly matters: strategy. This leads directly to enhanced decision-making, as AI gives boards deeper, data-driven insights. It is also a powerful tool for strategic foresight, helping boards not just react to the market but anticipate it. All of this culminates in improved governance: AI tools can enhance collaboration, speed up processes, and increase transparency, leading to more robust oversight.

While the benefits are clear, the risks associated with AI require careful and considered management. Neglecting these can lead to significant financial, legal, and reputational damage.

  • Data Privacy and Security: AI systems rely on vast amounts of data, making data protection a critical concern. Boards must ensure that robust cybersecurity measures are in place to prevent breaches and that data is handled in a manner compliant with regulations like GDPR.
  • Bias and Fairness: AI algorithms are trained on data, and if that data is biased, the resulting decisions will be too. As a case in point, Workday faced a class action lawsuit for alleged bias in its AI-based job screening algorithm. Boards have a fiduciary duty to ensure the ethical deployment of AI and to establish protocols for auditing models to prevent discriminatory outcomes.
  • Lack of Transparency: Many AI models, particularly deep learning networks, are “black boxes,” making it difficult to understand how they arrive at a conclusion. This lack of explainability poses a challenge for accountability and trust.
  • Regulatory Compliance: The regulatory landscape for AI is evolving rapidly, with frameworks like the EU AI Act setting new global standards. Boards must stay informed and ensure their organisations are compliant to avoid hefty fines and legal issues.

AI governance is a new frontier, and it requires a proactive approach from leadership. Directors do not need to be technical experts; they need the curiosity and strategic understanding to ask the right questions. The key is to embed AI governance into the company’s overall strategy and risk management framework.

To assist your board in asking the right questions and thinking about the challenges of governance today, download our Questions for Leaders:
