Key takeaways from our July 2025 panel event hosted by Norman Broadbent Plc and the FT Board Director Programme.
Whether we’re ready for it or not, Artificial Intelligence is reshaping industries. Yet for many boards, AI remains an opaque and complex topic, laden with risk, hype, and opportunity in equal measure.
Norman Broadbent and the FT Board Director Programme brought together a distinguished panel of leaders to explore how boards can - and must - respond to the growing demands of AI oversight. The session was structured around three core themes:
The Panel:
Key Takeaways
The panel was unanimous: boards must approach AI as a strategic, not technical, imperative. Sanjeevan Bala opened by urging boards to anchor discussions in business outcomes, not technical novelty. “Start with the value narrative,” he advised. “AI is just a lever. The real question is: what are you trying to achieve?”
Susan Hooper echoed this, warning against tech-driven panic: “There’s a fear of falling behind. But AI isn’t always the right solution. Focus on solving the business issue first.”
Rather than being at odds, innovation and governance should work in concert. Susan and Shefaly Yogendra encouraged boards to bake in governance and ethical considerations from the start and not bolt them on as afterthoughts. “Design for governance,” said Susan, “don’t build and then figure out how to govern it.”
The panel also stressed the need for boards to engage in scenario planning and ‘war gaming’ to explore possible risks, consequences, and ethical dilemmas.
One consistent message: board members don’t need to be AI experts, but they do need to ask smart, strategic questions. Shefaly noted that literacy, not fluency, is the goal: “You don’t need to know how to build the engine, but you do need to understand where it’s taking the business.”
Susan added: “If AI is being used to solve a business issue, the person accountable for that issue should be the one presenting - not just the technologist.”
This built on a question posed by Tanya Gass, who asked which roles NEDs should expect to see presenting to the board on AI. The panel agreed that AI responsibility should not sit solely with technology leaders. Instead, directors should expect to hear from the executive accountable for the business outcome, whether that's the CMO, COO, or another functional lead, depending on the use case.
Boards shouldn’t look to add AI-specific KPIs. Instead, they should assess whether AI is helping deliver against existing business goals. “The measure is not AI itself,” said Susan. “It’s whether you’ve achieved your goal faster, cheaper, or better using AI.”
Sanjeevan pointed to metrics such as improved customer acquisition cost, reduced churn, or increased supply chain efficiency - depending on the objective.
Boards must invest in their own education but also ensure the executive team and wider organisation are upskilling. “You can’t challenge what you don’t understand,” said Richard Edmondson. “And you can’t build strategy on tech you barely grasp.”
Shefaly raised an important challenge: if board members delay developing AI literacy, they risk becoming a liability. “The question is not who should educate us, but whether we’re willing to take responsibility for our own relevance,” she noted. AI joins a growing list of boardroom literacies that can no longer be optional.
The panel highlighted cross-industry learning as vital. “Look above the parapet,” urged Shefaly. “Talk to peers outside your sector to see how they’re approaching AI.”
The Broader Picture: Balancing Innovation with Sustainability
The session also surfaced a broader concern: the tension between AI’s potential and its environmental footprint. Susan pointed to the energy demands of AI models and the uncertainty around their long-term impact. Climate change and AI are both unstoppable forces, but can boards find ways to pursue innovation without compromising their sustainability goals? As Susan put it:
“We’re looking for answers. We’re looking for perfection. We’re looking to people like us to know the answers. We don’t know them yet. And that’s the environment we’re dealing with on these two issues. We have to make the best decision we can from the information we have - and the commitment to science on both sides of the fence.”
Ultimately, boards are encouraged to scrutinise the environmental implications of AI usage as closely as any other strategic investment.
Final Reflections: Each panellist closed with their top tip:
What’s Next?
This is just the beginning. The pace of change in AI is unrelenting, and so is the pressure on boards to respond thoughtfully and decisively.
This discussion made one thing clear: boards can no longer afford to view AI as someone else’s problem. Whether navigating strategic priorities, redefining risk frameworks, or evaluating leadership capability, AI is already reshaping the questions boards must ask and the expectations placed upon them.
The session underscored the importance of boardroom curiosity, cross-sector learning, and business-first thinking. AI should not be treated as a standalone topic, but integrated into broader conversations around value creation, governance, and long-term resilience. Crucially, directors must be willing to invest in their own literacy - not to become technologists, but to exercise sound judgement in uncharted territory.
Importantly, the conversation acknowledged that AI oversight doesn’t happen in a vacuum. Regulatory, cultural, and political approaches to AI vary significantly between the UK, Europe, the US, and China — and boards must stay alert to these differences. As Shefaly noted, AI’s geopolitical implications are vast, and directors must remain globally aware to navigate competitive and ethical tensions.
As Tanya noted in closing, directors today are facing a convergence of complex issues, from AI and cybersecurity to climate risk and geopolitics, many of which weren’t part of the traditional board remit even a decade ago. In this environment, staying curious, current, and constructively challenging is vital.
While no single blueprint emerged, the panel emphasised the importance of anchoring AI discussions in business outcomes, applying ethical scrutiny early, and challenging both executive assumptions and boardroom blind spots.
We hope this summary sparks continued debate and helps inform your conversations in the boardroom.