A year ago, when I ran an AI awareness and governance session with people working in financial services, the conversations were dominated by possibility. Attendees wanted to explore potential use cases, understand what AI could do for their teams and figure out where to start. Fast forward twelve months and the same session tells a very different story.
This isn’t a scientific study. It’s observation from running the same structured session with similar groups of financial services professionals every six months. But the shifts I’ve observed are consistent enough to be worth sharing, because they reflect a broader pattern in how organisations are maturing in their relationship with AI.
MS Copilot is everywhere but not everyone finds it useful
The most visible change is the near-universal presence of Microsoft Copilot. A year ago, perhaps a handful of attendees had access and were regular users. Now, almost all attendees' organisations have rolled it out in some form.
But the experience is far from uniform. Those with the full Microsoft 365 Copilot licence — the version that integrates with SharePoint, Teams, Outlook and the wider Microsoft 365 ecosystem — are finding genuine value. It can surface information from across their organisation’s documents and communications, which makes it a practical daily tool rather than just a chatbot.
Others, particularly those on more basic versions, are discovering that standalone LLMs like ChatGPT or Claude often deliver better results for the tasks they actually need help with. This is creating an interesting dynamic where enterprise-mandated tools and individual productivity tools are diverging.
The lesson here is important: rolling out an AI tool is not the same as enabling AI adoption. The tool needs to connect meaningfully with the way people work and the data they need to access.
AI policies are arriving — finally
A year ago, very few attendees had a clear AI policy in place. Most were working in a grey area, either waiting for guidance from leadership, or quietly experimenting without formal boundaries.
This time around, the majority either have an AI use policy in place or are close to finalising one. That’s genuine progress. It means that conversations about AI are moving from informal experimentation into something more structured and governed.
However, having a policy is only the starting point. A document that says “you may use approved AI tools for these purposes” is necessary but not sufficient. The real challenge, and where I see most organisations still struggling, is translating policy into practice. How do teams know which tasks are appropriate for AI? How do they evaluate outputs? What does good AI use actually look like in their specific role?
The application gap is the new frontier
This is perhaps the most significant shift. A year ago, the primary challenge was ideation: people wanted help identifying where AI could add value. Now, most people have a reasonable understanding of AI's potential. They've read the articles, seen the demos, perhaps even taught themselves, via online resources, how to build useful agents.
The problem has moved downstream. The challenge is no longer “what could we do with AI?” but “how do we make AI work in practice, in our teams, with our data, within our governance framework?”
This is what I call the application gap. It’s the space between knowing what AI can do and actually embedding it into daily workflows in a way that’s effective, compliant and sustainable.
It shows up in questions like: “We’ve got Copilot but nobody’s really using it properly.” Or: “Our team tried using ChatGPT for drafting but the outputs weren’t good enough and people gave up.” Or: “We have an AI policy but people don’t know how to apply it to their specific work.”
What this means for organisations
These shifts suggest that financial services organisations are entering a new phase of AI maturity. The awareness phase is largely complete. The policy phase is well underway. But the implementation phase — where AI actually changes how people work — is where most are stuck.
This has implications for how organisations invest in AI support and training. Generic AI awareness sessions have done their job. What's needed now is practical, role-specific guidance that helps people identify and prioritise the most valuable AI use cases and get them adopted.
That means less time on “what is AI and how does it work” and more time on: how do I write effective prompts for my specific tasks? How do I evaluate whether an AI output is good enough to use? How do I integrate AI into my existing workflows without creating new risks? What does responsible AI use look like in my role?
The bottom line
The conversation in financial services has matured significantly in twelve months. The novelty has worn off. The policies are arriving. Some useful tools are available. Now comes the hard part: making it all work in practice.
Organisations that recognise this shift, and invest in closing the application gap rather than simply adding more tools or running more awareness sessions, will be the ones that actually capture value from their AI investments.
The question is no longer whether your organisation is using AI. It’s whether your people know how to use it well.
At AI Governance we have a structured methodology that helps you identify and prioritise AI use cases objectively. Want to know more? Email sue.turner@aigovernance.co.uk