When Focaldata and the Financial Times published the first wave of their Workforce AI Tracker last week, three numbers should have stopped every executive team in their tracks:
- 22% of workers say their company has a workplace AI policy
- 22% say their company has a paid subscription to any AI tools
- 14% say they have received formal training from their employer on how to use AI
Yet half of the workforce are using AI tools at least once a month.
Think about what these figures imply. Roughly four in five organisations across the UK and US have given their staff no rules of engagement, no licensed tools, and no training, while half their employees crack on regardless. Two-thirds of workplace ChatGPT users are operating without a company AI policy in place, and over 60% are doing so at companies without the paid subscriptions that typically afford greater data privacy protections.
This is shadow AI at industrial scale. And it is happening because organisations have confused inaction with caution.
A new tool, no manual, no training
Imagine handing every member of staff the keys to a piece of industrial machinery they have never operated. No safety briefing. No instruction manual. No designated trainer. Just a vague sense that the machine is the future and a competitor down the road is already using it. You’d be horrified. HR would be horrified. Your insurer would almost certainly refuse to renew your policy.
Yet this is precisely what is happening with generative AI in most workplaces today. The tools are powerful, the outputs look polished, and the interface is so friendly that the existence of risk is easy to forget.
The machinery analogy holds in another important way. Industrial safety is not achieved by banning the equipment. It is achieved through training, certification, supervision and clear procedures. Organisations that responded to the arrival of power tools by pretending they did not exist were not safer than those that embraced them properly. They were simply less productive and, when accidents happened, less prepared.
The governance gap is a productivity gap
The Focaldata data makes this trade-off explicit. A company that has delivered formal AI training sees a 37 percentage point increase in the share of staff using AI on any given workday, compared with an identical company that has provided none. Even informal guidance produces a 24 point uplift. Training is, by some distance, the largest single driver of AI adoption that the researchers identified, bigger than age, seniority, or industry sector.
In other words, organisations that treat training as an optional cost are also treating productivity as optional. The technology and communications sector, which has the highest training and adoption rates, is also achieving the largest productivity gains (7.8%, more than double the workforce average) and is the most likely to be planning to grow headcount over the next year. Augmentation and growth are showing up in the same places that governance and training are showing up. This is not a coincidence.
Meanwhile, the organisations that have left their staff to figure it out alone are accumulating a different kind of return: a workforce making confident-sounding decisions based on outputs they cannot evaluate, sending data through tools whose terms they have not read, and building working habits that will need to be unwound later. The bill for this comes due slowly, and then all at once.
What “no policy” actually means
It is tempting to interpret the absence of an AI policy as a neutral state, a sort of organisational pause while leadership figures out what to do. It is not. In a workplace where 65% of employees have already used AI at least once on the job, the absence of a policy is itself a policy. It is a decision to delegate every governance question, every data handling judgement, every accuracy check, to whichever individual employee happens to be at the keyboard.
Those individuals are not equipped to make those decisions on the organisation’s behalf. They have not been told which tools are sanctioned, what data is permissible to input, when human review is required, what disclosure to clients looks like, or where the legal and regulatory tripwires sit. They are improvising, and the quality of that improvisation varies enormously.
The Focaldata research shows that workers themselves are not the obstacle. Fewer than one in five non-users cite ideological opposition to AI as the reason they do not use it. The barriers they identify are practical: they have not been trained, and they cannot see how it applies to their role. These are problems that employers can solve. They are choosing not to.
Steering, not braking
Governance, in this context, is not a brake on the machine. It is the steering wheel. It is the mechanism through which an organisation channels the productivity potential of AI in directions that align with its strategy, its values, its regulatory obligations and its risk appetite. Without it, the tool drives the company rather than the other way round.
Three things should be on every leadership team’s agenda in the next quarter.
First, a policy. Not a 40-page document that nobody reads, but a clear and accessible statement of which tools are sanctioned, what data can and cannot be entered, what review is required before AI-assisted outputs leave the building, and where staff should go with questions. The bar here is comprehensibility, not comprehensiveness.
Second, training. The Focaldata data is unambiguous on the size of the prize: even informal guidance moves the needle, and formal training moves it more. This does not need to be a six-month transformation programme. It needs to be specific, role-relevant, and refreshed as tools evolve. The organisations that get this right in 2026 will compound their advantage every quarter.
Third, sanctioned tools. If staff are using AI anyway (and they are), the question is whether they are using tools the organisation has vetted, contracted, and configured, or tools they have signed up to with a personal email address. The cost of an enterprise subscription is trivial compared with the cost of a data incident.
The honeymoon will not last
Focaldata’s researchers describe the current moment as an “AI honeymoon period”. Workers report higher quality output, more interesting work, and reduced stress. Satisfaction is high. But the same workers are simultaneously pessimistic about what AI means for the labour market as a whole (particularly in the UK), and a majority believe AI will reduce total employment in the economy over the next five to ten years.
That gap between personal experience and collective expectation is unstable. As the productivity gains concentrate, as the bifurcation between AI-native and AI-excluded workers deepens, and as the governance failures of the early period begin to surface as incidents, scandals and lawsuits, the political and regulatory environment will shift. Organisations that have built their AI use on a foundation of trained, supported, governed staff will weather that shift. Organisations that have built it on shadow IT and individual improvisation will not.
The roughly 78% of organisations with no policy, and the similar majorities without licensed tools or training, are not standing still. They are accumulating risk. The good news is that the lever to address this is in their hands, and the evidence on what works is now clear. The only question is whether they pick it up before the bill arrives.

