Are we investing in the wrong infrastructure for AI?

Most AI transformation conversations focus on technology. But a growing body of thinking — and emerging research — suggests that without equivalent investment in human infrastructure, even the most sophisticated tools will underdeliver. It’s time to rebalance the equation.

The infrastructure we talk about, and the kind we don’t

Ask any business leader about their AI strategy and you will likely hear about platforms, pilots and productivity gains. Which tools are being deployed, which processes are being automated, which efficiency metrics are being tracked. The infrastructure conversation, in most organisations, is essentially a technology conversation.

That framing is understandable. Technology is visible, purchasable and measurable in ways that feel straightforward. You can point to a Microsoft 365 Copilot licence, a rollout timeline, a headcount of trained users. Progress has a clear shape.

But Victoria Ferrier, a chief people officer and strategy specialist, makes a compelling case that this framing misses something critical. Speaking recently to Isabel Berwick for the Working It column in the Financial Times, and developing the idea further in her own practice, Ferrier argues that organisations are pouring investment into technical infrastructure while largely neglecting what she calls ‘human infrastructure’, and that this imbalance carries a real cost.

“Your human infrastructure is as important as your tech and AI and systems infrastructure,” she told the FT. It is not, she is careful to emphasise, the same as training. It is not a learning and development programme or an upskilling initiative. It is something deeper and more structural than that.

What human infrastructure actually means

The phrase itself is worth chewing over. Infrastructure, in the technical sense, refers to the foundational systems that enable everything else to function. Roads, power grids, telecommunications networks. They are not the activity; they are what makes the activity possible. Apply that logic to people and you arrive at something quite different from conventional workforce thinking.

Ferrier’s working definition centres on capacity: specifically, the organisational capacity to absorb uncertainty, exercise judgement and adapt. She uses the analogy of a gardener who would not plant a rose without first preparing the soil. You develop the environment before you introduce the thing that needs to grow in it. In an AI transformation context, that means building the conditions in which people can actually work well with AI: not just completing the task in front of them, but navigating the ambiguity, novelty and occasional failure that come with genuinely new tools.

This is not a soft concept, even if it resists easy quantification. Ferrier works with Professor Ruth Crick, an academic whose research has produced a framework for actually measuring the effectiveness of human infrastructure inside organisations. The framework identifies eight capacities that together constitute a workforce’s readiness to learn and adapt:

  • mindful agency
  • sense making
  • curiosity
  • creativity
  • hope and optimism
  • belonging
  • collaboration
  • orientation to learning

The crucial point, as Crick has put it, is that “the capacity of an organisation to learn — not training completion rates, but genuine adaptive capability — is empirically measurable and developable in the course of day-to-day work.”

Once you understand that, the conversation shifts from ‘soft skills’ to strategic infrastructure. And that shift matters.

Why this resonates for AI governance

From an AI governance perspective, the human infrastructure argument connects to something we see repeatedly in our work with organisations. Capability and intent matter, and both need attention, but neither is the whole solution. Most organisations now have AI policies. Most have deployed at least one major tool. Many have run awareness sessions and introductory training. The problem lies in the space between deployment and effective use – what we have previously described as the application gap.

That gap does not close just through more training. It closes when people have the organisational conditions to make good judgements about when and how to use AI, to question outputs critically, to escalate concerns when something feels wrong, and to take genuine ownership of the results that AI-assisted work produces. These are not skills you acquire in a workshop. They are capacities that develop through practice, feedback and an environment that supports them.

This matters particularly in regulated sectors. Financial services, healthcare and legal services, for example, are environments where the stakes of poor AI-assisted judgement are high, and where governance frameworks increasingly require organisations to demonstrate not just that they have deployed AI responsibly, but that the people using it are equipped to do so. A well-written AI policy is necessary but not sufficient. The policy needs people behind it who have the capacity to apply it in practice.

The EU AI Act and credible Responsible AI frameworks place significant weight on human oversight of AI systems – the Act makes it an explicit obligation for high-risk systems under Article 14. That oversight is only meaningful if the humans providing it have genuine capability to exercise it, not just a checkbox confirming that they attended a training session. Human infrastructure, in this sense, is not ancillary to AI governance. It is a core component of it.

The accountability question

There is a harder dimension to this too, one that concerns where accountability lives. One of the recurring patterns in AI governance failures is that responsibility gets pushed towards the people with the least power to address systemic problems.

A junior employee spots something concerning in an AI output. Do they have the organisational support to raise it? Is there a mechanism for that concern to reach the people who can act on it? Does the culture reward that kind of judgement, or penalise it as friction?

These are human infrastructure questions. They are about the organisational soil in which responsible AI use either takes root or fails. You cannot solve them by writing a better policy. You have to build the conditions — the belonging, the psychological safety, the collaborative norms — that make responsible behaviour the natural default rather than the effortful exception.

A different kind of AI readiness framework

Ferrier’s critique of standard AI readiness thinking is direct: “Most AI readiness frameworks are built around individual skill acquisitions.” The assumption embedded in most approaches is that readiness is a matter of getting enough people to a sufficient level of technical competence. Learn the tool, pass the assessment, the organisation is ready.

She believes that model will not scale, and there is good reason to agree with her. The rate of AI capability development means that specific tool skills have a short shelf life. What organisations actually need is the underlying capacity to keep adapting, so that their people can meet new tools, new use cases and new governance challenges with confidence and work through them productively. That calls for the kind of human infrastructure captured in Crick’s eight capacities, not a catalogue of completed training modules.

This also has implications for how we think about measurement. Training completion rates are easy to count and easy to report upwards. Adaptive capability is harder to capture but far more meaningful. The research behind Crick’s framework suggests it can be measured, which means it can also be tracked over time, tied to business outcomes, and used to make the case for investment in ways that go beyond intuition.

What this means for leadership

Ferrier’s practical recommendation is pointed: elevate the chief people officer to a true strategic partner in AI transformation. In most organisations, HR sits downstream of strategy. It receives the decisions that have already been made and is asked to implement them. That positioning is not fit for purpose in an AI transformation context, where the human dimension is not an implementation detail but a foundational design question.

This connects to a broader point about how AI investment decisions get made. Capital allocation decisions in most organisations are dominated by technology considerations: which platform, which licence, which integration. The human infrastructure investment — the work of building adaptive capacity, psychological safety, collaborative norms and orientation to learning — is often treated as a cost to be minimised rather than an asset to be developed.

Getting this right means pairing investment in technical infrastructure with proportionate investment in the environment that will receive it. Not as an afterthought, and not as a communications exercise, but as a genuine strategic commitment, with the measurement, accountability and executive sponsorship that any serious infrastructure investment requires.

Two systems, one foundation

The most useful reframe that human infrastructure thinking offers is this: technical infrastructure and human infrastructure are not competing claims on budget and attention. They are interdependent systems. One without the other produces predictable failure modes: sophisticated tools in an organisation that lacks the human capacity to use them thoughtfully, or well-intentioned people without the technical environment that makes their judgement effective.

The soil and the rose. You cannot separate them and expect anything to grow.
