
Most organisations are asking the wrong question about AI

James Ridgway offers a reality check about AI adoption, explaining why it goes wrong and why AI isn't always the answer.

Kicking off Talking AI, we’re pleased to hear from longtime Sheffield Digital member The Curve, in a guest post from CTO and Founder James Ridgway. Drawing on the company’s extensive experience supporting organisations through digital adoption, James explains why many AI projects fail and offers practical ways to use AI in a purposeful, accessible and sustainable way.

The question I hear most often from leadership teams is some version of: “How do we implement AI in our organisation?”

It is a reasonable question. But in my experience, it is usually the wrong one. And starting there is part of the reason so many AI initiatives quietly stall.

A few months ago I was speaking with a leadership team who had spent the better part of a year trying to get an AI initiative off the ground.

They had done everything that looked right. Selected a reputable vendor. Allocated budget. Had the internal conversations. The project had senior sponsorship.

Twelve months later, they had a prototype that nobody was using. The team had quietly stopped talking about it.

When we worked through what had actually happened, the technology was the last thing on the list. The model had performed well in testing. The vendor had delivered what they promised.

The failure had happened much earlier. Before a single line of code was written. Nobody had clearly defined what problem they were solving.

The most common reason AI fails

I have had versions of this conversation more times than I can count. The pattern is consistent.

Most AI projects do not fail because of the technology. They fail because the problem was never properly understood in the first place.

This matters because the diagnosis shapes the response. If you think the technology failed, you look for a better tool or a different vendor. If you understand the problem was poorly scoped, you realise the issue needs fixing before any technology decision is made.

The three failure modes I see most often are straightforward.

Starting with a solution rather than a problem. The team decides they want to use AI, picks a candidate process, and starts building. The question they ask is “how do we apply AI here?” rather than “what outcome are we trying to improve, and is AI the right way to get there?”

Underestimating data. AI systems depend on reliable, well-structured information. Many organisations carry years of data that is fragmented across systems and inconsistent in format. Feeding that into an AI project does not fix the data problem. It exposes it, usually at the worst possible moment.

Building in isolation. A model that is not integrated into how people actually work will not be used. A tool that sits outside a workflow creates friction rather than removing it.

None of these are technology problems. They are organisational ones.

Sometimes the answer is not AI at all

There is something else worth saying plainly.

For a meaningful proportion of the challenges organisations bring to us, AI is not the right answer. Part of our job is being honest about that.

If a process can be solved with clear rules, straightforward automation, or better reporting, introducing AI adds complexity without adding value. Rules-based systems are transparent, predictable, and easier to govern. For eligibility checks, workflow routing, and compliance validation, they are usually the better choice.
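To make the distinction concrete, here is a minimal Python sketch of a rules-based eligibility check. The rules, thresholds and field names are invented for illustration, but the point stands: every rule is explicit, so the outcome can be explained and audited line by line, which is exactly the transparency and predictability described above.

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    age: int
    annual_income: int
    months_at_address: int

# Each rule is explicit, auditable, and easy to change.
# A reviewer can read this and know exactly why a case
# was accepted or rejected. No model required.
RULES = [
    ("minimum age", lambda a: a.age >= 18),
    ("income threshold", lambda a: a.annual_income >= 12_000),
    ("address stability", lambda a: a.months_at_address >= 6),
]

def check_eligibility(applicant: Applicant) -> tuple[bool, list[str]]:
    """Return (eligible, names of any rules that failed)."""
    failures = [name for name, rule in RULES if not rule(applicant)]
    return (not failures, failures)

eligible, failures = check_eligibility(
    Applicant(age=34, annual_income=28_000, months_at_address=3)
)
print(eligible, failures)  # False ['address stability']
```

When the logic fits in a list like that, adding a model only obscures it.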

AI earns its place when problems involve patterns that cannot be expressed as explicit rules. When there is unstructured information to interpret at scale. When the volume of data exceeds what traditional analysis can handle.

One of the clients we work with, Prospera Wealth Management, had a genuine AI problem.

Their back office was manually extracting data from complex multi-page provider documents, case by case. It was slow, inconsistent, and becoming a bottleneck as the business grew.

We built a solution using natural language processing that extracts and structures data as documents are uploaded. A human verification step was built in to maintain regulatory oversight. The result was the ability to handle up to ten times the volume during peak periods without increasing headcount.
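Prospera’s actual system is not public, so purely as an illustration, here is a toy Python sketch of the shape of that pipeline: an extraction step produces structured output, and a human verification step sits between extraction and the system of record. The field names, the regex stand-in for the NLP model, and the helper functions are all invented for this example.

```python
import re
from dataclasses import dataclass

@dataclass
class ExtractedCase:
    policy_number: str | None
    provider: str | None
    verified: bool = False   # flipped only by a human reviewer

def extract_fields(text: str) -> ExtractedCase:
    # Toy stand-in for the NLP step. Real provider documents are
    # far messier and more varied, which is exactly why
    # pattern-learning beats hand-written rules for this problem.
    policy = re.search(r"Policy\s*(?:No\.?|Number)[:\s]+(\S+)", text, re.I)
    provider = re.search(r"Provider[:\s]+(.+)", text, re.I)
    return ExtractedCase(
        policy_number=policy.group(1) if policy else None,
        provider=provider.group(1).strip() if provider else None,
    )

def human_verify(case: ExtractedCase) -> ExtractedCase:
    # Placeholder for the verification step: a reviewer confirms
    # or corrects the structured output before anything is written
    # back to the system of record.
    case.verified = True
    return case

case = human_verify(extract_fields(
    "Provider: Example Life\nPolicy Number: AB-123456"
))
print(case)
```

The detail that matters is structural: verification is a stage in the pipeline, not a report run afterwards.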

That outcome was only possible because the problem was clearly understood before the technology was selected. The question was not “should we use AI?” but “we have a specific bottleneck, what is the right way to solve it?”

Getting that right also meant being deliberate about the human element. The verification step was not an afterthought. It was a core part of the solution design. The people who had previously spent their time on manual data entry did not disappear. Their role shifted.

Instead of processing documents by hand, they were reviewing and validating structured outputs, applying judgement where it mattered most.

That is a more valuable way to spend skilled time. And it is a more honest picture of what AI actually does inside most organisations. It does not remove people from the process. It changes where their contribution sits within it.

What getting it right actually requires

The organisations making genuine progress with AI tend to share a few characteristics. They start with the operational problem, not the technology.

They check whether their data and infrastructure can support what they are trying to build before committing. They test feasibility against real data in a controlled environment before scaling anything. And they treat governance as part of the design, not something to address once the system is live.

That last point matters more than it is often given credit for. An AI system that cannot be explained, audited, or overridden is a liability. Building accountability in from the start is not a constraint on innovation. It is what makes innovation sustainable.

None of this is complex. But it requires discipline. And it requires asking the uncomfortable question early: is AI actually what we need here, or are we reaching for it because it is fashionable?

Going deeper

We have put this thinking into a practical guide, Is AI the Answer? It covers how AI is being applied in practice, why initiatives stall, what readiness looks like across data, infrastructure and governance, and how to tell the difference between a problem that needs AI and one that needs something simpler. It is free to download and written for business leaders, not technical teams.

If you would rather have the conversation in person, we have three events coming up.

On 21 May I am running a 30-minute webinar on why AI initiatives stall and how to move from experimentation to practical value.

On 3 June we are hosting a small closed-door roundtable in Sheffield at Sheffield Technology Park. A grounded conversation with senior leaders about what AI actually looks like inside organisations today. Places are limited to ensure meaningful conversations.

On 16 June The Curve’s CEO, Paul Ridgway, will be joined by a panel of industry experts including Bethan Vincent, Daniel Bumby, and Rachel Swann for a 45-minute webinar exploring real-world adoption.

The goal across all three is the same. A more honest conversation about what AI can and cannot do inside real organisations.

That conversation is worth having before you start the next project.