Across much of the energy and commodities sector, artificial intelligence still feels like a question of timing.

Some organisations see caution as prudence and choose to wait. Others point to a growing list of proofs of concept as evidence that they are already moving forward. In both cases, progress is often judged by visible activity rather than measurable impact: restraint at one end, experimentation at the other.

In reality, AI adoption is neither a simple switch nor a competitive sprint. It is a test of how well an organisation understands its own processes, data, and decision making. What AI exposes is not just technical capability but organisational reality. This is why some AI efforts in trading fail for reasons people do not expect. They do not collapse because the models are inadequate or the technology immature. They falter because AI brings hidden assumptions into the open, and many organisations are not yet prepared to confront them.

Stage One: Not started yet, but not standing still

There are legitimate reasons why some trading businesses have not yet deployed AI in live operations. Regulatory scrutiny is intense, margins are volatile, and the cost of mistakes is high. In that context, caution can feel like responsible governance.

The danger is confusing inaction with neutrality.

Even organisations that have not launched formal AI programmes continue to change. Processes evolve organically. Manual workarounds become part of everyday practice. Data is reconciled through experience and judgement instead of explicit rules. Critical knowledge settles in people rather than systems.

Much of this remains invisible. Human expertise quietly absorbs inconsistency and ambiguity as part of routine work. When AI is eventually introduced, these hidden dependencies surface almost immediately, and early initiatives tend to stumble not because the technology is weak, but because the organisation struggles to answer simple questions with certainty. Which figure is authoritative? When does it become final? Who owns it? Under what circumstances can it change?
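
As an illustration of what answering those questions explicitly might look like, the sketch below (hypothetical field names, not drawn from any particular trading system) attaches authority, ownership, and finality rules to a figure, rather than leaving them to informal judgement:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class GovernedFigure:
    """A figure carrying the answers AI forces into the open:
    which value is authoritative, who owns it, and when it is final."""
    value: float
    source: str            # authoritative system of record
    owner: str             # accountable desk or team
    as_of: datetime        # the point in time the value describes
    final_after: datetime  # before this moment the figure may still be restated

    def is_final(self, now: datetime | None = None) -> bool:
        now = now or datetime.now(timezone.utc)
        return now >= self.final_after

# A system consuming this figure no longer needs a person to "know"
# whether it can still change; the rule is explicit and testable.
pnl = GovernedFigure(
    value=1_250_000.0,
    source="settlements",
    owner="power-desk-eu",
    as_of=datetime(2024, 3, 1, tzinfo=timezone.utc),
    final_after=datetime(2024, 3, 5, tzinfo=timezone.utc),
)
print(pnl.is_final())
```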

Such questions are manageable when people bridge the gaps informally, but they become unavoidable when a system must operate consistently, at scale, and under scrutiny. For those that have not yet begun, the warning is straightforward: waiting does not preserve simplicity; it allows complexity to build out of sight. When AI finally arrives, it meets not a clean slate but years of unexamined assumptions.

Stage Two: The proof of concept trap

At the opposite end are organisations that have embraced experimentation. They have run multiple AI proofs of concept and can demonstrate technical feasibility, with some even showcasing impressive results in controlled settings.

Yet daily operations often look much the same.

This is the proof of concept trap. POCs are designed to answer narrow questions quickly. They sidestep governance, streamline data flows, and operate outside production constraints. That is precisely why they succeed. It is also why they struggle to scale.

Over time, organisations accumulate a portfolio of successful experiments without a clear route to operational value. Each POC carries its own assumptions and solves a local problem, and few address the harder issues of accountability, trust, or ownership of decisions.

Where progress really stalls

Repeated experimentation can also introduce fragility. Point solutions multiply. Architectural coherence begins to erode. Confidence weakens, not in AI itself, but in the ability to translate insight into impact without increasing risk. At this point, progress slows not because the technology has disappointed, but because the organisation has reached the boundary between experimentation and responsibility.

When businesses find themselves caught between early promise and meaningful impact, the instinct is often to search for better tools, more data, or more advanced models. In practice, the blockage usually sits elsewhere.

It appears where AI meets real decision making.

Trading decisions are time sensitive, commercially delicate, and shaped by context that rarely fits neatly inside systems. Introducing AI into this environment raises questions that extend beyond performance metrics. How much trust is sufficient? Who carries accountability when outputs are incomplete or wrong? When should human judgement override automated insight, and how is that intervention recorded?
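
One way to make that last question concrete (a minimal sketch with hypothetical names, not a reference to any particular platform) is to treat a human override as a first-class, recorded event rather than a silent correction:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class OverrideRecord:
    """An auditable record of human judgement overriding an AI output."""
    model_output: float
    human_value: float
    decided_by: str      # the accountable individual, not a shared login
    reason: str          # the context the system could not see
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

audit_log: list[OverrideRecord] = []

def apply_with_oversight(model_output: float, *, human_value=None,
                         decided_by="", reason="") -> float:
    """Use the model's output unless a trader overrides it, in which
    case the intervention is recorded rather than lost."""
    if human_value is None:
        return model_output
    audit_log.append(OverrideRecord(model_output, human_value,
                                    decided_by, reason))
    return human_value
```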

Many organisations realise at this stage that they have never fully described how decisions are made today. Processes exist, but their interpretation varies across desks and regions. Exceptions are handled skilfully, but informally, and risk is managed through experience rather than explicit frameworks.

AI struggles with ambiguity of this kind. It does not fail spectacularly; it simply cannot advance without clearer boundaries. This is where many initiatives slow dramatically, not for lack of potential, but because they have reached the limits of implicit understanding.

Brittleness is the hidden risk

As AI programmes expand, another risk tends to develop. Infrastructure that looks advanced on the surface can become fragile underneath.

Brittleness rarely stems from a single poor choice; it tends to grow through a series of sensible short-term decisions: a POC pushed into production with minimal redesign, or a model becoming tightly coupled to a specific data feed. Validation is layered on afterwards. Governance is attached rather than engineered.

Gradually, organisations end up with systems that function well only while conditions remain stable. Integrating new data sources becomes costly, changing tools feels hazardous, and improving models threatens downstream processes that were never designed to rely on them.

In a field where markets and technology both evolve quickly, this creates a difficult paradox. AI is introduced to increase agility, yet the organisation becomes less able to adapt.

What resilient businesses do differently

Businesses that move past this stage tend to share a few common traits. They recognise early that AI will change faster than traditional systems, so instead of pursuing stability through rigidity, they design deliberately for evolution. Business logic is separated from model logic, and assumptions are recorded rather than hidden.
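
A minimal sketch of that separation, assuming a hypothetical price-forecast model: the model sits behind a narrow interface, business rules live outside it, and assumptions are declared where they can be reviewed rather than buried in model code.

```python
from typing import Protocol

# Assumptions recorded as named, reviewable values,
# not hidden inside the model or the calling code.
MIN_CONFIDENCE_FOR_RECOMMENDATION = 0.7

class PriceForecaster(Protocol):
    """Model logic sits behind this interface, so the model can be
    retrained or replaced without touching the business rules."""
    def forecast(self, instrument: str) -> tuple[float, float]:
        """Return (forecast_price, confidence in [0, 1])."""
        ...

def recommend_hedge(model: PriceForecaster, instrument: str,
                    current_price: float) -> str:
    """Business logic: decides what to do with a forecast.
    It knows nothing about how the forecast is produced."""
    price, confidence = model.forecast(instrument)
    if confidence < MIN_CONFIDENCE_FOR_RECOMMENDATION:
        return "refer to trader"   # explicit boundary, not silent automation
    return "hedge" if price < current_price else "hold"
```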

They treat validation and observability as core disciplines, not simply to satisfy regulators, but to sustain trust. They know where their numbers originate, how they are produced, and when they deserve scrutiny.
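
In code, that discipline might look like a small validation gate (a sketch with assumed thresholds, not a prescribed implementation) that flags when a number deserves scrutiny instead of passing it through silently:

```python
import math

def validate_figure(value: float, previous: float,
                    max_relative_move: float = 0.15) -> list[str]:
    """Return the reasons a figure deserves scrutiny; an empty list
    means it passed. Flags are surfaced, never silently discarded."""
    flags = []
    if math.isnan(value) or math.isinf(value):
        flags.append("value is not a finite number")
    elif previous and abs(value - previous) / abs(previous) > max_relative_move:
        flags.append(f"moved more than {max_relative_move:.0%} vs prior value")
    return flags

# Usage: scrutiny becomes a recorded, observable event,
# rather than an analyst's private judgement.
for flag in validate_figure(58.2, previous=42.0):
    print("needs review:", flag)
```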

Most importantly, they define AI's role in decision making before scaling its reach. They are clear about where it informs, where it recommends, and where it acts; human accountability is explicit, not assumed.
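
That boundary can be written down rather than assumed. A sketch (illustrative decision types and owners only) expressing AI's role per decision as explicit, reviewable configuration:

```python
from enum import Enum

class AIRole(Enum):
    INFORMS = "informs"        # AI provides context; a human decides
    RECOMMENDS = "recommends"  # AI proposes; a named human approves
    ACTS = "acts"              # AI executes within pre-agreed limits

# The scope of automation is declared and versioned, with an
# accountable owner for every decision type.
DECISION_POLICY = {
    "intraday-position-sizing": (AIRole.RECOMMENDS, "head-of-desk"),
    "market-commentary":        (AIRole.INFORMS,    "research-lead"),
    "auto-hedging-small-lots":  (AIRole.ACTS,       "risk-officer"),
}

def accountable_owner(decision_type: str) -> str:
    role, owner = DECISION_POLICY[decision_type]
    return f"{decision_type}: AI {role.value}; accountability sits with {owner}"
```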

Redefining progress

Success with AI in energy and commodities trading is often described in terms of sophistication. Smarter models. Faster computation. Greater automation.

In practice, success is more subtle than this.

It appears in organisations that can defend their numbers under pressure, and in systems that evolve without undermining trust. Teams that succeed are the ones that understand not only what the technology is doing, but why.

AI does not reward speed for its own sake, nor does it penalise caution. It rewards honesty about readiness.

AI is not replacing traders. It is exposing how trading organisations truly function, and those prepared to face that reality will find progress comes more naturally. Those who are not will continue to mistake activity for momentum.
