UK companies racing to adopt AI face growing risk exposure from the very people meant to control it. New research commissioned by La Fosse shows that it is not juniors or external threats but senior leaders who carry the highest risk when using AI at work, driven by heavy use, weak checks and a readiness to override technical judgement.
Some 78% of C-suite executives say they use AI for work they are not trained to do. At the same time, 93% say AI-informed decisions at their level have relied on inaccurate data, and 4 in 10 report serious business impact as a result.
This places boards in a difficult position: they control strategy, budgets and data access, yet mistakes in their AI use carry higher consequences than errors lower down the organisation.
What Happens When Confidence Runs Ahead Of Expertise?
According to the La Fosse survey, 7 in 10 C-suite executives describe themselves as very confident in their AI expertise. That confidence does not travel far down the organisation.
Only 27% of intermediate-level employees say they trust senior leadership’s AI capability. Middle management sits at 36%, and entry-level staff at 33%. The data points to a workforce watching decisions being made above them and doubting the judgement behind them.
More than half of all tech workers say AI decisions at their company are made without the right expertise. Among C-suite leaders themselves, 65% accept that this happens at the most senior level.
Ollie Whiting, chief executive of La Fosse, says this pattern creates hidden danger. “The people with the greatest autonomy over AI are also the ones most exposed to its risks. Concentrated at the top of organisations, this risk is often hidden behind confidence and speed, while gaps in governance, skills, and accountability widen beneath the surface.”
How Does Risky Behaviour Show Up In Everyday Work?
73% of C-suite executives admit uploading confidential company data into AI tools, compared with 42% of entry-level staff and 35% of intermediate employees, according to Censuswide.
Senior leaders also lean more heavily on AI for complex tasks. While 78% of C-suite executives use AI for work outside their training, fewer than half of junior and mid-level staff say the same.
Errors at the top also tend to travel further: 40% of C-suite executives report serious business impact from AI errors, against 32% of entry-level staff and 11% of intermediate employees.
Whiting spoke of a trust problem forming across organisations. “The disconnect between confidence and competence is undermining trust and adoption of AI across organisations,” he says. “When employees don’t believe leadership understands AI, they are less likely to flag problems early or trust decisions being made at board level.”
What Does This Mean For Jobs And Future Decisions?
The survey suggests anxiety is growing together with AI use. Half of UK tech workers expect AI to lead to job losses at their company within 3 years. This expectation goes hand in hand with the doubts about who controls AI decisions and how those decisions are tested.
Senior leaders appear aware of the gap: 8 in 10 C-suite executives say a dedicated AI specialist is needed at board level. That recognition sits uneasily with their high self-confidence, suggesting boards know support is missing even if they feel assured in their own judgement.
La Fosse says boards need more structure in how AI decisions are made, who signs them off and how data is handled. The company has released a free whitepaper to give leaders a better picture of AI use in the workplace and guidance on reducing risk.
“Even the most experienced experts are still learning about AI,” Whiting says. “Those leaders who question their own confidence and decision making are the ones most likely to avoid costly mistakes.”




