The Enterprise Trust Gap: Why Companies Fear Losing Control of AI
Why AI triggers public anxiety
The report taps into a broader rise in AI-related unease. Much of it comes down to how quickly the technology is spreading and how little the average person knows about what goes on behind the scenes. Many advanced models still act like “black boxes”, offering no clear insight into why they produce the answers they do. That lack of transparency fuels fears about losing control.
There’s also the issue of data. AI systems train on vast amounts of information, much of it harvested from social platforms, browsing histories, smart devices and other sources people never actively consented to share. Add to that the steady drumbeat of data breaches, and it’s no surprise that privacy ranks so high on the list of concerns.
Another layer is the rise of deepfakes and synthetic media. When realistic fake content becomes common, trust begins to erode. People worry about what’s real, who to believe, and whether AI will distort public debate. Bias plays into this too; AI systems trained on skewed datasets can replicate and amplify discrimination in areas such as hiring and lending.
Job fears still matter, even if they rank lowest. For many, it’s not just the threat to income. It’s the idea that work tied to thinking and decision-making, a big part of how people define themselves, could be replaced by a machine.
Girėnas says organisations feel a version of this themselves. “When people don’t understand how AI actually works, confidence drops. It slows down adoption. The only way forward is to build a system of trust, and that starts with complete visibility across the AI tools and workflows inside a company.”