
Apr 7, 2026
Stephen DeAngelis
With apologies to the late Oscar Wilde, I borrow the title of this article from one of his most famous plays. Unlike Wilde's play, which follows two remarkably untrustworthy bachelors, I want to discuss the importance of trust in today's business environment. Trust is something that must be earned over time through consistent, reliable, and honest actions that demonstrate integrity. Trust cannot be demanded or forced; it must be built through experience. It can also be lost in the twinkling of an eye.
Unfortunately, in almost all economic sectors, trust is becoming rarer. A few years ago, media executive Steven Rosenbaum wrote, "If you're feeling less trusting, and more suspicious of information that comes across the transom, it turns out you're not alone."[1] He noted that the tech sector, which is now foundational for almost all other economic sectors, has been rapidly losing the public's trust: "That puts tech basically in the middle of the pack of industries — behind healthcare, retail, manufacturing but ahead of the energy, automotive, and financial services sectors."
AI’s Trustworthiness Challenge
The rise of generative artificial intelligence (genAI) has been heralded as a major breakthrough; however, genAI also introduced the world to the reality that AI systems can hallucinate (i.e., confidently present fabricated information as fact). That's a problem when you are trying to develop trust. On the heels of genAI, another type of AI stormed onto the scene: agentic AI. Tech writer Victoria Gayton explains, "AI agents are fast becoming the defining force behind the enterprise shift from simple automation to true decision intelligence."[2] Here's the rub. Gayton reports, "Ambition is outpacing execution. … At the center of that tension sits a question most enterprises still can't answer: 'Can we trust these systems to make decisions that matter?'" That's an excellent — and important — question. According to Gayton, the answer to that question means the difference between benefiting from AI and losing out on a company's investment in AI. She cites Scott Hebner, a principal analyst at theCUBE Research, who stated, "Trust is emerging as the currency of innovation. No trust, no ROI."
Even before agentic AI became the latest "hot tech," AI had a trust problem. Half a dozen years ago, Avivah Litan, a Vice President at Gartner Research, explained, "Security and privacy concerns are the top barriers to adoption of artificial intelligence, and for good reason. Both benign and malicious actors can threaten the performance, fairness, security, and privacy of AI models and data."[3] She added, "This isn't something enterprises can ignore as AI becomes more mainstream and promises them an array of benefits. … consumers believe that it is the organization using or providing AI that should be accountable when it goes wrong."
A report released last year by the Business Application Research Center (BARC) examined "how enterprises are building — or struggling to build — trust into modern data systems."[4] The results were disturbing, to say the least. Of the companies surveyed, "42% still say they do not trust the outputs of their AI/ML models." The point I'm trying to make is that there are trust challenges across the board that must be overcome if businesses are to benefit from their AI investments.
The Way Ahead
A few years ago, Edouard d'Archimbaud, CTO and Cofounder of Kili, bluntly stated, “Trust in AI is priceless.”[5] Unfortunately, there is no silver bullet solution for gaining AI trust. There are, however, expert suggestions for improving performance and trust. They include:
• Improve data quality. According to d'Archimbaud, "Until now, the machine learning community has focused on data quantity. Now, we need quality." Gaurav Rao, CEO at Howso, adds, "When it comes to trusting AI, the quality of the data used to drive its decisions holds immense significance. Flawed data, whether incomplete, incorrect, or biased, can skew the accuracy of an AI's prediction. The consequences of relying on unreliable data in an AI system could potentially become catastrophic."[6] (For a concrete flavor of such checks, see the first sketch after this list.)
• Employ explainable AI. Explainable AI, including causal approaches, can help improve trust. Gayton notes, "Enterprises struggle with AI-driven decisions that often appear as inscrutable 'black boxes.'"[7] She explains, "Unlike conventional AI models that rely on correlation, causal AI helps determine the underlying cause-and-effect relationships that drive decisions." At Enterra Solutions®, we use a system developed by Massive Dynamics®, which provides explanatory, transparent machine learning in the form of the proprietary Representational Learning Machine™ (RLM). The RLM is grounded in high-dimensional mathematics and functional analysis. It identifies a function that describes, with a high degree of precision, how the variables in a data set combine and contribute, through multiple layers of interaction, to produce the observable effects. The RLM is thus classified as a "glass-box," explanatory algorithm: the function it generates is visible. "Black-box" algorithms, by contrast, merely generate patterns; they offer no explanatory description of the dynamics of the system or data set and have no substantive "understanding" of what a pattern means. (The second sketch after this list illustrates the glass-box/black-box distinction in miniature.)
• Leverage security systems. Litan insists, “It is paramount for IT leaders to acknowledge the threats against AI in their organization in order to assess and shore up both the existing security pillars they have present (human focused and enterprise security controls) and the new security pillars (AI model integrity and AI data integrity).”
• Regulate wisely. Although businesses can self-regulate, the public is unlikely to trust such efforts. The public is much more likely to trust AI if national and international regulations put guardrails in place. Alex Polyakov, CEO and co-founder of Adversa AI, explains, “One of the most transformational technologies of our age is artificial intelligence. Security risks and pertinent questions should be foreseen and elucidated before the technology starts to conquer the world. That is why AI security-related consideration is of paramount importance.”[8]
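To make the data-quality point concrete, here is a minimal Python sketch (the first sketch referenced above) of the kinds of automated checks teams commonly run before training a model: completeness, plausibility, duplication, and group balance. The file name and column names ("age", "segment") are hypothetical placeholders, not a reference to any particular system.

```python
import pandas as pd

# Hypothetical dataset; "training_data.csv", "age", and "segment"
# are placeholder names used for illustration only.
df = pd.read_csv("training_data.csv")

# 1. Completeness: flag columns with a high share of missing values.
missing_share = df.isna().mean()
print("Columns >5% missing:\n", missing_share[missing_share > 0.05])

# 2. Correctness: flag values outside a plausible range.
if "age" in df.columns:
    implausible = df["age"].lt(0) | df["age"].gt(120)
    print("Rows with implausible ages:", int(implausible.sum()))

# 3. Duplication: exact duplicate rows can silently bias a model.
print("Duplicate rows:", int(df.duplicated().sum()))

# 4. Representation: heavily unbalanced groups are a bias warning sign.
if "segment" in df.columns:
    print("Group balance:\n", df["segment"].value_counts(normalize=True))
```

None of these checks guarantees trustworthy data, but running them routinely catches the "incomplete, incorrect, or biased" inputs Rao warns about before they ever reach a model.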
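The glass-box versus black-box contrast can likewise be shown in miniature. The sketch below (the second sketch referenced above) uses generic scikit-learn models, emphatically not Enterra's proprietary RLM, purely to illustrate the distinction: a linear model's fitted function can be read directly off its coefficients, while a boosted ensemble predicts well but exposes no comparable closed-form explanation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic data with a known ground truth: y = 3*x1 - 2*x2 + noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=500)

# Glass box: the fitted model IS an explicit, human-readable function.
glass = LinearRegression().fit(X, y)
print(f"y = {glass.coef_[0]:.2f}*x1 + {glass.coef_[1]:.2f}*x2 "
      f"+ {glass.intercept_:.2f}")

# Black box: accurate predictions, but no closed-form account of
# how the inputs combine to produce them.
black = GradientBoostingRegressor().fit(X, y)
print("Black-box prediction for [1, 1]:", black.predict([[1.0, 1.0]])[0])
```

An auditor can verify the glass-box function against domain knowledge; the black box can only be probed indirectly, which is exactly the transparency gap described in the explainability bullet above.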
Andre Durand, founder and CEO of Ping Identity, insists, “We’re living through a collapse of trust. … Trust is no longer assumed — it must be designed, verified, and earned at every digital moment.”[9]
Concluding Thoughts
Much of the distrust of AI is a result of social media. Durand explains, "It's the weaponization of AI to misinform, deepen division, and mimic our likeness through deepfakes. AI-driven deception is testing the very fabric of trust — the essence of authenticity. We've reached an inflection point. If everything can be faked, what — or who — can we trust?" That distrust of AI can spill over into the business world. That's why companies must take every possible step to increase trust in their AI systems. If they fail to do so, their systems won't be used by employees or trusted by consumers, and, as a result, companies won't gain the expected ROI on their AI investments. There is some good news. Companies can test and compare AI system results to verify their accuracy and benefits. Enterra's AI systems have proven to have up to 90% accuracy when helping businesses make decisions. As trust improves, business executives are relying more on AI systems to make decisions.
Journalist Lily Mae Lazarus reports, "Around 74% of executives are more confident in AI for business advice compared to colleagues or friends, according to new research by SAP."[10] The reason they are relying more on AI is the growing complexity of the business world. Jared Coyle, Chief AI Officer at SAP North America, told Lazarus, "Systems in these large organizations are now so complex, so data-driven, that AI is as capable as the smartest people on the planet of parsing data and coming up with the key options. Executives trust the input of a friend and a colleague, but the friend and the colleague didn't just parse 2 billion pieces of information available to them." I'm proud to say that Enterra® is the leader in this field with Enterra's Autonomous Decision Science™ platform, which provides data-enabled prescriptive and anticipatory analytics and insights for companies across a broad range of industries. Enterra automates a new way of problem-solving and decision-making, going beyond advanced analytics to understand data, perform analytics, generate insights, answer queries, and make decisions at the speed of the market. And it's a platform that can be trusted.
Footnotes
[1] Steven Rosenbaum, “Trust Is In Decline Worldwide,” MediaPost, 12 April 2021.
[2] Victoria Gayton, "AI agents face a widening trust gap, theCUBE Research finds," SiliconANGLE, 11 December 2025.
[3] Avivah Litan, "Dark Side of AI: How to Make Artificial Intelligence Trustworthy," InformationWeek, 15 September 2020.
[4] Staff, “Report Released on Enterprise AI Trust: 42% Don’t Trust Outputs,” Inside AI News, 19 June 2025.
[5] Edouard d'Archimbaud, "Trust in AI is Priceless," KDnuggets, 2 August 2022.
[6] Gaurav Rao, "Why Trusting AI is All a Matter of the Right Data at the Right Time," HPCwire, 28 August 2023.
[7] Victoria Gayton, "AI's trust problem: Can causal AI provide the answer?" SiliconANGLE, 7 March 2025.
[8] Alex Polyakov, “Do You Trust Your Artificial Intelligence?” Forbes, 16 April 2020.
[9] Andre Durand, “Is anything trustworthy in the age of AI?” Fast Company, 19 December 2025.
[10] Lily Mae Lazarus, “Exclusive: CEOs are turning to AI for business advice and they trust it even more than their friends and peers,” Fortune, 12 March 2025.
