Leadership Clarity Is Key To Trust, Credibility, And Organizational Alignment Around AI
ITPR Director of Public Relations Anthony Monks shares why clear leadership narratives, not smarter tools, determine whether AI strengthens or weakens trust.

Key Points
Leaders roll out AI faster than they explain it, leaving teams unclear on purpose, accountability, and success, which erodes trust and fuels skepticism.
Anthony Monks, Director of Public Relations at ITPR, explains how unclear communication and blurred ownership cause AI initiatives to stall even when the technology works.
Leaders build trust by treating AI as decision support, clearly defining why it’s used, who owns outcomes, and how human judgment guides results.
AI doesn’t fail because it can’t perform. It fails because people don’t understand it well enough to trust it, and that’s an education problem.
AI is putting communications under a microscope. When leaders introduce new tools without explaining why they’re being used, how success will be measured, or who remains accountable, uncertainty spreads quickly across teams. While the technology doesn’t create confusion on its own, it amplifies whatever leaders put into it: strengthening clarity when communication is solid and deepening doubt when it isn’t.
That is the view of Anthony Monks, a communications leader with more than 15 years of experience in internal communication strategy and impact measurement. As Director of Public Relations at B2B technology PR consultancy ITPR and a regular columnist for Strategic Magazine, Monks argues that the industry is focused on the wrong problem.
"AI doesn’t fail because it can’t perform. It fails because people don’t understand it well enough to trust it, and that’s an education problem," says Monks. That education gap creates a credibility problem that Monks has seen firsthand, reinforcing his view that even sophisticated AI systems remain fallible and require human oversight.
Own the narrative: In his view, the resulting trust deficit often starts at the top, when leaders introduce AI without a clear narrative. Simply announcing an effort to streamline a process is not enough; leaders must explain why the technology is being used, how success will be measured, and who remains accountable for the outcome. "If you can’t explain why you’re using AI and who’s accountable for the outcome," he says, "you’re already eroding trust."
The limits of automation: Even widely used AI tools can misfire, which is why human judgment and oversight remain essential. As Monks puts it, "AI is very good at producing answers, but it has no instinct for when those answers are wrong. That responsibility still sits with people." Without clear governance and ownership, those gaps don’t stay technical for long. They turn into communication failures that employees notice immediately, fueling skepticism about both the technology and the leadership behind it.
The problem often stems from leaders treating AI as a bolt-on shortcut, without clearly defining its purpose or who is accountable for its outcomes. That approach has pushed many generative AI initiatives into the trough of disillusionment and keeps leaders from adopting a more holistic, systems-level view. It can also fuel employee anxiety about automation and job loss, a fear leaders must address early, before it spills over into external brand perception.
Used well, however, AI can become a trust-building tool. Monks points to a client who applied AI to analyze gaps between senior leadership perceptions and employee sentiment. The results exposed a significant disconnect, giving leaders visibility into a communication breakdown they had not recognized and an opportunity to open more honest, productive dialogue.
Mirror, not machine: Monks argues that AI should be framed as an analytical mirror, not a stand-in for leadership judgment. Used properly, it helps surface patterns, blind spots, and gaps in understanding, but it does not replace accountability. "AI can show us what’s happening faster and at a greater scale," he says, "but it doesn’t own the outcome. Humans do." Making that distinction explicit helps teams see AI as a tool for insight rather than a proxy for decision-making, which is often the difference between cautious adoption and outright resistance.
A more sophisticated way to build trust is through radical transparency. Being explicit about what AI can and cannot do, and openly acknowledging its limitations, is a powerful credibility signal. While many PR professionals now use AI to support everything from drafting press releases to streamlining webinar workflows and producing more engaging video content, the most trusted leaders are often those who are honest about the technology’s fallibility and clear that its output is only as strong as the human judgment guiding it.
Listen before you speak: Monks says leaders should shift the focus away from tools and back to people. "If you can’t clearly explain why you’re using AI, how it will help, and what success looks like, you don’t understand it well enough to deploy it," he says, adding that trust is built by listening first. "When leaders pay attention to how work actually gets done and communicate how AI supports that reality, adoption follows."
For Monks, the question isn’t whether organizations adopt AI, but whether leaders are willing to own it once they do. When AI is framed as autonomous or inevitable, accountability slips, especially when outcomes fall short. That uncertainty weakens confidence and leaves employees guessing where responsibility actually sits. The fix, he says, is clarity: positioning AI as decision support, explaining its role in plain language, and keeping human ownership visible at every step. "We need to stop talking about AI as if it’s autonomous, because humans are the ones who are accountable," he concludes. "AI is a decision-support tool, not the decision-maker."