Building Trust in Agentic AI
The most rewarding part of my job in Cisco Customer Experience is engaging directly with customers on their AI journeys. Some are just beginning to harness AI for data analysis, while others are already empowering intelligent agents to take meaningful action. Agentic AI opens up unique opportunities: you can delegate complex tasks to specialized agents that reason through everyday scenarios, easing the cognitive load on teams. In fact, recent Cisco research shows that 83% of respondents plan to deploy AI agents within the next year.
But with this power comes real responsibility: building trust in these agents, both ethically and securely.
That’s why we’re building a unified customer experience with AI, where trust is a primary design principle. A common approach and shared AI framework are vital to reducing redundancy, improving customer alignment, and accelerating innovation at scale.
Engineering for Trust
Trust isn’t only about AI accuracy; it’s also about data integrity, system transparency, robust controls, and consistency. Here’s how we build it.
1. Rooting Trust in Good Data
Before agents can be intelligent, they must be informed. Trust begins with generating answers and insights that are grounded in clean, unified data. We are building a unified data framework and data fabric to ensure every agent operates on enriched, normalized data, no matter its source.
- Integration: Designing the tech stack to support the integration of hundreds of diverse datasets including third-party data, ensuring comprehensive data coverage for insights.
- Enrichment: Developing enrichment pipelines to bring context, structure, and metadata to unstructured inputs.
- Normalization: Turning disparate data sources (e.g., CRM, logs, tickets) into simplified, normalized, AI-ready datasets, making it easier for LLMs to answer questions accurately.
- Unification: Bringing together data through a customized semantic data model that maps our products and services to database schemas, improving agentic reasoning across enterprise data.
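To make the enrichment and normalization steps concrete, here is a minimal sketch of how source-specific rows might be mapped onto one unified, AI-ready shape. All field names, record types, and normalizer functions here are hypothetical illustrations, not Cisco's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class EnrichedRecord:
    """Normalized, AI-ready record with provenance metadata (hypothetical schema)."""
    source: str          # originating system, e.g. "crm" or "tickets"
    entity_id: str       # unified identifier after normalization
    text: str            # cleaned free-text content
    metadata: dict = field(default_factory=dict)

def normalize_crm(row: dict) -> EnrichedRecord:
    # Map a raw CRM row onto the unified schema.
    return EnrichedRecord(
        source="crm",
        entity_id=row["account_id"].strip().upper(),
        text=row.get("notes", "").strip(),
        metadata={"owner": row.get("owner")},
    )

def normalize_ticket(row: dict) -> EnrichedRecord:
    # Map a raw support ticket onto the same schema.
    return EnrichedRecord(
        source="tickets",
        entity_id=row["customer_ref"].strip().upper(),
        text=f'{row.get("subject", "")}: {row.get("body", "")}'.strip(" :"),
        metadata={"severity": row.get("severity")},
    )

def enrich(rows, normalizer):
    """Run a source-specific normalizer and drop records with no usable text."""
    return [r for r in (normalizer(row) for row in rows) if r.text]
```

The point of the sketch is the shape of the output: whatever the source, every record arrives at the agent with the same fields and the same metadata hooks, which is what lets a single semantic model sit on top of many systems.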
2. Curating Questions to Minimize Errors
Instead of letting agents answer everything, we manage expectations by giving each agent a clearly defined scope. Agents answer only curated questions that follow predefined templates, which helps maintain response accuracy (95% for core questions on our key agents) and alignment with the intended use case.
For the core agentic workflow, we provide a guided user interface where users select from over one hundred supported questions. This is a key mitigation against LLM "hallucinations": questions are matched to optimized prompts, and similarity search keeps Text-to-SQL accuracy high.
Most responses are accompanied by citations, promoting transparency and enabling users to validate the data, which builds trust in the system. Question-response templates help maintain consistent phrasing, tone, and accuracy. By creating curated question scenarios within a guided experience, we minimize errors and gain confidence in every response.
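The routing idea above can be sketched in a few lines. This is an illustrative toy, not Cisco's implementation: token-overlap (Jaccard) similarity stands in for a production embedding-based similarity search, and the two question templates and their SQL are invented for the example. The key behavior is the threshold: a question that matches no curated template is declined rather than answered.

```python
def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity; a stand-in for embedding similarity."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

# Hypothetical curated questions mapped to vetted Text-to-SQL templates.
CURATED = {
    "How many open tickets does customer X have?":
        "SELECT COUNT(*) FROM tickets WHERE customer = :x AND status = 'open'",
    "What is the renewal date for customer X?":
        "SELECT renewal_date FROM contracts WHERE customer = :x",
}

def route(question: str, threshold: float = 0.5):
    """Match a user question to the closest curated template,
    or refuse when nothing is close enough."""
    best, score = max(((t, jaccard(question, t)) for t in CURATED),
                      key=lambda p: p[1])
    if score < threshold:
        return None  # out of scope: decline rather than risk a hallucinated answer
    return CURATED[best]
```

Because every answerable question resolves to a pre-validated template, accuracy is bounded by the templates rather than by open-ended generation, which is what makes the 95%-on-core-questions target tractable.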
3. Driving Governance by Design
Agents should know and say only what they need to know and say, and nothing more. Our design and approach enforce policy-based controls for every agent, including:
- A robust Role-Based Access Control (RBAC) solution that limits an agentic assistant’s access to data the specific provider and/or user is authorized to view.
- Native enforcement of data privacy and regulatory compliance policies.
- Protection and isolation of sensitive customer data and deal information, ensuring Cisco’s governance standards are upheld as the era of Agentic AI arrives.
In addition to runtime policies, trust is built into the Software Development Lifecycle (SDLC) through:
- Secure agent development and registration workflows.
- Mandatory access control and governance validations at deploy-time.
- Code reviews and automated policy checks during CI/CD.
We also ensure that agent model tuning and training consent is explicitly tracked and enforced. Consent metadata propagates through the data model so every piece of data used for training or inference can be evaluated and compartmentalized by its consent level. This makes the system compliant by default and traceable by design while granting engineering teams visibility into how agents are built, accessed, and improved.
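A minimal sketch of how RBAC scoping and consent-level filtering can compose before data ever reaches an agent. Everything here is hypothetical (the roles, the consent levels, the rule that training consent also permits inference); it illustrates the layering, not Cisco's actual policy engine:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Record:
    owner_org: str   # organization the data belongs to
    consent: str     # "training", "inference", or "none"
    payload: str

# Hypothetical roles: which data scopes each role may see.
ROLE_SCOPES = {
    "provider_admin": {"own_org", "partner_org"},
    "analyst": {"own_org"},
}

def visible(records, role: str, user_org: str):
    """RBAC layer: return only records the caller's role authorizes.
    In this toy model, any org other than the caller's counts as a partner org."""
    scope = ROLE_SCOPES.get(role, set())
    def in_scope(r: Record) -> bool:
        needed = "own_org" if r.owner_org == user_org else "partner_org"
        return needed in scope
    return [r for r in records if in_scope(r)]

def consented_for(records, purpose: str):
    """Consent layer: compartmentalize data by consent level before use.
    Assumption for this sketch: training consent also covers inference."""
    allowed = {"training": {"training"},
               "inference": {"training", "inference"}}
    return [r for r in records if r.consent in allowed.get(purpose, set())]
```

The design point is that the two filters are independent and always applied in sequence: RBAC decides who can see a record at all, and consent decides what any agent may do with it, so a record excluded by either layer never reaches tuning or inference.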
4. Delivering Trust at Scale Creates More Trust
A true measure of trust in Agentic AI is adoption. Consistent, repeatable usage is essential for tuning AI models and training them to drive more effective outcomes and higher levels of accuracy. But this cultural shift is proving more challenging to achieve than one might think.
Take one of our newest agents: we successfully onboarded the full cohort of users within the first six months of launch. Through our internal usage dashboards, which monitor patterns and anomalies, we saw our daily active user population level off at 15% of this cohort.
The good news is that, within this cohort, adoption and engagement continue to thrive. Users have generated thousands of well-answered questions, and those who use the agent daily see a 20% time savings. This usage data is helping us improve UX and software change management to drive immediate value, and we are using reinforcement learning from human feedback to grow adoption.
We want the feedback and see it as an opportunity to learn and improve. And, as scale grows, trust grows. Each new user question drives increased accuracy and creates useful agents at scale.
Trust is Earned, Not Assumed
At the heart of Cisco's Customer Experience vision is an agentic-led approach where AI agents and humans collaborate seamlessly. By leveraging specialized AI agents, we automate complex data processing tasks and deliver reliable, domain-specific intelligence that enhances proactive decision-making, creating trusted and personalized experiences for our teams and our customers.
Learn More About Agentic AI
This is the first in a series of articles where I’ll be examining a different facet of Agentic AI. Stay tuned for next month’s article.