Conversational AI is transforming customer support, but making it reliable and scalable is a complex challenge. In a recent tech blog, Airbnb's engineering team shares how they upgraded their Automation Platform to enhance the effectiveness of virtual agents while making them easier to maintain. The new Automation Platform V2 leverages the power of large language models (LLMs). However, recognizing the unpredictability of LLM outputs, the team designed the platform to harness LLMs in a more controlled manner, focusing on three key areas: LLM workflows, context management, and guardrails.

The first area, LLM workflows, ensures that AI-powered agents follow structured reasoning processes. Airbnb incorporates Chain of Thought, an AI agent framework that enables LLMs to reason through problems step by step. By embedding this structured approach into workflows, the system determines which tools to use and in what order, allowing the LLM to function as a reasoning engine within a managed execution environment.

The second area, context management, ensures that the LLM has access to all the relevant information needed to make informed decisions. To generate accurate and helpful responses, the system supplies the LLM with critical contextual details, such as past interactions, the customer's inquiry intent, current trip information, and more.

Finally, the guardrails framework acts as a safeguard, monitoring LLM interactions to ensure responses are helpful, relevant, and ethical. This framework is designed to prevent hallucinations, mitigate security risks like jailbreaks, and maintain response quality, ultimately improving trust and reliability in AI-driven support.

By rethinking how automation is built and managed, Airbnb has created a more scalable and predictable Conversational AI system.
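To make the three ideas concrete, here is a minimal sketch of a controlled LLM workflow: context is assembled up front, a (stubbed) Chain-of-Thought step picks tools inside a bounded loop, and a guardrail checks the reply before it ships. All names and the stubbed model logic are hypothetical illustrations, not Airbnb's actual API.

```python
# Minimal sketch of a managed LLM workflow with context assembly and a
# guardrail check. The "LLM" is a deterministic stub for illustration.

def build_context(customer):
    # Context management: gather everything the model needs up front.
    return {
        "intent": customer["intent"],
        "trip": customer.get("trip"),
        "history": customer.get("history", []),
    }

TOOLS = {
    "lookup_reservation": lambda ctx: f"Reservation for trip {ctx['trip']}",
}

def llm_step(context, scratchpad):
    # Stand-in for a Chain-of-Thought LLM call: decide the next action.
    if not scratchpad:
        return {"action": "lookup_reservation"}
    return {"action": "respond", "text": f"Found: {scratchpad[-1]}"}

def guardrail_ok(reply):
    # Guardrails: reject empty or off-policy replies before they ship.
    return bool(reply) and "password" not in reply.lower()

def run_workflow(customer):
    context = build_context(customer)
    scratchpad = []
    for _ in range(5):  # a bounded loop keeps execution managed
        step = llm_step(context, scratchpad)
        if step["action"] == "respond":
            reply = step["text"]
            return reply if guardrail_ok(reply) else "Escalating to a human agent."
        scratchpad.append(TOOLS[step["action"]](context))
    return "Escalating to a human agent."

print(run_workflow({"intent": "change_dates", "trip": "T123"}))
```

The key design point is that the platform, not the model, owns the loop: tool execution, iteration limits, and the final guardrail check all sit outside the LLM call.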
Their approach highlights an important takeaway for companies integrating AI into customer support: AI performs best in a hybrid model, where structured frameworks guide and complement its capabilities.

#MachineLearning #DataScience #LLM #Chatbots #AI #Automation #SnacksWeeklyonDataScience

Check out the "Snacks Weekly on Data Science" podcast and subscribe, where I explain in more detail the concepts discussed in this and future posts:
-- Spotify: https://lnkd.in/gKgaMvbh
-- Apple Podcast: https://lnkd.in/gj6aPBBY
-- Youtube: https://lnkd.in/gcwPeBmR

https://lnkd.in/gFjXBrPe
AI Integrations That Improve Customer Support
Summary
AI integrations that improve customer support involve using artificial intelligence technologies, like chatbots and language models, to enhance interactions between businesses and their customers. These integrations help automate responses, provide tailored assistance, and improve overall efficiency in resolving customer queries.
- Use structured workflows: Design AI systems with clear workflows to ensure virtual agents can reason logically and respond appropriately to customer queries.
- Combine AI and human support: Let AI handle repetitive tasks while empowering human agents to address complex or sensitive customer issues.
- Prioritize data quality: Invest time in organizing and cleansing data to improve AI's ability to provide accurate and helpful responses.
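The data-quality point above can be sketched in a few lines: before feeding a support knowledge base to an AI system, collapse stray whitespace, drop empty entries, and deduplicate near-identical questions. The sample records and the cleaning rules are illustrative assumptions, not any vendor's actual pipeline.

```python
# Toy data-cleansing pass for a support knowledge base: normalize
# whitespace and drop blank or case-insensitive duplicate entries.

def clean_kb(entries):
    seen, cleaned = set(), []
    for raw in entries:
        text = " ".join(raw.split())   # collapse runs of whitespace
        key = text.casefold()          # dedupe case-insensitively
        if text and key not in seen:
            seen.add(key)
            cleaned.append(text)
    return cleaned

raw_entries = [
    "How do I  reset my password?",
    "how do i reset my password?",
    "   ",
    "What are your support hours?",
]
print(clean_kb(raw_entries))
```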
-
Learnings from transforming CX with Gen AI for a Financial Services giant in APAC 🚀

One of the largest Financial Services players in APAC recently leveraged Verloop to transform its contact center. The outcome? A transformational change in the customer support experience that not only drove CSAT up but also brought efficiency to their CX Ops. Here is a snapshot of outcomes and learnings.

Outcomes
--------------
1. About 30% increase in Customer Satisfaction score
2. 43% fewer tickets assigned to their support desk
3. 70% reduction in Average Response Time
4. 30% cost savings from CX efficiency

Learnings
--------------
1. Effort - Easier said than done; most models are great for building demos but a nightmare when implementing large, complex scenarios
2. Focus - Niche-trained LLMs work better than one large model
3. Latency - Response latency, especially in audio calls, is a deal breaker
4. RAG + LLM - Balancing when to refer to RAG versus when the LLM should handle the task takes a while
5. Cost - Models cost a significant amount of money to run; attach to and focus on business outcomes
6. Data Quality - Investing time in data cleansing and organization pays off massively
7. AI + Human - AI handles the repetitive tasks, while AI-assisted human agents are required for empathy and complex problem-solving
8. Keep Building - Continuous improvement and training of flows is critical, especially in the first few months after launch

Implementing Guardrails
---------------------------
1. Focus on ethical AI usage, with strict guidelines to ensure AI operates within ethical boundaries, maintaining transparency and customer trust
2. Adhere to rigorous data privacy regulations to protect customer information. Protecto works like a charm!
3. A key trait of any such implementation is the AI knowing when to hand over

Launch Experience
--------------------
1. Collaborative Approach - Everyone is learning on this journey; engage early and frequently with all stakeholders
2. Stay Agile - Launch iteratively and keep improving instead of one big-bang launch
3. Human Training - Focus on training all stakeholders; things are different versus structured data

We started Verloop with the idea that the future of contact centers is AI-first, human-assisted. These engagements help us stay on course and keep building towards our vision. We are already living in the future, and it is slowly spreading everywhere! 🌟

#contactcenter #GenAI #CXTransformation #transformation Verloop.io CA. Ankit Sarawagi Melisa Vaz Nikhil Gupta Urvashi Singh Kiran Prabhu Ravi Petlur Kumar Gaurav
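Two of the learnings above, balancing RAG versus direct LLM answers and the AI knowing when to hand over, can be sketched as a simple router. The keyword lists, confidence threshold, and route names are illustrative assumptions, not Verloop's actual implementation.

```python
# Toy query router: sensitive queries go to a human, factual queries get
# grounded via RAG, low-confidence cases hand over, the rest go straight
# to the LLM for a cheap, low-latency answer.

FACTUAL_HINTS = ("interest rate", "fee", "policy", "limit", "charge")
SENSITIVE_HINTS = ("fraud", "complaint", "dispute", "legal")

def route(query, model_confidence):
    q = query.lower()
    if any(hint in q for hint in SENSITIVE_HINTS):
        return "human"          # empathy and complex problem-solving
    if any(hint in q for hint in FACTUAL_HINTS):
        return "rag_then_llm"   # ground the answer in the knowledge base
    if model_confidence < 0.6:
        return "human"          # the AI knowing when to hand over
    return "llm_direct"

print(route("What is the late payment fee?", 0.9))
print(route("I want to dispute this transaction", 0.9))
```

In practice the routing signal would come from an intent classifier or the model's own calibrated confidence rather than keyword lists, but the shape of the decision, and the explicit human fallback, stays the same.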
-
🧠 Is Generative AI Just Cool, or Does It Really Have an Impact?

That's the big debate in tech circles these days. A study led by researchers from Stanford University, MIT, and the National Bureau of Economic Research (NBER) sheds light on this question by examining the real-world impact of deploying generative AI in a customer support environment. Their analysis offers empirical evidence on how AI tools, specifically those based on OpenAI's GPT models, are transforming customer service operations at a Fortune 500 software company.

The researchers employed a mix of methodologies: a randomized controlled trial (RCT) and a staggered rollout, encompassing around 5,000 agents over several months. By analyzing 3 million customer-agent interactions, the study assessed metrics such as resolutions per hour, handle time, resolution rates, and customer satisfaction (Net Promoter Score). To understand the AI's impact over time, dynamic difference-in-differences regression models were used.

Here is what they found:

1. Significant Boost in Productivity: The AI tool led to a 13.8% increase in the number of customer queries resolved per hour, particularly benefiting less experienced agents.
2. Narrowing the Performance Gap: AI tools accelerated the learning curve for newer agents, allowing them to reach the performance levels of seasoned employees more quickly.
3. Improved Customer Satisfaction: The AI deployment resulted in higher customer satisfaction scores (as shown by improved Net Promoter Scores) while maintaining stable employee sentiment.
4. Lower Attrition Rates: Interestingly, the AI support led to reduced attrition rates, especially among new hires with less than six months of experience.
5. Optimized Workflows: The AI system reduced the need for escalations to managers, improving vertical efficiency. However, its impact on horizontal workflows, like transfers between agents, showed mixed results, suggesting more refinement is needed in AI integration.
6. Customized AI Matters: The software wasn't off-the-shelf; it was a custom-built solution tailored to the company's needs using the GPT family of language models. This emphasizes the importance of context-specific AI applications for effective outcomes.

For leaders, managers, and AI practitioners, these insights are invaluable, highlighting not just the potential of AI but also the nuanced ways it reshapes workflows, impacts employee dynamics, and transforms customer experiences. So, does generative AI really make a difference? According to this study, the answer is a resounding yes, but it depends on how thoughtfully it is deployed.

Link 🔗 to the paper: https://lnkd.in/ejhUfufz
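The study's identification strategy rests on difference-in-differences. The paper uses dynamic, regression-based models; the sketch below shows only the simplest two-group, two-period version of the idea, with made-up numbers rather than the study's data.

```python
# Simplest two-period difference-in-differences: the change for agents
# who got the AI tool, minus the change for agents who did not. The
# control group's change nets out time trends common to both groups.
from statistics import mean

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    return (mean(treated_post) - mean(treated_pre)) - (
        mean(control_post) - mean(control_pre)
    )

# Resolutions per hour (hypothetical numbers, for illustration only)
treated_pre  = [2.0, 2.2, 1.9]
treated_post = [2.5, 2.7, 2.4]
control_pre  = [2.1, 2.0, 2.2]
control_post = [2.2, 2.1, 2.3]

effect = did_estimate(treated_pre, treated_post, control_pre, control_post)
print(round(effect, 2))
```

The dynamic version in the paper generalizes this by estimating a separate effect for each period relative to each agent's AI-adoption date, which is what lets the authors trace how the productivity gain evolves over time.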
-
Last week, I talked about the possibilities of AI to make work easier. This week, I want to share a clear example of how we are doing that at HubSpot.

We're focused on helping our customers grow, so naturally, we take customer support seriously. Whether it's a product question or a business challenge, we want inquiries to be answered efficiently and thoughtfully. We knew AI could help, but we didn't know quite what it would look like!

We first deployed AI in website and support chat. To mitigate any growing pains, we had a customer rep standing by who could quickly take the baton if things went sideways. And sometimes they did. But we didn't panic. We listened, we improved, and we kept testing. The more data AI collects, the better it gets.

Today, 83% of the chat on HubSpot's website is AI-managed, and our chatbot is digitally resolving about 30% of incoming tickets. That's an enormous gain in productivity! Our customer reps have more time to focus on complex, high-touch questions. AI also helps us quickly identify trends, such as questions or issues that are being raised more frequently, so we can intervene early. In other words, AI has not just transformed our customer support. It has elevated it.

So, here is what we learned:
- Don't panic if customer experience gets worse initially! It will improve as your data evolves.
- Evolve your KPIs and how you measure success: if AI resolves typical questions and your team resolves the tricky ones, they will need more time.
- Use AI to elevate your team's efforts.

How are you using AI in support? What are you learning?