AI Adoption: Reality Bites

After speaking with customers across various industries yesterday, one thing became crystal clear: there's a significant gap between AI hype and implementation reality. While pundits on X buzz about autonomous agents and sweeping automation, the business leaders I spoke with are struggling with fundamentals: getting legal approval, navigating procurement processes, and addressing privacy, security, and governance concerns.

More revealing is the counterintuitive truth emerging: organizations with the most robust digital transformation experience often face greater AI adoption friction. Their established governance structures—originally designed to protect—now create labyrinthine approval processes that nimbler competitors can sidestep.

For product leaders, the opportunity lies not in selling technical capability, but in designing for organizational adoption pathways. Consider:

- Prioritize modular implementations that can pass through governance checkpoints incrementally rather than requiring all-or-nothing approvals
- Create "governance-as-code" frameworks that embed compliance requirements directly into product architecture (a minimal sketch follows this post)
- Develop value metrics that measure time-to-implementation, not just end-state ROI
- Lean into understandability and transparency as part of your value prop
- Build solutions that address the career risk stakeholders face when championing AI initiatives

For business leaders, it's critical to internalize that the most successful AI implementations will come not from the organizations with the most advanced technology, but from those that reinvent the adoption process itself. Those who recognize that AI requires governance innovation—not just technical innovation—will unlock sustainable value while others remain trapped in endless proof-of-concept cycles.

What unexpected adoption hurdles are you encountering in your organization? I'd love to hear perspectives beyond the usual technical challenges.
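As a concrete illustration of the "governance-as-code" idea in the list above, here is a minimal sketch in Python, assuming a hypothetical product where data-handling rules are declared next to each feature's configuration and evaluated automatically (for example as a CI gate) rather than in a manual review meeting. The rule names, thresholds, and the `check_release` helper are illustrative assumptions, not a specific framework.

```python
# Minimal governance-as-code sketch (illustrative only).
# Compliance rules live next to the feature config and are evaluated
# automatically, e.g. as a CI gate, instead of in a manual review meeting.

from dataclasses import dataclass

@dataclass
class FeatureConfig:
    name: str
    stores_pii: bool
    data_retention_days: int
    model_provider: str

# Each rule is a plain function returning a violation message or None,
# so new compliance requirements can be added without changing the gate.
def rule_pii_retention(cfg: FeatureConfig):
    if cfg.stores_pii and cfg.data_retention_days > 30:
        return f"{cfg.name}: PII retained {cfg.data_retention_days} days (max 30)"

def rule_approved_provider(cfg: FeatureConfig, approved=("in-house", "vendor-a")):
    if cfg.model_provider not in approved:
        return f"{cfg.name}: model provider '{cfg.model_provider}' not on approved list"

RULES = [rule_pii_retention, rule_approved_provider]

def check_release(configs):
    """Return all policy violations; an empty list means the release can ship."""
    violations = []
    for cfg in configs:
        for rule in RULES:
            msg = rule(cfg)
            if msg:
                violations.append(msg)
    return violations

if __name__ == "__main__":
    configs = [FeatureConfig("chat-summary", stores_pii=True,
                             data_retention_days=90, model_provider="vendor-b")]
    for v in check_release(configs):
        print("POLICY VIOLATION:", v)
```

The shape is what matters: compliance requirements become fast, repeatable checks that each increment can pass on its own, which is what lets modular rollouts clear governance checkpoints step by step rather than all at once.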
Obstacles to GenAI Adoption
Explore top LinkedIn content from expert professionals.
Summary
Generative AI (GenAI) has immense potential, but its widespread adoption faces several obstacles, including technical, organizational, and societal challenges. While organizations recognize its transformative power, issues like governance, data quality, integration, trust, and accessibility hinder its seamless implementation and usage.
- Address data quality issues: Ensure your datasets are clean, secure, and comprehensive to avoid unreliable outputs that undermine AI adoption and performance.
- Simplify integration: Embed GenAI tools directly into existing workflows to reduce disruption and make them intuitive for users to adopt seamlessly.
- Focus on trust-building: Prioritize transparency, explainability, and user-friendly designs to earn stakeholder confidence and reduce resistance to adoption.
-
From Toys to Tools: Making Generative AI a True Asset in Healthcare

Despite big opportunities for genAI in healthcare, there's a huge adoption gap at the moment…hard to know exactly how big, but there are hundreds of approved applications and only a handful in use in most health systems today. There are lots of very good reasons for this: safety, security, and privacy among the many.

Right now, many genAI applications in healthcare get great traction for a limited period and then fall into disuse…to me that's a clear sign that these tools are not yet enabling productivity. It's a nice-to-have, not a must-have. So how do we move from "toys" to real efficiency-optimizing "tools"?

First, why isn't AI driving real productivity in healthcare yet? Three primary reasons (there are more!):

1. Accuracy & Hallucination Risks – A single incorrect recommendation can have life-or-death consequences. Healthcare is appropriately cautious here and doesn't yet have the monitoring in place to guard against this. Because of these risks, AI today still needs a lot of human oversight and correction.
2. Lack of Workflow Integration – Most AI tools operate outside of clinicians' natural workflows, forcing extra steps instead of removing them.
3. Trust & Adoption Barriers – Clinicians are understandably skeptical. If an AI tool slows them down or introduces errors, they will abandon it.

How Can We Make AI a True Tool for Healthcare? Three main moves we need to make:

1. Embed Trust & Explainability – AI can't just generate outputs—it has to show its reasoning (cite sources, flag uncertainty, allow inspection). And it needs to check itself, using other genAI and non-genAI tools to double- and triple-check outcomes in areas of high sensitivity. (A minimal sketch of this pattern follows this post.)
2. Seamless Workflow Integration – For AI to become truly useful, it must integrate with existing workflows, auto-populating existing tools (like the EHR) and completing "last mile" steps like communicating with patients.
3. Reducing the Burden on Our Workforce, Not Adding to It – The tech is not enough…at-the-elbow change management will be needed to ensure human adoption and workflow adaptation, and we will need to track the impact of these tools on the workforce and our patient communities.

The Future: AI That Feels Invisible, Yet Indispensable

Right now, genAI in healthcare is still early—full of potential but struggling to deliver consistent, real-world value. The best AI solutions of the future will be those that:
✅ Enhance—not replace—clinicians' expertise
✅ Are trusted because they are explainable and reliable
✅ Reduce administrative burden, giving providers more time for patients
✅ Integrate seamlessly into existing healthcare workflows

Ultimately, if we build a successful person-tech interaction, the best AI won't be a novelty but an essential tool that lets us see where our workflows are inefficient and change them effectively.

What do you think? What's the biggest barrier to making AI truly useful in healthcare?
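To make the "embed trust and explainability" move above more tangible, here is a minimal sketch, assuming a hypothetical retrieval-backed `generate_draft` call and an independent `verify_claims` check; the confidence threshold and field names are illustrative assumptions, not a clinical product.

```python
# Sketch of an explainable, self-checking GenAI response (illustrative only).
# generate_draft() and verify_claims() are hypothetical placeholders; the point
# is the structure: cite sources, flag uncertainty, cross-check, then hand off
# to a human inside the existing workflow.

from dataclasses import dataclass, field
from typing import List

@dataclass
class DraftAnswer:
    text: str
    citations: List[str]                      # source passages the draft is grounded in
    confidence: float                         # model-reported confidence, 0.0 to 1.0
    flags: List[str] = field(default_factory=list)

def generate_draft(question: str) -> DraftAnswer:
    # Placeholder for a retrieval-augmented model call that returns citations.
    return DraftAnswer(text="...draft recommendation...",
                       citations=["guideline_2023_section_4"],
                       confidence=0.62)

def verify_claims(draft: DraftAnswer) -> bool:
    # Placeholder for an independent check (a second model, a rules engine,
    # or a lookup against structured data) in high-sensitivity areas.
    return bool(draft.citations)

def answer_for_clinician(question: str, min_confidence: float = 0.8) -> DraftAnswer:
    draft = generate_draft(question)
    if draft.confidence < min_confidence:
        draft.flags.append("low confidence: requires clinician review")
    if not verify_claims(draft):
        draft.flags.append("claims not independently verified")
    # Never auto-applied: the cited, flagged draft is surfaced inside the
    # existing workflow (e.g. the EHR) for a human to accept, edit, or reject.
    return draft

if __name__ == "__main__":
    result = answer_for_clinician("Suggested follow-up interval for this patient?")
    print(result.text, result.citations, result.flags)
```

The design choice mirrors the post: the model's output always arrives with its sources, its uncertainty, and any failed checks attached, and a clinician accepts, edits, or rejects it inside the workflow they already use.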
-
Sometimes AI Gets It Wrong

Does this face look familiar? It doesn't to me either. That's because it's not me. It's an AI-generated version of "me" — based on a photo and vague prompts. And in many ways, that's exactly how Generative AI is being treated across the financial services industry right now.

Over the past week, I met with more than 20 clients and prospective clients — from New York to Florida, San Francisco to London, and Germany. The conversations were energizing, and the message was clear: AI is everywhere — and nowhere at the same time. It's being talked about in boardrooms and budget meetings. But true, scaled adoption? Still lagging. Not because of a lack of ambition — but because of real, tangible obstacles.

Here are the five themes that consistently surfaced across those client conversations:
• Data debt is real. Organizations are drowning in data but still lack the foundation to use it effectively — cleanly, securely, and contextually.
• The "how" of GenAI isn't clear. Everyone sees the potential, but use cases are fragmented and integration into daily workflows is still immature.
• Governance is keeping leaders up at night. Legal, compliance, and regulatory frameworks for AI are still being built in-flight — creating risk aversion and uncertainty.
• Transformation is too often surface-deep. Front-end digital experiences may look slick, but without back-end modernization, the value gets lost in translation.
• Hallucinations and AI quality risks are real. Clients are skeptical of GenAI's reliability — especially for customer-facing engagement, and for good reason. AI that generates confidently wrong answers (like the photo below) can damage brand trust if not governed, tuned, and supervised carefully.

At Genpact, we believe the next generation of financial services isn't about layering AI on top of what exists — it's about reimagining what's possible, front to back.

This moment reminds me of when I was working in banking in Barcelona in the early 2000s. Back then, internet banking felt novel — even suspicious. Very few were using it. But a handful of us could see the shift coming. Within five years, online banking became a norm, not a novelty. GenAI is at a similar crossroads today. We're not debating if it will change the industry. We're deciding who will shape how it does.

So no, that photo isn't me. But the conversations? Those were very real. And they're shaping what comes tomorrow, today.

Lisa Galione Asim Burman Karun Aggarwal Prakash Chacko Kavitha Shankaran Sachin Pai Samir Saurav Priyanka Gaur Manish Nayar Alwin Bathija Deepika Singh Fred Peters Anant Shah Avjinder Singh Bains Radhika Bangaru Venkatachalam Narayanan Alex Bray Satish Acharya

#GenAI #SometimesAIGetsItWrong #FinancialServices #Transformation #ClientFirst #DigitalBanking #ExecutiveInsights #Leadership #Innovation #Genpact
-
My recent research, which examines the adoption of emerging technologies through a gender lens, illuminates continued disparities in women's experiences with Generative AI. Day after day we continue to hear about the ways GenAI will change how we work, the types of jobs that will be needed, and how it will enhance our productivity. But are these benefits equally accessible to everyone? My research suggests otherwise, particularly for women.

🕰️ The Time Crunch: Women, especially those juggling careers with care responsibilities, are facing a significant time deficit. Across the globe, women spend up to twice as much time as men on care and household duties, leaving them without the luxury of time to upskill in GenAI technologies. This "second shift" at home is widening an already wide divide.

💻 Tech Access Gap: Beyond time constraints, many women face limited access to the technology needed to engage with GenAI effectively. This isn't just about owning a computer - it's about having consistent, uninterrupted access to high-speed internet and up-to-date hardware capable of running advanced AI tools. According to the GSMA, women in low- and middle-income countries are 20% less likely than men to own a smartphone and 49% less likely to use mobile internet.

🚀 Career Advancement Hurdles: The combination of time poverty and tech access limitations is creating a perfect storm. As GenAI skills become increasingly expected in the workplace, women risk falling further behind in career advancement opportunities and pay. This is especially an issue in tech-related fields and leadership positions: women account for only about 25% of engineers working in AI, and less than 20% of speakers at AI conferences are women.

🔍 Applying a Gender Lens: Viewed through a gender lens, the rapid advancement of GenAI threatens to exacerbate existing inequalities. It's not enough to create powerful AI tools; we must ensure equitable access and opportunity to leverage them.

📈 Moving Forward: To address this growing divide, we need targeted interventions:
- Flexible, asynchronous training programs that accommodate varied schedules
- Initiatives to improve tech access in underserved communities
- Workplace policies that recognize and support employees with caregiving responsibilities
- Mentorship programs specifically designed to support women in acquiring GenAI skills

There is great potential in GenAI, but also a risk of leaving half our workforce behind. It's time for tech companies, employers, and policymakers to recognize and address these gender-specific barriers. Please share initiatives or ideas you have for making GenAI more inclusive and accessible for everyone.

#GenderEquity #GenAI #WomenInTech #InclusiveAI #WorkplaceEquality
-
Does #GenAI increase developer efficiency 20-30%?

I've been in conversations with tech execs who have bought into the hype and are looking for 30% efficiency gains from #AI over the next 2-3 years. They risk running from hype to doom -- and are missing the investments that need to happen.

Pranay Ahlawat, Julie Bedard and team have published work that I've had the benefit of seeing for the past few months on the impact of GenAI on product development. Key findings:

🔹 Only 30% of enterprises have adopted co-pilot style tools for developers. Of those that have, 76% have seen <50% developer adoption.
🔹 Coding is only 10-15% of the product development cycle; if you want to have an impact, you need to think more broadly and invest in tooling -- and your estimate of 20-30% gains on coding efficiency is wicked off.
🔹 GenAI helps best when directly integrated into workflows, for mid-level (not the most junior) developers, and with very common languages. Anything outside that lowers the impact.
🔹 About half of companies don't have a plan for what they'll do with new capacity. Without a plan, developers are reluctant to move faster...perhaps fearing job losses.

The lack of plans forward, combined with more complex code bases and a focus on "efficiency" over opportunity, are big impediments to progress: "Start the transformation with use cases that resonate with engineers. For instance, emphasizing new skill development and affording the time to develop newer features and value-added tasks... Unfortunately, the initial conversation is often only about productivity, which ignites fears and doesn't inspire developers."

Pranay is really deep into what's working and not in many large-scale enterprises, where the struggle is very real. The challenges aren't the same at startups -- they have extreme clarity on what's next, the code base is cleaner, and everyone's open to newer ideas.

Like any new technology, GenAI has a J-curve in its adoption: negative results up front for long-term gains from steady investment. For startups, the J-curve of investment is tiny. For large enterprises, it's serious.

Whether you're in a big company or small, I'd recommend reading, 🔗 linked in comments. Have a read and let me know if this fits your company's experience!

#FutureOfWork #technology #AI #development #engineering Boston Consulting Group (BCG)
-
In the past few months, while I've been experimenting with it myself on the side, I've worked with a variety of companies to assess their readiness for implementing #GenerativeAI. The pattern is striking: people are drawn to the allure of Gen AI for its elegant, rapid answers, but then often stumble over age-old hurdles during implementation. The importance of robust #datamanagement is evident. Foundational capabilities are not merely helpful but essential, and neglecting them can endanger a company's reputation and business sustainability when training Gen AI models. Data still matters.

⚠️ Gen AI systems are generally advanced and complex, requiring large, diverse, and high-quality datasets to function optimally. One of the foremost challenges is therefore maintaining data quality. The old adage "garbage in, garbage out" holds true in the context of #GenAI. Just like any other AI use case or business process, the quality of the data fed into the system directly impacts the quality of the output.

💾 Another significant challenge is managing the sheer volume of data needed, especially for those who wish to train their own Gen AI models. While off-the-shelf models may require less data, custom training demands vast amounts of data and substantial processing power. This has a direct impact on the infrastructure and energy required. For instance, generating a single image can consume as much energy as fully charging a mobile phone.

🔐 Privacy and security concerns are paramount, as many Gen AI applications rely on sensitive #data about individuals or companies. Consider the use case of personalizing communications, which cannot be effectively executed without personal details about the intended recipient. In Gen AI, the link between input data and outcomes is less explicit than in other predictive models, particularly those with clearly defined dependent variables. This lack of transparency can make it challenging to understand how and why specific outputs are generated, complicating efforts to ensure #privacy and #security. It can also cause ethical problems when the training data contains biases.

🌐 Most Gen AI applications have a specific demand for data integration, as they require synthesis of information from a variety of sources. For instance, a Gen AI system designed for market analysis might need to integrate data from social media, financial reports, news articles, and consumer behavior studies. Integrating these disparate data sets not only demands the right technological solutions but also raises complexities around data compatibility, consistency, and processing efficiency.

In the next few weeks, we'll unpack these challenges in more detail, but for those who can't wait, here's the full article ➡️ https://lnkd.in/er-bAqrd
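As a small illustration of the data-quality point above, here is a minimal sketch of an automated gate that could run before a dataset is used for fine-tuning or retrieval. The column name, the thresholds, and the simple email-pattern screen for possible PII are assumptions for illustration only.

```python
# Minimal data-quality gate before a dataset feeds a GenAI pipeline
# (illustrative; column names, thresholds, and the PII pattern are assumptions).

import re
import pandas as pd

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def quality_report(df: pd.DataFrame, text_col: str = "text") -> dict:
    """Return simple signals that commonly make GenAI outputs unreliable."""
    return {
        "rows": len(df),
        "null_ratio": float(df[text_col].isna().mean()),
        "duplicate_ratio": float(df[text_col].duplicated().mean()),
        "possible_pii_rows": int(df[text_col].fillna("").str.contains(EMAIL_RE).sum()),
    }

def passes_gate(report: dict, max_null=0.02, max_dup=0.05, max_pii=0) -> bool:
    # The dataset is only released to training / retrieval if every check passes.
    return (report["null_ratio"] <= max_null
            and report["duplicate_ratio"] <= max_dup
            and report["possible_pii_rows"] <= max_pii)

if __name__ == "__main__":
    df = pd.DataFrame({"text": ["fine row", "fine row", None, "contact me at a@b.com"]})
    report = quality_report(df)
    print(report, "-> gate passed" if passes_gate(report) else "-> gate failed")
```

A real pipeline would add many more checks, but even this much turns "garbage in, garbage out" from a slogan into something a build can fail on.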
-
According to IBM's latest report, the number one challenge for GenAI adoption in 2025 is... data quality concerns (45%).

This shouldn't surprise anyone in data teams who've been standing like Jon Snow against the cavalry charge of top-down "AI initiatives" without proper data foundations.

The narrative progression is telling:
2023: "Let's jump on GenAI immediately!"
2024: "Why aren't our AI projects delivering value?"
2025: "Oh... it's the data quality."

These aren't technical challenges—they're foundational ones. The fundamental equation hasn't changed: poor data in = poor AI out.

What's interesting is that the other top adoption challenges all trace back to data fundamentals:
• 42% cite insufficient proprietary data for customizing models
• 42% lack adequate GenAI expertise
• 40% have concerns about data privacy and confidentiality

While everyone's excited about the possibilities of GenAI (as they should be), skipping these steps is like building a skyscraper on a foundation of sand.

The good news? Companies that invest in data quality now will have a significant competitive advantage when deploying AI solutions that actually work.

#dataengineering #dataquality #genai
-
At IBM we sponsored a survey of 1,000+ U.S.-based enterprise AI developers to uncover the hurdles they face when working with generative AI. Here's what we found:

𝟭/ 𝗦𝗸𝗶𝗹𝗹𝘀 𝗚𝗮𝗽𝘀: Only 24% of app developers surveyed consider themselves experts in GenAI. Fast innovation cycles and a lack of standardized development frameworks are major obstacles.

𝟮/ 𝗧𝗼𝗼𝗹 𝗢𝘃𝗲𝗿𝗹𝗼𝗮𝗱: Developers juggle between 5–15 tools (or more!) to create enterprise AI apps. Yet the most critical tool qualities - performance, flexibility, ease of use, and integration - are also the rarest.

𝟯/ 𝗧𝗿𝘂𝘀𝘁 𝗮𝗻𝗱 𝗖𝗼𝗺𝗽𝗹𝗲𝘅𝗶𝘁𝘆: As enterprises explore agentic AI, trustworthiness and seamless integration with broader IT systems emerge as critical concerns.

The consequences are clear: overly complex AI stacks that stall enterprise investments and slow innovation.

So, what's the solution? ⭐ SIMPLIFICATION ⭐

Developers need tools that are easy to master and that enhance productivity. At IBM, we're focused on empowering developers with tools and strategies to cut through that complexity.

You can learn more about the survey conducted by Morning Consult here: https://lnkd.in/gXDuwTaS
IBM Blog: https://lnkd.in/gsMVMmXX
-
Virtually every organization tries to drive AI adoption using the same four methods. All four fail, for very specific reasons. Here's what those are, why they fail, and what to do instead (in today's AI Mindset newsletter). Here's the TL;DR. (And don't forget to subscribe, I'll put the link below!)

The Four Horsemen vs. The Four Breakthroughs

WHAT DOES *NOT* WORK:
1. Lighthouse Cases: Inspiring success stories don't teach people what to do Monday morning.
2. AI Champions: Enthusiasts can teach how to do something, but they can't install new behaviors in others. And everyone already knows HOW to use ChatGPT, they just don't have the behavior.
3. Use Cases: It's like waking up and trying to come up with use cases for electricity. It's backwards. You start with what you do, not 'good use cases' for genAI. People don't extrapolate beyond the specific examples you give them - they just don't. We need to change behavior.
4. Tool Deployment: Just giving access doesn't create adoption, any more than putting treadmills in every house in America cures heart disease.

WHAT ACTUALLY WORKS:
1. Think Treadmill: AI is a capability you develop through habit, not software you learn. There's nothing to know - just talk to it like a human.
2. Ditch Prompt Libraries: Prompt libraries are like lists of things you can say to a colleague to get work done. It's so dumb. Just talk to AI like a human colleague.
3. Think Conversation: Use dialogue, not command-response-walk-away Google-search style. (But the brain has a hard time with this, that's why we do behavioral-based training.)
4. Think Electricity: Start with your actual work, then apply AI to make it better, not the other way around.

Bottom Line: AI adoption is behavior change, not technology deployment. Companies succeed when AI becomes invisible background support for existing work, not a special tool people have to remember to use.

UPSKILL YOUR ORGANIZATION: When your organization is ready to create an AI-powered culture—not just add tools—AI Mindset can help. We drive behavioral transformation at scale through a powerful new digital course and enterprise partnership. DM me, or check out our website.
-
Surveys say over half of companies have deployed a GenAI app or feature, and I'm not buying it. Deployed ≠ adopted, and I can tell you from experience, adopted is the harder problem. Half of companies still don't trust their data enough to act on it. Now you're telling me that they have magically deployed and gotten users to adopt GenAI?

Every AI problem is a data problem until the model hits user and customer hands. Then it transforms into a people problem. Users only adopt GenAI when it's seamlessly integrated into the apps they already use. Don't underestimate the difficulty of getting users to change.

AI Product Design 101: the closer the model-supported experience is to the original workflow, the better the adoption rate. For example, most business workflows that involve data use tabular data, and LLMs don't handle that well. SAP only released 1 LLM this week…and it works with tabular data. It has a conversational interface for users to ask questions about spreadsheets, price quotes, and financial reports, because that's what customers are used to doing. Users can work with familiar data types and still get the ease of the new interface and simpler data querying. Familiarity is the smartest approach to adoption. (A minimal sketch of this pattern follows below.)

In the LLM-supported products I have worked on, once users adapt their workflows to leverage the new interface, they quickly form new habits. The hard part is getting them to start, and most companies don't realize how big that behavioral change barrier is.

I'm an SAP partner because they build stuff that works and gets adopted. Those surveys would be believable if more companies followed its lead.

#GenAI #SAPSapphire
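To illustrate the adoption pattern described in the post above (keep the familiar tabular data, add a conversational layer on top), here is a minimal sketch. It is not SAP's product: the `llm_to_filter` step is a hypothetical stand-in for whatever model call a real system would make, hard-coded here only so the example runs.

```python
# Sketch of a conversational layer over familiar tabular data (illustrative).
# A real system would call an LLM to translate the question into a structured
# query; llm_to_filter() below is a hypothetical stand-in so the example runs.

import pandas as pd

quotes = pd.DataFrame({
    "customer": ["Acme", "Globex", "Initech"],
    "region":   ["EMEA", "NA", "NA"],
    "amount":   [120_000, 80_000, 45_000],
})

def llm_to_filter(question: str) -> dict:
    # Stand-in for the model call: map a natural-language question to a
    # structured filter the existing tabular workflow already understands.
    if "north america" in question.lower():
        return {"column": "region", "equals": "NA"}
    return {}

def answer(question: str, table: pd.DataFrame) -> pd.DataFrame:
    spec = llm_to_filter(question)
    if not spec:
        return table  # fall back to showing the familiar full table
    return table[table[spec["column"]] == spec["equals"]]

print(answer("Which quotes are in North America?", quotes))
```

The data types and the table stay exactly what users already know; only the querying step gets easier, which is what the post argues drives adoption.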