AI is the hardest challenge the security industry has ever faced. While AI has shown tremendous promise in virtually every industry, its impact trails in security. The risk of a breach used to be data loss. Now it's altering decisions at scale. Think about it. We used to care about someone stealing Social Security numbers from a customer database. Now, in addition to that, there is a larger worry about an AI agent erroneously approving mortgages at scale.

When I talk to security practitioners, they highlight three challenges they struggle with:

1. Skills shortage: Unlike other industries, few if any in cybersecurity are worried about losing their jobs to AI. Instead, they are worried about not being able to keep up with the volume of attacks if they don't use AI.

2. Alert fatigue: The signal within the noise is very low. People are inundated with alerts, and gaining confident clarity from those alerts and containing attacks quickly is getting harder and harder.

3. Complexity: The explosion of security tools that aren't cohesively integrated into a platform has left the industry with thousands of products, and most organizations have 50 to 70 of them in their cybersecurity stack. The complexity is untenable.

There is a ton of innovation we are announcing at Cisco. I want to highlight three things in particular:

First is the introduction of a multi-agent, fully orchestrated agentic workflow in XDR. Security teams aren't struggling with a lack of data. They are struggling with a lack of clarity. Generating an alert is easy. Taking a confident action to contain an attack is hard. This innovation marks a clear shift from manual decision making to AI-augmented decision making, where signals are better detected, understood, and acted upon. This is coupled with innovations in Splunk that truly reimagine the future of the SOC.

Second, we are announcing an expanded partnership with ServiceNow, where Cisco's AI Defense, which helps secure AI itself, will be integrated with ServiceNow's Security Operations Platform. We live in an interconnected ecosystem where the innovations from each company should deliver a compounding effect in keeping us safe from adversaries, and I'm super excited to partner closely with ServiceNow.

Last but certainly not least, we are launching the world's first security-specific open-weight model as part of an open-source initiative from Foundation AI, the newly formed AI research team at Cisco. This is important because the industry needs a security-specific reasoning model that can improve efficacy while reducing cost. This 8B-parameter model was trained on data distilled from an initial corpus of 900B tokens down to 5B tokens, and it will run on one to two A100s. That is a dramatic reduction in cost for the efficacy it delivers.

I'm so proud of the teams for these industry-defining innovations.
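For readers who want a feel for what running an 8B open-weight model on one A100 looks like in practice, here is a minimal sketch using the Hugging Face transformers library, assuming the model is published as a standard causal language model. The repository ID and the prompt below are assumptions for illustration only; the actual model name, license, and hardware guidance are whatever Foundation AI publishes with the release.

```python
# Minimal sketch: loading an open-weight 8B security model with transformers.
# The repo ID below is an assumed, illustrative placeholder, not a confirmed name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "fdtn-ai/Foundation-Sec-8B"  # hypothetical repo ID for illustration

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # an 8B model in bf16 fits comfortably on one 80 GB A100
    device_map="auto",           # lets accelerate spread the weights across 1-2 GPUs if needed
)

prompt = "Summarize the likely attack technique behind repeated failed Kerberos pre-auth events."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```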
This is where the real battle is. Not in data access, but in decision quality under pressure. Most teams are drowning in signal with no system to act on it. Manual response doesn’t scale. What matters now is clarity, containment, and confidence at the edge. The shift to agentic workflows and purpose-built models isn’t a feature update. It’s a fundamental upgrade to how security gets done. Appreciate the direction Cisco is taking here. These are moves that reduce complexity and create real leverage.
This is where innovation meets responsibility. AI isn’t just a shiny tool; it’s a test of how well we protect what matters most. I appreciate that Cisco isn’t chasing buzzwords but actually tackling the chaos head-on: clarity over noise, collaboration over silos, and real solutions over surface-level fixes. That’s the kind of leadership this space needs.
This hits hard, especially the shift from “data breach” to “decision breach.” That one line reframes the whole conversation around AI in security. Also really appreciate the focus on clarity, not just data. The alert fatigue is so real, and most tools still leave humans drowning in dashboards. Multi-agent workflows and that open-weight model approach sound like real steps forward, not just more tech, but actual simplification and trust in action.
Maybe we can be helpful... Comments recently made by our CEO, #Steve #Guilford: Software today is created using the antiquated file-system/operating-system paradigm. This paradigm has long been insecure, and AI has studied up on it! A new software paradigm that lifts the developer out of the ubiquitous FS/OS layer into a more controlled and converged data-layer environment presents an architecture that AI is not familiar with. What this means in lay terms: move all user data and business logic out of the legacy, insecure file system into a more secure and robust environment presented by a database. To do this, you need to embrace a data-centric software development model that bakes security best practices into the platform. This removes much of the cybersecurity burden, and technology, from the network and places it as close to the data as possible, in the form of secure business logic. Can it be done? Yes. It was once the dream of Microsoft with WinFS, a much-touted product that never made it to market. A more recent example can be found here: #asteriondb
All of the points you brought up are critically important as we think about AI and security moving forward, Jeetu Patel. I also think other must-have conversations around security and IT are:
- Educating the masses to discern AI from real individuals (as we do for phishing, etc.)
- Figuring out ways to mitigate those AI threats before they show up.
- Building trust in a world that now feels much less trustworthy, with AI mimicking human communication and actions.
Love the work you are doing to help figure out these complex issues and how we navigate forward in a shared world of AI and humans for the future.
Privacy and security professionals need to start upskilling to collaborate with AI, not just defend against it when it's going wrong. Mid-career pros who lean in now will be tomorrow's most in-demand leaders. Jeetu Patel
One wonders whether the human factor was taken into account when the information for the business case was gathered. The point by Cisco is that they are working on a solution that includes an instance of AI possibly delivering a solution for a problem they do not understand. One of the premises behind the solution is the "shortage of skills," yet there are thousands of security professionals who can't find employment. Just use LinkedIn as a dataset. The second is the complexity of the tools available. Yet in reality there is no complexity, but rather a sense of "who do I trust?" The market has many vendors delivering bespoke and general solutions that cover the spectrum required yet may not deliver the full scope of what is needed. Great article though; it is thought provoking and a rub for groupthink. Many more thoughts are available from skeptics who have seen bloated tools underdeliver due to innate design issues or human error.
You're absolutely right, Jeetu Patel, and what you're seeing in cybersecurity mirrors what I'm seeing in corporate communications. Companies rush to adopt AI tools, run a few workshops, and assume they're "AI-ready." However, without strategic integration and a clear understanding of how each tool supports the system as a whole, that readiness means little. I'm developing solutions to help leaders optimize their AI stack for real impact, not just installation. The key is orchestration, not accumulation.
What stands out here isn't just the scale of the threat, it's the shift in the nature of it. We're no longer just protecting data, we're defending the integrity of decisions. That's a fundamental reframing of the security challenge. In my work, I've found that when complexity overwhelms, clarity doesn't come from more control, it comes from better thinking models. The industry doesn't just need more tools; it needs systems that think differently under pressure. The move toward agentic workflows and security-specific models feels like a step toward that deeper kind of intelligence, one that sees the signal, not just the noise.