The agentic shift: How AI is redefining project priorities in software

Opinion
Nov 18, 2025
8 mins

Agentic AI is changing how software teams pick projects by pushing us to build systems that are clearer, safer and ready for real machine decision-making.


I’ve worked in software engineering long enough to see multiple waves of transformation reshape how teams deliver value — the rise of agile, the DevOps movement, the automation boom and, most recently, the adoption of generative AI. Each wave promised speed, efficiency and reduced human toil. But this new wave, driven by agentic AI, feels fundamentally different.

Where previous systems executed commands, agentic AI can reason about goals, plan multi-step actions and coordinate with other agents or systems, often with minimal supervision. These are no longer tools; they are teammates, of a sort, capable of making decisions and influencing outcomes.

This shift is quietly forcing CIOs and software leaders to rethink something foundational: what projects we prioritize, how we measure success and what productivity even means in a world where machines share the decision space.

From automation to autonomy

Traditional automation systems are rule-bound: they execute tasks within defined boundaries. For instance, a script deploys code, or a workflow tool routes tickets. They operate deterministically and predictably. Agentic AI, by contrast, acts on intent: it can interpret a high-level goal (e.g., “Optimize service reliability”) and plan its own sequence of actions to achieve it.

That’s both exhilarating and unnerving for anyone responsible for enterprise systems. It blurs the line between automation and autonomy. When I first began working with AI-assisted operational tools, I realized how quickly our mental model changes from “What should this system do?” to “What should it be allowed to decide?”

This distinction is critical because it reframes project prioritization. The question isn’t just which initiatives save time or reduce costs, but which ones create an ecosystem where autonomous decision-making can thrive safely. That’s a different kind of optimization, one centered on coordination, trust and control rather than pure efficiency.

The new calculus of project prioritization

Many organizations are eager to adopt AI-driven systems but underestimate what’s required for agentic readiness. Through experimentation, I’ve learned that readiness isn’t about how much AI you have; it’s about how adaptable and observable your environment is.

Three dimensions determine whether a project is ready for agentic integration:

  1. Coordination readiness — Can your systems and processes communicate in structured, machine-readable ways? Agentic AI thrives on integration. Projects that expose APIs, publish events and maintain consistent metadata structures are far easier to automate intelligently than siloed systems. If your enterprise architecture looks like a maze of legacy endpoints, even the smartest agent will struggle to navigate it.
  2. Context readiness — Can your AI agents access relevant knowledge without ambiguity? In my experience, AI systems underperform not because of weak models but because of poor context, such as missing documentation, inconsistent tagging and scattered tribal knowledge. Projects that consolidate domain context (e.g., service catalogs, incident postmortems, architecture graphs) often deliver outsized returns once AI agents come into play.
  3. Observability readiness — Can you see what your agents are doing and why? This is perhaps the most overlooked piece. In traditional automation, visibility means logs and dashboards. For agentic AI, it also means reasoning transparency, such as being able to trace why an agent made a decision. A minimal logging sketch of what that can look like appears after this list.
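
To make the observability point concrete, here is a minimal sketch in Python of the kind of structured decision record an agent could emit so that every action can be traced back to a goal, the evidence consulted and the rationale. The field names and the triage example are assumptions for illustration, not a reference to any specific framework.

    # Hypothetical sketch of a structured "decision record" an agent could emit.
    # Field names are illustrative, not tied to any particular agent framework.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class AgentDecision:
        agent_id: str   # which agent acted
        goal: str       # the high-level intent it was given
        action: str     # the concrete step it chose
        rationale: str  # why it chose that step
        evidence: list[str] = field(default_factory=list)  # inputs it consulted
        requires_approval: bool = True                      # default to human review
        timestamp: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

    def log_decision(decision: AgentDecision) -> None:
        # In practice this would feed an observability pipeline;
        # printing keeps the sketch self-contained.
        print(decision)

    log_decision(AgentDecision(
        agent_id="triage-agent-01",
        goal="Optimize service reliability",
        action="Restart the checkout service",
        rationale="Error rate stayed above the alert threshold for 10 minutes",
        evidence=["dashboard:checkout-errors", "runbook:restart-policy"],
    ))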

When we assess initiatives through these lenses, our portfolio priorities shift dramatically. We begin to fund projects that make our ecosystems modular, contextual and observable, rather than simply efficient. Examples include:

  • Infrastructure simplification as a strategic enabler: A multi-cloud service migration may not look glamorous, but if it makes your environment coherent enough for agentic orchestration, it suddenly becomes a high-value initiative.
  • Knowledge centralization as leverage: Building an internal data graph or incident intelligence layer might once have been “nice to have.” Now it’s the substrate for AI reasoning.
  • Governance automation: Establishing policy-as-code or automated compliance checks isn’t just operational hygiene anymore; it’s how you enable trustworthy autonomy (a minimal sketch follows this list).

Taken together, this turns portfolio planning into an ecosystem design challenge.
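
As an illustration of the policy-as-code idea, here is a minimal, hypothetical sketch in Python: a small declarative rule table plus a check an orchestrator could run before letting an agent act. The rules and action names are assumptions for illustration, not a specific compliance tool.

    # Hypothetical policy-as-code sketch: declarative rules evaluated before an
    # agent's action is executed. Rule patterns and action names are illustrative.
    from fnmatch import fnmatch

    POLICIES = [
        {"match": "deploy:*",  "env": "prod", "require_approval": True},
        {"match": "restart:*", "env": "prod", "require_approval": False},
        {"match": "delete:*",  "env": "*",    "require_approval": True},
    ]

    def allowed_unattended(action: str, env: str) -> bool:
        """Return True if the action may run without human approval."""
        for rule in POLICIES:
            if fnmatch(action, rule["match"]) and rule["env"] in ("*", env):
                return not rule["require_approval"]
        # Fail closed: actions not covered by a rule always need a human.
        return False

    print(allowed_unattended("restart:checkout-service", "prod"))  # True
    print(allowed_unattended("delete:customer-index", "staging"))  # False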

Governance and accountability

When our team deployed AI-based incident triage, I learned that trust can’t be mandated; it must be earned through evidence. We introduced a tiered autonomy model that guided AI deployment through four stages, sketched in code after the list:

  • Level 0: Observation only — Agent monitors and reports, building baseline performance data.
  • Level 1: Advisory mode — Agent suggests actions but doesn’t execute.
  • Level 2: Supervised execution — Agent acts with human approval, creating an audit trail.
  • Level 3: Autonomous execution — Agent acts independently within predefined domains.
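
A minimal sketch, assuming hypothetical names, of how these levels could be enforced as a gate in an orchestration layer:

    # Hypothetical tiered-autonomy gate; the levels mirror the list above, but
    # the enforcement code itself is illustrative, not a production system.
    from enum import IntEnum

    class AutonomyLevel(IntEnum):
        OBSERVE = 0     # monitor and report only
        ADVISE = 1      # suggest actions, never execute
        SUPERVISED = 2  # execute only with explicit human approval
        AUTONOMOUS = 3  # execute independently within a predefined domain

    def execute_action(action: str, level: AutonomyLevel, approved: bool = False) -> str:
        if level <= AutonomyLevel.ADVISE:
            return f"Suggested (not executed): {action}"
        if level == AutonomyLevel.SUPERVISED and not approved:
            return f"Blocked, awaiting human approval: {action}"
        # Supervised-and-approved or fully autonomous: run it, and record it.
        return f"Executed: {action}"

    print(execute_action("restart checkout-service", AutonomyLevel.ADVISE))
    print(execute_action("restart checkout-service", AutonomyLevel.SUPERVISED, approved=True))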

Each progression required proof of reliability and explainability. This incremental approach built trust not just with leadership, but with engineers who had to coexist with the system.

Graduated autonomy zones allow controlled experimentation without compromising compliance. The lesson is universal: governance isn’t a gate at the end — it’s a scaffold that enables safe exploration.

The human role in the age of autonomy

The rise of agentic AI has triggered anxiety that human roles, especially in IT and engineering, might become obsolete. But my experience has been the opposite.

When we automated triage workflows, engineers didn’t become redundant; they became system designers. Their focus shifted from execution to oversight: creating the rules, safeguards and feedback mechanisms that make the AI reliable.

That’s the real promise of agentic AI: freeing humans from reactive tasks so we can focus on strategy, ethics and innovation. The CIOs who succeed won’t be those who automate the most, but those who design partnerships between humans and agents.

A framework for CIOs: Prioritizing the next wave of AI projects

Translating these insights into strategy requires a deliberate framework. CIOs can use the following model to re-evaluate priorities in the agentic era:

  1. Assess agentic potential: Identify which workflows involve high coordination but low creativity. These are prime candidates for intelligent automation.
  2. Invest in observability and control planes: Before deploying autonomous systems, ensure every AI action can be monitored, explained and reversed.
  3. Focus on context infrastructure: Treat data lineage, documentation and knowledge graphs as first-class projects. They fuel intelligent behavior.
  4. Adopt a maturity model for autonomy: Don’t jump to full automation. Move from observation → advisory → supervised → autonomous, with clear metrics at each stage.
  5. Upskill teams for agentic collaboration: Introduce training on AI orchestration, ethics and incident response. Treat human oversight as a skill, not a fallback.

When project portfolios reflect these priorities, organizations transition from experimenting with AI to operating as AI-native ecosystems.

What the future might look like

Imagine a software organization where autonomous agents manage operational decisions. They:

  • Predict and mitigate incidents before escalation.
  • Review code not just for syntax but for systemic risk.
  • Optimize workloads dynamically based on cost, performance and sustainability.

Humans still lead, but through architecture, policy and trust rather than direct control.

The next few years will determine which enterprises truly become AI-native, not because they deployed more AI, but because they built the culture and systems that let human and machine intelligence coexist responsibly.

Redefining success in an agentic world

As a leader, I’ve come to believe that the measure of success in this new era isn’t how much we automate but how wisely we delegate.

Agentic AI offers an incredible opportunity to reimagine not just software projects but the nature of work itself. The challenge is to do so deliberately: designing for safety, transparency and human collaboration. If automation was about doing things faster, agentic AI is about doing things smarter and doing them together.

For CIOs, that means rethinking priorities, retraining teams and redefining trust. Because the next frontier of productivity won’t be measured in lines of code, but in the quality of decisions we entrust to machines and the integrity with which we guide them.

This article is published as part of the Foundry Expert Contributor Network.

Minav Patel

Minav Suresh Patel is an engineering manager at Amazon, leading large-scale payment platforms that process more than a trillion dollars in transactions annually. His expertise spans distributed systems, software architecture and platform modernization, with a focus on building resilient, cloud-native solutions that scale globally. At Amazon, Minav has driven major initiatives in payments risk management, regionalization and platform simplification, improving reliability and compliance across hundreds of business integrations.

With more than a decade of experience in software development and technical leadership, Minav has built and scaled teams that deliver mission-critical financial infrastructure serving millions of customers worldwide. He is passionate about translating complex business needs into scalable, sustainable systems that balance innovation with operational excellence.

The views expressed in this article are personal and do not represent those of Amazon.

Priyank Desai

Priyank Desai is a senior engineering leader with 13-plus years of experience building and leading mission-critical applications. He currently leads the global payments processing organization at Amazon, managing teams and systems responsible for processing billions of payment transactions. His interest areas include highly scalable and secure cloud architectures, applied AI/ML and technical leadership excellence. He is passionate about supporting and growing leaders and is actively involved in coaching and mentoring. He contributes externally through participation in various conferences and committees in different roles.

The views expressed in this article are the author's own and do not represent those of Amazon.