Anup Nair
Contributor

Why it’s time to build dumb applications

Opinion
Nov 13, 2025
6 mins

There's no time like the present to move intelligence into data and let agents take it from there.


We have been modernizing enterprise systems for decades now, but most of that modernization has been surface-level. New codebases. Cloud-native stacks. Nicer APIs. And still, the same frustrating truth remains: Every time we build a new app, we rebuild all the meaning too.

Each application defines its own version of what a client is. It hardcodes the rules for risk, eligibility or exposure. It stores data in its own schema, speaks its own dialect and expects everything else to conform. We call this agility, but what we have really done is rebuild complexity in a more expensive language.

The problem is not with the applications themselves. The problem is where we have chosen to put the intelligence. For most enterprises today, that intelligence lives deep in code scattered across dashboards, data pipelines, batch jobs and microservices. It is everywhere and nowhere.

And that is a problem, especially in an AI-driven world.

The shift: Data that understands itself

If you want your AI agents to act intelligently, your data has to be intelligent first.

That means data cannot just be tables and fields. It has to carry context. It has to be self-describing. It has to understand what it represents, not just to a database admin, but to any system, service or agent that interacts with it.

This is where the semantic layer comes in. When you model the core concepts of your business, like client, account, portfolio, transaction, tax rule and exception, as a shared graph of meaning, you no longer need to teach each application what those things mean. You define them once, centrally, and everything else references that central definition.
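To make that tangible, here is a deliberately toy sketch in Python with rdflib of what "define it once, centrally" could look like. The ex: namespace, the class names and the sample client are invented for illustration; a real semantic layer would be far richer, but the shape is the point.

# A toy, illustrative semantic layer in Python with rdflib.
# The ex: namespace, class names and sample records are invented for this sketch.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.com/wealth#")

g = Graph()
g.bind("ex", EX)

# Core business concepts are defined once, centrally.
g.add((EX.Client, RDF.type, RDFS.Class))
g.add((EX.Portfolio, RDF.type, RDFS.Class))
g.add((EX.TaxEvent, RDF.type, RDFS.Class))
g.add((EX.holdsPortfolio, RDFS.domain, EX.Client))
g.add((EX.holdsPortfolio, RDFS.range, EX.Portfolio))

# Instance data points at those shared definitions instead of redefining them.
g.add((EX.client_42, RDF.type, EX.Client))
g.add((EX.client_42, EX.hasRiskProfile, EX.Aggressive))
g.add((EX.client_42, EX.holdsPortfolio, EX.portfolio_7))
g.add((EX.portfolio_7, EX.cryptoExposurePct, Literal(35.0)))

print(g.serialize(format="turtle"))

Every application, dashboard or agent that consumes this graph inherits the same definition of a client, a portfolio and their relationship, instead of re-encoding it in its own schema.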

Your data becomes the source of truth not just about values, but about relationships, logic and business rules.

The applications simply consume. And when you decouple intelligence from code and anchor it in a semantic, machine-readable layer, something remarkable happens: The applications become simpler, more modular, more replaceable. Your enterprise becomes something it has not been in a long time: understandable. Not just to IT but also to the business.

This is how AI actually works in the enterprise

AI cannot thrive in a world where it has to relearn your business from scratch every time. Language models are powerful, but they hallucinate when context is missing. Rules engines are precise, but brittle when meaning is ambiguous. And agents, like the kind we are now building into workflows, portals and copilots, simply do not stand a chance unless they are operating on data that knows what it is.

That is why moving intelligence into data is not just a data strategy. It is an AI strategy.

In my work with semantic architectures, I discovered that when you define your business knowledge in a machine-consumable format, it is not just readable. It is queryable, composable and reusable. It allows agents to collaborate across domains without requiring glue code or ETL pipelines. This is the core idea behind machine-consumable protocols: let systems and agents speak the same language by aligning them to a common, knowledge-based layer (graph-based is even better).

As those agents mature, they begin to interact with one another, not through brittle REST calls and payload transformations, but through shared meaning. They ask questions, they retrieve facts, they collaborate. In a nutshell, the semantics of the business are now embedded in the data itself, not siloed inside app logic or trapped in documentation.

What this looks like in practice

Take a typical wealth management scenario. You want to identify clients with aggressive risk profiles, large exposure to crypto and pending tax events in the next 90 days.

Traditionally, this would require multiple teams, bespoke logic, custom dashboards and cross-system data stitching. Definitions would be reinterpreted at every layer and every change would require refactoring the code.

In the intelligent data model described above, it is different. The semantic layer already defines what a risk profile is, how exposure is measured and what taxable events mean. The data is connected. The rules are transparent. The relationships are navigable. An AI agent does not need a developer or an analyst to fetch this insight. It can reason over the model directly.
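As a sketch of that difference, continuing the toy graph from earlier, the whole question collapses into a single query over the shared model. The property names (hasRiskProfile, cryptoExposurePct, hasTaxEvent, dueDate), the 20 percent exposure threshold and the semantic_layer.ttl file are illustrative assumptions, not a real product schema.

# Sketch: an agent asking the shared model the question directly (SPARQL via rdflib).
# Predicate names, the 20% threshold and the file name are illustrative assumptions.
from datetime import date, timedelta

from rdflib import Graph

g = Graph().parse("semantic_layer.ttl")  # hypothetical export of the shared layer

horizon = (date.today() + timedelta(days=90)).isoformat()

query = f"""
PREFIX ex: <http://example.com/wealth#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>

SELECT ?client ?exposure ?due WHERE {{
  ?client ex:hasRiskProfile ex:Aggressive ;
          ex:holdsPortfolio ?portfolio ;
          ex:hasTaxEvent ?event .
  ?portfolio ex:cryptoExposurePct ?exposure .
  ?event ex:dueDate ?due .
  FILTER (?exposure > 20.0 && ?due <= "{horizon}"^^xsd:date)
}}
"""

for row in g.query(query):
    print(row.client, row.exposure, row.due)

No dashboard, no bespoke ETL job, no reinterpretation of what "aggressive" or "exposure" means along the way.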

Even better, it can talk to another agent. One that handles product recommendations. One that calculates tax-loss harvesting opportunities. One that flags portfolio drift. These agents can coordinate and act without hardwiring anything into a single application.
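Sketched against the same illustrative schema, each of those agents is simply a function that asks the shared graph its own question, and a recommendation step combines the answers without either agent knowing the other exists. The predicates and thresholds here are, again, invented for illustration.

# Sketch: agents coordinating through the shared graph instead of point-to-point APIs.
# Predicate names (unrealizedGainPct, allocationDriftPct) and thresholds are invented.
from rdflib import Graph

PREFIX = "PREFIX ex: <http://example.com/wealth#>"

def tax_loss_candidates(graph):
    """Tax agent: positions with large unrealized losses recorded in the model."""
    q = PREFIX + """
    SELECT ?position WHERE { ?position ex:unrealizedGainPct ?g . FILTER (?g < -10.0) }
    """
    return [row.position for row in graph.query(q)]

def drift_alerts(graph):
    """Portfolio agent: portfolios whose allocation has drifted past tolerance."""
    q = PREFIX + """
    SELECT ?portfolio WHERE { ?portfolio ex:allocationDriftPct ?d . FILTER (?d > 5.0) }
    """
    return [row.portfolio for row in graph.query(q)]

def recommend(graph):
    """Recommendation agent: combines the others' answers via the shared semantics."""
    return {"harvest": tax_loss_candidates(graph), "rebalance": drift_alerts(graph)}

shared = Graph().parse("semantic_layer.ttl")  # hypothetical export of the shared layer
print(recommend(shared))

The integration point is the shared meaning in the graph, not a web of service calls wired into any single application.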

Why this matters now

Today the typical enterprise is not just adopting AI. It is reorganizing around it.

You will have agents monitoring portfolios, agents responding to customer queries and agents reviewing compliance reports. If all of them are working from their own logic, your AI initiative will collapse under its own weight.

But if they all speak to a common semantic layer and that layer is backed by machine-consumable protocols, you get scale, clarity and compound intelligence.

And most importantly, you stop solving the same problem 12 different ways.

The takeaway

We do not need smarter applications. We need smarter data and simpler apps. We need a shared foundation of meaning that lives outside code, one that AI agents and humans can both rely on.

It is time to build dumb applications. Systems that do not try to be the source of truth, but reflect it. Systems that rely on a shared semantic backbone. Systems that let your business logic evolve without having to rewrite everything from scratch.

I believe the real intellectual property of an enterprise doesn't, or shouldn't, live in its applications. It's buried in the business knowledge that we are forced to reconstruct over and over because we've never captured it in a form that machines or modern AI can actually understand.

This article is published as part of the Foundry Expert Contributor Network.

Anup Nair

Anup Nair is managing partner and global CTO at Mphasis. He is an accomplished leader with a track record of providing strategic direction and business growth to organizations. Anup is skilled in leading digital technology services, driving client-centric transformations and guiding digital project completion. Previously, Anup was a digital business leader for banking and capital markets for Dell. He was also the head of digital solutions technology for LivePerson.