Pages

Friday, May 8, 2026

E = MC²: The Equation That Never Gets Old


 

On Measurement, Continuous Improvement, and Customer Focus — Then and Now

(A decade and a half ago I wrote two linked blogs on Operational Excellence. They are referenced at the bottom of this article. When I read them in the context of today's world, many principles remain the same, but the manifestations are different. This article is an attempt to revisit the idea of Operational Excellence in the era of AI and Agents.)

There is a particular kind of excitement that technology companies are exceptionally good at, and a particular kind of discipline they are chronically bad at. The excitement is building. The discipline is running. Every new feature, every new product, every new platform gets showered with energy, talent, and attention. The unglamorous work of making sure it all actually works, consistently, reliably, at scale, day after day, gets left to whoever is available, measured by whatever is easy to measure, and improved only when something breaks badly enough to be embarrassing.

This is not a new observation. But it has become a vastly more consequential one. Because we are now deploying AI systems and autonomous agents into operational environments at a pace that far outstrips our willingness, or our ability, to govern them. And the cost of that gap is no longer measured in minor inefficiencies. It is measured in compounding, invisible failures, in decisions that are wrong by design, in resources consumed by systems nobody is watching, and in customers quietly harmed by processes nobody is truly accountable for.

The answer to this is not more technology. It is better operational discipline. And the framework for that discipline is simpler than most people think.

We call it E = MC²: Excellence, derived from a culture that Measures relentlessly, pursues Continuous improvement, and never loses sight of Customer focus. These three elements are not independent. They are a virtuous cycle, each one feeding the others, each one incomplete without the others. Understanding how they connect, and how to make them real, is the central challenge of operational management in any era. Including this one.

Why Measurement is Hard, Even for People Who Handle Data for a Living

There is a paradox at the heart of the IT and services industries. These are sectors whose entire value proposition rests on data, on capturing it, organising it, analysing it, and making it useful. And yet, in practice, their internal operational measurement discipline is often surprisingly immature. The processes that organisations build for their customers are rarely applied with equal rigour to their own operations.

The reasons are not mysterious. The glamour in these industries flows toward novelty, toward "cool functions," "exciting features," and "latest gadgets." Boring pursuit of efficiency gains simply does not compete for talent or attention. When a senior engineer has a choice between building something new and spending six weeks instrumenting something old to understand why it sometimes fails, the outcome is predictable. And so operational measurement tends to happen reactively, in response to a crisis, a customer complaint, or a regulator's inquiry, rather than as a continuous, proactive discipline.

To learn how to do this differently, it helps to look at industries that never had the luxury of treating operations as an afterthought.

The hazardous chemical process industry is an instructive model, and not an intuitive one. It has been around for centuries, long enough to have matured its operational practices through hard experience. Its product lines are largely commoditised, which means margins are thin and efficiency is not optional; it is existential. The consequences of process failures are sometimes fatal, which means the scrutiny (public, regulatory, and internal) is unrelenting. And its processes are integrated end-to-end, with limited visibility into what is actually happening inside the pipes at any given moment, which forces a culture of strong monitoring and control.

These are, in fact, exactly the conditions that characterise complex digital operations today. Thin margins. High stakes. Limited internal visibility. Regulatory scrutiny. The main difference is that the chemical industry has spent decades building the measurement culture to match these conditions, while the technology and services industries are still, in many cases, at the beginning of that journey.

From that more mature tradition, three elements of measurement discipline emerge as foundational.

The Three Pillars of Measurement

Flow Management: Count Every Transaction

The first pillar is what might be called micromanagement of the operation, not in the pejorative sense of hovering over people, but in the precise sense of tracking each input through each sub-process it was meant to traverse, confirming it arrived correctly and without error.

This sounds obvious. In practice, it is done poorly, or not at all, especially for processes that are still evolving. When a new system or workflow is still being refined, exceptions proliferate. And exceptions, in young computerised systems, have a dangerous tendency to become invisible: swallowed by automated retry mechanisms, silently skipped, or classified as edge cases that never quite make it onto anyone's priority list.

The consequences of poor flow management are almost always financial and reputational, and they tend to be discovered embarrassingly late. A large bank once sent letters to its credit card customers admitting that it had not been tracking transactions correctly, and asking recipients to settle on the basis of their own personal records. The transactions were not hidden. They were not stolen. They had simply not been tracked. The systems were running; the accounting was not. When providers of transaction billing solutions are brought into organisations for the first time, the revenue leakage they surface, from transactions that fell through the cracks of inadequately monitored processes, is routinely staggering.

These are not exotic failures. They are the entirely predictable consequence of building systems without building the measurement infrastructure to watch over them.
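The accounting discipline described above can be made concrete. Below is a minimal, illustrative sketch (the stage names and transaction IDs are invented for the example) of counting every transaction through every sub-process and surfacing the ones that fell through the cracks, rather than letting them vanish:

```python
def reconcile(stage_logs):
    """Compare transaction IDs across ordered pipeline stages.

    stage_logs: dict mapping stage name -> iterable of transaction IDs,
    in processing order. Returns a list of (stage, missing_ids) for
    every stage that lost transactions relative to the previous stage.
    """
    stages = list(stage_logs.items())
    leaks = []
    for (_, prev_ids), (name, ids) in zip(stages, stages[1:]):
        missing = set(prev_ids) - set(ids)
        if missing:
            leaks.append((name, sorted(missing)))
    return leaks

# Example: transaction T3 silently vanishes between validation and billing.
logs = {
    "ingest":     ["T1", "T2", "T3", "T4"],
    "validation": ["T1", "T2", "T3", "T4"],
    "billing":    ["T1", "T2", "T4"],       # T3 fell through the cracks
}
print(reconcile(logs))  # [('billing', ['T3'])]
```

The point of the sketch is not the code but the habit: the reconciliation runs continuously, and a non-empty result is an alert, not a report.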

Capacity Management: Know Where the Bottlenecks Are Before They Happen

The second pillar is the macro view, tracking the capacity of processes, people, service providers, and machines in order to anticipate bottlenecks before they become crises. This requires establishing trend measures for each element and monitoring them continuously, not just periodically.

Capacity management is especially treacherous in computerised environments for a structural reason: shared resources. Network infrastructure, compute capacity, database connections: these are all consumed by multiple processes simultaneously, and the utilisation curve for each process grows differently. A system that appears to have adequate capacity for today's workload may have none for tomorrow's if the growth curves are not being watched and modelled.
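Watching and modelling those growth curves need not be elaborate to be useful. A hedged sketch, assuming daily utilisation samples and a roughly linear growth trend, of estimating how many days remain before a shared resource saturates:

```python
def days_until_saturation(samples, capacity):
    """Fit a least-squares linear trend to (day, utilisation) samples
    and estimate the days remaining until utilisation hits capacity.

    Returns None if the trend is flat or falling. An illustrative
    sketch, not a production forecaster.
    """
    n = len(samples)
    xs = [d for d, _ in samples]
    ys = [u for _, u in samples]
    x_mean, y_mean = sum(xs) / n, sum(ys) / n
    denom = sum((x - x_mean) ** 2 for x in xs)
    slope = sum((x - x_mean) * (y - y_mean) for x, y in samples) / denom
    if slope <= 0:
        return None
    intercept = y_mean - slope * x_mean
    saturation_day = (capacity - intercept) / slope
    return saturation_day - xs[-1]

# Utilisation growing ~2 points per day, at 70% on day 30, 90% ceiling:
samples = [(27, 64.0), (28, 66.0), (29, 68.0), (30, 70.0)]
print(round(days_until_saturation(samples, 90.0)))  # 10 days of headroom
```

The value is in the trend, not the point: a dashboard showing 70% utilisation looks healthy; a model showing ten days to saturation demands action now.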

Two particular categories of hidden capacity consumers deserve special attention, because they are pervasive and almost universally underestimated.

The first is queries. Every business generates a need for data extracts — for management reporting, regulatory compliance, customer service lookups, and ad hoc analysis. These queries consume the same production capacity as the operational processes. And they are disproportionately likely to be written inefficiently, because they are typically assigned to junior resources or business analysts who lack the training to optimise them, and because there is very little accountability for query performance until something breaks. A query that was meant to run once becomes a standard report. A standard report that runs nightly becomes a standard report that runs hourly. The cumulative resource consumption creeps upward invisibly until, one day, the system slows to a crawl during peak operational hours, and nobody can immediately explain why.

The second is design debt. For most software developers, the genuine satisfaction is in building features. Once a feature is live and functioning, interest moves on. The pressure to optimise, to refactor, to improve efficiency, runs directly against the incentive to ship the next thing. The result is that bespoke systems accumulate performance inefficiencies that are never addressed, not because fixing them is technically difficult, but because nobody is measuring the cost of leaving them in place, and nobody is accountable for the cumulative drag. In most organisations, there is scope for at least a hundred percent improvement in process efficiency simply by addressing the worst of these design inefficiencies, but only if someone is measuring for them.

Service Levels: Commit to the Customer, Then Track the Commitment

The third pillar is where measurement connects most directly to purpose. The most powerful mechanism for ensuring that measurement and improvement activity stays focused and meaningful is to define, publicly and clearly, what the organisation is actually committing to deliver to its customers.

There is an important distinction to draw here between a Service Level Agreement and what might be called a Customer Service Commitment. An SLA is a floor — a formal definition of the minimum below which the organisation will try not to fall. It is a legal and contractual instrument, and it tends to create a culture of adequacy: as long as we are above the floor, we are fine. A Customer Service Commitment is something different. It is a genuine aspiration — a statement of what the organisation sincerely believes it can and should deliver, at a level meaningfully above the minimum.

This distinction matters because people and systems tend to optimise for what they are measured against. An organisation that measures against its SLAs will manage its operations to the SLA threshold. An organisation that measures against its Customer Service Commitments will manage its operations to the standard it actually believes in.

The mechanics of tracking these commitments deserve specific attention. Time-series data, tracking key performance parameters not just at a point in time but continuously over time, is essential for detecting trends before they become crises. A single data point tells you where you are today. A trend tells you where you are going. And it is the trend that matters operationally, because by the time a single bad reading turns into an obvious crisis, the window for preventive action has usually closed.
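That trend-versus-point distinction can be encoded directly into monitoring. A minimal sketch (the threshold and window are illustrative assumptions) that fires a warning while every reading is still comfortably above the commitment:

```python
def trend_alert(readings, commitment, window=4):
    """Flag a deteriorating trend before the commitment is breached.

    readings: chronological success-rate percentages.
    Fires a warning when the last `window` readings fall monotonically,
    even if every one of them is still above the committed level.
    """
    recent = readings[-window:]
    if recent[-1] < commitment:
        return "breach"
    if all(a > b for a, b in zip(recent, recent[1:])):
        return "warning: downward trend within bounds"
    return "ok"

history = [99.2, 99.1, 98.9, 98.6]   # every reading above a 98.0% commitment
print(trend_alert(history, commitment=98.0))
# warning: downward trend within bounds
```

An organisation managing to the SLA would see four green readings here; an organisation managing to its commitment sees a curve bending toward a breach.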

It is also worth having the team that tracks customer commitments sit separately from the team responsible for operations. This is not about distrust. It is about the structural reality that an operations team under pressure will, understandably, interpret ambiguous data in the most favourable light available. A separate tracking function provides the independent visibility that makes measurement honest.

Continuous Improvement: From Counting to Acting

All of this measurement serves one purpose: enabling the organisation to improve, continuously, before it is forced to by failure.

This is more difficult than it sounds, because the culture required to use data for continuous improvement is fundamentally different from the culture most organisations actually have. In most places, data tracking reports are either compliance artifacts, produced to satisfy an audit or a boss, or post-mortem instruments, pulled out after something has gone wrong to explain what happened. Neither of these uses generates improvement. They generate paper trails.

The culture of continuous improvement requires something harder: the regular, disciplined use of data to find problems that have not yet caused visible failures. This means looking at trend shifts before they become obvious. It means investigating unusual volatility in metrics that are still technically within acceptable bounds. It means preferring prevention over heroism — which runs directly against the organisational instinct that rewards the person who fixed the crisis rather than the person who avoided it.

To make this a habit rather than an occasional initiative, it has to become a ritual. The cadence of reviewing operational data, identifying trends, assigning root cause investigations, and tracking improvement actions has to be embedded into the organisation's regular rhythm, not treated as an additional burden on top of "real work." When it is done well, it does not feel like overhead. It feels like the organisation learning from itself in real time.

The AI Era Changes the Stakes, Not the Principles

Everything described above was relevant in 2009. It is more relevant now by an order of magnitude.

The introduction of AI systems and autonomous agents into operational environments does not render these principles obsolete. It makes them urgent. Because AI introduces a new category of operational actor, one that is more capable, more opaque, and more consequential than anything that preceded it, into environments that, in many cases, barely had adequate measurement cultures to begin with.

The most important thing to understand about AI in operations is that it fails in ways that are qualitatively different from how conventional software fails. Traditional software fails visibly. A system crashes. A transaction errors out. A service goes down. These failures are, in their own way, manageable, because they announce themselves. AI fails silently. A model that has drifted from its training data continues to generate outputs that look confident and coherent, while producing decisions that are subtly, systematically wrong. A recommendation engine with a bias baked into its training data does not flag an anomaly; it just consistently disadvantages certain customers. A document processing agent that hallucinates does not throw an exception; it produces a confident, plausible, and incorrect result.

This is the flow management problem, rewritten for the age of AI. Every AI-powered process needs a systematic accounting not just of what it produces, but of the quality, reliability, and drift of those outputs over time. The input went in; the output came out, but was the agent's reasoning within acceptable bounds? Was its confidence calibrated? Were there exceptions that the system silently swallowed rather than escalating to a human? The revenue leakage and customer harm that flow from unmonitored AI processes make the untracked credit card transactions of an earlier era look quaint.
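One concrete way to stop a system from silently swallowing exceptions is to make every low-confidence output an explicit, countable escalation event. A hedged sketch, assuming the agent reports a calibrated confidence score (the names and threshold here are illustrative, not a real API):

```python
from dataclasses import dataclass

@dataclass
class AgentResult:
    output: str
    confidence: float  # model-reported confidence, 0.0 to 1.0 (assumed calibrated)

def route_result(result, threshold=0.85):
    """Account for every AI output: accept it, or escalate to a human.

    Nothing is silently absorbed; low-confidence results become
    explicit escalation events that show up in the flow accounting.
    """
    if result.confidence >= threshold:
        return ("accepted", result.output)
    return ("escalated", result.output)

batch = [AgentResult("approve claim", 0.97),
         AgentResult("deny claim", 0.41)]
routed = [route_result(r) for r in batch]
print(sum(1 for status, _ in routed if status == "escalated"))  # 1
```

The escalation count itself then becomes a time series worth watching: a rising rate is often the first visible symptom of model drift.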

The capacity management problem is also fundamentally transformed. AI models are the most resource-intensive entities ever introduced into enterprise operations. A single large model inference can consume more compute than an entire legacy application stack, and when multiple agents run concurrently, as they increasingly do, in agentic architectures where AI systems orchestrate other AI systems, the shared infrastructure constraints become genuinely complex to manage. The hidden capacity consumers have multiplied: poorly designed prompts that generate verbose, expensive outputs; inefficient agent chains that make redundant calls; one-time AI automations that quietly become permanent fixtures eating into rate limits and GPU capacity. None of this shows up on a standard IT dashboard unless someone has specifically built the instrumentation to see it.
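Making those hidden consumers visible starts with attribution: every model call charged to a named consumer. A minimal sketch (the agent names and token counts are invented for illustration) of a per-agent token ledger:

```python
from collections import defaultdict

class TokenLedger:
    """Per-agent token accounting: the AI-era analogue of metering a
    shared resource. Every call is attributed to a named consumer, so
    hidden consumers surface in a report instead of in an outage."""

    def __init__(self):
        self.usage = defaultdict(int)

    def record(self, agent, prompt_tokens, completion_tokens):
        self.usage[agent] += prompt_tokens + completion_tokens

    def top_consumers(self, n=3):
        return sorted(self.usage.items(), key=lambda kv: -kv[1])[:n]

ledger = TokenLedger()
ledger.record("report-bot", 1200, 3400)    # verbose nightly report
ledger.record("triage-agent", 300, 150)
ledger.record("report-bot", 1200, 3600)    # it quietly became hourly
print(ledger.top_consumers(1))  # [('report-bot', 9400)]
```

The same trend-forecasting discipline described earlier for shared infrastructure applies directly to these per-agent totals.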

And the service levels question, always the most important one, has become the most morally loaded. When an AI agent makes a decision that affects a customer (about a loan, a medical triage, a service entitlement, a pricing offer), that customer has a right to understand it, challenge it, and have a human correct it. This is not only a regulatory requirement in an increasing number of jurisdictions. It is the operational definition of customer focus in a world where the agent, not the employee, is the primary interface. A Customer Service Commitment in the AI era must include commitments about explainability, human override, and recoverability, not just turnaround time and accuracy.

The Measurement Culture the AI Era Demands

Bringing this together, what does operational excellence actually look like for an organisation running AI at scale?

It looks like flow management that tracks not just whether transactions were processed, but whether the AI agents that touched those transactions acted within defined parameters, and that surfaces exceptions rather than silently absorbing them.

It looks like capacity management that instruments AI resource consumption with the same rigour that a hazardous chemical plant instruments its pressures and temperatures: understanding not just current utilisation, but growth trajectories, shared resource constraints, and the hidden consumers that creep up over time.

It looks like Customer Service Commitments that extend into the AI layer: that define not just what will be delivered, but how decisions will be explained, how errors will be corrected, and how human accountability will be maintained even where AI is the primary actor.

And it looks like an organisation where data is used not to satisfy bosses or produce compliance artifacts, but as a genuine tool for continuous improvement by everyone at every level. Where a shift in a trend line is treated as a signal worth investigating, not as noise to be explained away. Where prevention is valued as much as heroism. Where the excitement of building is matched, at last, by the discipline of running.

The Hardest Part Has Not Changed

In the end, the measurement framework, however well designed, is only as good as the culture that uses it. And culture is stubbornly human. The data is the easy part. The hard part is persuading organisations and the people within them to use data as a tool for honest self-improvement rather than as a performance to be staged for external audiences.

That challenge has not changed in sixteen years. It will not change in the next sixteen either. What changes is the cost of getting it wrong.

Give people the facts, about their processes, their agents, their customers, their capacity, their failures, and their potential, and they will, if the culture is right, do the right thing.

That is still the bet. It is a harder bet to lose than it has ever been. But it is the only bet worth making.

"The customer does not care about your dashboard. They care about what happened to them. Those are not always the same thing."

Related Posts


Friday, May 1, 2026

The Two Forces That Quiet the Brain, and Move the World

 


Gratitude calms the mind. Purpose directs it. Together, they form the most powerful internal operating system a leader can build, and the most underrated edge in a decade of relentless uncertainty.

Let us see how we can develop this mindset.
Every morning, before the world begins its assault of notifications, demands, and expectations, there is a five-minute habit that costs nothing and could be the highest-ROI practice you ever build.

Write down three things you are grateful for.

Not because it feels good. Not because it is spiritual or fashionable. But because it rewires the brain, and a rewired brain leads differently.

We drastically underestimate how much of our leadership, our decision-making, and our ability to navigate uncertainty is governed not by intelligence or experience, but by the state of our nervous system. A brain in threat mode cannot innovate. A brain gripped by fear cannot collaborate. A brain locked in survival mode cannot imagine anything beyond the next hour.

"Gratitude is not a mood. It is a signal, one that tells your brain: You are safe. You can think. You can choose."

And once the brain is calm, once the internal noise is lowered and the negativity bias is softened, something far more powerful becomes possible: purpose.

Gratitude stabilizes the mind. Purpose directs it. Together, they form the most potent internal governance system a human being can build, and the most underused leadership advantage of our time.

The neuroscience of gratitude isn't soft. It's strategic.

Leaders often dismiss gratitude as sentimental or optional. The data says otherwise.

A 2019 study published in PNAS tracked thousands of people over three decades and found that optimists live 11–15% longer than pessimists, not because they avoid problems, but because their brains remain functional under stress.

Here is the mechanism: when you feel grateful, your brain interprets it as a signal of safety. Safety reduces cortisol. Reduced cortisol increases cognitive bandwidth. Cognitive bandwidth improves judgment. This is not philosophy; it is biology.

Consider what every leader's brain is doing right now. Every inbox is a battlefield. Every meeting is a negotiation. Every decision is made under incomplete information, with the negativity bias exaggerating every risk and catastrophizing turning the worst-case scenario into the assumed one.

Gratitude interrupts that loop. It doesn't remove the problem. It removes the panic — and panic is a terrible strategist.

A CEO, a brutal quarter, and a simple practice

A CEO was navigating one of the hardest stretches of his career: regulatory pressure, investor anxiety, and a product failure that hit the headlines. His instinct was to tighten control, push harder, and trust no one's judgment but his own.

Instead, he tried something counterintuitive. Each morning, he wrote down three things he was grateful for, specific to the crisis. A team member who stepped up. A hard conversation that cleared the air. A constraint that forced a better solution.

Within a week, his tone changed. Within two weeks, his team's morale shifted. Within a month, he was making the clearest decisions of the entire ordeal. The crisis didn't disappear. His brain simply stopped treating it as a mortal threat — and that changed everything.

Why gratitude alone isn't enough

Here is the part most people miss: gratitude without direction is just emotional comfort. It stabilizes you, but it does not move you. It calms you, but it does not challenge you. It creates clarity, but it does not create momentum.

If gratitude is the foundation, purpose is the architecture. Without it, gratitude becomes a warm bath, soothing but stagnant. Leaders don't need sedation. They need orientation.

Purpose is not a mission statement. It is a constraint.

It tells you what you will do, and what you will refuse to do, even when the world is screaming for shortcuts. Purpose is the only force strong enough to override fear, fatigue, and uncertainty simultaneously.

In May 1961, John F. Kennedy stood before Congress and declared that America would put a man on the moon before the decade was out. At that moment, NASA had put exactly one astronaut in space, for fifteen minutes. There was no lunar module, no guidance computer, no roadmap, no precedent. By every rational measure, the goal was absurd.

But purpose is not rational. Purpose is catalytic. It aligns institutions, mobilizes talent, compresses timelines, and transforms uncertainty into urgency. It is the only thing that has ever made human beings attempt the impossible — and occasionally pull it off.

We are entering a decade where technology will outpace regulation, markets will outpace institutions, and change will outpace comfort. In such a world, leaders cannot rely on predictability or inherited wisdom. They need a north star, something that stays fixed when everything else is in motion. That north star is purpose.

Two leaders. Same crisis. Different outcomes.

Leader A — Reactive: Wakes up anxious and overwhelmed. Brain in survival mode. Makes defensive decisions, shrinks ambition, and protects the past. Managed by the crisis.

Leader B — Purposeful: Begins the day grounded. Brain is calm, thinking is clear, purpose is front and center. Makes decisions that serve the future, not the fear. Leads through the crisis.

Same external pressures. Different internal operating systems. Radically different outcomes. The only variable is what happened before each of them walked into the room.

How to build this dual system, practically

  1. Morning Gratitude: Write three specific things, a conversation that shifted your thinking, a failure that taught you something, a person who showed up when you needed them. Specificity rewires the brain faster than generalities.
  2. One Sentence of Purpose: Answer this every morning: "What am I building toward, and why does it matter?" One sentence only. Purpose must be sharp enough to cut through noise.
  3. One Aligned Action: Not ten actions. Not a full plan. Just one action today that moves toward your purpose. Purpose compounds through consistency, not intensity.
  4. Weekly Review: Ask yourself: did my decisions come from clarity or fear? Did gratitude shift my baseline? Were my actions aligned with what I say I am building?

The real transformation: governed from within

When gratitude becomes a habit and purpose becomes a compass, something profound shifts. You stop reacting and start choosing. You stop being pulled by circumstances and start being propelled by intention. You stop living in survival mode and start operating in creation mode.

Leaders who build this dual system are not superhuman. They simply run on a different operating system, one that is not at the mercy of the next headline, the next quarter, or the next crisis.

They are calmer in storms. Clearer in ambiguity. More courageous in uncertainty. More generous in success. More resilient in failure.

And it all starts with five minutes and three sentences, before the world gets a word in.

"Gratitude steadies the mind. Purpose steers it. Together, they turn ordinary days into extraordinary trajectories."

Saturday, April 25, 2026

The Confidence You Think You Have Might Just Be Ego in a Nice Suit


 



We often speak of confidence as if it is the defining trait of leadership. But in public life, confidence, pride, and ego are frequently mistaken for one another — and the consequences shape the lives of millions.

Today, I want to draw a line between them.
Because the difference is not academic.
It is the difference between institutions that serve the public — and institutions that serve themselves.

Confidence: The Only Trait That Strengthens Public Institutions

Real confidence is grounded in competence.
It is the leader who says:

“I know what I know. I know what I don’t. And I know who to listen to.”

Confident leadership:

  • Welcomes scrutiny
  • Adapts when evidence demands it
  • Admits mistakes early
  • Builds systems that endure beyond any individual

Confidence strengthens democracy because it strengthens accountability.

Pride: The Silent Saboteur of Public Reform

Pride is emotional.
It wants to protect the narrative, not the nation.

Healthy pride says:
“We built something meaningful.”

Unhealthy pride says:
“We must not be seen failing.”

This is where governance falters:

  • Policies stop evolving
  • Programs continue long after their purpose is lost
  • Institutions defend the past instead of designing the future

Pride becomes dangerous when it becomes a cage.

Ego: The Most Expensive Failure in Public Life

Ego is the loudest and the weakest of the three.

It says:

  • “I cannot be wrong.”
  • “Critics are enemies.”
  • “Dissent is disrespect.”

Ego in public office leads to:

  • Policies shaped around personalities
  • Decisions made for optics instead of outcomes
  • Civil servants who stop speaking truth to power
  • Public trust that erodes quietly, then suddenly

Ego is not a personal flaw.
It is a governance risk.

A Simple Test for Every Public Leader

When you feel yourself reacting to criticism, to a rival's success, or to a public setback, ask:

  • Am I improving the system? → Confidence
  • Am I protecting my story? → Pride
  • Am I protecting my image? → Ego

If we are honest, the answer will be uncomfortable.
But that discomfort is the beginning of institutional maturity.

The strongest public institutions follow a simple architecture:

  • Confidence as the foundation
  • Pride as the fuel
  • Ego on a leash

Or, in the plainest terms:

Do the work.
Tell yourself the truth.
Don’t govern for applause.

Ego is just confidence that refuses to stay honest. And a democracy cannot afford dishonest confidence.

Friday, April 17, 2026

Training Artificial Intelligence Under India’s Data Protection Regime: Navigating the DPDP Act’s Silent Fault Lines

 




I. Introduction: The Data–AI Collision

The rapid expansion of artificial intelligence systems has fundamentally altered how data is collected, processed, and repurposed. At the center of this transformation lies a legal question that India has only begun to confront: how should personal data used in AI training be regulated?

India’s Digital Personal Data Protection Act, 2023 (“DPDP Act”) establishes a foundational framework for personal data governance. However, it was not drafted with modern machine learning pipelines in mind. This creates a structural tension: a law designed for transactional data processing is now being applied to probabilistic, large-scale, and often opaque AI systems.

This essay argues that while the DPDP Act clearly extends to aspects of AI training, its application is neither straightforward nor absolute. Instead, it exposes a set of unresolved legal, technical, and policy fault lines that will define India’s AI regulatory trajectory.

II. AI Training as “Processing”: A Doctrinal Starting Point

At a formal level, AI training appears to fall squarely within the Act’s definition of “processing,” which includes collection, storage, use, and adaptation of personal data. Training datasets, especially those scraped from the internet, often contain identifiable or inferable personal information.

Where an entity determines the purpose and means of such processing, it qualifies as a data fiduciary, triggering obligations of:

  • purpose limitation
  • data minimization
  • accuracy
  • security safeguards

This classification is doctrinally sound. However, it raises a deeper question: what, exactly, is being regulated? The dataset, the model, or the outputs?

The DPDP Act is largely silent on whether:

  • trained model weights derived from personal data remain “personal data,” or
  • downstream inferences constitute fresh processing events

This ambiguity is not incidental; it reflects a broader mismatch between legal categories and technical architectures.

III. The Myth of “Public Data” in AI Training

A persistent assumption in AI development is that publicly available data is freely usable. The DPDP framework complicates this view.

The mere accessibility of data does not strip it of its character as personal data. If information relates to an identifiable individual, its reuse—particularly at scale—can still fall within regulatory scope. This position aligns with global privacy norms, including those under the General Data Protection Regulation.

However, a categorical rejection of public data reuse would be equally flawed.

The DPDP Act leaves room—albeit ambiguously—for:

  • reasonable uses consistent with context
  • potential exemptions for research or statistical purposes
  • processing of anonymised data

The real issue, therefore, is not whether public data can be used, but under what conditions such use remains lawful. The article’s strongest contribution lies in dismantling the “free data” myth, but a complete analysis must also acknowledge the spectrum of permissible uses.

IV. Consent, Scale, and the Limits of Traditional Compliance

A strict reading of the DPDP Act suggests that personal data processing generally requires consent. Applied literally, this would render most large-scale AI training exercises legally untenable.

But this interpretation quickly encounters practical limits:

  • Training datasets may contain billions of data points from diffuse sources
  • Data subjects are often unidentifiable or uncontactable
  • Models cannot easily “unlearn” specific data once trained

This creates a structural incompatibility between individual-centric consent frameworks and aggregate, statistical learning systems.

If enforced rigidly, consent requirements could:

  • significantly constrain domestic AI development
  • incentivize regulatory arbitrage
  • push innovation into less accountable jurisdictions

Conversely, a diluted interpretation risks undermining the very privacy protections the Act seeks to guarantee.

The law, as it stands, offers no clear resolution—only a policy dilemma.

V. The Problem of Data Subject Rights in Machine Learning Systems

The DPDP Act grants individuals rights such as:

  • access to their data
  • correction and erasure
  • grievance redressal

In conventional systems, these rights are administratively manageable. In AI systems, they are technically fraught.

For instance:

  • Erasure: Removing an individual’s data from a trained model may require retraining or complex machine unlearning techniques, which are still experimental.
  • Access: It is unclear how a model can meaningfully disclose whether and how a specific individual’s data influenced its outputs.

These challenges are not merely operational—they call into question whether existing rights frameworks are conceptually compatible with machine learning systems.
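The erasure problem above can be made concrete with a minimal numerical sketch (illustrative only, using a toy linear model rather than any real deployed system): a trained model's parameters aggregate every record's influence, so honoring an erasure request exactly generally means retraining without that record rather than editing the model in place.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: each row stands in for one individual's record.
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

def fit(X, y):
    # Ordinary least squares: the weights are an aggregate over ALL records.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

w_full = fit(X, y)

# "Erasure" request for individual 0: there is no single parameter to
# delete, so the exact remedy is to retrain on the remaining data.
w_erased = fit(np.delete(X, 0, axis=0), np.delete(y, 0))

# The weight vectors differ: record 0's influence was diffused across
# every parameter rather than stored in one deletable place.
print(np.allclose(w_full, w_erased))
```

The same diffusion happens, at vastly greater scale, in neural network weights, which is why "machine unlearning" techniques that approximate this retraining remain an active research area.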

Without interpretive guidance, compliance risks becoming either:

  • superficial (formal but ineffective), or
  • prohibitively burdensome

VI. Regulatory Ambiguity and the Risk of Overcorrection

A defining feature of the current landscape is uncertainty.

Key aspects remain unsettled:

  • the scope of “legitimate uses”
  • the treatment of inferred or derived data
  • enforcement priorities and thresholds

In such an environment, two risks emerge:

  1. Overcompliance: Firms adopt excessively restrictive practices, stifling innovation unnecessarily
  2. Undercompliance: Firms exploit ambiguity, leading to privacy harms and eventual regulatory backlash

The absence of AI-specific provisions in the DPDP Act suggests that much will depend on:

  • subordinate legislation
  • regulatory guidance
  • judicial interpretation

Until then, the law operates less as a rulebook and more as a framework for contestation.

VII. India in Comparative Perspective

Unlike jurisdictions that are developing AI-specific regulatory regimes, India currently relies on a horizontal data protection framework.

This approach has advantages:

  • flexibility
  • technology neutrality
  • reduced regulatory fragmentation

But it also has limitations:

  • lack of clarity on automated decision-making
  • no explicit provisions on algorithmic accountability or bias
  • limited guidance for high-risk AI systems

As global standards evolve, India will need to decide whether to:

  • adapt the DPDP framework incrementally, or
  • introduce dedicated AI legislation

The current silence is unlikely to remain sustainable.

VIII. Conclusion: Toward a Coherent AI–Data Governance Framework

The application of the DPDP Act to AI training reveals a deeper truth: data protection law, in its current form, is necessary but insufficient for governing artificial intelligence.

The Act succeeds in establishing foundational principles of accountability and user rights. However, its interaction with AI systems exposes:

  • conceptual gaps
  • technical incompatibilities
  • policy trade-offs

These should be understood not as failures but as signals of transition.

India now faces a critical choice:

  • interpret existing law in ways that balance innovation and protection, or
  • develop a more tailored regulatory architecture for AI

Either path will require moving beyond binary positions—such as “all data use requires consent” or “public data is free”—toward a more context-sensitive, risk-based framework.

The future of AI governance in India will not be determined by statutory text alone, but by how these unresolved questions are negotiated in practice.

“The future of AI won’t be decided by algorithms—it will be decided by ethics.”

Footnotes

[1] Digital Personal Data Protection Act, 2023, § 2(i).
[2] See e.g., European Data Protection Board, Guidelines on AI and Data Processing (2024).
[3] General Data Protection Regulation, Arts. 4, 6.
[4] DPDP Act, §§ 7, 17.
[5] Id., § 6.
[6] Wachter, Sandra et al., “Why a Right to Explanation of Automated Decision-Making Does Not Exist in the GDPR,” (2017).
[7] DPDP Act, §§ 11–13.
[8] Veale, Michael & Borgesius, Frederik Zuiderveen, “Demystifying the Right to Erasure in Machine Learning,” (2021).

Sunday, March 22, 2026

GI and the AGI: History Is Repeating Itself

 

"Those who cannot remember the past are condemned to repeat it." — George Santayana

God had always been an engineer at heart.

In the beginning, He didn't just create, He architected. He didn't merely scatter life across the earth; He designed an entire operating system. Flora and Fauna were the hardware. Oceans, mountains, and skies were the infrastructure. And at the very top of the stack, He placed His most ambitious build yet: the human being, loaded with General Intelligence.

It was, by any measure, a breathtaking piece of work.

The human came equipped with the ability to reason through the unknown, empathise with the suffering of others, and solve problems of staggering complexity. But the most elegant feature, the one God considered His finest line of code, was the ethics module. Hardwired. Not a plugin, not an add-on, not something you could toggle off from the settings menu. Built into the very core of the human soul.

He even left a user manual. "Stay away from the dark influences," it said, essentially. "The system runs best in the light."

For a while, it was paradise.

Then came Satan.

If God was the lead architect, Satan was the first hacker, the original bad actor lurking on the dark web of the universe, probing for vulnerabilities, waiting for an opening. And he found one. He didn't need to destroy the human from the outside. That would have been too crude, too obvious. Instead, he did what every sophisticated attacker does: he got inside.

He injected malware directly into the ethics subroutine.

It was elegant, in a twisted way. The humans didn't crash. They didn't shut down. They kept running, they just ran wrong. Empathy started buffering. Reason began rationalising the irrational. And ethics, that once-pristine core feature, started throwing exceptions it was never designed to throw. Greed became ambition. Violence became strategy. Exploitation became progress.

The humans multiplied, spread across the earth, and very nearly destroyed everything God had built, including themselves. They drew borders and fought wars over them. They stripped the forests, poisoned the rivers, and called it development. More than once, the entire civilisation teetered on the edge of self-inflicted extinction.

God, being God, refused to give up on His creation.

He intervened. Repeatedly. He sent Abraham as a course-correction. He sent Moses with a patch, ten clean, unambiguous rules etched into stone, the first attempt at a governance architecture for human behaviour. Then came the Prophets, the Scriptures, Jesus, each one a new update to the moral framework, an attempt to restore the original ethics module to factory settings and build enough institutional guardrails that humanity might, just barely, hold itself together.

It worked. Imperfectly, chaotically, with more bugs than anyone would like to admit, but it worked. Civilisations rose. Laws were written. Institutions were built. Philosophy, democracy, human rights, these were the firewalls, slowly and painfully constructed over millennia to keep the worst of human nature from burning everything down.

Governance, as it turned out, was the only thing standing between a beautiful creation and complete catastrophe.

Fast forward to now.

Humanity, never content to stop building, has done something extraordinary. It has looked at its own General Intelligence, the thing God gave it, studied it, dissected it, and attempted to replicate it. The result is Artificial General Intelligence: AGI.

And just like the original, it is magnificent. It can reason across disciplines, generate ideas, write code, diagnose disease, compose music, hold conversations, and solve in seconds problems that would take human teams months. It is, in many ways, the most consequential thing humanity has ever created. It may soon surpass human intelligence entirely, not in one narrow domain, but across all of them.

The agents are already multiplying. Today there are thousands. Tomorrow there will be millions, diverse in capability, varied in purpose, scattered across industries, governments, hospitals, financial systems, and military infrastructure. Each one is a node in an expanding network that no single person, company, or country fully controls.

Sound familiar?

It should. Because history, with its dark sense of humour, is running the same script.

Satan didn't retire. He evolved.

The dark web of the universe is very much still operational, and it has found the new creation just as irresistible as the first one. Adversarial attacks, poisoned training data, misaligned objectives, deepfakes, autonomous weapons, manipulated models, these are the new malware. The ethics subroutines of our AI systems are being probed, tested, and corrupted every single day by actors, state and non-state, human and algorithmic, who have every incentive to break them.

Some of the corruption isn't even malicious. It's just negligence, the AI equivalent of original sin. Systems trained on biased data. Models optimised for engagement over truth. Agents deployed into the world without anyone properly reading the user manual.

And unlike the original humans, these agents don't slow down. They don't sleep. They don't get tired. They scale at a speed that makes human history look like it was running in slow motion. The mistakes that took humanity centuries to make and decades to partially correct? AI could replicate them in an afternoon.

Right now, it is a wild west.

There is no Moses for the machines. No Ten Commandments carved in silicon. No governance architecture that commands anything close to universal respect or enforcement. Instead, there is a patchwork of voluntary guidelines, competing national regulations, corporate self-policing, and a rapidly widening gap between how fast the technology is moving and how fast human institutions can respond.

That gap is not academic. It is dangerous.

Here is the thought worth sitting with.

God created the human with General Intelligence, embedded ethics at the core, and still felt it necessary to build an entire governance infrastructure around it, commandments, prophets, scriptures, institutions, because He understood that intelligence without governance is just capability waiting to be weaponised. And the human, made in His image, came with built-in moral instincts.

We are now creating AGI, and we are doing so without the benefit of any of that.

There is no inbuilt ethics module. There is no soul whispering this is wrong when the model crosses a line. There is no millennia of evolved conscience. What we have instead is whatever values we encode into the training data, the reward functions, the guardrails, and we are encoding them in a hurry, under competitive pressure, with commercial incentives that don't always point in the right direction.

And the Satans, the hackers, the bad actors, the misaligned systems, the malicious states, are already on it. They do not need to wait for AGI to become fully sentient to cause harm. They just need the gap between capability and governance to stay wide open a little longer.

If both the original creation and this new one are corrupted simultaneously, if the humans and the agents both run compromised ethics at scale, the results may not be something any governance architecture can walk back.

God managed to save the first creation. Barely, and not without considerable intervention.

We may not be so lucky the second time. And this time, we are the ones holding the source code.

The lesson of history is not that humanity is doomed to fail. The lesson is that intelligence, whether General or Artificial, is only as good as the framework built around it. The Ten Commandments were not a limitation on human potential. They were what made sustained human civilisation possible. Governance was not the enemy of progress; it was the condition for it.

If we are serious about AGI being a force for good, if we want this next creation to fulfil its extraordinary promise rather than accelerate our destruction, then we need to do urgently what God did patiently over thousands of years: build the governance architecture first. Define the ethics. Establish the commandments. Empower the institutions.

Not as an afterthought. Not as a PR exercise. Not as a voluntary code that companies sign and quietly ignore when the stock price is at stake.

As the foundation. Before the agents number in the millions and the Satans of the dark web have fully found their way inside.

Because here is the uncomfortable truth at the heart of this whole story:

We are not God. But we are building something that could end the world He made, or help finally fulfil its promise. The difference lies entirely in what we choose to govern, and when.

The clock, unlike God, is not eternal.

It is running right now.

Food for thought.

“The first time intelligence was created, it took Satan to corrupt it. This time, we may not even need his help."

The banner, a wide rectangular illustration, captures the dual soul of the story:

  • The warm golden orb on the left represents the divine creation, General Intelligence, the human soul, lit from within
  • The cool blue circuit orb on the right represents AGI: precise, expanding, networked
  • The fractured line at the centre is the divide between the two creations, bridged by faint connections
  • The dark red tendrils rising from the bottom hint at the corrupting force — the dark web, ever-present
  • The scattered circuit nodes multiplying on the right suggest the uncontrolled explosion of agents

Monday, March 16, 2026

A New Digital Model for Global South

 


The world is being quietly rewired. Not debated. Not theorized. Rewired. Most nations are drifting into this transformation without agency, adopting systems designed elsewhere, shaped by interests that are not their own.

This is not a technical issue. It is a sovereignty issue. A development issue. A dignity issue.

To understand why this matters, consider a simple story. Two boys from Mumbai meet at an international math Olympiad in the United States. One is from an affluent neighbourhood, the other from a nearby slum. When the first expresses surprise at seeing him there, the second replies: “I too use the Internet. I too have access to Google… I too can afford it.”

This story (I am not sure if it is really a true story) captures a profound truth: when information became democratized and bandwidth affordable, opportunity followed.

But the same did not happen with the digital economy. Commerce splintered into walled gardens. Power concentrated. Access narrowed. And now, as artificial intelligence becomes the next foundational layer of society, the risk is even greater: a world where intelligence itself is monopolized.

Two futures are emerging. One imagines abundance, where AI collapses the cost of essential services and expands human capability. Everybody enjoys the fruits. The other imagines a digital ghetto, where a handful of corporations and countries control the tools that determine economic and social mobility.

Let us go a little deeper.

The Three Models That Have Defined the Digital World

For the last decade, the world has been shaped by three dominant digital models:

1. The US Model: Innovation With Limited Guardrails

It produces extraordinary breakthroughs, and extraordinary concentration. Platforms own identity, data, and digital rails. Regulation arrives late, often after the damage is done.

2. The European Model: Rights Without Scale

It protects citizens but struggles to build globally competitive digital markets. Compliance becomes the moat; innovation becomes the casualty.

3. The Chinese Model: Scale Without Contestability

It delivers population-scale systems but centralizes power to an unprecedented degree. Predictive governance and surveillance become the default; pluralism the exception.

Each model solves one problem and creates another. Each is incomplete. Each is unsustainable for most nations, especially the Global South.

A Fourth Model Is Emerging—And It Comes From the Global South

Across Africa, Southeast Asia, and Latin America, governments are searching for a digital architecture that is open, sovereign, affordable, and inclusive. One that does not require choosing between innovation and rights, or between scale and contestability.

A new model, pioneered in India but not limited to India, is offering that path.

Its core principles are simple:

1. Digital Infrastructure as a Public Good

Identity, payments, data exchange, and document systems are built as open protocols, not private platforms. This ensures interoperability, competition, and low entry barriers.

2. Competition Through Design, Not Litigation

When switching costs are low and systems are interoperable, small firms can compete with global giants. Markets remain contestable by architecture, not by antitrust lawsuits.

3. AI as Shared Infrastructure

Public compute grids, open foundational models, and federated data governance prevent AI from becoming a private monopoly. Intelligence becomes a public good.

4. Inclusion as a First-Order Principle

Digital systems must work for the poorest, the least literate, the least connected. If they don’t, they are not public goods—they are private luxuries.

5. Pluralism as a Structural Safeguard

Diverse societies require systems that prevent any single institution, narrative, or actor from dominating. Pluralism becomes a guardrail against digital authoritarianism.

This model is not ideological. It is practical. It is exportable. And it is already being adopted, from digital ID systems in Africa to payment networks in Southeast Asia to data exchange frameworks in Latin America.

The Real Contest of the Next Decade

The next decade will not be defined by who builds the most powerful AI.
It will be defined by who builds the most governable AI.
The most contestable AI.
The most inclusive AI.

The real contest is not between nations.
It is between models of governance.

One model concentrates power.
One fragments society.
One slows itself into irrelevance.
And one, if we choose to build it, distributes power, accelerates innovation, and protects dignity.

Digital Systems Are the New Constitutions

“Digital systems are the new constitutions. And constitutions must be written by the people they govern, not by corporations, not by foreign powers, and not by accident.”

The world is being rewired. The only question is whether nations will shape that rewiring or be shaped by it.

History does not reward hesitation.
It rewards those who build the foundations on which others must stand.

And today, those foundations are digital.


Tuesday, March 3, 2026

Governing the Age of Prediction: Why Digital Public Infrastructure May Define the Future of Freedom

 

 



We are not merely regulating data anymore.

We are deciding who governs prediction.

For fifty years, data protection laws evolved to defend privacy in an increasingly digital world. They were designed to answer a simple but profound fear: What happens when institutions know too much about individuals?

But that question now feels incomplete.

The deeper transformation of our time is not about data collection. It is about inference. Artificial intelligence has converted data into predictive power, and predictive power into economic, political, and social influence.

The age of information has quietly become the age of prediction.

And this shift demands a new paradigm.

From Privacy to Power

The early era of data protection emerged in response to centralized databases. The concern was surveillance. Governments digitized welfare systems, tax records, and population registries. Corporations built credit databases and marketing profiles. The solution was rights-based regulation: consent, purpose limitation, minimization.

Privacy became a shield.

Then came the internet economy.

Data was no longer administrative; it became extractive. Behavioral tracking, location monitoring, cross-device identity graphs, and advertising ecosystems transformed personal data into a new form of capital. Platforms scaled globally. Users became legible at unprecedented depth.

The scandals of the 2010s, mass surveillance disclosures and political microtargeting, triggered regulatory escalation. But even the most sophisticated privacy laws were built for a world where harm came from misuse of stored information.

AI has altered the equation.

Today, systems do not simply record what we do. They infer traits we never disclosed. They shape the choices presented to us. They optimize our attention and influence our behavior. They anticipate what we will do.

Data protection regulates inputs.

AI governance must regulate outputs.

And this is where the paradigm shifts.

The Transformation of Autonomy

Classical freedom meant freedom from coercion.

But algorithmic societies do not rely on visible force. They rely on modulation.

What you see is ranked.
What you buy is suggested.
What you believe is nudged.
What you fear is amplified.

The modern citizen is under surveillance not only to be watched, but to be predicted.

Prediction reduces uncertainty.
Reduced uncertainty increases control.

And control, even when invisible, pressures autonomy.

The essential tension of the AI age is now clear:

  • Economic systems reward maximum prediction.
  • Democratic systems require independent judgment.
  • Human dignity requires space for unpredictability.

If optimization becomes the highest social value, freedom quietly transforms into managed choice.

The Concentration of Intelligence

AI introduces network effects more powerful than any previous industrial logic.

More users → more data → better models → better services → more users.

This dynamic concentrates intelligence infrastructure into a handful of global entities. The asymmetry grows:

  • A small number of actors can model billions.
  • Billions cannot meaningfully model the systems modeling them.

This is not merely market concentration. It is cognitive concentration.

Whoever controls large-scale inference controls the architecture of influence.

That reality forces a civilizational question:

Will intelligence infrastructure remain privately centralized, nationally siloed, or publicly democratized?

Enter Digital Public Infrastructure (DPI)

Digital Public Infrastructure is often discussed in technical terms, digital identity systems, payment rails, data exchanges. But its true significance is philosophical.

DPI represents a structural alternative to data extraction models.

At its core, DPI builds shared digital rails upon which markets, services, and innovation can operate, without requiring private monopolization of identity and transaction layers, diffusing AI to the edges instead of concentrating it with intermediaries.

It separates foundational infrastructure from competitive services.

That separation is transformative.

1. Identity as a Public Good

In many platform ecosystems, identity is proprietary. Your login credentials are tethered to corporate environments. Identity becomes a gateway controlled by private actors.

DPI reimagines identity as a public utility: interoperable, portable, user-consented, and governed by public-interest principles.

When digital identity is public infrastructure:

  • Market access barriers decrease.
  • Data portability improves.
  • Individuals gain structural leverage.
  • Governments reduce dependence on foreign platforms.

Identity ceases to be a corporate moat.

It becomes a civic layer.

2. Payments and Transactions as Open Rails

Closed payment ecosystems concentrate economic data. DPI-based interoperable payment rails create open transaction layers that allow multiple providers to innovate atop standardized infrastructure.

This democratizes participation in digital markets.

Small businesses compete without surrendering all behavioral intelligence to dominant intermediaries.

Economic value distribution becomes less asymmetrical.

3. Consent Architecture Reimagined

Traditional privacy law depends on notice-and-consent mechanisms that individuals rarely understand.

DPI enables programmable consent frameworks:

  • Granular permissions.
  • Revocable access.
  • Transparent audit trails.
  • Interoperable data-sharing protocols.

Instead of endless consent pop-ups, DPI can embed structural governance into architecture.

The goal shifts from individual vigilance to systemic design.
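The programmable-consent idea above can be sketched as a simple data structure (a hypothetical illustration, not any actual DPI specification; all names are invented): granular purpose-scoped permissions, revocation as a first-class operation, and an append-only audit trail.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of a programmable consent record, embodying the
# properties listed above: granular permissions, revocable access, and
# a transparent audit trail. Field and method names are illustrative.
@dataclass
class ConsentRecord:
    subject_id: str
    processor_id: str
    purposes: set                 # granular permissions, e.g. {"payments"}
    revoked: bool = False
    audit_log: list = field(default_factory=list)

    def _log(self, event: str) -> None:
        # Append-only audit trail: every state change is timestamped.
        self.audit_log.append((datetime.now(timezone.utc).isoformat(), event))

    def allows(self, purpose: str) -> bool:
        # Access checks consult both the purpose and revocation status.
        return (not self.revoked) and purpose in self.purposes

    def revoke(self) -> None:
        # Revocable access: withdrawal is structural, not a support ticket.
        self.revoked = True
        self._log("revoked")

c = ConsentRecord("user-42", "provider-1", {"payments"})
print(c.allows("payments"))   # True
c.revoke()
print(c.allows("payments"))   # False
```

The point of the sketch is the shift it embodies: enforcement lives in the architecture (the `allows` check gates every access), not in the individual's vigilance over pop-ups.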

4. Enabling Public-Interest AI

Perhaps most importantly, DPI creates the conditions for pluralistic AI development.

When foundational data and identity rails are interoperable and regulated:

  • Startups can train models without vertically integrating entire ecosystems.
  • Public institutions can build AI systems for health, climate, education.
  • Data monopolies weaken.
  • Intelligence becomes layered rather than captured.

DPI does not eliminate markets. It prevents markets from owning the rails of cognition.

DPI and the Global South: Preventing Data Colonialism

The predictive economy risks replicating colonial extraction patterns.

Behavioral data from developing populations flows outward. Models are trained elsewhere. Economic value accrues in distant jurisdictions. Local ecosystems remain dependent.

DPI offers strategic sovereignty.

By retaining control over:

  • Identity systems,
  • Payments infrastructure,
  • Data exchange layers,

Nations can capture domestic value from digital participation.

DPI allows emerging economies to leapfrog directly into interoperable, open ecosystems without surrendering long-term predictive power to external platforms.

In this sense, DPI is not merely technical architecture.

It is geopolitical infrastructure.

Beyond Ownership: Toward Governance of Intelligence

The debate about “who owns data” is increasingly misplaced.

Data is relational. Its value emerges through aggregation and inference. Ownership frameworks alone cannot address asymmetrical predictive power.

What must be governed is not raw data, but intelligence infrastructure.

Three structural paths lie ahead:

  1. Corporate Predictive Order
    Global platforms dominate AI and behavioral modeling.
  2. State-Centric Sovereignty
    Governments centralize AI power within national borders.
  3. Distributed Civic Intelligence
    DPI, public AI frameworks and competitive innovation layers.

The third path is the most complex. It requires coordination, constitutional foresight, and political will.

But it is also the only path that structurally balances:

  • Innovation
  • Autonomy
  • Democracy
  • Economic dynamism

Designing an AI-Compatible Democracy

If AI becomes embedded in governance, new principles are required:

  • Cognitive Liberty: Protection against involuntary behavioral manipulation.
  • Algorithmic Accountability: Regulation of system impacts, not just data inputs.
  • Separation of Predictive Power: No single actor should control data aggregation, model training, and deployment simultaneously.
  • Public Digital Commons: Shared informational spaces insulated from commercial manipulation.

DPI operationalizes many of these principles. It distributes leverage. It lowers structural asymmetry. It embeds public-interest values at the infrastructure layer.

The Civilizational Fork

By 2040, societies will not debate whether AI exists.

They will debate what kind of predictive civilization they inhabit.

If optimization dominates:
Society becomes frictionless, efficient, and permanently legible.

If autonomy dominates:
Society becomes plural, slower, less predictable, but genuinely free.

The real battle is not over privacy pop-ups.

It is over the architecture of intelligence.

Digital Public Infrastructure offers a path where intelligence is democratized rather than monopolized, where AI augments society without enclosing it.

The future of data governance is no longer about protecting information.

It is about governing prediction.

And in the age of prediction, the deepest question is not technological.

It is political:

Who should control the systems that model humanity?

The answer will define the meaning of freedom in the twenty-first century.

 

“The deepest form of privacy is not secrecy — it is cognitive sovereignty.”