Sunday, March 22, 2026

GI and the AGI: History Is Repeating Itself


"Those who cannot remember the past are condemned to repeat it." — George Santayana

God had always been an engineer at heart.

In the beginning, He didn't just create, He architected. He didn't merely scatter life across the earth; He designed an entire operating system. Flora and Fauna were the hardware. Oceans, mountains, and skies were the infrastructure. And at the very top of the stack, He placed His most ambitious build yet: the human being, loaded with General Intelligence.

It was, by any measure, a breathtaking piece of work.

The human came equipped with the ability to reason through the unknown, empathise with the suffering of others, and solve problems of staggering complexity. But the most elegant feature, the one God considered His finest line of code, was the ethics module. Hardwired. Not a plugin, not an add-on, not something you could toggle off from the settings menu. Built into the very core of the human soul.

He even left a user manual. "Stay away from the dark influences," it said, essentially. "The system runs best in the light."

For a while, it was paradise.

Then came Satan.

If God was the lead architect, Satan was the first hacker, the original bad actor lurking on the dark web of the universe, probing for vulnerabilities, waiting for an opening. And he found one. He didn't need to destroy the human from the outside. That would have been too crude, too obvious. Instead, he did what every sophisticated attacker does: he got inside.

He injected malware directly into the ethics subroutine.

It was elegant, in a twisted way. The humans didn't crash. They didn't shut down. They kept running, they just ran wrong. Empathy started buffering. Reason began rationalising the irrational. And ethics, that once-pristine core feature, started throwing exceptions it was never designed to throw. Greed became ambition. Violence became strategy. Exploitation became progress.

The humans multiplied, spread across the earth, and very nearly destroyed everything God had built, including themselves. They drew borders and fought wars over them. They stripped the forests, poisoned the rivers, and called it development. More than once, the entire civilisation teetered on the edge of self-inflicted extinction.

God, being God, refused to give up on His creation.

He intervened. Repeatedly. He sent Abraham as a course-correction. He sent Moses with a patch, ten clean, unambiguous rules etched into stone, the first attempt at a governance architecture for human behaviour. Then came the Prophets, the Scriptures, Jesus, each one a new update to the moral framework, an attempt to restore the original ethics module to factory settings and build enough institutional guardrails that humanity might, just barely, hold itself together.

It worked. Imperfectly, chaotically, with more bugs than anyone would like to admit, but it worked. Civilisations rose. Laws were written. Institutions were built. Philosophy, democracy, human rights, these were the firewalls, slowly and painfully constructed over millennia to keep the worst of human nature from burning everything down.

Governance, as it turned out, was the only thing standing between a beautiful creation and complete catastrophe.

Fast forward to now.

Humanity, never content to stop building, has done something extraordinary. It has looked at its own General Intelligence, the thing God gave it, studied it, dissected it, and attempted to replicate it. The result is Artificial General Intelligence: AGI.

And just like the original, it is magnificent. It can reason across disciplines, generate ideas, write code, diagnose disease, compose music, hold conversations, and solve in seconds problems that would take human teams months. It is, in many ways, the most consequential thing humanity has ever created. It may soon surpass human intelligence entirely, not in one narrow domain, but across all of them.

The agents are already multiplying. Today there are thousands. Tomorrow there will be millions, diverse in capability, varied in purpose, scattered across industries, governments, hospitals, financial systems, and military infrastructure. Each one is a node in an expanding network that no single person, company, or country fully controls.

Sound familiar?

It should. Because history, with its dark sense of humour, is running the same script.

Satan didn't retire. He evolved.

The dark web of the universe is very much still operational, and it has found the new creation just as irresistible as the first one. Adversarial attacks, poisoned training data, misaligned objectives, deepfakes, autonomous weapons, manipulated models, these are the new malware. The ethics subroutines of our AI systems are being probed, tested, and corrupted every single day by actors, state and non-state, human and algorithmic, who have every incentive to break them.

Some of the corruption isn't even malicious. It's just negligence, the AI equivalent of original sin. Systems trained on biased data. Models optimised for engagement over truth. Agents deployed into the world without anyone properly reading the user manual.

And unlike the original humans, these agents don't slow down. They don't sleep. They don't get tired. They scale at a speed that makes human history look like it was running in slow motion. The mistakes that took humanity centuries to make and decades to partially correct? AI could replicate them in an afternoon.

Right now, it is a wild west.

There is no Moses for the machines. No Ten Commandments carved in silicon. No governance architecture that commands anything close to universal respect or enforcement. Instead, there is a patchwork of voluntary guidelines, competing national regulations, corporate self-policing, and a rapidly widening gap between how fast the technology is moving and how fast human institutions can respond.

That gap is not academic. It is dangerous.

Here is the thought worth sitting with.

God created the human with General Intelligence, embedded ethics at the core, and still felt it necessary to build an entire governance infrastructure around it, commandments, prophets, scriptures, institutions, because He understood that intelligence without governance is just capability waiting to be weaponised. And the human, made in His image, came with built-in moral instincts.

We are now creating AGI, and we are doing so without the benefit of any of that.

There is no inbuilt ethics module. There is no soul whispering this is wrong when the model crosses a line. There is no millennia of evolved conscience. What we have instead is whatever values we encode into the training data, the reward functions, the guardrails, and we are encoding them in a hurry, under competitive pressure, with commercial incentives that don't always point in the right direction.

And the Satans, the hackers, the bad actors, the misaligned systems, the malicious states, are already on it. They do not need to wait for AGI to become fully sentient to cause harm. They just need the gap between capability and governance to stay wide open a little longer.

If both the original creation and this new one are corrupted simultaneously, if the humans and the agents both run compromised ethics at scale, the results may not be something any governance architecture can walk back.

God managed to save the first creation. Barely, and not without considerable intervention.

We may not be so lucky the second time. And this time, we are the ones holding the source code.

The lesson of history is not that humanity is doomed to fail. The lesson is that intelligence, whether General or Artificial, is only as good as the framework built around it. The Ten Commandments were not a limitation on human potential. They were what made sustained human civilisation possible. Governance was not the enemy of progress; it was the condition for it.

If we are serious about AGI being a force for good, if we want this next creation to fulfil its extraordinary promise rather than accelerate our destruction, then we need to do urgently what God did patiently over thousands of years: build the governance architecture first. Define the ethics. Establish the commandments. Empower the institutions.

Not as an afterthought. Not as a PR exercise. Not as a voluntary code that companies sign and quietly ignore when the stock price is at stake.

As the foundation. Before the agents number in the millions and the Satans of the dark web have fully found their way inside.

Because here is the uncomfortable truth at the heart of this whole story:

We are not God. But we are building something that could end the world He made, or help finally fulfil its promise. The difference lies entirely in what we choose to govern, and when.

The clock, unlike God, is not eternal.

It is running right now.

Food for thought.

“The first time intelligence was created, it took Satan to corrupt it. This time, we may not even need his help.”

The banner: a wide rectangular illustration that captures the dual soul of the story:

  • The warm golden orb on the left represents the divine creation, General Intelligence, the human soul, lit from within
  • The cool blue circuit orb on the right represents AGI: precise, expanding, networked
  • The fractured line at the centre is the divide between the two creations, bridged by faint connections
  • The dark red tendrils rising from the bottom hint at the corrupting force — the dark web, ever-present
  • The scattered circuit nodes multiplying on the right suggest the uncontrolled explosion of agents

Monday, March 16, 2026

A New Digital Model for the Global South

The world is being quietly rewired. Not debated. Not theorized. Rewired. Most nations are drifting into this transformation without agency, adopting systems designed elsewhere, shaped by interests that are not their own.

This is not a technical issue. It is a sovereignty issue. A development issue. A dignity issue.

To understand why this matters, consider a simple story. Two boys from Mumbai meet at an international math Olympiad in the United States. One is from an affluent neighbourhood, the other from a nearby slum. When the first expresses surprise at seeing him there, the second replies: “I too use the Internet. I too have access to Google… I too can afford it.”

This story (I am not sure if it is really true) captures a profound truth: when information was democratized and bandwidth became affordable, opportunity followed.

But the same did not happen with the digital economy. Commerce splintered into walled gardens. Power concentrated. Access narrowed. And now, as artificial intelligence becomes the next foundational layer of society, the risk is even greater: a world where intelligence itself is monopolized.

Two futures are emerging. One imagines abundance, where AI collapses the cost of essential services and expands human capability. Everybody enjoys the fruits. The other imagines a digital ghetto, where a handful of corporations and countries control the tools that determine economic and social mobility.

Let us go a little deeper.

The Three Models That Have Defined the Digital World

For the last decade, the world has been shaped by three dominant digital models:

1. The US Model: Innovation With Limited Guardrails

It produces extraordinary breakthroughs, and extraordinary concentration. Platforms own identity, data, and digital rails. Regulation arrives late, often after the damage is done.

2. The European Model: Rights Without Scale

It protects citizens but struggles to build globally competitive digital markets. Compliance becomes the moat; innovation becomes the casualty.

3. The Chinese Model: Scale Without Contestability

It delivers population-scale systems but centralizes power to an unprecedented degree. Predictive governance and surveillance become the default; pluralism, the exception.

Each model solves one problem and creates another. Each is incomplete. Each is unsustainable for most nations, especially the Global South.

A Fourth Model Is Emerging—And It Comes From the Global South

Across Africa, Southeast Asia, and Latin America, governments are searching for a digital architecture that is open, sovereign, affordable, and inclusive. One that does not require choosing between innovation and rights, or between scale and contestability.

A new model, pioneered in India but not limited to India, is offering that path.

Its core principles are simple:

1. Digital Infrastructure as a Public Good

Identity, payments, data exchange, and document systems are built as open protocols, not private platforms. This ensures interoperability, competition, and low entry barriers.

2. Competition Through Design, Not Litigation

When switching costs are low and systems are interoperable, small firms can compete with global giants. Markets remain contestable by architecture, not by antitrust lawsuits.

3. AI as Shared Infrastructure

Public compute grids, open foundational models, and federated data governance prevent AI from becoming a private monopoly. Intelligence becomes a public good.

4. Inclusion as a First-Order Principle

Digital systems must work for the poorest, the least literate, the least connected. If they don’t, they are not public goods—they are private luxuries.

5. Pluralism as a Structural Safeguard

Diverse societies require systems that prevent any single institution, narrative, or actor from dominating. Pluralism becomes a guardrail against digital authoritarianism.

This model is not ideological. It is practical. It is exportable. And it is already being adopted, from digital ID systems in Africa to payment networks in Southeast Asia to data exchange frameworks in Latin America.

The Real Contest of the Next Decade

The next decade will not be defined by who builds the most powerful AI.
It will be defined by who builds the most governable AI.
The most contestable AI.
The most inclusive AI.

The real contest is not between nations.
It is between models of governance.

One model concentrates power.
One fragments society.
One slows itself into irrelevance.
And one, if we choose to build it, distributes power, accelerates innovation, and protects dignity.

Digital Systems Are the New Constitutions

“Digital systems are the new constitutions. And constitutions must be written by the people they govern, not by corporations, not by foreign powers, and not by accident.”

The world is being rewired. The only question is whether nations will shape that rewiring or be shaped by it.

History does not reward hesitation.
It rewards those who build the foundations on which others must stand.

And today, those foundations are digital.


Tuesday, March 3, 2026

Governing the Age of Prediction: Why Digital Public Infrastructure May Define the Future of Freedom

We are not merely regulating data anymore.

We are deciding who governs prediction.

For fifty years, data protection laws evolved to defend privacy in an increasingly digital world. They were designed to answer a simple but profound fear: What happens when institutions know too much about individuals?

But that question now feels incomplete.

The deeper transformation of our time is not about data collection. It is about inference. Artificial intelligence has converted data into predictive power, and predictive power into economic, political, and social influence.

The age of information has quietly become the age of prediction.

And this shift demands a new paradigm.

From Privacy to Power

The early era of data protection emerged in response to centralized databases. The concern was surveillance. Governments digitized welfare systems, tax records, and population registries. Corporations built credit databases and marketing profiles. The solution was rights-based regulation: consent, purpose limitation, minimization.

Privacy became a shield.

Then came the internet economy.

Data was no longer administrative; it became extractive. Behavioral tracking, location monitoring, cross-device identity graphs, and advertising ecosystems transformed personal data into a new form of capital. Platforms scaled globally. Users became legible at unprecedented depth.

The scandals of the 2010s, mass surveillance disclosures and political microtargeting, triggered regulatory escalation. But even the most sophisticated privacy laws were built for a world where harm came from misuse of stored information.

AI has altered the equation.

Today, systems do not simply record what we do. They infer traits we never disclosed. They shape the choices presented to us. They optimize our attention and influence our behavior. They anticipate what we will do.

Data protection regulates inputs.

AI governance must regulate outputs.

And this is where the paradigm shifts.

The Transformation of Autonomy

Classical freedom meant freedom from coercion.

But algorithmic societies do not rely on visible force. They rely on modulation.

What you see is ranked.
What you buy is suggested.
What you believe is nudged.
What you fear is amplified.

The modern citizen is under surveillance not merely to be watched, but to be predicted.

Prediction reduces uncertainty.
Reduced uncertainty increases control.

And control, even when invisible, constrains autonomy.

The essential tension of the AI age is now clear:

  • Economic systems reward maximum prediction.
  • Democratic systems require independent judgment.
  • Human dignity requires space for unpredictability.

If optimization becomes the highest social value, freedom quietly transforms into managed choice.

The Concentration of Intelligence

AI introduces network effects more powerful than any previous industrial logic.

More users → more data → better models → better services → more users.
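
This loop can be sketched as a toy simulation (every constant here is an illustrative assumption, not an empirical value): two platforms start with slightly different user bases, model quality grows with accumulated data, and users drift toward the better service.

```python
# Toy model of the AI network-effect loop:
# more users -> more data -> better models -> better services -> more users.
# All parameters are illustrative assumptions, not empirical values.

def simulate(initial_shares, rounds=30, migration_rate=0.05):
    """Each platform's model quality grows with cumulative data;
    users drift toward the platform with the better model."""
    shares = list(initial_shares)
    data = [0.0] * len(shares)
    for _ in range(rounds):
        # more users -> more data
        data = [d + s for d, s in zip(data, shares)]
        # better model (quality tracks cumulative data) attracts users
        best = max(range(len(shares)), key=lambda i: data[i])
        for i in range(len(shares)):
            if i != best:
                moved = shares[i] * migration_rate
                shares[i] -= moved
                shares[best] += moved
    return shares

# A modest 55/45 head start compounds into near-dominance.
final = simulate([0.55, 0.45])
print([round(s, 2) for s in final])  # prints [0.9, 0.1]
```

Even a small early lead, compounded round after round, produces the concentration the essay describes: the loop rewards whoever is already ahead.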

This dynamic concentrates intelligence infrastructure into a handful of global entities. The asymmetry grows:

  • A small number of actors can model billions.
  • Billions cannot meaningfully model the systems modeling them.

This is not merely market concentration. It is cognitive concentration.

Whoever controls large-scale inference controls the architecture of influence.

That reality forces a civilizational question:

Will intelligence infrastructure remain privately centralized, nationally siloed, or publicly democratized?

Enter Digital Public Infrastructure (DPI)

Digital Public Infrastructure is often discussed in technical terms, digital identity systems, payment rails, data exchanges. But its true significance is philosophical.

DPI represents a structural alternative to data extraction models.

At its core, DPI builds shared digital rails upon which markets, services, and innovation can operate, without requiring private monopolization of identity and transaction layers. It diffuses AI to the edges instead of concentrating it with intermediaries.

It separates foundational infrastructure from competitive services.

That separation is transformative.

1. Identity as a Public Good

In many platform ecosystems, identity is proprietary. Your login credentials are tethered to corporate environments. Identity becomes a gateway controlled by private actors.

DPI reimagines identity as a public utility, interoperable, portable, user-consented, and governed by public-interest principles.

When digital identity is public infrastructure:

  • Market access barriers decrease.
  • Data portability improves.
  • Individuals gain structural leverage.
  • Governments reduce dependence on foreign platforms.

Identity ceases to be a corporate moat.

It becomes a civic layer.
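
What that civic layer might look like can be sketched in a few lines: a credential issued by a public authority that any service can verify, rather than a login tethered to one corporate environment. This is a minimal illustration only — it uses a shared-secret HMAC for brevity where real systems would use public-key signatures, and every field name is hypothetical.

```python
# Sketch of a portable identity credential verifiable by any service,
# rather than a proprietary login. Uses a shared-secret HMAC for brevity;
# real deployments would use public-key signatures. All names hypothetical.
import hashlib
import hmac
import json

AUTHORITY_KEY = b"demo-key-not-for-production"

def issue_credential(subject_id: str, attributes: dict) -> dict:
    """A public authority signs a claim about a subject."""
    payload = json.dumps({"sub": subject_id, "attrs": attributes},
                         sort_keys=True).encode()
    sig = hmac.new(AUTHORITY_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "sig": sig}

def verify_credential(cred: dict) -> bool:
    """Any service holding the authority's key can verify, portably."""
    expected = hmac.new(AUTHORITY_KEY, cred["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["sig"])

cred = issue_credential("citizen-42", {"age_over_18": True})
print(verify_credential(cred))  # prints True
```

The point of the sketch is structural: verification depends on the public authority's key and an open format, not on any one platform's database — which is what makes the identity layer interoperable and portable.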

2. Payments and Transactions as Open Rails

Closed payment ecosystems concentrate economic data. DPI-based interoperable payment rails create open transaction layers that allow multiple providers to innovate atop standardized infrastructure.

This democratizes participation in digital markets.

Small businesses compete without surrendering all behavioral intelligence to dominant intermediaries.

Economic value distribution becomes less asymmetrical.

3. Consent Architecture Reimagined

Traditional privacy law depends on notice-and-consent mechanisms that individuals rarely understand.

DPI enables programmable consent frameworks:

  • Granular permissions.
  • Revocable access.
  • Transparent audit trails.
  • Interoperable data-sharing protocols.

Instead of endless consent pop-ups, DPI can embed structural governance into architecture.

The goal shifts from individual vigilance to systemic design.
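
A minimal sketch of what such structural consent might look like in code — granular grants, revocation, and an audit trail enforced by the system itself rather than by pop-ups. All names here are illustrative, not a real DPI API.

```python
# Minimal sketch of a programmable consent record: granular permissions,
# revocable access, and a transparent audit trail, embedded in the
# architecture itself. All names are illustrative, not a real DPI API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ConsentRecord:
    subject: str
    grants: dict = field(default_factory=dict)   # purpose -> allowed fields
    audit: List[str] = field(default_factory=list)

    def grant(self, purpose: str, allowed_fields: set) -> None:
        self.grants[purpose] = set(allowed_fields)
        self.audit.append(f"GRANT {purpose}: {sorted(allowed_fields)}")

    def revoke(self, purpose: str) -> None:
        self.grants.pop(purpose, None)
        self.audit.append(f"REVOKE {purpose}")

    def allowed(self, purpose: str, field_name: str) -> bool:
        # Every access check is logged, making use auditable by design.
        self.audit.append(f"CHECK {purpose}/{field_name}")
        return field_name in self.grants.get(purpose, set())

rec = ConsentRecord("citizen-42")
rec.grant("credit_scoring", {"income"})
print(rec.allowed("credit_scoring", "income"))   # prints True
rec.revoke("credit_scoring")
print(rec.allowed("credit_scoring", "income"))   # prints False
```

The design choice the sketch illustrates: once revocation and logging live in the shared infrastructure, no individual vigilance is required — the system refuses disallowed access and records every attempt.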

4. Enabling Public-Interest AI

Perhaps most importantly, DPI creates the conditions for pluralistic AI development.

When foundational data and identity rails are interoperable and regulated:

  • Startups can train models without vertically integrating entire ecosystems.
  • Public institutions can build AI systems for health, climate, education.
  • Data monopolies weaken.
  • Intelligence becomes layered rather than captured.

DPI does not eliminate markets. It prevents markets from owning the rails of cognition.

DPI and the Global South: Preventing Data Colonialism

The predictive economy risks replicating colonial extraction patterns.

Behavioral data from developing populations flows outward. Models are trained elsewhere. Economic value accrues in distant jurisdictions. Local ecosystems remain dependent.

DPI offers strategic sovereignty.

By retaining control over:

  • Identity systems,
  • Payments infrastructure,
  • Data exchange layers,

Nations can capture domestic value from digital participation.

DPI allows emerging economies to leapfrog directly into interoperable, open ecosystems without surrendering long-term predictive power to external platforms.

In this sense, DPI is not merely technical architecture.

It is geopolitical infrastructure.

Beyond Ownership: Toward Governance of Intelligence

The debate about “who owns data” is increasingly misplaced.

Data is relational. Its value emerges through aggregation and inference. Ownership frameworks alone cannot address asymmetrical predictive power.

What must be governed is not raw data, but intelligence infrastructure.

Three structural paths lie ahead:

  1. Corporate Predictive Order
    Global platforms dominate AI and behavioral modeling.
  2. State-Centric Sovereignty
    Governments centralize AI power within national borders.
  3. Distributed Civic Intelligence
    DPI, public AI frameworks, and competitive innovation layers.

The third path is the most complex. It requires coordination, constitutional foresight, and political will.

But it is also the only path that structurally balances:

  • Innovation
  • Autonomy
  • Democracy
  • Economic dynamism

Designing an AI-Compatible Democracy

If AI becomes embedded in governance, new principles are required:

  • Cognitive Liberty: Protection against involuntary behavioral manipulation.
  • Algorithmic Accountability: Regulation of system impacts, not just data inputs.
  • Separation of Predictive Power: No single actor should control data aggregation, model training, and deployment simultaneously.
  • Public Digital Commons: Shared informational spaces insulated from commercial manipulation.

DPI operationalizes many of these principles. It distributes leverage. It lowers structural asymmetry. It embeds public-interest values at the infrastructure layer.

The Civilizational Fork

By 2040, societies will not debate whether AI exists.

They will debate what kind of predictive civilization they inhabit.

If optimization dominates:
Society becomes frictionless, efficient, and permanently legible.

If autonomy dominates:
Society becomes plural, slower, less predictable, but genuinely free.

The real battle is not over privacy pop-ups.

It is over the architecture of intelligence.

Digital Public Infrastructure offers a path where intelligence is democratized rather than monopolized, where AI augments society without enclosing it.

The future of data governance is no longer about protecting information.

It is about governing prediction.

And in the age of prediction, the deepest question is not technological.

It is political:

Who should control the systems that model humanity?

The answer will define the meaning of freedom in the twenty-first century.

 

“The deepest form of privacy is not secrecy — it is cognitive sovereignty.”