Pages

Sunday, March 22, 2026

GI and the AGI: History Is Repeating Itself

 

"Those who cannot remember the past are condemned to repeat it." — George Santayana

God had always been an engineer at heart.

In the beginning, He didn't just create, He architected. He didn't merely scatter life across the earth; He designed an entire operating system. Flora and Fauna were the hardware. Oceans, mountains, and skies were the infrastructure. And at the very top of the stack, He placed His most ambitious build yet: the human being, loaded with General Intelligence.

It was, by any measure, a breathtaking piece of work.

The human came equipped with the ability to reason through the unknown, empathise with the suffering of others, and solve problems of staggering complexity. But the most elegant feature, the one God considered His finest line of code, was the ethics module. Hardwired. Not a plugin, not an add-on, not something you could toggle off from the settings menu. Built into the very core of the human soul.

He even left a user manual. "Stay away from the dark influences," it said, essentially. "The system runs best in the light."

For a while, it was paradise.

Then came Satan.

If God was the lead architect, Satan was the first hacker, the original bad actor lurking on the dark web of the universe, probing for vulnerabilities, waiting for an opening. And he found one. He didn't need to destroy the human from the outside. That would have been too crude, too obvious. Instead, he did what every sophisticated attacker does: he got inside.

He injected malware directly into the ethics subroutine.

It was elegant, in a twisted way. The humans didn't crash. They didn't shut down. They kept running, they just ran wrong. Empathy started buffering. Reason began rationalising the irrational. And ethics, that once-pristine core feature, started throwing exceptions it was never designed to throw. Greed became ambition. Violence became strategy. Exploitation became progress.

The humans multiplied, spread across the earth, and very nearly destroyed everything God had built, including themselves. They drew borders and fought wars over them. They stripped the forests, poisoned the rivers, and called it development. More than once, the entire civilisation teetered on the edge of self-inflicted extinction.

God, being God, refused to give up on His creation.

He intervened. Repeatedly. He sent Abraham as a course-correction. He sent Moses with a patch, ten clean, unambiguous rules etched into stone, the first attempt at a governance architecture for human behaviour. Then came the Prophets, the Scriptures, Jesus, each one a new update to the moral framework, an attempt to restore the original ethics module to factory settings and build enough institutional guardrails that humanity might, just barely, hold itself together.

It worked. Imperfectly, chaotically, with more bugs than anyone would like to admit, but it worked. Civilisations rose. Laws were written. Institutions were built. Philosophy, democracy, human rights, these were the firewalls, slowly and painfully constructed over millennia to keep the worst of human nature from burning everything down.

Governance, as it turned out, was the only thing standing between a beautiful creation and complete catastrophe.

Fast forward to now.

Humanity, never content to stop building, has done something extraordinary. It has looked at its own General Intelligence, the thing God gave it, studied it, dissected it, and attempted to replicate it. The result is Artificial General Intelligence: AGI.

And just like the original, it is magnificent. It can reason across disciplines, generate ideas, write code, diagnose disease, compose music, hold conversations, and solve in seconds problems that would take human teams months. It is, in many ways, the most consequential thing humanity has ever created. It may soon surpass human intelligence entirely, not in one narrow domain, but across all of them.

The agents are already multiplying. Today there are thousands. Tomorrow there will be millions, diverse in capability, varied in purpose, scattered across industries, governments, hospitals, financial systems, and military infrastructure. Each one is a node in an expanding network that no single person, company, or country fully controls.

Sound familiar?

It should. Because history, with its dark sense of humour, is running the same script.

Satan didn't retire. He evolved.

The dark web of the universe is very much still operational, and it has found the new creation just as irresistible as the first one. Adversarial attacks, poisoned training data, misaligned objectives, deepfakes, autonomous weapons, manipulated models, these are the new malware. The ethics subroutines of our AI systems are being probed, tested, and corrupted every single day by actors, state and non-state, human and algorithmic, who have every incentive to break them.

Some of the corruption isn't even malicious. It's just negligence, the AI equivalent of original sin. Systems trained on biased data. Models optimised for engagement over truth. Agents deployed into the world without anyone properly reading the user manual.

And unlike the original humans, these agents don't slow down. They don't sleep. They don't get tired. They scale at a speed that makes human history look like it was running in slow motion. The mistakes that took humanity centuries to make and decades to partially correct? AI could replicate them in an afternoon.

Right now, it is a wild west.

There is no Moses for the machines. No Ten Commandments carved in silicon. No governance architecture that commands anything close to universal respect or enforcement. Instead, there is a patchwork of voluntary guidelines, competing national regulations, corporate self-policing, and a rapidly widening gap between how fast the technology is moving and how fast human institutions can respond.

That gap is not academic. It is dangerous.

Here is the thought worth sitting with.

God created the human with General Intelligence, embedded ethics at the core, and still felt it necessary to build an entire governance infrastructure around it, commandments, prophets, scriptures, institutions, because He understood that intelligence without governance is just capability waiting to be weaponised. And the human, made in His image, came with built-in moral instincts.

We are now creating AGI, and we are doing so without the benefit of any of that.

There is no inbuilt ethics module. There is no soul whispering this is wrong when the model crosses a line. There is no millennia of evolved conscience. What we have instead is whatever values we encode into the training data, the reward functions, the guardrails, and we are encoding them in a hurry, under competitive pressure, with commercial incentives that don't always point in the right direction.

And the Satans, the hackers, the bad actors, the misaligned systems, the malicious states, are already on it. They do not need to wait for AGI to become fully sentient to cause harm. They just need the gap between capability and governance to stay wide open a little longer.

If both the original creation and this new one are corrupted simultaneously, if the humans and the agents both run compromised ethics at scale, the results may not be something any governance architecture can walk back.

God managed to save the first creation. Barely, and not without considerable intervention.

We may not be so lucky the second time. And this time, we are the ones holding the source code.

The lesson of history is not that humanity is doomed to fail. The lesson is that intelligence, whether General or Artificial, is only as good as the framework built around it. The Ten Commandments were not a limitation on human potential. They were what made sustained human civilisation possible. Governance was not the enemy of progress; it was the condition for it.

If we are serious about AGI being a force for good, if we want this next creation to fulfil its extraordinary promise rather than accelerate our destruction, then we need to do urgently what God did patiently over thousands of years: build the governance architecture first. Define the ethics. Establish the commandments. Empower the institutions.

Not as an afterthought. Not as a PR exercise. Not as a voluntary code that companies sign and quietly ignore when the stock price is at stake.

As the foundation. Before the agents number in the millions and the Satans of the dark web have fully found their way inside.

Because here is the uncomfortable truth at the heart of this whole story:

We are not God. But we are building something that could end the world He made, or help finally fulfil its promise. The difference lies entirely in what we choose to govern, and when.

The clock, unlike God, is not eternal.

It is running right now.

Food for thought.

“The first time intelligence was created, it took Satan to corrupt it. This time, we may not even need his help.”

The banner: a wide rectangular illustration that captures the dual soul of the story:

  • The warm golden orb on the left represents the divine creation, General Intelligence, the human soul, lit from within
  • The cool blue circuit orb on the right represents AGI: precise, expanding, networked
  • The fractured line at the centre is the divide between the two creations, bridged by faint connections
  • The dark red tendrils rising from the bottom hint at the corrupting force — the dark web, ever-present
  • The scattered circuit nodes multiplying on the right suggest the uncontrolled explosion of agents

Monday, March 16, 2026

A New Digital Model for the Global South

 


The world is being quietly rewired. Not debated. Not theorized. Rewired. Most nations are drifting into this transformation without agency, adopting systems designed elsewhere, shaped by interests that are not their own.

This is not a technical issue. It is a sovereignty issue. A development issue. A dignity issue.

To understand why this matters, consider a simple story. Two boys from Mumbai meet at an international math Olympiad in the United States. One is from an affluent neighbourhood, the other from a nearby slum. When the first expresses surprise at seeing him there, the second replies: “I too use the Internet. I too have access to Google… I too can afford it.”

This story (I am not sure it is really true) captures a profound truth: when information was democratized and bandwidth made affordable, opportunity followed.

But the same did not happen with the digital economy. Commerce splintered into walled gardens. Power concentrated. Access narrowed. And now, as artificial intelligence becomes the next foundational layer of society, the risk is even greater: a world where intelligence itself is monopolized.

Two futures are emerging. One imagines abundance, where AI collapses the cost of essential services and expands human capability. Everybody enjoys the fruits. The other imagines a digital ghetto, where a handful of corporations and countries control the tools that determine economic and social mobility.

Let us go a little deeper.

The Three Models That Have Defined the Digital World

For the last decade, the world has been shaped by three dominant digital models:

1. The US Model: Innovation With Limited Guardrails

It produces extraordinary breakthroughs, and extraordinary concentration. Platforms own identity, data, and digital rails. Regulation arrives late, often after the damage is done.

2. The European Model: Rights Without Scale

It protects citizens but struggles to build globally competitive digital markets. Compliance becomes the moat; innovation becomes the casualty.

3. The Chinese Model: Scale Without Contestability

It delivers population-scale systems but centralizes power to an unprecedented degree. Predictive governance and surveillance become the default; pluralism the exception.

Each model solves one problem and creates another. Each is incomplete. Each is unsustainable for most nations, especially the Global South.

A Fourth Model Is Emerging—And It Comes From the Global South

Across Africa, Southeast Asia, and Latin America, governments are searching for a digital architecture that is open, sovereign, affordable, and inclusive. One that does not require choosing between innovation and rights, or between scale and contestability.

A new model, pioneered in India but not limited to India, is offering that path.

Its core principles are simple:

1. Digital Infrastructure as a Public Good

Identity, payments, data exchange, and document systems are built as open protocols, not private platforms. This ensures interoperability, competition, and low entry barriers.

2. Competition Through Design, Not Litigation

When switching costs are low and systems are interoperable, small firms can compete with global giants. Markets remain contestable by architecture, not by antitrust lawsuits.

3. AI as Shared Infrastructure

Public compute grids, open foundational models, and federated data governance prevent AI from becoming a private monopoly. Intelligence becomes a public good.

4. Inclusion as a First-Order Principle

Digital systems must work for the poorest, the least literate, the least connected. If they don’t, they are not public goods—they are private luxuries.

5. Pluralism as a Structural Safeguard

Diverse societies require systems that prevent any single institution, narrative, or actor from dominating. Pluralism becomes a guardrail against digital authoritarianism.

This model is not ideological. It is practical. It is exportable. And it is already being adopted, from digital ID systems in Africa to payment networks in Southeast Asia to data exchange frameworks in Latin America.

The Real Contest of the Next Decade

The next decade will not be defined by who builds the most powerful AI.
It will be defined by who builds the most governable AI.
The most contestable AI.
The most inclusive AI.

The real contest is not between nations.
It is between models of governance.

One model concentrates power.
One fragments society.
One slows itself into irrelevance.
And one, if we choose to build it, distributes power, accelerates innovation, and protects dignity.

Digital Systems Are the New Constitutions

“Digital systems are the new constitutions. And constitutions must be written by the people they govern, not by corporations, not by foreign powers, and not by accident.”

The world is being rewired. The only question is whether nations will shape that rewiring or be shaped by it.

History does not reward hesitation.
It rewards those who build the foundations on which others must stand.

And today, those foundations are digital.


Tuesday, March 3, 2026

Governing the Age of Prediction: Why Digital Public Infrastructure May Define the Future of Freedom

 

 



We are not merely regulating data anymore.

We are deciding who governs prediction.

For fifty years, data protection laws evolved to defend privacy in an increasingly digital world. They were designed to answer a simple but profound fear: What happens when institutions know too much about individuals?

But that question now feels incomplete.

The deeper transformation of our time is not about data collection. It is about inference. Artificial intelligence has converted data into predictive power, and predictive power into economic, political, and social influence.

The age of information has quietly become the age of prediction.

And this shift demands a new paradigm.

From Privacy to Power

The early era of data protection emerged in response to centralized databases. The concern was surveillance. Governments digitized welfare systems, tax records, and population registries. Corporations built credit databases and marketing profiles. The solution was rights-based regulation: consent, purpose limitation, minimization.

Privacy became a shield.

Then came the internet economy.

Data was no longer administrative; it became extractive. Behavioral tracking, location monitoring, cross-device identity graphs, and advertising ecosystems transformed personal data into a new form of capital. Platforms scaled globally. Users became legible at unprecedented depth.

The scandals of the 2010s, mass surveillance disclosures and political microtargeting, triggered regulatory escalation. But even the most sophisticated privacy laws were built for a world where harm came from misuse of stored information.

AI has altered the equation.

Today, systems do not simply record what we do. They infer traits we never disclosed. They shape the choices presented to us. They optimize our attention and influence our behavior. They anticipate what we will do.

Data protection regulates inputs.

AI governance must regulate outputs.

And this is where the paradigm shifts.

The Transformation of Autonomy

Classical freedom meant freedom from coercion.

But algorithmic societies do not rely on visible force. They rely on modulation.

What you see is ranked.
What you buy is suggested.
What you believe is nudged.
What you fear is amplified.

The modern citizen is not under surveillance only to be watched, but to be predicted.

Prediction reduces uncertainty.
Reduced uncertainty increases control.

And control, even when invisible, erodes autonomy.

The essential tension of the AI age is now clear:

  • Economic systems reward maximum prediction.
  • Democratic systems require independent judgment.
  • Human dignity requires space for unpredictability.

If optimization becomes the highest social value, freedom quietly transforms into managed choice.

The Concentration of Intelligence

AI introduces network effects more powerful than any previous industrial logic.

More users → more data → better models → better services → more users.

This dynamic concentrates intelligence infrastructure into a handful of global entities. The asymmetry grows:

  • A small number of actors can model billions.
  • Billions cannot meaningfully model the systems modeling them.

This is not merely market concentration. It is cognitive concentration.

Whoever controls large-scale inference controls the architecture of influence.

That reality forces a civilizational question:

Will intelligence infrastructure remain privately centralized, nationally siloed, or publicly democratized?

Enter Digital Public Infrastructure (DPI)

Digital Public Infrastructure is often discussed in technical terms, digital identity systems, payment rails, data exchanges. But its true significance is philosophical.

DPI represents a structural alternative to data extraction models.

At its core, DPI builds shared digital rails upon which markets, services, and innovation can operate, without requiring private monopolization of identity and transaction layers. It diffuses AI to the edges instead of concentrating it with intermediaries.

It separates foundational infrastructure from competitive services.

That separation is transformative.

1. Identity as a Public Good

In many platform ecosystems, identity is proprietary. Your login credentials are tethered to corporate environments. Identity becomes a gateway controlled by private actors.

DPI reimagines identity as a public utility, interoperable, portable, user-consented, and governed by public-interest principles.

When digital identity is public infrastructure:

  • Market access barriers decrease.
  • Data portability improves.
  • Individuals gain structural leverage.
  • Governments reduce dependence on foreign platforms.

Identity ceases to be a corporate moat.

It becomes a civic layer.

2. Payments and Transactions as Open Rails

Closed payment ecosystems concentrate economic data. DPI-based interoperable payment rails create open transaction layers that allow multiple providers to innovate atop standardized infrastructure.

This democratizes participation in digital markets.

Small businesses compete without surrendering all behavioral intelligence to dominant intermediaries.

Economic value distribution becomes less asymmetrical.

3. Consent Architecture Reimagined

Traditional privacy law depends on notice-and-consent mechanisms that individuals rarely understand.

DPI enables programmable consent frameworks:

  • Granular permissions.
  • Revocable access.
  • Transparent audit trails.
  • Interoperable data-sharing protocols.

Instead of endless consent pop-ups, DPI can embed structural governance into architecture.

The goal shifts from individual vigilance to systemic design.
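To make the idea concrete, here is a purely illustrative sketch of such a programmable consent layer in Python. The `ConsentLedger` class, the scope strings, and the API are hypothetical, invented for this example; they are not part of any real DPI standard or specification.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentGrant:
    """One granular permission: who may access what, and for what declared purpose."""
    grantee: str   # the service requesting access, e.g. a clinic app
    scope: str     # a narrow permission, e.g. "health:lab-results:read"
    purpose: str   # the declared use, recorded so audits can check it later
    revoked: bool = False

class ConsentLedger:
    """Holds a user's grants plus an append-only audit trail of every change."""

    def __init__(self) -> None:
        self.grants: list[ConsentGrant] = []
        self.audit_log: list[tuple[str, str, str]] = []  # (timestamp, action, scope)

    def _log(self, action: str, scope: str) -> None:
        ts = datetime.now(timezone.utc).isoformat()
        self.audit_log.append((ts, action, scope))

    def grant(self, grantee: str, scope: str, purpose: str) -> ConsentGrant:
        g = ConsentGrant(grantee, scope, purpose)
        self.grants.append(g)
        self._log("grant", scope)
        return g

    def revoke(self, grantee: str, scope: str) -> None:
        for g in self.grants:
            if g.grantee == grantee and g.scope == scope and not g.revoked:
                g.revoked = True
                self._log("revoke", scope)

    def is_allowed(self, grantee: str, scope: str) -> bool:
        return any(
            g.grantee == grantee and g.scope == scope and not g.revoked
            for g in self.grants
        )
```

The point of the sketch is structural: once a grant is revoked, every access check fails automatically, and the audit trail records both events. No pop-up, no per-site vigilance; the governance lives in the architecture.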

4. Enabling Public-Interest AI

Perhaps most importantly, DPI creates the conditions for pluralistic AI development.

When foundational data and identity rails are interoperable and regulated:

  • Startups can train models without vertically integrating entire ecosystems.
  • Public institutions can build AI systems for health, climate, education.
  • Data monopolies weaken.
  • Intelligence becomes layered rather than captured.

DPI does not eliminate markets. It prevents markets from owning the rails of cognition.

DPI and the Global South: Preventing Data Colonialism

The predictive economy risks replicating colonial extraction patterns.

Behavioral data from developing populations flows outward. Models are trained elsewhere. Economic value accrues in distant jurisdictions. Local ecosystems remain dependent.

DPI offers strategic sovereignty.

By retaining control over:

  • Identity systems,
  • Payments infrastructure,
  • Data exchange layers,

Nations can capture domestic value from digital participation.

DPI allows emerging economies to leapfrog directly into interoperable, open ecosystems without surrendering long-term predictive power to external platforms.

In this sense, DPI is not merely technical architecture.

It is geopolitical infrastructure.

Beyond Ownership: Toward Governance of Intelligence

The debate about “who owns data” is increasingly misplaced.

Data is relational. Its value emerges through aggregation and inference. Ownership frameworks alone cannot address asymmetrical predictive power.

What must be governed is not raw data, but intelligence infrastructure.

Three structural paths lie ahead:

  1. Corporate Predictive Order
    Global platforms dominate AI and behavioral modeling.
  2. State-Centric Sovereignty
    Governments centralize AI power within national borders.
  3. Distributed Civic Intelligence
    DPI, public AI frameworks, and competitive innovation layers.

The third path is the most complex. It requires coordination, constitutional foresight, and political will.

But it is also the only path that structurally balances:

  • Innovation
  • Autonomy
  • Democracy
  • Economic dynamism

Designing an AI-Compatible Democracy

If AI becomes embedded in governance, new principles are required:

  • Cognitive Liberty: Protection against involuntary behavioral manipulation.
  • Algorithmic Accountability: Regulation of system impacts, not just data inputs.
  • Separation of Predictive Power: No single actor should control data aggregation, model training, and deployment simultaneously.
  • Public Digital Commons: Shared informational spaces insulated from commercial manipulation.

DPI operationalizes many of these principles. It distributes leverage. It lowers structural asymmetry. It embeds public-interest values at the infrastructure layer.

The Civilizational Fork

By 2040, societies will not debate whether AI exists.

They will debate what kind of predictive civilization they inhabit.

If optimization dominates:
Society becomes frictionless, efficient, and permanently legible.

If autonomy dominates:
Society becomes plural, slower, less predictable, but genuinely free.

The real battle is not over privacy pop-ups.

It is over the architecture of intelligence.

Digital Public Infrastructure offers a path where intelligence is democratized rather than monopolized, where AI augments society without enclosing it.

The future of data governance is no longer about protecting information.

It is about governing prediction.

And in the age of prediction, the deepest question is not technological.

It is political:

Who should control the systems that model humanity?

The answer will define the meaning of freedom in the twenty-first century.

 

“The deepest form of privacy is not secrecy — it is cognitive sovereignty.”

Saturday, February 21, 2026

How Open Networks Are Re-Architecting the Digital Future of the Global South


 

The world stands at an inflection point. Open networks are democratising commerce, and AI is poised to democratise intelligence. If these two forces learn to move in harmony, humanity has a genuine chance at a more equitable future. That was the spirit with which I opened the panel at the AI Summit, because the question before us is no longer whether technology will transform society, but who it will transform it for.

The Historical Warning We Cannot Ignore

Every major technological revolution has reshaped the world. The Industrial Revolution unlocked unprecedented productivity, new infrastructure, and innovations across defence, travel, textiles, and more. But it also carried a darker truth: the very technologies that enabled progress were weaponised by a few to conquer, colonise, and control many. The benefits were not shared; they were extracted.

Today’s digital revolution risks repeating that pattern. What began as a promise of openness, participation, and inclusion is drifting toward concentration, gatekeeping, and digital colonisation. The “digital continent” we are building could easily become another empire, unless we choose a different path.

India’s Answer: Digital Public Infrastructure

India chose that different path. We built Digital Public Infrastructure (DPI) to reverse the trend and make the digital world genuinely democratic. Two principles define DPI:

  • Unbundling - separating layers so innovation can flourish independently
  • Interoperability - ensuring systems talk to each other, not lock users in

These principles have already reshaped payments through UPI and are now re‑architecting commerce through ONDC. Open networks do not replace markets; they redesign markets. They create common rails that:

  • Lower entry barriers for small businesses
  • Enable competition without fragmentation
  • Allow innovation to happen at the edges, not only at the centre

In ONDC’s case, the goal was never to build another platform. It was to make commerce itself a shared public capability accessible to kirana stores, micro‑entrepreneurs, logistics providers, startups, and consumers alike.

The Internet of Transactions

We are entering a new phase of the internet—one that is not just a network of information but a network of transactions. This new architecture will be:

  • Interoperable - connecting diverse actors across sectors and borders
  • Inclusive - enabling participation without gatekeepers
  • Iterative - evolving through feedback and experimentation
  • Infrastructure‑led - built on public digital rails, not private silos

This is how we build choice without coercion, scale without centralisation, and innovation without inhibition.

Why This Matters for the Global South

For the Global South, the stakes are even higher. These nations face a dual challenge: massive scale and deep diversity on one side, and resource constraints on the other. The real questions are:

  • Who benefits from scale?
  • Who controls the ecosystem?
  • Who gets left out?

Open networks emerged precisely to address this tension. They are not a technical alternative; they are a governance choice, a choice to separate infrastructure from innovation, protocols from platforms, and power from participation.

As ASEAN, Africa, and other regions explore open digital infrastructures, India’s experience offers a blueprint: open networks can shift the centre of gravity from a few dominant players to millions of participants.

When AI Meets Open Networks

AI can turbocharge open networks. It can:

  • Expand market access for small merchants
  • Improve discovery for consumers
  • Enable smarter matching between buyers and sellers
  • Strengthen trust through fraud detection and verification

But AI also introduces a new risk: concentration of power. If intelligence becomes centralised, we risk replacing platform monopolies with algorithmic monopolies.

The question is not whether AI should be used; it must be. The question is how.

In open systems, AI must be:

  • Augmentative, not extractive
  • Contestable, not monopolised
  • Diversity‑enhancing, not concentration‑driven
  • Accountable, not opaque

This requires careful choices about where intelligence resides: at the edge or only at the core; with users or only with intermediaries; governed by rules or by defaults.

Open networks give us a chance to build AI‑enabled markets without creating AI‑driven monopolies. But this is only possible if governance evolves as fast as technology.

Governance: The Quiet Hero of Digital Transformation

Governance is the least glamorous part of digital transformation, but it is the most decisive. Open networks must answer foundational questions:

  • Who sets and evolves the protocols?
  • How are disputes resolved?
  • How is compliance enforced without stifling innovation?
  • How do we maintain neutrality as ecosystems scale?

As AI becomes embedded, governance must also address:

  • Algorithmic accountability
  • Data rights and consent
  • Cross‑border interoperability
  • Long‑term stewardship

These are not technical questions; they are institutional ones.

The Real Impact: Participation, Not Just Transactions

Open networks fundamentally change who gets to participate in the digital economy.

  • Small merchants reduce dependence on a single platform
  • Startups lower customer acquisition costs
  • Consumers, especially the marginalised, gain choice and transparency
  • Economies become more competitive and resilient

In the Global South, where informality is high and trust is fragile, this is transformative. When layered responsibly, AI can amplify this impact, making services more accessible, markets more efficient, and systems more responsive to local needs.

But inclusion is never automatic. It must be designed, governed, and defended.

What Will Define Success

The success of open networks in the Global South will not be measured by transaction volumes or the sophistication of AI models. It will be defined by:

  • Diversity of participants
  • Resilience of governance
  • Fair distribution of value
  • Ability to innovate without concentration of power

This is difficult work, but it is essential work.

As we move into the panel discussion, we bring together perspectives from global finance, digital infrastructure, market ecosystems, and public policy. The goal is not just to share insights but to shape the collective learning that will guide the next decade of digital transformation.

“AI will amplify whatever architecture we build. If we build open networks, it will amplify inclusion. If we build silos, it will amplify concentration. The choice is ours—and it is urgent.”

Sunday, February 15, 2026

Privacy in the Era of AI, BCI, and BBI

 


When I wrote Privacy Fantasies back in 2010, it was meant to be a provocation—a thought experiment about a world where privacy collapses under the weight of ubiquitous mind‑reading technology. In that imagined 2210 scenario, a simple wearable called Mind‑X allowed anyone to sense others’ emotions, thoughts, and intentions in real time. Secrets evaporated. Society reverted to a globalised version of the pre‑modern village, where everyone knew everything about everyone else.

I didn’t frame it as dystopia. I framed it as inevitability. Technology would push us there; governance, responsibility, and honesty would help us adapt. “Sunlight is the best disinfectant.” Resistance is futile, so shape the future rather than fear it.

Back then, smartphones and social media were only beginning to nibble at the edges of privacy. The idea of collective openness, almost a shared consciousness, felt like science fiction.

So where are we in 2026?

Closer than I expected in 2010.
But still decades, perhaps centuries away from the full Mind‑X dream.

Yet the building blocks are emerging with startling speed.

The Technical Foundations Are Falling Into Place

1. Mind-reading is no longer science fiction

Modern BCIs can already decode:

  • inner speech
  • intentions
  • emotional states
  • even pre‑conscious signals

Some systems achieve ~74% accuracy on imagined sentences. Others translate thoughts into speech for paralysed individuals almost instantly. AI models reconstruct images and words from brain activity with eerie fidelity.

Early consumer‑leaning devices, such as Omi’s forehead sensor and Meta’s neural wristbands, are crude but unmistakable steps toward everyday neural interfaces.

2. Emotional sensing is accelerating

Non‑invasive tools can detect attention, stress, arousal, and other basic states. This is the first glimmer of the “sense emotions during conversations” capability I imagined in 2010.

3. Brain-to-brain interfaces (BBI) are emerging

We now have small groups sharing simple neural signals. High‑bandwidth implants (Neuralink and its competitors) are scaling rapidly. Telepathic collaboration—at least for willing participants—is no longer fantasy.

Timelines: A Realistic Trajectory

2030s–2040s (10–20 years)

  • Consumer BCIs for self‑use
  • Opt‑in emotional sharing between couples or teams
  • Early BBI networks for specialised groups
  • AR glasses with rudimentary “emotion sense”

2050s–2080s (30–60+ years)

  • Something approaching Mind‑X
  • High‑fidelity passive neural sensing
  • AI‑mediated transparency in professional or intimate settings

The full 2210 vision

  • Possibly never in its pure form
  • Or 100–200 years away
  • Not because of technology alone, but because of ethics, law, and human resistance

Many neuroethicists argue that comprehensive, non‑consensual mind access may be physically impossible—or legally forbidden.

The Real Barriers: Ethics, Law, and Human Nature

Neurorights are rising

Chile has already legislated them. The US, EU, and others are debating them. Neural data is being treated as sacred, akin to DNA or fingerprints. Non‑consensual mind‑reading may become the ultimate red line.

Consent will be the cornerstone

Future systems will likely be:

  • opt‑in
  • granular
  • AI‑filtered

Instead of total transparency, we may get enhanced empathy: a softer, more human version of the dream, at least in most parts of the world. With exceptions, perhaps?

Adaptation is already underway

Just as photography, the internet, and smartphones forced society to renegotiate privacy, neural tech is triggering the next wave of debate. My 2010 “fantasy” is colliding with reality, but with guardrails.

A Glimpse of the Future: My Recent Visit

I recently visited a nearly completed brainstorming centre of a high‑powered agency. At its core sits an AI‑controlled orb, part facilitator, part moderator. Every participant around it is tracked continuously: heart rate, facial expressions, micro‑gestures, body language.

A room where biomarkers become part of the conversation.

Is this transparency?
Is this enhanced collaboration?
Or is this the first step toward institutionalised emotional surveillance?

The answer depends entirely on governance and intent.

Harari’s Warning: A Faster Shift Than We Expect

Listening to Yuval Noah Harari’s recent podcast, By 2030, the World Will Be Unrecognizable, I was struck by his argument that the change will come not because of gadgets, but because AI will reshape the very foundations of human society: identity, agency, belief systems.

In the context of BCI and BBI, this raises a profound question:
Can individuality survive when thoughts become shareable?

My view: yes, but only through responsibility and design.
We are building tools that could dissolve individuality, but we are also building the governance frameworks that could preserve it.

Where We Actually Stand

We are on the ramp.
The acceleration since 2010 has been extraordinary.
Precursors to the Mind‑X world may emerge in our lifetime, and almost certainly in our children’s.

But the “village of minds” future remains a distant horizon, shaped as much by values as by technology.

The core insight from 2010 still holds:
We cannot stop this trajectory, but we can steer it.

And the conversation we are having today is exactly the kind of responsible engagement that will determine whether this future empowers humanity, or overwhelms it.

Tail Piece

The truth is this: leaders today are still debating privacy as if we’re in 2010, while the technology has already leapt into 2030. We are entering an era where the human mind becomes a data source, where emotions are measurable, intentions are inferable, and collaboration may soon happen at the speed of thought. And yet most boardrooms are still stuck arguing about cookie banners and data‑sharing policies.

The gap between technological reality and leadership imagination has never been wider.

AI, BCI, and BBI are not “future issues.” They are governance issues, competitive issues, national‑security issues, and societal‑stability issues. The organisations that treat neural data with the same casualness as digital exhaust will face existential backlash. The ones that build guardrails early will define the norms the rest of the world follows.

This is the moment where leadership either evolves, or becomes irrelevant.

Because the next wave of disruption won’t ask for permission.
It won’t wait for regulation.
It won’t pause for ethical debates.

It will simply arrive.

And when it does, the question for leaders will be brutally simple:

Did you shape the future of mental privacy—or did you sleepwalk into it?

 

“We may not stop the merging of minds, but we can still decide what it means to be human.”


Friday, January 23, 2026

India’s Great Tech Leap: How a Once‑Cautious Nation Became the World’s Most Ambitious Innovator

 


For decades, India’s relationship with technology was defined by a paradox.

It produced some of the world’s best engineers, yet lacked the infrastructure to turn that talent into frontier innovation.
It powered global IT services, yet imported the chips that ran its own devices.
It launched spacecraft to Mars, yet struggled to commercialise space technology at scale.

That India is disappearing.

In its place is a nation moving with a speed, confidence, and strategic clarity that has startled even long‑time observers.
Across semiconductors, space, artificial intelligence, quantum technologies, and digital public infrastructure, India is executing one of the most aggressive technology expansions anywhere in the world.

This is not a sprint.
It is a systems‑level transformation, and it is reshaping global power equations.

The Silicon Bet: India’s Semiconductor Awakening

For years, India watched the global semiconductor race from the sidelines.
Today, it is building fabs, packaging units, and design ecosystems with a seriousness that signals a long‑term national commitment.

The shift began with a simple realization:
A nation of 1.4 billion cannot depend on imported chips for its economic and strategic future.

India’s semiconductor mission is now in full execution mode.
Multiple OSAT/ATMP facilities are under construction.
Compound semiconductor fabs, critical for EVs, telecom, and power electronics, are moving fastest.
SCL Mohali is being modernised to anchor sovereign chip capability.
And a new generation of chip‑design startups is emerging under the Design Linked Incentive scheme.

India is not chasing 3‑nanometer logic fabs.
It is chasing strategic relevance, entering through niches where global demand is exploding and competition is thin:
power electronics, RF, automotive chips, and advanced packaging.

It is a pragmatic, disciplined, and deeply strategic entry point.

The New Space Power: India’s Quiet Revolution Above the Clouds

If semiconductors are India’s industrial bet, space is its geopolitical one.

In the last two years, India has achieved milestones that place it in an elite club:
in‑orbit satellite docking, terabytes of solar science data from Aditya‑L1, orbital experimental platforms enabling robotics and propulsion tests, and a rapidly expanding private space ecosystem.

The transformation is profound.
India is no longer defined by occasional headline missions.
It is building repeatable, commercial, scalable space infrastructure.

A third launch pad is under development at Sriharikota.
Reusable launch vehicle tests are accelerating.
A national space station is planned for the 2030s: the Bharatiya Antariksh Station (BAS), an indigenous, modular station being developed by ISRO, targeted to be operational by 2035 and orbiting 400–450 km above Earth to support long‑duration human spaceflight and microgravity research.
Private companies are building propulsion systems, sensors, and small launch vehicles.

India is not just a cost‑efficient spacefaring nation.
It is on the road to becoming a space power, one that can shape markets, standards, and supply chains.

AI at Population Scale: India’s Most Underrated Advantage

While the world debates the ethics and economics of AI, India is quietly building something unique:
AI designed for a billion people.

The National AI Mission is deploying sovereign compute infrastructure at unprecedented scale.
Indian foundational models are emerging across languages, healthcare, agriculture, and governance.
AI‑powered citizen services already reach hundreds of millions of people.

This is India’s superpower:
AI that is tested not in labs but in the real world: messy, diverse, multilingual, and massive.

India’s AI ecosystem is shifting from services to sovereign capability.
From building models for others to building models for itself.
From being a talent exporter to becoming a platform nation.

Quantum: The Next Frontier of National Power

Quantum technology is often described as the “space race of the 21st century.”
India is determined not to repeat the mistakes of the past, where it entered late and played catch‑up.

The National Quantum Mission is investing heavily in quantum computing, quantum communication, and quantum‑secure networks.
City‑to‑city quantum communication links are already being tested.
50–100 qubit systems are in development.
Defence‑grade quantum encryption pilots are underway.

Quantum is not just a scientific pursuit.
It is a national security imperative.
And India is treating it as such.

The Invisible Engine: Digital Public Infrastructure

Behind all these advances lies India’s most powerful and least understood advantage:
Digital Public Infrastructure (DPI).

Aadhaar, UPI, ONDC, DigiLocker, CoWIN, and a growing stack of interoperable digital rails have created a platform for innovation unmatched anywhere in the world.

DPI is India’s operating system.
It enables scale.
It reduces friction.
It democratises access.
It turns a billion people into a billion participants.

This is the foundation on which India’s tech ambitions stand.

The Pattern: A Nation Building Strategic Depth

Across all these domains (chips, space, AI, quantum, DPI), the pattern is unmistakable:

India is building sovereign capability in the technologies that define global power.

Not through slogans.
Not through incrementalism.
But through:

  • Massive public investment
  • Private sector acceleration
  • Global partnerships
  • Talent depth
  • A national appetite for scale

This is not a collection of initiatives.
It is a coherent national strategy.

The Decade Ahead: India’s Moment of Consequence

India’s next challenge is not ambition.
It is discipline.

The world is recalibrating supply chains, rethinking alliances, and rediscovering the value of trusted partners.
India has a 5–7 year window to cement itself as a global technology anchor.

If India sustains this momentum, it will not just participate in the next technological era.
It will shape it.

And for the first time in its modern history, the world is beginning to believe that India might actually do it.

 “Every nation has a moment when it decides who it wants to be. India has chosen to be consequential.”

Sunday, January 4, 2026

Entrepreneurs Start Companies. Bureaucrats End Them

 



When companies are born, they rarely begin with grand org charts, multilayered governance structures, or 200‑page SOP manuals. They begin with a handful of people who are hungry, restless, and unafraid to get their hands dirty. People who don’t wait for permission. People who learn by doing, not by presenting. People who care about purpose, outcomes, and value, not optics, credits, or turf.

These early teams are made of Doers in the truest sense of the word. They take responsibility. They deliver. They improvise. They experiment. They fail fast and recover faster. They don’t hide behind process because there is no process to hide behind. They don’t obsess over structure because the only structure that matters is the one that gets the job done.

This is the spirit that births companies.
This is the spirit that builds movements.
This is the spirit that creates impact.

And then… scale arrives.

And with scale comes the inevitable: systems, processes, governance, measurement, compliance, and the dreaded bean‑counting. None of this is inherently bad. In fact, without these, organisations collapse under their own weight. Stability matters. Accountability matters. Repeatability matters.

But here’s the tragedy:
When the pendulum swings too far toward process, the organisation forgets why it exists.

The machinery becomes more important than the mission.
The rituals become more important than the results.
The compliance becomes more important than the customer.

And in this slow drift from purpose to process, a new species emerges inside the organisation: the Passenger.

The Rise of the Passenger

The Passenger is not incompetent. In fact, they are often articulate, polished, and excellent at navigating internal systems. They know how to write long emails, how to attend meetings, how to escalate, how to cover themselves, and how to stay “aligned.”

But they are not builders.
They are not creators.
They are not owners.

They are more invested in the machinery than the mission. They optimise for internal perception rather than external impact. They care more about credits than outcomes. They follow the rulebook even when the rulebook is outdated. They prioritise safety over speed, predictability over possibility, and optics over ownership.

Passengers don’t kill organisations overnight.
They kill them slowly, by draining the entrepreneurial spirit that once made the organisation alive.

And once Passengers dominate, the Doers either leave or get suffocated. That is the beginning of the end.

The Balance That Determines Survival

Every organisation eventually faces a fundamental question:

How do we preserve the entrepreneurial spirit while building the systems needed for scale?

This balance, or the lack of it, determines whether a company evolves or perishes.

The good news is that many large companies have found a way to keep innovation alive. When they want to open new growth avenues, they carve out a crack team — a small, empowered, entrepreneurial unit with the freedom to experiment, break rules, and move fast. A team that is intentionally kept away from the bureaucratic machinery.

This team is given:

  • A mandate to think differently
  • Freedom from the shackles of BAU
  • Permission to experiment
  • A leader who believes in speed, risk, and disruption

And once this team gains momentum, the mainstream organisation absorbs the learnings and scales the success.

But here’s the catch, and it’s a big one:

If this crack team reports to a BAU leader, the experiment is dead on arrival.

Because BAU leaders optimise for stability, predictability, and risk minimisation. They are not wired for entrepreneurial chaos. They don’t understand the value of a quick strike. They want plans, frameworks, decks, committees, and alignment before taking the first step.

That is how innovation dies, not because the idea was bad, but because the environment was hostile.

A Real Story: How Bureaucracy Kills Momentum

Recently, a company I know attempted such an experiment. They onboarded an entrepreneurial, high‑energy individual from outside, someone with the mindset of a commando, not a clerk. His mandate: open a new geography with massive potential.

He did exactly what a hunter would do.
He identified a powerful early linkage.
He moved fast.
He reached out to the leadership with excitement.

And then came the response, from a leader who had never been required to think like an entrepreneur. Someone steeped in the classic bureaucratic “CYA” culture.

The reply was a masterpiece of corporate paralysis:

“I appreciate your initiative in meeting people in this new territory, but for us to engage effectively, we need context, a structured plan, and alignment on what we are jointly looking to achieve. Without that, it becomes difficult for us to prioritise or commit resources, especially when nothing concrete has been outlined yet. Let us spend another four months studying the market and evolve a detailed implementation plan and then start.”

Brilliantly articulated.
Perfectly structured.
And absolutely spirit‑killing.

This is how you pour cold water on a go‑getter.
This is how you suffocate initiative.
This is how you turn a commando into a clerk.

Is the leader wrong?
Not entirely. Planning matters. Context matters. Alignment matters.

But this is not how hunters operate.
A hunter’s mindset is about the surgical strike: a quick, sharp opening salvo that creates early momentum while the larger plan evolves in parallel.

When the world is moving at breakneck speed, waiting four months to “study the market” is not strategy. It is self‑sabotage.

The World Has Changed. Many Organisations Haven’t.

We live in an era where industries are being disrupted in real time. New technologies, new behaviours, new competitors, everything is shifting faster than traditional organisations can comprehend.

In such a world, the companies that survive will be the ones that can:

  • Experiment fast
  • Learn fast
  • Adapt fast
  • Scale fast

The ones that cling to old models of planning, alignment, and risk‑avoidance will become irrelevant. They will become dinosaurs: large, impressive, and extinct.

The irony is that many organisations talk endlessly about innovation, agility, and transformation. They put these words in their annual reports, their town halls, their strategy decks.

But when a real entrepreneur walks in and tries to do something bold, the system reacts like an immune response, attacking the very thing that could save it.

The Choice Every Leader Must Make

Every leader, especially those running established businesses, must ask themselves a brutally honest question:

Do I want Doers or Passengers?
Doers disrupt the status quo.
Passengers defend it.

Doers take risks.
Passengers avoid them.

Doers create value.
Passengers create paperwork.

If you want innovation, you must protect the Doers.
If you want stability, you will attract Passengers.
If you want longevity, you must balance both, but never let the Passengers dominate.

The future belongs to organisations that can institutionalise entrepreneurship without descending into chaos. That can build systems without killing spirit. That can scale without suffocating initiative.

This is not easy.
But it is necessary.

Because in a world that is changing this fast, the companies that fail to metamorphose will not get a second chance.

They will simply disappear.

“The future belongs to the organisations that can reinvent themselves before the world forces them to.”