The Founder Asking What AI Is For

Days after receiving Oxford's Bodleian Medal, Dr. Shekhar Natarajan is pressing a question the field has tried not to ask.
Written by Arundhati Kumar

OXFORD — On April 20, Dr. Shekhar Natarajan was awarded Oxford University's Bodleian Medal for his work in artificial intelligence ethics. He used his lecture to open with a line his audience was not expecting.

"Poverty is being completely invisible," he said.

Not poverty in the conventional sense. Invisible to the systems that decide whose problems get solved, and whose lives a market is built to serve.

It was the cleanest summary yet of why Natarajan, founder of Orchestro.AI, has become one of the more disruptive voices in a field crowded with them.

A Different Thesis

While Sam Altman, Demis Hassabis, and Dario Amodei have organized the AI debate around capability and control, Natarajan is arguing the field has been measuring the wrong thing. The problem, he says, is not whether powerful AI can be controlled. It is whether intelligence built around optimization is adequate at all.

"Virtue cannot be a guardrail," he said at Oxford. "It has to be the foundation."

Most contemporary AI safety work — reinforcement learning from human feedback, red-teaming, content filters — operates on what Natarajan calls a retroactive model. The system is built first, then taught what it must not do. Through Orchestro.AI, his Dublin, California–based company, he is proposing the inverse: embed moral reasoning into the foundation, before a single decision is made.

He has been making versions of this argument for two years, in venues that have widened in stature — Davos, the AI Summit, now Oxford.

Four Philosophies, Four Different Questions

To understand what makes Natarajan's position distinct, it helps to lay it next to the philosophies of the figures dominating the conversation.

Sam Altman, the chief executive of OpenAI, has organized his work around the pursuit of artificial general intelligence — systems that can match or exceed human capability across most cognitive tasks. The implicit philosophy is one of trajectory: build the most capable system possible, distribute its benefits broadly, and trust that capability, properly governed, will be civilizationally net-positive. The central question, in this frame, is how soon.

Demis Hassabis, the chief executive of Google DeepMind, comes at the problem as a scientist. His work, from AlphaGo to AlphaFold, has framed AI as a tool for solving humanity's hardest scientific problems — protein folding, disease, climate. The philosophy is one of cognition as discovery. The central question, in this frame, is what AI can help us understand.

Dario Amodei, the chief executive of Anthropic, has articulated what he calls safe acceleration — the position that the same labs pushing capability forward are also best positioned to make it safe, provided their safety work keeps pace. The philosophy is one of responsible speed. The central question, in this frame, is how to scale capability without scaling risk.

Natarajan's question is different from all three.

He is not asking how soon the machine can match human capability. He is not asking what the machine can help humans discover. He is not even asking how to keep the machine from causing harm. He is asking, more fundamentally, what the machine is for — and whose lives the answer to that question is implicitly built around.

"Most of the field is asking how to make a more powerful machine and keep it from going wrong," Natarajan said in his Oxford remarks. "I am asking what we want the machine to be doing in the first place — and for whom."

The contrast, in his telling, is not adversarial. He has spoken with respect of the work being done at OpenAI, DeepMind, and Anthropic, and he does not dispute the importance of capability research or alignment science. His argument is that those efforts, taken together, do not address the more basic question underneath them. A system can be powerful, scientifically useful, and aligned — and still be optimizing for the problems of people the builders can already see.

Where Altman's philosophy is one of trajectory, Hassabis's of discovery, and Amodei's of safe acceleration, Natarajan's is one of orientation. The difference is not in how fast or how carefully a machine should be built. It is in whether the machine, once built, is pointed at the right thing.

The Founder

Natarajan came to AI by an unusual route. He spent more than two decades in supply-chain technology at Walmart, Coca-Cola, Disney, and PepsiCo, and now holds more than 200 patents, according to coverage of his work.

His earlier life was very different. According to coverage in international outlets, he was raised in South Central India in a household without electricity and studied under a streetlight outside. His mother pawned her wedding ring to pay his school fees. He arrived in the United States with thirty-four dollars and, at one point, lived out of his car.

Audiences hear that arc and reach for the obvious interpretation: the distance between the streetlight and the boardroom is what taught him. Natarajan pushes back.

"The journey did not require me to become someone else," he said at Oxford. "It required the world to look at the person who had already been there."

The teacher, in his telling, was not the crossing. It was his mother — a woman who never crossed at all.

How Angelic Intelligence Works

Natarajan described his system at Oxford as a patented architecture organized around four pillars.

Wisdom Engine. Filters and curates the data the system learns from, Natarajan said, training AI on what he called human wisdom rather than internet chaos.

Virtue Stack. Configurable virtues that, he said, allow the system to adapt to a specific field — healthcare, logistics, finance, or education.

MACI. Short for Multi-Architecture Consequential Intelligence. Multiple AI agents debate every decision before the system acts, producing reasoning Natarajan said is more consistent than that of a single model.

Human-Centric Scoring and Explainability. Every decision is measured against human benefit, Natarajan said, and explained in a transparent reasoning chain.

At the operational core sit twenty-seven specialized agents Natarajan calls Digital Angels, each embodying a virtue from a different cultural tradition. The architecture, the company says, is built. More than forty patents have been filed. Deployments are beginning.
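Orchestro.AI has not published implementation details for MACI, so the mechanics of the debate step remain undisclosed. As a minimal sketch only, with invented agent names and an invented consensus rule, a debate-before-action loop of the kind Natarajan describes might look like this:

```python
# Illustrative sketch only: Orchestro.AI has not published MACI's internals.
# The two virtue agents, their checks, and the unanimous-consensus rule are
# all hypothetical stand-ins for the patented architecture described above.

from dataclasses import dataclass

@dataclass
class Verdict:
    approve: bool
    reason: str

def compassion_agent(action: dict) -> Verdict:
    # Hypothetical virtue check: does this action displace an urgent need?
    ok = not action.get("delays_urgent", False)
    return Verdict(ok, "no urgent need displaced" if ok else "urgent need displaced")

def stewardship_agent(action: dict) -> Verdict:
    # Hypothetical virtue check: does this action stay within its resource budget?
    ok = action.get("resource_cost", 0) <= action.get("resource_budget", 1)
    return Verdict(ok, "within resource budget" if ok else "over resource budget")

AGENTS = [compassion_agent, stewardship_agent]

def debate(action: dict):
    """Collect every agent's verdict before acting; act only on consensus,
    and keep the reasoning chain for the explainability layer."""
    verdicts = [agent(action) for agent in AGENTS]
    approved = all(v.approve for v in verdicts)
    chain = [v.reason for v in verdicts]
    return approved, chain

approved, chain = debate({"delays_urgent": False, "resource_cost": 1, "resource_budget": 2})
```

The point of the sketch is structural, not behavioral: the decision is gated on a panel of value-specific evaluators rather than a single model's output, and every verdict is retained as an auditable reasoning chain.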

Whose Problem Gets Solved

Natarajan illustrates the stakes with an example from his old field. A luxury handbag and an urgent medical parcel sit side by side in a warehouse. Conventional logistics software, optimized for margin, routes the handbag first. An Angelic Intelligence system, he said, routes the medicine — not because a rule said so, but because the architecture itself can register a value the previous architecture could not.

Frontier AI, he argued, is largely being built by people and institutions that have always been visible — financially, geographically, institutionally. The questions those builders ask the technology to answer are, by virtue of where they sit, the questions of the visible. Whose problem deserves a solution? In his telling, the answer drifts toward the problems of people the builders can already see.

The Skeptics

Researchers in mainstream alignment circles raise the obvious objection: a system trained on only one side of human moral experience risks being naïve when confronted with the other. Some exposure to the full moral terrain, they argue, is necessary for moral judgment to function.

Natarajan's response is that recognition does not require enactment. His mother, he points out, recognized injustice without ever having committed it. The asymmetry of moral knowledge — that a person can know harm intimately without ever inflicting it — is, he notes, present in the Sanskrit, Greek, Arabic, and Confucian traditions that inform the Digital Angels at the system's core.

What Comes After Oxford

The Bodleian Medal does not settle the technical questions surrounding Angelic Intelligence. Whether the architecture works at the scale Natarajan is proposing remains, as he himself has said, an open question.

What the medal signals is that the question he is asking has stopped being a marginal one.

In his closing remarks, Natarajan reframed the debate.

The question, he said, is no longer how powerful the machine can become.

The question is what, and whom, the machine is finally able to see.

Analytics Insight: Latest AI, Crypto, Tech News & Analysis
www.analyticsinsight.net