Sam Altman says early superintelligence could arrive by 2028

Speaking at the AI India Impact Summit in New Delhi, OpenAI CEO Sam Altman said early superintelligence could emerge by 2028, and cautioned against centralised AI control

By Deepali

OpenAI CEO Sam Altman said early forms of superintelligence could emerge within the next few years. He was speaking at the AI India Impact Summit 2026 in New Delhi on Thursday.


Altman said that by the end of 2028, “more intellectual capacity may reside inside data centres than outside them,” adding that future systems could outperform human CEOs and leading research scientists. However, he cautioned that such timelines remain uncertain. “We could be wrong, but it bears serious consideration,” he said.

Altman noted that India is one of OpenAI’s most important markets, with 100 million people using ChatGPT weekly, more than a third of them students. He said India is also the fastest-growing market for Codex, the company’s coding agent, and highlighted the country’s growing focus on sovereign AI infrastructure and smaller language models.

Altman said AI systems have moved quickly from struggling with high school mathematics to operating at research-science level. The next few years, he said, will be critical in deciding how AI is governed and who controls it.


Against centralised control

On the question of control, Altman argued that concentrating AI power in one company or one country would be dangerous.

“Some people want effective totalitarianism in exchange for a cure for cancer. I don't think we should accept that trade-off,” he said. In his view, AI should expand human agency, not override it.

He acknowledged that distributing control carries its own risks, but argued that the trade-off is worth making. “We can choose to either empower people or concentrate power,” he said.

Safety beyond technology

Altman said AI safety is not just about aligning systems in labs. It also includes broader societal risks, such as open-source biological models that could potentially be misused. Addressing these issues, he said, requires global coordination, not just company-level safeguards.

He raised questions about how to respond if AI systems align with authoritarian regimes or are used in new forms of warfare, suggesting that existing governance systems may need to evolve.

Jobs and economic impact

Altman said AI will make many goods and services cheaper over time, including healthcare and education. However, he acknowledged disruption to workers. “It is very hard to outwork a GPU in many ways,” he said, though he added that humans will continue to value human interaction.

Taking a longer view, Altman said technological disruption is not new, and future generations may look back at today’s concerns differently.

Call for global oversight

Altman called for international cooperation on AI governance, suggesting a global body similar to the International Atomic Energy Agency. He said such coordination would complement national regulations.

He also defended OpenAI’s approach of releasing AI systems gradually to allow society time to adapt. “This has been working surprisingly well so far,” he said.
