At the AI Impact Summit 2026, Sam Altman, CEO of OpenAI, went beyond predicting the arrival of superintelligence. He framed the coming years as a defining test for democracy, governance, and global power.
“If our predictions hold, by the end of 2028 more of the world’s intellectual capacity could exist inside data centres than outside them,” Altman said. “We could be wrong, but it bears serious consideration.”
That projection anchored his central warning: the future of artificial intelligence will be shaped not just by capability, but by who controls it.
India’s Role in a Democratic AI Order
Altman opened by pointing to India’s rapid AI adoption. More than 100 million people in the country now use ChatGPT weekly, with students accounting for over a third of users. India has also become the fastest-growing market for Codex, OpenAI’s coding agent.
“It is striking how much progress India has made in its mission to put AI to work for more people,” he said.
He positioned India as uniquely placed to influence global AI norms. As the world’s largest democracy, India has the opportunity to build AI at scale while shaping how it is governed. Advances in sovereign AI infrastructure and small language models, he argued, signal that India is actively shaping the technology rather than merely consuming it.
Altman suggested that if intelligence increasingly resides in data centres, democratic countries like India will play a critical role in determining whether that power is distributed or centralised.
Superintelligence, Democratisation and Safety
Altman offered one of his strongest forecasts yet, suggesting that early forms of true superintelligence could emerge within two years. Such systems, he said, could outperform top executives and leading scientists.
“At some point on its development curve, a superintelligence would be able to do a better job being the CEO of a major company than any executive; certainly me, or conduct better research than our best scientists.”
He traced the rapid acceleration of AI capabilities, from struggling with high-school mathematics to handling research-level problems in just a few years. While progress remains unpredictable, the pace suggests the next leap may arrive sooner than expected.
Democratisation, Altman argued, is not just a moral choice but a safety strategy. Concentrating AI power within a single company or country increases systemic risk.
“Democratisation of AI is the only fair and safe path forward,” he said, rejecting the idea that society must trade freedom for breakthroughs. “Some people want effective totalitarianism in exchange for a cure for cancer. I don’t think we should accept that trade-off.”
He warned that safety cannot be solved inside AI labs alone. Risks range from misuse of powerful open-source biomodels to new forms of conflict enabled by AI. Addressing them will require governments, institutions, and civil society to work together.
Economic Disruption and Long-Term Implications
Altman acknowledged that rapid AI progress will reshape economies. As systems improve, the cost of many goods and services, from healthcare and education to manufacturing, could fall sharply.
At the same time, labour disruption is inevitable. “It will be very hard to outwork a GPU in many ways,” he said, while noting that humans retain advantages in empathy, judgement, and social connection.
Placing disruption in historical context, Altman argued that each generation inherits a more powerful technological foundation. The responsibility today, he said, is to ensure that future generations benefit from AI’s upside rather than bear the cost of concentrated power.
He closed with a clear warning: the coming years will force a choice.
“As this technology continues to improve rapidly, we can either empower people or concentrate power.”
Altman called for international coordination, potentially through institutions similar to the IAEA, to oversee advanced AI systems and respond quickly to emerging risks.
If intelligence increasingly resides in data centres, the defining question will not be how smart machines become, but whether they expand human freedom or entrench centralised control. The technology is moving fast; the political and ethical decisions must keep pace.