AI Innovators Exchange: Accelerating Startups Through Collaboration
AI has crossed a threshold. It is no longer a frontier technology confined to pilots and prototypes. It is beginning to shape real outcomes—credit access, welfare delivery, healthcare triage, language translation, payments and public services. Once AI enters that territory, the question shifts from capability to governability: can these systems be trusted, validated and held accountable at population scale?
That was the premise of a discussion on responsible and ethical AI at the India AI Impact Summit 2026. The session brought together voices across government ecosystem-building, public policy, enterprise AI, agriculture innovation, academia and critical infrastructure, including Arvind Kumar (STPI), Ravi Arora (Mastercard), Ankush Sabharwal (Founder & CEO, CoRover.ai), Vivek Raj (Panama Corporation), Prof. Nitin Saxena (IIT Kanpur) and Dr. Tripta Thakur (NPTI).
STPI’s pitch: tier-2/3 startups, domain CoEs, and “AI everywhere”
The opening set the context with a snapshot of India’s distributed startup infrastructure. STPI was described as an autonomous body under the Ministry of Electronics and Information Technology, supporting more than 1,800 tech startups, with a significant share coming from tier-2 and tier-3 cities. A network of 70 centres across the country—62 in tier-2/3 locations—was linked to 24 domain-specific Centres of Entrepreneurship designed to take startups from pre-ideation to prototype to market access, supported by incubation, nurturing, funding schemes and exposure through conferences and global programmes.
A key shift was highlighted through a simple observation: AI is no longer a standalone stream. Three years ago, only two centres were dedicated to AI; since then the ecosystem has effectively become “AI-first”, because it is now difficult to imagine a solution in any domain without AI embedded somewhere in the stack.
Responsible AI vs ethical AI: FAST-P and the bigger umbrella
A notable attempt was made to separate two terms that are often used interchangeably. Responsible AI was framed through a practical checklist—FAST-P: Fairness, Accountability, Security, Transparency and Privacy. Ethical AI was positioned as the broader leadership umbrella, where the larger questions sit: environmental effects, societal disruption, and whether AI systems may worsen job displacement or other long-term consequences.
The distinction matters in a summit context because it shifts responsibility from being only a technical issue to being both a technical discipline and a leadership choice—what is built, how it is governed, and whether it should be deployed in a given form.
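In engineering practice, a checklist like FAST-P tends to become a release gate rather than a slogan. The sketch below is a hypothetical Python illustration: the five dimension names come from the session, but the structure, field names and gating logic are assumptions, not anything presented there.

```python
from dataclasses import dataclass, fields

@dataclass
class FastPReview:
    """Hypothetical pre-deployment checklist mirroring the FAST-P framing."""
    fairness: bool = False        # bias / disparate-impact review passed
    accountability: bool = False  # a named owner exists for failures
    security: bool = False        # threat model and security review signed off
    transparency: bool = False    # model card and decision logs published
    privacy: bool = False         # data-minimisation and consent checks done

def release_gate(review: FastPReview) -> bool:
    """Block deployment unless every FAST-P dimension has been cleared."""
    missing = [f.name for f in fields(review) if not getattr(review, f.name)]
    if missing:
        print("blocked; unreviewed dimensions:", ", ".join(missing))
        return False
    return True

# A model that has cleared everything except privacy stays blocked.
print(release_gate(FastPReview(fairness=True, accountability=True,
                               security=True, transparency=True)))
```

The point of such a gate is organisational as much as technical: a system that has not cleared every dimension simply does not ship.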
Trust drives adoption; adoption drives impact
The discussion then moved to why this matters at the India scale. As AI becomes embedded in critical systems, the central question becomes whether AI is governable at scale. The argument was that governance at scale is only possible when there is trust underneath it—because trust determines adoption, and adoption determines impact.
The “AI for Bharat” framing was presented as distinctive because it emphasises population-scale deployment, social and economic relevance, and system-level resilience. In this view, responsible AI is not defined by “zero risk” but by the presence of resilience and accountability—systems that can fail safely, recover quickly, and remain transparent enough to be audited and corrected.
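That “fail safely, recover quickly, stay auditable” framing corresponds to a familiar engineering pattern: a guarded wrapper around the model call that retries, falls back to a conservative default, and logs every step for later audit. A minimal sketch, with the function names and the manual-review fallback as purely illustrative assumptions:

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("audit")

def resilient_decision(case: dict, model_fn, retries: int = 2) -> str:
    """Call the model; on failure, retry, then fall back to a safe default."""
    for attempt in range(retries + 1):
        try:
            decision = model_fn(case)
            audit.info("decision case=%s result=%s attempt=%d",
                       case.get("id"), decision, attempt)
            return decision
        except Exception as exc:  # fail safely on any model-side error
            audit.warning("model failure case=%s attempt=%d err=%s",
                          case.get("id"), attempt, exc)
            time.sleep(0.1 * (attempt + 1))  # brief backoff before retrying
    # Conservative fallback: route to a human rather than auto-deciding.
    audit.info("fallback case=%s result=manual_review", case.get("id"))
    return "manual_review"

print(resilient_decision({"id": "A-17"}, lambda c: "approve"))
```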
From AI users to AI creators: build in your domain, ignore the noise
The session also carried a direct call to builders. It argued that becoming an AI creator is more accessible than ever—not only for those building platforms, but for domain practitioners who can translate deep domain understanding into practical AI applications.
The advice was unglamorous but useful: go deep into your domain first, identify gaps and opportunities, and then apply AI in service of that purpose. The emphasis was on outcomes rather than signalling—less about proving “India is the best,” and more about becoming better in your own field, using the tools now available to build and ship.
Agriculture’s next leap: from controlled farming to “controlled ecosystem engineering”
A sharp sectoral intervention came from agriculture, framed as the world’s most ignored sector despite being fundamental to life. The point was made bluntly: agriculture receives less than 5% of global AI investment, even though humans need food “every four to five hours”.
The argument traced the first chapter of vertical farming—optimising photosynthesis, maximising plant growth with red and blue LED lighting, and controlling nutrients and climate with precision. That wave helped scale leafy greens. But the next chapter is harder: high-value fruiting crops and medicinal plants introduce a deeper biological bottleneck—pollination. Without effective pollination, yield consistency drops, fruit quality declines and commercial scalability stalls.
The proposed direction was to move beyond simplistic approaches—importing bees or relying only on robotics—and instead “engineer ecosystems.” In that framing, AI enables modelling microclimates, simulating environmental cues, regulating airflow patterns, and measuring plant response in real time. The claim was that AI is no longer only optimising growth; it is beginning to replicate biological systems inside controlled environments—linking light-spectrum engineering, airflow, sensor networks and climate control into intelligent indoor ecosystems.
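Stripped to its engineering core, that description is a closed sensing-and-actuation loop: measure the microclimate, let a model propose corrections, actuate, then measure the response again. The sketch below is a simplified, hypothetical version; the sensor channels, setpoints and the proportional “model” are assumptions standing in for real hardware and a trained model.

```python
import random  # stands in for real sensor drivers in this sketch

# Hypothetical setpoints for a fruiting-crop chamber (illustrative values).
SETPOINTS = {"temp_c": 24.0, "humidity_pct": 65.0, "airflow_ms": 0.4}

def read_sensors() -> dict:
    """Stub for a sensor network; returns one microclimate snapshot."""
    return {k: v + random.uniform(-2.0, 2.0) for k, v in SETPOINTS.items()}

def predict_adjustments(reading: dict) -> dict:
    """Stand-in for a learned model; here, a simple proportional correction."""
    gain = 0.5
    return {k: gain * (SETPOINTS[k] - reading[k]) for k in SETPOINTS}

def apply_actuators(adjustments: dict) -> None:
    """Stub for HVAC, fan and light-spectrum actuators."""
    for channel, delta in adjustments.items():
        print(f"actuate {channel}: {delta:+.3f}")

def control_step() -> None:
    """One tick of the loop: sense, model, actuate."""
    apply_actuators(predict_adjustments(read_sensors()))

control_step()
```

In a real deployment the proportional correction would be replaced by a learned model, and plant-response measurements such as imaging or transpiration would feed back into it, which is what distinguishes “ecosystem engineering” from simple climate control.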
Academia’s missing pivot: a startup mindset, not only publications
From the academic lens, the session challenged a familiar policy assumption—that more AI research output will automatically create impact. In the current AI era, traditional research alone was described as insufficient. What is needed is a “startup mindset” inside labs: dynamic teams, utility-driven work, product orientation, clear clients, and a willingness to build with adoption in mind.
A technical warning followed: AI intelligence is statistical and “jagged.” It can work impressively and fail unpredictably—and often without clear explanations. That lack of guarantee, it was argued, limits full autonomy in high-stakes areas such as medical treatment and complex transport systems, unless responsibility remains with an expert human in the loop. Another India-specific requirement was emphasised: AI must be frugal and utilitarian so that people can actually use it at scale.
One concrete proposal captured the spirit of the argument: advanced degrees and theses should have the option of being awarded based on products, not only on publications.
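The “expert human in the loop” requirement described above is commonly implemented as confidence-gated routing: the system acts on its own only above a confidence threshold and escalates everything else to a human expert. A minimal sketch, with the threshold value and labels as illustrative assumptions:

```python
CONFIDENCE_THRESHOLD = 0.90  # assumed value; would be tuned per deployment

def triage(case_id: str, prediction: str, confidence: float) -> str:
    """Act autonomously only above the threshold; otherwise escalate."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"{case_id}: auto -> {prediction}"
    return (f"{case_id}: escalated to expert review "
            f"(model said {prediction!r} at confidence {confidence:.2f})")

# 'Jagged' behaviour means a model can be confidently wrong, so the
# threshold limits autonomy rather than guaranteeing correctness.
print(triage("case-001", "low-risk", 0.97))
print(triage("case-002", "low-risk", 0.61))
```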
Power sector: scale ambitions, cyber resilience, and making AI core to engineering
The critical infrastructure lens focused on the power sector’s scale and the skills challenge. India’s generation capacity was referenced as about 500 GW achieved over 78 years, with an ambition to double this capacity within 20 years. The argument was that such scaling is possible only with technology embedded across the value chain—while ensuring cyber resilience because power systems are critical infrastructure.
The skills gap between academia and industry was framed as a key bottleneck. In an era where everything is changing due to AI, the suggestion was to treat AI as a compulsory core competency rather than an elective—similar to how environmental studies became core across engineering disciplines. Ethics and responsibility, it was argued, must begin with students entering engineering fields.
India’s AI moment is an ecosystem moment
In the closing discussion, the focus returned to ecosystems. The arc was framed historically—electricity, then the internet, then AI—with an important difference: this time, government is described as being actively “with the innovators.” The IndiaAI mission was positioned as supporting multiple layers of AI development—compute, applications, LLMs, foundation models and enabling wrappers.
Democratising compute was described as a distinctive India lever, including a claim of more than 38,000 GPUs being made available to innovators at highly subsidised rates. This model was presented as something other countries want to understand and replicate.
A simple “why now” explanation was offered: AI as an academic idea is not new; what has changed is data availability and the ability to train systems to the point where they produce surprisingly intelligent outputs. Yet capability alone is not the finish line; systems still need validation and trust.
The final governance argument positioned India as a potential leader by demonstration—embedding trust, safety, accountability and inclusion into real-world systems at national scale rather than relying only on abstract rule-making. If such an approach proves itself under India’s constraints—diversity, scale, and uneven infrastructure—it becomes a replicable blueprint for many Global South economies facing similar realities.
The key message
The session’s message was practical and unromantic: India’s AI advantage will not be decided by who builds the biggest model first. It will be decided by who can deploy AI responsibly—fairly, transparently, securely and with accountability—across sectors where failure has real consequences. In that sense, the summit’s responsible AI discussion offered a clear north star: move from models to ecosystems, and make trust the operating system for scale.