Spot Light
AI Safety Connect (AISC) convened international experts, policymakers, and researchers in New Delhi on February 17 for a high-level press conference on strengthening global coordination for frontier AI safety as part of the India AI Impact Summit 2026.
Hosted by Eugene Yiga, Communication Lead of AI Safety Connect, the strategic briefing set the tone for Summit week, laying out AI Safety Connect’s roadmap for advancing global coordination on frontier AI safety and underscoring India’s growing influence in shaping a more inclusive, accountable, and internationally aligned approach to AI governance.
Framed under the headline “First Global South AI Summit: India Demands Accountability from Frontier Labs Racing to AGI,” the event outlined five interconnected priorities: India’s distinctive approach to AI governance; the most pressing global risks from frontier AI; the role of middle powers in AI coordination; practical mechanisms for AI safety verification and evaluation; and international coordination models already underway.
Speaking at the press conference, Nicolas Miailhe, Co-Founder of AI Safety Connect, emphasised India’s dual AI responsibility: “India faces a dual AI challenge. On one hand, we are already seeing real-world harms from rapidly deployed AI systems. On the other, the global race toward increasingly powerful AI systems is accelerating. India cannot afford to address one without engaging the other.”
Speakers emphasised that India faces a dual responsibility: managing present-day harms from rapidly deployed AI systems while also engaging with frontier risks emerging from the global race toward increasingly advanced AI systems. As one of the world’s largest digital societies and a rising technology power, India’s governance choices could significantly influence how global AI norms evolve, particularly as Global South voices seek a more meaningful role in shaping international frameworks.
Miailhe added: “The race toward artificial general intelligence is no longer theoretical. Significant resources are being deployed globally. The question is not whether advanced AI will develop; it is whether governance will keep pace.”
Participants highlighted that AI governance must simultaneously confront current harms, including risks to children’s safety, misinformation, cybersecurity vulnerabilities, and exposure of critical infrastructure, while preparing for systemic risks associated with increasingly capable AI systems. Speakers stressed that present-day harms and frontier risks are not separate challenges but part of the same accelerating technological trajectory.
The discussion underscored the role of middle powers and Global South countries in influencing the pace and direction of frontier AI development through coalition diplomacy, coordinated standards-setting, and procurement leverage. Coordinated action, speakers noted, can shape accountability norms for frontier AI systems even when development is concentrated among a small number of actors.
Drawing on findings from the 2026 International AI Safety Report, speakers outlined the current state of AI capabilities and safety measures, identifying areas of scientific consensus as well as significant uncertainty.
Closing the press conference, Cyrus Hodes, Co-Founder of AI Safety Connect, focused on the urgency of building governance infrastructure before crisis forces reactive responses: “We cannot wait for a failure event to build the infrastructure for cooperation. Coordination mechanisms must be in place before advanced AI systems reach critical capability thresholds.”
He further emphasised the importance of enforceable safety commitments: “If international agreements on AI safety are to carry weight, they must be backed by credible verification and certification mechanisms. Trust in this domain will depend on enforceability.”
AI Safety Connect reiterated its mission to build durable coordination infrastructure through Track 1.5 and Track 2.0 diplomacy, convening policymakers, researchers, frontier AI labs, and civil society before advanced AI systems reach critical capability thresholds.