Next week, as world and corporate leaders arrive in India for the AI Impact Summit, the country has some questions to contend with: its dependence on imported compute, its ambition to be the world’s data centre hub, and how to ensure that AI’s opportunities are enough to offset its known risks.
Katharina Frey, Executive Director of the International Computation and AI Network (ICAIN), which is based at ETH Zurich, offers a roadmap on how countries such as India could chart the path ahead. Frey brings a strong track record in digital diplomacy, AI governance, and global partnerships. She was a career diplomat with the Swiss Federal Department of Foreign Affairs for 17 years and helped establish Switzerland’s Digital Foreign Policy Division and shaped its strategy on digital governance and cybersecurity.
In this interview with Anil Sasi and Soumyarendra Barik, Frey speaks about how sharing global resources and building with transparency are among the key ways to ensure that AI benefits all, how India should look at expanding AI infrastructure sustainably, and how the world can collaborate on regulating AI.
Access to computing resources is one of the most difficult hurdles for smaller companies trying to build in AI. From an Indian perspective, the government is trying to subsidise GPU costs by incentivising companies to set up data centres in India. Is that a viable long-term strategy?
Subsidising GPU access and building domestic data centres are important and forward-looking steps, especially if they are limited to education and research. They help lower barriers for startups and researchers and strengthen national AI capacity. Sustainable AI ecosystems, however, are built not only on hardware. Such systems also depend on data, skills, partnerships, and open collaboration… Access to computational resources has to be accompanied by the training of talented people, as the transfer from classic computing to high-performance data centres requires expertise.
Many big tech companies have, in recent months, announced massive data centres in India, and the government in its latest Budget also announced a tax holiday for foreign companies setting up such infrastructure in the country. From a resource perspective, what could be the load on things like energy and water? Is it actually sustainable for developing economies like India to house massive data centres in the long run?
Large-scale data centres inevitably place significant demands on energy and water. There is no question about that. Whether this is sustainable in the long run depends less on the number of data centres and more on how they are designed, powered, and integrated into local infrastructure. There are global examples showing that, with the right choices, data centres can operate in a highly resource-efficient way — for instance by relying on renewable energy, climate-appropriate cooling, and the re-use of waste heat.
At the Swiss National Supercomputing Centre for instance, the Alps supercomputer runs on hydropower, uses lake-water cooling, and re-uses waste heat, making it close to CO₂-neutral. Sustainability outcomes change dramatically, however, when fossil fuels power the grid or when cooling relies heavily on scarce freshwater.
India will need its own solutions for operating such infrastructure in tropical conditions and could collaborate with countries in similar situations. It might also be interesting to cooperate with partners where conditions for green power and cooling are more favourable.
For economies like India, the key question is, therefore, not whether to host data centres, but how to align them with renewable energy expansion and local resource constraints.
Today, there are some conversations that the whole AI hype might, after all, be just a bubble, especially as companies are finding it difficult to figure out viable revenue streams. A leading company in this space has said it will show advertisements to users who don’t pay a subscription fee. What is your take on that discourse? Is there an AI bubble?
Business models such as advertising or subscriptions are not new — they have existed in the digital economy since long before AI. In my view, this is not a strong indicator of whether AI is a ‘bubble’ or not. What matters more is that we are in the middle of major technological and societal transitions driven by AI. AI is real and will affect all countries, their economies, and their societies.
I believe that AI offers tremendous opportunities, also for developing countries. For this to happen we need to be inclusive with the technology and work globally towards more equitable access and sharing of capabilities… At this stage, the priority should be less about short-term hype cycles and more about building responsible, long-term ecosystems that benefit societies globally.
A few years ago, many AI leaders around the world were calling for global cooperation to regulate AI. Since then, many economies have become increasingly protectionist and are looking to prioritise strategic autonomy and digital sovereignty. What does global cooperation to regulate AI look like in an increasingly insulated world?
Balancing sovereignty and global cooperation is indeed a challenge, but the two are not mutually exclusive. In fact, there is a viable way to combine them. Effective AI governance takes time, especially at the global level, and it is unrealistic to expect comprehensive regulation to emerge overnight.
In recent years, important foundations have been laid — through multilateral processes for instance at the UN, through the adoption of the first international AI convention by the Council of Europe, or through a series of global AI summits bringing together governments, civil society, and industry. India’s leadership through the AI Impact Summit is another important signal of constructive engagement.
At the same time, many organisations and institutions are concerned about their dependency on a small number of dominant AI providers whose models are often not transparent and not always aligned with local legal and societal requirements. This makes strategic autonomy an understandable priority.
I am not alone in the strong belief that the world should have fully open and explainable AI. In my view, cooperation on regulation must, therefore, go hand in hand with building open, transparent, and trustworthy alternatives. Reducing dependency through shared infrastructure, open standards, and joint capacity-building is part of responsible sovereignty. Global cooperation should not mean uniformity. Countries and institutions need the ability to make informed, context-specific choices about how AI is developed and used in their societies. Supporting diverse approaches, while aligning on core principles such as transparency, safety, and human-centred design, is the real task ahead. We should work together to diversify the cultural and societal foundations of AI.
China’s focus on its education system is being talked about as a real differentiator in the AI space, due to the availability of highly skilled talent domestically. Countries like India are trying to focus on making it easy for start-ups to build AI-led solutions. But does that put India at a disadvantage compared to China, particularly because the country does not have nearly as much high-quality talent readily available?
As I stated above, talent, training and education are crucial. China has built an impressive system. At the same time, India has enormous talent in software and IT. I am confident this will translate into strong AI capabilities over time. We all face similar challenges. They are manageable if we bring society along and invest seriously in education.
Many countries face the same situation and could develop a winning model by collaborating on basic infrastructure. There is no reason why every country should train its own large language models. Pre-training could be shared, know-how could be shared, and local resources should focus on post-training and fine-tuning.