Governing AI: A Friend and Foe - Transcript of speech by President Tharman Shanmugaratnam at the Asia Tech X Singapore (ATxSG) 5th Anniversary Opening Gala 2025 on 27 May 2025

27 May 2025

Minister Josephine Teo,

Excellencies,

Distinguished guests,

Thank you for inviting me to join you.

Let me start with three broad observations.

First, it's incontrovertible that technology is advancing much faster than our understanding of it. I say this not just with regard to policymakers or lay persons, but to scientists themselves. Scientists who are on top of the field tell us that our understanding of AI in particular is being far outpaced by the rate at which AI is advancing.

The second observation is that, more than in any previous wave of technological innovation, we face both huge upsides and downsides in the AI revolution.

We face a larger-than-ever ‘optimism gap’ – the gap between the boosters and doomsters of AI, the gap between the belief that AI and other recent technological advances will bring significant and widespread benefits to society, and the concern that they bring profound risks. There's no more prominent, recent warning of those risks than that by Pope Leo, in his first address to his Cardinals, when he cautioned that AI poses new challenges to human dignity, justice and labour.

But it’s important to see this not as an ‘either-or’. There is enough that is plausible in what the boosters and doomsters say about AI for them both to be taken seriously. AI is very likely to be both hero and villain, and that's not a contradiction. That's the nature of this new wave of innovation – the good will come with the bad.

Our whole objective must therefore be to view the good with the bad, and seek to maximise the good – the benefits to human well-being, not just in one city, one country, but globally – while minimising the risk of the worst of the bad things that could come about. I'll elaborate on this later.

So that's the second observation — that we have an unusually large ‘optimism gap’ with this wave of innovation, and it's inherent in AI. It's not an accident, and we're not having to decide who's right and who's wrong, because both the boosters and the doomsters are very likely right.

The third observation is that there are inherent tensions between the interests and goals of the leading actors in AI and the interests of society at large. There are inherent tensions, and I don't think it's because they are mal-intentioned. It is in the nature of the incentives they have, particularly incentives shaped by narrowly defined goals.

The seven or eight leading companies in the AI space are all in a race to be the first to develop artificial general intelligence (AGI), because they believe the gains from getting there first are significant. And in the race to get there first, speed of advance in AI models is taking precedence over safety.

That's not a value-laden judgement, by the way. Listen to the scientists. And look at what the incentives are for the firms and their investors, and you can see why, for those in the race, safety is in truth a secondary objective. There are efforts to improve AI models – such as to reduce hallucinations through reinforcement learning and human feedback. But those efforts are taking place at the same time that models are becoming much larger and much more complex.

So there’s an inherent tension between the race to be first in the competition to achieve AGI or superintelligence, and building guardrails that ensure AI safety.

Likewise, the incentives are skewed if we leave AI development to be shaped by geopolitical rivalry. We will see a trade-off between the competition to be ahead in building AGI and the well-being of global society, and indeed the safety and well-being of each society in its own right. Having a lead in the most advanced AI systems, or holding the biggest trophy, doesn’t translate directly into improvements in healthcare, in learning, in food systems, in new materials, and the like.

So if we recognise these three points – that our understanding seriously lags the rate of advance of technology; that there are both huge upsides and downsides in AI; and that there are inherent tensions between the interests of the leading players and the interests of society at large – it means we can't leave this to competitive forces alone. We can't leave it to the law of the jungle – be it the jungle of the markets or geopolitics.

We need some form of consensus around calibration of AI developments. We need coalitions of the willing to come together to develop the guidelines and rules to enable us to maximise the good that can truly come out of AI’s use cases, and minimise the risk of the bads, or at least the worst of the bads.

We can't leave it to the future to see how much bad actually comes out of the AI race. It would be equivalent to waiting to see what happens as we pass the tipping points in climate change, waiting to see if things turn out to be as bad as the scientists warn – and if they do, letting someone else pick up the pieces further down the road.

That would be plainly irresponsible.

Tech leaders have to be respected for the amazing advances that they've been able to achieve. But we cannot leave it to them to shape the future of society, and indeed, the future of politics. We can't leave it to them to set the terms of the future. Just as we do not leave it to the leading fossil fuel companies to dictate strategy on climate change.

So if that's not a responsible option, what other options do we have?

A second option is to slow down the pace of innovation in AI in general. It's not as crazy an idea as it might seem. It boils down to accepting that, rather than getting to superintelligence and the uses of superintelligence in as many sectors as possible in say twenty years, we take say thirty years. Take another ten years. That's not going to be a huge setback to human development, because it still gives us the potential to be in a far more advanced state, technologically, productivity-wise, health-wise than we are today.

So the advocates for slowing down AI development say, why not give us time to develop standards, develop norms, and develop some international consensus around them, so we stand a better chance of avoiding the worst and maximising the good?

It's a tempting thought, but that option is not realistic. It's going to be very hard to decide on how we regulate AI across the board. You can't just have general precepts about ethics and corporate responsibility. You need regulations for very different contexts, in every sector and in different regions. It's extraordinarily difficult to get there, and to achieve enough of a consensus internationally to get this done. There will always be some who free-ride, who take advantage of others slowing down in order that they move ahead – that's the unfortunate reality of the world.

So we need both ambition and humility in what is achievable in regulation.

Which therefore brings us to the third option, which I believe is the most practical option. That is to focus efforts on encouraging innovation and regulating its use in the sectors where it can yield the greatest benefits. Act sectorally, and act together, working a lot harder on what some call ‘small AI’ to achieve big social gains. Rather than just big AI, and who gets the biggest trophy first. Work on AI in healthcare, in agriculture, in tackling climate change, in developing new materials. Work on the applications that can improve the quality of jobs and human life.

Healthcare is the prime candidate. AI is already transforming how we detect and treat diseases – how we spot the signs much earlier, how we avoid complications. And importantly too, how we improve the enormous backend of healthcare systems – how we anticipate and ease the pressures on overburdened healthcare systems around the world.

It's happening already, but we need to take it much further, and ensure through regulation that patient safety is protected and that AI is used ethically. Singapore itself has developed guidelines for the use of AI in healthcare – guidelines for the developers as well as the users. The developers are obliged to get feedback from clinicians as well as patients. And it's in their interests – it's in the interest of every developer of an AI tool in healthcare that trust is preserved.

The EU has gone further. It's taken the legislative route, after extensive consultation, to ensure a risk-based regulatory approach. We are in close dialogue with the EU and other countries that are adopting different combinations of guidelines and regulations.

The second big use-case has to do with human productivity. And I mean here not just the potential for transformative changes that are taking place in factories, call centres, banks and other industries. (It's not a dramatic scaling-up of productivity yet – it takes some time before what the leaders are doing gets diffused to the rest of industry, but it will eventually happen.)

But we have to think not just in terms of productivity in a factory or a call centre or a bank, not just productivity for a firm within each sector, but productivity for human society. If you take a simple thought experiment: if you can use AI and other recent technologies to halve the number of workers required, but still achieve the same output, that's a doubling in labour productivity for the firm. But if the workers that were displaced are left out of work, it leaves unchanged the amount of output relative to the full human workforce, including those that are no longer at work.

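To make that thought experiment concrete, here is a purely illustrative sketch (the symbols Y and N are my own shorthand, not figures from the speech): suppose a firm produces output Y with N workers, and AI lets it produce the same Y with only N/2 workers, while the other N/2 are displaced and remain out of work.

\[
\text{Firm-level labour productivity: } \frac{Y}{N/2} = 2\cdot\frac{Y}{N} \quad \text{(doubled)}
\]
\[
\text{Society-wide productivity, counting the displaced workers: } \frac{Y}{N/2 + N/2} = \frac{Y}{N} \quad \text{(unchanged)}
\]

Society-wide productivity rises only if the displaced workers are redeployed into work that adds output elsewhere.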

So we have to think about productivity more broadly. How do we improve the productivity of human society? And that means maximising our potential to create good jobs for everyone who wishes to be in the workforce. That's what productivity has to be about most broadly. It's not just micro-productivity.

And that's a real challenge, because we're going to see a lot of the micro-productivity improvements being made, with fewer workers needed in many cases. You're already seeing it in the tech sector – significant reductions in the tech workforce. It's part of creative destruction. But the key is to ensure that people displaced by that creative destruction are redeployed into good jobs in other sectors. It takes systems of training, systems of motivation and mentorship, to give them the best chance of moving into a new career that is still about meaningful work.

That's the central challenge we face economically. So we have to plan for that challenge, and make AI an enabler for that transformation. It can be an enabler in augmenting human abilities and creating meaningful jobs, but it's not just going to come about if we leave it to the market. It requires forethought, planning, and collaboration between industry, government, universities, and other educational and training institutions.

And if we think even more deeply about human productivity, it goes back to education and learning. It goes back to how kids learn, and how people learn all along the way. And here again, I think the use of AI brings us opportunities as well as risks.

The easy game, which every young person knows, is, well, there's a way in which you can rely on ChatGPT, or some other AI app, to provide a rather good answer for the homework you've been given. And that's happening around the world.

We have to reflect on the fundamentals of learning. What do we mean by learning outcomes or educational outcomes? It is not about how quickly you can access knowledge. It's about how well you think. It's about how curious our minds are, and whether we can retain that curiosity through life. It's about how we can create alternatives in every field of life. That's the real outcome in learning. It's not about an output. It's about what goes on in our minds, and developing that through education has always been a difficult task. It's never a straightforward task.

Countries are now trying to use AI to enhance that process, to enhance that ability to develop curiosity and thinking, but I must say it's easier said than done. It requires a lot of craft in education, so that AI is used as a tool not to substitute for students' thinking, but to ensure that learning remains about thinking long and hard. That's the only way in which you improve your cognitive abilities, your creative abilities, and your ability to stay curious through life. It's by thinking long and thinking hard, and there's unfortunately a lot about AI that incentivises you not to have to think long and hard.

So let's go about this thoughtfully at every level of the education system around the world, and not just assume that AI is a tool that speeds everything up. Speed is not of the essence in learning.

A third major use case: tackling climate change. Here too, AI has risks, but in my view, enormous potential benefits.

The risks, of course, come from the fact that AI requires a huge amount of compute and hence a huge amount of energy, and a huge amount of water as well. Data centres as designed today are guzzlers of energy as well as freshwater for cooling. It's not sustainable as it stands. But there are innovations on their way – innovations to make both CPUs and GPUs far more energy-efficient and to require less cooling. Industry leaders expect major improvements within five years.

We should also look at the positive side of the ledger. How do we use AI to improve energy efficiency across the economy – not just in data centres – through the development of new materials, through optimisation systems, and through the way factories and every sector are organised? How do we enable more productive food systems so that we don't keep encroaching into the forests in order to expand food production? How do we use AI there as well? How do we monitor environmental degradation using AI so that we can prevent it in time? And how do we anticipate major disruptions to the climate so that societies and economies can prepare in advance and be more resilient? These are all valuable use cases, and AI can be a major tool in our efforts to tackle climate change.

Coming back to the issue of the risks: we need a lot more thinking and collaboration to avoid the worst of the bads.

First, the fact that AI, together with social media platforms and rogue actors, is eroding trust in democracy. They are forcing people into bubbles. They are hardening divisions within society, and we are not in a good place in too many societies.

We do not yet have a solution to this, but it is a dangerous problem.

Second, we have to ensure that AI doesn't transform warfare for the worse. This is what Henry Kissinger's final book was about. There are some positives in how AI can be used, particularly in early threat detection systems, but there are also huge risks, particularly the risk of unintended escalation in warfare. So this is a really urgent issue, and it must engage, in particular, both the US and China.

China, for one, has acknowledged this risk and made its own proposals for controlling the use of AI in warfare, and I'm sure there's a lot of thinking taking place in the US as well.

We need to keep decisions in the hands of humans.

Finally, to achieve each of these objectives – acting sectorally so as to drive the use of AI where it creates real value for human wellbeing, and avoiding the worst risks that we face – we need global governance.

We can't pretend that we're going to get a whole new multilateral architecture around this very soon. We should aim to develop multilateral governance over AI, but that's a journey that has only begun and will take time. In the meantime, we really have to accelerate the building of coalitions of the willing.

These coalitions cannot just be amongst governments. They must involve the scientists. They must involve corporates – not just the tech players, but also the major users of AI. They must involve civil society.

So it must involve government, scientists, corporates and civil society, working towards advancing in each sector the guidelines, the norms, the degree of transparency required, and the commonality of standards and benchmarks so we can compare safety in one player with that in another.

The leading corporates are not evil. But they need rules and transparency so that they all play the game, and we don't get free riders. Governments must therefore be part of the game. And civil society can be extremely helpful in providing the ethical guardrails.

So let's avoid the simple binary of viewing regulation as at odds with innovation. We cannot choose one over the other. They have to go together. And intelligent regulation – particularly regulation formed through collaboration that is global, not just left to the major countries, and formed collectively by scientists, corporates, governments and civil society – is more likely to ensure that innovation itself will be sustainable, and that trust in AI can be built.

I would be hopeful, looking at the momentum in coalition-building that we are seeing. We had a very good conference in Singapore just recently – the Singapore Conference on AI – amongst the scientists and technicians. They developed a consensus on global AI safety research priorities. It is a good example of what it takes: building coalitions of the willing, and shaping the rules that everyone, including the tech world itself, needs to sustain trust in AI.
