In an event earlier this year, attendees at a conference in Geneva were captivated by a live video of a 25-year-old man in Portugal who was suffering from ‘locked-in syndrome’. Despite being unable to move or speak, he was able to communicate with the audience using a digital, AI-powered tool that translated his thoughts into spoken words.
This experience had a profound impact on many in the audience, with some even moved to tears. Fred Werner, Head of Strategic Engagement at the International Telecommunication Union (ITU), who helped organize the AI for Good summit, recalls having to compose himself. For him, the moment showed that AI is not only a subject of debate around safety, privacy, ethics, and sustainability, but also a technology with the potential to save lives.
Werner also emphasizes that the UN is not overlooking the positive aspects of AI: it has identified more than 400 applications of the technology across the UN system, in areas such as natural-hazard management, human rights monitoring, and sustainable development.
While the demonstration in Geneva showcased the positive impact of AI, Werner acknowledges that there are also risks involved. He believes that the rapid advancement of AI leaves no time to waste and calls for international collaboration in creating standards to address issues such as deepfakes and misinformation.
In September, at the Summit of the Future, the UN will adopt a Global Digital Compact, which includes warnings about the potential consequences of malicious use of AI. The Compact aims to promote trust in the internet, give individuals more control over their data, and hold accountable those who spread discriminatory and misleading content.
This is the latest in a series of UN efforts to regulate AI at the international level. In November 2021, UNESCO adopted the first global agreement on human-centric AI, the Recommendation on the Ethics of Artificial Intelligence, which provides guidelines for governments on protecting human rights and freedoms in the use of AI.
In 2023, UN Secretary-General António Guterres convened the Advisory Body on Artificial Intelligence, made up of experts from the public and private sectors, to examine how AI should be governed. The body's report concluded that AI requires governance both to harness its potential and to ensure that no one is left behind.
This work fed into the Global Digital Compact, which outlines commitments and actions to bridge the digital divide and address the threats posed by AI. These include providing internet access to all schools and hospitals, promoting digital literacy, establishing an International Scientific Panel on AI, and holding an annual Global Dialogue on AI Governance. The aim is to have global AI standards that benefit everyone in place by 2030.