Twenty-nine governments met at the AI Safety Summit to discuss the main risks of artificial intelligence and to seek agreements.
The responsible development of frontier AI, the responsibilities of national policymakers and the future of AI in education were among the topics covered.
There is a need to establish ethical standards, ensure transparency and accountability, and promote awareness and education on AI; this last objective is closely related to the LivAI project.
November 7th, 2023, Brussels. The UK hosted the first AI Safety Summit at Bletchley Park on 1 and 2 November, with the aim of coordinating international action to seize the opportunities and manage the risks of cutting-edge AI. This theme is very much in line with the European LivAI project, which promotes adult education through artificial intelligence.
The central challenge that the LivAI project addresses is how technology and data should be approached from an ethical perspective, within a certification framework that guarantees standards for any kind of learning and work. The project provides knowledge, skills and a certification framework that will standardise the competences gained, a small but significant step towards improving quality assurance in adult education. LivAI is building a digital platform that enables monitoring and evaluation (an award certification scheme) for its three main target groups: adult educators, adult education institutions and adult learners. Finally, the complete framework provides practical guidance for the digital transformation of adult learning centres, a training framework for high-quality digital learning pathways, and a certification mechanism to ensure high quality standards in the digital age.
Risks associated with frontier AI
The summit featured leaders from all 29 countries, alongside prominent industry figures and researchers. Participants highlighted the enormous potential of AI to drive economies, serve the public good and improve people's lives. However, they also recognised the risks associated with frontier AI, such as deepfakes, disinformation, online harm, financial crime and the impact on employment. The summit marked the first time that leaders from around the world had agreed on urgent joint lines of action to address AI, and it was a historic opportunity to bring together global expertise and explore common measures to address AI's risks.
The agenda included roundtable discussions on the topics «What should Frontier AI developers do to scale responsibly?» and «What should National Policymakers do in relation to the risks and opportunities of AI?», as well as a panel discussion on AI's potential to revolutionise education for the coming generations («AI for good – AI for the next generation»).
The main outcome of the summit was the signing of the Bletchley Declaration, the first international agreement to set out the opportunities, risks and need for global action on frontier AI. The declaration, signed by the 29 participating countries, aims to establish global cooperation on AI safety. It emphasises a historic commitment to greater scientific collaboration to understand and monitor the risks of AI as it develops, and calls for national policies to prevent these threats through risk studies, public-sector capacity assessments and the bolstering of research sectors.
Critics contend that the summit failed to foster an inclusive multidisciplinary discussion, as crucial ethical, legal and technological perspectives were absent. This omission raises concerns about the long-term trajectory of AI development, with fears that it may cater primarily to the interests of certain industry leaders, potentially leaving collateral damage unresolved. Furthermore, a notable lack of consensus persists regarding data management and the imperative to safeguard digital security in these emerging technologies. Projects such as LivAI will undoubtedly help further the EU's goal of creating a reliable and human-centred framework for the development and deployment of AI.
«EFCoCert Foundation is proud to be part of the LivAI project consortium, in which EFCoCert will develop the ethical AI competence and good practices certification system.»
Want to discuss a potential project idea? Just email didier.blanc[a]efcocert.eu or book a time slot for a call on my calendar.
Looking forward to exchanging with you!
Dr Didier BLANC, EFCoCert President