Internationally, The UK Is Prioritizing AI Security Over Safety
Last week, together with the US, the UK refused to sign an international agreement on artificial intelligence at a global AI summit in Paris. The agreement aimed to align 60 countries on a commitment to develop AI in an open, inclusive, and ethical way. According to the UK government, however, it fails to address global AI governance issues and leaves questions on national security unanswered.
Yes, these types of agreements rarely produce any immediate changes to policy or practices (in fact, this is not what they are for!), but it's an odd justification. It's puzzling that the UK, which championed "AI safety" globally and promoted the adoption of a range of similar agreements in the past, is walking away from this one now.
Meanwhile, the UK Department for Science, Innovation, and Technology announced that the "AI Safety Institute" has been renamed the "AI Security Institute." Make no mistake: This is more than a name change. The AI Security Institute's new focus is primarily on cybersecurity, and previous goals — such as understanding the societal impacts of AI and mitigating risks such as unequal outcomes and harm to individual welfare — are no longer explicit parts of its mission.
Domestically, The UK Wants To Drive Public-Sector AI Innovation
Not only was the UK government busy building new tech and geopolitical relationships, but it also made some domestic decisions that UK citizens and consumers should be watching. These include:
- An agreement with Anthropic to start building AI-powered services. Last week, the UK government and AI provider Anthropic signed a memorandum of understanding, marking the beginning of a collaboration that will enable the UK public sector to harness the power of AI for a range of services and experiences. The immediate goal is to use Claude, Anthropic’s family of large language models (LLMs), to launch a chatbot that will improve the way citizens in the UK access public-sector information and services.
- Bold future plans. This is just the beginning. Future plans include the use of Anthropic’s LLMs across a range of public-sector activities, from scientific research to policy-making, supply chain management, and much more. As the UK government embraces over 50 different initiatives that bring AI to the core of its public sector and government activities, according to the latest “AI opportunities action plan,” future collaboration with other AI providers beyond Anthropic is the obvious next step.
- New AI guidelines for government departments. To round out the flurry of AI-related activity, new guidelines for the use of AI and generative AI in the public sector also saw the light of day last week. The Artificial Intelligence Playbook for the UK Government expands the 2024 Generative AI Framework for His Majesty's Government, but it remains essentially a set of basic, common-sense principles that public servants should apply when using AI and genAI. That seems too little, though, especially when compared with the volume and magnitude of the UK's AI ambitions and projects.
Innovation Without Citizen Trust Will Be Meaningless
AI is an incredible opportunity for virtually every organization, including the public sector. The enthusiasm that the UK government is putting into its current and future AI projects is refreshing to see, but a commitment to trustworthy AI is paramount to sustain that enthusiasm and avoid backlash — especially in a country where rules and governance for trustworthy AI don't currently exist and probably won't anytime soon.
As Forrester’s government trust research shows, when trust in institutions is strong, governments reap social, economic, and reputational benefits that enable them to expand and extend their relationship with the people they serve. When trust is weak, they lose those benefits and must work harder to create and maintain economic well-being and social cohesion in order for people to prosper. According to the latest Forrester data, overall trust in UK government organizations is weak, with a score of 42.3 on our 100-point scale.
There are two main priorities for the UK public sector and its partners as they embrace AI:
- Establish and follow a trustworthy AI framework for every AI project. The new AI playbook is a good starting point. Other AI risk frameworks can make the playbook even more effective at delivering responsible and trustworthy AI. The EU AI Act, for example, while not binding on the UK public sector and its partners, can still provide a valid set of principles for assessing AI risks and selecting risk mitigation strategies.
- Design and build AI applications that engender citizen trust. It's vital that you understand and act on the drivers that most strongly influence UK citizens' trust in their government, as well as the effects that trust has on specific mission-critical government activities. Once the dynamics that govern trust are clear, public servants can more effectively develop strategies that specifically address the "trust gap" and help grow and safeguard citizens' trust.
If you want to know more about Forrester’s government trust research or AI trustworthy frameworks, please schedule a guidance session with us.