The UK’s AI landscape: The ICO’s role in balancing AI development and data protection


The Innovation Platform spoke with Sophia Ignatidou, Group Manager, AI Policy at the Information Commissioner’s Office, about the office’s role in regulating the UK’s AI sector and how it balances innovation and economic growth with robust data protection.

Technology is evolving rapidly, and as artificial intelligence (AI) becomes more integrated into various aspects of our lives and industries, the role of regulatory bodies like the Information Commissioner’s Office (ICO) becomes crucial.

To explore the ICO’s role in the AI regulatory landscape, Sophia Ignatidou, Group Manager of AI Policy at the ICO, elaborates on the office’s approach to managing AI development in the UK, covering the opportunities AI presents for economic growth, the risks associated with its deployment, and the ethical considerations organisations must address.

What is the role of the Information Commissioner’s Office (ICO) in the UK’s AI landscape, and how does it enforce and raise awareness of AI legislation?

The ICO is the UK’s independent data protection authority and a horizontal regulator, meaning our remit spans both the public and private sectors, including government. We regulate the processing of personal data across the AI value chain: from data collection to model training and deployment. Since personal data underpins most AI systems that interact with people, our work is wide-ranging, covering everything from fraud detection in the public sector to targeted advertising on social media.

Our approach combines proactive engagement and regulatory enforcement. On the engagement side, we work closely with industry through our Enterprise and Innovation teams, and with the public sector via our Public Affairs colleagues. We provide innovation services to support responsible AI development, with enforcement reserved for serious breaches. In addition, we focus on public awareness, including commissioning research into public attitudes and engaging with civil society.

What opportunities for innovation and economic growth does AI present, and how can these be balanced with robust data protection?

AI offers significant potential to drive efficiency, reduce administrative burdens, and accelerate decision-making by identifying patterns and automating processes. However, these benefits will only be realised if AI addresses real-world problems rather than being a “solution in search of a problem.”

The UK is home to world-class AI talent and continues to attract leading minds. We believe that a multidisciplinary approach, combining technical expertise with insights from social sciences and economics, is essential to ensure AI development reflects the complexity of human experience.

Crucially, we do not see data protection as a barrier to innovation. On the contrary, strong data protection is fundamental to sustainable innovation and economic growth. Just as seatbelts enabled the safe expansion of the automotive industry, robust data protection builds trust and confidence in AI.

What are the potential risks associated with AI, and how does the ICO assess and mitigate them?

AI is not a single technology but an umbrella term for a range of statistical models with varying complexity, accuracy, and data requirements. The risks depend on the context and purpose of deployment.

When we identify a high-risk AI use case, we typically require the organisation, whether developer or deployer, to conduct a Data Protection Impact Assessment (DPIA). This document should outline the risks and the measures in place to mitigate them. The ICO assesses the adequacy of these DPIAs, focusing on the severity and likelihood of harm. Failure to provide an adequate DPIA can lead to regulatory action, as seen in our preliminary enforcement notice against Snap in 2023.

On a similar note, how could emerging technologies like blockchain or federated learning help resolve data protection issues?

Emerging technologies such as federated learning can help address data protection challenges by reducing the amount of personal information processed and improving security. Federated learning allows models to be trained without centralising raw data, which lowers the risk of large-scale breaches and limits exposure of personal information. When combined with other privacy-enhancing technologies, it further mitigates the risk of attackers inferring sensitive data.
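To make the idea concrete, the following is a minimal, illustrative sketch of federated averaging (FedAvg), the standard pattern behind federated learning: each client fits a simple one-parameter model on data it keeps locally, and only the trained weights, never the raw records, are sent to a server for averaging. The function names, learning rate, and toy datasets are assumptions chosen for illustration, not part of any ICO guidance.

```python
def local_update(weight, data, lr=0.01, epochs=50):
    """One client's training pass on data it keeps locally.

    data: list of (x, y) pairs for the model y ≈ weight * x.
    Only the updated weight leaves the client."""
    for _ in range(epochs):
        # Gradient of mean squared error with respect to the weight
        grad = sum(2 * (weight * x - y) * x for x, y in data) / len(data)
        weight -= lr * grad
    return weight


def federated_round(global_weight, client_datasets):
    """Server broadcasts the global weight, collects each client's
    locally trained weight, and averages them. Raw data is never
    centralised, reducing large-scale breach exposure."""
    local_weights = [local_update(global_weight, d) for d in client_datasets]
    return sum(local_weights) / len(local_weights)


# Two clients whose private data happens to follow y = 2x
clients = [[(1, 2), (2, 4)], [(3, 6), (4, 8)]]
w = 0.0
for _ in range(10):
    w = federated_round(w, clients)
# w converges towards 2 without either client's data leaving its device
```

In a real deployment the shared updates themselves can still leak information, which is why the interview notes that combining federated learning with other privacy-enhancing technologies (such as differential privacy or secure aggregation) further mitigates inference risks.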

Blockchain, when implemented carefully, can strengthen integrity and accountability through tamper-evident records, though it must be designed to avoid unnecessary on-chain disclosure of personal data. Our detailed guidance on blockchain will be published soon and can be tracked via the ICO’s technology guidance pipeline.
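The tamper-evidence property mentioned above can be sketched with a toy hash chain: each entry commits to the previous entry’s hash, so altering any record invalidates every later link. This is an illustrative example only (the data structure and names are assumptions, not a production blockchain); note that, in line with the point about on-chain disclosure, a careful design would record only hashes on-chain and keep personal data off-chain.

```python
import hashlib


def entry_hash(prev_hash, payload):
    """Hash an entry together with its predecessor's hash."""
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()


def build_chain(payloads):
    """Build a hash chain where each record commits to the one before it."""
    chain, prev = [], "genesis"
    for p in payloads:
        h = entry_hash(prev, p)
        chain.append({"payload": p, "prev": prev, "hash": h})
        prev = h
    return chain


def verify(chain):
    """Recompute every link; any edited record breaks verification."""
    prev = "genesis"
    for e in chain:
        if e["prev"] != prev or entry_hash(prev, e["payload"]) != e["hash"]:
            return False
        prev = e["hash"]
    return True


log = build_chain(["record-1", "record-2", "record-3"])
assert verify(log)                # untouched chain verifies
log[1]["payload"] = "tampered"    # altering one record...
assert not verify(log)            # ...is detected downstream
```

The accountability benefit comes from this property: an auditor holding only the final hash can detect retroactive edits anywhere in the log.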

What ethical concerns are associated with AI, and how should organisations address them? What is the ICO’s strategic approach?

Data protection law embeds ethical principles through its seven core principles: lawfulness, fairness and transparency; purpose limitation; data minimisation; accuracy; storage limitation; security; and accountability. Under the UK GDPR’s “data protection by design and by default” requirement, organisations must integrate these principles into AI systems from the outset.

Our recently announced AI and Biometrics Strategy sets out four priority areas: scrutiny of automated decision-making in government and recruitment, oversight of generative AI foundation model training, regulation of facial recognition technology in law enforcement, and development of a statutory code of practice on AI and automated decision-making. This strategy builds on our existing guidance and aims to protect individuals’ rights while providing clarity for innovators.

How can the UK keep pace with emerging AI technologies and their implications for data protection?

The UK government’s AI Opportunities Plan rightly emphasises the need to strengthen regulators’ capacity to supervise AI. Building expertise and resources across the regulatory landscape is essential to keep pace with rapid technological change.

How does the ICO engage internationally on AI regulation, and how influential are other countries’ policies on the UK’s approach?

AI supply chains are global, so international collaboration is vital. We maintain active relationships with counterparts through forums such as the G7, OECD, Global Privacy Assembly, and the European Commission. We closely monitor developments like the EU AI Act, while remaining confident in the UK’s approach of empowering sector regulators rather than creating a single AI regulator.

What is the Data (Use and Access) Act, and what impact will it have on AI policy?

The Data (Use and Access) Act requires the ICO to develop a statutory Code of Practice on AI and automated decision-making. This will build on our existing non-statutory guidance and incorporate recent positions, such as our expectations for generative AI and joint guidance on AI procurement. The code will provide greater clarity on issues such as research provisions and accountability in complex supply chains.

How can the UK position itself as a global leader in AI, and what challenges does the ICO anticipate?

The UK already plays a leading role in global AI regulation discussions. For example, the Digital Regulation Cooperation Forum, bringing together the ICO, Ofcom, CMA and FCA, has been replicated internationally. The ICO was also the first data protection authority to provide clarity on generative AI.

Looking ahead, our main challenges include recruiting and retaining AI specialists, providing regulatory clarity amid rapid technical and legislative change, and ensuring our capacity matches the scale of AI adoption.

Please note, this article will also appear in the 23rd edition of our quarterly publication.
