Bolstering legislation for data protection in the age of artificial intelligence is crucial
The UK government's recent data protection reform, set out in the Data (Use and Access) Act 2025 (DUAA), significantly changes how AI decision-making is regulated. By loosening restrictions on automated decision-making (ADM) under UK data protection law, the reform risks reducing existing protections against AI-related harms.
Under the current law, people affected by automated decisions have no right to receive detailed, contextual or personalised information about how a decision was reached. The DUAA goes further by removing the strict limits on decisions made solely by automated means: such decisions are now permitted where legitimate interests provide the legal basis, except where sensitive special category data (such as biometric or health data) is involved, in which case the previous restrictions still apply. In practice, this gives businesses greater freedom to use AI-driven ADM, for example in recruitment, without necessarily obtaining explicit consent or providing human intervention.
While the reform offers businesses more flexibility, it could weaken existing protections against AI harms. The restriction on significant ADM decisions now applies only when special category data is used, reducing safeguards for many other kinds of automated decision. With fewer constraints, AI systems may operate without sufficient human oversight or clarity about how decisions are made, increasing the risks of opacity, bias, and unfairness.
The reform also shifts UK data protection towards a more business-friendly and innovation-oriented framework, emphasizing pragmatism over strict enforcement. However, this regulatory shift could exacerbate challenges already noted in AI governance, such as potential perpetuation of discrimination and inequality through biased AI models, limited accountability or contestability for individuals affected by automated decisions, and erosion of privacy rights if AI systems make significant decisions without adequate transparency or due process mechanisms.
The DUAA's predecessor, the Data Protection and Digital Information Bill, pursued similar reforms and was likewise expected to weaken existing protections while it was before the House of Lords. The Ada Lovelace Institute ("Ada"), a UK-based research organisation focused on data and AI, called on the Government and Parliamentarians from all parties to work with it on improving the Bill, to ensure that data protection law is fit for the AI era.
The reform provides an opportunity to give people greater transparency about when automated decision-making is being used, together with a right to opt out. Without meaningful human oversight, however, it can be difficult for people to appeal decisions when things go wrong: systemic bias, technical failings, or individual circumstances the system does not account for can all produce unfair outcomes.
One cautionary example of the dangers of integrating complex technological systems into the economy is the UK Post Office scandal, in which hundreds of subpostmasters were prosecuted for theft, fraud and false accounting on the basis of data from the flawed Horizon accounting software. The scandal underscores the importance of ensuring that AI systems are fair, transparent, and accountable.
As AI and data become increasingly embedded in the UK economy, it is crucial to prioritise safeguards against AI-related harms. Without detailed, personalised explanations, individuals struggle to tell whether a mistake has been made, let alone to pursue meaningful redress. Independent legal analysis commissioned by Ada found that these changes are likely to erode the incentives organisations currently have to properly assess and manage the systems they use to make automated decisions.
In summary, the DUAA reforms relax previous UK GDPR restrictions on fully automated decision-making in many contexts, potentially reducing existing protections against AI decision-related harms by enabling broader use of AI ADM on a legitimate interests basis, except when special category data is involved. This marks a regulatory recalibration favouring innovation but raising important concerns about transparency, fairness, and accountability in AI applications.
- To ensure fairness and transparency in AI-driven business operations, particularly in areas such as finance, education, and cloud computing, robust regulatory measures are essential, alongside encouraging technology innovators to build accountable AI systems.