UN Committee on the Rights of Persons with Disabilities calls on the UK to act on AI human rights risks
Following a submission by Privacy International, the UN Committee on the Rights of Persons with Disabilities has published a damning report calling on the UK government to regulate the use of artificial intelligence in social services more effectively, ensuring that the technology is not discriminatory and that it respects human rights.
The United Nations (UN) Committee on the Rights of Persons with Disabilities (CRPD Committee) has published a damning "Report on follow-up to the inquiry concerning the United Kingdom of Great Britain and Northern Ireland", which calls upon the United Kingdom (UK) to act on the human rights risks posed by the use of Artificial Intelligence (AI) for automated decision-making in the social security system, where it is used to decide who can receive benefits.
Published in the wake of a submission made by Privacy International (PI) to the UN CRPD Secretariat ahead of its hearing of the UK government's oral defence on this issue, the report takes the concerns raised by PI into account. The submission built on PI's previous engagement with the UN on the rights of persons with disabilities and drew on PI's earlier investigations. These investigations focused on the privacy and human rights implications of the UK Department for Work and Pensions' (DWP) surveillance practices when assessing persons with disabilities' access to benefits, and on the rights of persons with disabilities in a digitised world when it comes to social protection schemes. In the submission, PI stated the following:
"By profiling individuals who interact with caseworkers and the DWP on the basis of unknown data points, the DWP is creating derived, inferred, and predicted profiles which may be inaccurate or systematically biased. This type of profiling can lead to individuals being misidentified, misclassified, or misjudged. Compounding the threat that such policies pose to the rights of persons with disabilities are the propensity of computer algorithms to discriminate against them. Following a legal challenge by an Organisation of Persons with Disabilities over the discriminatory nature of these systems, the DWP admitted before the UK Parliament in January 2024 that these algorithmic systems “do have biases in”, stating that “the issue is whether they are biases that are not allowed in the law, because you have to bias to catch fraudsters”, thus shockingly admitting that the bias is an intended feature of the system."
Media reported on the hearing of the UK government's oral defence that followed days later, during which the CRPD Committee asked the UK "How was it ensured that artificial intelligence tools did not have inherent biases?". The CRPD Committee then published its report, which finds significant failures on the part of the UK and highlights that the Committee "is appalled by reports of “benefit deaths” referring to fatalities among disabled people in the State party, subsequent to their engagement with the process for determining eligibility for benefits. The evidence received revealed a disturbingly consistent theme: disabled people resorting to suicide following the denial of an adequate standard of living and social protection", going on to cite an estimated 600 suicides in the span of three years.
The CRPD Committee also reflected concerns specifically raised by PI in our submission about the use of AI-powered automated decision-making in social protection and the risks of bias in this technology, stating: "There is significant concern about clause 14 of the Bill (Automated decision-making) replacing article 22 of the UK General Data Protection Regulation with new articles, 22A-22D, that will allow automated decision-making with some safeguards, whereby Artificial Intelligence will be responsible for making decisions within the social security system. There is a tangible concern that artificial intelligence (AI) tools and algorithms may harbour inherent biases, potentially leading to punitive measures that, fundamentally, could impart a sense of criminalization and psychological distress among individuals."
The Committee made damning concluding remarks, including that:
- The UK has made "no significant progress" concerning the situation of the rights of persons with disabilities.
- The UK has "failed to take all appropriate measures to address grave and systematic violations of the human rights of persons with disabilities and has failed to eliminate the root causes of inequality and discrimination".
As a result, the Committee recommends that the UK take a series of actions, including with specific regard to privacy and data protection, stating that the UK should:
"Ensure that the Data Protection and Digital Information Bill establishes safeguards and review mechanisms to prevent the risk of encoded biases in artificial intelligence (AI) tools and algorithms ensuring that such technologies are deployed in a manner that respects human rights, prevents discrimination, and upholds the principles of transparency, accountability, and fairness."