
Photo by Henry & Co. on Unsplash.
Support systems are undergoing significant digitisation and automation under the banner of efficiency. Privacy International calls for the impacts of these innovations on the rights of people with disabilities to be comprehensively assessed and addressed.
Our world is undergoing a seismic process of digitisation: new technologies are proliferating and becoming ever more deeply integrated into public services, which in turn rely more and more on copious amounts of personal data and on automated processes.
This phenomenon has a unique impact upon the rights of persons with disabilities. As societies worldwide undergo this digital metamorphosis, persons with disabilities find themselves at an intersection of empowerment and vulnerability. It is critical that the incorporation of data-driven technologies into the fabric of our societies does not come at the expense of their fundamental rights and freedoms. This article unpacks the unique challenges this digital world poses to the realisation of the rights of persons with disabilities and puts forward some key considerations for implementing bodies and oversight mechanisms.
This piece was informed by and expands upon our submission to the UN Office of the High Commissioner for Human Rights (OHCHR), as well as consultations we have conducted over the last year with Organisations of Persons with Disabilities (OPDs) across the world, who have provided us with invaluable support and guidance.
The rights of persons with disabilities are enshrined in the international legal framework. The starting point is the Convention on the Rights of Persons with Disabilities (CRPD) and its Optional Protocol, both of which came into force in 2008. Other key elements of the international human rights framework pertaining to the rights of persons with disabilities include the International Covenant on Civil and Political Rights (ICCPR) and the Universal Declaration of Human Rights (UDHR), as well as the Protocol to the African Charter on Human and Peoples’ Rights on the Rights of Persons with Disabilities in Africa and the Inter-American Convention on the Elimination of All Forms of Discrimination against Persons with Disabilities.
There is a balancing act between preserving the fundamental human right to privacy and collecting personal data, which, because of the way digital welfare and social protection programmes have evolved over time, is in many cases required for persons with disabilities to gain access to essential social benefits and welfare schemes. International law sets out specific standards for the processing of the data of persons with disabilities. For example, Article 31 of the CRPD obliges states to collect appropriate information, and requires that their process of collecting and maintaining data on persons with disabilities must:
“[c]omply with legally established safeguards, including legislation on data protection, to ensure confidentiality and respect for the privacy of persons with disabilities” as well as adhere to “norms to protect human rights and fundamental freedoms and ethical principles in the collection and use of statistics”.
The obligations imposed upon states by Article 31 are laid out in detail in a December 2021 report from the UN OHCHR, which states that “data protection laws and policies should include persons with disabilities” and goes on to emphasise that states should apply data privacy and data protection principles when developing policies that may affect persons with disabilities.
This balancing act makes it all the more important that states explicitly take persons with disabilities into account when digitising access to public services and participation in society, even where the impact on them is indirect.
The significance of the increasing digitisation of public services has been recognised by the UN Department of Economic and Social Affairs, which stated that “Information and communication technologies and infrastructures are rapidly growing in importance in the provision of information and services to the population”. Whilst such a context presents key opportunities in terms of accessibility, as documented by the World Health Organisation, it also risks undermining the right to privacy in the absence of appropriate safeguards and mitigations.
Imagine a world where decisions, pivotal and mundane alike, are shaped not by human intuition but by silent algorithms orchestrating our daily lives. These algorithms tend to be designed by third parties far removed from the reality of those whose lives they will impact. This is the world of automated decision-making (ADM), where the invisible hands of technology thread strands into the fabric of our societies. ADM is increasingly involved in deciding who can access a public service or benefit, and the implications for persons with disabilities are huge.
The threats posed by ADM to fundamental rights
ADM harnesses algorithm-driven Artificial Intelligence (AI) and plays a growing role in making social welfare determinations. The use of this technology by governments in the provision of social services brings an array of challenges, from the opacity of its operations to concerns about reliability and bias, and it thus has the potential to seriously undermine the rights of persons with disabilities, particularly where meaningful human intervention is lacking.
These concerns have been voiced repeatedly across the international community. In 2023, the UN Special Rapporteur on the right to health expressed such concerns, as did the UN Special Rapporteur on extreme poverty and human rights in 2019, and the EU Parliament voted to prohibit AI systems that pose an “unacceptable level of risk to people’s safety”, such as “biometric categorisation systems using sensitive characteristics […]”. Similarly, the UN Special Rapporteur on the rights of persons with disabilities commented on the “well-known discriminatory impacts” of ADM, in part due to its black-box operation, which makes transparency and accountability for the decisions it makes extremely difficult to obtain. They went on to recognise the specific impacts ADM can have on the rights of persons with disabilities, stating: “Biased data sets and discriminatory algorithms can restrict persons with disabilities from employment or benefits making them even more vulnerable to poverty and marginalization, and in ways that are more systematic and harder to detect”.
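One reason such discrimination is “more systematic and harder to detect” is that it often only becomes visible in aggregate. The short Python sketch below is purely illustrative (the records, the `has_disability` field and the data are our own hypothetical choices, not drawn from any system discussed in this article); it shows the kind of simple audit an oversight body could run to surface a disparity that no individual decision, examined in isolation, would reveal.

```python
# Illustrative audit: compare benefit-approval rates across groups.
# The records and field names here are hypothetical, for demonstration only.
decisions = [
    {"has_disability": True,  "approved": False},
    {"has_disability": True,  "approved": False},
    {"has_disability": True,  "approved": True},
    {"has_disability": False, "approved": True},
    {"has_disability": False, "approved": True},
    {"has_disability": False, "approved": False},
]

def approval_rate(records: list[dict], group_flag: bool) -> float:
    """Share of applications approved within one group."""
    group = [r for r in records if r["has_disability"] is group_flag]
    return sum(r["approved"] for r in group) / len(group)

rate_disabled = approval_rate(decisions, True)
rate_others = approval_rate(decisions, False)

# A "disparate impact ratio" well below 1.0 suggests the system approves
# persons with disabilities at a lower rate, even if every individual
# decision looked defensible on its own.
print(f"approval rate (disability): {rate_disabled:.2f}")
print(f"approval rate (others):     {rate_others:.2f}")
print(f"disparate impact ratio:     {rate_disabled / rate_others:.2f}")
```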
Broadly, there exist two types of ADM: full ADM, where decisions unfold without any human intervention, and semi-ADM, where a human is involved in the decision-making process. Both have been subject to scrutiny because of the concerns they raise for people and their rights. Article 22 of the European General Data Protection Regulation (GDPR) recognises the right not to be subjected to full ADM. It is especially important in social welfare systems that decisions which will significantly impact individuals’ lives are not left to the cold calculations of machines alone, and that the human component of a semi-ADM process be meaningful; something the Wisconsin Supreme Court recognised in 2016 when it ruled that the COMPAS risk assessment system, whose outputs have been criticised as racially discriminatory, could be used only with caution and could not be determinative of a sentence. In the Netherlands, a 2023 ruling by the Amsterdam Court of Appeal went further, holding that purely symbolic human involvement did not absolve a system from being labelled as fully automated, or full ADM.
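To make the distinction concrete, here is a minimal Python sketch (entirely illustrative, with hypothetical names and a made-up scoring threshold, not modelled on any system named in this article) contrasting a rubber-stamp review with a meaningful one: the two pipelines look the same on paper, but only in the second can the human reviewer examine the evidence and overturn the algorithm’s recommendation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Application:
    applicant_id: str
    evidence: dict          # supporting documents, circumstances, etc.
    algorithm_score: float  # eligibility score produced by some model

def algorithmic_recommendation(app: Application) -> bool:
    # Full ADM: the outcome is simply the model's output.
    # (0.5 is an arbitrary, hypothetical threshold.)
    return app.algorithm_score >= 0.5

def rubber_stamp_review(app: Application) -> bool:
    # Nominally "semi-ADM": a human approves whatever the model said,
    # without ever seeing the evidence. Rulings such as the Amsterdam
    # Court of Appeal's treat this as still fully automated.
    return algorithmic_recommendation(app)

def meaningful_review(app: Application,
                      reviewer_decision: Optional[bool]) -> bool:
    # Meaningful semi-ADM: the reviewer sees both the recommendation
    # and the underlying evidence, and their judgement can override it.
    recommendation = algorithmic_recommendation(app)
    if reviewer_decision is not None:
        return reviewer_decision  # human judgement prevails
    return recommendation

# Example: the model recommends rejection, but a reviewer who has read
# the evidence (e.g. extenuating circumstances of a disability) overrides it.
app = Application("A-001", {"medical_note": "..."}, algorithm_score=0.3)
print(meaningful_review(app, reviewer_decision=True))  # True
```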
Real-world scenarios: unveiling the impact of ADM and digital social welfare on persons with disabilities
As part of the growing digitisation of our societies, public services around the world increasingly rely on technologies, including ADM, to make decisions such as who is eligible to receive a government benefit. The eruption of the Covid-19 pandemic rapidly accelerated governments’ roll-out of digital welfare programmes. The growing reliance of government-run social protection programmes on digitisation and technology raises real concerns that these technologies will infringe upon fundamental rights and discriminate against persons with disabilities.
These risks were recognised by the UN Special Rapporteur on extreme poverty and human rights, who highlighted the “various forms of rigidity and the robotic application of the rules” involved in digital welfare states and noted that digital contexts often fail to take into account extenuating circumstances resulting from a disability. Similarly, in 2023 the UN Special Rapporteur on the rights of persons with disabilities warned of serious risks accompanying the advancement of technologies, despite the opportunities they present for realising the rights of persons with disabilities.
Real-world examples of the use of ADM illustrate these concerns. In the Netherlands, ADM in public services was found to discriminate on the basis of nationality and race, both in Rotterdam’s welfare fraud algorithm and in the automated fraud detection system adopted by the Dutch tax authorities. Equally, the lack of transparency of ADM systems was highlighted in Colombia, where Solidarity Income (“Ingreso Solidario”), the social protection benefit rolled out in response to the Covid-19 pandemic, made flawed decisions about who was eligible for the programme, while it was impossible to assess how the technology came to choose each person.
In Nigeria, Angola and Mozambique, as documented by Privacy International’s research, the eligibility criteria dictating who would and would not benefit from digital Covid-19 social protection response programmes were not made public; these programmes serve as cautionary tales.
On top of case studies which exemplify the risks of digitising public services and incorporating ADM, the following case studies demonstrate the specific harms to the rights of persons with disabilities that digitised social protection programmes can cause:
When it comes to preserving the privacy of persons with disabilities, data protection is just a single, albeit critical, piece of the puzzle. There are well-established and internationally accepted norms and principles for the protection of the data of persons with disabilities. States should incorporate these principles into their national legislation and frameworks in order to ensure that the rights of persons with disabilities are protected.
As noted above, Article 31 of the CRPD obliges states to “[c]omply with legally established safeguards, including legislation on data protection, to ensure confidentiality and respect for the privacy of persons with disabilities” and further to “[c]omply with internationally accepted norms to protect human rights and fundamental freedoms and ethical principles in the collection and use of statistics”.
In our submission to the UN OHCHR, we urged the office to call upon governments to ensure that their deployment of digital technologies is in line with data protection principles, and to adopt and enforce national laws and regulatory frameworks that enshrine these principles. More than that, however, we underlined the urgency for governments to systematically conduct human rights and data protection due diligence assessments that take into account persons with disabilities, as well as the dangers that the use of ADM can pose to their rights. We also encouraged the OHCHR to recall the responsibility of businesses in the context of public-private collaborations and of facilitating the provision of assistive technologies, products or services.
Key issues pertaining to data protection principles when it comes to persons with disabilities include the following:
Assistive Technologies (ATs) can include assistive products and services as well as medical assistive devices, such as hearing aids, and these may need to be accessed through state-run social welfare programmes or other public services.
For persons with disabilities, accessing ATs raises unique concerns over the preservation of their right to privacy including:
In an increasingly digitised world, safeguarding the rights of persons with disabilities is paramount. The challenges posed to these rights by automation, data collection and assistive technologies underscore the need for states and international funders to ensure that the rights of persons with disabilities are specifically addressed and centred. Privacy International remains committed to advocating for the rights of persons with disabilities in this ever-evolving landscape, and to this end we made an August 2023 submission to the UN High Commissioner for Human Rights, laying out in more detail the concerns described in this article and making a series of recommendations.
In our submission, we laid out some of the key human rights issues that an expanding digital world poses for persons with disabilities, as our rapidly changing global context introduces new and mounting challenges to preserving their rights to non-discrimination, equal access to public services, health and privacy. Following its call for inputs, to which we responded, the UN OHCHR is expected to publish its report on “good practices on support systems to ensure community inclusion of persons with disabilities, including as a means of building forward better after the COVID-19 pandemic”, which will make direct recommendations to governments and is set to be presented at the 55th session of the UN Human Rights Council in Geneva, beginning in February 2024. These recommendations will be key in informing changes to the social protection field going forward.
PI will be closely following the publication of the UN report and will continue our global advocacy work to better ensure the realisation of the full spectrum of human rights for persons with disabilities.
Read more about our project work on disabilities and go here to read our UN submission in full.