Privacy, a precondition for social protection
By Ailidh Callander, Legal Officer
This piece first appeared in the 500th edition of the Scottish Legal Action Group Journal, 2019 SCOLAG (500, June) 124.
Political scandal, stronger regulation on privacy – but what about social protection?
In an increasingly digitalised and data-driven world, an era of government and corporate mass data exploitation, the rights to privacy and data protection – and what they mean in practice – are more important than ever. Surveillance generates power and opportunities to exert control. Privacy is a counter-balance, a protector of human dignity, an enabler of autonomy and self-determination – a necessary precondition for democracy.
2018 began with two major ‘privacy moments’: in March, the Facebook Cambridge Analytica scandal broke, followed in May by the EU General Data Protection Regulation taking effect. These moments brought much-needed public and parliamentary outrage and regulation, the effects of which are only starting to be felt.
Yet too often the right to privacy is neglected as governments around the world (often in tandem with companies) rush to introduce new technologies into social protection systems. This exacerbates the risk, and the reality, that these become systems of surveillance, control, punishment and exclusion. Concerns have been, and continue to be, raised, including in the UK by the UN Special Rapporteur on extreme poverty (UNSR) in his 2019 report on the UK. Among its many criticisms, the report notes, “The British welfare state is gradually disappearing behind a webpage and an algorithm, with significant implications for those living in poverty.” Similar issues were raised in submissions in preparation for the Thematic report to the UN General Assembly on digital technology, social protection and human rights.
Scotland, at least on the face of it, is trying to do things differently, with legal recognition of the human right to social security and the newly established Social Security Scotland identifying fairness, dignity and respect as core values. Yet what does this mean in practice? As it moves forward on this ‘journey’, Scotland can learn from international examples, where trade-offs between rights too often arise when digital technologies are applied in social protection systems.
Privacy vs. Social Protection around the world
The right to privacy is crucial in itself; it is also essential in providing a foundation upon which other rights may thrive. This includes civil and political rights, such as freedom of expression and freedom of assembly (for example, protestors in Hong Kong avoiding smart transport cards for fear of being tracked), and, crucially, the spectrum of social, economic and cultural rights.
However, the stark reality is that often, instead of recognising the interdependent nature of human rights and embedding protection for privacy into systems that provide for socio-economic rights, people at their most vulnerable are faced with a trade-off: major intrusions on their privacy through increased surveillance and data exploitation in exchange for social security and protection. This extends to the rights to food, adequate housing and health, with inherent implications for non-discrimination and equality.
Social protection programmes which integrate technology have been increasing steadily for years. Technological advancements bring potential benefits and are introduced for numerous stated reasons, including to facilitate access, increase efficiency and ensure equal distribution. However, whether these purported goals are achieved, and at what cost to rights, is the concern. Over the last decade, advancements in technology and data processing capabilities have provided ever increasing powers to collect, process and deploy intelligence. Systems have become founded on, and reliant upon, the collection and processing of vast amounts of (primarily personal) data. Access is often conditional on increased surveillance online and offline and tied to the provision of a unique identifier. The increase in automated decision-making and reliance on profiling further erodes individuals’ control and agency over decisions.
The UK is not alone and there are concerning examples from around the world. One issue of key concern is the connection of national identity systems (which in themselves raise rights concerns) with social protection programmes. For example, Aadhaar, India’s national ID system, is linked to bank accounts used for cash benefit payments. It is also increasingly used to access health care. A similar system, PhilSys, has been set up in the Philippines. In the United States, some states require government-issued ID before people can access public benefits, despite more than 21 million adults not having such valid ID. Tied to this is the increased integration of biometric technology into social protection programmes. Biometric data is particularly sensitive, as it is by definition inseparably linked to a particular person and that person’s life, and has the potential to be gravely abused. In Ireland, the Public Services Card (which includes biometric features) is required to access social welfare, and in Chile, facial recognition programmes have been deployed to deliver school meals.
Often, as a result of public-private partnerships, access to social protection programmes is mediated by smart (debit) cards. This was, for example, until recently the case in South Africa (in partnership with Mastercard) and in Bangladesh (with support from USAID and the Bill & Melinda Gates Foundation). Closer to home, in the UK, including Scotland, ASPEN cards issued to asylum seekers allow the Home Office to monitor card usage data, i.e. where and on what money is spent, with the possibility of sanctions if the already limited funds are spent on what are deemed “unnecessary” items.
Then there are systems which integrate elements of automated decision-making into social protection. In the UK, for example, automation has been introduced into registration and eligibility decision-making for Universal Credit, into the Department for Work and Pensions’ fraud investigations, and into tools aimed at identifying children at risk.
Without a rights-based approach, together with concrete safeguards (including relating to data protection and security) and due process guarantees from the outset, these programmes amplify pre-existing shortcomings and injustice. As noted in the UNSR report on the UK, “…with automation comes error at scale”. This is not just an issue of digital inclusion or exclusion; it raises fundamental questions about social protection systems being designed as systems of surveillance, control, punishment and exclusion rather than systems based on human rights, fairness, dignity, equity, inclusion and justice.
Rights-based services and systems that can be scrutinised and held to account
When considering a new social protection programme, in tandem with the right to social protection, the right to privacy, together with security and data protection safeguards, must be considered from the outset and throughout the decision-making process, from design and implementation to review. If they are not, not only will these programmes fail to meet legal standards, but the benefits will be undermined and even outweighed by the risks that emerge.
Social protection systems are complex and may be context specific. However, in a digital age it is increasingly important that they meet certain minimum requirements. This must include taking a human rights-based approach and ensuring that access is not conditional on a trade-off with the right to privacy. Any interference with the right to privacy must be in accordance with the law, pursue a legitimate aim, and be necessary in a democratic society.
It is essential that social protection systems are not exempt from data protection laws. Rather, they must build in and respect safeguards, including the principles of transparency, lawfulness, fairness, data minimisation, accuracy, storage limitation, integrity and confidentiality of data and, significantly, accountability. Each of these principles is extremely pertinent in the context of a social protection system. Data protection law also provides for limitations on the use of biometric data and rights for individuals in relation to their data. Systems should be secure by design and by default, and subject to regular and accessible data protection and human rights impact assessments and audits. As technology evolves, it is also essential that there is transparency in relation to any automated (or semi-automated) decision-making, including the system’s purpose, impact, policies, inputs and outputs. In the context of social protection there should be limits on such decisions and, at the very least, a right to human intervention and an explanation of any such decision. Similarly, there should be transparency and limitations on the use of profiling.
As with the realisation of all rights, there must be effective scrutiny and oversight of the systems and those responsible for them and there must be accessible and meaningful redress mechanisms.
Today, social protection systems around the world fail to meet these standards, and it is vital that we can interrogate them and hold them accountable. “We”, in this case, must be interpreted widely to include those impacted by these systems and decisions, those supporting and representing them, civil society (working to protect both civil and political and socio-economic rights), national human rights institutions, data protection authorities and other regulators, and the courts tasked with upholding people’s rights.
Scotland, taking a lead?
As noted in the introduction, Scotland does seem to be trying to do things differently. Following the devolution of social security benefits through the Scotland Act 2016, Scotland has the opportunity to address some of these challenges from the outset, and some headway has been made. The Social Security (Scotland) Act 2018 includes principles that recognise that social protection is a human right and essential to the realisation of other human rights. The new agency overseeing the system, Social Security Scotland, is bound to uphold the Principles and its Charter, and has committed to taking a human rights-based approach and demonstrating fairness, dignity and respect in all its actions. Social Security Scotland’s Digital Strategy also aims to reflect these core values, stating that it is based on Security by Design and on treating client data with dignity, fairness and respect. There remains much to be done, and elements of the much-criticised UK system continue to apply in Scotland. Time will tell how these commitments, and the technical systems underpinning much of the service, translate into the realisation of rights. However, it is at least promising that some steps are being taken to avoid and mitigate the risks and harms outlined above.
Another important development is the creation of a new national taskforce to lead on human rights in Scotland and take forward the recommendations of the First Minister’s advisory group on human rights. These recommendations include an Act which would provide for not just recognition, but also enforcement and redress mechanisms for the spectrum of rights, including the right to private life, alongside the right to protection against poverty and social exclusion and the right to social security and social protection.
Looking forward
An attitude of technology as panacea has created what can be described as a ‘government-industry complex’ that manages and regulates social protection programmes. Systemic problems include: excessive data collection; opaque systems and infrastructure; decision-making which fails to be open, inclusive and transparent; lack of accountability; and security concerns. These are exacerbated by the reliance on third parties to deliver these services, the lack of prioritisation of data protection and security resources and skills, and the vulnerable and challenging position of those affected by decisions, who have limited opportunity for support to challenge these systems and decisions. There are also questions around how data is shared for other purposes, for example immigration or law enforcement.
These issues are not unique to social protection systems; we see related problems across the public and private sectors, including in policing, political campaigning, credit scoring and insurance, with implications for individuals and society. When these same techniques and practices are built into our social protection systems, they bring further risks and harms and exacerbate and amplify existing ones. These include increased surveillance, discrimination and exclusion, and can have devastating consequences for those whose rights these systems have a duty to fulfil.
Social protection systems must be designed to protect rights from the outset and, where they fail to do so, human rights, data protection and other legal tools should and must be used to scrutinise and, where necessary, challenge them. “All human rights are universal, indivisible and interdependent and interrelated.” We cannot enjoy the protection of one without the others, as is the case with privacy and social protection. This only becomes more pressing as the separations between digital and analogue, online and offline, become ever more porous. Can Scotland lead by example?