Exposing Surveillance in Humanitarian and Development Initiatives
What happened
Under pressure to be more accountable for their use of resources, and driven by the post-9/11 push to track and identify terrorists across the world, the humanitarian and development sectors increasingly turned to identity registration, including biometrics, and to collecting and sharing vast amounts of data on their beneficiaries.
Development funding supported the deployment of ID systems, and both sectors were enthralled with ‘big data’ initiatives, all without the consent of the individuals concerned or a legal framework of safeguards. ‘Identity for all’ became a sustainable development goal, and ‘data for development’ became a driving force for ever more data collection, but little to no attention went to technical and legal protections.
What we did
We worked closely with staff at UNHCR, the UN refugee agency, on their data protection practices. We ran training for UNHCR staff to raise their awareness of the problems and of the agency’s lack of policy. A few years later, UNHCR adopted a data protection policy.
For three years we ran a research initiative to identify other humanitarian and development initiatives that were furthering surveillance objectives. We investigated, for example, how e-health programmes promoted ever-greater data collection without granting individuals any control over their data or giving any consideration to its security and privacy.
All this work culminated in our Aiding Surveillance report in 2013.
Where things stand now
While there has not been anywhere near the level of change required in the field, development and humanitarian actors refer to our work when they acknowledge that privacy needs more attention in their programmes. Yet they continue to amass more data, while protections and safeguards remain dangerously weak.
What we learned
Even though we could find significant numbers of staff who were concerned about what their organisations were doing, and beneficiaries who were alarmed, they were relatively powerless in the face of two convergent drives: to monitor the world for terrorists, and to show that waste and fraud were being cut.
Only narratives about people’s lives and data being put at risk could disrupt this, because there was so little critical research at the time examining cost-efficiency or whether the systems ever met their claims. Now there is a growing body of research on the costs and problems of biometric systems for aid, welfare, and elections, for instance. Yet conducting this research is a thankless task: the agencies often do not want to assist in its production, and the primary audiences are government donors who remain relatively uninterested in change because of domestic politics.
Hard lessons
Getting access to humanitarian organisations is very hard and requires building trust, which occasionally meant limiting our public advocacy. As a result, we could not publish as much as we knew, we were constrained in what we could advocate for publicly, and the sector felt less obliged to listen to us. When we first raised the alarm, few wanted to listen, so after publishing the 2013 report we stopped working actively in this domain. The sector began to pay attention after our report, and has since cited it as a motivation for reform, but not quickly enough.
Funding this work has proven impossible: there is funding for surveillance, but we were unsuccessful in seeking support for critical reflection on it.
What we are doing now
We are documenting the role of rich governments that provide resources and technologies to other countries where the rule of law is weak and safeguards are non-existent. We are also working with humanitarian organisations and development funders who are interested in critical insights into how data is generated and used, and what can be done to increase protections.