Last Tuesday, the United States Senate published a report into so-called “fusion centers” that were set up post-9/11 to share intelligence between agencies in support of counter-terrorism activities. The report is pretty damning on a range of issues (poor financial accountability, out-of-date or poor-quality intelligence, officials insisting that non-existent centres did exist) but of particular interest, to me at least, are some of the findings relating to privacy.
In summary, each fusion centre covers a geographical area (a state or city) and is defined as “a collaborative effort of 2 or more Federal, State, local, or tribal government agencies that combines resources, expertise or information with the goal of maximizing the ability of such agencies to detect, prevent, investigate, apprehend, and respond to criminal or terrorist activity.” To this end, the centres produce intelligence reports that are sent to the Department of Homeland Security (DHS, similar in scope to the UK Home Office / Ministry of Justice and created by the consolidation of a range of security-related departments).
In the USA, the Privacy Act* governs the collection, maintenance, use and dissemination of personally identifiable information (PII) by federal agencies. One finding of the Senate report was that “if published, some draft reporting could have violated the Privacy Act.” Specifically, “DHS officials also nixed 40 reports filed by DHS personnel … at fusion centers after reviewers raised concerns the documents potentially endangered the civil liberties or legal privacy protections of the US persons they mentioned.”
This, to me, raises two concerns:
- Why, given the fundamental nature of the privacy protections in both the Privacy Act and US Constitution, were fusion centre staff not better trained to compile reports?
- Since the Senate report focuses on counter-terrorism efforts but acknowledges that fusion centres play a significant role in other intelligence activities, it seems possible (even likely) that other privacy-sensitive reports could have been compiled and not checked/corrected/stopped by staff at the DHS.
Both of the above look like symptoms of the fairly standard “privacy as the last thing to think about” syndrome that seems pervasive in most organisations. So, how could organisations implement privacy protection as more than just a reactive bolt-on?
* Unlike EU Data Protection rules, the US Privacy Act applies only to federal agencies (and not bodies such as courts) and has no equivalent of (eg) the UK’s ICO.
Organisational Approaches to Privacy Protection
These two issues made me think about how organisations can structure themselves to protect the privacy of the individuals they collect data about. After some thought, I have identified three models that could be used. I expect there could be more, and in practice I expect that most organisations have a hybrid arrangement.
1. The Firewall
The first model I identified is the one that seems to be used by the fusion centres – I call it “the firewall.” Within the organisation, little consideration is given to privacy protection, but publications and data dissemination are controlled by a “firewall” that is designed to prevent the publication or dissemination of materials that could undermine individuals’ privacy.
This is similar to the model used by some companies for PR purposes – employees are not allowed to talk directly to the media and are expected to route such communications via the Public Relations department.
- It’s probably easier (and cheaper) to train employees to send materials via the correct channel than to train them on privacy protection policies and best practice.
- As in the case of the fusion centres, there is a failsafe in place even where employees should know better.
- The firewall can prevent publication or dissemination, but it’s less clear how it could be used to enforce restrictions on internal processing or storage of data.
- In large organisations, internal firewalls might be required to properly control data but would certainly slow down communication and introduce a layer of bureaucracy and expense.
- Whilst, on the face of it, the firewall looks like the most rigorous way to ensure data dissemination and publication don’t violate privacy-protection policies, it is impractical to shut down all channels of communication, especially when the lines between organisations are blurred, as in the fusion centres.
2. The Point of Reference
The second model I identified I call the “point of reference” – this is the model that universities use to enforce research ethics. A body within the organisation is tasked with maintaining privacy policies and advising other parts of the organisation about what they can and cannot do. The rest of the organisation needn’t understand all the intricacies of privacy protection, but must know enough to identify when they should consult the point of reference.
Here at the University of Southampton, the rule for when we should contact the Ethics Committee is fairly* straightforward: Whenever we conduct research that involves humans or animals.
- Unlike the Firewall model, the Point of Reference can be equally applied to data collection, maintenance, storage, use and dissemination.
- It is easier for employees to identify WHEN they need to consult the Point of Reference than to understand all of an organisation’s privacy policies.
- Unlike the firewall, which can provide a reasonably good failsafe (as in the case of the fusion centres – at least so far as DHS reports are concerned), the point of reference could easily be bypassed unless it also has the authority to proactively check activities throughout the organisation.
- The point of reference could become a point of friction if employees do not know enough about organisational privacy policies to understand decisions that conflict with their goals.
* I say fairly because there are some edge cases: does scraping Twitter involve human participants?
3. Culture of Privacy
My third model is what I call the “Culture of Privacy”. In this model, each employee within an organisation has a working knowledge of the organisation’s privacy policies, and privacy is seen as an integral part of the organisation’s operations. Employees are responsible for more than just knowing when to refer to a point of reference: they have a personal responsibility for protecting the privacy of data subjects in the course of their work. This model involves the most training and support, and probably also involves appropriate sanctions for employees who engage in their own “privacy counter-culture.”
- This model applies privacy principles to all aspects of an organisation and allows for a degree of monitoring between employees.
- If privacy is seen as part of an organisation’s core principles or even identity, then it is less likely to be seen as a hindrance.
- In practice, making privacy a core value is probably a pretty difficult thing to do (especially in engineering companies [hello Google, Bing, Facebook] where “what we can do” is more of a concern than “the side effects of what we do”).
- An internal culture of privacy is likely to depend on a wider culture that respects privacy. There seem to be differences between the EU and the US in this regard, and the motivation to create such a culture might be stronger in the EU, given the stricter Data Protection regime.
- Even with good training, employees are likely to require additional advice and support – so this model probably doesn’t work well by itself and probably needs to be combined with a point of reference.
As I alluded to previously, adopting a single model to try to enforce privacy protection within an organisation is probably not a good approach. None of the models is perfect, and (in the EU at least) the implications of failing to adequately protect data subjects’ privacy are serious enough that privacy protection is worth doing properly.
Creating hybrid models of privacy protection, for instance combining a point of reference with a firewall model for any substantial inter-organisation data transfers, is probably a better way to ensure that data subjects’ privacy is respected than (as the DHS appears to have done in the case of the fusion centres) relying on a single measure to enforce privacy protection.
The case of the US Fusion Centres illustrates atrocious project management on a number of fronts – but the apparent lack of robust privacy protection measures for data subjects is perhaps among the most unsettling. I’ve briefly explained three ways in which privacy protection could be implemented in an organisation, one of which (the firewall) appears to have saved the Fusion Centres from an even more damning report. However, in reality privacy protection needs to be at the heart of what organisations, especially data-intensive ones, do; and that probably involves a hybrid approach in which failsafe procedures are combined with a supportive environment and a culture in which employees consider privacy an important part of what they and their organisation strive to be.
There are issues I haven’t explored here about how privacy needs to be reframed: from a hindrance for engineers and service designers to an enabler for the rest of us.