On surveillance by machines

Last Thursday I attended a workshop on consent where (among other things) Andrew McStay of Bangor University was presenting some of his work on people’s reactions to “Empathic” media; specifically adverts that are able to measure human responses and adjust accordingly.  Understandably, there is significant interest in this from the marketing industry.

This sort of surveillance raises a few interesting issues; in the context of consent it raises the question of how relevant consent is outside of Data Protection and Privacy which is where we typically think about it.  Sensing the emotional state of an unknown person who passes by an advertisement is unlikely to be covered by data protection legislation, since the data is unlikely to be personally identifiable.  Still, though, we might consider it to be something that should require the subject’s approval.  As I alluded to in my ongoing series of posts about technology and empowerment, control over personal data processing seems to be just the start of a more general question of control over technology.  At the moment, most of our technology is concerned with processing data and so data is where the control problems have manifested themselves.  The IoT, and advances such as empathic media, start to demonstrate how individuals might want control over technology that goes beyond just controlling what we currently define as personal data.

The second issue, the one I want to focus on here, is the extent to which being observed by a machine (in this case an advert on a bus stop) is the same as being observed by another human being.  As another participant at the workshop pointed out, sales people have always responded to the emotions of the consumer;  you can try to upsell to a happy buyer, or back off if the customer is getting annoyed or angry.  This is a legitimate point;  few of us would feel uncomfortable at a sales person knowing how we feel – that the other person has a sense of empathy is implicit in most human interaction.  Personally, I can’t say that I’m so comfortable with a machine that attempts to do the same.  I’ve been thinking about what the difference is; why am I uncomfortable with a machine sensing how I feel but not a sales assistant?

In short, what’s the difference between a human observer and a miscellaneous electronic widget?

Visibility: Humans are, at least in comparison to modern technology, easily recognisable and actually pretty big.  What’s more, human eyes are necessarily co-located with human brains and human bodies.  Being surveilled directly by a human is, in practical terms, easier to avoid than being surveilled via a tiny piece of technology.  You’re simply more likely to know about the presence of another person, and therefore able to opt out of their presence if desired.  What’s more, it’s hard for a human observer to avoid being noticed.  No matter how hard they try, humans will never be able to hide as easily as a CCTV camera can.

Persistence: Humans don’t record information in the same way that a machine can.  Even when people have good memories, we don’t give eyewitness testimony the same weight as we give, say, CCTV images.  We readily accept that human accounts can be mistaken or fabricated in a way that the high-fidelity records that technology creates typically aren’t.

Transfer: There’s a two-to-one (at most) relationship between human eyes and human brains.  There’s no possibility of sharing what I see (or have seen) with another human being, short of physically getting them into the same place as me.  Compare this to technology, where a video stream is easily copied, broadcast, recorded, replayed and shared.

Of course, each of these properties could be replicated technologically.  We can easily build devices that are visible, that make no persistent record (or even insert deliberate errors to make their accounts somewhat unreliable), and that don’t share the sensed data with other people or devices.  None of these things can be guaranteed to the same extent that they can with human beings, though.

Being surveilled, analysed and tracked by technology is qualitatively different to being surveilled, analysed and tracked by actual people precisely because technology has capabilities beyond those of humans and because there is no easy way to verify exactly which capabilities a given widget has.

We’re all unreliable liars stuck inside our own heads; and those are nice properties to have in someone that is watching and analysing you, because in some way they put limits on how the information can be used and where it will end up.  I don’t have to trust you to be those things, I know they’re true because you’re human, like me.