‘Smart’ Things: Making disempowerment physical?

This is the second in a series of posts about the crisis of intelligibility and empowerment in modern technology. If you’ve not read the first post, “Technology Indistinguishable from Magic,” that might be a good place to start.

The Internet of Things (IoT) is set to continue as the Hottest Thing in Tech ™ in 2016, and is receiving huge attention from industry and bodies such as the UK’s Digital Catapult. There is clear promise in the idea of using established communications technology (TCP/IP) and infrastructure to control and orchestrate previously disconnected objects, or to enable entirely new classes of device such as smart dust.

Of course, the IoT goes beyond just replacing existing control mechanisms like physical knobs and buttons with an API that can be accessed over the network. IoT taps into big data, machine learning and other state-of-the-art computer science techniques to deliver devices that can operate with less user input. Taken together, these features of the IoT have the potential to transform our environments from the dumb analogue world we wander around in today into “smart”, interconnected digital systems that respond to our presence, the task we’re working on, the time of day or even our mood. Of course, the IoT also taps into less desirable “features” of today’s software engineering and business practices, such as centralisation, obsolescence and privacy-as-an-afterthought. In keeping with the theme of this series, there is the very real possibility that the IoT-enabled environments that are beginning to emerge will disempower the unfortunate humans who are set – voluntarily or otherwise – to inhabit them.

One of the first mainstream IoT devices was the Nest – the smart thermostat (now owned by Google) that promises to lower bills by learning your routine and adjusting your heating appropriately. On the face of it, this is a great idea, with clear benefits from both a user-experience and an environmental perspective. Even setting aside issues such as the reliance on internet connectivity (and on the availability of the central Nest service), though, there could be a serious intelligibility cost to this type of smart device; can users introspect why the heating is in a particular state, and correct it if required? Existing research, such as that by Yang and Newman (http://dx.doi.org/10.1145/2493432.2493489), suggests that users have trouble understanding why the Nest has set a particular temperature, or how to control its learning effectively. That one of their participants described the Nest as “arrogant” is a strong indicator that the device is disempowering them in some way, by imposing a ‘will’ all of its own.

The IoT has a data element, too: by embedding sensors in the environment around us, it will be able to collect a wide range of data and attribute it to individuals, or at least to small groups of individuals. Keeping that collection under the control of the people it concerns is the challenge. In many ways, techniques such as Privacy by Design (mandated by the GDPR) will help to achieve this, although only if we have a solid understanding of how users reason about these devices, and of how to present explanations and choices in a way that makes sense to them.

Potentially more novel, though, is the way in which the IoT transforms algorithmic decision making from something that has consequences for our personal data into something that has consequences in our physical environment. Having already been hugely disempowered with regard to their personal data, will users see the IoT spread this disempowerment into their physical environments, too? In my opinion, the answer to this question seems to be an almost certain yes, unless we learn from our mistakes in the personal data context and apply the lessons to the smart devices that the IoT promises to surround us with.

In a general sense, the learning functions of the Nest move us from a declarative interaction model (setting the desired temperature) to an inferential one, in which there is no longer a clear and easily articulated link between a user action (such as adjusting the temperature) and the actions taken by the system. There can be little doubt that, when users are prevented from understanding why something has taken place, or how they can correct it in future, they become less able to enact their own desires. Crucially, this needn’t be by design. In fact, the intelligibility problems apparent in the Nest seem to exist in spite of its engineers’ desire to help users better control their thermal comfort and energy bills.

Declarative interfaces, despite being “dumb”, are fairly easily understood. Broadly: declare your desire and let the system work to achieve and maintain it. In practice, a thermostat can be used in two different ways: set the desired temperature and leave it alone, or use it as a binary on/off switch for the heating by turning it between the two extremes. A common misuse of a thermostat is to treat it as if it were a sort of thermal volume control – turning it up to increase the thermal output of the boiler and down to decrease it. That is not how most heating systems work, and this strategy is more effortful for the user than just setting the desired temperature and leaving the thermostat to turn the heating on and off as necessary. This points to the fundamental need – well established in HCI – for users to have an accurate model of how the system works in order to debug it. In practice, in the case of a conventional thermostat, even the erroneous model allows users to “debug” the heating: when it’s too hot, turn the dial to the left, and when it’s too cold, turn it to the right. The resulting strategy may not be optimal, but the device is intelligible to the point that virtually everyone is empowered to control the temperature of their living room. Few, if any, people would accuse a conventional thermostat of arrogance.
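To make the declarative model concrete, here’s a minimal sketch in Python. The class name and the 0.5°C hysteresis band are my own illustrative assumptions, not any particular product’s behaviour: the user declares a setpoint and a simple control loop switches the heating on or off to maintain it.

```python
# A minimal sketch of the declarative model: the user states a desired
# temperature and a simple loop turns the heating on or off to maintain it.
# The names and the 0.5 degree hysteresis band are illustrative assumptions.

class SimpleThermostat:
    def __init__(self, setpoint_c: float, hysteresis_c: float = 0.5):
        self.setpoint_c = setpoint_c      # the user's declared desire
        self.hysteresis_c = hysteresis_c  # avoids rapid on/off cycling
        self.heating_on = False

    def set_temperature(self, setpoint_c: float) -> None:
        # The only user action: declare the desired temperature.
        self.setpoint_c = setpoint_c

    def update(self, measured_c: float) -> bool:
        # Bang-bang control with hysteresis: heat when clearly below the
        # setpoint, stop when clearly above, otherwise keep the current state.
        if measured_c < self.setpoint_c - self.hysteresis_c:
            self.heating_on = True
        elif measured_c > self.setpoint_c + self.hysteresis_c:
            self.heating_on = False
        return self.heating_on


thermostat = SimpleThermostat(setpoint_c=20.0)
print(thermostat.update(measured_c=18.2))  # True: too cold, heating comes on
print(thermostat.update(measured_c=21.0))  # False: warm enough, heating goes off
```

The link between the user’s action and the system’s behaviour is direct: change the setpoint and the loop responds in an obvious, predictable way.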

The simplicity of declarative interfaces is precisely why they can lead to sub-optimal results, though. Leaving the intelligence with the human necessarily introduces their own bounded rationality into the operation of the system. Conventional thermostats lead to higher-than-necessary heating bills because they are constrained by the requirement that the user correctly declare the most appropriate temperature. An appropriate temperature for an occupied house might be determined by comfort, but in an unoccupied house it’s probably determined by the need to prevent the pipes from freezing, or to allow the temperature to be raised to a comfortable level relatively quickly when the occupant returns. It is this sort of “double-loop” learning that smart devices can use to introduce efficiency: not only can they take into account the temperature at which the user feels most comfortable, but also whether or not they’re at home to feel it.
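Continuing the thermostat example, here is a hedged sketch of that double loop; the schedule, temperatures and function names are my own illustrative assumptions. An outer loop picks the setpoint from a (here, hard-coded stand-in for a learned) occupancy pattern, while the inner loop above simply maintains whatever setpoint is chosen.

```python
# Outer loop of the "double-loop" idea: choose a setpoint from predicted
# occupancy and a comfort preference; the inner loop (sketched earlier)
# maintains it. All values and names below are illustrative assumptions.

from datetime import datetime

COMFORT_SETPOINT_C = 21.0   # what the occupant declared (or the device learned)
AWAY_SETPOINT_C = 12.0      # enough to protect pipes and allow a quick warm-up

def predicted_occupied(now: datetime) -> bool:
    # Stand-in for a learned occupancy model: home mornings and evenings on
    # weekdays, home all day at weekends.
    if now.weekday() >= 5:
        return True
    return 7 <= now.hour < 9 or 17 <= now.hour < 23

def choose_setpoint(now: datetime) -> float:
    # Pick a setpoint based on whether anyone is expected to be at home.
    return COMFORT_SETPOINT_C if predicted_occupied(now) else AWAY_SETPOINT_C

print(choose_setpoint(datetime(2016, 1, 18, 8, 0)))   # 21.0: weekday morning at home
print(choose_setpoint(datetime(2016, 1, 18, 13, 0)))  # 12.0: expected to be out
```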

Inevitably, though, these devices will be wrong some of the time. Human beings have complex schedules that are influenced by everything from work commitments to their physical or mental health; sitting at home with the flu is an inopportune moment for the smart thermostat to decide that 5°C is an appropriate temperature for your home. There are two responses to this problem: build smart devices that are never wrong, or build smart devices that are intelligible and flexible enough for your poor snotty user to correct. My working hypothesis is that only the latter is a viable option.

When interacting with a smart device there are two concerns for the user to consider: the immediate situation (it’s too cold) and future situations (will this affect the temperature next week, when they’re back at work?). To use smart devices confidently, users will need to be able to reason about the effects their actions will have both now and in the future, in order to pick the best course of action. Ideally, the device will also be flexible enough to accommodate a temporary aberration from the norm, but even if it isn’t, knowing that it will need to be corrected later on will potentially avert another mistake in a few days’ time. Part of the solution to this challenge is undoubtedly to adopt models that are easily predictable; the other is to offer some means to inspect how a decision was made. Knowing the information that led to the current state will help users to correct it, and to improve their understanding of how the device reaches decisions. In the case of a device that combines multiple inputs, knowing whether the temperature has been reduced because of an appointment in your calendar, or because a movement sensor determined that nobody is at home, is key to rectifying the situation and preventing it from recurring in future.
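One way to make such a decision inspectable is sketched below, under my own assumptions about the inputs, thresholds and wording (this is not how any particular product works): the device returns the chosen setpoint together with a human-readable trail of the evidence that produced it.

```python
# Sketch of an inspectable decision: alongside the chosen setpoint, record which
# inputs drove the decision so a user can ask "why is it 12 degrees?" and get
# "your calendar says you are away". Inputs, wording and values are
# illustrative assumptions only.

from dataclasses import dataclass, field

@dataclass
class Decision:
    setpoint_c: float
    reasons: list = field(default_factory=list)  # human-readable evidence trail

def decide_setpoint(calendar_says_away: bool, motion_detected: bool) -> Decision:
    reasons = []
    if calendar_says_away:
        reasons.append("Calendar shows an appointment away from home.")
    if not motion_detected:
        reasons.append("No movement detected in the last hour.")
    if reasons:
        return Decision(setpoint_c=12.0, reasons=reasons)
    return Decision(setpoint_c=21.0, reasons=["Someone appears to be at home."])

decision = decide_setpoint(calendar_says_away=True, motion_detected=False)
print(decision.setpoint_c)       # 12.0
for reason in decision.reasons:  # the explanation a user could inspect
    print("-", reason)
```

Exposing those reasons is what lets a user both fix the immediate situation and anticipate how the device will behave next week.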

This concern needs to be addressed whether it’s a question of how hot someone’s home is, how bright their lights are, whether now is a good time for the robotic hoover to go to work, or whether the front door will open. Ultimately, the goal of the IoT should be to produce devices that make our lives easier, or at least no more difficult. The smartest devices of all will embrace the fact that they’re not really that smart at all. Instead, they’ll give their users the knowledge and control required to shape their environment and to exert their own agency.


In this series I’ll expand on the idea of technology as a disempowering force, covering the need to make empowerment part of the standard design vernacular and how we might do that. Subscribe to the RSS feed, or follow @richardgomer on Twitter to make sure you don’t miss the next post!

Technology Indistinguishable from Magic: A crisis of technological disempowerment

This is the first in a series of posts looking at the crisis of intelligibility in modern computer systems, and the threat that this poses to individual empowerment if we don’t get to grips with it.

As expected, the final text of Europe’s new Data Protection Regulation puts more emphasis on data subject consent than the previous Directive did. There’s a legitimate debate around whether consent is the right approach to data protection, or whether it’s just a distraction from more effective regulation, but in this series of posts I want to explore a broader, related and often ignored issue: the very real problem of technological intelligibility, and the risk of technology disempowering everybody.

It should be stated, so I will state it: my opinion is shaped hugely by a liberal philosophy. I like the idea of consent, and the EU approach to Data Protection, precisely because it gives consumers rights over their personal data rather than setting absolute limits on precisely what service providers can or can’t do. In Europe, if you consent to your data being processed then it’s pretty much fair game. If you want to find out how it’s being processed, or what is held, or to challenge that processing, though, then you (as a human being, not necessarily an EU citizen) have a set of rights to help you do so. This appeals to me because it empowers individuals as intelligent (if not necessarily rational) agents. If you want to sell your genome because an advertising company offers you a vast sum of money for it, you can do so; but if you don’t want an advertising company to process your genome then you have the right to challenge it if they try.

To me, as a liberal (with a big and a little L), individual empowerment – the ability for individuals to choose and to shape their own lives – is of fundamental importance. To me that means challenging state power and social inequality, as well as other factors such as patriarchy and, as I’ll explore through these blog posts, the ways in which technology shapes our opportunity, choices and lives.

I’m a technologist by training, and I do believe that technology has the potential to improve the lives of human beings and to create a fairer and more liberal society. I don’t think that is a given, though; technology can obviously be a distraction from problems, an ineffective smokescreen that gives the appearance of doing something without actually helping, or it can actively work against our interests. This critical viewpoint is one that the industry as a whole (and an often sycophantic media) tends to ignore, and is something I’ve tried to champion as an editor of XRDS magazine. My work as part of the meaningful consent project, and my PhD thesis, have brought me to the conclusion that we, as an industry and a society, may be stumbling blindly towards a future in which the potential of digital technology is missed, and which, instead of supporting citizens to reach their potential, fundamentally disempowers them.

For a long time, we’ve trumpeted the idea of advanced technology being indistinguishable from magic as an achievement, an implicit design goal or something to strive for; the technology industry is largely proud of being ‘magical’. Magic, though, is almost by definition the preserve of a few ‘gifted’ individuals, at best unintelligible to most and at worst completely unavailable to them. Magic is about power, which is why it makes for exciting stories; and it is about mystery, which is why those stories needn’t explain how it actually works. Our most compelling stories about magic inextricably link magical ability to the antagonist – whether as a tool to undermine the autonomy of others (as Sauron attempts in The Lord of the Rings), a justification for subjecting non-magical others to your whim (as Voldemort does in Harry Potter) or a malevolent influence in and of itself (as the Dark Side of the Force in Star Wars). Basically, magic never goes well.

I think that the unintelligibility of magic is what makes it so troublesome. It’s the unintelligibility that makes it both unpredictable and exclusive. If we can’t predict, we can’t shape our environment to our own ends. If something is exclusive, then those with access to it have a disproportionate power over everyone else. Magical technology that we don’t understand, cannot predict the consequences of, and do not have the ability to master does not empower us. At best it just is, and at worst it shapes our lives as a result of someone else’s intent (or inattention).

Nowhere is this better demonstrated than in the web’s ubiquitous advertising networks. Natasa Milic-Frayling (then at Microsoft Research), Eduarda Mendes Rodrigues (then at the University of Porto), m.c. schraefel (my academic supervisor) and I demonstrated, in our 2013 paper, the sheer extent of the tracking to which we are subject. Using search engines as an entry point into the Web, we crawled thousands of pages to uncover the invisible network of advertisers, brokers and content providers that work together to collect information about the web pages we visit and to transfer that data among themselves, in order to deliver advertisements individually tailored to our supposed interests. It takes just 30 clicks before the average web user has a 95% chance of having been labelled by all of the top 10 tracking domains. Despite the invisibility of these networks themselves, some of the organisations participating in them are household names: Google, Facebook, Twitter – all are deeply implicated, and all have access to far more data about us than merely what we type into their own websites.
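This is not the methodology from our paper, just a minimal sketch of the general idea under my own assumptions (it uses the requests and beautifulsoup4 packages, a hypothetical URL, and looks only at static HTML, whereas much real tracking is injected by scripts at runtime): fetch a page and list the third-party domains it asks the browser to contact, which is where trackers tend to sit.

```python
# Hedged illustration only: list the third-party domains referenced by a page's
# static HTML. Real tracking measurement needs a full browser, since many
# trackers are injected by scripts at load time.

from urllib.parse import urlparse

import requests
from bs4 import BeautifulSoup  # assumes the 'beautifulsoup4' package is installed

def third_party_domains(page_url: str) -> set:
    first_party = urlparse(page_url).hostname
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    domains = set()
    # Scripts, images and iframes all trigger requests to (and can set cookies
    # for) other domains when the page is loaded.
    for tag in soup.find_all(["script", "img", "iframe"], src=True):
        host = urlparse(tag["src"]).hostname
        if host and host != first_party:
            domains.add(host)
    return domains

print(third_party_domains("https://www.example.com/"))  # hypothetical example URL
```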

The ability of a website we’ve never visited before to deliver an advertisement tailored to our interests, or to a status update we posted “privately” to Facebook the previous day, or even to our household income or our credit rating, is magical. Most of us probably wouldn’t say it was magical in a good way – more at the Voldemort end of the scale than the Gandalf side of things – but it is magical nonetheless. In particular, it is unintelligible to most of us, even to the websites that benefit from the advertising revenue. In fact, the complexity of the emergent network itself means that the actual extent of the data brokerage is probably beyond the understanding of most of the organisations involved in it.

The unintelligibility of the advertising network makes it virtually impossible to understand what the profiles it has created for each of us contain, or how we can influence them. This does not make for positive or fulfilling experiences; most web users perceive aggressive ad targeting as creepy or downright disturbing. Despite having dedicated years of research to the topic, I am still unable to account for some of the targeting that’s apparent when I browse the web, and I’m still unable to prevent much of it from taking place. We are all hugely disempowered by the existence of this “grey web”; we can’t opt out even if we want to. It is fortunate that, some notable exceptions aside, the creepiness is largely not a major threat to most of us.

Still, the grey web is just one way in which most of us have become (or at least feel that we are) pretty disempowered when it comes to exercising control over our personal data. Only yesterday the DVLA sent my driving licence renewal letter to my flat in Southampton – not the address that appears on my licence (which is registered to my parents’ house) – after apparently checking “the last address you gave us with records held by a commercial partner”. How I would correct that record if it were wrong is anybody’s guess: there’s no insight into the magical process they used, and their description gives little clue as to what they actually did or whom they asked.

In this series of (hopefully weekly) posts I’ll expand on the idea of technology as a disempowering force, covering the need to make empowerment part of the standard design vernacular and how we might do so. Subscribe to the RSS feed, or follow @richardgomer on Twitter to make sure you don’t miss the next post!