‘Smart’ Things: Making disempowerment physical?

This is the second in a series of posts about the crisis of intelligibility and empowerment in modern technology. If you’ve not read the first post, “Technology Indistinguishable from Magic,” that might be a good place to start.

The Internet of Things (IoT) is set to continue as the Hottest Thing in Tech™ in 2016, and is receiving huge attention from industry and from bodies such as the UK’s Digital Catapult. There is clear promise in the idea of using established communications technology (TCP/IP) and infrastructure to control and orchestrate previously disconnected objects, or to enable entirely new classes of device such as smart dust.

Of course, the IoT goes beyond simply replacing existing control mechanisms like physical knobs and buttons with an API that can be accessed over the network. It taps into big data, machine learning and other state-of-the-art computer science techniques to deliver devices that can operate with less user input. Taken together, these features of the IoT have the potential to transform our environments from the dumb analogue world we wander around in today into “smart”, interconnected digital systems that respond to our presence, the task we’re working on, the time of day or even our mood. Of course, the IoT also taps into less desirable “features” of today’s software engineering and business practices, such as centralisation, obsolescence and privacy-as-an-afterthought. In keeping with the theme of this series, there is a very real possibility that the IoT-enabled environments now beginning to emerge will disempower the unfortunate humans who are set – voluntarily or otherwise – to inhabit them.

One of the first mainstream IoT devices was the Nest – the smart thermostat (now owned by Google) that promises to lower bills by learning your routine and adjusting your heating appropriately. On the face of it, this is a great idea, with clear benefits from both a user-experience and an environmental perspective. Even setting aside issues such as reliance on internet connectivity (and on the availability of the central Nest service), though, there could be a serious intelligibility cost to this type of smart device: can users work out why the heating is in a particular state, and correct it if required? Existing research, such as that by Yang and Newman (http://dx.doi.org/10.1145/2493432.2493489), suggests that users have trouble understanding why the Nest has set a particular temperature, or how to steer its learning effectively. That one of their participants described the Nest as “arrogant” is a strong indicator that the device is disempowering them in some way, by imposing a ‘will’ all of its own.

The IoT has an obvious data element: by embedding sensors in the environment around us, it will be able to collect a wide range of data and attribute it to individuals, or at least to small groups of individuals. In many ways, techniques such as Privacy by Design (mandated by the GDPR) will help to keep users in control of that data, although only if we have a solid understanding of how users reason about these devices, and how to present explanations and choices in a way that makes sense to them.

Potentially more novel, though, is the way in which the IoT transforms algorithmic decision making from something that has consequences for our personal data to something that has consequences in our physical environment. Users have already been hugely disempowered with regard to their personal data; will the IoT spread this disempowerment into their physical environments, too? In my opinion, the answer to this question seems to be an almost certain Yes, unless we learn from our mistakes in the personal data context and apply the lessons to the smart devices that the IoT promises to surround us with.

In a general sense, the learning functions of the Nest move us from a declarative interaction model – the user simply sets the desired temperature – to an inferential one, in which there is no longer a clear and easily articulated link between a user action and the actions taken by the system. There can be little doubt that, when users are prevented from understanding why something has taken place, or how they can correct it in future, they become less able to enact their own desires. Crucially, this needn’t be by design. In fact, the intelligibility problems apparent in the Nest seem to exist in spite of its engineers’ desire to help users better control their thermal comfort and energy bills.
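To make that distinction concrete, here is a deliberately simplified sketch of the two interaction models. All of the names and the inference rule are my own illustrative assumptions – this is emphatically not how the Nest is implemented – but it shows why the link between a single user action and the system’s behaviour weakens under inference:

```python
class DeclarativeThermostat:
    """The user's action *is* the system's goal: one clear, inspectable link."""
    def __init__(self):
        self.setpoint = 20.0

    def set_temperature(self, celsius: float) -> None:
        self.setpoint = celsius  # directly declared by the user


class InferentialThermostat:
    """The user's action is just one more observation feeding a learned model;
    the effective setpoint emerges from inference and may surprise the user."""
    def __init__(self):
        self.observations: list[tuple[str, float]] = []

    def observe_adjustment(self, time_of_day: str, celsius: float) -> None:
        self.observations.append((time_of_day, celsius))  # logged, not obeyed

    def setpoint(self, time_of_day: str) -> float:
        # Illustrative rule: average of past adjustments at this time of day.
        # The mapping from any one user action to this output is hard to
        # articulate, which is exactly the intelligibility problem.
        similar = [c for t, c in self.observations if t == time_of_day]
        return sum(similar) / len(similar) if similar else 18.0
```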

Declarative interfaces, despite being “dumb”, are fairly easily understood: broadly, declare your desire and let the system work to achieve and maintain it. In practice, a thermostat can be used in two different ways: set the desired temperature and leave it alone, or use it as a binary on/off switch for the heating by turning it between the two extremes. A common misuse of a thermostat is to treat it as if it were a sort of thermal volume control – turning it up to increase the thermal output of the boiler and down to decrease it. That is not how most heating systems work, and this strategy is more effortful for the user than just setting the desired temperature and leaving the thermostat to turn the heating on and off as necessary. This points to the fundamental need – well established in HCI – for users to have an accurate model of how a system works in order to debug it. In practice, in the case of a conventional thermostat, even the erroneous model allows users to “debug” the heating: when it’s too hot, turn the dial to the left, and when it’s too cold, turn it to the right. The resulting strategy may not be optimal, but the device is intelligible to the point that virtually everyone is empowered to control the temperature of their living room. Few, if any, people would accuse a conventional thermostat of arrogance.
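For the avoidance of doubt, here is a minimal sketch of what a conventional thermostat actually does: simple on/off (“bang-bang”) switching around the declared setpoint, with a little hysteresis to stop the boiler cycling rapidly. The values are illustrative assumptions, not any real product’s parameters:

```python
SETPOINT = 20.0      # what the user declares on the dial
HYSTERESIS = 0.5     # switching band around the setpoint (assumed value)

def boiler_should_run(current_temp: float, boiler_on: bool) -> bool:
    if current_temp < SETPOINT - HYSTERESIS:
        return True        # too cold: switch the boiler on
    if current_temp > SETPOINT + HYSTERESIS:
        return False       # warm enough: switch it off
    return boiler_on       # inside the band: leave the boiler as it is
```

Note that the dial never changes how hard the boiler works, only the threshold at which it switches – which is why the “thermal volume control” model is wrong, yet still lets the people who hold it debug their heating.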

The simplicity of declarative interfaces is precisely why they can lead to sub-optimal results, though. Leaving the intelligence with the human necessarily introduces their bounded rationality into the operation of the system. Conventional thermostats lead to higher-than-necessary heating bills because they are constrained by the requirement that the user correctly declare the most appropriate temperature. An appropriate temperature for an occupied house might be determined by comfort, but in an unoccupied house it’s probably determined by the need to prevent the pipes from freezing, or to allow the temperature to be raised to a comfortable level relatively quickly when the occupant returns. It is this sort of “double-loop” learning that smart devices can use to introduce efficiency: not only can they take into account what temperature the user feels most comfortable at, but also whether or not they’re at home to feel it.
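A sketch of that double-loop idea, with invented setpoints and context signals (none of these values come from a real device): the inner loop still drives the heating towards a setpoint, while an outer loop chooses which setpoint applies based on context:

```python
COMFORT_SETPOINT = 21.0   # learned or declared comfort temperature
FROST_SETPOINT = 5.0      # just enough to protect the pipes
PREHEAT_SETPOINT = 18.0   # warm the house ahead of a predicted return

def choose_setpoint(occupied: bool, returning_soon: bool) -> float:
    """Outer loop: pick the goal; the inner on/off loop then pursues it."""
    if occupied:
        return COMFORT_SETPOINT
    if returning_soon:
        return PREHEAT_SETPOINT
    return FROST_SETPOINT
```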

Inevitably, though, these devices will be wrong some of the time. Human beings have complex schedules that are influenced by everything from work commitments to their physical or mental health; sitting at home with the flu is an inopportune moment for the smart thermostat to decide 5°C is an appropriate temperature for your home. There are two responses to this problem: build smart devices that are never wrong, or build smart devices that are intelligible and flexible enough for your poor snotty user to correct. My working hypothesis is that only the latter is a viable option.

When interacting with a smart device there are two concerns for the user to consider: the immediate situation (it’s too cold) and future situations (will this affect the temperature next week, when they’re back at work?). In order to use smart devices confidently, users will need to be able to reason about the effects their actions will have both now and in the future, so that they can pick the best course of action. Ideally, the device will also be flexible enough to accommodate a temporary aberration from the norm, but even if it isn’t, knowing that it will need to be corrected later on will potentially avert another mistake in a few days’ time. Part of the solution to this challenge is undoubtedly to adopt models that are easily predictable; the other is to offer some means to inspect how a decision was made. Knowing the information that led to the current state will help users to correct it, and to improve their understanding of how the device reaches decisions. In the case of a device that combines multiple inputs, knowing whether the temperature has been reduced because of an appointment in your calendar, or because a movement sensor determined that nobody is at home, is key to rectifying the situation and preventing it from recurring in future.
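One way of supporting that kind of inspection is simply to record, alongside each decision, the inputs that drove it, so the device can answer “why?” on demand. The sketch below is an assumption of my own, not any vendor’s API – the field names and example reasons are invented:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Decision:
    """A decision plus the provenance needed to explain and correct it."""
    setpoint: float
    reasons: list[str]                       # human-readable inputs behind it
    made_at: datetime = field(default_factory=datetime.now)

    def explain(self) -> str:
        return f"Set to {self.setpoint}°C because: " + "; ".join(self.reasons)

decision = Decision(
    setpoint=5.0,
    reasons=["motion sensor: no movement for 3 hours",
             "calendar: 'Office' until 17:00"],
)
print(decision.explain())
# Set to 5.0°C because: motion sensor: no movement for 3 hours; calendar: 'Office' until 17:00
```

A user shown the second reason knows to clear the stale calendar entry; a user shown the first knows to wave at the sensor. Without the record, both are left guessing.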

This concern needs to be addressed whether it’s a question of how hot someone’s home is, how bright their lights are, whether now is a good time for the robotic hoover to go to work, or whether the front door will open. Ultimately, the goal of the IoT should be to produce devices that make our lives easier, or at least no more difficult. The smartest devices of all will embrace the fact that they’re not really that smart at all. Instead, they’ll give their users the knowledge and control required to take charge of their environment and to exert their own agency.


In this series I’ll expand on the idea of technology as a disempowering force, covering the need to make empowerment part of the standard design vernacular and how we might do that. Subscribe to the RSS feed, or follow @richardgomer on Twitter to make sure you don’t miss the next post!
