“Off you go!”: Why I don’t think we should be forcing kids to run

I don’t normally blog about anything other than consent/privacy/data protection these days; but there is another side to my PhD – wellbeing, health and exercise.  Broadly, these two apparently disjoint areas are joined together by a desire to understand how we design to accommodate complex human values and lives, and build technology that respects that diversity.  Mostly, though, this post is about experiences that far predate my PhD research.

I just read (by which I mean cursorily skimmed) an article in the Guardian debating whether school kids should be made to fit a daily mile of running or walking into their routine.  The idea of running a mile fills me with horror.  I detest running.  It’s painful, boring and, frankly, the outside is never the right temperature.  These are pretty much the same reasons I have always hated running.  I hated other sports, too – football, hockey, rugby – possibly because, to some extent, they involve running in themselves.  I still don’t really like competitive sports – what fun is there in being practically the slowest and falling over all the time; or standing in a field in the middle of winter; or trying to put on f***ing shin pads and football boots?

One particular low point came on school camp in Bude, in year 9.  I had tried the morning run on the first day, decided it was too painful, and presented the teachers with the prewritten excuse note I’d got mum to write.  They accepted it, somewhat grudgingly. (“Why can’t you do the swim?” they asked, although I think they knew that if they’d made me try and swim in the sea pool at 9am I would have drowned.)  Later in the week, we were taken, sans teachers, to play ‘games’ on the beach.  One of these (peculiarly, for something called a ‘game’) basically involved running up and down the beach.  I gave it a go and, to my credit, managed a couple of beach-laps.  Then, as was typical, I decided that the pain in my leg was probably not worth it.  So I stopped.

“Why aren’t you running?” asked the instructor.
“My leg hurts.”
“That’s a weak excuse,” he replied, “off you go.”

I was, basically, a pretty good pupil, and not one to disobey.  In my whole secondary school career, I had probably fewer than 20 debits (almost exclusively for not getting my homework diary signed –  I know, WTF?).  On reflection, my decision to basically ignore the c*** and walk off in the opposite direction to sit by myself, in floods of tears (because telling a kid who isn’t lying that they are lying is a really crappy thing to do), was something of a watershed.

In retrospect, forcing me to do activities I hated was bad for my self-esteem and bad for fostering any sense of enjoyment in physical activity.  It led me to the conclusion that, fundamentally, exercise is awful, with no redeeming features – at least for me.  It encouraged a sense of helplessness in the face of physical activity, and a belief that I just couldn’t enjoy any of it.

These days, though, I do at least two exercise classes a week and have a fairly substantial collection of weights in my living room.  These are things that I enjoy, and that I look forward to.   It is common knowledge among many of my friends that Step Aerobics is the absolute highlight of my week.  I like the music, I like that it isn’t competitive, that it engages my mind so I have something to think about other than the discomfort.  I like that I can do it entirely for me and not some twat who’s shouting at me to do it.  What’s more, I’m actually pretty good at them!  V-step – YES. Reverse turn – YES. 2-minute plank – YES!  It’s hard to shake the feeling that I’m doing them in spite of my earlier experiences, though.  In spite of the fact that, instead of setting me up on a path towards an active life, the choice of activities and the way they were pushed in successive schools taught me lessons that I’ve had to unlearn, like “I hate exercise” or “I can’t exercise”.

Through my research, I’ve spoken at length to numerous interview participants who have shared, often very candidly, their own journeys around physical exercise.  Some of these people have always been active, others have come to it later and used it as a way to turn around stressful and unhealthy lives.  What is most striking, though, speaking to these people, is the diversity and dynamicity of their reasons for engaging in physical activity; and the often complicated stories about how they found an activity that really fits with what they care about.  I’ve spoken to nobody, nobody, for whom physical exercise is just about getting physical exercise.  It is not true that – beyond a reductive physical sense – any exercise is good exercise.  The “right” exercise is the one that makes you want to do it again, that fits in with the rest of your life in terms of logistics and goals.  Few, if any, teenagers will go for a run today because it might avert heart disease in 40 years’ time.  Plenty of people will go for a run because it’s a chance to socialise, to listen to music or to explore the countryside, though.

A single, reductive, approach like “run a mile” is the complete opposite of the rich serendipitous journeys that lead most people to finding those activities that are right for them.  It is actively unhelpful because, for those people who don’t like running, it too often translates into a blunt rejection of all exercise, and a missed opportunity to find something that will engage them.

If we’re serious about getting an active population, we need to help people discover the activities that work for them; and that should start with schools.

 

Footnote

In about 2002, after a decade of resenting numerous otherwise reasonable teachers for making me run, I was largely vindicated in my consistent opposition by medical proof that “my leg hurts” was not a weak excuse, but the actual bona fide result of soft-tissue problems in my right leg and foot.  I still struggle to get my right heel flat on the floor, and my calf muscle is noticeably smaller.  For a long time, because of my experiences of trying, I thought it stopped me from doing serious exercise.  I can do aerobics, step aerobics, total tone and weight lifting, though (albeit slightly wonkily).

On surveillance by machines

Last Thursday I attended a workshop on consent where (among other things) Andrew McStay of Bangor University was presenting some of his work on people’s reactions to “Empathic” media; specifically adverts that are able to measure human responses and adjust accordingly.  Understandably, there is significant interest in this from the marketing industry.

This sort of surveillance raises a few interesting issues; in the context of consent it raises the question of how relevant consent is outside of Data Protection and Privacy which is where we typically think about it.  Sensing the emotional state of an unknown person who passes by an advertisement is unlikely to be covered by data protection legislation, since the data is unlikely to be personally identifiable.  Still, though, we might consider it to be something that should require the subject’s approval.  As I alluded to in my ongoing series of posts about technology and empowerment, control over personal data processing seems to be just the start of a more general question of control over technology.  At the moment, most of our technology is concerned with processing data and so data is where the control problems have manifested themselves.  The IoT, and advances such as empathic media, start to demonstrate how individuals might want control over technology that goes beyond just controlling what we currently define as personal data.

The second issue, which I want to focus on here, is the extent to which being observed by a machine (in this case an advert on a bus stop) is the same as being observed by another human being.  As another participant at the workshop pointed out, sales people have always responded to the emotions of the consumer;  you can try to upsell to a happy buyer, or back off if the customer is getting annoyed or angry.  This is a legitimate point;  few of us would feel uncomfortable at a sales person knowing how we feel – that the other person has a sense of empathy is implicit in most human interaction.  Personally, I can’t say that I’m so comfortable with a machine that attempts to do the same.  I’ve been thinking about what the difference is; why am I uncomfortable with a machine sensing how I feel but not a sales assistant?

In short, what’s the difference between a human observer and a miscellaneous electronic widget?

Visibility: Humans are, at least in comparison to modern technology, easily recognisable and actually pretty big.  What’s more, human eyes are necessarily co-located with human brains and human bodies.  Being surveilled directly by a human is, in practical terms, easier to avoid than being surveilled via a tiny piece of technology.  You’re simply more likely to know about the presence of another person, and therefore able to opt out of their presence if desired.  And it’s hard for a human observer to avoid being noticed:  no matter how hard they try, humans will never be able to hide as easily as a CCTV camera can.

Persistence: Humans don’t record information in the same way as a machine can.  Even when people have good memories, we don’t give eyewitness testimony the same weight as we give, say, CCTV images.  We readily accept that human accounts can be mistaken or fabricated in a way that the high-fidelity accounts that technology creates typically aren’t.

Transfer: There’s a two-to-one (at most) relationship between human eyes and human brains.  There’s no possibility of sharing what I see (or have seen) with another human being, short of physically getting them into the same place as me.  Compare this to technology, where a video stream is easily copied, broadcast, recorded, replayed and shared.

Of course, each of these things could be achieved technologically.  We can easily build devices that are visible, make no persistent record (or even insert deliberate errors to make their accounts somewhat unreliable) and which don’t share the sensed data with other people or devices.  None of these things can be guaranteed to the same extent that they can with human beings, though.

Being surveilled, analysed and tracked by technology is qualitatively different to being surveilled, analysed and tracked by actual people precisely because technology has capabilities beyond those of humans and because there is no easy way to verify exactly which capabilities a given widget has.

We’re all unreliable liars stuck inside our own heads; and those are nice properties to have in someone that is watching and analysing you, because in some way they put limits on how the information can be used and where it will end up.  I don’t have to trust you to be those things, I know they’re true because you’re human, like me.

‘Smart’ Things: Making disempowerment physical?

This is the second in a series of posts about the crisis of intelligibility and empowerment in modern technology. If you’ve not read the first post, “Technology Indistinguishable from Magic,” that might be a good place to start.

The Internet of Things (IoT) is set to continue as the Hottest Thing in Tech ™ in 2016, and is receiving huge attention from industry and bodies such as the UK’s Digital Catapult. There is clear promise in the idea of using established communications technology (TCP/IP) and infrastructure to control and orchestrate previously disconnected objects, or to enable entirely new classes of device such as smart dust.

Of course, the IoT goes beyond just replacing existing control mechanisms like physical knobs and buttons with an API that can be accessed over the network. IoT taps into big data, machine learning and other state-of-the-art computer science techniques to bring devices that can operate with less user input. Taken together, these features of the IoT have the potential to transform our environments from the dumb analogue world we wander around in today into “smart”, interconnected digital systems that respond to our presence, the task we’re working on, the time of day or even our mood. Of course, the IoT also taps into less desirable “features” of today’s software engineering and business practices such as centralisation, obsolescence and privacy-as-an-afterthought. In keeping with the theme of this series, there is the very real possibility that the IoT-enabled environments that are beginning to emerge will disempower the unfortunate humans that are set – voluntarily or otherwise – to inhabit them.

One of the first mainstream IoT devices was the Nest – the smart thermostat (now owned by Google) that promises to lower bills by learning your routine and adjusting your heating appropriately. On the face of it, this is a great idea. There are clear benefits to such a device, from a user-experience and environmental perspective. Even setting aside issues such as reliance on internet connectivity (and the availability of the central Nest service), there could be a serious intelligibility cost to this type of smart device: can users introspect why the heating is in a particular state, and correct it if required? Existing research, such as that by Yang and Newman (http://dx.doi.org/10.1145/2493432.2493489), suggests that users have trouble understanding why the Nest has set a particular temperature, or how to control the learning of the Nest effectively. That one of their participants described the Nest as “arrogant” sounds like a strong indicator that the device is disempowering them in some way, by imposing a ‘will’ all of its own.

The IoT has a data element: by embedding sensors in the environment around us, the IoT will be able to collect a wide range of data and attribute it to individuals, or at least to small groups of individuals. In many ways, techniques such as Privacy by Design (mandated by the GDPR) will help to keep that data collection in check, although only if we have a solid understanding of how users reason about these devices, and of how to present explanations and choices in a way that makes sense to them.

Potentially more novel, though, is the way in which the IoT transforms algorithmic decision making from something that has consequences for our personal data to something that has consequences in our physical environment. Users have already been hugely disempowered with regard to their personal data; will the IoT spread this disempowerment into their physical environments, too? In my opinion, the answer to this question seems to be an almost certain yes, unless we learn from our mistakes in the personal data context and apply the lessons to the smart devices that the IoT promises to surround us with.

In a general sense, the learning functions of the Nest move us from a declarative interaction model (setting the desired temperature) to an inferential one, in which there is no longer a clear and easily articulated link between a user action (adjusting the thermostat) and the actions taken by the system. There can be little doubt that, when users are prevented from understanding why something has taken place, or how they can correct it in future, they become less able to enact their own desires. Crucially, this needn’t be by design. In fact, the intelligibility problems apparent in the Nest seem to exist in spite of the engineers’ desire to help users better control their thermal comfort and energy bills.

Declarative interfaces, despite being “dumb”, are fairly easily understood. Broadly: declare your desire and let the system work to achieve and maintain it. In practice, a thermostat can be used in two different ways: set the desired temperature and leave it alone; or use it as a binary on/off switch for the heating by turning it between the two extremes. A common misuse of a thermostat is to treat it as if it were a sort of thermal volume control – turning it up to increase the thermal output of the boiler and down to decrease it. That is not how most heating systems work, and this strategy is more effortful for the user than just setting the desired temperature and leaving the thermostat to turn the heating on and off as necessary. This points to the fundamental need – well established in HCI – for users to have an accurate model of how the system works in order to debug it. In practice, in the case of a conventional thermostat, even the erroneous model allows users to “debug” the heating: when it’s too hot, turn the dial to the left and when it’s too cold, turn it to the right. The resulting strategy may not be optimal, but the device is intelligible to the point that virtually everyone is empowered to control the temperature of their living room. Few, if any, people would accuse a conventional thermostat of arrogance.
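To make the contrast concrete, here is a minimal sketch (in Python, with invented names and toy numbers – not any real thermostat’s API) of the declarative model: the user declares a set point, and the device does nothing cleverer than switch the heating on and off around it.

```python
# A toy declarative thermostat: the user's single action (choosing SETPOINT_C)
# maps directly and predictably onto the system's behaviour.
SETPOINT_C = 20.0    # the user's declared desire
HYSTERESIS_C = 0.5   # small dead band so the boiler doesn't cycle rapidly

def thermostat_step(current_temp: float, heating_on: bool) -> bool:
    """Decide whether the heating should be on for the next tick."""
    if current_temp < SETPOINT_C - HYSTERESIS_C:
        return True
    if current_temp > SETPOINT_C + HYSTERESIS_C:
        return False
    return heating_on  # inside the dead band: keep doing whatever we're doing

# Toy simulation: the room warms while the heating is on and cools otherwise.
temp, heating = 17.0, False
for minute in range(120):
    heating = thermostat_step(temp, heating)
    temp += 0.2 if heating else -0.05

print(f"After two hours: {temp:.1f}°C, heating {'on' if heating else 'off'}")
```

The point is not that this is good engineering, but that the whole model fits in a dozen lines: when it misbehaves, anyone can reason about why.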

The simplicity of declarative interfaces is precisely why they can lead to sub-optimal results, though. Leaving the intelligence with the human necessarily introduces their own bounded rationality into the operation of the system. Conventional thermostats lead to higher than necessary heating bills because they are constrained by the requirement that the user correctly declare the most appropriate temperature. An appropriate temperature for an occupied house might be determined by comfort, but in an unoccupied house it’s probably determined by the need to prevent the pipes from freezing, or to allow the temperature to be raised to a comfortable level relatively quickly when the occupant returns. It is this sort of “double-loop” learning that smart devices can use to introduce efficiency. Not only can they take into account what temperature the user feels most comfortable at, but also whether or not they’re at home to feel comfortable.

Inevitably, though, these devices will be wrong some of the time. Human beings have complex schedules that are influenced by everything from work commitments to their physical or mental health; sitting at home with the flu is an inopportune moment for the smart thermostat to decide 5°C is an appropriate temperature for your home. There are two responses to this problem: build smart devices that are never wrong, or build smart devices that are intelligible and flexible enough for your poor snotty user to correct. My working hypothesis is that only the latter is a viable option.

When interacting with a smart device there are two concerns for the user to consider, though: the immediate situation (it’s too cold) and future situations (will this affect the temperature next week, when they’re back at work?). To use smart devices confidently, users will need to be able to reason about the effects their actions will have both now and in the future, in order to pick the best course of action. Ideally, the device will also be flexible enough to accommodate a temporary aberration from the norm, but even if it isn’t, knowing that it will need to be corrected later on will potentially avert another mistake in a few days’ time. Part of the solution to this challenge is undoubtedly to adopt models that are easily predictable; the other is to offer some means to inspect how a decision was made. Knowing the information that led to the current state will help users to correct it, and to improve their understanding of how the device reaches decisions. In the case of a device that combines multiple inputs, knowing whether the temperature has been reduced because of an appointment in your calendar, or because a movement sensor determined that nobody is at home, is key to rectifying the situation and preventing it from recurring in future.
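As a sketch of what that kind of inspectability might look like (hypothetical names and inputs, not a description of the Nest or any real product), each decision could simply carry the inputs that produced it:

```python
# Hypothetical sketch: a setpoint decision that records *why* it was made, so a
# user (or a UI) can ask "why is it 12°C in here?" and get an answer rather
# than silence. None of these names come from a real device's API.
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class SetpointDecision:
    setpoint_c: float
    inputs: Dict[str, str] = field(default_factory=dict)  # input name -> value used

    def explain(self) -> str:
        reasons = "; ".join(f"{name}: {value}" for name, value in self.inputs.items())
        return f"Set {self.setpoint_c}°C because {reasons}"


def decide_setpoint(motion_detected: bool, calendar_says_away: bool) -> SetpointDecision:
    """Illustrative inference: drop the temperature when the house looks empty."""
    if calendar_says_away and not motion_detected:
        return SetpointDecision(12.0, {"calendar": "marked away until 18:00",
                                       "motion sensor": "no movement for 3 hours"})
    return SetpointDecision(20.0, {"motion sensor": "movement detected"})


# The poor snotty user, home sick despite what the calendar says:
print(decide_setpoint(motion_detected=True, calendar_says_away=True).explain())
# -> Set 20.0°C because motion sensor: movement detected
```

Even a record this crude tells the user which input to correct – the calendar, the sensor, or the inference itself.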

This concern needs to be addressed whether it’s a question of how hot someone’s home is, how bright their lights are, whether now is a good time for the robotic hoover to go to work, or whether the front door will open. Ultimately, the goal of the IoT should be to produce devices that make our lives easier, or at least no more difficult. The smartest devices of all will embrace the fact that they’re not really that smart at all. Instead, they’ll give their users the knowledge and control they need to shape their environment and to exert their own agency.


 In this series I’ll expand on the idea of technology as a disempowering force, covering the need to make empowerment part of the standard design vernacular and how we might do that. Subscribe to the RSS feed, or follow @richardgomer on Twitter to make sure you don’t miss the next post!

Technology Indistinguishable from Magic: A crisis of technological disempowerment

This is the first in a series of posts looking at the crisis of intelligibility in modern computer systems, and the threat that this poses to individual empowerment if we don’t get to grips with it.

As expected, the final text of Europe’s new Data Protection Regulation puts more emphasis on data subject consent than the previous Directive did. There’s a legitimate debate around whether consent is the right approach to data protection, or whether it’s just a distraction from more effective regulation, but in this series of posts I want to explore a broader, related and important but often ignored issue: the very real problem of technological intelligibility and the risk of technology disempowering everybody.

It should be stated, so I will, that my opinion is shaped hugely by a liberal philosophy; I like the idea of consent, and the EU approach to Data Protection precisely because it gives consumers rights over their personal data, rather than setting absolute limits on precisely what service providers can or can’t do. In Europe, if you consent to your data being processed then it’s pretty much fair game. If you want to find out how it’s being processed, or what is held, or challenge that processing, though, then you (as a human being, not necessarily an EU citizen) have a set of rights to help you do so. This appeals to me because it empowers individuals as intelligent (if not necessarily rational) agents. If you want to sell your genome because an advertising company offers you a vast sum of money for it, you can do so; but if you don’t want an advertising company to process your genome then you have the right to challenge it if they try.

To me, as a liberal (with a big and a little L), individual empowerment – the ability for individuals to choose and to shape their own lives – is of fundamental importance. To me that means challenging state power and social inequality, as well as other factors such as patriarchy and, as I’ll explore through these blog posts, the ways in which technology shapes our opportunity, choices and lives.

I’m a technologist by training, and I do believe that technology has the potential to improve the lives of human beings and to create a fairer and more liberal society. I don’t think that is a given, though; technology can obviously be a distraction from problems, an ineffective smokescreen that gives the appearance of doing something without actually helping, or it can actively work against our interests. This critical viewpoint is one that the industry as a whole (and an often sycophantic media) often ignores, and is something I’ve tried to champion while an editor of XRDS magazine. My work as part of the meaningful consent project, and my PhD thesis, have brought me to the conclusion that we, as an industry and a society, may be stumbling blindly towards a future in which the potential of digital technology is missed, and which, instead of supporting citizens to reach their potential, fundamentally disempowers them.

For a long time, we’ve trumpeted the idea of advanced technology being indistinguishable from magic as an achievement, as an implicit design goal or something to strive for; the technology industry is largely proud of being ‘magical’. Magic, though, is almost by definition the preserve of a few ‘gifted’ individuals, at best unintelligible to most and at worst completely unavailable to them. Magic is about power, that’s why it makes for exciting stories; and it is about being mysterious, which is why those stories needn’t explain how it actually works. Our most compelling stories about magic inextricably link magical ability to the antagonist – whether it’s as a tool to undermine the autonomy of others (as Sauron attempts in The Lord of the Rings), a justification to subject non-magical others to your whim (as Voldemort in Harry Potter) or a malevolent influence in and of itself (as the Dark Side of the Force in Star Wars). Basically, magic never goes well.

I think that the unintelligibility of magic is what makes it so troublesome. It’s the unintelligibility that makes it both unpredictable and exclusive. If we can’t predict, we can’t shape our environment to our own ends. If something is exclusive then those with access to it have a disproportionate power over everyone else. Magical technology that we don’t understand, cannot predict the consequences of, and do not have the ability to master does not empower us. At best it just is, and at worst it shapes our lives as a result of someone else’s intent (or inattention).

Nowhere is this better demonstrated than in the web’s ubiquitous advertising network. Natasa Milic-Frayling (then at Microsoft Research), Eduarda Mendes Rodrigues (then at the University of Porto), m.c. schraefel (my academic supervisor) and I demonstrated the sheer extent of the tracking to which we are subject in our 2013 paper. Using search engines as an entry point into the Web we crawled thousands of pages to uncover the invisible network of advertisers, brokers and content providers that work together to collect information about the web pages we visit and to transfer that data among themselves to deliver advertisements individually tailored to our supposed interests. It takes just 30 clicks before the average web user has a 95% chance of being labelled by all of the top 10 tracking domains. Despite the invisibility of these networks themselves, some of the organisations participating in them are household names; Google, Facebook, Twitter – all are deeply implicated, and all have access to far more data about us than merely what we type into their own websites.

The ability for a website we’ve never visited before to deliver an advertisement tailored to our interests, or to a status update we posted “privately” to Facebook the previous day, even to our household income or our credit rating, is magical. Most of us probably wouldn’t say it was magical in a good way, more at the Voldemort end of the scale than the Gandalf side of things; it is magical nonetheless. In particular, it is unintelligible to most of us, even the websites that benefit from the advertising revenue. In fact, the complexity of the emergent network itself means that the actual extent of the data brokerage is probably beyond the understanding of most of the organisations involved in it.

The unintelligibility of the advertising network makes it virtually impossible to understand what the profiles it has created for each of us contain, or how we can influence them. This does not make for positive or fulfilling experiences; most web users perceive aggressive ad targeting as creepy or downright disturbing. Despite having dedicated years of research to the topic, I am still unable to account for some of the targeting that’s apparent when I browse the web, and I’m still unable to prevent much of it from taking place. We are all hugely disempowered by the existence of this “grey web”; we can’t opt out even if we want to. It is fortunate that, with some notable exceptions, despite the creepiness it is largely not a major threat to most of us.

Still, the grey web is just one way in which most of us have become (or at least feel) pretty disempowered when it comes to exercising control over our personal data. Only yesterday the DVLA sent my driving licence renewal letter to my flat in Southampton – not the address that appears on my licence (which is registered to my parents’ house) – after apparently checking “the last address you gave us with records held by a commercial partner.” How I would correct that record if it were wrong is anybody’s guess – there’s no insight into the magical process they used, and their description provides little clue as to what they actually did or who they asked.

In this series of (hopefully weekly) posts I’ll expand on the idea of technology as a disempowering force, covering the need to make empowerment part of the standard design vernacular and how we might do so. Subscribe to the RSS feed, or follow @richardgomer on Twitter to make sure you don’t miss the next post!

Asking about gender in research

STILL A DRAFT, I might tweak it later…

On a couple of occasions lately I’ve had cause to query how gender is being asked about in research studies at the University.  I wanted to make some notes about my thoughts on what is potentially a confusing and difficult area – for scientific and social reasons – that I can point people to when the issue comes up.  I’m not an expert in gender, or in research. So, grab a pinch of salt before reading on.

Figure 1: The problem

Apparently the University of Southampton does recommend that an “other” option is provided when asking about gender.  I was unable to find the relevant guidance, but have emailed the RGO and will update this post if they can point me to it!

These are just my thoughts, so input from researchers and research participants is welcome, in the comments or by email (r dot gomer at soton dot ac dot uk).

Here’s a typical scenario: “I’m doing a survey.  I want to collect basic demographics about participants for analysis, or just to check how representative my sample is, and I’m going to ask about gender”.  In practice, there are two questions to grapple with here.  1) Should you be asking about gender, and 2) what options should you give to participants?


Aside: Gender vs Sex
A lot of researchers might not have thought about what gender really means* – so here’s a quick note on gender vs sex.  Sex, typically, is a biological (or genetic) concept – it tends to affect things like how tall people are.  Gender is the social construct that (typically) arises from sex.  It’s the set of social expectations about how men and women behave – how they dress, their role in a family, or (thankfully less common these days) what jobs they should or shouldn’t do.  Gender arguably has more of an impact on most of us than sex, even if for most people there is a direct mapping from biological sex to the corresponding gender.

* something that seems to be missing from the UK curriculum…


 

About 0.4% of people identify as a gender other than the one they were assigned at birth.  They might identify as the “other” gender, or reject a gender label altogether, feeling themselves neither male nor female.  That’s 1 in 250 people.  Not many, but not vanishingly rare either.  In a survey of 1000 people, you can pretty much guarantee that some participants won’t think either “male” or “female” is a good description of their gender.
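A quick back-of-the-envelope check of that claim (taking the 0.4% figure at face value and assuming participants are sampled independently):

```python
# Rough arithmetic only: assumes the 0.4% figure and independent sampling.
p = 0.004   # proportion identifying other than the gender assigned at birth
n = 1000    # survey size

expected = n * p                      # about 4 such participants on average
p_at_least_one = 1 - (1 - p) ** n     # roughly 0.98

print(f"Expected participants: {expected:.0f}")
print(f"Probability of at least one: {p_at_least_one:.2f}")
```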

Principle 3 of the Data Protection Act 1998 is that “Personal data shall be adequate, relevant and not excessive in relation to the purpose or purposes for which they are processed.”  In practice, this means that (if your participants can be identified from the data you collect) you have a legal obligation to ensure that you’re only collecting the data that are actually required to conduct your research.  If participants are not identifiable (eg in an anonymous online survey) then this isn’t a legal requirement; but it feels like good research practice not to collect data that isn’t necessary to answer your research questions.

As mentioned above, there are two general reasons you might ask about the gender of your research participants:

1) To ensure (or demonstrate) that your sample is representative, or at least to contextualise the data.
2) Because you expect (and will either look for, or control for) gender effects.

Both sound like valid reasons to collect gender, but have different implications for research (below).  Still, in many studies you might not expect to see gender effects, and if you’re not going to test for them then why bother collecting that data at all?

If you ARE testing for gender effects, then you could consider whether identifying as a binary gender should be part of your study inclusion criteria.  If you know you can’t recruit enough participants that identify as something other than male or female to get a statistically meaningful result, then you should consider whether it is ethical to ask those people to give up their time to take part in your research.  Of course, if gender effects are only one of many analyses (for most studies, this is almost certainly the case), then excluding participants on these grounds is likely to be unjustified (and arguably worse than only providing a binary choice).  Perhaps consider just excluding those participants who choose something other than “male” or “female” from the particular tests that relate to gender identity.

So, in practical terms, what should you ask participants?  For most research, where gender is necessary for descriptive purposes or some minor analysis (ie most research) one of the following is probably a good starting point.  I’ve taken the options phrasing from the EHRC guidance on asking about gender for monitoring purposes.

Free text
What is your gender?
_________________________
Pros:
  • Participants can express their gender in their own terms.
  • The most flexible approach.
Cons:
  • Data needs to be coded, probably by hand.  EFFORT.
  • You still need to assign categories in order to do any quantitative analysis.  Unless you’re going to let those categories emerge from the data, you might as well specify them directly for participants to choose from.

Three options
Which of the following describes how you think of yourself?
+ Female
+ Male
+ In another way
Pros:
  • Immediately quantitative.
  • Broadly covers everyone, without the use of a clumsy “Other” option.
Cons:
  • Some participants might feel pressured to answer.

Four options
Which of the following describes how you think of yourself?
+ Female
+ Male
+ In another way
+ Prefer not to say
Pros:
  • Immediately quantitative.
  • Broadly covers everyone, without the use of a clumsy “Other” option.
  • No pressure to answer.
Cons:
  • If gender is an important aspect of your research, this might cause you to miss out on data.

Specious Arguments in Favour of Mass Surveillance

Richard Berry, of Gloucestershire Police and the National Police Chiefs’ Council, has been making specious arguments in favour of ISPs retaining all of our browsing history and handing it over to the police, reports HuffPo.

“Five years ago (a suspect) could have physically walked into a bank and carried out a transaction. We could have put a surveillance team on that but now, most of it is done online. We just want to know about the visit.”

So, just to be clear: five years ago the police could put a suspect under surveillance in order to see what they were doing.  They could NOT retroactively follow that person around to see where they had BEEN.  Getting retrospective access to browsing history is not the same as being able to obtain a warrant to compel an ISP to assist with targeted surveillance.  Retrospective access would amount to a massive power grab by the police that a) undermines the privacy of everyone in the UK, and b) places a huge burden on ISPs, who will have to pay for the storage of this data.

Conversely, one can imagine that there might be a valid case for asking ISPs to log visits to particular websites, again with a warrant, because that specific website is suspected of playing a key part in illegal activity and refuses to co-operate directly with investigations – for instance the publication of online abuse imagery such as “revenge porn”.  Targeted surveillance, with judicial oversight, of a particular website would be much easier to stomach than mass surveillance with no oversight until the point that the police themselves actually want access to the data.

For most people, the risks of storing personal information do not come from the police themselves.  There is a legitimate need to limit the powers of the state, but there is also a legitimate need to minimise the data that is collected in the first place.  With the recent attacks on TalkTalk, and the pretty steady series of data breaches in general, why should we be forced to trust anyone to keep such potentially sensitive data about us?

On the MSM Blood Ban

Opposition to the UK’s (and others’…) ban on blood donations from (rather awkwardly termed) “men who have sex with men” comes to our collective attention now and then – for instance when Michael Fabricant put forward a parliamentary motion calling for its end last year.

This morning it popped back to my personal attention when a leaflet popped through the door. “If you could give up just 1 hour of your time to save or improve up to 3 lives, would you do it?” it asked. Well, yes – I would. I can’t though. Last week, it popped back to my attention when I saw the blood donation vans parked on campus – “Give Blood” they say. Well, yes – I would. I can’t though. The privilege of the majority is not being constantly aware that you’re in a minority and seeing that demonstrated through opportunities – however altruistic – denied to you. Especially when that denial is rooted in homophobia.

“NO” – you’re thinking – “it’s not homophobia, MSM have an objectively higher likelihood of being HIV positive, and we need to keep that out of the blood supply.” It’s all true. MSM are more likely to be HIV positive, there’s no denying that fact. Nor can you deny that there is a slight risk that the window between infection and seroconversion means screening is not 100% effective in keeping an HIV positive person’s blood out of the blood supply. The real question, and my real objection to this ban, is why on earth we have chosen to stratify the population into MSM and non-MSM. I’ll explain, shortly, why I think this distinction is homophobic, but I think it’s worth restating the challenge faced by the Blood Service to give a common basis for understanding.

The goal of the blood service should be – and, despite what I think is misguided reliance on dubious science, is – to keep HIV positive blood out of the blood supply, so that all of us, should we ever need it, can feel confident in receiving a transfusion. In an ideal world, people who are HIV positive (or who carry other diseases like Hepatitis, or vCJD) would be excluded on safety grounds and everyone else would be encouraged to give blood. Keep that simple goal in mind.

The problem, then, is that we don’t have a completely reliable – or even “good enough” – test for HIV. It’s possible to be infected for months or even years prior to the virus becoming detectable through an antibody test (although the average is about 6 weeks) and only about half of people will develop symptoms during seroconversion. So now the challenge is a little more complex: how do we segregate the population into those considered low risk and those considered too high a risk to donate blood?

Those people who know they are HIV+ can be screened simply by asking them “Are you HIV+?” – this gets rid of the 80% or so (HIV Aware, 2012) of HIV+ people who are diagnosed. Of the remaining 20%, many will be picked up by screening, having been infected long enough to develop HIV-specific antibodies that will be detected by the antibody test used by the UK Blood Service. The other HIV test, a nucleic acid test (NAT), is 95% effective in detecting HIV infection in individuals infected for 17 days or more. That leaves a small group of people, who have been recently infected, that cannot be removed from the donor pool either by asking them about their status or by screening their donations – the undiagnosed and undiagnosable-at-point-of-donation HIV+ population – UUHIV+.

In mathematical terms, we’re starting with a single population (of all of us, excluding known HIV+ people), with some known proportion of UUHIV+ people remaining. This is a prior probability distribution – It’s the distribution of risk that we know about, in the absence of any more information about the population.

The challenge that we have in practice, then, is to identify some set of criteria that we can use to improve our probability distribution – to learn more about the likelihood that a given person is UUHIV+.
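To make the shape of that comparison concrete, here is a purely illustrative calculation (the numbers are invented placeholders, not epidemiological estimates): a screening criterion is worth more the larger the share of UUHIV+ risk it removes and the smaller the share of willing donors it turns away.

```python
# Illustrative only: invented numbers, not real prevalence or exclusion figures.
def residual_risk(prior_risk: float,
                  share_of_risk_excluded: float,
                  share_of_donors_excluded: float) -> float:
    """Probability that a donation from the *remaining* pool is UUHIV+,
    after applying a screening criterion."""
    remaining_risk = prior_risk * (1 - share_of_risk_excluded)
    remaining_donors = 1 - share_of_donors_excluded
    return remaining_risk / remaining_donors

# Placeholder prior: probability that a random would-be donor is UUHIV+.
prior = 1e-5

# Plug in measured values for any candidate criterion to compare criteria, e.g.:
print(residual_risk(prior, share_of_risk_excluded=0.6, share_of_donors_excluded=0.05))
```

The comparison I ask for at the end of this post is exactly this calculation run twice: once with measured values for the blanket “MSM” criterion, and once for behavioural criteria such as recent new or multiple partners, alongside the supply lost by each.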

Excluding every man who has had sex with another man in the last 12 months does do that. But so does excluding all black people, everyone below 60, or, (for a massive reduction in risk, at the expense of supply) everyone who isn’t a gold-star lesbian.

Quite reasonably (and probably in part because most institutions are now clued-up enough about racism to spot it in their own policies) the UK Blood Service does not exclude all black people from donating blood. They do have restrictions on people who have visited certain countries in the last 6 months, though. The point is that we need to select screening criteria that are reliable and specific.

While it’s undeniable that more MSM people are HIV+, and therefore probable that, proportionally, more MSM people are UUHIV+ than the general population, it’s equally undeniable that a) the vast majority of MSM (96% for most of the UK, 92% for London*) are not (and never will be) HIV+ and that b) large numbers of non-MSM people are HIV+.

It’s also undeniable that the MSM group covers a massive spectrum of risk: from monogamous long-term couples to [what’s a nice term for megasluts?], from the regularly-tested condom-obsessive to the GUM-avoiding bareback fanatic. The use of a criterion as blunt as “MSM” knowingly excludes those with negligible risk of being UUHIV+ – those who are demonstrably lower risk than many non-MSM people who ARE allowed to give blood.

Meanwhile, straight [nice-term-for-megasluts] are not excluded on the grounds of sexual behaviour – unless they think they might have had sex with an MSM.

Fundamentally: HIV is not created when two men have sex – HIV is transmitted from an HIV+ person to someone else, usually sexually. HIV transmission is not so much a function of the gender or sexuality of the people you’re having sex with, but the prevalence of HIV within the network of people you’re having sex with. A homogeneous network of two – the monogamous HIV-negative dyad – isn’t at risk. In fact, the pre-1980s homogeneous network of millions wasn’t at risk until HIV was introduced into that network. It was the individual behaviours within that network – promiscuity and little use of condoms – that facilitated the rapid spread of HIV among members. Two MSM in a monogamous sexual relationship are at infinitely less risk than a non-MSM with many casual partners – the fact that MSMs, as a result of social factors (such as homophily, historic ghettoisation and culture), are at statistically higher risk is a rough correlation only.

Why, then, is being MSM in itself, rather than sexual participation in a broader “at-risk” network, used to screen blood donors? The UK ban was introduced during the early days of the AIDS epidemic – When we knew far less about HIV/AIDS but did know that it affected the gay community most. A ban at that point – in the absence of a clear understanding about what caused AIDS and no time to do the science necessary to establish specific screening criteria – was justifiable.

“Gay” is largely a term of self-identification and so, to your typically positivistic natural scientist, rather too woolly. “Men who have sex with men” is, in terms of who it includes, far more objective. With very few edge cases (and only a little uncertainty about what constitutes “sex”) we all know whether we fit into that category.

Despite our knowledge that merely having sex with other men does not lead to HIV infection, and the fact that MSM covers such a wide range of behaviours with such varying degrees of HIV risk, it is still used as an objective and scientifically useful way to categorise people. What debates about objectivity (MSM) vs subjectivity (“gay”, “queer”, “on the D/L”) miss is the extent to which this division is arbitrary – and a distinction between two broad groups, MSM and everyone else, is increasingly arbitrary given documented social changes towards “post-gay” identities, social networks and hence behaviours. If MSMs ever were a homogenous group, they surely aren’t today.

Using MSM as a medical distinction blurs a whole range of complex factors, some of which correlate with HIV risk and some of which are co-incidental. MSM is a distinction that arises, in large part, because of how sexuality has come to define identity and social connections. Only a society in which a continuous spectrum (the Kinsey scale) was awkwardly divided into two discrete groups would even have stumbled into such a categorisation for epidemiological purposes.

Fundamentally, that is why the MSM blood ban is homophobic. It is a ban that denies our individual autonomy, our own understanding of (and appetite for) risk and instead reduces us to a sexuality because that’s how history has defined and marginalised us for centuries. It is a ban justified by a distinction based on two apparently objective groups, but which in reality is a distinction that results from the historic ghettoisation of MSMs, and which today is more and more arbitrary.

Yes, excluding MSMs from donating blood reduces the risk of UUHIV+ individuals donating blood, but it reduces the risk LESS than screening based on individual risk profiles and at the expense of a great deal of supply for a blood service which, it constantly tells us, is often under-supplied.

I want to see a comparison of the effectiveness of screening based on “Have you (a male) had sex with another male in the last 12 months?” and “Have you (a person) had sex with a new partner in the last six months?”, or “Have you (a person) had sex with more than three people in the last twelve months?”. These questions are justified by our increasingly detailed model of HIV transmission, and are not grounded in the arbitrary social divisions of our great grandparents. These questions send the message that your behaviour is what matters, and not your sexuality – If that’s not a helpful public health message then I don’t know what is.

If it turns out that asking “Are you an MSM?” is more effective at screening out UUHIV+ individuals from donating blood, then we should stick with it. Somehow, though, I doubt it would be.

 

* Although, even if you knew how many MSM people had HIV (which is possible), since nobody can agree how many MSM there are (estimates of the “gay” population range from 2% to 10% of the UK population, with much of government going for something around 6%), stating what proportion are HIV+ is more akin to divination than actual statistics.

(HIV Aware, 2012) http://www.hivaware.org.uk/facts-myths/hiv-statistics

Raymond Charles Gomer

I struggled to pick any particular memories of Grandad. There are no individual moments that stand out from the others, no single memory that sums him up. On reflection, though, that is befitting of a man who has been a sort of persistent calm in our lives. A friendly face, a warm smile, and a heartfelt pat on the back.

My memories are of him sat in his chair, feeding his fish; Of arriving at his house and rushing to find him in his greenhouse or knelt in the garden in his string vest.

William Wordsworth wrote that “the best portion of a good man’s life [is] his little, nameless, unremembered acts of kindness and of love”. Little nameless acts like bringing in carrots with the tops on (because one of us liked to eat them), or keeping birds’ eggs in the shed to show us next time we visited (even though they kept exploding); sharing his stories and eating the holes from doughnuts.

Other acts – building model snowmen, or growing the pineapple – were achievements in themselves, but the ease with which we accepted them is testament in itself. I still feel an intuitive sense of childish bemusement at the suggestion that either of these things might be unusual. “Of course he’s grown a pineapple, of course he’s built a big model snowman, he’s grandad.”

In truth, then, memories of Grandad are not hard to find. Memories of grandad are memories of Christmas, of Birthdays, of the everyday activities that we did together and the memory of a childhood spent with grandparents that loved and cared for us.

Grandad leaves behind a family that, as we have seen these past weeks, continues to care. A family with little drama but defined by the same calm, patient affection that characterised him. We all know how proud he was of each of us, and I hope he was proud of himself for being such a part of an environment in which we could flourish.

Raymond Charles Gomer leaves us each with his calm kindness, his understated affection. These are virtues for us to remember and to emulate. We may no longer find grandad in his garden, or in his shed; but still, as we go on with our own lives we can remember, when we are sad, when we feel stressed, when we need to remember a calm or a safe place, to keep finding grandad.