Cyborg Unplug is a wireless anti-surveillance system for the home and workplace. ‘Plug to Unplug’, it detects and kicks devices known to pose a risk to personal privacy from your local wireless network, breaking uploads and streams. Detected devices currently include: Google Glass, Dropcam, small drones/copters, wireless ‘spy’ microphones and various other network-dependent surveillance devices.
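Cyborg Unplug's exact detection method isn't documented here, but tools of this kind typically recognise devices by the manufacturer prefix (OUI) of their Wi-Fi MAC address before de-authenticating them. A minimal sketch of that matching step, using placeholder OUI values rather than any real vendor blocklist:

```python
# Sketch: identifying device types from Wi-Fi MAC address prefixes (OUIs).
# The OUI values below are hypothetical placeholders, not a real blocklist.
WATCHLIST = {
    "AA:BB:CC": "Example camera vendor",   # hypothetical OUI
    "DD:EE:FF": "Example drone vendor",    # hypothetical OUI
}

def classify(mac):
    """Return the watch-list label for a MAC address, or None if unknown."""
    oui = mac.upper()[:8]  # the first three octets identify the vendor
    return WATCHLIST.get(oui)

print(classify("aa:bb:cc:12:34:56"))  # -> Example camera vendor
print(classify("11:22:33:44:55:66"))  # -> None
```

Note this is only the identification half: a real tool would also have to observe probe/beacon frames and send de-auth packets, and OUI matching can be defeated by MAC randomisation.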
London authorities have started forcing people convicted of alcohol-related crime to wear ankle tags which monitor the alcohol content of their perspiration, alerting the offender's probation officer if they drink. Meanwhile, in a healthcare (and by extension health insurance) context, Proteus Digital Health have developed a system in which wearable and ingestible sensors work together to gather information about treatment compliance. (The system uses technology similar to the ingestible password pill that Motorola have been developing, as reported by AllThingsD back in 2013.) Wearables are also increasingly being used to monitor employees in the workplace, including as part of corporate wellness programs.
The upshot of all this is that we are seeing the mass-market application of something academics have been warning about for some time: namely, the use of wearable, ingestible and implantable surveillance technology to segregate and enforce behavioural compliance for the purpose of efficient population management. Given that we are already seeing wearable monitoring devices used in both healthcare and criminal justice, it wouldn't be too much of a stretch to imagine tags or implants being used by a right-wing government to distinguish between ‘deserving’ and ‘undeserving’ recipients of welfare or healthcare, at which point the uberveillance society will have well and truly arrived:
We are living in a period where chip implants for the purposes of segregation are being discussed seriously… We will almost certainly witness new, and more fixed forms, of “electronic apartheid.” … The next generation will view this technology as super “cool” and convenient and opt-in without comprehending the full extent of their compliance
We understand that technotherapeutics (as many other health technologies) may be initially welcomed by many of those they are intended to help. These technologies may provide individuals who are chronically ill with a sense of identity and even ‘empowerment’ about their adherence.
… As has been observed historically, these technologies may ultimately become divisive and serve to differentiate the ‘good’ from the ‘bad’, or, in a bio-political sense, to differentiate those worthy of life (ongoing treatment and support) and those who the state should ‘let die’ (denied future medication or insurance coverage).
Overall, we understand technotherapeutics as serving to both discipline individual bodies and also to regulate whole groups of people deemed to constitute a threat to the collective body. In this sense, we consider that adherence work is above all a political project that endeavors to achieve optimal disease management (through surveillance and discipline), reduce the financial burden of treatment non-adherence on healthcare systems, and serve to further marginalize and differentiate ‘at-risk groups’ because of their unwillingness or inability to conform
The key decisions that shape people’s lives—decisions about jobs, healthcare, housing, education, criminal justice and other key areas—are, more and more often, being made automatically by computers. As a result, a growing number of important conversations about civil rights, which focus on how these decisions are made, are also becoming discussions about how computer systems work.
The September 2014 report on social justice and technology begins to answer the question: “How and where, exactly, does big data become a civil rights issue?”
Read the report here
The report is generally very good and provides concrete, real-world examples. However, on the basis of sparse anecdotal evidence, it claims that ‘Body-worn cameras are poised to help boost accountability for law enforcement and citizens’. We beg to differ: life, as always, is more complicated than that.
There is no such thing as ‘technology.’
Anyone who views critics of particular technologies as ‘luddites’ fundamentally misunderstands what technology is. There is no such thing as ‘technology.’ Rather there are specific technologies, produced by specific economic and political actors, and deployed in specific economic and social contexts. You can be anti-nukes without being anti-antibiotics. You can be pro-surveillance of powerful institutions without being pro-surveillance of individual people. You can work on machine vision for medical applications while campaigning against the use of the same technology for automatically identifying and tracking people. How? Because you take a moral view of the likely consequences of a technology in a particular context.
In June 2014, the Supreme Court handed down its decision in Riley v. California, in which the justices unanimously ruled that police officers may not, without a warrant, search the data on a cell phone seized during an arrest. Writing for eight justices, Chief Justice John Roberts declared that “modern cell phones . . . are now such a pervasive and insistent part of daily life that the proverbial visitor from Mars might conclude they were an important feature of human anatomy.”
This may be the first time the Supreme Court has explicitly contemplated the cyborg in case law—admittedly as a kind of metaphor. But the idea that the law will have to accommodate the integration of technology into the human being has actually been kicking around for a while.
Speaking at the Brookings Institution in 2011 at an event on the future of the Constitution in the face of technological change, Columbia Law Professor Tim Wu mused that “we’re talking about something different than we realize.” Because our cell phones are not attached to us, not embedded in us, Wu argued, we are missing the magnitude of the questions we contemplate as we make law and policy regulating human interactions with these ubiquitous machines that mediate so much of our lives. We are, in fact, he argued, reaching “the very beginnings of [a] sort of understanding [of] cyborg law, that is to say the law of augmented humans.”
The report is interesting and thoughtful. It asks exactly the kinds of questions we need to consider as a society.
Read the full report here
Since we are cited as an example twice we need to briefly clarify our views:
(1) The report states
How exactly we will mediate between the rights of cyborgs and the rights of anti-cyborgs remains to be seen—but we are already seeing some basic principles emerge. For example, the proposition that individuals should have special rights with respect to the use of therapeutic or restorative technologies appears to be so accepted that it has prompted a kind of intuitive carve-out for those who otherwise oppose wearable and similar technologies. Such is the case with Stop the Cyborgs, an organization that emerged directly in response to the public adoption of “wearable” technologies such as Google Glass. On its website, the group promotes “Google Glass ban signs” for owners of restaurants, bars and cafes to download and encourages the creation of “surveillance-free” zones. Yet the site also expressly requests that those who choose to ban Google Glass and “similar devices” from their property to also respect the rights of those who rely on assistive devices.
This is true (it refers to our section on ‘Disability rights & assistive devices’), but our stance is a little more nuanced than the report implies. The core issues are agency and coercion rather than some normative conception of what a human should be.
If the cyborg’s extended body includes components that they do not fully control such as:
- Remotely controlled devices
- Closed source devices
- Cloud services or data storage
- Hackable or remotely updateable networked devices
Then the cyborg does not have control over their own extended body and is in a vulnerable position. They are potentially subject to external surveillance, coercion and control. Further, because they are carriers of external forces, they may subject those around them to external surveillance, coercion and risk. Because their extended body comprises networked technical systems, they cannot reassure people that they are not going to do X, because they do not control their own extended body. Thus cyborgisation forces us to replace behavioural requests (“please turn your camera off and leave it outside”) with the exclusion of particular extended bodies (“you cannot come into the Tibetan dissidents’ meeting because your body is a camera which automatically syncs with Baidu”).
Cyborgisation threatens the idea of individual agency and responsibility.
Depending on the situation, it may be the cyborgs themselves or those around them who suffer most. Further, the degree of choice the cyborg has about using the device may differ. In the case of assistive devices the user may have little choice: all available devices may subject the wearer and those around them to external monitoring, but because the consequences of not being able to see, hear, move or otherwise function are huge, they have little choice but to accept. Similarly, some people may be coerced by their insurers or employers into wearing, or being implanted with, a device. Finally, we have people like glassholes or lifeloggers who have freely chosen to wear a device.
Where the cyborg is subject to coercion, our sympathies are with them. If a technical part makes up your extended body then you, not some corporation, should control it; unfortunately, the majority of medical and assistive devices are closed proprietary systems. Further, no one should be coerced by people, corporations, or wider economic or social forces into wearing or being implanted with any device. Yet it is clear that many people are.
In the case of glassholes and lifeloggers, our views are clear. The loss to these people of removing their device is minimal, and even if they have embedded a camera in their head, no one and no circumstance forced them to do it.
Great blog post by Mark Carrigan of sociologicalimagination.org:
He starts off with his personal experience of using a tracking device:
Earlier this week I finally bought the Jawbone Up24 after weeks of deliberation. I’d got bored with the Nike Fuel Band, losing interest in the opaque ‘fuel points’ measurement and increasingly finding it to be an unwelcome presence on my wrist. I’d also been ever more aware of how weird my sleep patterns have become in the past couple of years, cycling between rising early and staying up late, with little discernible rhyme or reason. The idea of tracking my sleep in a reasonably accurate fashion, using degree of bodily movement as a cypher for the depth of sleep, appealed to me on a reflexive level.
This experience of being nudged by wearable tech leads him to consider how intrusive such devices could be if they were made mandatory and used to enforce behaviour.
I set the ‘idle alert’. I did so because I found it an appealing idea. It was an expression of my own agency. But it left me with a sense of quite how intrusive and aggressive this technology could be if it were ever mandated. How hard is it to imagine a situation where Amazon factory workers are expected to wear similar bands, programmed to issue a vibrating warning after 15 minutes of idleness and to alert the supervisor if the worker is still idle a few minutes later? Is it at all challenging to imagine a comparable band with an RFID chip being used to track and sanction a call centre operator who spends too long in a toilet?
How hard is it to imagine a situation where a Conservative government, eager to separate ‘strivers’ from ‘skivers’, demands that welfare recipients submit to monitoring of their alcohol and nicotine intake? How hard is it to imagine a situation where recipients of weight related interventions on the NHS are made to wear activity tracking bands with the threat of withdrawn rights to healthcare in the case of unhealthy eating or sedentary lifestyles?
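None of these scenarios needs exotic technology: the control logic behind an enforced ‘idle alert’ fits in a few lines of code, which is part of what makes them plausible. A minimal sketch, where all thresholds and action names are hypothetical illustrations rather than any real product's behaviour:

```python
# Sketch of how trivially coercive monitoring rules could be encoded.
# All thresholds and action names here are hypothetical illustrations.
IDLE_WARN_MINUTES = 15    # vibrate the wearer's band
IDLE_REPORT_MINUTES = 18  # escalate to the supervisor

def monitor_step(idle_minutes):
    """Map minutes of inactivity to the actions a band might take."""
    actions = []
    if idle_minutes >= IDLE_WARN_MINUTES:
        actions.append("vibrate_warning")
    if idle_minutes >= IDLE_REPORT_MINUTES:
        actions.append("notify_supervisor")
    return actions

print(monitor_step(10))  # []
print(monitor_step(16))  # ['vibrate_warning']
print(monitor_step(20))  # ['vibrate_warning', 'notify_supervisor']
```

The point is not the code itself but how little stands between an opt-in nudge and a mandated sanction: only the question of who sets the thresholds and who receives the alerts.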
Originally posted on Om Malik:
“You should presume that someday, we will be able to make machines that can reason, think and do things better than we can,” Google co-founder Sergey Brin said in a conversation with Khosla Ventures founder Vinod Khosla. To someone as smart as Brin, that comment is as normal as sipping on his super-green juice, but to someone who is not from this landmass we call Silicon Valley or part of the tech-set, that comment is about the futility of their future.
And more often than not, the reality of Silicon Valley giants, who are really the gatekeepers of the future, is increasingly in conflict with the reality of the real world! What heightens that conflict — the opaque and often tone-deaf responses from companies big and small!
Silicon Valley (both the idea and the landmass) means that we always try to live in the future. We imagine what the future…
This special section of the International Journal of Communication brings together critical accounts of big data as theory, practice, archive, myth, and rhetorical move. The essays collected here interrogate rather than accept the realities conjured through our political, economic, and cultural imaginings of big data. From neoliberal economic logics shaping the deployment of big data to the cultural precursors that underlie data mining techniques, the issue covers both the macro and micro contexts. We have drawn together researchers from communication, anthropology, geography, information science, sociology, and critical media studies to, among other things, examine the political and epistemological ramifications of big data for a range of audiences. Articles in this collection also interrogate the ethics of big data use and critically consider who gets access to big data and how access (or lack of it) matters to the issues of class, race, gender, sexuality, and geography.
The issue can be found here (articles are open access).