With Big Data Comes Big Responsibility

Originally posted on Om Malik:

“You should presume that someday, we will be able to make machines that can reason, think and do things better than we can,” Google co-founder Sergey Brin said in a conversation with Khosla Ventures founder Vinod Khosla.  To someone as smart as Brin, that comment is as normal as sipping on his super-green juice, but to someone who is not from this landmass we call Silicon Valley or part of the tech-set, that comment is about the futility of their future.

And more often than not, the reality of Silicon Valley giants, who are really the gatekeepers of the future, is increasingly in conflict with the reality of the real world!  What heightens that conflict — the opaque and often tone-deaf responses from companies big and small!

Silicon Valley (both the idea and the landmass) means that we always try to live in the future. We imagine what the future…


Special section of the International Journal of Communication on critical approaches to #bigdata

Big Data | Critiquing Big Data: Politics, Ethics, Epistemology | Special Section of the International Journal of Communication.

This special section of the International Journal of Communication brings together critical accounts of big data as theory, practice, archive, myth, and rhetorical move. The essays collected here interrogate rather than accept the realities conjured through our political, economic, and cultural imaginings of big data. From neoliberal economic logics shaping the deployment of big data to the cultural precursors that underlie data mining techniques, the issue covers both the macro and micro contexts. We have drawn together researchers from communication, anthropology, geography, information science, sociology, and critical media studies to, among other things, examine the political and epistemological ramifications of big data for a range of audiences. Articles in this collection also interrogate the ethics of big data use and critically consider who gets access to big data and how access (or lack of it) matters to the issues of class, race, gender, sexuality, and geography.

….

The issue can be found here (articles are open access).

….

Surveillance in the Workplace: an overview of issues of privacy, monitoring, and ethics


MICHAEL BLAKEMORE / Briefing Paper for GMB September 2005

Professor Michael Blakemore
IDRA Ltd, blakemoremjb@hotmail.com

 

THEMES

1   Surveillance is nothing new, but the nature of surveillance is changing
2   Surveillance pre-Internet did not require consent, but it was selective, costly, and not pervasive
3   Over-reliance on technological surveillance can be problematical
4   Function-creep has always been a characteristic of surveillant technologies
5   Surveillance in many circumstances is a positive process, but not without problems
6   Surveillance of employees focused in the past mainly on physical removal of property
7   Those using surveillance technologies often rely on simple linear arguments of good and bad
8   Propagate a powerful myth and embed it into the ‘need’ for pervasive surveillance
9   Surveillance in the retail sector
9.1. Routine surveillance in a retail situation is also promoted as a form of employee protection – whether it realistically protects employees, or at least helps in the detection of criminals
9.2. How do I know whether I am being surveilled?
9.3. Am I justified in being worried by surveillance?
9.4. Areas of surveillance
10  Pervasive computing does not necessarily lead to positive benefits
11  Call Centres
12  Legislative reactions
13  Health and Safety, Risk Assessment
14  Consumer choice can be influenced by ‘social sorting’
15  The problem is not just the technologies, but may be more one of consent
16  The demise of the implied social contract?
17  Sources of Imagery
18  Sources



The border is everywhere: The history and future of biometric security

The birth of biometric security

We are currently witnessing a rapid rise in biometric security. Borders are apparently becoming ‘smart’; passports are becoming e-passports, and when you set out on your travels your data double is already at your destination. Access to airports and even continents will increasingly be determined not by your national citizenship but by the security of your identity. Biometric security has received little anthropological attention despite historical associations with the discipline. Here I wish to outline a brief genealogy of biometric security in order to argue that, beyond the apparent newness of the technology, key biometric technologies owe their origins to 19th-century deployments and that, then as now, they may be understood as a form of bio-governmentality in which the security of identity opens possibilities for population control.

Maguire, Mark (2009) The birth of biometric security. Anthropology Today, 25 (2). pp. 9-14. ISSN 0268-540X

….

Full paper here; BBC Radio 4 interview here

….

Identity dominance: The U.S. Military’s Biometric War in Afghanistan

For years the U.S. military has been waging a biometric war in Afghanistan, working to unravel the insurgent networks operating throughout the country by collecting the personal identifiers of large portions of the population.  A restricted U.S. Army guide on the use of biometrics in Afghanistan obtained by Public Intelligence provides an inside look at this ongoing battle to identify the Afghan people.

….

Article here

….

Face recognition in retail, transport & buildings

Retail:

When a person in your database steps into one of your stores, you are sent an email, text, or SMS alert that includes their picture and all biographical information of the known individual so you can take immediate and appropriate action.

  • Receive descriptive alerts when pre-identified shoplifters walk through any door at any store.
  • Get alerts when known litigious individuals enter any of your locations.
  • Build a database of good customers, recognize them when they come through the door, and make them feel more welcome.

Transport

  • Spot parties from watch lists and alert authorities worldwide. Catch individuals on local, national and international watch lists.
  • Control employee access. Receive alerts instantly when employees enter areas of your facility for which they are not authorized.
  • Enhance treatment of frequent travelers. Build a database of frequent travelers to ensure they are properly recognized and greeted.

Buildings

  • Receive descriptive alerts when anyone walks into your building who is not wanted there.
  • Flag individuals who have caused problems in the past.
  • Be alerted when known litigious individuals enter any of your properties.
  • Cooperate with law enforcement. Load their criminal data into your database so you can notify them if one enters your building.
  • Monitor the movement of people in your facility to ensure that no one is in an area in which they are not authorized to be.

….

Source here

….
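Stripped of the marketing language, all of these products follow the same pattern: encode each enrolled face as a feature vector (an ‘embedding’) with a recognition model, encode every camera capture the same way, and raise an alert whenever the two are close enough. The sketch below is ours, not the vendor’s: the vectors, threshold and field names are invented for illustration, and a real deployment would compute embeddings from camera frames with a trained model rather than hard-code them.

```python
import math

# Hypothetical watch-list: each entry pairs a dossier with a face
# embedding. The numbers are invented for illustration.
WATCHLIST = [
    {"name": "enrolled shoplifter", "embedding": [0.10, 0.90, 0.30]},
    {"name": "litigious individual", "embedding": [0.70, 0.20, 0.50]},
]

MATCH_THRESHOLD = 0.95  # assumed; real systems tune this per site

def cosine_similarity(a, b):
    """Closeness of two embeddings (1.0 means same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def check_frame(probe_embedding):
    """Compare one camera capture against every enrolled face and
    return the dossiers of everyone it matches."""
    return [
        person for person in WATCHLIST
        if cosine_similarity(probe_embedding, person["embedding"]) >= MATCH_THRESHOLD
    ]

# One frame in, zero or more alerts out. Note that no consent step
# appears anywhere in the loop.
for match in check_frame([0.12, 0.88, 0.31]):
    print(f"ALERT: {match['name']} has entered the store")
```

The point is how little machinery is involved: once the embedding model exists, ‘spotting parties from watch lists’ is a few lines of comparison code, which is why the same loop can be resold for shops, airports and office lobbies.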

An open letter to Glass explorers attending the Canberra #GLASSMEETUPS event

This is an open letter addressed to Glass explorers attending the Glass Meetups event taking place at the University of Canberra’s INSPIRE Centre on 12 May 2014.

Date: 12th May 2014
Time: 5:30 PM to 7:00 PM AEST
Location: University of Canberra, INSPIRE Centre, Building 25, Pantowora Street, Bruce ACT 2617, Australia

Tickets: sold out, but you can participate (appropriately enough) via a Google Hangout.

 ……

A copy of this letter can be found on the INSPIRE Centre’s website here

……

Dear Glass Explorers,

Greetings from ‘Stop the Cyborgs’, which you may know from our ban signs and possibly a few media articles. We are mainly technology people, so we are definitely not ‘anti-tech’. We are not calling for a complete government ban on wearable tech like Glass, nor do we believe that you shouldn’t wear it at all. Rather, we want to help define sensible norms around where people do and don’t wear devices; encourage people to think about the social impact of new technologies; and discourage the normalisation of surveillance.

Even though Google Glass is still a limited prototype, it has generated excitement and controversy in equal measure. Whether you love it or loathe it, one thing we can agree on is that it is a symbolically important device, one that represents a change in our relationship with technology. If the trajectory that Glass represents is followed, technology will become part of us, mediating every human decision and interaction, for good or ill.

Glass, other wearables and, in future, implants are designed with the intention of making technology both invisible and omnipresent by integrating it closely with the body. The integration of corporate cloud services, technical devices, and the human body has three major implications.

  1. Non-users cannot tell what a user’s device is doing.

Wearing a POV camera like the Narrative Clip, Glass or the Life Logger is not equivalent to having a smartphone in your pocket and occasionally picking it up to use it. It is equivalent to constantly holding up your phone and pointing it at people. The non-user has no real idea what the user’s device is doing, and this leads to mistrust, unease and, unfortunately, in some circumstances confrontation.

  2. Users can feel that devices are part of their extended body.

With a traditional device like a phone, non-users could just make a behavioural request. If someone asked you to stop pointing your smartphone at them, would you be offended? Probably not. With some wearable devices like Glass, however, the same request can lead to confrontation, because there is a feeling that wearable devices form part of the extended self. Wearable devices are not temporary tools but deeply personal and individual.

  3. The individual becomes part of the platform.

The individual becomes a node in the network. They are personally monitored (location or activity data, for example). They gather data about the world on behalf of the system (social sharing, rating, location, proximity to others). And they are given suggestions and advice by the system, such as recommendations, ratings, nudges or incentives.

  • A Google Now-like service provides you with suggestions.
  • A NameTag-like service gives you a TripAdvisor-style rating of the person you are looking at.
  • A Tinder-like service tells you whom you should talk to.
  • A Fitbit-like service rewards you for certain ‘good’ behaviours.

This applies to the web and smartphones as well as to wearables. However, as these systems “get out of the way” and are increasingly integrated with ourselves, they become a kind of outsourced unconscious that we become simultaneously more dependent on and less able to scrutinize.

This corporately controlled collective mind allows companies to exert a powerful and invisible influence. The algorithms seem objective – we trust them – but social assumptions, cultural values, expected norms and power structures are hidden and enforced by code.

….

Most of the discussion at #glassmeetup will naturally focus on Google Glass. However, we should not fixate on the specific technical features of generation one of one particular device. Yes, the battery life sucks, Glass doesn’t constantly record, and face recognition is currently banned; but battery life will improve, the Life Logger does constantly record, and Google have not committed to a permanent ban on face recognition.

Rather, we need to take a wider view and sensibly discuss the social and political implications of current and potential technologies. Technology has a powerful influence on society, yet there is typically a view that it is morally neutral, apolitical and inevitable. Nothing could be further from the truth.

This discussion should not take place in a middle-class tech bubble. Not everyone is a model citizen with a perfect credit record, a positive social media profile and good health. Not everyone will have an equal chance in the gamified world we are constructing. Rather, we need to consider how new technologies will impact marginalised groups and individuals.

Further, we need to consider how technologies affect power structures. Do certain organisations and systems gain an unprecedented ability to monitor and influence human behaviour? Whither personal freedom when our every action is monitored and judged by social media and by powerful, un-debatable automated systems?

Some say that it is too late; that we must accept that everything will inevitably be recorded; that the only response is transparency and mutual snooping. This argument is initially seductive, but it fundamentally misunderstands what modern surveillance is.

There is a place for transparency, but it is systems that need to be transparent, not people. Mutual snooping against individuals magnifies power disparities and perpetuates inequalities.

Modern surveillance is not primarily about an evil Big Brother figure watching from above. Modern surveillance is about classifying and sorting people. It is about ‘statistically rational’ automated discrimination, nudging and the enforcement of norms. Modern abuse of power is not a cop beating someone up; it is your life chances being determined by some correlation, and that doesn’t show up on video.
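To make that concrete, here is a deliberately crude sketch in Python of what ‘statistically rational’ sorting looks like. Every field name, weight and threshold below is invented for illustration; real systems learn their weights from historical data and inherit its biases.

```python
# Toy scoring model, invented for illustration only.
HIGH_RISK_POSTCODES = {"0001", "0002"}  # postcode as a proxy for class and race

def score(person: dict) -> float:
    s = 1.0
    if person["postcode"] in HIGH_RISK_POSTCODES:
        s -= 0.5  # where you live stands in for who you are
    s -= 0.4 * person["missed_payments"]
    s += 0.1 * person["social_media_score"]
    return s

def admit(applicants: list[dict]) -> list[dict]:
    """Everyone above the threshold gets the loan, the job interview or
    the fast lane; everyone below is silently filtered out. There is no
    decision-maker to appeal to, only a correlation."""
    return [p for p in applicants if score(p) >= 1.0]

applicants = [
    {"name": "A", "postcode": "1234", "missed_payments": 0, "social_media_score": 1.0},
    {"name": "B", "postcode": "0001", "missed_payments": 1, "social_media_score": 1.0},
]
print([p["name"] for p in admit(applicants)])  # prints ['A']; B never learns why
```

No camera, no watcher, no visible act of power: just a threshold applied at scale.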

Wishing you all the best for your #glassmeetup in Canberra.

Stop the Cyborgs

 

Annex: Defining the extended body

In a sense users of some wearable devices are claiming a cyborg identity. They are claiming that their self encompasses both their biological body and a technical device.

It seems reasonable to claim that one’s extended self can encompass technical parts. If a person has a hearing aid, a pacemaker or a prosthetic limb, it would be unreasonable to ask them to remove their ears, their heart, or their leg.

Similarly, though a little less convincingly, it could be argued that there is no reason to make a distinction between medical devices (which restore “normal” function) and enhancements (which give people new abilities).

Thus it can be argued that excluding people because their self happens to include some technical part is a form of discrimination. Certainly we have a great deal of sympathy with this viewpoint, and we support the rights of people who use assistive devices, and indeed of people who have enhancements, like Neil Harbisson.

However, while it seems reasonable to claim that a hearing aid or a self-built extra sense is part of the extended self, this extended self must have a boundary for it to exist as an individual self at all. That boundary, if it is not the biological body, must be defined by agency. That is, an individual’s extended body and extended mind comprise only those systems over which that individual, and no one else, has control.

Devices which are networked or controlled by a corporation therefore cannot form part of an individual’s extended body. Indeed, a user of such a device cannot claim, for instance, that “they are not recording”, because they have no idea what their device is really doing. Certainly they have no idea what is happening to their data once it has been uploaded to the server, where it can be sold or subpoenaed.

This is of course true for phones as well as for wearables and implantables. The key difference is that a compromised phone sits in a handbag, rather than at an ideal POV vantage point (wearables) or embedded in your flesh (implantables).

#Bigdata is really automated discrimination

Oscar H. Gandy Jr., “Engaging rational discrimination: exploring reasons for placing regulatory constraints on decision support systems”, Ethics and Information Technology, March 2010, Volume 12, Issue 1, pp. 29-42.

In the future, systems of ambient intelligence will include decision support systems that will automate the process of discrimination among people who seek entry into environments and who search for the opportunities available there. This article argues that these systems must be subject to active and continuous assessment and regulation because of the ways in which they are likely to contribute to economic and social inequality. This regulatory constraint must involve limitations on the collection and use of information about individuals and groups. The article explores a variety of rationales or justifications for establishing these limits. It emphasizes the unintended consequences that flow from the use of these systems as the most compelling rationale.

….

Download the full paper here (if you have journal access) or here (if you don’t)

….

Also see Big Data Is A Civil Rights Issue