As somebody pointed out to me recently, we were running analytics on data well before somebody named a subset of all that data ‘Big Data’. And much analysis is undertaken on data that does not fit the ‘3Vs’ (or is it 4?) used to define Big Data.
Those fine points of nomenclature don’t matter. What does matter is the way these analytics affect people’s lives.
Perhaps the most fundamental reason for undertaking such analytics is to discriminate, ie to draw distinctions between individuals and to treat them differently based on those distinctions. Between the interested and the less interested; between car drivers and users of public transport; between men and women; and on and on almost forever.
Having discriminated, a decision can then be made: by a person or by a machine. The new popular term for the latter seems to be ‘algorithmic decision making’. The range of applications includes ‘personalised’ advertising, social network feeds, pricing of goods and services, pre-qualification for job interviews, insurance of all sorts, predilection to delinquency and beyond.
We are watching an exponentially accelerating (ie not just a growing speed of deployment, but a growing rate of growth) roll-out of discrimination and decision making based on digital data.
Whether this is a good thing or a bad thing often lies in the eye of the beholder (or of the profit maker). One of the better articles that tries to draw this out is Judged by the Tin Man: Individual Rights in the Age of Big Data by Jules Polonetsky and Omer Tene.
And the issue is gaining headline prominence, not just among the boffins and aficionados. Only in the last few days, the US President’s Council of Economic Advisers has released a report on Big Data and Differential Pricing, “as part of the Administration’s commitment to deeply examine how these new technologies may inadvertently or deliberately lead to discriminatory outcomes, and what policy mechanisms may be needed to respond”.
On this particular form of discrimination, the report (rather hopefully) concludes its Executive Summary by observing that:
“While substantive concerns about differential pricing in the age of big data remain, many of them can be addressed by enforcing existing antidiscrimination, privacy, and consumer protection laws. In addition, providing consumers with increased transparency into how companies use and trade their data would promote more competition and better informed consumer choice.”
But almost certainly more will be needed.
Digital ethics has been listed as one of the three priority topics for the coming year for the Digital Enlightenment Forum. It is a vital issue, and one that will shape our society as much as anything else does.
The biggest challenge for DEF will be to put forward implementable, effective ways to improve digital ethics and then to make the case convincingly to the wider world.
The Information Accountability Foundation believes that explicitly introducing ethical considerations into decision making on Big Data analytics will help address these challenges. And they have gone further than others who have made similar proposals by actually setting out “A Unified Ethical Frame for Big Data Analysis”.
Maybe a good place to start is to take the Foundation’s proposals as a ‘straw man’ and see what contribution DEF can make to that debate, or to put forward a better proposition.