
Surveillance, power and communication

Coalitions of actors – scholars, activists, some politicians, and even some captains of industry – will need to collaborate if the pathway we pursue to a calculated, unequal future is to change.

Robin Mansell
20 July 2016

Algorithmic drawing X11. Flickr/ Brett Renfer. Some rights reserved.

This presentation is organised around three keywords: Surveillance, Power and Communication. Another keyword is algorithm. This keyword was omitted from my original title – perhaps there was a worry that this talk would be very technical. It won’t be.

I want to discuss an answer to a deceptively simple question – in this increasingly algorithmic world of ours, is digitally mediated communication governable? In 1950, one of my PhD supervisors, the Canadian political economist Professor Dallas Smythe, was asking a similar question! He put it this way: ‘What kind of world will be borne through the midwifery of our new and more powerful communications tools?’ He was concerned about the consolidation of the communications industry in the post-war period. He was worried about what was happening to the public’s right to access information, about the right of citizens to be free from that era’s forms of surveillance, and about the protection of privacy – in today’s terms, the right not to be tracked, analysed or acted upon in harmful ways.

In my own research my primary interest is always in how we imagine the digitally mediated environment we inhabit, and in whether alternative worlds or pathways are possible. What alternatives are realistically available for societies when they embrace digital technological innovations? Are the dominant trends in digitally mediated surveillance, power and communicative practice congealed, or can they change and be better aligned with citizen interests in social democracy, a good society, or whatever you wish to call it?

Scholarship on algorithms asks: what are they? Who or what governs them? Are values embedded in them? What are the consequences for social sorting and discrimination? Are users aware of them? This field is a growth industry and it is attracting a lot of research funding. Research focuses on the algorithm as a sensitising concept, as an active agent, and as a black box to be unpacked. It also asks normative questions about the political, ethical and accountable character of algorithms.

Is there anything new to say?

The digitally mediated world is governable. But great care is essential to situate both what is governed and who governs in the context of what kind of world is desirable and for whom.

Surveillance and communication are obviously connected with power relations. These are understood entirely differently by algorithm makers and their corporate and state overseers, as compared with many – though not all – social science scholars and many users. I want to shift the analytical spotlight away from algorithms or algorithmic assemblages per se. In this presentation I am going to highlight a core societal problem: the increasing fascination with – and attachment to – the quantifiable.

As you all know, some research in our field is very media-centric. Similarly, even when the algorithm is treated as a sensitising concept (Ziewitz), research in this area is often very algorithm-centric. Sometimes it seems to be at risk of forgetting why questions about governance, power, surveillance and algorithmic communication matter. Of course they matter. They matter because of their relation to very big social, political and economic problems.

So, what kind of world is being born through our new communications tools?

My doctoral student, Joao Vieira Magalhaes, says that ‘algorithms are simultaneously effective and unfathomable’. This reminds me of Wittgenstein’s phrase – ‘we cannot […] say what we cannot think’. Most of the time, as users of digital services, most of us do not think about what is happening. Taina Bucher’s (2016) research on the algorithmic imaginary, affect and perception does examine user awareness. But it is almost impossible for us to say and think about what choices are being made for us, and by whom, when we go online. For the algorithm makers, of course, algorithmic computation is mainly about patterns of data and the choices that make them what they are. The problems are about ‘prediction’ and ‘contextualisation’ to rub out the foibles of human beings – to achieve and optimise the quantification of behaviour.

Algorithms make digitally mediated surveillance, or watching over, technically very easy. Applications can help mitigate the damage of disasters, they can help protect people in public spaces, and they can help signal health risks and, in that sense, combat disease. They help in monitoring climate change. Algorithms are being used to help companies boost profits and countries are (in some cases) experiencing economic growth as a result – that is the claim and it can be verified. Algorithms also, of course, support sousveillance or undersight, as Steve Mann and others call it; and so algorithm-based watching from below also supports a radical politics of resistance.

But is this world that is being born inclusive?

How does it fit with notions of a good society? A quick overview of the digital communications environment will give us some idea. This environment is usually depicted in the following way – at least it would be if we found ourselves at the World Economic Forum in Davos:

- Some 914 million people have at least one international connection on social media such as Facebook, Twitter, LinkedIn and WeChat, and many are using it for electronic commerce.

- Global data flows raised the world’s GDP by more than 10%, a contribution worth some $7.8 trillion in 2014. Small businesses can become ‘micro-multinationals’. The most connected countries are Singapore, the Netherlands, the US, Germany, Ireland and the UK, with China clocking in at number 7, according to McKinsey.

- Around 12% of global goods trade is done via electronic commerce on platforms like Alibaba, Amazon, eBay, Flipkart, and Rakuten. Social media exposes consumers to products that go viral – Adele’s song ‘Hello’ had 50 million views on YouTube in its first 48 hours.

- Some company platforms and automated processes operate at hyperscale. Thanks to Airbnb, Agoda and TripAdvisor, data analytics-driven decision making is the order of the day.

- The Internet of Things is feeding all this, monitoring, sensing, tracking, and combining data in novel ways. Companies are investing heavily to improve productivity, innovation and customer retention.

- Digital communication is central to the majority of people’s lives in the Global North, and it is becoming so in many parts of the Global South. Networks are carrying the traffic of billions of internet users, generated by LinkedIn, Weibo, price comparison sites, and government information sites. Of the 1.6 million gigabytes of data transmitted every minute, much of it consists of transactions by citizens or consumers.

Global flows are becoming more inclusive.

This and the claims I have outlined are McKinsey’s (2016), from its recent report on data flows. But more soberly, McKinsey also notes that lagging countries are catching up extremely slowly. The data flows of the leading countries just keep on rising, outpacing less wealthy countries and perpetuating gaps that do exclude. And we should not overlook the fact that six billion people do not have high-speed broadband, some four billion do not have Internet access at all, and some two billion do not have a mobile phone.

The data industry is highly concentrated globally in terms of its capacity for processing data. Barnett and Park’s (2014) work shows that global internet connectivity is concentrated among a few core countries which serve as hubs – with a Gini coefficient of .930, which is very concentrated indeed. The hub countries are the US, the UK, China, Germany, Brazil, France, India, Italy, Japan, Spain and Russia. With the growth of the so-called ‘big data’ ecology, new types of risk can command public attention and algorithmic data processing can come to the rescue: the failure of power grids, financial crises, or information leaks. For McKinsey and other corporate analysts, the biggest sources of vulnerability for society are disgruntled employees, criminals, political activists, and other countries, not the algorithms themselves. Net losses due to cyberattacks, according to McAfee and Lloyd’s of London insurers, are around US$400 billion annually.
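For readers unfamiliar with the measure, here is a minimal, illustrative sketch of how a Gini coefficient such as the .930 just cited is calculated. The country ‘shares’ of connectivity below are invented for the example and are not Barnett and Park’s data; the function name is mine.

```python
# Illustrative sketch only: computing a Gini coefficient over invented country
# shares of international internet connectivity. 0 = perfectly equal shares;
# values near 1 = concentration among a handful of hub countries.

def gini(values):
    """Gini coefficient via the mean absolute difference between all pairs."""
    n = len(values)
    total = sum(values)
    if n == 0 or total == 0:
        return 0.0
    mean_abs_diff = sum(abs(x - y) for x in values for y in values) / (n * n)
    return mean_abs_diff / (2 * total / n)

# Ten hypothetical 'hub' countries hold ~95% of connectivity; a long tail holds the rest.
shares = [30, 15, 12, 8, 7, 6, 5, 4, 4, 4] + [0.03] * 185
print(round(gini(shares), 3))  # roughly 0.91 - in the territory of the .930 reported
```

The point of the sketch is simply that a figure above 0.9 describes a distribution in which a handful of hubs hold nearly everything and the long tail holds almost nothing.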

But inclusion, the penetration of technology, and statistics on the gaps cannot be the sole criteria for deciding whether the pathway towards an algorithmic society is a good one.

Yet this is the world which is being born: a society that privileges quantification. Let me put all this apparent novelty into context.

If ‘big data’ is shorthand for algorithmic computing, why has it only relatively recently become a topic for communication scholars and other social scientists? We encounter it as novel in much the way we encountered the so-called birth of the digital revolution or the information society. We encounter it as new partly because the discourse on big data and algorithms is being hyped by powerful actors as a solution to some very big social problems – here I am not hinting at any co-ordinated or organised conspiracy. But there is a campaign to assure people that, whatever the functions of today’s algorithms, they are designed to keep us safe, happy, and make us wealthier.

In addition, a recent shift from commerce and concerns about consumer privacy to public discussion about the State’s role in war and migration is bringing algorithms into the public eye through the mainstream media in a different way. This, to some extent, is deflecting citizen attention away from threats to their privacy and rights to freedom of expression – at least for a while.

Nothing new

The catchphrase ‘big data’ is new, but data processing most certainly is not. The term ‘data science’, which includes algorithmic computing, was coined in the 1960s. According to Sundaresan, writing in the Huffington Post recently, Jeff Wu, an engineering professor at Georgia Institute of Technology, used it in the 1970s to refer to statistical data analysis. William Cleveland, an environmental statistician, used it in 2001 in an article in the International Statistical Review. Big data analytics is about statistically detecting patterns in IP addresses, unusual data accesses, or suspicious files. And machine or algorithmic learning is a branch of artificial intelligence. It too has been around for quite a while.
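To give a sense of what ‘statistically detecting patterns’ in access logs amounts to in practice, here is a minimal, hypothetical sketch: it flags any account whose data-access count deviates by more than three standard deviations from the mean. The account names, counts and threshold are all invented for illustration; real systems use far more elaborate techniques.

```python
# Hypothetical sketch of simple statistical pattern detection: flagging
# unusually heavy data access. All names, counts and the 3-sigma threshold
# are invented for illustration.
import statistics

access_counts = {f"user{i}": 40 + i for i in range(12)}  # ordinary accounts: 40-51 accesses
access_counts["suspect"] = 410                            # one account with an unusual spike

values = list(access_counts.values())
mean = statistics.mean(values)
stdev = statistics.pstdev(values)

flagged = [
    account for account, count in access_counts.items()
    if stdev > 0 and abs(count - mean) / stdev > 3
]
print(flagged)  # ['suspect']
```

Nothing in this logic is new; what is new, as argued below, is the scale at which such rules are applied and the behavioural data they are applied to.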

What is new to the public realm is the move into behavioural analytics and learning algorithms, where the analytics happen beyond the knowledge of the algorithm makers. This is a departure from the past, and it seems to be deepening the fascination of many with the quantification of everyday life.

An example: MIT announced a new AI-based Cyber Threat Analysis Framework. The objective is to ramp up the speed and accuracy of analytics to find threats on the Dark Web – the part of the Web which is not indexed by search engines. By scanning for malware releases and ransomware tools, the technology will be used to identify new threats and observe the activities of hackers. Before assuming this is a major new development, though, it is worth paying attention to people with long experience. One commentator says that ‘the effectiveness of AI-based systems in detecting threats is yet to be fully determined … solutions such as MIT’s are no silver bullet’. There have been high hopes for similar earlier communication machines. Yet we persistently treat each generation as if it is new. Historical memory is washed away.

In addition, knowledge and skills gaps in algorithmic computation are not new either. There is a gap on both the science and the social science sides. The digital communications skills gap generally is big, and there has been debate about deskilling and up-skilling for decades. It is true that few people have the knowledge to understand what an algorithm is or what it means to do ‘data analytics’. It is true that skilled people in areas like artificial intelligence, data management, data quality control, and data visualisation are in short supply. But the debate itself isn’t new.

Inequality

In this society or world which is being born through the midwifery of our digital communication technologies, another gap is growing. Inequality is growing. Oxfam claims that 62 people have as much wealth as the poorest half of the world’s population. Countries are facing economic instability, bubbles and financial crashes. There is the spread of new viruses – Ebola or Zika. Poverty, lack of housing and poor water sanitation due to migration and asylum seeking are all too visible. And the answer for some? All these are symptoms of calculable risks. They can be managed by relying on algorithms and data analytics.

Ian Bogost suggested in the US magazine The Atlantic that we are rapidly moving towards a ‘computational theocracy’. He is right, I think. This isn’t new either, but I suggest to you that the Cathedral or Temple of Computation is the big societal issue alongside inequality. These are much bigger issues than what algorithms can, or cannot, do to us, or for us.

The challenge isn’t so much whether digitally mediated communication – based on algorithmic computation – is exploitative or liberating, inclusive or excluding. It may sometimes seem that algorithms are the drivers of the kind of society that is being born. But this is hype. Or it may sometimes seem that growing online participation is coinciding with a negation of human agency. But human agency and power still matter – and of course they are contradictory! In practice, these developments in communication are conditioned by norms and rules – by governance arrangements and by power relations.

So let us turn now to the governability of societies that depend increasingly on algorithms, surveillance and online communication.

In what sense are the computational ‘black boxes’ governable?

Governance is complex. The term is often used loosely. By governance, I mean the rules, norms, and practices which are accepted or resisted in a given society. Governance influences the kind of world that is being born; it is about the fundamentals of life, the quality of people’s lives, and whether, by any measure, societies aspire to be ‘good’ societies – societies which are inclusive, respectful, and enabling.

Sometimes governance is about legislation or policy. Some argue that governance is needed to make sure that algorithms which signpost Twitter trends or the most-read press articles, or which support policing, are transparent. Of course it is useful to understand their biases, who or what they sequester or hide, and when they succeed and when they fail some criterion or other.

However, when we think instead about algorithms in society as Networked Information Assemblages, governance is a much more subtle issue. Mike Ananny defines these assemblages as ‘institutionally situated code, practices, and norms with the power to create, sustain, and signify relationships among people and data through minimally observable, semiautonomous action’ (p. 93). Algorithms govern by structuring possibilities. When the results they produce are treated as if they are certain, this discourages our capacity to think about alternative worlds and development pathways. In this sense, these assemblages are our most recent disciplining technology – they discipline the mind.

We can think of governance as ‘the ensemble of techniques and procedures put into place to direct the conduct of men [and women] and to take account of the probabilities of their action and their relations’, if we follow Maurizio Lazzarato. This suggests that governance research must focus on why machine learning or algorithmic computation is becoming a ‘black box’ even for the designers. And we need to remember that algorithms do not ‘make’ a world. Human beings in their institutional settings make the world. Governance analysis needs to focus on how practices and knowledge are interconnected in ways that produce governed subjects who are making their worlds and choosing their pathways.

In line with this, Lucas Introna says in a 2016 paper that the big governance challenge today is that ‘calculative practices are established as legitimate (or true)’ (p. 39). They are being internalised. But, I suggest that while they may be more effective in producing self-governing subjects than some earlier technologies, they are not 100 per cent effective!

Disciplining the mind

Algorithmic drawing X. Flickr/ Brett Renfer. Some rights reserved.

The big governance challenge is not so much the ‘black box’ of the algorithm, but the core assumption that human conduct is predictable enough to allow human beings to defer to machine-driven decisions (as Tal Zarsky says). When those decisions exacerbate inequality, unfairness, and discrimination, surely we cannot be on a pathway which aligns with most people’s ideas of a good society. Resistance to the seductive algorithmic computational drama, as it has been called, is definitely called for.

Here is a quote of the kind that needs to be resisted:

‘Algorithms can produce actionable insights even though it is not yet possible to explain the reasons behind these insights. Once AI starts consistently producing recommendations that improve outcomes, people should start using these algorithms and investigating why exactly these recommendations work as well as they do’.

This is a quote from leading software developers – it is symptomatic of today’s forms of the governance of conduct. Such developers ask a why question, but they do not follow up by asking: with what consequence or impact?

I suggest to you that the ‘black box’ that really needs unpacking is not the inner workings of an algorithm – albeit this may be a nice theoretical challenge. It is a different black box that we need to focus on.

Here we can learn from a bit of history. In digitalisation’s earlier decades, the late Nathan Rosenberg, a US economist at Stanford University who studied technological innovation, said that researchers need to look in detail inside the black box of technology. But he meant we should focus on points of control – on economic or political power. Both in the early 1970s, when he was writing, and now, the instrumental, anti-normative, social science treatment of the ‘black boxes’ of power needs to be challenged at every opportunity. As Philip Napoli says, the algorithmic assemblage black box needs to be opened. But the aim must be to understand how the velocity, volume, and value of data are encouraging us to ‘embrace algorithmically driven decision making’ – to bow to the cathedral of computation and quantification.

Who decides?

And who is embracing this attachment to the quantification of our lives, and to what end? Professor Louise Amoore, of Durham University in the UK, shows us how data derivatives – the combinations of traces left by people – are being used with probabilistic techniques to yield unimagined correlations and new possible risks in the surveillance and security field. The risks are then acted upon, but who actually has the power to act, and who – which companies, states or social movement groups – can and does respond?

Empirical analysis of who has the power to act – imagined and in practice – is hugely needed. Which sets of data analytic results are privileged? Power asymmetries in the digital ecology are framed by global capitalism and we should not forget this. When we do, we fall into the ‘twin traps of economic reductionism and of the idealist autonomization of the ideological level’, as the political economist Professor Nicholas Garnham might say. Or, as Amoore in fact does say, the data derivative – ‘a specific form of abstraction that is [being] deployed in contemporary risk-based security calculations’ – is little acknowledged because the goal is to calculate the incalculable.

In practice, when the present and future are visualised as risk maps, scores or flags, someone – a human – takes a decision to act. Designers and engineers choose algorithms based on how quickly they return results or on their computational elegance, but surely this should not be the main determinant of choices about which actions to take.

This shift from numbers – quantifiable life – to action is in fact a gateway or control point through which power is being exercised. It is this control point that I suggest we should focus on – who can and does take action? Online participation negates only some people’s agency – admittedly the vast majority, but still only some.

Citizens who rely on the Cloud, self-managed bioteams, avatars or Facebook have little chance of mastery. They have few resources to take action. But for others with asymmetrical power, such as the military and big companies, choices and actions lead to judgements about the use of aerial surveillance and drones or geo-mapping, targeting ‘persons of concern’. These actions reinforce uneven mobilities and they expose marginalised populations. These are the societal problems that we need to be focusing on. Those who interpret, make choices and act on data analytics results can be questioned. They are people – they are not algorithms. Formal governance arrangements could hold them better to account, at least in societies that respect the fundamental rights of citizens.

But instead, the growing captivation by the siren call of a computational theocracy means that comparatively little research is focusing on how the people who do act on data – or on data derivatives – can be better held to account. This is a very different approach from seeking to hold the algorithmic code itself, or indeed its individual makers, to account.

Why is this siren attraction to computational theocracy so effective?

Let me answer by using the social or learning machine as an example.

A reification of calculated futures is taking hold, I suggest. This is without regard to the varied values of the makers of algorithms, but it is influenced by what they privilege as an understanding of points of control over human beings.

Social computing is a field which brings computing science together with engineering and social science. Social machines are being built. These are described as ‘web-based socio-technical systems in which the human and technological elements’ aim for ‘the mechanistic realization of system-level processes’. Another way to say this is that the goal is the ‘web-extended mind’ which can participate in the mental states of human beings. Most of these social machine makers will tell you that they give equal weight to the technological and social features and that these machines are designed to foster ‘desirable’ behaviour.

What do these algorithm-based social machine makers read in the social sciences to understand the social and governance issues? The following is based on my survey of a vast amount of the literature:

- First, from business and management studies they tend to cite works which argue that ‘desirable behaviour’ is anything that helps to exploit economic returns. Algorithms and digital platforms are seen simply by many, such as Bresnahan and Greenstein, as ‘a reconfigurable base of compatible components’. They are neutral ‘conduits’ for data transmission.
- Second, economists working with engineers are cited. Algorithms are depicted as self-organising agents. The technical system is said to ‘create itself out of itself’, according to the Santa Fe Institute’s Brian Arthur. The best algorithm is optimised to ‘select the fittest’.
- Third, legal scholars are cited. The human being is now an object to be predicted as a rational agent. Values are not neglected, but justice is ‘local’ – it is about allocating resources using rational choice procedural models. Transparency is a property of the technical system. Policy requirements, for instance privacy, are noted, but the goal is to make digital records of behaviour, such as Facebook Likes, automatically and accurately predict personal attributes.
- Fourth, psychology. Here the most cited are cognitive and neuroscientists who believe there can be a natural science of mind. Personal construct theory supports decision science and decision making is formalised. Analogies are drawn between genes in biology and human neural states. Kahneman and Tversky’s Prospect Theory helps in modelling ‘fast and frugal’ decision-making algorithms, but loses its insight into the unpredictability of human agency (a sketch of its value function appears after this list).
- Fifth, political theory. Elinor Ostrom’s work on governing the commons and Robert Putnam’s work on social capital are frequently cited, but always with reference to a rational expectations model of human behaviour. Why? Because rational expectations are potentially codable; uncertainty and emotion are not (yet).
- Sixth and last, philosophy and ethics. Floridi on the ethics of information is cited because his work is analytical and avowedly non-normative. Benkler is much cited on modular software design, but rarely Benkler and Nissenbaum on virtues. Martha Nussbaum is cited on context and data objectification, but then her work is parked – her theory is not amenable to coding. The search is on for an ‘axiomatised computational logic’ to find the way to formalise fairness, utility, and equity.
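To illustrate the point made in the fourth item, here is a minimal sketch of the prospect theory value function that such decision-making algorithms typically borrow. The parameter values are the ones Tversky and Kahneman estimated in 1992; the function name and the example outcomes are mine, for illustration only.

```python
# Minimal sketch of the prospect theory value function as it is usually coded.
# Parameters follow Tversky and Kahneman's 1992 estimates; everything else is illustrative.

def prospect_value(outcome, alpha=0.88, beta=0.88, loss_aversion=2.25):
    """Subjective value of a gain or loss relative to a reference point of 0."""
    if outcome >= 0:
        return outcome ** alpha
    return -loss_aversion * ((-outcome) ** beta)

# Losses loom larger than equivalent gains: the same +/-100 is valued asymmetrically.
print(prospect_value(100))   # ~57.5
print(prospect_value(-100))  # ~-129.5
```

Once coded in this way the function is entirely deterministic, which is precisely how the theory’s insight into the unpredictability of human agency gets lost.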

In sum, scientists and engineers turn to those strands in the social sciences which emphasise only a certain kind of rationality, one that fits into the computational temple or cathedral.

These social or algorithmic machines are the fruits of defence-sponsored research. For decades the aim has been to build a unified theory of artificial intelligence. The goal is to solve the problem of making inferences about the internal structure of a system when all that is known about that system is the input and output signals. As Dennett says, the aim is to automate human intelligence by creating ‘an all-powerful executive homunculus whose duties require almost Godlike omniscience’.

Examples of technologies moving in this direction are driverless cars, the augmented soldier and the so-called enabled consumer. The semiconductor manufacturer Qualcomm is working on neuroprocessing engines for smartphones. These developments are starting to come out of the lab.

So, in summary, for scientists, despite the commitment to interworking with social science (and some media and communication scholars), algorithms are understood to ‘reason’ about reliability and honesty. They are seen as facilitating ‘good’ behaviour. But this computational attraction is, as Howard Rheingold says, ‘changing what it means to be human’.

Of course there is resistance to the virtues of a calculable ‘good life’ in other domains of the social sciences. There are alternative perspectives! Many media and communication scholars understand that the internet is ‘radically incomplete’, as Andrew Feenberg says, and so must be the algorithms and the science. But his work on ‘hegemonic forces’ in software and hardware design, or van Dijck’s work on ‘platformed sociality’, for instance, does not really ask fundamental questions about what it means to be human. Maria Bakardjieva says that ‘many directions are open to the social construction or deconstruction of socialbots’. My colleagues at LSE, Nick Couldry and Alison Powell, see big data ‘as a variegated space of action’, one that is open to resistance and shaping along different pathways. But which different pathways?

Louise Amoore says research is needed on how algorithmic techniques can ‘rule out, [and] render invisible, other potential futures’ (p. 38). When it comes to the big social problems – policing, migration, climate change or inequality and poverty – what alternatives are being concealed by the gleam of risk-based algorithmic solutions? And who governs which solutions are acted upon in practice?

Even if algorithms operate at speeds and scales beyond the threshold of human perception, surely this doesn’t mean we should give up on governing the control points where the algorithmic results are translated into action.

Conclusion

What are the challenges of governing in this world that is being born of algorithms? What alternative worlds and policy pathways might there be?

I do think we should treat ‘the algorithm’ and a surveillance society as a complex system of persons and things – as assemblages. But I also think as scholars we need to pay much more attention to the control points of surveillance, power and communicative action. This is where choices are being made and action is being taken by relatively limited numbers of human beings who set the pathway for social, economic and political development.

Governance is needed, not so much of individual algorithm makers, but of the states and companies who finance their work. Governance using conventional approaches to privacy legislation and policy is one part, but not the whole, of the governance challenge. Of course, some countries are limiting data processing and data flows. Indonesia, Nigeria, Russia, and Vietnam have legislation. Brazil has its ‘Internet Bill of Rights’. The 2014 ruling by the European Court of Justice upholds the ‘right to be forgotten’. But companies and states are extremely innovative. They can evade legislation. For instance, they can run their analytics engines on separate databases such as airline passenger name records, alert data, financial or health data, without breaking the law. Companies can focus on open cross-border data flows to accelerate economic growth, evading national policies.

States are calling for open data flows to facilitate their security agendas. Companies lobby for self-governance, claiming their formal representations of data access rights, copyright, and privacy norms in algorithms are, by definition, consistent with good behaviour. As Hal Varian, Google’s chief economist, says, ‘big data’ gives rise to a host of new tricks for econometricians – but, as we also know, to healthy profits for Google and, we must assume, good things for consumers and citizens.

Conventional privacy protection and human rights legislation for individuals have some traction. But rights-based approaches to privacy and surveillance that rely on ‘informed consent’ are becoming unenforceable. Policy frameworks are being devised with the expectation that, whatever the market power of digital platform companies and the political power of states, their actions are aligned with citizen interests, or at least with consumer interests.

If attraction to the quantification of everything means that life itself is becoming humanly ungovernable, then care of the self and others will become meaningless too. The dominant default assumption is that humans are empowered by our immersive mediated environments. Focusing on regulatory toolkits that might govern social machines and their developers is one thing, but what we really need is better insight into how to influence and change the discourse of machine worship and the notion that quantification is synonymous with the good life. This is much more urgent than it has ever been.

We need research on the scope for relatively autonomous subjects to exploit the emancipatory potential of these technologies, of course we do. But the digitally mediated world is not benign. Nor is it completely hegemonic. Alternative societal outcomes are possible, but only if we can say and think of them, only if we can imagine them.

We need revealing research on the orchestrators of actions based on the technologies and processes of surveillance (and sousveillance as well). We need a clearer view of who funds computational research, who commercialises it, and who is using it to act on and shape our world. As Gillespie says, the idea precedes the social algorithm. Coalitions of actors – scholars, activists, some politicians, and yes, even some captains of industry – will need to collaborate if the pathway we are following to a calculated, unequal future is to change. This pathway is incompatible with human agency for the great majority of the world’s citizens and it is in need of change. The narrative needs to change to resist the overwhelming fascination with quantification.

I close with the thought that the intense ‘datafication’ of our lives is only pre-determined if we persist in believing that it is, and we fail to change this.

This keynote speech was first delivered at the International Communication Association (ICA) Conference Plenary, Fukuoka, 13 June 2016.
