Some lessons of Trump’s short career as a blogger

By John Naughton

The same unaccountable power that deprived Donald J. Trump of his online megaphones could easily be deployed to silence other prominent figures, including those of whom liberals approve.

‘From the Desk of Donald J. Trump’ lasted just 29 days. It’s tempting to gloat over this humiliating failure of a politician hitherto regarded as an omnipotent master of the online universe.

Tempting but unwise, because Trump’s failure should alert us to a couple of unpalatable realities.

The first is that the eerie silence that descended after the former President was ‘deplatformed’ by Twitter and Facebook provided conclusive evidence of the power of these two private companies to control the networked public sphere.

Those who loathed Trump celebrated his silencing because they regarded him — rightly — as a threat to democracy.

But on the other hand nearly half of the American electorate voted for him. And the same unaccountable power that deprived him of his online megaphones could easily be deployed to silence other prominent figures, including those of whom liberals approve.

The other unpalatable reality is that Trump’s failure to build an online base from scratch should alert us to the way the utopian promise of the early Internet — that it would be the death of the ‘couch potato’, the archetypal passive media consumer — has not been realised. Trump, remember, had 88.9m followers on Twitter and over 33m fans on Facebook.

“The failure of Trump’s blog is not just a confirmation of the unaccountable power of those who own and control social media, but also a reflection of the way Internet users enthusiastically embraced the ‘push’ model of the Web over the ‘pull’ model that we techno-utopians once hoped might be the network’s future.”

Yet when he started his own blog they didn’t flock to it. In fact they were nowhere to be seen. Forbes reported that the blog had “less traffic than pet adoption site Petfinder and food site Eat This Not That.” And it was reported that he had shuttered it because “low readership made him look small and irrelevant”. Which it did.

What does this tell us? The answer, says Philip Napoli in an insightful essay in Wired,

“lies in the inescapable dynamics of how today’s online media ecosystem operates and how audiences have come to engage with content online. Many of us who study media have long distinguished between “push” media and “pull” media.

“Traditional broadcast television is a classic “push” medium, in which multiple content streams are delivered to a user’s device with very little effort required on the user’s part, beyond flipping the channels. In contrast, the web was initially the quintessential “pull” medium, where a user frequently needed to actively search to locate content interesting to them.

“Search engines and knowing how to navigate them effectively were central to locating the most relevant content online. Whereas TV was a “lean-back” medium for “passive” users, the web, we were told, was a “lean-forward” medium, where users were “active.” Though these generalizations no longer hold up, the distinction is instructive for thinking about why Trump’s blog failed so spectacularly.

“In the highly fragmented web landscape, with millions of sites to choose from, generating traffic is challenging. This is why early web startups spent millions of dollars on splashy Super Bowl ads on tired, old broadcast TV, essentially leveraging the push medium to inform and encourage people to pull their online content.

“Then social media helped to transform the web from a pull medium to a push medium...”

Credit: Adem AY for Unsplash

This theme was nicely developed by Cory Doctorow in a recent essay, “Recommendation engines and ‘lean-back’ media”.  The optimism of the early Internet era, he mused, was indeed best summarized in that taxonomy.

“Lean-forward media was intensely sociable: not just because of the distributed conversation that consisted of blog-reblog-reply, but also thanks to user reviews and fannish message-board analysis and recommendations.

“I remember the thrill of being in a hotel room years after I’d left my hometown, using Napster to grab rare live recordings of a band I’d grown up seeing in clubs, and striking up a chat with the node’s proprietor that ranged fondly and widely over the shows we’d both seen.

“But that sociability was markedly different from the “social” in social media. From the earliest days of Myspace and Facebook, it was clear that this was a sea-change, though it was hard to say exactly what was changing and how.

“Around the time Rupert Murdoch bought Myspace, a close friend had a blazing argument with a TV executive who insisted that the internet was just a passing fad: that the day would come when all these online kids grew up, got beaten down by work and just wanted to lean back.

“To collapse on the sofa and consume media that someone else had programmed for them, anaesthetizing themselves with passive media that didn’t make them think too hard.

“This guy was obviously wrong – the internet didn’t disappear – but he was also right about the resurgence of passive, linear media.”

This passive media, however, wasn’t the “must-see TV” of the 80s and 90s.  Rather, it was the passivity of the recommendation algorithm, which created a per-user linear media feed, coupled with mechanisms like “endless scroll” and “autoplay,” that obliterated any trace of an active role for the aptly-named Web “consumer”.

As Napoli puts it,

“Social media helped to transform the web from a pull medium to a push medium. As platforms like Twitter and Facebook generated massive user bases, introduced scrolling news feeds, and developed increasingly sophisticated algorithmic systems for curating and recommending content in these news feeds, they became a vital means by which online attention could be aggregated.

“Users evolved, or devolved, from active searchers to passive scrollers, clicking on whatever content that their friends, family, and the platforms’ news feed algorithms put in front of them. This gave rise to the still-relevant refrain “If the news is important, it will find me.” Ironically, on what had begun as the quintessential pull medium, social media users had reached a perhaps unprecedented degree of passivity in their media consumption. The leaned-back “couch potato” morphed into the hunched-over “smartphone zombie.””

So the failure of Trump’s blog is not just a confirmation of the unaccountable power of those who own and control social media, but also a reflection of the way Internet users enthusiastically embraced the ‘push’ model of the Web over the ‘pull’ model that we techno-utopians once hoped might be the network’s future.

In Review: The Cloud and the Ground

By Julia Rone

In this literature review, Julia Rone outlines the key trends and logics behind the boom in data centre construction across the globe.

Hamlet: Do you see yonder cloud that’s almost in shape of a camel?

Polonius: By th’ mass, and ‘tis like a camel indeed.

Hamlet: Methinks it is like a weasel

Polonius: It is backed like a weasel.

Hamlet: Or like a whale?

Polonius: Very like a whale.

The cloud – this fundamental building block of digital capitalism – has been so far defined mainly by the PR of big tech companies.

The very metaphor of the “cloud” presupposes an ethereal, supposedly immaterial collection of bits gliding in the sky, safely removed from the corrupt organic and inorganic matter that surrounds us. This, of course, couldn’t be further from the truth.

But even when they acknowledge the materiality of the “cloud” and the way it is grounded in a very physical infrastructure of cables, data centres, etc., tech giants still present it in a neat and glamorous way. Data centres, for example, provide carefully curated tours and are presented as sites of harmoniously humming servers, surrounded by wild forests and sea. Some data centres even boast saunas.

Instead of blindly accepting the PR of tech companies and seeing “the cloud” as whatever they present it to be (much as Polonius accepts Hamlet’s interpretations of the cloud), we should be attuned to the multiplicity of existing perspectives on “the cloud”, coming from researchers, rural and urban communities, and environmentalists, among others.

In this lit review, I outline the key trends and logics behind the boom in data centre construction across the globe. I base the discussion on several papers from two special issues. The first one is The Nature of Data Centres, edited by Mél Hogan and Asta Vonderau for Culture Machine. The second: Location and Dislocation: Global Geographies of Digital Data, edited by Alix Johnson and Mél Hogan for Imaginations: Journal of Cross-Cultural Image Studies. I really recommend reading both issues – the contributions read like short stories and go straight to the core of the most pressing political economy problems of our times.

Credit: Zbynek Burival for Unsplash

The “nature” of data centres

Data centres as key units of the cloud are very material: noisy, hot, giant storage boxes containing thousands of servers, they occupy factories from the past or spring up on farm land all over the globe. Data centres are grounded in particular locations and depend on a number of “natural” factors for their work, including temperature, humidity, or air pollution. In order for data centres to function, they not only use up electricity (produced by burning coal or using wind energy, for example). They also employ technologies to circulate air and water to cool down and emit heat as a waste product.

But data centres are not only assemblages of technology and nature. Their very appearance, endurance and disappearance are defined by complex institutional and non-institutional social relations: regions and countries compete with each other to cut taxes for tech corporations that promise to bring jobs and development. Some states (e.g. Scandinavian states) are preferred over others because of their stable institutions and political “climate”.

No blank slate

To illustrate, the fact that data centres are built in Sweden’s Norrbotten region has a lot to do with the “nature” of the region, conceptualized reductively by tech companies as cheap energy, cheap water, cheap land and green imagery (Levenda and Mahmoudi, 2019, 2). But it also has a lot to do with the fact that Norrbotten is filled with the “ruins of infrastructural promises” (Vonderau, 2019, 3) – “a scarcely populated and resource-rich region, historically inhabited by native Sami people, the region was for a long-time regarded as no-man’s land” (ibid). Not only is Norrbotten scarcely populated but it also has an “extremely stable and redundant electricity grid which was originally designed for […] ‘old’ industries” (ibid, 7).

A similar logic of operation can be discerned in the establishment of a data centre in the Midway Technology Centre in Chicago, where the Schulze Bakery was repurposed as a data centre (Pickren, 2017). Pickren was told in an interview with a developer working on the Schulze redevelopment project that “because the surrounding area had been deindustrialized, and because a large public housing project, the Robert Taylor Homes had closed down in recent decades, the nearby power substations actually had plenty of idle capacity to meet the new data centre needs” (Pickren, 2017). As Pickren observes, “there is no blank slate upon which the world of data simply emerges” (ibid.). There are multiple “continuities between an (always temporary) industrial period and the (similarly temporary) ascendancy of digital capitalism” (ibid).

Extraction and the third wave of urbanization

What the examples of Norrbotten in Sweden and the redevelopment of Chicago by the data industry show is that, despite carefully constructed PR around “being close to nature” and “being green”, decisions on data centre construction actually depend on the availability of electricity, for which depopulation is only a plus. Instead of “untouched” regions, companies often go for abandoned or scarcely populated regions with infrastructure left behind. Data centres use resources – industrial capacity or green energy – that are already there, left from previous booms and busts of capitalism or from conscious state investment that is now used to the benefit of private companies.

“Urban interactions are increasingly mediated by tech and leave a digital trace – from paying your Uber to ordering a latte, from booking a restaurant to finding a date for the night.”

Both urban and rural communities are in fact all embedded within a common process of a “third wave of urbanization” that goes hand in hand with an increase in the commodification and extraction of both data and “natural” resources (Levenda and Mahmoudi, 2019). What this means is that urban interactions are increasingly mediated by tech and leave a digital trace – from paying your Uber to ordering a latte, from booking a restaurant to finding a date for the night.

Credit: Priscilla Du Preez for Unsplash

This urban data is then stored and analysed in predominantly rural settings: “[T]he restructuring of Seattle leads to agglomerations in urban data production, which rely on rural data storage and analysis” (ibid, 9). Put simply, “[J]ust as Facebook and Google use rural Oregon for their ‘natural’ resources, they use cities and agglomerations of ‘users’ to extract data”.

Ultimately, data centres manifest themselves as assemblages for the extraction of value from both people and nature.

As if in a perverse rendition of Captain Planet, all elements – water, air, earth, human beings and technology – unite forces so that data centres can function and you can upload a cat photo on Facebook. In this real life data-centre version of Captain Planet, however, all elements are used up, extracted, exhausted. Water is polluted.

People live with the humming noise of thousands of servers.

Taxes are not collected and therefore not invested in communities that are already deprived.

What is more, data centres often arrive in rural regions with the promise to create jobs and drive development. But as numerous authors have shown, the actual jobs created by data centres are fewer than originally promised, with most of them being precarious subcontracting (Mayer, 2019). As Pickren notes, “If the data centre is the ‘factory of the 21st century,’ whither the working class?”

Abstraction

Data centres do create jobs but predominantly in urban areas. “[W]here jobs are created, where they are destroyed and who is affected are socially and geographically uneven” (Pickren, 2017). Where value is extracted from and where value is allocated rarely coincide.

And while from a bird’s-eye view what matters is the total number of jobs created, what matters in Sweden’s Norrbotten or the Netherlands’ Groningen, where the data centres are built, is how many jobs are created there and, furthermore, what types of jobs (Mayer, 2019). In the same way, while from an abstract point of view tech companies such as Microsoft might be “carbon neutral”, this does not change their questionable practices and dependence on coal in particular places.

The Introduction to the “Location and Dislocation” Special Issue quotes a classic formulation by Yi-Fu Tuan according to whom “place is space made meaningful”(Johnson and Hogan, 2017, 4).

“Whenever we hear big tech’s grandiose pledges of carbon neutrality and reducing carbon emissions, we need to understand that these companies are not simply “green-washing” but are also approaching the problem of global warming “in the abstract””.

One of the key issues with tech companies building data centres is the way they privilege space over place – an abstract logic of calculation and global flows over the very particular local relations of belonging and accountability.

In a great piece on “fungible forms of mediation in the cloud”, Pasek explores how big tech companies’ practice of buying renewable energy certificates does more harm than good, since it allows “data centre companies to symbolically negate their local impacts in coal-powered regions on papers, while still materially driving up local grid demand and thereby incentivizing the maintenance or expansion of fossil energy generation” (ibid, 7).

The impact for local communities can be disastrous: “In communities located near power plants, disproportionately black, brown and low-income, this has direct consequences for public health, including greater rates of asthma and infant mortality” (ibid).

So whenever we hear big tech’s grandiose pledges of carbon neutrality and reducing carbon emissions, we need to understand that these companies are not simply “green-washing” but are also approaching the problem of global warming “in the abstract”, at the global level, paying little attention to their effect in any particular locality.

As Pasek notes, this logic of abstraction subordinates the “urgencies of place” to the “logics of circulation”.

Unsurprisingly, it is precisely the places that have already lost the most from previous industrial transformations that suffer most during the current digital transformations.

Invisibility and Hypervisibility

What makes possible the extraction practices of tech companies is a mix between how little we know about them and how much we believe in their promise of doing good (or well, not doing evil at least).

In her fascinating essay “The Second Coming: Google and Internet infrastructure”, Mayer (2019) explores the rumours around a new Google data centre in Groningen. She shows how Google’s reputation as a leading company, combined with the total lack of concrete information about the new data centre, creates a mystical aura around the whole enterprise: “Google’s curation of aura harkens back to the early eras of Western sacred art, during which priests gave sacred objects their magical value by keeping them ‘invisible to the spectator’” (Mayer, 2019, 4).

Mayer contrasts a sleek Google PR video (with a lone windmill and blond girls looking at computer screens) with the reality brought about by a centre that offered only a few temporary subcontracting jobs. The narrative of regional growth presented by Google unfortunately turned out to be PR rather than a coherent development strategy.

Impermanence

Furthermore, in a fascinating essay on data centres as “impermanent infrastructures”, Velkova (2019) explores the temporality and impermanence of data centres, which can be moved or abandoned easily.

How could such impermanent structures provide regional development?

Data centres are above all impermanent – they can come and go. Rather than being responsible to a particular locality, data centres are part of what Pasek called a “logic of global circulation”.

What is more, even if data centres do not move, they do reorganize global territories and connectivity speeds through the threat of moving: “data center companies are constantly reevaluating the economic profitability of particular locations in synchrony with server replacement cycles and new legislative frameworks that come into force. Should tax regulations, electricity prices, legislation or geopolitical dynamics shift, even a hyper-sized data center like Google’s in Finland or Facebook’s in Sweden could make a corresponding move to a place with more economically favourable conditions within three years” (Velkova, 2019, 5).

So data centres are, on the one hand, hypervisible through corporate PR. On the other hand, they are invisible to local communities, which are left guessing about construction permits, the conditions of data centres’ arrival, and their impact on the environment and the economy.

But ultimately, and this is the crucial part, data centres are above all impermanent – they can come and go. Rather than being responsible to a particular locality, data centres are part of what Pasek called a “logic of global circulation”.

Holding each node accountable

Big tech’s logics of extraction, abstraction, invisibility, hypervisibility and impermanence are driving the current third wave of urbanization and unequal development under digital capitalism.

But it is possible to imagine another politics that would “hold each node accountable to the communities in which they are located” (Pasek, 9).

The papers from the two special issues I review here provide an exhaustive and inspiring overview of the “nature” and imaginaries of data centres.

Yet, with few exceptions (such as the work of Asta Vonderau), we know little about the politics of resistance to data centres and the local social movements that are appearing and demanding more democratic participation in decision making.

Would it be possible for us – citizens – to define what the cloud should look like? Not sure. But this is a crucial element of any project for democratizing digital sovereignty. And this is what I work on now.

In Review: Bellingcat and the unstoppable Mr Higgins

By John Naughton

Review of We are Bellingcat: An Intelligence Agency for the People, by Eliot Higgins, Bloomsbury, 255pp

On the face of it, this book tells an implausible story. It’s about how an ordinary guy – a bored administrator in Leicester, to be precise – becomes a skilled Internet sleuth solving puzzles and crimes which appear to defeat some of the world’s intelligence agencies. And yet it’s true. Eliot Higgins was indeed a bored administrator, out of a job and looking after his young daughter in 2011 while his wife went out to work. He was an avid watcher of YouTube videos, especially of those emanating from the Syrian civil war, and one day had an epiphany: “If you searched online you could find facts that neither the press nor the experts knew.”

Higgins realised that one reason why mainstream media were ignoring the torrent of material from the war zone that was being uploaded to YouTube and other social media channels was that these outlets were unable to verify or corroborate it. So he started a blog — the Brown Moses blog — and discovered that a smattering of other people had had a similar realisation, which was the seed crystal for the emergence of an online community that converged around news events that had left clues on YouTube, Facebook, Twitter and elsewhere.

This community of sleuths now sails under the flag of Bellingcat, a name taken from the children’s story about the ingenious mice who twig that the key to obtaining early warning of a cat’s approach is to put a bell round its neck. This has led to careless journalists calling members of the community “Bellingcats” — which leads them indignantly to point out that they are the mice, not the predators!

The engaging name belies a formidable little operation which has had a series of impressive scoops. One of the earliest involved confirming Russian involvement in the downing of MH17, the Malaysia Airlines aircraft brought down by a missile when flying over Ukraine. Other notable scoops included identification of the Russian FSB agents responsible for the Skripal poisonings and finding the FSB operative who tried to assassinate Alexei Navalny, the Russian democratic campaigner and Putin opponent who is now imprisoned — and, reportedly, seriously ill — in a Russian gaol.

‘We are Bellingcat’ is a low-key account of how this remarkable outfit evolved and of the role that Mr Higgins played in its development. The deadpan style reflects the author’s desire to project himself as an ordinary Joe who stumbled on something significant and worked at it in collaboration with others. This level of understatement is admirable but not entirely persuasive for the simple reason that Higgins is no ordinary Joe. After all, one doesn’t make the transition from a bored, low-level administrator to become a Research Fellow at U.C. Berkeley’s Human Rights Center and a member of the International Criminal Court’s Technology Advisory Board without having some exceptional qualities.

“One could say that the most seminal contribution Bellingcat has made so far is to explore and disseminate the tools needed to convert user-generated content into more credible information — and maybe, sometimes, into the first draft of history.”

One of the most striking things about Bellingcat’s success is that — at least up to this stage — its investigative methodology is (to use a cliché) not rocket science. It’s a combination of determination, stamina, cooperation, Internet-savviness, geolocation (where did something happen?), chronolocation (when did it happen?) and an inexhaustible appetite for social-media-trawling. There is, in other words, a Bellingcat methodology — and any journalist can learn it, provided his or her employer is prepared to provide the time and opportunity to do so. In response, Bellingcat has been running ‘boot camps’ for journalists — first in Germany, Britain and France and — hopefully — soon in the US. And the good news is that some mainstream news outlets, including the New York Times, the Wall Street Journal and the BBC, have been setting up journalistic units working in similar ways.

In the heady days of the so-called ‘Arab spring’ there was a lot of excited hype about the way the smartphone had launched a new age of ‘Citizen Journalism’. This was a kind of category error which confused user-generated content badged as ‘witnessing’ with the scepticism, corroboration, verification, etc. that professional journalism requires. So in that sense one could say that the most seminal contribution Bellingcat has made so far is to explore and disseminate the tools needed to convert user-generated content into more credible information — and maybe, sometimes, into the first draft of history.

Mr Higgins makes continuous use of the phrase “open source” to describe information that he and his colleagues find online, when what he really means is that the information — because it is available online — is in the public domain. It is not ‘open source’ in the sense that the term is used in the computer industry, but I guess making that distinction is now a lost cause because mainstream media have re-versioned the phrase.

The great irony of the Bellingcat story is that the business model that finances the ‘free’ services (YouTube, Twitter, Facebook, Reddit, Instagram et al) that are polluting the public sphere and undermining democracy is also what provides Mr Higgins and his colleagues with the raw material from which their methodology extracts so many scoops and revelations. Mr Higgins doesn’t have much time for those of us who are hyper-critical of the tech industry. He sees it as a gift horse whose teeth should not be too carefully examined. And I suppose that, in his position, I might think the same.

Forthcoming in British Journalism Review, vol. 32, No 2, June 2021.

Worried about data overload or AI overlords? Here’s how the CDH Social Data School can help

By Anne Alexander

Ahead of the CDH Social Data School application Q&A on May 4, Dr Anne Alexander, Director of Learning at Cambridge Digital Humanities (CDH), explains how the programme provides the digital research tools necessary for the data-driven world.

The world we live in has long been shaped by the proliferation of data – companies, governments and even our fellow citizens all collect and create data about us every day of our lives.

Much of our communication is relayed digitally, the buildings we live in and the urban spaces we pass through have been turned into sensors, and we work, play and even sleep with our digital devices. Particularly over the past year, as the pandemic has dramatically reduced in-person interactions for many, the data overload has come to seem overwhelming.

The CDH Social Data School (June 16-29), which Cambridge Digital Humanities is organising in collaboration with the Minderoo Centre for Technology and Democracy, is aimed at people working with data in the media, in NGOs and civil society organisations, and in education who want to equip themselves with new skills in designing and carrying out digital research projects, but who don’t enjoy easy access to education in data collection, management and analysis.

We want to make the methods of inquiry and the technical skills we teach to students and staff at the University of Cambridge available to a much wider audience.

This year’s CDH Social Data School will include modules exploring the ethical and societal implications of new applications in Machine Learning, with a specific focus on the problems of structural injustice which permeate the computer vision techniques underpinning technologies such as facial recognition and image-based demographic profiling. 

We are keen to hear from participants whose work supports public interest journalism, human rights advocacy, trade unionism and campaigns for social justice, environmental sustainability and the decolonisation of education. 

Although criticism of the deployment of these technologies is now much more widespread than in the past, it often focuses on the problems with specific use cases rather than more general principles.

In the CDH Social Data School we will take a “bottom-up” approach by providing an accessible introduction to the technical fundamentals of machine learning systems, in order to equip participants with a better understanding of what can (and usually does) go wrong when such systems are deployed in wider society. 

We will also engage with these ideas through an experimental approach to learning, giving participants access to easy-to-use tools and methods allowing them to pose the questions which are most relevant to their own work. 

Participants are not expected to have any prior knowledge of programming to take part – familiarity with working with basic office tools such as spreadsheets will be helpful. We will be using free or open source software to reduce barriers to participation. 

We are particularly interested in applications from participants from countries, communities and groups which suffer from under-resourcing, marginalization and discrimination.

We are keen to hear from participants whose work supports public interest journalism, human rights advocacy, trade unionism and campaigns for social justice, environmental sustainability and the decolonisation of education. 

The CDH Social Data School will run online from June 16-29.

Apply now for the CDH Social Data School 2021

Please join us for a Q&A session with the teaching team:

Tuesday 4 May, 2–2.45pm BST

Registration essential: Sign up here

Read more on the background and apply for your place at the School here.

In Review: Is more state ownership the panacea that will save us from the big tech giants?

By Julia Rone

In a world of increasingly uncontrolled accumulation of power by big tech, what alternatives are there to privately owned enterprises that could ensure the tech sector better serves democratic society? Julia Rone reviews Andrew Cumbers’ book ‘Reclaiming Public Ownership: Making Space for Economic Democracy’ and starts a conversation on how to apply his writing to the tech sector.

Every discussion we’ve had so far on regulating tech giants ends up with the question of whether regulation (be it anti-trust, regulating ‘recommending’ algorithms, or treating big tech as public utilities) is enough.

As a colleague smartly noted last time, we have reduced our expectations of the state to a form of (light-touch) regulation to take place only in case markets fail. But as Mariana Mazzucato has famously shown in her spectacular book “The Value of Everything”, “the state” has in fact funded the fundamental science and tech development behind not only the Internet but also the technologies used in purportedly private companies’ successes such as the iPhone. The state has been a key driver of innovation rather than some clumsy behemoth lagging behind technology and poking its nose in people’s business.

The sad thing, of course, is that the value created with public funding has been subsequently privatized/appropriated by private companies – not only in monetary terms but also in symbolic terms. I’ve never had random strangers at parties telling me about publicly funded researchers, yet I have endured hours of men (it’s usually men) praising Elon Musk and Steve Jobs.

Now, we might think that this “forgetting” of the role of the state is innocent, a childish fascination with mythical entrepreneurial figures. But that’s not the case. The bad-mouthing of the state we see in tech industry is part of a much broader trend (neoliberalism?) of framing the state as incompetent, wasteful, bureaucratic and incapable of innovation.

This is why, when, as a reaction to the 2008 economic crisis, the British government nationalized (fully or partially) large parts of the UK’s retail banking sector, it was quick to appoint private executives, often from the very banks that had caused the crisis to begin with.

What nationalization amounted to, in this case, was the public sector absorbing bad debts to allow private capital to restructure and start accumulating profits again. Andrew Cumbers begins his brilliant book on public ownership with this example and dedicates the rest of the book to 1) explaining why, even amidst the biggest crisis of capitalism, private executives were considered more competent; and 2) exploring what alternatives there are to privately owned enterprises.

While the neoliberal bad-mouthing of the state and its reduction to light-touch regulator have been undoubtedly super influential, the question I would like to bring to the table, drawing extensively on Cumbers, is: should we uncritically rehabilitate the state? Is more state the panacea that will save us from the big bad tech giants? Or should we try to think of new forms of ownership and democratic management, in our case, of digital platforms? In the following paragraphs I will present Cumbers’ book in detail (maybe too much detail but it’s really a great book) before returning to these key questions at the end.

Historic experiences with nationalization in the UK – “neither socialization nor modernization”

What makes Cumbers’ book so brilliant is that he engages in depth with existing theories, empirical examples and critiques of public ownership but then he moves beyond this purely analytical exercise of discussing ‘who is right and who is wrong’.

Instead, he puts forward an alternative – a vision of public ownership that goes beyond the state, embraces diversity and heterodoxy, and puts at its center the core principle of economic democracy.

To begin with, Cumbers argues that nationalization and state planning have such a bad name partially because of the way they were instituted in practice. Talking about the British post-1945 experience with nationalization, Cumbers argues it was “neither socialization, nor modernization” (p. 14). More radical agendas never penetrated the upper echelons of the Labour establishment: referring to the nationalization programme as “socialization” was mainly PR, and the government “was deeply suspicious of anything remotely ‘syndicalist’ that might provide more grass-roots or shop-floor representation and influence on the councils of nationalized industries” (p. 15).

Management was top-down and the large bureaucratic structures produced “an alienating environment for the average worker”, creating a “significant democratic deficit” in industries that were owned and managed supposedly on behalf of the people. Nationalization in the UK played out as centralization, significantly weakening the power and authority of local governments vis-a-vis the national government (p. 21).

What is more, “nationalized industries, in their varying ways, provided large and continuous subsidies to the private sector, while being severely constrained in their own operations!” (p. 20). In the socialist USSR, nationalization was similarly not a synonym for economic democracy, with workers’ councils in Yugoslavia being the exception rather than the common practice. So nationalization in these and other cases analysed by Cumbers basically meant making the state the capitalist-in-chief. Now, this turned out not to be particularly efficient (even though there is a big difference between industries in this respect). There were plenty of thinkers eager to explain why this was the case.

Hayek’s critique of nationalization and central planning

The centralization of economic power and decision-making, according to thinkers such as Hayek, led to the crushing of individual freedoms and democracy. Central planning, Hayek and other critics emphasized, furthermore creates several knowledge problems – how could central planners “have all the knowledge needed about the individualized demands of millions of consumers in advanced economies?” (p.64). What is more, knowledge is dispersed in society and not easily appropriated by central managers, especially considering that economies are dynamic and evolutionary, and therefore ever changing and unpredictable (p. 65). According to Hayek, “markets and private ownership can solve such knowledge problems, because they involve dispersed decision-making and experimentation […] It is precisely the anarchy of market order, which is the key both to innovation and to the preservation of more democratic societies” (p. 64). So far so good. But we’ve all heard this before – socialism failed because it was too centralized and incapable of innovating.

The market is the solution to all evils, seriously?

What makes the book “Reclaiming Public Ownership” interesting is that Cumbers doesn’t stop here. Instead, he moves the argument forward by, first of all, explaining why Hayek’s solution is not as appealing as it seems. To begin with, he notes that some spheres of life should just not be marketized – think of romantic love, health or education. The absurdity of the marketization of education in contexts such as the US and the UK becomes painfully obvious when compared to the fully free public education in countries such as Austria. Competition and profit are not and should not be the only drivers of economic decision-making (p. 80):

“It is precisely the incursion and spread of ‘free market values’ and norms – through heightened commodification processes – into all areas of economic life that needs to be resisted and rolled back if wider social goals, such as environmental sustainability, decent and ‘choiceworthy’ lives and social justice, are to be achieved” (p. 75).

But beyond such normative discussions, the binaries markets/democracy and planning/authoritarianism just don’t hold empirically. Market economies exist both under democratic and authoritarian regimes, as do forms of central planning (p. 76) – just think of how much central planning goes on in private corporations such as Amazon.

Capitalist exploitation rests upon three pillars: “the employment relation, private property and the market” (p. 77).

Real-existing socialism or nationalization attempts in the UK achieved state ownership but they were associated with highly unequal, top-down managerial decision-making and power structures. They were also inefficient.

Markets purportedly solve efficiency and innovation problems, but they also come with horrible employment relations (think again of Amazon workers peeing in bottles, or of the workplace bullying seen in every single TV series about the US corporate world). What is more, markets can’t and should not govern every aspect of human relations. And finally, they often lead to situations of mass concentration of private property in which a few own a lot and the majority owns nothing but their ability and time to work.

So rather than replacing the state with the market, or vice-versa, what we need to do is to think of alternatives that address all three pillars of exploitation – “the employment relation, private property and the market”.

The alternatives

When thinking of alternatives, Cumbers is careful to urge us not to search for a “one-size-fits-all” solution or an all-encompassing model or vision (p. 81). One of the most interesting authors quoted in the book is the associational socialist Otto Neurath, who “used the phrase ‘pseudo-rationalism’ to refer to scientists and philosophers who believed that there is always a possibility of discovering one theory or solution to any problem through rational inquiry” (p. 79). The real world is messy, solutions are always provisional and there are a lot of diverse cultural traditions in the world that should be explored.

Going back to the three pillars (the employment relation, private property and the market), at the core of Cumbers’ alternative vision is the idea that 1) not only should we go beyond marketizing everything, but also 2) workers should be able to take part in decision-making about companies, that is, the employment relation should be democratic and participative. 3) Third, when it comes to property, there is a strong case to be made for “reclaiming public ownership”, conceived much more broadly than simply “state ownership”, i.e. nationalization.

Forms of ownership and the principles behind them:

Cumbers puts forward at least six different forms of ownership, all of which can and should exist together: full state ownership, partial state ownership, local or municipal ownership, employee-owned firms, producer cooperatives, and consumer cooperatives (p. 165). In promoting these diverse forms of ownership, Cumbers is guided by several key principles, among them:

  • taking social justice as class justice: that is, essentially going beyond redistributive justice, i.e. distributing the surplus – or profit – that comes from the labour process through income taxation (not that we are scoring particularly well in this respect currently, anyway…). What is needed instead is to challenge the way the owners of capital control the labour process or “the wider decisions that make and shape economies” (p. 146).
  • a commitment to distributed economic power, but not necessarily in decentralized forms: combining diverse forms of public ownership should allow “different groups of citizens to have some level of participation and a stake in the economy, compared to the situation at present, where a small minority globally (the 1 per cent!) hold most of the key decision-making power” (p. 150). In short, there should be different institutional arrangements that “foster distributed and dispersed powers of economic decision-making against tendencies towards hierarchy and centralization” (p 150).
  • tolerance, tradition and heterodox thinking: Traditional forms of collective ownership can in fact be crucial for articulating alternative ownership models. I am thinking here of indigenous communities fighting against corporations “patenting” uses of plants, etc. Another great example, which I encountered not in Cumbers’s book but in Xiaowei Wang’s Blockchain Chicken Farm, is Chinese Township and Village Enterprises (TVEs), a large share of which have been owned collectively and about which I will write soon. TVEs were among the key protagonists of China’s explosive growth, outperforming state-owned enterprises.

Not a utopia

The book then moves on from these more abstract principles to a situated analysis of different experiments with diverse forms of public ownership. Rather than being some utopian, never-tried-out experiment, most of these forms of ownership are already present. Municipal-cooperative partnerships, for example, have been crucial for the boom of green energy in Denmark (Chapter 9). The state-owned Norwegian oil company has had a long period of intense parliamentary debates on its key decisions (Chapter 8). (This has since changed, showing that power battles over ownership and decision-making are ongoing and never settled completely.)

Finally, following strong contestation and opposition to water privatization in Latin America, multinational corporations have retreated, with varying implications for ownership – in Bolivia, Venezuela and Uruguay operations have returned to the public sector; in Brazil and Chile a mix of private local and foreign capital remains (Chapter 5). But there have also been attempts to return water companies to municipal control – in Argentina, the Aguas Bonaerenses (ABSA) public organization was created as a public-private partnership between the local authority and a workers’ cooperative (p. 113).

So rather than reinventing the wheel (or non-privatized water), we can learn from a number of best practices and try to think how different forms of public ownership could transform and democratize different types of economic activity, depending also on the scale of these activities: finance, utilities, public transportation, public services, consumer products, private services and consumer services clearly all operate on different scales.

Private ownership might actually be the best option for a small local hairdresser; state, local cooperative or municipal ownership for the management of water; and state or municipal ownership for the management of railways or gas, etc. (p. 168).

Rather than a one-size-fits-all solution (“nationalize everything!”), thinking of alternatives should be open to combining different forms of ownership at different levels, with the ultimate goal of increasing participation – not of everyone in everything, but of everyone at least in some respects and in what matters to them.

So what?

In short, Cumbers’ book is really interesting. Despite the long quotes, I don’t think I have done it justice, so just read it (there is also some fascinating critique of the concept of the commons inside). But why on Earth am I writing about this book in a blog for our very techie group?

Well, because I think when we criticize regulation as too light-touch and want to rehabilitate the state, we should not forget that state ownership (or entrepreneurship) is not always the panacea. To be honest, I have no idea how exactly the argument in Cumbers’ book can be relevant for finding alternatives to the big tech giants.

In a previous post, I argued that maybe what we need instead of Facebook are public networks organized along national lines, with states owning the data of their citizens and using it for research and machine learning, instead of private companies doing this.

But could we instead think of citizens collectively owning their data? Or having citizen cooperatives managing interoperable networks?

Furthermore, what type of public ownership might be an adequate model for an alternative to Amazon? These are not easy questions. And I would love to discuss them with you.

The reason I have reviewed this book so extensively is that I think it might be relevant, but it remains for us to explore how exactly. One thing I am certain of is that few things are worse than the current ownership model of big tech, with a few private corporations owning and exploiting all our data.

Going back to the three pillars outlined by Cumbers, when we think of how to reform big tech/find alternatives, we need to think of how to 1) change employment relations within tech firms allowing more participation in decision-making 2) change property relations – who owns the companies that own us? what forms of ownership might be adequate? 3) change the marketization of ourselves and our data – is this reversible in a world where we rent even our homes to strangers?

Each one of these three aspects should be considered and can be changed.

We just rarely frame the debate in these terms, and even more rarely think of all three aspects together. But this is precisely what we should do.

Bridging digital divides: We are proud partners of the CDH Social Data School 2021

By Hugo Leal

In the data-driven age, we believe it is our duty to help bridge some of the digital divides that plague our societies. The Minderoo Centre for Technology and Democracy is proud to partner on the CDH Social Data School 2021.

We are pleased to announce that applications are open to the CDH Social Data School 2021, taking place entirely online from 16-29 June.

Originally conceived by Cambridge Digital Humanities, this year’s event is organised in association with us, the Minderoo Centre for Technology and Democracy. Together, we co-designed and will deliver a new version of an already outstanding initiative.

Two of our goals at the Minderoo Centre for Technology and Democracy are to enhance public understanding of digital technologies and build journalistic capacity to interrogate big data and Big Tech.

These goals align neatly with the objectives of a Data School, borne out of the need to democratise the exploration of digital methods and push back against abusive practices of data appropriation and exploitation by internet giants.

“In the data-driven age, we believe it is our duty to help bridge some of the digital divides that plague our societies.”

I was part of the team that originally put together the Data School pilot, back in 2019, and it was clear to us then that academia was, once again, falling short on its mission and failing the public it should serve.

In the data-driven age, we believe it is our duty to help bridge some of the digital divides that plague our societies.

The yawning skills gap between those who can and those who cannot understand key aspects of digital data manipulation and analysis is one of the digital divides that must be urgently closed.

For that purpose, the CDH Social Data School utilises in-house expertise in digital methods and provides hands-on training and knowledge exchange across sectors, professions and disciplines.

Who can apply?

We invite, in particular, people and organisations whose role is to form and inform the public, such as journalists, watchdogs and NGOs, academics, and civil servants, to join us in Cambridge.

The CDH Social Data School also strives to address, even if modestly, other digital divides that fall along the traditional class, gender and racial fault-lines.

Although open to all, the selection procedure prioritises individuals from organisations whose access to digital methods training is limited or non-existent due to insufficient human or financial resources, especially the ones located in the Global South.

“The Social Data School is a venue for that dialogue and an avenue to foster the development of better technical, legal and ethical practices in digital methods research.”

Furthermore, we particularly welcome applications from women and black and minority ethnic candidates as they have historically been under-represented in the technology and data science sectors.

While this will do little to redress centuries of colonial, affluent, white, male-fuelled inequalities, we at the Minderoo Centre for Technology and Democracy believe that academic institutions, widely perceived as bastions of elitism, have a special responsibility to adopt inclusive practices and adapt our events to pressing public needs.

If academics want to remain relevant and have proper impact beyond obtuse journal impact factors, we must remove the barriers standing in the way of cross-sectorial and interdisciplinary dialogue.

The CDH Social Data School is a venue for that dialogue and an avenue to foster the development of better technical, legal and ethical practices in digital methods research. Both our delivery format and our programme intend to facilitate a conversation among professions, disciplines and methods.

It is less about having experts looking into a pool of data than about inviting participants to share their knowledge and experiences within the context of a guided immersion into digital methods.

Whenever someone asks me to describe the CDH Social Data School format, the term “data stroll” comes to mind.

It bears some resemblance to what our colleague Tommaso Venturini calls a “data sprint” – an intensive, code- and data-driven gathering of people with different skill sets focused on a specific research question – but has a more critical and even contemplative nature.

For starters, the pace is slower as we intend to reflect critically upon problems arising from data rather than solving them in a week.

This peripatetic wandering gives the Data School its “strolling” colours. It caters more to “adventurous beginners” willing to get their hands dirty than to data whizzes obsessed with data cleaning.

It is less about having experts looking into a pool of data than about inviting participants to share their knowledge and experiences within the context of a guided immersion into digital methods.

For these reasons, having people from diverse backgrounds is not just a matter of desegregating or decolonising curricula but also an opportunity to confront and learn from different regional, disciplinary, cultural or gender-informed perspectives on the widespread practices of data surveillance.

In the context of the Social Data School, democratising access to digital methods is also a call to reclaim our data and demonstrate that data appropriated for private profit can be reappropriated for the common good.

This year’s programme is very rich and ambitious, covering topics ranging from data protection and surveillance to machine learning.

We will also try to make (some) sense of the online disinformation nonsense.

If you are a journalist who has the interest but lacks the tools to investigate the spread of misinformation, work for an NGO that wants to monitor online abuses, are a watchdog trying to assess the impact of Machine Learning, a civil servant working to improve the health of our online spaces, or an academic willing but hesitant to experiment with digital methods, the Social Data School was designed for you.

Apply now

In Review: Democracy, law and controlling tech platforms

By John Naughton

Notes on a discussion of two readings

  1. Paul Nemitz: “Constitutional democracy and technology in the age of artificial intelligence”, Philosophical Transactions of the Royal Society A, 15 October 2018. https://doi.org/10.1098/rsta.2018.0089
  2. Daniel Hanley: “How Antitrust Lost its Bite”, Slate, April 6, 2021 – https://tinyurl.com/2ht4h8wf

I had proposed these readings because (a) Nemitz’s article provided a vigorous argument for resisting the ‘ethics-theatre’ currently being orchestrated by the tech industry as a pre-emptive strike against regulation by law; and (b) the Hanley article argued the need for firm rules in antitrust legislation rather than the latitude currently offered to US judges by the so-called “rule of reason”.

Most of the discussion revolved around the Nemitz article. Here are my notes of the conversation, using the Chatham House Rule as a reporting principle.

  • Nemitz’s assertion that “The Internet and its failures have thrived on a culture of lawlessness and irresponsibility” was challenged as an “un-nuanced and uncritical view of how law operates in the platform economy”. The point was that platform companies do of course ignore and evade law as and when it suits them, but they also, at a corporate level, rely on it and use it as both ‘a sword and a shield’; law has as a result played a major role in structuring the internet that now exists and producing the dominant platform companies we have today, and it has been leveraged very successfully to their advantage. Even the egregious abuse of personal data (which may seem closest to being “lawless”) largely still runs within the law’s overly permissive framework. Where it doesn’t, it generally tries to evade the law by skirting around gaps created within the law, so even this seemingly extra-legal processing is itself shaped by the law (and cannot therefore be “lawless”). So any respect for the law that these companies profess is indeed disingenuous, but describing the internet as a “lawless” space – as Nemitz does – misses a huge part of the dynamic that got us here and is a real problem if we’re going to talk about the potential role of law in getting us out. Legal reform is needed, but if it’s going to work then we have to be aware of and account for these things.
  • This critique stemmed from the view that law is both produced by society and in turn reproduces society, and in that sense always functions essentially as an instrument of power — so it has historically been (and remains) a tool of dominance, of hierarchy, of exclusion and marginalisation, of capital and of colonialism. In that sense, the embryonic Silicon Valley giants slotted neatly into that paradigm. And so, could Nemitz’s insistence on the rule of law — without a critical understanding of what that actually means — itself be a problem?

“They [tech companies] employ the law when it suits them and do so very strategically – as both a ‘sword’ and a ‘shield’ – and that’s played a major role in getting the platform ecosystem to where it is now.”

  • On the one hand, laws are the basic tools that liberal democracies have available for bringing companies under democratic (i.e. accountable) control. On the other hand, large companies have always been adept (and, in liberal democracies, very successful) at using the law to further their interests and cement their power.
  • This point is particularly relevant to tech companies. They’ve used law to bring users within their terms of service and thereby to hold on to assets (e.g. exabytes of user data) that they probably wouldn’t otherwise have been able to retain. They use law to enable the pretence that click-through EULAs are, in fact, contracts. So they employ the law when it suits them and do so very strategically — as both a ‘sword’ and a ‘shield’ — and that’s played a major role in getting the platform ecosystem to where it is now.
  • Also, law plays a big role in driving and shaping technological development. Technologies don’t emerge in a vacuum; they’re a product of their context, and law is a major part of that context. So the platform business models and what’s happening on the internet aren’t outside of the law; they’re constructed through, and depend upon, it. So it’s misleading when people argue (as Nemitz arguably does) that we need to use law to change things — as if the law isn’t there already, when in fact it may actually be partially enabling things that are societally damaging. So unless we properly understand the role of law in getting us to our current problematique, talking about how law can help us is like talking about using a tool to fix a problem without realising that the tool itself is part of the problem.

“It’s the primacy of democracy, not of law that’s crucial.”

  • There was quite a lot of critical discussion of the GDPR on two fronts — its ‘neoliberal’ emphasis on individual rights; and things that are missing from it. Those omissions and gaps are not necessarily mistakes; they may be the result of political choices.
  • One question is whether there is a deficit of law around who owns property in the cloud. If you upload a photo to Facebook or whatever, it’s unclear whether you have property rights over it or whether the cloud-computing provider does. The general consensus seems to be that that’s a tricky question! (Questions about who owns your data generally are.)
  • Even if laws exist, enforcement looks like a serious problem. Sometimes legal coercion of companies is necessary but difficult. And because of the ‘placelessness’ of the internet, it seems possible that a corporation or an entity could operate in a place where there’s no nexus to coerce it. Years ago, Tim Wu and Jack Goldsmith’s book Who Controls the Internet? recounted how Yahoo discovered that it couldn’t just do whatever it wanted in France, because it had assets in that jurisdiction which France could seize. Would that be the case with, say, Facebook, now? (Just think of why all of the tech giants have their European HQs in Ireland.)
  • It’s the primacy of democracy, not of law that’s crucial. If the main argument of the Nemitz paper is interpreted as the view that law will solve our problems, that’s untenable. But if we take as the main argument that we need to democratically discuss what the laws are, then we all agree with this. (But isn’t that just vacuous motherhood and apple pie?)
  • More on GDPR… it sets up a legal framework in which regulation is built around the consenting person; that, in itself, is a good thing that most people can agree on. But the way that GDPR is constructed is extremely individualistic. For example, it disempowers data subjects even in the name of giving them rights, because it individualises them. So even the way that it’s constructed actually goes some way towards undermining its good effects. It’s based on the assumption that if we give people rights then everything will be fine. (Shades of the so-called “Right to be Forgotten”.)

As for the much-criticised GDPR, one could see it as an example of ‘trickle-down’ regulation, in that GDPR has become a kind of gold standard for other jurisdictions.

  • Why hasn’t academic law been a more critical discipline in these areas? The answer seems to be that legal academia (at least in the UK, with some honourable exceptions) is exceptionally uncritical of tech, and any kind of critical thinking is relatively marginalised within the discipline compared to other social sciences. Also, most students want to go into legal practice, so legal teaching and scholarship tends to be closely tied to law as a profession and, accordingly, the academy tends to be oriented around ‘producing’ practising lawyers.
  • There was some dissent from the tenor of the preceding discourse about the centrality of law, and especially about the improbability of overturning such a deeply embedded cognitive and professional system. This has echoes of a venerable strand in political thinking which says that in order to change anything you have to change everything, and that it’s worse to change a little bit than it is to change everything — which means nothing actually changes. This is the doctrine that it’s quite impossible to do any good at all unless you do the ultimate good, which is to change everything. (Which meant capitalism and colonialism and original sin, basically!) On the other hand, there is pragmatic work — making tweaks and adjustments — which, though limited in scope, might be beneficial and appeal to liberal reformers (and is correspondingly disdained by lofty adherents to the Big Picture).
  • There were some interesting perspectives based on the Hanley article. Conversations with people across disciplines show that technologists seem to suggest a technical solution for everything (solutionism rules OK?), while lawyers view the law as a solution for everything. But discussions with political scientists and sociologists mostly involve “fishing for ideas”, which is a feature, not a bug, because it suggests that minds are not set in silos — yet. But one of the problems with the current discourse — and with these two articles — is that the law currently seems to be filling the political void. And the discourse seems to reflect public approval of the judicial approach compared with the trade-offs implicit in Congress. But the Slate article shows the pernicious influence, or even interference, of an over-politicised judiciary in politics and policy enforcement. (The influence of Robert Bork’s 1978 book and the Chicago School is still astonishing to contemplate.)
  • The Slate piece seems to suffer from a kind of ‘neocolonial governance syndrome’ — the West and the Rest. We all know section 230 by heart. And now it’s the “rule of reason” and the consumer welfare criterion of Bork. It’s important to understand the US legal and political context. But we should also understand: the active role of the US administration; what happened recently in Australia (where the government intervened, both through diplomatic means and directly on behalf of the Facebook platform); and in Ireland (where the government went to the European Court to oppose a ruling that Apple had underpaid tax to the tune of 13 billion Euros). So the obsession with the US doesn’t say much about the rest of the world’s capacity to intervene and dictate the rules of the game. And yet China, India and Turkey have been busy in this space recently.
  • And as for the much-criticised GDPR, one could see it as an example of ‘trickle-down’ regulation, in that GDPR has become a kind of gold standard for other jurisdictions. Something like a dozen countries have adopted GDPR-like legislation, including Chile and Brazil in Latin America, as well as South Africa, Japan, Canada and so on.

Mail-In Voter Fraud: Anatomy of a Disinformation Campaign

John Naughton:

Yochai Benkler and a team from the Berkman-Klein Centre have published an interesting study which comes to conclusions that challenge conventional wisdom about the power of social media.

“Contrary to the focus of most contemporary work on disinformation”, they write,

our findings suggest that this highly effective disinformation campaign, with potentially profound effects for both participation in and the legitimacy of the 2020 election, was an elite-driven, mass-media led process. Social media played only a secondary and supportive role.

This chimes with the study on networked propaganda that Yochai, Robert Faris and Hal Roberts conducted in 2015-16 and published in 2018 in Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics. They argued that the right-wing media ecosystem in the US operates fundamentally differently from the rest of the media environment. Their view was that longstanding institutional, political, and cultural patterns in American politics interacted with technological change since the 1970s to create a propaganda feedback loop in American conservative media. This dynamic has, they thought, marginalised centre-right media and politicians, radicalised the right-wing ecosystem, and rendered it susceptible to propaganda efforts, foreign and domestic.

The key insight in both studies is that we are dealing with an ecosystem, not a machine, which is why focussing exclusively on social media as a prime explanation for the political upheavals of the last decade is unduly reductionist. In that sense, much of the public (and academic) commentary on social media’s role brings to mind the cartoon of the drunk looking for his car keys under a lamppost, not because he lost them there, but because at least there’s light. Because social media are relatively new arrivals on the scene, it’s (too) tempting to over-estimate their impact. Media-ecology provides a better analytical lens because it means being alert to factors like diversity, symbiosis, feedback loops and parasitism rather than to uni-causal explanations.

(Footnote: there’s a whole chapter on this — with case-studies — in my book From Gutenberg to Zuckerberg — published way back in 2012!)

The flight from WhatsApp

John Naughton:

Not surprisingly, Signal has been staggering under the load of refugees from WhatsApp following Facebook’s ultimatum about sharing their data with other companies in its group. According to data from Sensor Tower, Signal was downloaded 8.8m times worldwide in the week after the WhatsApp changes were first announced on January 4. Compare that with 246,000 downloads the week before and you get some idea of the step-change. I guess the tweet — “Use Signal” — from Elon Musk on January 7 also added a spike.

In contrast, WhatsApp downloads during the period showed the reverse pattern — 9.7m downloads in the week after the announcement, compared with 11.3m before, a 14 per cent decrease.
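(For the curious, here is a minimal back-of-the-envelope sketch in Python, using only the Sensor Tower figures quoted above; the helper function is purely illustrative. It confirms the roughly 36-fold jump in weekly Signal downloads and the 14 per cent fall in WhatsApp’s.)

```python
# Back-of-the-envelope check on the Sensor Tower download figures quoted above.
# The numbers are as reported in the text; the helper function is illustrative only.

def pct_change(before: float, after: float) -> float:
    """Percentage change from `before` to `after` (negative means a decrease)."""
    return (after - before) / before * 100

signal_before, signal_after = 246_000, 8_800_000          # weekly Signal downloads
whatsapp_before, whatsapp_after = 11_300_000, 9_700_000   # weekly WhatsApp downloads

print(f"Signal:   ~{signal_after / signal_before:.0f}x week-on-week increase")    # ~36x
print(f"WhatsApp: {pct_change(whatsapp_before, whatsapp_after):.0f}% change")     # ~-14%
```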

This isn’t a crisis for Facebook — yet. But it’s a more serious challenge than the June 2020 advertising boycott. Evidence that Zuckerberg & Co are taking it seriously comes from announcements that Facebook has cancelled the February 8 deadline in its ultimatum to users. It now says that it will instead “go to people gradually to review the policy at their own pace before new business options are available on May 15.” As Charles Arthur has pointed out, the contrast between the leisurely pace at which Facebook has moved on questions of hate speech posted by alt-right outfits and its lightning response to the exodus from WhatsApp is instructive. It shows what really matters to the top brass.

Signal seems an interesting outfit, incidentally, and not just because of its technology. It’s a not-for-profit organisation, for one thing. Its software is open source — which means it can be independently assessed. And it’s been created by interesting people. Brian Acton, for example, is one of the two co-founders of WhatsApp, which Facebook bought in 2014 for $19B. He pumped $50m of that into Signal, and no doubt there’s a lot more where that came from. And Moxie Marlinspike, the CEO, is not only a cryptographer but also a hacker, a shipwright, and a licensed mariner. The New Yorker had a nice profile of him a while back.

Silencing Trump and authoritarian tech power

John Naughton:

It was eerily quiet on social media last week. That’s because Trump and his cultists had been “deplatformed”. By banning him, Twitter effectively took away the megaphone he’d been masterfully deploying since he ran for president. The shock of the 6 January assault on the Capitol was seismic enough to convince even Mark Zuckerberg that the plug finally had to be pulled. And so it was, even to the point of Amazon Web Services terminating the hosting of Parler, a Twitter alternative for alt-right extremists.

The deafening silence that followed these measures was, however, offset by an explosion of commentary about their implications for freedom, democracy and the future of civilisation as we know it. Wading knee-deep through such a torrent of opinion about the first amendment, free speech, censorship, tech power and “accountability” (whatever that might mean), it was sometimes hard to keep one’s bearings. But what came to mind continually was H L Mencken’s astute insight that “for every complex problem there is an answer that is clear, simple and wrong”. The air was filled with people touting such answers.

In the midst of the discursive chaos, though, some general themes could be discerned. The first highlighted cultural differences, especially between the US, with its sacred first amendment, on the one hand, and, on the other, European and other societies, which have more ambivalent histories of moderating speech. The obvious problem with this line of discussion is that the first amendment is about government regulation of speech and has nothing whatsoever to do with tech companies, which are free to do as they like on their platforms.

A second theme viewed the root cause of the problem as the lax regulatory climate in the US over the last three decades, which led to the emergence of a few giant tech companies that effectively became the hosts for much of the public sphere. If there were many Facebooks, YouTubes and Twitters, so the counter-argument runs, then censorship would be less effective and problematic because anyone denied a platform could always go elsewhere.

Then there were arguments about power and accountability. In a democracy, those who make decisions about which speech is acceptable and which isn’t ought to be democratically accountable. “The fact that a CEO can pull the plug on Potus’s loudspeaker without any checks and balances,” fumed EU commissioner Thierry Breton, “is not only confirmation of the power of these platforms, but it also displays deep weaknesses in the way our society is organised in the digital space.” Or, to put it another way, who elected the bosses of Facebook, Google, YouTube and Twitter?

What was missing from the discourse was any consideration of whether the problem exposed by the sudden deplatforming of Trump and his associates and camp followers is actually soluble – at least in the way it has been framed until now. The paradox that the internet is a global system but law is territorial (and culture-specific) has traditionally been a way of stopping conversations about how to get the technology under democratic control. And it was running through the discussion all week like a length of barbed wire that snagged anyone trying to make progress through the morass.

All of which suggests that it’d be worth trying to reframe the problem in more productive ways. One interesting suggestion for how to do that came last week in a thoughtful Twitter thread by Blayne Haggart, a Canadian political scientist. Forget about speech for a moment, he suggests, and think about an analogous problem in another sphere – banking. “Different societies have different tolerances for financial risk,” he writes, “with different regulatory regimes to match. Just like countries are free to set their own banking rules, they should be free to set strong conditions, including ownership rules, on how platforms operate in their territory. Decisions by a company in one country should not be binding on citizens in another country.”

In those terms, HSBC may be a “global” bank, but when it’s operating in the UK it has to obey British regulations. Similarly, when operating in the US, it follows that jurisdiction’s rules. Translating that to the tech sphere suggests that the time has come to stop accepting the tech giants’ claims to be hyper-global corporations, whereas in fact they are US companies operating in many jurisdictions across the globe, paying as little local tax as possible and resisting local regulation with all the lobbying resources they can muster. Facebook, YouTube, Google and Twitter can bleat as sanctimoniously as they like about freedom of speech and the first amendment in the US, but when they operate here, as Facebook UK, say, they’re merely British subsidiaries of an American corporation headquartered in California. And these subsidiaries have to obey British laws on defamation, hate speech and other statutes that have nothing to do with the first amendment. Oh, and they should also pay taxes on their local revenues.