Some lessons of Trump’s short career as a blogger

By John Naughton

The same unaccountable power that deprived Donald J. Trump of his online megaphones could easily be deployed to silence other prominent figures, including those of whom liberals approve.

‘From the Desk of Donald J. Trump’ lasted just 29 days. It’s tempting to gloat over this humiliating failure of a politician hitherto regarded as an omnipotent master of the online universe.

Tempting but unwise, because Trump’s failure should alert us to a couple of unpalatable realities.

The first is that the eerie silence that descended after the former President was ‘deplatformed’ by Twitter and Facebook provided conclusive evidence of the power of these two private companies to control the networked public sphere.

Those who loathed Trump celebrated his silencing because they regarded him — rightly — as a threat to democracy.

But, on the other hand, nearly half of the American electorate voted for him. And the same unaccountable power that deprived him of his online megaphones could easily be deployed to silence other prominent figures, including those of whom liberals approve.

The other unpalatable reality is that Trump’s failure to build an online base from scratch should alert us to the way the utopian promise of the early Internet — that it would be the death of the ‘couch potato’, the archetypal passive media consumer — has not been realised. Trump, remember, had 88.9m followers on Twitter and over 33m fans on Facebook.

“The failure of Trump’s blog is not just a confirmation of the unaccountable power of those who own and control social media, but also a reflection of the way Internet users enthusiastically embraced the ‘push’ model of the Web over the ‘pull’ model that we techno-utopians once hoped might be the network’s future.”

Yet when he started his own blog they didn’t flock to it. In fact they were nowhere to be seen. Forbes reported that the blog had “less traffic than pet adoption site Petfinder and food site Eat This Not That.” And it was reported that he had shuttered it because “low readership made him look small and irrelevant”. Which it did.

What does this tell us? The answer, says Philip Napoli in an insightful essay in Wired,

“lies in the inescapable dynamics of how today’s online media ecosystem operates and how audiences have come to engage with content online. Many of us who study media have long distinguished between “push” media and “pull” media.

“Traditional broadcast television is a classic “push” medium, in which multiple content streams are delivered to a user’s device with very little effort required on the user’s part, beyond flipping the channels. In contrast, the web was initially the quintessential “pull” medium, where a user frequently needed to actively search to locate content interesting to them.

“Search engines and knowing how to navigate them effectively were central to locating the most relevant content online. Whereas TV was a “lean-back” medium for “passive” users, the web, we were told, was a “lean-forward” medium, where users were “active.” Though these generalizations no longer hold up, the distinction is instructive for thinking about why Trump’s blog failed so spectacularly.

“In the highly fragmented web landscape, with millions of sites to choose from, generating traffic is challenging. This is why early web startups spent millions of dollars on splashy Super Bowl ads on tired, old broadcast TV, essentially leveraging the push medium to inform and encourage people to pull their online content.

“Then social media helped to transform the web from a pull medium to a push medium...”


Credit: Adem AY for Unsplash

This theme was nicely developed by Cory Doctorow in a recent essay, “Recommendation engines and ‘lean-back’ media”.  The optimism of the early Internet era, he mused, was indeed best summarized in that taxonomy.

“Lean-forward media was intensely sociable: not just because of the distributed conversation that consisted of blog-reblog-reply, but also thanks to user reviews and fannish message-board analysis and recommendations.

“I remember the thrill of being in a hotel room years after I’d left my hometown, using Napster to grab rare live recordings of a band I’d grown up seeing in clubs, and striking up a chat with the node’s proprietor that ranged fondly and widely over the shows we’d both seen.

“But that sociability was markedly different from the “social” in social media. From the earliest days of Myspace and Facebook, it was clear that this was a sea-change, though it was hard to say exactly what was changing and how.

“Around the time Rupert Murdoch bought Myspace, a close friend had a blazing argument with a TV executive who insisted that the internet was just a passing fad: that the day would come when all these online kids grew up, got beaten down by work and just wanted to lean back.

“To collapse on the sofa and consume media that someone else had programmed for them, anaesthetizing themselves with passive media that didn’t make them think too hard.

“This guy was obviously wrong – the internet didn’t disappear – but he was also right about the resurgence of passive, linear media.”

This passive media, however, wasn’t the “must-see TV” of the 80s and 90s.  Rather, it was the passivity of the recommendation algorithm, which created a per-user linear media feed, coupled with mechanisms like “endless scroll” and “autoplay,” that obliterated any trace of an active role for the  aptly-named Web “consumer”.

As Napoli puts it,

“Social media helped to transform the web from a pull medium to a push medium. As platforms like Twitter and Facebook generated massive user bases, introduced scrolling news feeds, and developed increasingly sophisticated algorithmic systems for curating and recommending content in these news feeds, they became a vital means by which online attention could be aggregated.

“Users evolved, or devolved, from active searchers to passive scrollers, clicking on whatever content that their friends, family, and the platforms’ news feed algorithms put in front of them. This gave rise to the still-relevant refrain “If the news is important, it will find me.” Ironically, on what had begun as the quintessential pull medium, social media users had reached a perhaps unprecedented degree of passivity in their media consumption. The leaned-back “couch potato” morphed into the hunched-over “smartphone zombie.””

So the failure of Trump’s blog is not just a confirmation of the unaccountable power of those who own and control social media, but also a reflection of the way Internet users enthusiastically embraced the ‘push’ model of the Web over the ‘pull’ model that we techno-utopians once hoped might be the network’s future.

Apple clearly has power, but it isn’t accountable

By John Naughton

The only body that has, to date, been able to exert real control over the data-tracking industry is a giant private company which itself is subject to serious concerns about its monopolistic behaviour. Where is democracy in all this?

A few weeks ago, Apple dropped its long-promised bombshell on the data-tracking industry.

The latest version (14.5) of iOS — the operating system of the iPhone — included a provision that required app users explicitly to confirm that they wished to be tracked across the Internet in their online activities.

At the heart of the switch is an identifier known as the “Identifier for Advertisers”, or IDFA. It turns out that every iPhone comes with one of these identifiers, the object of which is to provide participants in the hidden real-time bidding system with data about the user’s interests.
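To make the mechanism concrete, here is a minimal sketch (mine, not Apple’s sample code) of what an app running on iOS 14.5 or later has to do before it can read the IDFA. The app must declare an NSUserTrackingUsageDescription string in its Info.plist, and unless the user taps ‘Allow’ on the resulting system prompt, the identifier it gets back is all zeros:

```swift
import AppTrackingTransparency  // iOS 14+
import AdSupport

// Minimal sketch: trigger the App Tracking Transparency prompt and,
// only if the user agrees, read the Identifier for Advertisers (IDFA).
func requestTrackingPermission() {
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            // The user tapped "Allow": the real identifier is available.
            let idfa = ASIdentifierManager.shared().advertisingIdentifier
            print("IDFA: \(idfa.uuidString)")
        case .denied, .restricted, .notDetermined:
            // The user opted out (or hasn't decided yet): the identifier
            // returned by AdSupport is all zeros, useless for tracking.
            print("Tracking not authorised")
        @unknown default:
            break
        }
    }
}
```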

For years, iPhone users had had the option to switch it off by digging into the privacy settings of their devices; but, because they’re human, very few had bothered to do that.

From 14.5 onwards, however, they couldn’t avoid making a decision, and you didn’t have to be a Nobel laureate to guess that most iPhone users would opt out.

Which explains why those who profit from the data-tracking racket had for months been terminally anxious about Apple’s perfidy.

Some of the defensive PR mounted on their behalf — for example Facebook’s weeping about the impact on small, defenceless businesses — defied parody.

“We have evidence of its [real-time bidding] illegitimacy, and a powerful law on the statute book which in principle could bring it under control — but which we appear unable to enforce.”

Other counter-offensives included attacks on Apple’s monopolistic control over its App Store, plus charges of rank hypocrisy – that the changes in version 14.5 were motivated not by Apple’s concern for users’ privacy but by its own plans to enter the advertising business. And so on.

It’ll be a while until we know for sure whether the apocalyptic fears of the data-trackers were accurate.

It takes time for most iPhone users to install operating system updates, and so these are still relatively early days. But the first figures are promising. One data-analytics company, for example, has found that in the early weeks the daily opt-out rate for American users has been around 94 per cent.

This is much higher than surveys conducted in the run-up to the change had suggested — one had estimated an opt-out rate closer to 60 per cent.

If the opt-out rate is as high as we’ve seen so far, then it’s bad news for the data-tracking racket and good news for humanity. And if you think that description of what the Financial Times estimates to be a $350B industry is unduly harsh, then a glance at a dictionary may be helpful.

Merriam-Webster, for example, defines ‘racket’ as “a fraudulent scheme, enterprise, or activity” or “a usually illegitimate enterprise made workable by bribery or intimidation”.

It’s not clear whether the computerised, high-speed auction system in which online ads are traded benefits from ‘bribery or intimidation’, but it is certainly illegal — and currently unregulated.

That is the conclusion of a remarkable recent investigation by two legal scholars, Michael Veale and Frederik Zuiderveen Borgesius, who set out to examine whether this ‘real-time bidding’ (RTB) system conforms to European data-protection law.

“The irony in this particular case is that there’s no need for such an overhaul: Europe already has the law in place.”

They asked whether RTB complies with three rules of the GDPR (General Data Protection Regulation) — the requirement for a legal basis, transparency, and security. They showed that for each of the requirements, most RTB practices do not comply. “Indeed”, they wrote, “it seems close to impossible to make RTB comply”. So, they concluded, it needs to be regulated.

It does.
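Part of what makes RTB so hard to square with those requirements is the sheer breadth of data each bid request broadcasts before an ad is even chosen. The sketch below is a deliberately simplified, hypothetical data structure (the field names are illustrative, not the actual OpenRTB schema), but it conveys the kind of personal data that goes out with every auctioned ad slot:

```swift
import Foundation

// A deliberately simplified, hypothetical sketch of the kind of personal
// data a real-time-bidding request carries. Field names are illustrative,
// not the actual OpenRTB schema.
struct BidRequest: Codable {
    struct Device: Codable {
        let advertisingID: String?      // e.g. the IDFA, when available
        let ipAddress: String           // reveals approximate location
        let userAgent: String           // device and browser details
    }
    struct User: Codable {
        let pseudonymousID: String      // exchange-assigned identifier
        let interestSegments: [String]  // inferred interests, e.g. "travel"
    }
    let auctionID: String               // one auction per ad impression
    let pageURL: String                 // the page or app showing the ad
    let device: Device
    let user: User
}
```

Data of this kind is shared with every participant in the auction, not just the winning bidder, which helps explain why the authors find the requirements of a legal basis, transparency and security so hard to meet.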

Often the problem with tech regulation is that our legal systems need to be overhauled to deal with digital technology. But the irony in this particular case is that there’s no need for such an overhaul: Europe already has the law in place.

It’s the GDPR, which is part of the legal code of every EU country and has provision for swingeing punishments of infringers. The problem is it’s not being effectively enforced.

Why not? The answer is that the EU delegates regulatory power to the relevant institutions — in this case Data Protection Authorities — of its member states. And these local outfits are overwhelmed by the scale of the task – and are lamentably under-resourced for it.

Half of Europe’s DPAs have only five technical experts or fewer. And the Irish Data Protection Authority, on whose patch most of the tech giants have their European HQs, has the heaviest enforcement workload in Europe and is clearly swamped.

So here’s where we are: an illegal online system has been running wild for years, generating billions in profits for its participants.

We have evidence of its illegitimacy, and a powerful law on the statute book which in principle could bring it under control — but which we appear unable to enforce.

And the only body that has, to date, been able to exert real control over the aforementioned racket is… a giant private company which itself is subject to serious concerns about its monopolistic behaviour. And the question for today: where is democracy in all this? You only have to ask to know the answer.


A version of this post appeared in The Observer on 23 May, 2021.

In Review: The Cloud and the Ground

By Julia Rone

In this literature review, Julia Rone outlines the key trends and logics behind the boom in data centre construction across the globe.

Hamlet: Do you see yonder cloud that’s almost in shape of a camel?

Polonius: By th’ mass, and ‘tis like a camel indeed.

Hamlet: Methinks it is like a weasel.

Polonius: It is backed like a weasel.

Hamlet: Or like a whale?

Polonius: Very like a whale.

The cloud – this fundamental building block of digital capitalism – has been so far defined mainly by the PR of big tech companies.

The very metaphor of the “cloud” presupposes an ethereal, supposedly immaterial collection of bits gliding in the sky, safely removed from the corrupt organic and inorganic matter that surrounds us. This, of course, couldn’t be further from the truth.

But even when they acknowledge the materiality of the “cloud” and the way it is grounded in a very physical infrastructure of cables, data centres, etc., tech giants still present it in a neat and glamorous way. Data centres, for example, provide carefully curated tours and are presented as sites of harmoniously humming servers, surrounded by wild forests and sea. Some data centres even boast saunas.

Instead of blindly accepting the PR of tech companies and seeing “the cloud” as whatever they present it to be (much as Polonius accepts Hamlet’s interpretations of the cloud), we should be attuned to the multiplicity of existing perspectives on “the cloud”, coming from researchers, rural and urban communities, and environmentalists, among others.

In this lit review, I outline the key trends and logics behind the boom in data centre construction across the globe. I base the discussion on several papers from two special issues. The first one is The Nature of Data Centres, edited by Mél Hogan and Asta Vonderau for Culture Machine. The second: Location and Dislocation: Global Geographies of Digital Data, edited by Alix Johnson and Mél Hogan for Imaginations: Journal of Cross-Cultural Image Studies. I really recommend reading both issues – the contributions read like short stories and go straight to the core of the most pressing political economy problems of our times.

Credit: Zbynek Burival for Unsplash

The “nature” of data centres

Data centres, as key units of the cloud, are very material: noisy, hot, giant storage boxes containing thousands of servers, they occupy factories from the past or spring up on farmland all over the globe. Data centres are grounded in particular locations and depend on a number of “natural” factors for their work, including temperature, humidity and air pollution. To function, data centres not only use up electricity (produced by burning coal or from wind energy, for example); they also employ technologies that circulate air and water for cooling, and they emit heat as a waste product.

But data centres are not only assemblages of technology and nature. Their very appearance, endurance and disappearance are defined by complex institutional and non-institutional social relations: regions and countries compete with each other to cut taxes for tech corporations that promise to bring jobs and development. Some states (e.g. the Scandinavian states) are preferred over others because of their stable institutions and political “climate”.

No blank slate

To illustrate: the fact that data centres are built in Sweden’s Norrbotten region has a lot to do with the “nature” of the region, conceptualized reductively by tech companies as cheap energy, cheap water, cheap land and green imagery (Levenda and Mahmoudi, 2019, 2). But it also has a lot to do with the fact that Norrbotten is filled with the “ruins of infrastructural promises” (Vonderau, 2019, 3) – “a scarcely populated and resource-rich region, historically inhabited by native Sami people, the region was for a long-time regarded as no-man’s land” (ibid). Not only is Norrbotten sparsely populated, but it also has an “extremely stable and redundant electricity grid which was originally designed for […] ‘old’ industries” (ibid, 7).

A similar logic of operation can be discerned in the establishment of a data centre at the Midway Technology Centre in Chicago, where the Schulze Bakery was repurposed as a data centre (Pickren, 2017). Pickren was told in an interview with a developer working on the Schulze redevelopment project that “because the surrounding area had been deindustrialized, and because a large public housing project, the Robert Taylor Homes had closed down in recent decades, the nearby power substations actually had plenty of idle capacity to meet the new data centre needs” (Pickren, 2017). As Pickren observes, “there is no blank slate upon which the world of data simply emerges” (ibid.). There are multiple “continuities between an (always temporary) industrial period and the (similarly temporary) ascendancy of digital capitalism” (ibid).

Extraction and the third wave of urbanization

What the examples of Norrbotten in Sweden and the redevelopment of Chicago by the data industry show is that, despite carefully constructed PR around “being close to nature” and “being green”, decisions on data centre construction actually depend on the availability of electricity, for which depopulation is only a plus. Instead of “untouched” regions, what companies often go for are abandoned or sparsely populated regions with infrastructure left behind. Data centres use resources – industrial capacity or green energy – that are already there, left over from previous booms and busts of capitalism or from conscious state investment that is now turned to the benefit of private companies.

“Urban interactions are increasingly mediated by tech and leave a digital trace – from paying for your Uber to ordering a latte, from booking a restaurant to finding a date for the night.”

Both urban and rural communities are in fact embedded within a common process of a “third wave of urbanization” that goes hand in hand with an increase in the commodification and extraction of both data and “natural” resources (Levenda and Mahmoudi, 2019). What this means is that urban interactions are increasingly mediated by tech and leave a digital trace – from paying for your Uber to ordering a latte, from booking a restaurant to finding a date for the night.

Credit: Priscilla Du Preez for Unsplash

This urban data is then stored and analysed in predominantly rural settings: “[T]he restructuring of Seattle leads to agglomerations in urban data production, which rely on rural data storage and analysis” (ibid, 9). Put simply, “[J]ust as Facebook and Google use rural Oregon for their ‘natural’ resources, they use cities and agglomerations of ‘users’ to extract data”.

Ultimately, data centres manifest themselves as assemblages for the extraction of value from both people and nature.

As if in a perverse rendition of Captain Planet, all the elements – water, air, earth, human beings and technology – join forces so that data centres can function and you can upload a cat photo to Facebook. In this real-life, data-centre version of Captain Planet, however, all the elements are used up, extracted, exhausted. Water is polluted.

People live with the humming noise of thousands of servers.

Taxes are not collected and therefore not invested in communities that are already deprived.

What is more, data centres often arrive in rural regions with the promise of creating jobs and driving development. But as numerous authors have shown, the jobs actually created by data centres are fewer than originally promised, with most of them being precarious subcontracting (Mayer, 2019). As Pickren notes, “If the data centre is the ‘factory of the 21st century,’ whither the working class?”

Abstraction

Data centres do create jobs, but predominantly in urban areas. “[W]here jobs are created, where they are destroyed and who is affected are socially and geographically uneven” (Pickren, 2017). Where value is extracted from and where value is allocated rarely coincide.

And if, from a bird’s-eye view, what matters is the total number of jobs created, what matters in Sweden’s Norrbotten or the Netherlands’ Groningen, where the data centres are actually built, is how many jobs are created there and, furthermore, of what type (Mayer, 2019). In the same way, while from an abstract point of view tech companies such as Microsoft might be “carbon neutral”, this does not change their questionable practices and dependence on coal in particular places.

The Introduction to the “Location and Dislocation” Special Issue quotes a classic formulation by Yi-Fu Tuan, according to whom “place is space made meaningful” (Johnson and Hogan, 2017, 4).

“Whenever we hear big tech’s grandiose pledges of carbon neutrality and reducing carbon emissions, we need to understand that these companies are not simply “green-washing” but are also approaching the problem of global warming “in the abstract””.

One of the key issues with tech companies building data centres is the way they privilege space over place – an abstract logic of calculation and global flows over the very particular local relations of belonging and accountability.

In a great piece on “fungible forms of mediation in the cloud”, Pasek explores how the practice of big tech companies of buying renewable energy certificates does more harm than good, since it allows “data centre companies to symbolically negate their local impacts in coal-powered regions on paper, while still materially driving up local grid demand and thereby incentivizing the maintenance or expansion of fossil energy generation” (ibid, 7).

The impact for local communities can be disastrous: “In communities located near power plants, disproportionately black, brown and low-income, this has direct consequences for public health, including greater rates of asthma and infant mortality” (ibid).

So whenever we hear big tech’s grandiose pledges of carbon neutrality and reducing carbon emissions, we need to understand that these companies are not simply “green-washing” but are also approaching the problem of global warming “in the abstract”, at the global level, paying little attention to their effect in any particular locality.

As Pasek notes, this logic of abstraction subordinates the “urgencies of place” to the “logics of circulation”.

Unsurprisingly, it is precisely the places that have already lost the most from previous industrial transformations that suffer most during the current digital one.

Invisibility and Hypervisibility

What makes the extraction practices of tech companies possible is a mix of how little we know about them and how much we believe in their promise of doing good (or, well, at least not doing evil).

In her fascinating essay “The Second Coming: Google and Internet infrastructure”, Mayer (2019) explores the rumours around a new Google data centre in Groningen. She shows how Google’s reputation as a leading company, combined with the total lack of concrete information about the new data centre, creates a mystical aura around the whole enterprise: “Google’s curation of aura harkens back to the early eras of Western sacred art, during which priests gave sacred objects their magical value by keeping them ‘invisible to the spectator’” (Mayer, 2019, 4).

Mayer contrasts a sleek Google PR video (with a lone windmill and blond girls looking at computer screens) with the reality brought about by a centre that offered only a few temporary subcontracting jobs. The narrative of regional growth presented by Google unfortunately turned out to be PR rather than a coherent development strategy.

Impermanence

Furthermore, in a fascinating essay on data centres as “impermanent infrastructures”, Velkova (2019) explores the temporality and impermanence of data centres, which can be moved or abandoned easily.

How could such impermanent structures provide regional development?

What is more, even if data centres do not move, they do reorganize global territories and connectivity speeds through the threat of moving: “data center companies are constantly reevaluating the economic profitability of  particular locations in synchrony with server replacement cycles and new legislative frameworks that come into force.

“Data centres are above all impermanent – they can come and go. Rather than being responsible to a particular locality, data centres are part of what Pasek called a ‘logic of global circulation’.”

Should tax regulations, electricity prices, legislation or geopolitical dynamics shift, even a hyper-sized data center like Google’s in Finland or Facebook’s in Sweden could make a corresponding move to a place with more economically favourable conditions within three years” (Velkova, 2019, 5).

So data centres are, on the one hand, hypervisible through corporate PR. On the other hand, they are invisible to the local communities that are left guessing about construction permits, the conditions of data centres’ arrival, and their impact on the environment and the economy.

But ultimately, and this is the crucial part, data centres are above all impermanent – they can come and go. Rather than being responsible to a particular locality, data centres are part of what Pasek called a “logic of global circulation”.

Holding each node accountable

Big tech’s logics of extraction, abstraction, invisibility, hypervisibility and impermanence are driving the current third wave of urbanization and unequal development under digital capitalism.

But it is possible to imagine another politics that would “hold each node accountable to the communities in which they are located” (Pasek, 9).

The papers from the two special issues I review here provide an exhaustive and inspiring overview of the “nature” and imaginaries of data centres.

Yet, with few exceptions (such as the work of Asta Vonderau), we know little about the politics of resistance to data centres and the local social movements that are appearing and demanding more democratic participation in decision making.

Would it be possible for us – citizens – to define what the cloud should look like? Not sure. But this is a crucial element of any project for democratizing digital sovereignty. And this is what I am working on now.

In Review: Bellingcat and the unstoppable Mr Higgins

By John Naughton

Review of We are Bellingcat: An Intelligence Agency for the People, by Eliot Higgins, Bloomsbury, 255pp

On the face of it, this book tells an implausible story. It’s about how an ordinary guy – a bored administrator in Leicester, to be precise – becomes a skilled Internet sleuth solving puzzles and crimes which appear to defeat some of the world’s intelligence agencies. And yet it’s true. Eliot Higgins was indeed a bored administrator, out of a job and looking after his young daughter in 2011 while his wife went out to work. He was an avid watcher of YouTube videos, especially of those emanating from the Syrian civil war, and one day had an epiphany: “If you searched online you could find facts that neither the press nor the experts knew.”

Higgins realised that one reason why mainstream media were ignoring the torrent of material from the war zone that was being uploaded to YouTube and other social media channels was that these outlets were unable to verify or corroborate it. So he started a blog — the Brown Moses blog — and discovered that a smattering of other people had had a similar realisation, which was the seed crystal for the emergence of an online community that converged around news events that had left clues on YouTube, Facebook, Twitter and elsewhere.

This community of sleuths now sails under the flag of Bellingcat, a name taken from the children’s story about the ingenious mice who twig that the key to obtaining early warning of a cat’s approach is to put a bell round its neck. This has led to careless journalists calling members of the community “Bellingcats” — which leads them indignantly to point out that they are the mice, not the predators!

The engaging name belies a formidable little operation which has had a series of impressive scoops. One of the earliest involved confirming Russian involvement in the downing of MH17, the Malaysia Airlines aircraft brought down by a missile while flying over Ukraine. Other impressive scoops included identification of the Russian GRU agents responsible for the Skripal poisonings and finding the FSB operative who tried to assassinate Alexei Navalny, the Russian democratic campaigner and Putin opponent who is now imprisoned — and, reportedly, seriously ill — in a Russian gaol.

‘We are Bellingcat’ is a low-key account of how this remarkable outfit evolved and of the role that Mr Higgins played in its development. The deadpan style reflects the author’s desire to project himself as an ordinary Joe who stumbled on something significant and worked at it in collaboration with others. This level of understatement is admirable but not entirely persuasive for the simple reason that Higgins is no ordinary Joe. After all, one doesn’t make the transition from a bored, low-level administrator to become a Research Fellow at U.C. Berkeley’s Human Rights Center and a member of the International Criminal Court’s Technology Advisory Board without having some exceptional qualities.

“One could say that the most seminal contribution Bellingcat has made so far is to explore and disseminate the tools needed to convert user-generated content into more credible information — and maybe, sometimes, into the first draft of history.”

One of the most striking things about Bellingcat’s success is that — at least up to this stage — its investigative methodology is (to use a cliché) not rocket science. It’s a combination of determination, stamina, cooperation, Internet-savviness, geolocation (where did something happen?), chronolocation (when did it happen?) and an inexhaustible appetite for social-media trawling. There is, in other words, a Bellingcat methodology — and any journalist can learn it, so long as his or her employer is prepared to provide the time and opportunity to do so. In response, Bellingcat has been running ‘boot camps’ for journalists — first in Germany, Britain and France, and — hopefully — in the US. And the good news is that some mainstream news outlets, including the New York Times, the Wall Street Journal and the BBC, have been setting up journalistic units working in similar ways.

In the heady days of the so-called ‘Arab spring’ there was a lot of excited hype about the way the smartphone had launched a new age of ‘Citizen Journalism’. This was a kind of category error which confused user-generated content badged as ‘witnessing’ with the scepticism, corroboration, verification, etc. that professional journalism requires. So in that sense one could say that the most seminal contribution Bellingcat has made so far is to explore and disseminate the tools needed to convert user-generated content into more credible information — and maybe, sometimes, into the first draft of history.

Mr Higgins makes continuous use of the phrase “open source” to describe information that he and his colleagues find online, when what he really means is that the information — because it is available online — is in the public domain. It is not ‘open source’ in the sense that the term is used in the computer industry, but I guess making that distinction is now a lost cause because mainstream media have re-versioned the phrase.

The great irony of the Bellingcat story is that the business model that finances the ‘free’ services (YouTube, Twitter, Facebook, Reddit, Instagram et al) that are polluting the public sphere and undermining democracy is also what provides Mr Higgins and his colleagues with the raw material from which their methodology extracts so many scoops and revelations. Mr Higgins doesn’t have much time for those of us who are hyper-critical of the tech industry. He sees it as a gift horse whose teeth should not be too carefully examined. And I suppose that, in his position, I might think the same.

Forthcoming in British Journalism Review, vol. 32, No 2, June 2021.

In Review: How do we avoid the ‘racket’ of sustainable development and green tech?

By Mallika Balakrishnan

Ahead of COP26, can the narrative be shifted away from what Camila Nobrega and Joana Varon describe as a “dangerous mix of ‘green economy’ and techno-solutionism”? Mallika Balakrishnan explores the Minderoo Centre for Technology and Democracy’s reading & discussion of Nobrega, Camila & Joana Varon. “Big tech goes green(washing): feminist lenses to unveil new tools in the master’s houses.” GISWatch: Technology, the environment, and a sustainable world. 2021.

On June 17, the Minderoo Centre will be hosting thinkers from academia, civil society, and industry for our workshop on Technology & the Environment.

In the lead-up to COP26, we’re keen to spark discussion and amplify action at the nexus of technology and its impact on the environment.

One of the themes we’re hoping to explore more is the environmental cost of technological convenience. 

Frankly, critiques of convenience are often the place where my brain starts to tune out: “convenience” frequently serves as shorthand for a framing that blames climate destruction on individual consumption choices.

Several, though not all, of these analyses are ableist and anti-poor, and they refuse to start from a commitment to decoloniality. 

Nevertheless, the environmental and social costs of convenience are staggering, and will be crucial to understand on the road to environmental justice.

I proposed reading Joana Varon and Camila Nobrega’s recently published article because their feminist, power-based analysis of technology and the environment resonated strongly with me, specifically around the role of big tech companies and intergovernmental meetings such as COP.

Their work articulates the dissonance between big tech’s stated commitments to climate justice and its actual consolidation of power, in a way that helped me start to think about convenience at a level of analysis that doesn’t feel disingenuous.

“Especially in high-level fora such as COP26, it might be difficult to shift the narrative away from what the authors call a “dangerous mix of ‘green economy’ and techno-solutionism.””

Some themes and remarks that surfaced in our discussion: 

When it comes to the environment, Big Tech companies are eager to centre themselves in policy-setting debates.

This article highlighted how tech companies have already positioned themselves as having useful tools to help solve the climate crisis, sweeping under the rug the ways they are exacerbating environmental destruction. As brought up in our discussion, this feels reminiscent of tobacco companies’ roles in shaping narratives around the risk of lung cancer. Especially in high-level fora such as COP26, it might be difficult to shift the narrative away from what the authors call a “dangerous mix of ‘green economy’ and techno-solutionism.” 

Solidarity with local resistance reminds us to avoid consumer/market-centric framing.

So how might MCTD work to address the gap between policy discussions and tangible justice for impacted communities? We discussed the importance of amplifying—and not tokenizing—voices in movement, recognizing many who have been doing this work for years.

There’s a connection to be made to the twin logics of extraction and abstraction (as highlighted in Kate Crawford’s Atlas of AI). The relationship between technology and the environment is easily abstracted to technocratic language or boiled down to carbon footprint. This abstraction eschews an explicitly anti-accumulation, structural analysis, and in turn makes it easier for tech companies to position themselves as “green” solutioneers.

We should be in solidarity with real-time resistance and reject framing issues in ways that suggest:

1) the only relevant harms are consumer harms

2) the only relevant solutions are market solutions

3) everything is consumable and replaceable.

As far as tactics for socio-environmental justice go, planting a tree for every square mile of land destroyed leaves a lot to be desired. And as Varon and Nobrega remind us in this article, we should be thinking about the human, social, and environmental costs of environmental destruction as linked.

We also talked about the relationship between environmental destruction and the destruction of the commons: while there were some reservations around the concept of the commons, folks discussed the emancipatory potential of bienes comunes in challenging companies’ privatization and ownership of (often unceded) land. 

We need to look beyond “effectiveness” and remember structures of power.

How do we avoid the “racket” of sustainable development and green tech?

At one level, we need to push back on the claim that Big Tech can effectively parachute in and solve problems of environmental injustice. But whether or not a tech company’s proposed solutions do what they promise, we should remember that the consolidation of power in these companies is the broader context in which this is taking place.

Drawing on insights from online advertising ecosystems, we discussed how a lack of transparency can make it difficult to hold power to account, especially in terms of regulation. Nevertheless, we emphasized that whether or not a company’s tech solution works is incidental to the power the company has: what matters is how Big Tech companies have consolidated and restructured capacity and centered themselves infrastructurally.

Convenience is costly. We need to be asking why, and for whom.

When we think about convenience, it’s worth remembering to question what is convenient for companies, for workers, and for frontline communities—we should think beyond convenience as ascribed only to the individual consumer. Analyses that treat people as totally separate individuals forego possibilities for power through collective action. 

Have a different perspective to add? There’s still time to submit your provocation to our Technology & the Environment Workshop before the May 15 deadline!

Read our call for provocations (no set format; we just want bold questions) here

In Review: Is more state ownership the panacea that will save us from the big tech giants?

By Julia Rone

Living in a world with an increasingly uncontrolled accumulation of power by big tech, what alternatives are there to privately owned enterprises that could ensure the tech sector better serves democratic society? Julia Rone reviews Andrew Cumbers’ book ‘Reclaiming Public Ownership: Making Space for Economic Democracy’ and starts a conversation on how to apply his writing to the tech sector.

Every discussion we’ve had so far on regulating tech giants ends up with the question of whether regulation (be it anti-trust/regulating ‘recommending’ algorithms/treating big tech as public utilities) is enough.

As a colleague smartly noted last time, we have reduced our expectations of the state to a form of (light-touch) regulation that takes place only when markets fail. But as Mariana Mazzucato has famously shown in her spectacular book “The Value of Everything”, “the state” has in fact funded the fundamental science and tech development behind not only the Internet but also the technologies used in purportedly private companies’ successes such as the iPhone. The state has been a key driver of innovation rather than some clumsy behemoth lagging behind technology and poking its nose into people’s business.

The sad thing, of course, is that the value created with public funding has been subsequently privatized/appropriated by private companies – not only in monetary terms but also in symbolic terms. I’ve never had random strangers at parties telling me about publicly funded researchers, yet I have endured hours of men (it’s usually men) praising Elon Musk and Steve Jobs.

Now, we might think that this “forgetting” of the role of the state is innocent, a childish fascination with mythical entrepreneurial figures. But that’s not the case. The bad-mouthing of the state we see in the tech industry is part of a much broader trend (neoliberalism?) of framing the state as incompetent, wasteful, bureaucratic and incapable of innovation.

This is why, when the British government nationalized (fully or partially) large parts of the UK’s retail banking sector in reaction to the 2008 economic crisis, it was quick to appoint private-sector executives, often from the very banks that had caused the crisis to begin with.

What nationalization amounted to, in this case, was the public sector absorbing bad debts to allow private capital to restructure and start accumulating profits again. Andrew Cumbers begins his brilliant book on public ownership with this example and dedicates the rest of the book to 1) explaining why, even amidst the biggest crisis of capitalism, private executives were considered more competent; and 2) exploring what alternatives there are to privately owned enterprises.

While the neoliberal bad-mouthing of the state and its reduction to light-touch regulator have been undoubtedly super influential, the question I would like to bring to the table, drawing extensively on Cumbers, is: should we uncritically rehabilitate the state? Is more state the panacea that will save us from the big bad tech giants? Or should we try to think of new forms of ownership and democratic management, in our case, of digital platforms? In the following paragraphs I will present Cumbers’ book in detail (maybe too much detail but it’s really a great book) before returning to these key questions at the end.

Historic experiences with nationalization in the UK – “neither socialization nor modernization”

What makes Cumbers’ book so brilliant is that he engages in depth with existing theories, empirical examples and critiques of public ownership but then he moves beyond this purely analytical exercise of discussing ‘who is right and who is wrong’.

Instead, he puts forward an alternative – a vision of public ownership that goes beyond the state, embraces diversity and heterodoxy, and puts at its center the core principle of economic democracy.

To begin with, Cumbers argues that nationalization and state planning have such a bad name partially because of the way they were instituted in practice. Talking about the British post-1945 experience with nationalization, Cumbers argues it was “neither socialization, nor modernization” (p. 14). More radical agendas never penetrated the upper echelons of the Labour establishment: referring to the nationalization programme as “socialization” was mainly PR, and the government “was deeply suspicious of anything remotely ‘syndicalist’ that might provide more grass-roots or shop-floor representation and influence on the councils of nationalized industries” (p. 15).

Management was top-down, and the large bureaucratic structures produced “an alienating environment for the average worker”, creating a “significant democratic deficit” in industries that were owned and managed supposedly on behalf of the people. Nationalization in the UK played out as centralization, significantly weakening the power and authority of local governments vis-a-vis the national government (p. 21).

What is more, “nationalized industries, in their varying ways, provided large and continuous subsidies to the private sector, while being severely constrained in their own operations!” (p. 20). Under real-existing socialism, nationalization was similarly not a synonym for economic democracy, with the workers’ councils of Yugoslavia being the exception rather than the common practice. So nationalization in these and other cases analysed by Cumbers basically meant making the state the capitalist-in-chief. Now, this turned out not to be particularly efficient (even though there is a big difference between industries in this respect). There were plenty of thinkers eager to explain why this was the case.

Hayek’s critique of nationalization and central planning

The centralization of economic power and decision-making, according to thinkers such as Hayek, led to the crushing of individual freedoms and democracy. Furthermore, Hayek and other critics emphasized, central planning creates several knowledge problems – how could central planners “have all the knowledge needed about the individualized demands of millions of consumers in advanced economies?” (p. 64). What is more, knowledge is dispersed in society and not easily appropriated by central managers, especially considering that economies are dynamic and evolutionary, and therefore ever changing and unpredictable (p. 65). According to Hayek, “markets and private ownership can solve such knowledge problems, because they involve dispersed decision-making and experimentation […] It is precisely the anarchy of market order, which is the key both to innovation and to the preservation of more democratic societies” (p. 64). So far so good. But we’ve all heard this before – socialism failed because it was too centralized and incapable of innovating.

The market is the solution to all evils, seriously?

What makes “Reclaiming Public Ownership” interesting is that Cumbers doesn’t stop here. Instead, he moves the argument forward by, first of all, explaining why Hayek’s solution is not as appealing as it seems. To begin with, he notes that some spheres of life should simply not be marketized – think of romantic love, health or education. The absurdity of the marketization of education in contexts such as the US and the UK becomes painfully obvious when compared to the fully free public education of countries such as Austria. Competition and profit are not, and should not be, the only drivers of economic decision-making (p. 80):

“It is precisely the incursion and spread of ‘free market values’ and norms – through heightened commodification processes – into all areas of economic life that needs to be resisted and rolled back if wider social goals, such as environmental sustainability, decent and ‘choiceworthy’ lives and social justice, are to be achieved” (p. 75).

But beyond such normative discussions, the binaries markets/democracy and planning/authoritarianism just don’t hold empirically. Market economies exist under both democratic and authoritarian regimes, as do forms of central planning (p. 76) – just think of how much central planning goes on inside private corporations such as Amazon.

Capitalist exploitation rests upon three pillars: “the employment relation, private property and the market” (p. 77).

Real-existing socialism or nationalization attempts in the UK achieved state ownership but they were associated with highly unequal, top-down managerial decision-making and power structures. They were also inefficient.

Markets purportedly solve efficiency and innovation problems, but they also come with horrible employment relations (think again of Amazon workers peeing in bottles, or of the workplace bullying depicted in every single TV series about the US corporate world). What is more, markets can’t and should not govern every aspect of human relations. And finally, they often lead to situations of mass concentration of private property, in which a few own a lot and the majority owns nothing but their ability and time to work.

So rather than replacing the state with the market, or vice-versa, what we need to do is to think of alternatives that address all three pillars of exploitation – “the employment relation, private property and the market”.

The alternatives

When thinking of alternatives, Cumbers is careful to urge us not to search for a “one-size-fits-all” solution or an all-encompassing model or vision (p. 81). One of the most interesting authors quoted in the book is the associational socialist Otto Neurath, who “used the phrase ‘pseudo-rationalism’ to refer to scientists and philosophers who believed that there is always a possibility of discovering one theory or solution to any problem through rational inquiry” (p. 79). The real world is messy, solutions are always provisional, and there are a lot of diverse cultural traditions in the world that should be explored.

Going back to the three pillars (the employment relation, private property and the market), at the core of Cumbers’ alternative vision is the idea that 1) we should go beyond marketizing everything; 2) workers should be able to take part in decision-making about companies, that is, employment relations should be democratic and participative; and 3) when it comes to property, there is a strong case to be made for “reclaiming public ownership”, conceived much more broadly than simply “state ownership”, i.e. nationalization.

Forms of ownership and the principles behind them:

Cumbers puts forward at least six different forms of ownership, all of which can and should exist together: full state ownership, partial state ownership, local or municipal ownership, employee-owned firms, producer cooperatives, and consumer cooperatives (p. 165). In promoting these diverse forms of ownership, Cumbers is guided by several key principles, among them:

  • taking social justice as class justice: that is, essentially going beyond redistributive justice, i.e. beyond distributing the surplus – or profit – that comes from the labour process through income taxation (not that we are scoring particularly well in this respect currently, anyway…). What is needed instead is to challenge the way the owners of capital control the labour process or “the wider decisions that make and shape economies” (p. 146).
  • a commitment to distributed economic power, but not necessarily in decentralized forms: combining diverse forms of public ownership should allow “different groups of citizens to have some level of participation and a stake in the economy, compared to the situation at present, where a small minority globally (the 1 per cent!) hold most of the key decision-making power” (p. 150). In short, there should be different institutional arrangements that “foster distributed and dispersed powers of economic decision-making against tendencies towards hierarchy and centralization” (p. 150).
  • tolerance, tradition and heterodox thinking: traditional forms of collective ownership can in fact be crucial for articulating alternative ownership models. I am thinking here of indigenous communities fighting against corporations “patenting” uses of plants, etc. Another great example – one I encountered not in Cumbers’s book but in Xiaowei Wang’s Blockchain Chicken Farm – is Chinese Township and Village Enterprises (TVEs), a large share of which have been owned collectively and about which I will write soon. TVEs were among the key protagonists of China’s explosive growth, outperforming state-owned enterprises.

Not a utopia

The book then moves on from these more abstract principles to a situated analysis of different experiments with diverse forms of public ownership. Rather than being some utopian, never-tried-out experiment, most of these forms of ownership are already present. Municipal-cooperative partnerships, for example, have been crucial for the boom of green energy in Denmark (Chapter 9). The state-owned Norwegian oil company has had a long period of intense parliamentary debate over its key decisions (Chapter 8). (This has since changed, showing that power battles over ownership and decision-making are ongoing and never settled completely.)

Finally, following strong contestation of and opposition to water privatization in Latin America, multinational corporations have retreated, with varying implications for ownership – in Bolivia, Venezuela and Uruguay operations have returned to the public sector; in Brazil and Chile a mix of private local and foreign capital remains (Chapter 5). But there have also been attempts to return water companies to municipal control – in Argentina, the Aguas Bonaerense (ABSA) public organization was created as a public-private partnership between the local authority and a workers’ cooperative (p. 113).

So rather than reinventing the wheel (or, indeed, non-privatized water), we can learn from a number of best practices and try to think about how different forms of public ownership can transform and democratize different types of economic activity, depending also on the scale of these activities: finance, utilities, public transportation, public services, consumer products, private services and consumer services clearly all operate at different scales.

Private ownership might actually be the best option for a small local hairdresser; state, local cooperative or municipal ownership for the management of water; and state or municipal ownership for the management of railways or gas, etc. (p. 168).

Rather than a one-size-fits-all solution (“nationalize everything!”), thinking of alternatives should be open to combining different forms of ownership at different levels, with the ultimate goal of increasing participation – not of everyone in everything, but of everyone at least in some respects and in what matters to them.

So what?

In short, Cumbers’ book is really interesting. Despite the long quotes, I don’t think I have done it justice, so just read it (there is also some fascinating critique of the concept of the commons inside). But why on Earth am I writing about this book in a blog for our very techie group?

Well, because I think that when we criticize regulation as too light-touch and want to rehabilitate the state, we should not forget that state ownership (or entrepreneurship) is not always the panacea. To be honest, I have no idea how exactly the argument in Cumbers’ book can be relevant for finding alternatives to the big tech giants.

In a previous post, I argued that maybe what we need instead of Facebook are public networks along national lines, with states owning the data of their citizens and using it for research and machine learning, instead of private companies doing this.

But could we instead think of citizens collectively owning their data? Or having citizen cooperatives managing interoperable networks?

Furthermore, what type of public ownership might be an adequate model for an alternative to Amazon? These are not easy questions. And I would love to discuss them with you.

The reason I have made such an extensive review of this book is that I think it might be relevant; it remains for us to explore how exactly. One thing I am certain of is that few things are worse than the current ownership model of big tech, with a few private corporations owning and exploiting all our data.

Going back to the three pillars outlined by Cumbers, when we think of how to reform big tech or find alternatives to it, we need to think about how to 1) change employment relations within tech firms, allowing more participation in decision-making; 2) change property relations (who owns the companies that own us? what forms of ownership might be adequate?); and 3) change the marketization of ourselves and our data (is this reversible in a world where we rent even our homes to strangers?).

Each one of these three aspects should be considered and can be changed.

We just rarely frame the debate in these terms, and even more rarely think of all three aspects together. But this is precisely what we should do.

In Review: Democracy, law and controlling tech platforms

By John Naughton

Notes on a discussion of two readings

  1. Paul Nemitz: “Constitutional democracy and technology in the age of artificial intelligence”, Philosophical Transactions of the Royal Society A, 15 October 2018. https://doi.org/10.1098/rsta.2018.0089
  2. Daniel Hanley: “How Antitrust Lost its Bite”, Slate, April 6, 2021 – https://tinyurl.com/2ht4h8wf

I had proposed these readings because (a) the Nemitz paper provided a vigorous argument for resisting the ‘ethics-theatre’ currently being orchestrated by the tech industry as a pre-emptive strike against regulation by law; and (b) the Hanley article argued the need for firm rules in antitrust legislation rather than the latitude currently offered to US judges by the so-called “rule of reason”.

Most of the discussion revolved around the Nemitz article. Here are my notes of the conversation, using the Chatham House Rule as a reporting principle.

  • Nemitz’s assertion that “The Internet and its failures have thrived on a culture of lawlessness and irresponsibility” was challenged as an “un-nuanced and uncritical view of how law operates in the platform economy”. The point was that platform companies do of course ignore and evade law as and when it suits them, but at a corporate level they also rely on it and use it as both ‘a sword and a shield’; as a result, law has played a major role in structuring the internet that now exists and producing the dominant platform companies we have today, and it has been leveraged very successfully to their advantage. Even the egregious abuse of personal data (which may seem closest to being “lawless”) largely still runs within the law’s overly permissive framework. Where it doesn’t, it generally tries to evade the law by skirting around gaps within it, so even this seemingly extra-legal processing is itself shaped by the law (and cannot therefore be “lawless”). So any respect for the law that the companies profess is indeed disingenuous, but describing the internet as a “lawless” space – as Nemitz does – misses a huge part of the dynamic that got us here, and is a real problem if we’re going to talk about the potential role of law in getting us out. Legal reform is needed, but if it’s going to work then we have to be aware of and account for these things.
  • This critique stemmed from the view that law is both produced by society and in turn reproduces society, and in that sense always functions essentially as an instrument of power — so it has historically been (and remains) a tool of dominance, of hierarchy, of exclusion and marginalisation, of capital and of colonialism. In that sense, the embryonic Silicon Valley giants slotted neatly into that paradigm. And so, could Nemitz’s insistence on the rule of law — without a critical understanding of what that actually means — itself be a problem?

“They [tech companies] employ the law when it suits them and do so very strategically – as both a ‘sword’ and a ‘shield’ – and that’s played a major role in getting the platform ecosystem to where it is now.”

  • On the one hand, laws are the basic tools that liberal democracies have available for bringing companies under democratic (i.e. accountable) control. On the other hand, large companies have always been adept (and, in liberal democracies, very successful) at using the law to further their interests and cement their power.
  • This point is particularly relevant to tech companies. They’ve used law to bring users within their terms of service and thereby to hold on to assets (e.g. exabytes of user data) that they probably wouldn’t otherwise have been able to retain. They use law to enable the pretence that click-through EULAs are, in fact, contracts. So they employ the law when it suits them and do so very strategically — as both a ‘sword’ and a ‘shield’ — and that’s played a major role in getting the platform ecosystem to where it is now.
  • Also, law plays a big role in driving and shaping technological development. Technologies don’t emerge in a vacuum; they’re a product of their context, and law is a major part of that context. So the platform business models and what’s happening on the internet aren’t outside the law; they’re constructed through, and depend upon, it. It is therefore misleading when people argue (as Nemitz arguably does) that we need to use law to change things — as if the law isn’t there already, possibly enabling the very things that are societally damaging. Unless we properly understand the role of law in getting us to our current problematique, talking about how law can help us is like talking about using a tool to fix a problem without realising that the tool itself is part of the problem.

“It’s the primacy of democracy, not of law that’s crucial.”

  • There was quite a lot of critical discussion of the GDPR on two fronts — its ‘neoliberal’ emphasis on individual rights; and things that are missing from it. Those omissions and gaps are not necessarily mistakes; they may be the result of political choices.
  • One question is whether there is a deficit of law around who owns property in the cloud. If you upload a photo to Facebook or wherever, it’s unclear whether you have property rights over it or whether the cloud-computing provider does. The general consensus seems to be that that’s a tricky question! (Questions about who owns your data generally are.)
  • Even if laws exist, enforcement looks like a serious problem. Sometimes legal coercion of companies is necessary but difficult. And because of the ‘placelessness’ of the internet, it seems possible that a corporation or an entity could operate in a place where there’s no nexus to coerce it. Years ago, Jack Goldsmith and Tim Wu’s book Who Controls the Internet? recounted how Yahoo discovered that it couldn’t just do whatever it wanted in France, because it had assets in that jurisdiction which France could seize. Would that be the case with, say, Facebook, now? (Just think of why all of the tech giants have their European HQs in Ireland.)
  • It’s the primacy of democracy, not of law that’s crucial. If the main argument of the Nemitz paper is interpreted as the view that law will solve our problems, that’s untenable. But if we take as the main argument that we need to democratically discuss what the laws are, then we all agree with this. (But isn’t that just vacuous motherhood and apple pie?)
  • More on GDPR… it sets up a legal framework for regulating the consenting individual, and that’s something most people can agree is a good thing. But the way that GDPR is constructed is extremely individualistic. For example, it disempowers data subjects even in the name of giving them rights, because it individualises them. So the very way it’s constructed goes some way towards undermining its good effects. It’s based on the assumption that if we give people rights then everything will be fine. (Shades of the so-called “Right to be Forgotten”.)

As for the much-criticised GDPR, one could see it as an example of ‘trickle-down’ regulation, in that GDPR has become a kind of gold standard for other jurisdictions.

  • Why hasn’t academic law been a more critical discipline in these areas? The answer seems to be that legal academia (at least in the UK, with some honourable exceptions) is exceptionally uncritical of tech, and any kind of critical thinking is relatively marginalised within the discipline compared with other social sciences. Also, most students want to go into legal practice, so legal teaching and scholarship tend to be closely tied to law as a profession and, accordingly, the academy tends to be oriented around ‘producing’ practising lawyers.
  • There was some dissent from the tenor of the preceding discussion about the centrality of law, and especially about the improbability of overturning such a deeply embedded cognitive and professional system. This has echoes of a venerable strand in political thinking which says that in order to change anything you have to change everything, and that it’s worse to change a little bit than to change everything — which means nothing actually changes. This is the doctrine that it’s quite impossible to do any good at all unless you do the ultimate good, which is to change everything. (Which meant capitalism and colonialism and original sin, basically!) On the other hand, there is pragmatic work — making tweaks and adjustments — which, though limited in scope, might be beneficial and appeal to liberal reformers (and is correspondingly disdained by lofty adherents to the Big Picture).
  • There were some interesting perspectives based on the Hanley article. Conversations with people across disciplines show that technologists tend to suggest a technical solution for everything (solutionism rules OK?), while lawyers view the law as the solution for everything. Discussions with political scientists and sociologists, by contrast, mostly involve “fishing for ideas”, which is a feature, not a bug, because it suggests that minds are not set in silos — yet. One of the problems with the current discourse — and with these two articles — is that the law currently seems to be filling the political void, and the discourse seems to reflect public approval of the judicial approach compared with the trade-offs implicit in Congress. Yet the Slate article shows the pernicious influence, and even interference, of an over-politicised judiciary in politics and policy enforcement. (The influence of Robert Bork’s 1978 book and the Chicago School is still astonishing to contemplate.)
  • The Slate piece seems to suffer from a kind of ‘neocolonial governance syndrome’ — the West and the Rest. We all know section 230 by heart. And now it’s the “rule of reason” and the consumer welfare criterion of Bork. It’s important to understand the US legal and political context. But we should also understand: the active role of the US administration; what happened recently in Australia (where the government intervened, both through diplomatic means and directly on behalf of the Facebook platform); and in Ireland (where the government went to the European Court to oppose a ruling that Apple had underpaid tax to the tune of 13 billion Euros). So the obsession with the US doesn’t say much about the rest of the world’s capacity to intervene and dictate the rules of the game. And yet China, India and Turkey have been busy in this space recently.
  • And as for the much-criticised GDPR, one could see it as an example of ‘trickle-down’ regulation, in that GDPR has become a kind of gold standard for other jurisdictions. Something like a dozen countries have adopted GDPR-like legislation, including several in Latin America (Chile and Brazil among them) as well as South Africa, Japan, Canada and so on.

Call for provocations – Technology and the Environment Workshop

Every day, consumers around the world utilise digital technology with unprecedented convenience, but at what environmental cost?

The Minderoo Centre for Technology and Democracy is examining the environmental impact of digital technology in order to develop and disseminate an informed, independent assessment of the planetary consequences of the industry’s continued expansion.

Using the resources of leading academic research, we want to expose the tremendous environmental impact of our relationship with digital technology. For example, what is the carbon footprint of a Google search? What are the real-world ramifications for our communities and our planet of the click-to-delivery process of an Amazon order? How does tech ‘progress’ drive planned obsolescence in the smartphone market?

Call for provocations – Technology and the Environment Workshop – 17th June – 12pm BST (7pm AWST / 7am EDT)

The Minderoo Centre for Technology and Democracy is calling for participants to provide provocations for a workshop to further explore the ‘cost of convenience’ and the opaque impact that digital technology has on the environment.

The workshop aims to provide a forum for emerging researchers to enter into speculation, critique, exchange, and dialogue on the topic. Although it is primarily aimed at international academic researchers and PhD students, the workshop is also open to journalists, tech workers and those pursuing research outside an academic context. 

Apply now by email – Technology and the Environment workshop

Applicants are asked to produce a 150-word provocation, on a topic relating to the environmental impact of technology or the political economy of the environment/technology nexus, that they would like to discuss at the workshop.

To submit a 150-word provocation or to ask any questions ahead of application, please email: minderoo@crassh.cam.ac.uk

Applications are accepted until May 15.

Review: What Tech Calls Reading

A Review of FSG x Logic Series

by Alina Utrata


Publisher Farrar, Straus and Giroux (FSG) and the tech magazine Logic teamed up to produce four books that capture “technology in all its contradictions and innovation, across borders and socioeconomic divisions, from history through the future, beyond platitudes and PR hype, and past doom and gloom.” In that, the FSG x Logic series succeeded beyond its wildest imagination. These books are some of the most well-researched, thought-provoking and—dare I say it—innovative takes on how technology is shaping our world. 

Here’s my review of three of the four—Blockchain Chicken Farm, Subprime Attention Crisis and What Tech Calls Thinking—but I highly recommend you read them all. (They average 200 pages each, so you could probably get through the whole series in the time it takes to finish Shoshana Zuboff’s Surveillance Capitalism.)


Blockchain Chicken Farm: And Other Stories of Tech in China’s Countryside

Xiaowei Wang

“Famine has its own vocabulary,” Xiaowei Wang writes, “a hungry language that haunts and lingers. My ninety-year-old great-uncle understands famine’s words well.” Wang writes as beautifully as they think, effortlessly weaving between ruminations on Chinese history, personal and family anecdotes, modern political and economic theory, and first-hand research into the technological revolution sweeping rural China. Contradiction is a watchword in this book, as is contrast—they describe the differences between rural and urban life, between East and West, between family and the globe, between history, the present and the potential future. And yet, it all seems familiar. Wang invites us to think slowly about an industry that wants us to think fast—about whether any of this is actually about technology, or whether it is about capitalism, about globalization, about our politics and our communities—or, perhaps, about what it means to live a good life.

On blockchain chicken farms:

“The GoGoChicken project is a partnership between the village government and Lianmo Technology, a company that applies blockchain to physical objects, with a focus on provenance use cases—that is, tracking where something originates from. When falsified records and sprawling supply chains lead to issues of contamination and food safety, blockchain seems like a clear, logical solution. . . These chickens are delivered to consumers’ doors, butchered and vacuum sealed, with the ankle bracelet still attached, so customers can scan the QR code before preparing the chicken . . .”

On a Blockchain Chicken Farm in the Middle of Nowhere, pg 40

“A system of record keeping used to be textual, readable, and understandable to everyone. The technical component behind it was as simple as paper and pencil. That system was prone to falsification, but it was widely legible. Under governance by blockchain, records are tamperproof, but the technical systems are legible only to a select few. . . blockchain has yet to answer the question: If it takes power away from a central authority, can it truly put power back in the hands of the people, and not just a select group of people? Will it serve as an infrastructure that amplifies trust, rather than increasing both mistrust and a singular reliance on technical infrastructure? Will it provide ways to materially organize and enrich a community, rather than further accelerating financial systems that serve a select few?”

On a Blockchain Chicken Farm in the Middle of Nowhere, pg 48

On AI pig farming:

“In these large-scale farms, pigs are stamped with a unique identity mark on their bodies, similar to a QR code. That data is fed into a model made by Alibaba, and the model has the information it needs to monitor the pigs in real time, using video, temperature, and sound sensors. It’s through these channels that the model detects any sudden signs of fever or disease, or if pigs are crushing one another in their pens. If something does happen, the system recognizes the unique identifier on the pig’s body and gives an alert.”

When AI Farms Pigs, pg 63

“Like so many AI projects, ET Agricultural Brain naively assumes that the work of a farmer is to simply produce food for people in cities, and to make the food cheap and available. In this closed system, feeding humans is no different from feeding swaths of pigs on large farms. The project neglects the real work of smallholder farmers throughout the world. For thousands of years, the work of these farmers has been stewarding and maintaining the earth, rather than optimizing agricultural production. They use practices that yield nutrient-dense food, laying a foundation for healthy soils and rich ecology in an uncertain future. Their work is born out of commitment and responsibility: to their communities, to local ecology, to the land. Unlike machines, these farmers accept the responsibility of their actions with the land. They commit to the path of uncertainty.”

When AI Farms Pigs, pg 72

“After all, life is defined not by uncertainty itself but by a commitment to living despite it. In a time of economic and technological anxiety, the questions we ask cannot center on the inevitability of a closed system built by AI, and how to simply make those closed systems more rational or “fair.” What we face are the more difficult questions about the meaning of work, and the ways we commit, communicate, and exist in relation to each other. Answering these questions means looking beyond the rhetoric sold to us by tech companies. What we stand to gain is nothing short of true pleasure, a recognition that we are not isolated individuals, floating in a closed world.”

When AI Farms Pigs, pg 72

Subprime Attention Crisis: Advertising and the Time Bomb at the Heart of the Internet

Tim Hwang

Subprime Attention Crisis

In Subprime Attention Crisis, Tim Hwang argues that the terrifying thing about digital platforms is not how effective they are at manipulating behavior—it’s that they might not be very effective at all. Hwang documents, with precise and technical detail, how digital advertising markets work and how tech giants may be deliberately attempting to inflate their value, even as the actual effectiveness of online ads declines. If you think you’ve seen this film before, Hwang draws parallels to the subprime mortgages and financial systems that triggered the 2008 financial crash. He makes a compelling case that, sooner or later, the digital advertising bubble may burst—and the business model of the internet will explode overnight (not to mention all the things tech money subsidizes, from philanthropy to navigation maps to test and trace). Are Google and Facebook too big to fail? 

On potential systems breakdown:

“Whether underwriting a massive effort to scan the world’s books or enabling the purchase of leading robotics companies, Google’s revenue from programmatic advertising has, in effect, reshaped other industries. Major scientific breakthroughs, like recent advances in artificial intelligence and machine learning, have largely been made possible by a handful of corporations, many of which derive the vast majority of their wealth from online programmatic advertising. The fact that these invisible, silent programmatic marketplaces are critical to the continued functioning of the internet—and the solvency of so much more—begs a somewhat morbid thought experiment: What would a crisis in this elaborately designed system look like?”

The Plumbing, pg 25

“Intense dysfunction in the online advertising markets would threaten to create a structural breakdown of the classic bargain at the core of the information economy: services can be provided for free online to consumers, insofar as they are subsidized by the revenue generated from advertising. Companies would be forced to shift their business models in the face of a large and growing revenue gap, necessitating the rollout of models that require the consumer to pay directly for services. Paywalls, paid tiers of content, and subscription models would become more commonplace. Within the various properties owned by the dominant online platforms, services subsidized by advertising that are otherwise unprofitable might be shut down. How much would you be willing to pay for these services? What would you shell out for, and what would you leave behind? The ripple effects of a crisis in online advertising would fundamentally change how we consume and navigate the web.”

The Plumbing, pg 27

On fraud in digital advertising:

“One striking illustration is the subject of an ongoing lawsuit around claims that Facebook made in 2015 promoting the attractiveness of video advertising on its platform. At the time, the company was touting online video—and the advertising that could be sold alongside it—as the future of the platform, noting that it was “increasingly seeing a shift towards visual content on Facebook.” . . . But it turned out that Facebook overstated the level of attention being directed to its platform on the order of 60 to 80 percent. By undercounting the viewers of videos on Facebook, the platform overstated the average time users spent watching videos. . . . These inconsistencies have led some to claim that Facebook deliberately misled the advertising industry, a claim that Facebook has denied. Plaintiffs in a lawsuit against Facebook say that, in some cases, the company inflated its numbers by as much as 900 percent. Whatever the reasons for these errors in measurement, the “pivot to video” is a sharp illustration of how the modern advertising marketplace can leave buyers and sellers beholden to dominant platform decisions about what data to make available.”

Opacity, pg 70

On specific types of ad fraud:

“Click fraud is a widespread practice that uses automated scripts or armies of paid humans in “click farms” to deliver click-throughs on an ad. The result is that the advertising captures no real attention for the marketer. It is shown either to a human who was hired to click on the ad or to no one at all. The scale of this problem is enormous. A study conducted by Adobe in 2018 concluded that about 28 percent of website traffic showed “non-human signals,” indicating that it originated in automated scripts or in click farms. One study predicted that the advertising industry would lose $19 billion to click fraud in 2018—a loss of about $51 million per day. Some place this loss even higher. One estimate claims that $1 of every $3 spent on digital advertising is lost to click fraud.”

Subprime Attention, 85

What Tech Calls Thinking: An Inquiry into the Intellectual Bedrock of Silicon Valley

Adrian Daub

What Tech Calls Thinking

What Tech Calls Thinking is “about the history of ideas in a place that likes to pretend its ideas don’t have any history.” Daub has good reason to know this, as a professor of comparative literature at Stanford University (I never took a class with him, a fact I regretted more and more as the book went on). His turns of phrase do have the lyricism one associates with a literature seminar—e.g. “old motifs playing dress-up in a hoodie”—as he explores the ideas that run amok in Silicon Valley. He exposes delightful contradictions: thought leaders who engage only superficially with thoughts. CEOs who reject the university (drop out!), then build corporate campuses that look just like the university. As Daub explains the ideas of thinkers such as Abraham Maslow, Rene Girard, Ayn Rand, Jurgen Habermas, Karl Marx, Marshall McLuhan and Samuel Beckett, you get the sense, as Daub says, that these ideas “aren’t dangerous ideas in themselves. Their danger lies in the fact that they will probably lead to bad thinking.” The book is a compelling rejection of the pseudo-philosophy that has underpinned much of the Valley’s techno-determinism. “Quite frequently,” Daub explains, “these technologies are truly novel—but the companies that pioneer them use that novelty to suggest that traditional categories of understanding don’t do them justice, when in fact standard analytic tools largely apply just fine.” Daub’s analysis demonstrates the point well. 

On tech drop outs:

“You draw a regular salary and know what you’re doing with your life earlier than your peers, but you subsist on Snickers and Soylent far longer. You are prematurely self-directed and at the same time infantilized in ways that resemble college life for much longer than almost anyone in your age cohort. . . .  Dropping out is still understood as a rejection of a certain elite. But it is an anti-elitism whose very point is to usher you as quickly as possible into another elite—the elite of those who are sufficiently tuned in, the elite of those who get it, the ones who see through the world that the squares are happy to inhabit . . .  All of this seems to define the way tech practices dropping out of college: It’s a gesture of risk-taking that’s actually largely drained of risk. It’s a gesture of rejection that seems stuck on the very thing it’s supposedly rejecting.”

Dropping Out, pg 37

On platforms versus content creation:

“The idea that content is in a strange way secondary, even though the platforms Silicon Valley keeps inventing depend on it, is deeply ingrained. . . . To create content is to be distracted. To create the “platform” is to focus on the true structure of reality. Shaping media is better than shaping the content of such media. It is the person who makes the “platform” who becomes a billionaire. The person who provides the content—be it reviews on Yelp, self-published books on Amazon, your own car and waking hours through Uber—is a rube distracted by a glittering but pointless object.”

Content, pg 47

On gendered labor:

“Cartoonists, sex workers, mommy bloggers, book reviewers: there’s a pretty clear gender dimension to this division of labor. The programmers at Yelp are predominantly men. Its reviewers are mostly female . . . The problem isn’t that the act of providing content is ignored or uncompensated but rather that it isn’t recognized as labor. It is praised as essential, applauded as a form of civic engagement. Remunerated it is not. . . . And deciding what is and isn’t work has a long and ignominious history in the United States. They are “passionate,” “supportive” volunteers who want to help other people. These excuses are scripts, in other words, developed around domestic, especially female, labor. To explain why being a mom isn’t “real” work. To explain why women aren’t worth hiring, or promoting, or paying, or paying as much.”

Content, pg 51

On gendered data:

“There is the idea that running a company resembles being a sexual predator. But there is also the idea that data—resistant, squirrelly, but ultimately compliant—is a feminine resource to be seized, to be made to yield by a masculine force. . . .To grab data, to dispose of it, to make oneself its “boss”—the constant onslaught of highly publicized data breaches may well be a downstream effect of this kind of thinking. There isn’t very much of a care ethic when it comes to our data on the internet or in the cloud. Companies accumulate data and then withdraw from it, acting as though they have no responsibility for it—until the moment an evil hacker threatens said data. Which sounds, in other words, not too different from the heavily gendered imagery relied on by Snowflake. There is no sense of stewardship or responsibility for the data that you have “grabbed,” and the platform stays at a cool remove from the creaturely things that folks get up to when they go online and, wittingly or unwittingly, generate data.”

Content, pg 55

On disruption:

“There is an odd tension in the concept of “disruption,” and you can sense it here: disruption acts as though it thoroughly disrespects whatever existed previously, but in truth it often seeks to simply rearrange whatever exists. It is possessed of a deep fealty to whatever is already given. It seeks to make it more efficient, more exciting, more something, but it never wants to dispense altogether with what’s out there. This is why its gestures are always radical but its effects never really upset the apple cart: Uber claims to have “revolutionized” the experience of hailing a cab, but really that experience has stayed largely the same. What it managed to get rid of were steady jobs, unions, and anyone other than Uber’s making money on the whole enterprise.”

Desire, pg 104

Review: ‘The Social Dilemma’ — Take #2

In “More than tools: who is responsible for the social dilemma?”, Microsoft researcher Niall Docherty has an original take on the thinking that underpins the film. If we are to pursue more productive discussions of the issues raised by the film, he argues, we need to re-frame social media as something more than a mere “tool”. After all, “when have human beings ever been fully and perfectly in control of the technologies around them? Is it not rather the case that technologies, far from being separate from human will, are intrinsically involved in its activation?”

French philosopher Bruno Latour famously uses the example of the gun to advance this idea, which he calls mediation. We are all aware of the platitude, “Guns don’t kill people, people kill people”. On this logic, the gun is simply a tool that allows the person, as the primary agent, to kill another; the gun exists only as an object through which the person’s desire to kill flows. For Latour, this view is deeply misleading.

Instead, Latour draws our attention to the way the gun, in translating a human desire to kill into action, materializes that desire in the world: “you are a different person with the gun in your hand”, and the gun, by being in your hand, is different than if it were left snugly in its rack. Only when the human intention and the capacities of the gun are brought together can a shooting, as an observably autonomous action, actually take place. It is impossible to neatly distinguish the primary agents of the scene. Responsibility for the shooting, which can only occur through the combination of human and gun (and, by proxy, those who produced and provided the gun), is thus shared.

With this in mind, we must question how useful it is to think about social media in terms of manipulation and control. Social media, far from being a malicious yet inanimate object (like a weapon), is something more profound and complex: a generator of human will. Our interactions on social media platforms, our likes, our shares, our comments, are not raw resources to be mined – they simply could not have occurred without their technical mediation. Neither are they mere expressions of our autonomy or, conversely, of manipulation: the user does not, and cannot, act alone.

The implication of taking this Latourian view is that “neither human individuals, nor the manipulative design of platforms, seductive they may be, can be the sole causes of the psychological and political harm of social media”. Rather, it is the coming together of human users and user interfaces, in specific historical settings, that co-produces the activity that occurs upon them. We as users, as much as the technology itself, therefore share responsibility for the issues that rage online today.