Can we imagine a better Internet?

The convenience of thinking together

by Alina Utrata and Julia Rone

Reflecting on our recent tech and environment workshop, two of our workshop hosts, Alina Utrata and Julia Rone, explore the questions from the event that are still making them think.

On June 17, over 40 participants from all over the world joined our workshop exploring “the cost of convenience” and the opaque impact that digital technology has on the environment.

Instead of having academics presenting long papers in front of Zoom screens with switched-off cameras, we opted for a more dialogic, interactive and (conveniently) short format.

We invited each participant (or team of participants) to share a provocation on the environmental impact of technology or the political economy of the environment/technology nexus, which we then discussed in small groups. In panel sessions, we went on to discuss the provocations (what we know already), the known unknowns (what we don’t know yet), and ideas for an action plan (what we could be doing).

Below are our reflections on the workshop.

A visual representation of the workshop, produced by artist Tom Mclean.

There is no real technical or technological “fix” for the climate crisis

By Alina Utrata

I am currently working on the relationships between technology corporations and states.

For me, what stood out about the discussions was the sense among all participants that there was no real technical or technological “fix” for the climate crisis.

Instead, the conversations often revolved around globally embedded systems and structures of power—and asking why a certain technology is being deployed, by whom, for whom and how, rather than whether they could “fix” anything.

“I was inspired by how participants immediately recognised the importance of these systems, and instead focused our conversations on how to change them.”

Alina Utrata

In fact, it was pointed out that often the creators of these technological innovations deliberately promoted certain kinds of narratives about how they wanted the technology to be thought of—for example, the “cloud” as a kind of abstract, other place in the sky, rather than a real, tangible infrastructure with real costs.

The same could be said of the metaphors of “carbon footprint” or “carbon neutral”—the idea that as long as discrete, individual corporate entities were not personally responsible for a certain amount of emissions, then they could not be held culpable for a system that was failing the planet. 

Credit: Alex Machado for Unsplash

I was inspired by how participants immediately recognized the importance of these systems, and instead focused our conversations on how to change them.

Although many political concepts today are so commonplace that they seem ordinary, we discussed how they are often really quite modern or Western in origin.

By contrast, the idea of the shared, communal commons is an ancient one, and can be used as a political framework to tackle some of the harmful systems humans have put in place on our earth.

Finally, we acknowledged that we all have a role to play in this fight for our future—but not all of us have or need to play the same role.

Some of us will be activists outside these systems of power, and some of us will be sympathetic voices from within.

The participants reaffirmed the need to both communicate and coordinate across disciplines within academia, and more broadly across sectors of the wider world.


Should we abolish the Internet?

By Julia Rone

 Credit: Denny Müller for Unsplash

I am currently working on the democratic contestation of data centre construction.

John Naughton often says during our weekly meetings that the most interesting conversations are those that finish before you want them to end. That was definitely the case for me at the workshop, since each of the sessions I hosted ended with a question that could be discussed for hours and that still lingers in my mind.

Concepts and conceptual problems

If I have to identify the key common threads running through the three sessions I hosted, the first one has to do with concepts and conceptual problems.

Several participants posed the crucial question: how do we think of “progress”?

Is progress necessarily synonymous with growth, increased efficiency, better performance?

What are we sacrificing in the name of “progress”?

One participant asked the painfully straight-to-the-point question: “Should we abolish the Internet?” (considering the massive toll of tech companies on the environment, the rise of hate speech, cyber-bullying, polarization, etc.)

Do we feel loss at the thought? 

“Yes!” I immediately said to myself. “How could I talk to my family and to my friends?”

This question really provoked me to think further.

If I can’t live in a world without the Internet, can we think of a different Internet?

How can we re-invent the Internet to become more caring, more accessible, more Earth-based and less extractive (as one of the provocations suggested)?

Credit: Ehimetalor Akhere Unuabona for Unsplash

What does it mean to be sustainable?

Another, similarly important conceptual question was posed at the very end of the second session by a colleague who asked: “What does it mean to be sustainable?” Why do we want to be sustainable? What and whom are we sustaining?

Should we not rather think of ways to radically change the system?

Our time ran out before we could discuss this in depth, and this question, too, has been bothering me ever since.

Ultimately, as another participant emphasised, research on the environmental impact of tech is most problematic and underdeveloped at two levels: the level of concepts (how do we think of abstraction and extraction, for example?) and the ground level of what individuals and communities actually do.

This latter question about on-the-ground labor, work and action is actually the second common thread running through several of the contributions in the sessions I attended.

“It is difficult to disentangle the economic aspects of repair from the environmental ones.” 

A colleague studying workers who do repair for their livelihood (not as a hipster exercise) rightly pointed out that when discussing the environmental consequences of tech, and practices such as repair in particular, it is difficult to disentangle the economic aspects of repair from the environmental ones. 

Indeed, in a different context, scholars of the environmental impact of tech have clearly shown how tech companies’ extractive practices towards nature go hand-in-hand with dispossession, economic exploitation and extraction of value and profit from marginalised communities.

“In order to understand and better address the environmental consequences of digital tech, we need to be more open to the experiences of individuals and communities on the ground who often “know better” since they live (and occasionally also cause) the very consequences of tech we research.”

Julia Rone

Another colleague had studied the ways in which local leaders participate in decision-making about data centres in Thailand and controversies around water use – a topic very relevant to my own current project on data centres in the Netherlands.

Yet another participant had studied how participatory map-making not only consumes electricity but also changes the very way we see nature.

The reason why I found all these contributions so fascinating is that they challenged simplistic narratives of Big Tech vs. the Environment and showed how many players (with how many different intentions, principles and economic interests) are actually involved in the increasingly complex assemblage of humans, nature and tech.

So, to sum up: in order to understand and better address the environmental consequences of digital tech, we need to be clearer about the concepts we use as researchers, but also more open to the experiences of individuals and communities on the ground who often “know better” since they live (and occasionally also cause) the very consequences of tech we research.

To summarise…

Ultimately, each of us who attended (and hosted) sessions has a rich but still incomplete overview of the workshop.

Because we each attended different sessions, there were provocations that we individually missed, as sessions intertwined and overlapped (a bit like tectonic plates readjusting meaning, ideas and new perspectives for research).

We would love to hear from other workshop attendees about the ideas that struck them most during the sessions.

Luckily, some participants have submitted their provocations to our Zine, a unique document that we will share soon to help guide us forward in our thinking.

We can’t wait to share the Zine with you… stay tuned.

Some lessons of Trump’s short career as a blogger

By John Naughton

The same unaccountable power that deprived Donald J. Trump of his online megaphones could easily be deployed to silence other prominent figures, including those of whom liberals approve.

‘From the Desk of Donald J. Trump’ lasted just 29 days. It’s tempting to gloat over this humiliating failure of a politician hitherto regarded as an omnipotent master of the online universe.

Tempting but unwise, because Trump’s failure should alert us to a couple of unpalatable realities.

The first is that the eerie silence that descended after the former President was ‘deplatformed’ by Twitter and Facebook provided conclusive evidence of the power of these two private companies to control the networked public sphere.

Those who loathed Trump celebrated his silencing because they regarded him — rightly — as a threat to democracy.

But on the other hand nearly half of the American electorate voted for him. And the same unaccountable power that deprived him of his online megaphones could easily be deployed to silence other prominent figures, including those of whom liberals approve.

The other unpalatable reality is that Trump’s failure to build an online base from scratch should alert us to the way the utopian promise of the early Internet — that it would be the death of the ‘couch potato’, the archetypal passive media consumer — has not been realised. Trump, remember, had 88.9m followers on Twitter and over 33m fans on Facebook.

“The failure of Trump’s blog is not just a confirmation of the unaccountable power of those who own and control social media, but also a reflection of the way Internet users enthusiastically embraced the ‘push’ model of the Web over the ‘pull’ model that we techno-utopians once hoped might be the network’s future.”

Yet when he started his own blog they didn’t flock to it. In fact they were nowhere to be seen. Forbes reported that the blog had “less traffic than pet adoption site Petfinder and food site Eat This Not That.” And it was reported that he had shuttered it because “low readership made him look small and irrelevant”. Which it did.

What does this tell us? The answer, says Philip Napoli in an insightful essay in Wired,

“lies in the inescapable dynamics of how today’s online media ecosystem operates and how audiences have come to engage with content online. Many of us who study media have long distinguished between “push” media and “pull” media.

“Traditional broadcast television is a classic “push” medium, in which multiple content streams are delivered to a user’s device with very little effort required on the user’s part, beyond flipping the channels. In contrast, the web was initially the quintessential “pull” medium, where a user frequently needed to actively search to locate content interesting to them.

“Search engines and knowing how to navigate them effectively were central to locating the most relevant content online. Whereas TV was a “lean-back” medium for “passive” users, the web, we were told, was a “lean-forward” medium, where users were “active.” Though these generalizations no longer hold up, the distinction is instructive for thinking about why Trump’s blog failed so spectacularly.

“In the highly fragmented web landscape, with millions of sites to choose from, generating traffic is challenging. This is why early web startups spent millions of dollars on splashy Super Bowl ads on tired, old broadcast TV, essentially leveraging the push medium to inform and encourage people to pull their online content.

“Then social media helped to transform the web from a pull medium to a push medium...”

Credit: Adem AY for Unsplash

This theme was nicely developed by Cory Doctorow in a recent essay, “Recommendation engines and ‘lean-back’ media”.  The optimism of the early Internet era, he mused, was indeed best summarized in that taxonomy.

“Lean-forward media was intensely sociable: not just because of the distributed conversation that consisted of blog-reblog-reply, but also thanks to user reviews and fannish message-board analysis and recommendations.

“I remember the thrill of being in a hotel room years after I’d left my hometown, using Napster to grab rare live recordings of a band I’d grown up seeing in clubs, and striking up a chat with the node’s proprietor that ranged fondly and widely over the shows we’d both seen.

“But that sociability was markedly different from the “social” in social media. From the earliest days of Myspace and Facebook, it was clear that this was a sea-change, though it was hard to say exactly what was changing and how.

“Around the time Rupert Murdoch bought Myspace, a close friend had a blazing argument with a TV executive who insisted that the internet was just a passing fad: that the day would come when all these online kids grew up, got beaten down by work and just wanted to lean back.

“To collapse on the sofa and consume media that someone else had programmed for them, anaesthetizing themselves with passive media that didn’t make them think too hard.

“This guy was obviously wrong – the internet didn’t disappear – but he was also right about the resurgence of passive, linear media.”

This passive media, however, wasn’t the “must-see TV” of the 80s and 90s.  Rather, it was the passivity of the recommendation algorithm, which created a per-user linear media feed, coupled with mechanisms like “endless scroll” and “autoplay,” that obliterated any trace of an active role for the  aptly-named Web “consumer”.

As Napoli puts it,

“Social media helped to transform the web from a pull medium to a push medium. As platforms like Twitter and Facebook generated massive user bases, introduced scrolling news feeds, and developed increasingly sophisticated algorithmic systems for curating and recommending content in these news feeds, they became a vital means by which online attention could be aggregated.

“Users evolved, or devolved, from active searchers to passive scrollers, clicking on whatever content that their friends, family, and the platforms’ news feed algorithms put in front of them. This gave rise to the still-relevant refrain “If the news is important, it will find me.” Ironically, on what had begun as the quintessential pull medium, social media users had reached a perhaps unprecedented degree of passivity in their media consumption. The leaned-back “couch potato” morphed into the hunched-over “smartphone zombie.””

So the failure of Trump’s blog is not just a confirmation of the unaccountable power of those who own and control social media, but also a reflection of the way Internet users enthusiastically embraced the ‘push’ model of the Web over the ‘pull’ model that we techno-utopians once hoped might be the network’s future.

Apple clearly has power, but it isn’t accountable

By John Naughton

The only body that has, to date, been able to exert real control over the data-tracking industry is a giant private company which itself is subject to serious concerns about its monopolistic behaviour. Where is democracy in all this?

A few weeks ago, Apple dropped its long-promised bombshell on the data-tracking industry.

The latest version (14.5) of iOS — the operating system of the iPhone — included a provision that required app users explicitly to confirm that they wished to be tracked across the Internet in their online activities.

At the heart of the switch is a code known as “the identifier for advertisers”, or IDFA. It turns out that every iPhone comes with one of these identifiers, the object of which is to provide participants in the hidden real-time bidding system with aggregate data about the user’s interests.

For years, iPhone users had had the option to switch it off by digging into the privacy settings of their devices; but, because they’re human, very few had bothered to do that.

From 14.5 onwards, however, they couldn’t avoid making a decision, and you didn’t have to be a Nobel laureate to guess that most iPhone users would opt out.
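
Concretely, that decision point is surfaced through Apple’s AppTrackingTransparency framework. The sketch below is a hedged illustration (not drawn from the column) of what an app has to do from iOS 14.5 onwards before it can read the identifier for advertisers; if the user declines the prompt, the identifier it gets back is zeroed out.

```swift
import AppTrackingTransparency
import AdSupport

// Illustrative sketch only: how an iOS 14.5+ app must ask permission before
// reading the "identifier for advertisers" (IDFA). The app also needs an
// NSUserTrackingUsageDescription entry in its Info.plist for the prompt to appear.
@available(iOS 14, *)
func requestTrackingConsent() {
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            // Only with explicit consent does the app receive a usable IDFA.
            let idfa = ASIdentifierManager.shared().advertisingIdentifier
            print("Tracking allowed, IDFA: \(idfa.uuidString)")
        case .denied, .restricted, .notDetermined:
            // No consent: the identifier comes back as all zeros,
            // which is useless to the real-time bidding system.
            print("Tracking declined; no usable identifier for advertisers.")
        @unknown default:
            print("Unknown authorization status.")
        }
    }
}
```

In other words, what was once an opt-out buried in the settings is now an explicit, user-facing ask every app must make.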

Which explains why those who profit from the data-tracking racket had for months been terminally anxious about Apple’s perfidy.

Some of the defensive PR mounted on their behalf — for example Facebook’s weeping about the impact on small, defenceless businesses — defied parody.

“We have evidence of its [real-time bidding] illegitimacy, and a powerful law on the statute book which in principle could bring it under control — but which we appear unable to enforce.”

Other counter-offensives included attacks on Apple’s monopolistic control over its Apps store, plus charges of rank hypocrisy – that changes in version 14.5 were not motivated by Apple’s concerns for users’ privacy but by its own plans to enter the advertising business. And so on.

It’ll be a while until we know for sure whether the apocalyptic fears of the data-trackers were accurate.

It takes time for most iPhone users to install operating system updates, and so these are still relatively early days. But the first figures are promising. One data-analytics company, for example, has found that in the early weeks the daily opt-out rate for American users has been around 94 percent.

This is much higher than surveys conducted in the run-up to the change had suggested — one had estimated an opt-out rate closer to 60 per cent.

If the opt-out rate is as high as we’ve seen so far, then it’s bad news for the data-tracking racket and good news for humanity. And if you think that description of what the Financial Times estimates to be a $350B industry is unduly harsh, then a glance at a dictionary may be helpful.

Merriam-Webster, for example, defines ‘racket’ as “a fraudulent scheme, enterprise, or activity” or “a usually illegitimate enterprise made workable by bribery or intimidation”.

It’s not clear whether the computerised, high-speed auction system in which online ads are traded benefits from ‘bribery or intimidation’, but it is certainly illegal — and currently unregulated.

That is the conclusion of a remarkable recent investigation by two legal scholars, Michael Veale and Frederik Zuiderveen Borgesius, who set out to examine whether this ‘real-time bidding’ (RTB) system conforms to European data-protection law.

“The irony in this particular case is that there’s no need for such an overhaul: Europe already has the law in place.”

They asked whether RTB complies with three rules of the GDPR (General Data Protection Regulation) — the requirement for a legal basis, transparency, and security. They showed that for each of the requirements, most RTB practices do not comply. “Indeed”, they wrote, “it seems close to impossible to make RTB comply”. So, they concluded, it needs to be regulated.

It does.

Often the problem with tech regulation is that our legal systems need to be overhauled to deal with digital technology. But the irony in this particular case is that there’s no need for such an overhaul: Europe already has the law in place.

It’s the GDPR, which is part of the legal code of every EU country and has provision for swingeing punishments of infringers. The problem is it’s not being effectively enforced.

Why not? The answer is that the EU delegates regulatory power to the relevant institutions — in this case Data Protection Authorities — of its member states. And these local outfits are overwhelmed by the scale of the task – and are lamentably under-resourced for it.

Half of Europe’s DPAs have only five technical experts or fewer. And the Irish Data Protection Authority, on whose patch most of the tech giants have their European HQs, has the heaviest enforcement workload in Europe and is clearly swamped.

So here’s where we are: an illegal online system has been running wild for years, generating billions in profits for its participants.

We have evidence of its illegitimacy, and a powerful law on the statute book which in principle could bring it under control — but which we appear unable to enforce.

And the only body that has, to date, been able to exert real control over the aforementioned racket is… a giant private company which itself is subject to serious concerns about its monopolistic behaviour. And the question for today: where is democracy in all this? You only have to ask to know the answer.


A version of this post appeared in The Observer on 23 May, 2021.

In Review: The Cloud and the Ground

By Julia Rone

In this literature review, Julia Rone outlines the key trends and logics behind the boom in data centre construction across the globe.

Hamlet: Do you see yonder cloud that’s almost in shape of a camel?

Polonius: By th’ mass, and ‘tis like a camel indeed.

Hamlet: Methinks it is like a weasel

Polonius: It is backed like a weasel.

Hamlet: Or like a whale?

Polonius: Very like a whale.

The cloud – this fundamental building block of digital capitalism – has been so far defined mainly by the PR of big tech companies.

The very metaphor of the “cloud” presupposes an ethereal, supposedly immaterial collection of bits gliding in the sky, safely removed from the corrupt organic and inorganic matter that surrounds us. This, of course, couldn’t be further from the truth.

But even when they acknowledge the materiality of the “cloud” and the way it is grounded in a very physical infrastructure of cables, data centres, etc., tech giants still present it in a neat and glamorous way. Data centres, for example, provide carefully curated tours and are presented as sites of harmoniously humming servers, surrounded by wild forests and sea. Some data centres even boast saunas.

Instead of blindly accepting the PR of tech companies and seeing “the cloud” as whatever they present it to be (much as Polonius accepts Hamlet’s interpretations of the cloud), we should be attuned to the multiplicity of existing perspectives on “the cloud”, coming from researchers, rural and urban communities, and environmentalists, among others.

In this lit review, I outline the key trends and logics behind the boom in data centre construction across the globe. I base the discussion on several papers from two special issues. The first one is The Nature of Data Centres, edited by Mél Hogan and Asta Vonderau for Culture Machine. The second: Location and Dislocation: Global Geographies of Digital Data, edited by Alix Johnson and Mél Hogan for Imaginations: Journal of Cross-Cultural Image Studies. I really recommend reading both issues – the contributions read like short stories and go straight to the core of the most pressing political economy problems of our times.

Credit: Zbynek Burival for Unsplash

The “nature” of data centres

Data centres, as key units of the cloud, are very material: noisy, hot, giant storage boxes containing thousands of servers, they occupy factories from the past or spring up on farmland all over the globe. Data centres are grounded in particular locations and depend on a number of “natural” factors for their work, including temperature, humidity, or air pollution. To function, data centres not only use up electricity (produced by burning coal or from wind energy, for example); they also employ technologies to circulate air and water for cooling, and they emit heat as a waste product.

But data centres are not only assemblages of technology and nature. Their very appearance, endurance and disappearance are defined by complex institutional and non-institutional social relations: regions and countries compete with each other to cut taxes for tech corporations that promise to bring jobs and development. Some states (e.g. Scandinavian states) are preferred over others because of their stable institutions and political “climate”.

No blank slate

To illustrate, the fact that data centres are built in Sweden’s Norrbotten region has a lot to do with the “nature” of the region, conceptualized reductively by tech companies as cheap energy, cheap water, cheap land and green imagery (Levenda and Mahmoudi, 2019, 2). But it also has a lot to do with the fact that Norrbotten is filled with the “ruins of infrastructural promises” (Vonderau, 2019, 3) – “a scarcely populated and resource-rich region, historically inhabited by native Sami people, the region was for a long-time regarded as no-man’s land” (ibid). Not only is Norrbotten scarcely populated but it also has an “extremely stable and redundant electricity grid which was originally designed for […]‘old’ industries” (ibid, 7).

A similar logic of operation could be discerned in the establishment of a data centre in the Midway Technology Centre in Chicago, where the Schulze Bakery was repurposed as a data centre (Pickren, 2017). Pickren was told in an interview with a developer working on the Schulze redevelopment project that “because the surrounding area had been deindustrialized, and because a large public housing project, the Robert Taylor Homes had closed down in recent decades, the nearby power substations actually had plenty of idle capacity to meet the new data centre needs” (Pickren, 2017). As Pickren observes, “there is no blank slate upon which the world of data simply emerges” (ibid.). There are multiple “continuities between an (always temporary) industrial period and the (similarly temporary) ascendancy of digital capitalism” (ibid).

Extraction and the third wave of urbanization

What the examples of Norrbotten in Sweden and the redevelopment of Chicago by the data industry show is that, despite carefully constructed PR around “being close to nature” and “being green”, decisions on data centre construction actually depend on the availability of electricity, for which depopulation is only a plus. Instead of “untouched” regions, what companies often go for are rather abandoned or scarcely populated regions with infrastructure left behind. Data centres use resources – industrial capacity or green energy – that are already there, left from previous booms and busts of capitalism or from conscious state investment that is now used to the benefit of private companies.

“Urban interactions are increasingly mediated by tech and leave a digital trace – from paying for your Uber to ordering a latte, from booking a restaurant to finding a date for the night.”

Both urban and rural communities are in fact embedded within a common process of a “third wave of urbanization” that goes hand in hand with an increase in the commodification and extraction of both data and “natural” resources (Levenda and Mahmoudi, 2019). What this means is that urban interactions are increasingly mediated by tech and leave a digital trace – from paying for your Uber to ordering a latte, from booking a restaurant to finding a date for the night.

Credit: Priscilla Du Preez for Unsplash

This urban data is then stored and analysed in predominantly rural settings: “[T]he restructuring of Seattle leads to agglomerations in urban data production, which rely on rural data storage and analysis” (ibid, 9). Put simply, “[J]ust as Facebook and Google use rural Oregon for their ‘natural’ resources, they use cities and agglomerations of ‘users’ to extract data”.

Ultimately, data centres manifest themselves as assemblages for the extraction of value from both people and nature.

As if in a perverse rendition of Captain Planet, all elements – water, air, earth, human beings and technology – join forces so that data centres can function and you can upload a cat photo to Facebook. In this real-life, data-centre version of Captain Planet, however, all elements are used up, extracted, exhausted. Water is polluted.

People live with the humming noise of thousands of servers.

Taxes are not collected and therefore not invested in communities that are already deprived.

What is more, data centres often arrive in rural regions with the promise to create jobs and drive development. But as numerous authors have shown, the actual jobs created by data centres are fewer than what was originally promised, with most being precarious subcontracting (Mayer, 2019). As Pickren notes, “If the data centre is the ‘factory of the 21st century,’ whither the working class?”

Abstraction

Data centres do create jobs, but predominantly in urban areas. “[W]here jobs are created, where they are destroyed and who is affected are socially and geographically uneven” (Pickren, 2017). Where value is extracted from and where value is allocated rarely coincide.

And if, from a bird’s-eye view, what matters is the total number of jobs created, what matters in Sweden’s Norrbotten or the Netherlands’ Groningen, where data centres are built, is how many jobs are created there and, furthermore, what types of jobs (Mayer, 2019). In the same way, while from an abstract point of view tech companies such as Microsoft might be “carbon neutral”, this does not change their questionable practices and dependence on coal in particular places.

The Introduction to the “Location and Dislocation” Special Issue quotes a classic formulation by Yi-Fu Tuan, according to whom “place is space made meaningful” (Johnson and Hogan, 2017, 4).

“Whenever we hear big tech’s grandiose pledges of carbon neutrality and reducing carbon emissions, we need to understand that these companies are not simply “green-washing” but are also approaching the problem of global warming “in the abstract””.

One of the key issues with tech companies building data centres is the way they privilege space over place – an abstract logic of calculation and global flows over the very particular local relations of belonging and accountability.

In a great piece on “fungible forms of mediation in the cloud”, Pasek explores how the practice of big tech companies of buying renewable energy certificates does more harm than good, since it allows “data centre companies to symbolically negate their local impacts in coal-powered regions on papers, while still materially driving up local grid demand and thereby incentivizing the maintenance or expansion of fossil energy generation” (ibid, 7).

The impact for local communities can be disastrous: “In communities located near power plants, disproportionately black, brown and low-income, this has direct consequences for public health, including greater rates of asthma and infant mortality” (ibid).

So whenever we hear big tech’s grandiose pledges of carbon neutrality and reducing carbon emissions, we need to understand that these companies are not simply “green-washing” but are also approaching the problem of global warming “in the abstract”, at the global level, paying little attention to their effect in any particular locality.

As Pasek notes, this logic of abstraction subordinates the “urgencies of place” to the “logics of circulation”.

Unsurprisingly, it is precisely the places that have already lost the most from previous industrial transformations that suffer most during the current digital transformation.

Invisibility and Hypervisibility

What makes possible the extraction practices of tech companies is a mix of how little we know about them and how much we believe in their promise of doing good (or, at least, of not doing evil).

In her fascinating essay “The Second Coming: Google and Internet infrastructure”, Mayer (2019) explores the rumours around a new Google data centre in Groningen. She shows how Google’s reputation as a leading company, combined with the total lack of concrete information about the new data centre, creates a mystical aura around the whole enterprise: “Google’s curation of aura harkens back to the early eras of Western sacred art, during which priests gave sacred objects their magical value by keeping them ‘invisible to the spectator’” (Mayer, 2019, 4).

Mayer contrasts a sleek Google PR video (with a lone windmill and blond girls looking at computer screens) with the reality brought about by a centre that offered only a few temporary subcontracting jobs. The narrative of regional growth presented by Google unfortunately turned out to be PR rather than a coherent development strategy.

Impermanence

Furthermore, in a fascinating essay on data centres as “impermanent infrastructures”, Velkova (2019) explores the temporality and impermanence of data centres, which can be moved or abandoned easily.

How could such impermanent structures provide regional development?

What is more, even if data centres do not move, they do reorganize global territories and connectivity speeds through the threat of moving: “data center companies are constantly reevaluating the economic profitability of  particular locations in synchrony with server replacement cycles and new legislative frameworks that come into force.

Data centres are above all impermanent – they can come and go. Rather than being responsible to a particular locality, data centres are part of what Pasek called a “logic of global circulation”

Should tax regulations, electricity prices, legislation or geopolitical dynamics shift, even a hyper-sized data center like Google’s in Finland or Facebook’s in Sweden could make a corresponding move to a place with more economically favourable conditions within three years” (Velkova, 2019, 5).

So data centres are, on the one hand, hypervisible through corporate PR. On the other hand, they are invisible to local communities, which are left guessing about construction permits, the conditions of data centres’ arrival, and their impact on the environment and the economy.

But ultimately, and this is the crucial part, data centres are above all impermanent – they can come and go. Rather than being responsible to a particular locality, data centres are part of what Pasek called a “logic of global circulation”.

Holding each node accountable

Big tech’s logics of extraction, abstraction, invisibility, hypervisibility and impermanence are driving the current third wave of urbanization and unequal development under digital capitalism.

But it is possible to imagine another politics that would “hold each node accountable to the communities in which they are located” (Pasek, 9).

The papers from the two special issues I review here provide an exhaustive and inspiring overview of the “nature” and imaginaries of data centres.

Yet, with few exceptions (such as the work of Asta Vonderau), we know little about the politics of resistance to data centres and the local social movements that are appearing and demanding more democratic participation in decision making.

Would it be possible for us – citizens – to define what the cloud should look like? Not sure. But this is a crucial element of any project for democratizing digital sovereignty. And this is what I work on now.

In Review: Bellingcat and the unstoppable Mr Higgins

By John Naughton

Review of We are Bellingcat: An Intelligence Agency for the People, by Eliot Higgins, Bloomsbury, 255pp

On the face of it, this book tells an implausible story. It’s about how an ordinary guy – a bored administrator in Leicester, to be precise – becomes a skilled Internet sleuth solving puzzles and crimes which appear to defeat some of the world’s intelligence agencies. And yet it’s true. Eliot Higgins was indeed a bored administrator, out of a job and looking after his young daughter in 2011 while his wife went out to work. He was an avid watcher of YouTube videos, especially of those emanating from the Syrian civil war, and one day had an epiphany: “If you searched online you could find facts that neither the press nor the experts knew.”

Higgins realised that one reason why mainstream media were ignoring the torrent of material from the war zone that was being uploaded to YouTube and other social media channels was that these outlets were unable to verify or corroborate it. So he started a blog — the Brown Moses blog — and discovered that a smattering of other people had had a similar realisation, which was the seed crystal for the emergence of an online community that converged around news events that had left clues on YouTube, Facebook, Twitter and elsewhere.

This community of sleuths now sails under the flag of Bellingcat, a name taken from the children’s story about the ingenious mice who twig that the key to obtaining early warning of a cat’s approach is to put a bell round its neck. This has led to careless journalists calling members of the community “Bellingcats” — which leads them indignantly to point out that they are the mice, not the predators!

The engaging name belies a formidable little operation which has had a series of impressive scoops. One of the earliest involved confirming Russian involvement in the downing of MH17, the Malaysia Airlines aircraft brought down by a missile when flying over Ukraine. Other impressive scoops included identification of the Russian FSB agents responsible for the Skripal poisonings and finding the FSB operative who tried to assassinate Alexei Navalny, the Russian democratic campaigner and Putin opponent who is now imprisoned — and, reportedly, seriously ill — in a Russian gaol.

‘We are Bellingcat’ is a low-key account of how this remarkable outfit evolved and of the role that Mr Higgins played in its development. The deadpan style reflects the author’s desire to project himself as an ordinary Joe who stumbled on something significant and worked at it in collaboration with others. This level of understatement is admirable but not entirely persuasive for the simple reason that Higgins is no ordinary Joe. After all, one doesn’t make the transition from a bored, low-level administrator to become a Research Fellow at U.C. Berkeley’s Human Rights Center and a member of the International Criminal Court’s Technology Advisory Board without having some exceptional qualities.

“One could say that the most seminal contribution Bellingcat has made so far is to explore and disseminate the tools needed to convert user-generated content into more credible information — and maybe, sometimes, into the first draft of history.”

One of the most striking things about Bellingcat’s success is that — at least up to this stage — its investigative methodology is (to use a cliché) not rocket science. It’s a combination of determination, stamina, cooperation, Internet-savviness, geolocation (where did something happen?), chronolocation (when did it happen?) and an inexhaustible appetite for social-media-trawling. There is, in other words, a Bellingcat methodology — and any journalist can learn it, provided his or her employer is prepared to provide the time and opportunity to do so. In response, Bellingcat has been doing ‘boot camps’ for journalists — first in Germany, Britain and France, and — hopefully — in the US. And the good news is that some mainstream news outlets, including the New York Times, the Wall Street Journal and the BBC, have been setting up journalistic units working in similar ways.

In the heady days of the so-called ‘Arab spring’ there was a lot of excited hype about the way the smartphone had launched a new age of ‘Citizen Journalism’. This was a kind of category error which confused user-generated content badged as ‘witnessing’ with the scepticism, corroboration, verification, etc. that professional journalism requires. So in that sense one could say that the most seminal contribution Bellingcat has made so far is to explore and disseminate the tools needed to convert user-generated content into more credible information — and maybe, sometimes, into the first draft of history.

Mr Higgins makes continuous use of the phrase “open source” to describe information that he and his colleagues find online, when what he really means is that the information — because it is available online — is in the public domain. It is not ‘open source’ in the sense that the term is used in the computer industry, but I guess making that distinction is now a lost cause because mainstream media have re-versioned the phrase.

The great irony of the Bellingcat story is that the business model that finances the ‘free’ services (YouTube, Twitter, Facebook, Reddit, Instagram et al) that are polluting the public sphere and undermining democracy is also what provides Mr Higgins and his colleagues with the raw material from which their methodology extracts so many scoops and revelations. Mr Higgins doesn’t have much time for those of us who are hyper-critical of the tech industry. He sees it as a gift horse whose teeth should not be too carefully examined. And I suppose that, in his position, I might think the same.

Forthcoming in British Journalism Review, vol. 32, No 2, June 2021.

In Review: How do we avoid the ‘racket’ of sustainable development and green tech?

By Mallika Balakrishnan

Ahead of COP26, can the narrative be shifted away from what Camila Nobrega and Joana Varon describe as a “dangerous mix of ‘green economy’ and techno-solutionism”? Mallika Balakrishnan explores the Minderoo Centre for Technology and Democracy’s reading & discussion of Nobrega, Camila & Joana Varon. “Big tech goes green(washing): feminist lenses to unveil new tools in the master’s houses.” GISWatch: Technology, the environment, and a sustainable world. 2021.

On June 17, the Minderoo Centre will be hosting thinkers from academia, civil society, and industry for our workshop on Technology & the Environment.

In the lead up to COP26, we’re keen to spark discussion and amplify action at the nexus of technology and its impact on the environment.

One of the themes we’re hoping to explore more is the environmental cost of technological convenience. 

Frankly, critiques of convenience are often the place my brain starts to tune out: “convenience” frequently serves as shorthand for a framework of climate destruction via individual consumption choices.

Several, though not all, of these analyses are ableist and anti-poor, and they refuse to start from a commitment to decoloniality. 

Nevertheless, the environmental and social costs of convenience are staggering, and will be crucial to understand on the road to environmental justice.

I proposed reading Joana Varon and Camila Nobrega’s recently published article because I resonated strongly with their feminist, power-based analysis of technology and the environment, specifically around the role of big tech companies and intergovernmental meetings such as COP.

Their work articulates the dissonance between big tech’s stated commitments to climate justice and actual consolidation of power, in a way that helped me start to think about convenience at a level of analysis that doesn’t feel disingenuous. 

“Especially in high-level fora such as COP26, it might be difficult to shift the narrative away from what the authors call a “dangerous mix of ‘green economy’ and techno-solutionism.” “

Some themes and remarks that surfaced in our discussion: 

When it comes to the environment, Big Tech companies are eager to centre themselves in policy-setting debates.

This article highlighted how tech companies have already positioned themselves as having useful tools to help solve the climate crisis, sweeping under the rug the ways they are exacerbating environmental destruction. As brought up in our discussion, this feels reminiscent of tobacco companies’ roles in shaping narratives around the risk of lung cancer. Especially in high-level fora such as COP26, it might be difficult to shift the narrative away from what the authors call a “dangerous mix of ‘green economy’ and techno-solutionism.” 

Solidarity with local resistance reminds us to avoid consumer/market-centric framing.

So how might MCTD work to address the gap between policy discussions and tangible justice for impacted communities? We discussed the importance of amplifying—and not tokenizing—voices in movement, recognizing many who have been doing this work for years.

There’s a connection to be made to the twin logics of extraction and abstraction (as highlighted in Kate Crawford’s Atlas of AI). The relationship between technology and the environment is easily abstracted to technocratic language or boiled down to carbon footprint. This abstraction eschews an explicitly anti-accumulation, structural analysis, and in turn makes it easier for tech companies to position themselves as “green” solutioneers.

We should be in solidarity with real-time resistance and reject framing issues in ways that suggest:

1) the only relevant harms are consumer harms

2) the only relevant solutions are market solutions

3) everything is consumable and replaceable.

As far as tactics for socio-environmental justice go, planting a tree for every square mile of land destroyed leaves a lot to be desired. And as Varon and Nobrega remind us in this article, we should be thinking about the human, social, and environmental costs of environmental destruction as linked.

We also talked about the relationship between environmental destruction and the destruction of the commons: while there were some reservations around the concept of the commons, folks discussed the emancipatory potential of bienes comunes in challenging companies’ privatization and ownership of (often unceded) land. 

We need to look beyond “effectiveness” and remember structures of power.

How do we avoid the “racket” of sustainable development and green tech?

At one level, we need to push back on the claim that Big Tech can effectively parachute in and solve problems of environmental injustice. But whether or not a tech company’s proposed solutions do what they promise, we should remember that the consolidation of power in these companies is the broader context in which this is taking place.

Drawing from insights around online advertising ecosystems, we discussed how a lack of transparency can make it difficult to hold power to account, especially in terms of regulation. Nevertheless, we emphasized that whether or not a company’s tech solution works is incidental to the power the company has: rather, it’s about how Big Tech companies have consolidated and restructured capacity and centered themselves infrastructurally.

Convenience is costly. We need to be asking why, and for whom.

When we think about convenience, it’s worth remembering to question what is convenient for companies, for workers, and for frontline communities—we should think beyond convenience as ascribed only to the individual consumer. Analyses that treat people as totally separate individuals forego possibilities for power through collective action. 

Have a different perspective to add? There’s still time to submit your provocation to our Technology & the Environment Workshop before the May 15 deadline!

Read our call for provocations (no set format; we just want bold questions) here

Worried about data overload or AI overlords? Here’s how the CDH Social Data School can help

By Anne Alexander

Ahead of the CDH Social Data School application Q&A on May 4, Dr Anne Alexander, Director of Learning at Cambridge Digital Humanities (CDH), explains how the programme provides the digital research tools necessary for the data-driven world.

The world we live in has long been shaped by the proliferation of data – companies, governments and even our fellow citizens all collect and create data about us every day of our lives.

Much of our communication is relayed digitally; the buildings we live in and the urban spaces we pass through have been turned into sensors; we work, play and even sleep with our digital devices. Particularly over the past year, as the pandemic has dramatically reduced in-person interactions for many, the data overload has come to seem overwhelming.

The CDH Social Data School (June 16-29), which Cambridge Digital Humanities is organising in collaboration with the Minderoo Centre for Technology and Democracy, is aimed at people working with data in the media, NGOs, civil society organisations and education who want to equip themselves with new skills in designing and carrying out digital research projects, but who don’t enjoy easy access to education in data collection, management and analysis.

We want to make the methods of inquiry and the technical skills we teach to students and staff at the University of Cambridge available to a much wider audience.

This year’s CDH Social Data School will include modules exploring the ethical and societal implications of new applications in Machine Learning, with a specific focus on the problems of structural injustice which permeate the computer vision techniques underpinning technologies such as facial recognition and image-based demographic profiling. 

We are keen to hear from participants whose work supports public interest journalism, human rights advocacy, trade unionism and campaigns for social justice, environmental sustainability and the decolonisation of education. 

Although criticism of the deployment of these technologies is now much more widespread than in the past, it often focuses on the problems with specific use cases rather than more general principles.

In the CDH Social Data School we will take a “bottom-up” approach by providing an accessible introduction to the technical fundamentals of machine learning systems, in order to equip participants with a better understanding of what can (and usually does) go wrong when such systems are deployed in wider society. 

We will also engage with these ideas through an experimental approach to learning, giving participants access to easy-to-use tools and methods allowing them to pose the questions which are most relevant to their own work. 

Participants are not expected to have any prior knowledge of programming to take part – familiarity with working with basic office tools such as spreadsheets will be helpful. We will be using free or open source software to reduce barriers to participation. 

We are particularly interested in applications from participants from countries, communities and groups which suffer from under-resourcing, marginalization and discrimination.

The CDH Social Data School will run online from June 16-29.

Apply now for the CDH Social Data School 2021

Please join us for a Q&A session with the teaching team:

Tuesday 4 May, 2–2.45pm BST

Registration essential: Sign up here

Read more on the background and apply for your place at the School here.

In Review: Is more state ownership the panacea that will save us from the big tech giants?

By Julia Rone

Living in a world with an increasingly uncontrolled accumulation of power by big tech, what alternatives are there to privately owned enterprises that could ensure the tech sector better serves democratic society? Julia Rone reviews Andrew Cumbers’ book ‘Reclaiming Public Ownership: Making Space for Economic Democracy’ and starts a conversation on how to apply his writing to the tech sector.

Every discussion we’ve had so far on regulating tech giants ends up with a debate about whether regulation (be it anti-trust, regulating ‘recommending’ algorithms, or treating big tech as public utilities) is enough.

As a colleague smartly noted last time, we have reduced our expectations of the state to a form of (light-touch) regulation that takes place only in case markets fail. But as Mariana Mazzucato has famously shown in her spectacular book “The Value of Everything”, “the state” has in fact funded the fundamental science and tech development behind not only the Internet but also the technologies used in purportedly private companies’ successes such as the iPhone. The state has been a key driver of innovation rather than some clumsy behemoth lagging behind technology and poking its nose into people’s business.

The sad thing, of course, is that the value created with public funding has been subsequently privatized/appropriated by private companies – not only in monetary terms but also in symbolic terms. I’ve never had random strangers at parties telling me about publicly funded researchers, yet I have endured hours of men (it’s usually men) praising Elon Musk and Steve Jobs.

Now, we might think that this “forgetting” of the role of the state is innocent, a childish fascination with mythical entrepreneurial figures. But that’s not the case. The bad-mouthing of the state we see in the tech industry is part of a much broader trend (neoliberalism?) of framing the state as incompetent, wasteful, bureaucratic and incapable of innovation.

This is why, when, as a reaction to the 2008 economic crisis, the British government nationalized (fully or partially) large parts of the UK’s retail banking sector, it was quick to appoint private executives, often from the very banks that had caused the crisis to begin with.

What nationalization amounted to, in this case, was the public sector absorbing bad debts to allow private capital to restructure and start accumulating profits again. Andrew Cumbers begins his brilliant book on public ownership with this example and dedicates the rest of the book to 1) explaining why, even amidst the biggest crisis of capitalism, private executives were considered more competent; and 2) exploring what alternatives there are to privately owned enterprises.

While the neoliberal bad-mouthing of the state and its reduction to light-touch regulator have been undoubtedly super influential, the question I would like to bring to the table, drawing extensively on Cumbers, is: should we uncritically rehabilitate the state? Is more state the panacea that will save us from the big bad tech giants? Or should we try to think of new forms of ownership and democratic management, in our case, of digital platforms? In the following paragraphs I will present Cumbers’ book in detail (maybe too much detail but it’s really a great book) before returning to these key questions at the end.

Historic experiences with nationalization in the UK – “neither socialization nor modernization”

What makes Cumbers’ book so brilliant is that he engages in depth with existing theories, empirical examples and critiques of public ownership but then he moves beyond this purely analytical exercise of discussing ‘who is right and who is wrong’.

Instead, he puts forward an alternative – a vision of public ownership that goes beyond the state, embraces diversity and heterodoxy, and puts at its center the core principle of economic democracy.

To begin with, Cumbers argues that nationalization and state planning have such a bad name partially because of the way they were instituted in practice. Talking about the British post-1945 experience with nationalization, Cumbers argues it was “neither socialization, nor modernization” (p. 14). More radical agendas never penetrated the upper echelons of the Labour establishment: referring to the nationalization programme as “socialization” was mainly PR, and the government “was deeply suspicious of anything remotely ‘syndicalist’ that might provide more grass-roots or shop-floor representation and influence on the councils of nationalized industries” (p. 15).

Management was top-down and the large bureaucratic structures produced “an alienating environment for the average worker”, creating a “significant democratic deficit” in industries that were owned and managed supposedly on behalf of the people. Nationalization in the UK played out as centralization, significantly weakening the power and authority of local governments vis-à-vis the national government (p. 21).

What is more, “nationalized industries, in their varying ways, provided large and continuous subsidies to the private sector, while being severely constrained in their own operations!” (p. 20). In the socialist USSR, nationalization was similarly not a synonym for economic democracy, with workers’ councils in Yugoslavia being the exception rather than the common practice. So nationalization in these and other cases analysed by Cumbers basically meant making the state the capitalist-in-chief. Now, this turned out not to be particularly efficient (even though there is a big difference between industries in this respect). There were plenty of thinkers eager to explain why this was the case.

Hayek’s critique of nationalization and central planning

The centralization of economic power and decision-making, according to thinkers such as Hayek, led to the crushing of individual freedoms and democracy. Central planning, Hayek and other critics emphasized, furthermore creates several knowledge problems – how could central planners “have all the knowledge needed about the individualized demands of millions of consumers in advanced economies?” (p. 64). What is more, knowledge is dispersed in society and not easily appropriated by central managers, especially considering that economies are dynamic and evolutionary, and therefore ever-changing and unpredictable (p. 65). According to Hayek, “markets and private ownership can solve such knowledge problems, because they involve dispersed decision-making and experimentation […] It is precisely the anarchy of market order, which is the key both to innovation and to the preservation of more democratic societies” (p. 64). So far, so good. But we’ve all heard this before – socialism failed because it was too centralized and incapable of innovating.

The market is the solution to all evils, seriously?

What makes the book “Reclaiming Public Ownership” interesting is that Cumbers doesn’t stop here. Instead, he moves the argument forward by, first of all, explaining why Hayek’s solution is not as appealing as it seems. To begin with, he notes that some spheres of life should simply not be marketized – think of romantic love, health or education. The absurdity of the marketization of education in contexts such as the US and the UK becomes painfully obvious when compared to the fully free public education in countries such as Austria. Competition and profit are not, and should not be, the only drivers of economic decision-making (p. 80):

“It is precisely the incursion and spread of ‘free market values’ and norms – through heightened commodification processes – into all areas of economic life that needs to be resisted and rolled back if wider social goals, such as environmental sustainability, decent and ‘choiceworthy’ lives and social justice, are to be achieved” (p. 75).

But beyond such normative discussions, the binaries markets/democracy and planning/authoritarianism just don’t hold empirically. Market economies exist both under democratic and authoritarian regimes, as do forms of central planning (p. 76). Just think of how much central planning goes on in private corporations such as Amazon.

Capitalist exploitation rests upon three pillars: “the employment relation, private property and the market” (p. 77).

Really existing socialism and the nationalization attempts in the UK achieved state ownership, but both were associated with highly unequal, top-down managerial decision-making and power structures. They were also inefficient.

Markets purportedly solve efficiency and innovation problems, but they also come with horrible employment relations (think again of Amazon workers peeing in bottles, or of the workplace bullying seen in every single TV series about the US corporate world). What is more, markets cannot and should not govern every aspect of human relations. And finally, they often lead to situations of mass concentration of private property, in which a few own a lot and the majority owns nothing but their ability and time to work.

So rather than replacing the state with the market, or vice-versa, what we need to do is to think of alternatives that address all three pillars of exploitation – “the employment relation, private property and the market”.

The alternatives

When thinking of alternatives, Cumbers is careful to urge us not to search for “one-size-fits-all” solutions or an all-encompassing model or vision (p. 81). One of the most interesting authors quoted in the book is the associational socialist Otto Neurath, who “used the phrase ‘pseudo-rationalism’ to refer to scientists and philosophers who believed that there is always a possibility of discovering one theory or solution to any problem through rational inquiry” (p. 79). The real world is messy, solutions are always provisional, and there are many diverse cultural traditions in the world that should be explored.

Going back to the three pillars (the employment relation, private property and the market), at the core of Cumbers’ alternative vision is the idea that 1) not only should we go beyond marketizing everything, but also 2) workers should be able to take part in decision-making about companies – that is, employment relations should be democratic and participative – and 3) when it comes to property, there is a strong case to be made for “reclaiming public ownership”, conceived much more broadly than simply state ownership, i.e. nationalization.

Forms of ownership and the principles behind them

Cumbers puts forward at least six different forms of ownership, all of which can and should exist together: full state ownership, partial state ownership, local or municipal ownership, employee-owned firms, producer cooperatives, and consumer cooperatives (p. 165). In promoting these diverse forms of ownership, Cumbers is guided by several key principles, among them:

  • taking social justice as class justice: that is, going beyond redistributive justice, i.e. beyond distributing the surplus – or profit – that comes from the labour process through income taxation (not that we are scoring particularly well in this respect currently, anyway…). What is needed instead is to challenge the way the owners of capital control the labour process or “the wider decisions that make and shape economies” (p. 146).
  • a commitment to distributed economic power, but not necessarily in decentralized forms: combining diverse forms of public ownership should allow “different groups of citizens to have some level of participation and a stake in the economy, compared to the situation at present, where a small minority globally (the 1 per cent!) hold most of the key decision-making power” (p. 150). In short, there should be different institutional arrangements that “foster distributed and dispersed powers of economic decision-making against tendencies towards hierarchy and centralization” (p. 150).
  • tolerance, tradition and heterodox thinking: traditional forms of collective ownership can in fact be crucial for articulating alternative ownership models. I am thinking here of indigenous communities fighting against corporations “patenting” uses of plants, etc. Another great example, which I encountered not in Cumbers’ book but in Xiaowei Wang’s Blockchain Chicken Farm, is Chinese Township and Village Enterprises (TVEs), a large share of which have been owned collectively and about which I will write soon. TVEs were among the key protagonists of China’s explosive growth, outperforming state-owned enterprises.

Not a utopia

The book then moves on from these more abstract principles to a situated analysis of different experiments with diverse forms of public ownership. Rather than being some utopian, never-tried-out experiment, most of these forms of ownership are already present. Municipal-cooperative partnerships, for example, have been crucial for the boom of green energy in Denmark (Chapter 9). The state-owned Norwegian oil company has had a long period of intense parliamentary debate on its key decisions (Chapter 8). (This has since changed, showing that power battles over ownership and decision-making are ongoing and never settled completely.)

Finally, following strong contestation of and opposition to water privatization in Latin America, multinational corporations have retreated, with varying implications for ownership – in Bolivia, Venezuela and Uruguay operations have returned to the public sector, while in Brazil and Chile a mix of private local and foreign capital remains (Chapter 5). There have also been attempts to return water companies to municipal control – in Argentina, the Aguas Bonaerenses (ABSA) public organization was created as a public-private partnership between the local authority and a workers’ cooperative (p. 113).

So rather than reinventing the wheel (or, in this case, non-privatized water), we can learn from a number of best practices and try to think about how different forms of public ownership can transform and democratize different types of economic activity, depending also on the scale of those activities: finance, utility industries, public transportation, public services, consumer products, private services and consumer services clearly all operate at different scales.

Private ownership might actually be the best option for a small local hairdresser; state, local cooperative or municipal ownership for the management of water; and state or municipal ownership for the management of railways or gas, etc. (p. 168).

Rather than a one-size-fits-all solution (“nationalize everything!”), thinking of alternatives should be open to combining different forms of ownership at different levels, with the ultimate goal of increasing participation – not of everyone in everything, but of everyone at least in some respects and in what matters to them.

So what?

In short, Cumbers’ book is really interesting. Despite the long quotes, I don’t think I have done it justice, so just read it (there is also some fascinating critique of the concept of the commons inside). But why on Earth am I writing about this book on a blog for our very techie group?

Well, because I think that when we criticize regulation as too light-touch and want to rehabilitate the state, we should not forget that state ownership (or entrepreneurship) is not always the panacea. To be honest, I have no idea how exactly the argument in Cumbers’ book can be relevant for finding alternatives to the big tech giants.

In a previous post, I argued that maybe what we need instead of Facebook are public networks along national lines, with states owning the data of their citizens and using it for research and machine learning, instead of private companies doing so.

But could we instead think of citizens collectively owning their data? Or having citizen cooperatives managing interoperable networks?

Furthermore, what type of public ownership might be an adequate model for an alternative to Amazon? These are not easy questions. And I would love to discuss them with you.

The reason I have reviewed this book so extensively is that I think it might be relevant; it remains for us to explore how exactly. One thing I am certain of is that few things are worse than the current ownership model of big tech, with a few private corporations owning and exploiting all our data.

Going back to the three pillars outlined by Cumbers, when we think of how to reform big tech or find alternatives to it, we need to think of how to 1) change employment relations within tech firms, allowing more participation in decision-making; 2) change property relations – who owns the companies that own us? What forms of ownership might be adequate?; and 3) change the marketization of ourselves and our data – is this reversible in a world where we rent even our homes to strangers?

Each one of these three aspects should be considered and can be changed.

We just rarely frame the debate in these terms, and even more rarely think of all three aspects together. But this is precisely what we should do.

Bridging digital divides: We are proud partners of the CDH Social Data School 2021

By Hugo Leal

In the data-driven age, we believe it is our duty to help bridge some of the digital divides that plague our societies. The Minderoo Centre for Technology and Democracy is proud to partner on the CDH Social Data School 2021.

We are pleased to announce that applications are open to the CDH Social Data School 2021, taking place entirely online from 16-29 June.

Originally conceived by Cambridge Digital Humanities, this year’s event is organised in association with us, the Minderoo Centre for Technology and Democracy. We have co-designed, and will deliver together, a new version of an already outstanding initiative.

Two of our goals at the Minderoo Centre for Technology and Democracy are to enhance public understanding of digital technologies and build journalistic capacity to interrogate big data and Big Tech.

These goals align neatly with the objectives of a Data School born out of the need to democratise the exploration of digital methods and push back against abusive practices of data appropriation and exploitation by internet giants.

“In the data-driven age, we believe it is our duty to help bridge some of the digital divides that plague our societies.”

I was part of the team that originally put together the Data School pilot, back in 2019, and it was clear to us then that academia was, once again, falling short on its mission and failing the public it should serve.

In the data-driven age, we believe it is our duty to help bridge some of the digital divides that plague our societies.

The yawning skills gap between those who can and those who cannot understand key aspects of digital data manipulation and analysis is one of the digital divides that must urgently be closed.

For that purpose, the CDH Social Data School utilises in-house expertise in digital methods and provides hands-on training and knowledge exchange across sectors, professions and disciplines.

Who can apply?

We invite, in particular, people and organisations whose role is to form and inform the public – such as journalists, watchdogs and NGOs, academics, and civil servants – to join us.

The CDH Social Data School also strives to address, even if modestly, other digital divides that fall along the traditional class, gender and racial fault-lines.

Although open to all, the selection procedure prioritises individuals from organisations whose access to digital methods training is limited or non-existent due to insufficient human or financial resources, especially those located in the Global South.

“The Social Data School is a venue for that dialogue and an avenue to foster the development of better technical, legal and ethical practices in digital methods research.”

Furthermore, we particularly welcome applications from women and from Black and minority ethnic candidates, as they have historically been under-represented in the technology and data science sectors.

While this will do little to redress centuries of colonial, affluent, white, male-fuelled inequalities, we at the Minderoo Centre for Technology and Democracy believe that academic institutions, widely perceived as bastions of elitism, have a special responsibility to adopt inclusive practices and adapt our events to pressing public needs.

If academics want to remain relevant and have proper impact beyond obtuse journal impact factors, we must remove the barriers standing in the way of cross-sectorial and interdisciplinary dialogue.

The CDH Social Data School is a venue for that dialogue and an avenue to foster the development of better technical, legal and ethical practices in digital methods research. Both our delivery format and our programme intend to facilitate a conversation among professions, disciplines and methods.

It is less about having experts looking into a pool of data than about inviting participants to share their knowledge and experiences within the context of a guided immersion into digital methods.

Whenever someone asks me to describe the CDH Social Data School format, the term “data stroll” comes to mind.

It bears some resemblance to what our colleague Tommaso Venturini calls a “data sprint” – an intensive code- and data-driven gathering of people with different skill sets focused on a specific research question – but with a more critical and even contemplative nature.

For starters, the pace is slower, as we intend to reflect critically upon problems arising from data rather than trying to solve them in a week.

This peripatetic wandering confers on the Data School its “strolling” colours. It caters more to “adventurous beginners” willing to get their hands dirty than to data whizzes obsessing over data cleaning.

“It is less about having experts looking into a pool of data than about inviting participants to share their knowledge and experiences within the context of a guided immersion into digital methods.”

For these reasons, having people from diverse backgrounds is not just a matter of desegregating or decolonising curricula, but also an opportunity to confront and learn from different regional, disciplinary, cultural or gender-informed perspectives on the widespread practices of data surveillance.

In the context of the Social Data School, democratising access to digital methods is also a call to reclaim our data and demonstrate that data appropriated for private profit can be reappropriated for the common good.

This year’s programme is very rich and ambitious, covering topics ranging from data protection and surveillance to machine learning.

We will also try to make (some) sense of the online disinformation nonsense.

If you are a journalist who has the interest but lacks the tools to investigate the spread of misinformation, work for an NGO that wants to monitor online abuses, are a watchdog trying to assess the impact of machine learning, a civil servant working to improve the health of our online spaces, or an academic willing but hesitant to experiment with digital methods, the Social Data School was designed for you.

Apply now

In Review: Democracy, law and controlling tech platforms

By John Naughton

Notes on a discussion of two readings

  1. Paul Nemitz, “Constitutional democracy and technology in the age of artificial intelligence”, Philosophical Transactions of the Royal Society A, 15 October 2018. https://doi.org/10.1098/rsta.2018.0089
  2. Daniel Hanley, “How Antitrust Lost its Bite”, Slate, 6 April 2021. https://tinyurl.com/2ht4h8wf

I had proposed these readings because (a) Nemitz’s article provided a vigorous argument for resisting the ‘ethics-theatre’ currently being orchestrated by the tech industry as a pre-emptive strike against regulation by law; and (b) the Hanley article argued the need for firm rules in antitrust legislation rather than the latitude currently offered to US judges by the so-called “rule of reason”.

Most of the discussion revolved around the Nemitz article. Here are my notes of the conversation, using the Chatham House Rule as a reporting principle.

  • Nemitz’s assertion that “The Internet and its failures have thrived on a culture of lawlessness and irresponsibility” was challenged as an “un-nuanced and uncritical view of how law operates in the platform economy”. The point was that platform companies do of course ignore and evade the law as and when it suits them, but at a corporate level they also rely on it and use it as both ‘a sword and a shield’; as a result, law has played a major role in structuring the internet that now exists and producing the dominant platform companies we have today, and has been leveraged very successfully to their advantage. Even the egregious abuse of personal data (which may seem closest to being “lawless”) largely still runs within the law’s overly permissive framework. Where it doesn’t, it generally tries to evade the law by skirting around gaps created within the law, so even this seemingly extra-legal processing is itself shaped by the law (and cannot therefore be “lawless”). Any respect for the law that these companies profess is indeed disingenuous, but describing the internet as a “lawless” space – as Nemitz does – misses a huge part of the dynamic that got us here, and is a real problem if we’re going to talk about the potential role of law in getting us out. Legal reform is needed, but if it is going to work then we have to be aware of and account for these things.
  • This critique stemmed from the view that law is both produced by society and in turn reproduces society, and in that sense always functions essentially as an instrument of power — so it has historically been (and remains) a tool of dominance, of hierarchy, of exclusion and marginalisation, of capital and of colonialism. In that sense, the embryonic Silicon Valley giants slotted neatly into that paradigm. And so, could Nemitz’s insistence on the rule of law — without a critical understanding of what that actually means — itself be a problem?

“They [tech companies] employ the law when it suits them and do so very strategically – as both a ‘sword’ and a ‘shield’ – and that’s played a major role in getting the platform ecosystem to where it is now.”

  • On the one hand, laws are the basic tools that liberal democracies have available for bringing companies under democratic (i.e. accountable) control. On the other hand, large companies have always been adept (and, in liberal democracies, very successful) at using the law to further their interests and cement their power.
  • This point is particularly relevant to tech companies. They’ve used law to bring users within their terms of service and thereby to hold on to assets (e.g. exabytes of user data) that they probably wouldn’t otherwise have been able to keep. They use law to enable the pretence that click-through EULAs are, in fact, contracts. So they employ the law when it suits them and do so very strategically – as both a ‘sword’ and a ‘shield’ – and that’s played a major role in getting the platform ecosystem to where it is now.
  • Also, law plays a big role in driving and shaping technological development. Technologies don’t emerge in a vacuum; they’re a product of their context, and law is a major part of that context. So the platform business models and what’s happening on the internet aren’t outside of the law; they’re constructed through, and depend upon, it. It is therefore misleading when people argue (like Nemitz?) that we need to use law to change things – as if the law weren’t there already, partially enabling things that are societally damaging. So unless we properly understand the role of law in getting us to our current problematique, talking about how law can help us is like talking about using a tool to fix a problem without realising that the tool is itself part of the problem.

“It’s the primacy of democracy, not of law, that’s crucial.”

  • There was quite a lot of critical discussion of the GDPR on two fronts — its ‘neoliberal’ emphasis on individual rights; and things that are missing from it. Those omissions and gaps are not necessarily mistakes; they may be the result of political choices.
  • One question is whether there is a deficit of law around who owns property in the cloud. If you upload a photo to Facebook or wherever, it is unclear whether you have property rights over it or whether the cloud-computing provider does. The general consensus seems to be that this is a tricky question! (Questions about who owns your data generally are.)
  • Even if laws exist, enforcement looks like a serious problem. Sometimes legal coercion of companies is necessary but difficult. And because of the ‘placelessness’ of the internet, it seems possible that a corporation or an entity could operate in a place where there is no nexus through which to coerce it. Years ago, Tim Wu and Jack Goldsmith’s book recounted how Yahoo discovered that it couldn’t just do whatever it wanted in France, because it had assets in that jurisdiction and France seized them. Would that be the case with, say, Facebook, now? (Just think of why all of the tech giants have their European HQs in Ireland.)
  • It’s the primacy of democracy, not of law, that’s crucial. If the main argument of the Nemitz paper is interpreted as the view that law will solve our problems, that’s untenable. But if we take the main argument to be that we need to democratically discuss what the laws are, then we all agree. (But isn’t that just vacuous motherhood and apple pie?)
  • More on GDPR… it sets up a legal framework organised around the consenting person, and that is a good thing most people can agree on. But the way that GDPR is constructed is extremely individualistic. For example, it disempowers data subjects even in the name of giving them rights, because it individualises them. So even the way it is constructed actually goes some way towards undermining its good effects. It is based on the assumption that if we give people rights then everything will be fine. (Shades of the so-called “Right to be Forgotten”.)

As for the much-criticised GDPR, one could see it as an example of ‘trickle-down’ regulation, in that GDPR has become a kind of gold standard for other jurisdictions.

  • Why hasn’t academic law been a more critical discipline in these areas? The answer seems to be that legal academia (at least in the UK, with some honourable exceptions) is exceptionally uncritical of tech, and any kind of critical thinking is relatively marginalised within the discipline compared with other social sciences. Also, most students want to go into legal practice, so legal teaching and scholarship tend to be closely tied to law as a profession and, accordingly, the academy tends to be oriented around ‘producing’ practising lawyers.
  • There was some dissent from the tenor of the preceding discussion about the centrality of law, and especially about the improbability of overturning such a deeply embedded cognitive and professional system. This has echoes of a venerable strand in political thinking which says that in order to change anything you have to change everything, and that it is worse to change a little bit than to change everything – which means nothing actually changes. This is the doctrine that it is quite impossible to do any good at all unless you do the ultimate good, which is to change everything (which meant capitalism and colonialism and original sin, basically!). On the other hand, there is pragmatic work – making tweaks and adjustments – which, though limited in scope, might be beneficial and appeal to liberal reformers (and is correspondingly disdained by lofty adherents to the Big Picture).
  • There were some interesting perspectives based on the Hanley article. Conversations with people across disciplines show that technologists tend to suggest a technical solution for everything (solutionism rules OK?), while lawyers view the law as a solution for everything. But discussions with political scientists and sociologists mostly involve “fishing for ideas”, which is a feature, not a bug, because it suggests that minds are not set in silos – yet. One of the problems with the current discourse – and with these two articles – is that the law currently seems to be filling the political void. And the discourse seems to reflect public approval of the judicial approach compared with the trade-offs implicit in Congress. But the Slate article shows the pernicious influence, or even interference, of an over-politicised judiciary in politics and policy enforcement. (The influence of Robert Bork’s 1978 book and the Chicago School is still astonishing to contemplate.)
  • The Slate piece seems to suffer from a kind of ‘neocolonial governance syndrome’ – the West and the Rest. We all know Section 230 by heart, and now it’s the “rule of reason” and Bork’s consumer welfare criterion. It is important to understand the US legal and political context. But we should also understand: the active role of the US administration; what happened recently in Australia (where the government intervened, both through diplomatic means and directly, on behalf of the Facebook platform); and in Ireland (where the government went to the European Court to oppose a ruling that Apple had underpaid tax to the tune of 13 billion euros). So the obsession with the US doesn’t say much about the rest of the world’s capacity to intervene and dictate the rules of the game. And yet China, India and Turkey have been busy in this space recently.
  • And as for the much-criticised GDPR, one could see it as an example of ‘trickle-down’ regulation, in that GDPR has become a kind of gold standard for other jurisdictions. Something like 12 countries have adopted GDPR-like legislation, including many countries in Latin America (such as Chile and Brazil), as well as South Africa, Japan, Canada and so on.