In Review: The Cloud and the Ground

By Julia Rone

In this literature review, Julia Rone outlines the key trends and logics behind the boom in data centre construction across the globe.

Hamlet: Do you see yonder cloud that’s almost in shape of a camel?

Polonius: By th’ mass, and ‘tis like a camel indeed.

Hamlet: Methinks it is like a weasel.

Polonius: It is backed like a weasel.

Hamlet: Or like a whale?

Polonius: Very like a whale.

The cloud – this fundamental building block of digital capitalism – has so far been defined mainly by the PR of big tech companies.

The very metaphor of the “cloud” presupposes an ethereal, supposedly immaterial collection of bits gliding in the sky, safely removed from the corrupt organic and inorganic matter that surrounds us. This, of course, couldn’t be further from the truth.

But even when they acknowledge the materiality of the “cloud” and the way it is grounded in a very physical infrastructure of cables, data centres, etc., tech giants still present it in a neat and glamorous way. Data centres, for example, provide carefully curated tours and are presented as sites of harmoniously humming servers, surrounded by wild forests and sea. Some data centres even boast saunas.

Instead of blindly accepting the PR of tech companies and seeing “the cloud” as whatever they present it to be (much as Polonius accepts Hamlet’s interpretations of the cloud), we should be attuned to the multiplicity of existing perspectives on “the cloud”, coming from researchers, rural and urban communities, and environmentalists, among others.

In this lit review, I outline the key trends and logics behind the boom in data centre construction across the globe. I base the discussion on several papers from two special issues. The first one is The Nature of Data Centres, edited by Mél Hogan and Asta Vonderau for Culture Machine. The second: Location and Dislocation: Global Geographies of Digital Data, edited by Alix Johnson and Mél Hogan for Imaginations: Journal of Cross-Cultural Image Studies. I really recommend reading both issues – the contributions read like short stories and go straight to the core of the most pressing political economy problems of our times.

Credit: Zbynek Burival for Unsplash

The “nature” of data centres

Data centres, as key units of the cloud, are very material: noisy, hot, giant storage boxes containing thousands of servers, they occupy factories from the past or spring up on farmland all over the globe. Data centres are grounded in particular locations and depend on a number of “natural” factors for their work, including temperature, humidity, or air pollution. To function, data centres not only use up electricity (produced by burning coal or using wind energy, for example); they also employ technologies that circulate air and water to cool down, emitting heat as a waste product.

But data centres are not only assemblages of technology and nature. Their very appearance, endurance and disappearance are defined by complex institutional and non-institutional social relations: regions and countries compete with each other to cut taxes for tech corporations that promise to bring jobs and development. Some states (e.g. Scandinavian states) are preferred over others because of their stable institutions and political “climate”.

No blank slate

To illustrate, the fact that data centres are built in Sweden’s Norrbotten region has a lot to do with the “nature” of the region, conceptualized reductively by tech companies as cheap energy, cheap water, cheap land and green imagery (Levenda and Mahmoudi, 2019, 2). But it also has a lot to do with the fact that Norrbotten is filled with the “ruins of infrastructural promises” (Vonderau, 2019, 3) – “a scarcely populated and resource-rich region, historically inhabited by native Sami people, the region was for a long-time regarded as no-man’s land” (ibid). Not only is Norrbotten sparsely populated, but it also has an “extremely stable and redundant electricity grid which was originally designed for […] ‘old’ industries” (ibid, 7).

A similar logic of operation could be discerned in the establishment of a data centre in the Midway Technology Centre in Chicago, where the Schulze Bakery was repurposed as a data centre (Pickren, 2017). Pickren was told in an interview with a developer working on the Schulze redevelopment project that “because the surrounding area had been deindustrialized, and because a large public housing project, the Robert Taylor Homes had closed down in recent decades, the nearby power substations actually had plenty of idle capacity to meet the new data centre needs” (Pickren, 2017). As Pickren observes, “there is no blank slate upon which the world of data simply emerges” (ibid.). There are multiple “continuities between an (always temporary) industrial period and the (similarly temporary) ascendancy of digital capitalism” (ibid).

Extraction and the third wave of urbanization

What the examples of Norrbotten in Sweden and the redevelopment of Chicago by the data industry show is that, despite carefully constructed PR around “being close to nature” and “being green”, decisions on data centre construction actually depend on the availability of electricity – for which depopulation is only a plus. Instead of “untouched” regions, what companies often go for are abandoned or sparsely populated regions with infrastructure left behind. Data centres use resources – industrial capacity or green energy – that are already there, left over from previous booms and busts of capitalism or from conscious state investment that is now used to the benefit of private companies.

“Urban interactions are increasingly mediated by tech and leave a digital trace – from paying your Uber to ordering a latte, from booking a restaurant to finding a date for the night.”

Both urban and rural communities are in fact all embedded within a common process of a “third wave of urbanization” that goes hand in hand with an increase in the commodification and extraction of both data and “natural” resources (Levenda and Mahmoudi, 2019). What this means is that urban interactions are increasingly mediated by tech and leave a digital trace – from paying your Uber to ordering a latte, from booking a restaurant to finding a date for the night.

Credit: Priscilla Du Preez for Unsplash

This urban data is then stored and analysed in predominantly rural settings: “[T]he restructuring of Seattle leads to agglomerations in urban data production, which rely on rural data storage and analysis” (ibid, 9). Put simply, “[J]ust as Facebook and Google use rural Oregon for their ‘natural’ resources, they use cities and agglomerations of ‘users’ to extract data”.

Ultimately, data centres manifest themselves as assemblages for the extraction of value from both people and nature.

As if in a perverse rendition of Captain Planet, all elements – water, air, earth, human beings and technology – join forces so that data centres can function and you can upload a cat photo on Facebook. In this real-life, data-centre version of Captain Planet, however, all elements are used up, extracted, exhausted. Water is polluted.

People live with the humming noise of thousands of servers.

Taxes are not collected and therefore not invested in communities that are already deprived.

What is more, data centres often arrive in rural regions with the promise to create jobs and drive development. But as numerous authors have shown, the actual jobs created by data centres are fewer than originally promised, with most of them being precarious subcontracting work (Mayer, 2019). As Pickren notes, “If the data centre is the ‘factory of the 21st century,’ whither the working class?”

Abstraction

Data centres do create jobs but predominantly in urban areas. “[W]here jobs are created, where they are destroyed and who is affected are socially and geographically uneven” (Pickren, 2017). Where value is extracted from and where value is allocated rarely coincide.

And if, from a bird’s-eye view, what matters is the total number of jobs created, what matters in Sweden’s Norrbotten or the Netherlands’ Groningen, where the data centres are actually built, is how many jobs are created there and, furthermore, what types of jobs (Mayer, 2019). In the same way, while from an abstract point of view tech companies such as Microsoft might be “carbon neutral”, this does not change their questionable practices and dependence on coal in particular places.

The Introduction to the “Location and Dislocation” Special Issue quotes a classic formulation by Yi-Fu Tuan, according to whom “place is space made meaningful” (Johnson and Hogan, 2017, 4).

“Whenever we hear big tech’s grandiose pledges of carbon neutrality and reducing carbon emissions, we need to understand that these companies are not simply “green-washing” but are also approaching the problem of global warming “in the abstract””.

One of the key issues with tech companies building data centres is the way they privilege space over place – an abstract logic of calculation and global flows over the very particular local relations of belonging and accountability.

In a great piece on “fungible forms of mediation in the cloud”, Pasek explores how big tech companies’ practice of buying renewable energy certificates does more harm than good, since it allows “data centre companies to symbolically negate their local impacts in coal-powered regions on papers, while still materially driving up local grid demand and thereby incentivizing the maintenance or expansion of fossil energy generation” (Pasek, 7).

The impact on local communities can be disastrous: “In communities located near power plants, disproportionately black, brown and low-income, this has direct consequences for public health, including greater rates of asthma and infant mortality” (ibid).

So whenever we hear big tech’s grandiose pledges of carbon neutrality and reducing carbon emissions, we need to understand that these companies are not simply “green-washing” but are also approaching the problem of global warming “in the abstract”, at the global level, paying little attention to their effect in any particular locality.
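To make the accounting logic behind this abstraction concrete, here is a minimal, purely illustrative sketch in Python. The figures, the data centre and the function names are all invented for illustration – nothing below comes from Pasek’s piece or the other papers reviewed here – but it shows how “market-based” reporting with renewable energy certificates can arrive at zero emissions on paper, while a “location-based” view of the very same facility, drawing power from a coal-heavy local grid, remains unchanged.

```python
# Purely illustrative: compare "location-based" emissions (what actually enters
# the local airshed) with "market-based" emissions (what gets reported after
# netting consumption against purchased renewable energy certificates, RECs).
# All numbers are hypothetical.

COAL_GRID_FACTOR = 0.9  # assumed tonnes of CO2 per MWh for a coal-heavy local grid


def location_based_emissions(consumption_mwh: float) -> float:
    """Emissions attributed to the electricity actually drawn from the local grid."""
    return consumption_mwh * COAL_GRID_FACTOR


def market_based_emissions(consumption_mwh: float, recs_mwh: float) -> float:
    """Emissions reported after 'netting out' consumption against RECs bought
    elsewhere, regardless of where the renewable electricity was generated."""
    uncovered_mwh = max(consumption_mwh - recs_mwh, 0.0)
    return uncovered_mwh * COAL_GRID_FACTOR


consumption = 100_000  # hypothetical annual MWh used by a data centre

print(location_based_emissions(consumption))          # 90000.0 tonnes hit the local region
print(market_based_emissions(consumption, 100_000))   # 0.0 tonnes "on paper"
```

The gap between those two numbers is precisely the space in which, as Pasek argues, local impacts can be “symbolically negated” while local grid demand keeps growing.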

As Pasek notes, this logic of abstraction subordinates the “urgencies of place” to the “logics of circulation”.

Unsurprisingly, it is precisely the places that have already lost the most from previous industrial transformations that suffer most during the current digital transformation.

Invisibility and Hypervisibility

What makes possible the extraction practices of tech companies is a mix of how little we know about them and how much we believe in their promise of doing good (or, well, at least not doing evil).

In her fascinating essay “The Second Coming: Google and Internet infrastructure”, Mayer (2019) explores the rumours around a new Google data centre in Groningen. Mayer shows how Google’s reputation as a leading company, combined with the total lack of concrete information about the new data centre, creates a mystical aura around the whole enterprise: “Google’s curation of aura harkens back to the early eras of Western sacred art, during which priests gave sacred objects their magical value by keeping them ‘invisible to the spectator’” (Mayer, 2019, 4).

Mayer contrasts a sleek Google PR video (with a lone windmill and blond girls looking at computer screens) with the reality brought about by a centre that offered only a few temporary subcontracting jobs. The narrative of regional growth presented by Google unfortunately turned out to be PR rather than a coherent development strategy.

Impermanence

Furthermore, in a fascinating essay on data centres as “impermanent infrastructures”, Velkova (2019) explores the temporality and impermanence of data centres, which can be moved or abandoned easily.

How could such impermanent structures provide regional development?

What is more, even if data centres do not move, they do reorganize global territories and connectivity speeds through the threat of moving: “data center companies are constantly reevaluating the economic profitability of particular locations in synchrony with server replacement cycles and new legislative frameworks that come into force. Should tax regulations, electricity prices, legislation or geopolitical dynamics shift, even a hyper-sized data center like Google’s in Finland or Facebook’s in Sweden could make a corresponding move to a place with more economically favourable conditions within three years” (Velkova, 2019, 5).

So data centres are, on the one hand, hypervisible through corporate PR. On the other hand, they are invisible to local communities, which are left guessing about construction permits, the conditions of the data centres’ arrival, and their impact on the environment and the economy.

But ultimately, and this is the crucial part, data centres are above all impermanent – they can come and go. Rather than being responsible to a particular locality, data centres are part of what Pasek called a “logic of global circulation”.

Holding each node accountable

Big tech’s logics of extraction, abstraction, invisibility, hypervisibility and impermanence are driving the current third wave of urbanization and unequal development under digital capitalism.

But it is possible to imagine another politics that would “hold each node accountable to the communities in which they are located” (Pasek, 9).

The papers from the two special issues I review here provide an exhaustive and inspiring overview of the “nature” and imaginaries of data centres.

Yet, with few exceptions (such as the work of Asta Vonderau), we know little about the politics of resistance to data centres and the local social movements that are appearing and demanding more democratic participation in decision making.

Would it be possible for us – citizens – to define what the cloud should look like? Not sure. But this is a crucial element of any project for democratizing digital sovereignty. And this is what I work on now.

Public networks instead of social networks?

We need state-owned, interoperable, democratically governed online public networks. From the people, for the people.

posted by Julia Rone

The conversation so far

The following comments on Trump being banned from Twitter and the removal of Parler from the Android and iOS app stores were, somewhat aptly, inspired by two threads on Twitter itself: the first by the British-Canadian blogger Cory Doctorow and the other by the Canadian scholar Blayne Haggart. The point of this post is to pick up the conversation where Doctorow and Haggart left it and to involve more people from our team. Ideally, nobody will be censored in the process :p

Doctorow insists that the big problem with Apple and Google removing Parler is not so much censorship – ultimately, different app stores can have different rules, and this should be the case – but rather the fact that there are no alternative app stores. Thus, the core of his argument is that the US needs to enforce anti-trust laws that would allow for fair competition between a number of competitors. The same argument can be extended to breaking up social media monopolists such as Facebook and Twitter. What we need is more competition.

Haggart attacks this argument in three ways:

First, he reminds us that “market regulation of the type that @doctorow wants requires perfect competition. This is unlikely to happen for a number of reasons (e.g, low consumer understanding of platform issues, tendency to natural monopoly)”. Thus, the most likely outcome becomes the establishment of “a few more corporate oligarchs”. This basically leaves the state as a key regulator – much to the disappointment of cyber-libertarians who have argued against state regulation for decades.

The problem is, and this is Haggart’s second key point, that “as a non-American, it’s beyond frustrating that this debate (like so many internet policy debates) basically amounts to Americans arguing with other Americans about how to run the world. Other countries need to assert their standing in this debate”. This point was also made years ago in Martin Hardie’s great paper “Foreigner in a free land”, in which he noticed how most debates about copyright law focused on the US. Even progressive people such as Larry Lessig built their whole argument on the basis of references to the US Constitution. But what about all of us – the poor souls from the rest of the world who don’t live in the US?

Of course, Facebook, Twitter, Alphabet, Amazon, etc. are all US tech companies. But they do operate globally. So even if the US state intervenes to regulate them, the regulation it imposes might not chime well with people in France or Germany, let’s say. The famous American prudishness about nudity is the oft-quoted example of different standards when it comes to content regulation. No French person would be horrified by the sight of a bare breast (at least if we believe stereotypes), so why should nude photos be removed from French social media? If we want platform governance to be truly democratic, the people affected by it should “have a say in that decision”. But as Haggart notes, “This cannot happen so long as platforms are global, or decisions about them are made only in DC”.

So what does Haggart offer? Simple: break up the social media giants not along market lines but along national lines. Well, maybe not that simple…

If we take the idea of breaking up monopolies along national lines seriously…

This post starts from Haggart’s proposal to break up social media along national lines, assuming it is a good proposal. In fact, I do this not for rhetorical purposes or for the sake of setting up a straw man, but because I actually think it is a good proposal. So the following lines aim to take the proposal seriously, consider its different aspects, and discuss what potential drawbacks and problems we should keep in mind.

How to do this??

The first key problem is: who on Earth can convince companies such as Facebook or Twitter to “break up along national lines”? These companies spend fortunes on lobbying the US government, and they are US national champions. Why would the US support breaking them up along national lines? (As a matter of fact, the question of how is also a notable problem in Deibert’s “Reset” – his idea that hacktivism, civil disobedience, and whistleblowers’ pressure can make private monopolists exercise restraint is very much wishful thinking.) There are historical precedents for the nationalization of companies, but they seem to have involved either a violent revolution or such massive indebtedness that it became necessary for the state to step in and save the companies with public money. Are there any precedents for nationalizing a company and then revealing how it operates to other states in order to make these states create their respective national versions of it? Maybe. But it seems highly unlikely that anyone in the US would want to do this.

Which leaves us with the rather utopian option two: all big democratic states get together and develop interoperable social media. The project is such a success that people fed up with Facebook and Google decide to join and the undue influence of private monopolists finally comes to an end. But this utopian vision itself opens up a series of new questions.

Okay, assuming we can have state platforms operating along national lines…

Inscribing values in design is not always as straightforward as it seems, as discussed in the fascinating conversation between Solon Barocas, Seda Gurses, Arvind Narayanan and Vincent Toubiana on decentralized personal data architectures. But, assuming that states can build and maintain (or hire someone to build and maintain) such platforms – platforms that don’t crash, are not easy to hack and are user-friendly – the next question is: who is going to own the infrastructure and the data?

Who will own the infrastructure and the data?

One option would be for each individual citizen to own their data, but this might be too risky and impractical. Another option would be to treat the data as public data – the same way we treat data from surveys and national statistics. The personal data from current social media platforms is used for online advertising and for training machine learning models. If states own their citizens’ data, we might go back to a stage in which the best research is done by state bodies and universities, rather than what we have now, where the most cutting-edge research is done in private companies, often in secret from the public. Mike Savage described this process of increasing privatization of research in his brilliant piece The Coming Crisis of Empirical Sociology. If anything, the recent case of Google firing AI researcher Timnit Gebru reveals the need for independent public research that is not in-house research by social media giants or funded by them. It would be naive to think that independent academics can do such research in the current situation, when the bulk of interesting data to be analysed is privately owned.

How to prevent authoritarian censorship and surveillance?

Finally, if we assume that states will own their own online public networks – fulfilling the same functions as Facebook, but without the advertising – the million-dollar question is how to prevent censorship, overreach and surveillance. As Ron Deibert discusses in “Reset”, most states are currently involved in some sort of hacking and surveillance operations against foreign but also domestic citizens. What can be done about this? Here Haggart’s argument about the need for democratic accountability reveals its true importance and relevance. State-owned online public networks would have to abide by standards that have been democratically discussed and would have to be accountable to the public.

But what Haggart means when discussing democratic accountability should be expanded. Democracy and satisfaction with it have been declining in many Western nations, with more and more decision-making power delegated to technocratic bodies. Yet what the protests of the 2010s in the US and the EU clearly showed is that people are dissatisfied with democracy not because they want authoritarianism but because they want more democracy, that is, democratic deepening. Or, in the words of the Spanish Indignados protesters:

“Real democracy, now”

Thus, to bring the utopia of state public networks to a conclusion: decisions about their governance should not be made by technocratic bodies, or with “democratic accountability” used as a form of window-dressing, which sadly is often the case now. Instead, policy decisions should be discussed broadly through a combination of public consultations, assemblies, and debates in already existing national and regional assemblies, in order to ensure that people have ownership of the policies decided. State public networks should be not only democratically accountable but also democratically governed. Such a scenario would be one of what I call “democratic digital sovereignty” – one that goes beyond the arbitrariness of decisions by private CEOs while also escaping the pitfalls of state censorship and authoritarianism.

To sum up: we need state-owned, interoperable online public networks. Citizen data gathered from the use of these networks would be owned by the state and would be available for public academic research (which would be open access in order to encourage both transparency and innovation). The moderation policies of these public platforms would be democratically discussed and decided. In short, these would be platforms of the people and for the people. Nothing more, nothing less.
