Can we imagine a better Internet?

The convenience of thinking together

by Alina Utrata and Julia Rone

Reflecting on our recent tech and environment workshop, two of our workshop hosts, Alina Utrata and Julia Rone, explore the questions from the event that are still making them think.

On June 17, over 40 participants from all over the world joined our workshop exploring “the cost of convenience” and the opaque impact that digital technology has on the environment.

Instead of having academics present long papers in front of Zoom screens with switched-off cameras, we opted for a more dialogic, interactive and (conveniently) short format.

We invited each participant (or team of participants) to share a provocation on the environmental impact of technology, or on the political economy of the environment/technology nexus, which we then discussed in small groups. Then, in panel sessions, we discussed the provocations (what we know already), the known unknowns (what we don't know yet), and ideas for an action plan (what we could be doing).

Below are our reflections on the workshop.

A visual representation of the workshop, produced by artist Tom Mclean.

There is no real technical or technological “fix” for the climate crisis

By Alina Utrata

I am currently working on the relationships between technology corporations and states.

For me, what stood out about the discussions was the sense among all participants that there was no real technical or technological “fix” for the climate crisis.

Instead, the conversations often revolved around globally embedded systems and structures of power—asking why a certain technology is being deployed, by whom, for whom and how, rather than whether it could "fix" anything.

“I was inspired by how participants immediately recognised the importance of these systems, and instead focused our conversations on how to change them.”

Alina Utrata

In fact, it was pointed out that often the creators of these technological innovations deliberately promoted certain kinds of narratives about how they wanted the technology to be thought of—for example, the “cloud” as a kind of abstract, other place in the sky, rather than a real, tangible infrastructure with real costs.

The same could be said of the metaphors of “carbon footprint” or “carbon neutral”—the idea that as long as discrete, individual corporate entities were not personally responsible for a certain amount of emissions, then they could not be held culpable for a system that was failing the planet. 

Credit: Alex Machado for Unsplash

I was inspired by how participants immediately recognised the importance of these systems, and instead focused our conversations on how to change them.

Although many political concepts today are so commonplace that they seem ordinary, we discussed how they are often really quite modern or Western in origin.

By contrast, the idea of the shared, communal commons is an ancient one, and can be used as a political framework to tackle some of the harmful systems humans have put in place on our earth.

Finally, we acknowledged that we all have a role to play in this fight for our future—but not all of us have or need to play the same role.

Some of us will be activists outside these systems of power, and some of us will be sympathetic voices from within.

The participants reaffirmed the need to both communicate and coordinate across disciplines within academia, and more broadly across sectors in the wider world.


Should we abolish the Internet?

By Julia Rone

 Credit: Denny Müller for Unsplash

I am currently working on the democratic contestation of data centre construction.

John Naughton often says during our weekly meetings that the most interesting conversations are those that finish before you want them to end. That was definitely the case for me at the workshop, since each of the sessions I hosted ended with a question that could be discussed for hours and that still lingers in my mind.

Concepts and conceptual problems

If I have to identify the key common threads running through the three sessions I hosted, the first one has to do with concepts and conceptual problems.

Several participants posed the crucial question of how we think of "progress".

Is progress necessarily synonymous with growth, increased efficiency, better performance?

What are we sacrificing in the name of “progress”?

One participant asked the painfully straight-to-the-point question: “Should we abolish the Internet?” (considering the massive toll of tech companies on the environment, the rise of hate speech, cyber-bullying, polarization, etc.)

Do we feel loss at the thought? 

"Yes!" I immediately said to myself. "How could I talk to my family and to my friends?"

This question really provoked me to think further.

If I can’t live in a world without the Internet, can we think of a different Internet?

How can we re-invent the Internet to become more caring, accessible, more Earth-based and less extractive (as one of the provocations suggested)?

Credit: Ehimetalor Akhere Unuabona for Unsplash

What does it mean to be sustainable?

Another, similarly important conceptual question was posed at the very end of the second session by a colleague who asked "What does it mean to be sustainable?" Why do we want to be sustainable? What and whom are we sustaining?

Should we not rather think of ways to radically change the system?

Our time ran out before we could discuss this in depth, and this question too has been bothering me ever since.

Ultimately, as another participant emphasised, research on the environmental impact of tech is most problematic and underdeveloped at two levels – the level of concepts (how do we think of abstraction and extraction, for example?) and the ground level of what individuals and communities do.

This latter question about on-the-ground labour, work and action is actually the second common thread running through several of the contributions in the sessions I attended.

“It is difficult to disentangle the economic aspects of repair from the environmental ones.” 

A colleague studying workers who do repair for their livelihood (not as a hipster exercise) rightly pointed out that when discussing the environmental consequences of tech, and practices such as repair in particular, it is difficult to disentangle the economic aspects of repair from the environmental ones. 

Indeed, in a different context, scholars of the environmental impact of tech have clearly shown how tech companies’ extractive practices towards nature go hand-in-hand with dispossession, economic exploitation and extraction of value and profit from marginalised communities.

“In order to understand and better address the environmental consequences of digital tech, we need to be more open to the experiences of individuals and communities on the ground who often “know better” since they live (and occasionally also cause) the very consequences of tech we research.”

Julia Rone

Another colleague had studied the ways in which local leaders participate in decision-making about data centres in Thailand and controversies around water use – a topic very relevant to my own current project on data centres in the Netherlands.

Yet another participant had studied how participatory map-making not only consumes electricity but also changes the very way we see nature.

The reason why I found all these contributions so fascinating is that they challenged simplistic narratives of Big Tech vs the Environment and showed how many players (with how many different intentions, principles and economic interests) are actually involved in the increasingly complex assemblage of humans, nature and tech.

So to sum up – in order to understand and better address the environmental consequences of digital tech, we need to be clearer about the concepts we use as researchers, but also more open to the experiences of individuals and communities on the ground who often "know better", since they live (and occasionally also cause) the very consequences of tech we research.

To summarise…

Ultimately, each of us who attended (and hosted) sessions has a rich but still incomplete picture of the workshop.

Because we each attended different sessions, there were provocations that we individually missed, as sessions intertwined and overlapped (a bit like tectonic plates readjusting meaning, ideas and new perspectives for research).

We would love to hear from other workshop attendees about the ideas that struck them most during the sessions.

Luckily, some participants have submitted their provocations to our Zine, a unique document that we will share soon to help guide us forward in our thinking.

We can’t wait to share the Zine with you… stay tuned.

Some lessons of Trump’s short career as a blogger

By John Naughton

The same unaccountable power that deprived Donald J. Trump of his online megaphones could easily be deployed to silence other prominent figures, including those of whom liberals approve.

‘From the Desk of Donald J. Trump’ lasted just 29 days. It’s tempting to gloat over this humiliating failure of a politician hitherto regarded as an omnipotent master of the online universe.

Tempting but unwise, because Trump’s failure should alert us to a couple of unpalatable realities.

The first is that the eerie silence that descended after the former President was ‘deplatformed’ by Twitter and Facebook provided conclusive evidence of the power of these two private companies to control the networked public sphere.

Those who loathed Trump celebrated his silencing because they regarded him — rightly — as a threat to democracy.

But on the other hand nearly half of the American electorate voted for him. And the same unaccountable power that deprived him of his online megaphones could easily be deployed to silence other prominent figures, including those of whom liberals approve.

The other unpalatable reality is that Trump’s failure to build an online base from scratch should alert us to the way the utopian promise of the early Internet — that it would be the death of the ‘couch potato’, the archetypal passive media consumer — has not been realised. Trump, remember, had 88.9m followers on Twitter and over 33m fans on Facebook.

“The failure of Trump’s blog is not just a confirmation of the unaccountable power of those who own and control social media, but also a reflection of the way Internet users enthusiastically embraced the ‘push’ model of the Web over the ‘pull’ model that we techno-utopians once hoped might be the network’s future.”

Yet when he started his own blog they didn’t flock to it. In fact they were nowhere to be seen. Forbes reported that the blog had “less traffic than pet adoption site Petfinder and food site Eat This Not That.” And it was reported that he had shuttered it because “low readership made him look small and irrelevant”. Which it did.

What does this tell us? The answer, says Philip Napoli in an insightful essay in Wired,

“lies in the inescapable dynamics of how today’s online media ecosystem operates and how audiences have come to engage with content online. Many of us who study media have long distinguished between “push” media and “pull” media.

“Traditional broadcast television is a classic “push” medium, in which multiple content streams are delivered to a user’s device with very little effort required on the user’s part, beyond flipping the channels. In contrast, the web was initially the quintessential “pull” medium, where a user frequently needed to actively search to locate content interesting to them.

“Search engines and knowing how to navigate them effectively were central to locating the most relevant content online. Whereas TV was a “lean-back” medium for “passive” users, the web, we were told, was a “lean-forward” medium, where users were “active.” Though these generalizations no longer hold up, the distinction is instructive for thinking about why Trump’s blog failed so spectacularly.

“In the highly fragmented web landscape, with millions of sites to choose from, generating traffic is challenging. This is why early web startups spent millions of dollars on splashy Super Bowl ads on tired, old broadcast TV, essentially leveraging the push medium to inform and encourage people to pull their online content.

“Then social media helped to transform the web from a pull medium to a push medium...”


Credit: Adem AY for Unsplash

This theme was nicely developed by Cory Doctorow in a recent essay, “Recommendation engines and ‘lean-back’ media”.  The optimism of the early Internet era, he mused, was indeed best summarized in that taxonomy.

“Lean-forward media was intensely sociable: not just because of the distributed conversation that consisted of blog-reblog-reply, but also thanks to user reviews and fannish message-board analysis and recommendations.

“I remember the thrill of being in a hotel room years after I’d left my hometown, using Napster to grab rare live recordings of a band I’d grown up seeing in clubs, and striking up a chat with the node’s proprietor that ranged fondly and widely over the shows we’d both seen.

“But that sociability was markedly different from the “social” in social media. From the earliest days of Myspace and Facebook, it was clear that this was a sea-change, though it was hard to say exactly what was changing and how.

“Around the time Rupert Murdoch bought Myspace, a close friend had a blazing argument with a TV executive who insisted that the internet was just a passing fad: that the day would come when all these online kids grew up, got beaten down by work and just wanted to lean back.

“To collapse on the sofa and consume media that someone else had programmed for them, anaesthetizing themselves with passive media that didn’t make them think too hard.

“This guy was obviously wrong – the internet didn’t disappear – but he was also right about the resurgence of passive, linear media.”

This passive media, however, wasn't the "must-see TV" of the 80s and 90s. Rather, it was the passivity of the recommendation algorithm, which created a per-user linear media feed, coupled with mechanisms like "endless scroll" and "autoplay", that obliterated any trace of an active role for the aptly-named Web "consumer".

As Napoli puts it,

“Social media helped to transform the web from a pull medium to a push medium. As platforms like Twitter and Facebook generated massive user bases, introduced scrolling news feeds, and developed increasingly sophisticated algorithmic systems for curating and recommending content in these news feeds, they became a vital means by which online attention could be aggregated.

“Users evolved, or devolved, from active searchers to passive scrollers, clicking on whatever content that their friends, family, and the platforms’ news feed algorithms put in front of them. This gave rise to the still-relevant refrain “If the news is important, it will find me.” Ironically, on what had begun as the quintessential pull medium, social media users had reached a perhaps unprecedented degree of passivity in their media consumption. The leaned-back “couch potato” morphed into the hunched-over “smartphone zombie.””

So the failure of Trump’s blog is not just a confirmation of the unaccountable power of those who own and control social media, but also a reflection of the way Internet users enthusiastically embraced the ‘push’ model of the Web over the ‘pull’ model that we techno-utopians once hoped might be the network’s future.

In Review: Democracy, law and controlling tech platforms

By John Naughton

Notes on a discussion of two readings

  1. Paul Nemitz: “Constitutional democracy and technology in the age of artificial intelligence”, Philosophical Transactions of the Royal Society A, 15 October 2018. https://doi.org/10.1098/rsta.2018.0089
  2. Daniel Hanley: “How Antitrust Lost its Bite”, Slate, April 6, 2021 – https://tinyurl.com/2ht4h8wf

I had proposed these readings because (a) Nemitz's article provided a vigorous argument for resisting the 'ethics-theatre' currently being orchestrated by the tech industry as a pre-emptive strike against regulation by law; and (b) the Hanley article argued the need for firm rules in antitrust legislation rather than the latitude currently offered to US judges by the so-called "rule of reason".

Most of the discussion revolved around the Nemitz article. Here are my notes of the conversation, using the Chatham House Rule as a reporting principle.

  • Nemitz's assertion that "The Internet and its failures have thrived on a culture of lawlessness and irresponsibility" was challenged as an "un-nuanced and uncritical view of how law operates in the platform economy". The point was that platform companies do of course ignore and evade law as and when it suits them, but at a corporate level they also rely on it and use it as both 'a sword and a shield'; law has as a result played a major role in structuring the internet that now exists and producing the dominant platform companies we have today, and has been leveraged very successfully to their advantage. Even the egregious abuse of personal data (which may seem closest to being "lawless") largely still runs within the law's overly permissive framework. Where it doesn't, it generally tries to evade the law by skirting around gaps created within the law, so even this seemingly extra-legal processing is itself shaped by the law (and cannot therefore be "lawless"). So any respect for the law that they profess is indeed disingenuous, but describing the internet as a "lawless" space – as Nemitz does – misses a huge part of the dynamic that got us here and is a real problem if we're going to talk about the potential role of law in getting us out. Legal reform is needed, but if it's going to work then we have to be aware of and account for these things.
  • This critique stemmed from the view that law is both produced by society and in turn reproduces society, and in that sense always functions essentially as an instrument of power — so it has historically been (and remains) a tool of dominance, of hierarchy, of exclusion and marginalisation, of capital and of colonialism. In that sense, the embryonic Silicon Valley giants slotted neatly into that paradigm. And so, could Nemitz’s insistence on the rule of law — without a critical understanding of what that actually means — itself be a problem?

“They [tech companies] employ the law when it suits them and do so very strategically – as both a ‘sword’ and a ‘shield’ – and that’s played a major role in getting the platform ecosystem to where it is now.”

  • On the one hand, laws are the basic tools that liberal democracies have available for bringing companies under democratic (i.e. accountable) control. On the other hand, large companies have always been adept (and, in liberal democracies, very successful) at using the law to further their interests and cement their power.
  • This point is particularly relevant to tech companies. They’ve used law to bring users within their terms of service and thereby to hold on to assets (e.g. exabytes of user data) that they probably wouldn’t have been able to do otherwise. They use law to enable the pretence that click-through EULAs are, in fact, contracts. So they employ the law when it suits them and do so very strategically — as both a ‘sword’ and a ‘shield’ — and that’s played a major role in getting the platform ecosystem to where it is now.
  • Also, law plays a big role in driving and shaping technological development. Technologies don't emerge in a vacuum; they're a product of their context, and law is a major part of that context. So the platform business models and what's happening on the internet aren't outside of the law; they're constructed through, and depend upon, it. So it's misleading when people argue (like Nemitz??) that we need to use law to change things — as if the law isn't there already and may actually be partially enabling things that are societally damaging. So unless we properly understand the role of law in getting us to our current problematique, talking about how law can help us is like talking about using a tool to fix a problem without realising that the tool itself is part of the problem.

“It’s the primacy of democracy, not of law that’s crucial.”

  • There was quite a lot of critical discussion of the GDPR on two fronts — its ‘neoliberal’ emphasis on individual rights; and things that are missing from it. Those omissions and gaps are not necessarily mistakes; they may be the result of political choices.
  • One question is whether there is a deficit of law around who owns property in the cloud. If you upload a photo to Facebook or whatever, it's unclear whether you have property rights over it or the cloud-computing provider does. General consensus seems to be that that's a tricky question! (Questions about who owns your data generally are.)
  • Even if laws exist, enforcement looks like a serious problem. Sometimes legal coercion of companies is necessary but difficult. And because of the 'placelessness' of the internet, it seems possible that a corporation or an entity could operate in a place where there's no nexus to coerce it. Years ago, Tim Wu and Jack Goldsmith's book recounted how Yahoo discovered that they couldn't just do whatever they wanted in France because they had assets in that jurisdiction and France seized them. Would that be the case with, say, Facebook now? (Just think of why all of the tech giants have their European HQs in Ireland.)
  • It’s the primacy of democracy, not of law that’s crucial. If the main argument of the Nemitz paper is interpreted as the view that law will solve our problems, that’s untenable. But if we take as the main argument that we need to democratically discuss what the laws are, then we all agree with this. (But isn’t that just vacuous motherhood and apple pie?)
  • More on GDPR… it sets up a legal framework in which we can regulate the consenting person, and that's a good thing that most people can agree on. But the way that GDPR is constructed is extremely individualistic. For example, it disempowers data subjects even in the name of giving them rights, because it individualises them. So even the way that it's constructed actually goes some way towards undermining its good effects. It's based on the assumption that if we give people rights then everything will be fine. (Shades of the so-called "Right to be Forgotten".)

As for the much-criticised GDPR, one could see it as an example of ‘trickle-down’ regulation, in that GDPR has become a kind of gold standard for other jurisdictions.

  • Why hasn't academic law been a more critical discipline in these areas? The answer seems to be that legal academia (at least in the UK, with some honourable exceptions) is exceptionally uncritical of tech, and any kind of critical thinking is relatively marginalised within the discipline compared to other social sciences. Also, most students want to go into legal practice, so legal teaching and scholarship tends to be closely tied to law as a profession and, accordingly, the academy tends to be oriented around 'producing' practising lawyers.
  • There was some dissent from the tenor of the preceding discourse about the centrality of law and especially about the improbability of overturning such a deeply embedded cognitive and professional system. This has echoes of a venerable strand in political thinking which says that in order to change anything you have to change everything and it’s worse to change a little bit than it is to change everything — which means nothing actually changes. This is the doctrine that it’s quite impossible to do any good at all unless you do the ultimate good, which is to change everything. (Which meant, capitalism and colonialism and original sin, basically!) On the other hand, there is pragmatic work — making tweaks and adjustments — which though limited in scope might be beneficial and appeal to liberal reformers (and are correspondingly disdained by lofty adherents to the Big Picture).
  • There were some interesting perspectives based on the Hanley article. Conversations with people across disciplines show that technologists seem to suggest a technical solution for everything (solutionism rules OK?), while lawyers view the law as a solution for everything. But discussions with political scientists and sociologists mostly involve "fishing for ideas", which is a feature, not a bug, because it suggests that minds are not set in silos — yet. But one of the problems with the current discourse — and with these two articles — is that the law currently seems to be filling the political void. And the discourse seems to reflect public approval of the judicial approach compared with the trade-offs implicit in Congress. But the Slate article shows the pernicious influence or even interference of an over-politicised judiciary in politics and policy enforcement. (The influence of Robert Bork's 1978 book and the Chicago School is still astonishing to contemplate.)
  • The Slate piece seems to suffer from a kind of ‘neocolonial governance syndrome’ — the West and the Rest. We all know section 230 by heart. And now it’s the “rule of reason” and the consumer welfare criterion of Bork. It’s important to understand the US legal and political context. But we should also understand: the active role of the US administration; what happened recently in Australia (where the government intervened, both through diplomatic means and directly on behalf of the Facebook platform); and in Ireland (where the government went to the European Court to oppose a ruling that Apple had underpaid tax to the tune of 13 billion Euros). So the obsession with the US doesn’t say much about the rest of the world’s capacity to intervene and dictate the rules of the game. And yet China, India and Turkey have been busy in this space recently.
  • And as for the much-criticised GDPR, one could see it as an example of 'trickle-down' regulation, in that GDPR has become a kind of gold standard for other jurisdictions. Something like 12 countries have adopted GDPR-like legislation, including several in Latin America such as Chile and Brazil, as well as South Africa, Japan, Canada and so on.

Mail-In Voter Fraud: Anatomy of a Disinformation Campaign

John Naughton:

Yochai Benkler and a team from the Berkman-Klein Centre have published an interesting study which comes to conclusions that challenge conventional wisdom about the power of social media.

“Contrary to the focus of most contemporary work on disinformation”, they write,

"our findings suggest that this highly effective disinformation campaign, with potentially profound effects for both participation in and the legitimacy of the 2020 election, was an elite-driven, mass-media led process. Social media played only a secondary and supportive role."

This chimes with the study on networked propaganda that Yochai, Robert Faris and Hal Roberts conducted in 2015-16 and published in 2018 in Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics. They argued that the right-wing media ecosystem in the US operates fundamentally differently than the rest of the media environment. Their view was that longstanding institutional, political, and cultural patterns in American politics interacted with technological change since the 1970s to create a propaganda feedback loop in American conservative media. This dynamic has, they thought, marginalised centre-right media and politicians, radicalised the right-wing ecosystem, and rendered it susceptible to propaganda efforts, foreign and domestic.

The key insight in both studies is that we are dealing with an ecosystem, not a machine, which is why focussing exclusively on social media as a prime explanation for the political upheavals of the last decade is unduly reductionist. In that sense, much of the public (and academic) commentary on social media’s role brings to mind the cartoon of the drunk looking for his car keys under a lamppost, not because he lost them there, but because at least there’s light. Because social media are relatively new arrivals on the scene, it’s (too) tempting to over-estimate their impact. Media-ecology provides a better analytical lens because it means being alert to factors like diversity, symbiosis, feedback loops and parasitism rather than to uni-causal explanations.

(Footnote: there’s a whole chapter on this — with case-studies — in my book From Gutenberg to Zuckerberg — published way back in 2012!)

Public networks instead of social networks?

We need state-owned, interoperable, democratically governed online public networks. From the people for the people.

posted by Julia Rone

The conversation so far

The following comments on Trump being banned from Twitter and the removal of Parler from the Android and iOS app stores were, somewhat aptly, inspired by two threads on Twitter itself: the first by the British-Canadian blogger Cory Doctorow and the other by Canadian scholar Blayne Haggart. The point of this post is, ideally, to pick up the conversation where Doctorow and Haggart left off and to involve more people from our team. Ideally, nobody will be censored in the process :p

Doctorow insists that the big problem with Apple and Android removing Parler is not so much censorship – ultimately, different app stores can have different rules, and this should be the case – but rather the fact that there are no alternative app stores. Thus, the core of his argument is that the US needs to enforce antitrust laws that would allow for fair competition among a number of competitors. The same argument can be extended to breaking up social media monopolists such as Facebook and Twitter. What we need is more competition.

Haggart attacks this argument in three ways:

First, he reminds us that "market regulation of the type that @doctorow wants requires perfect competition. This is unlikely to happen for a number of reasons (e.g, low consumer understanding of platform issues, tendency to natural monopoly)". Thus, the most likely outcome becomes the establishment of "a few more corporate oligarchs". This basically leaves the state as a key regulator – much to the disappointment of cyber-libertarians who have argued against state regulation for decades.

The problem is, and this is Haggart's second key point, that "as a non-American, it's beyond frustrating that this debate (like so many internet policy debates) basically amounts to Americans arguing with other Americans about how to run the world. Other countries need to assert their standing in this debate". This point was also made years ago in Martin Hardie's great paper "Foreigner in a free land", in which he noted how most debates about copyright law focused on the US. Even progressive people such as Larry Lessig built their whole argumentation on the basis of references to the US constitution. But what about all of us – the poor souls from the rest of the world who don't live in the US?

Of course, Facebook, Twitter, Alphabet, Amazon, etc. are all US tech companies. But they do operate globally. So even if the US state intervenes to regulate them, the regulation it imposes might not chime well with people in France or Germany, let's say. The famous American prudishness about nudity is the oft-quoted example of different standards when it comes to content regulation. No French person would be horrified by the sight of a bare breast (at least if we believe stereotypes), so why should nude photos be removed from French social media? If we want platform governance to be truly democratic, the people affected by it should "have a say in that decision". But as Haggart notes, "This cannot happen so long as platforms are global, or decisions about them are made only in DC".

So what does Haggart offer? Simple: break social media giants not along market lines but along national lines. Well, maybe not that simple…

If we take the idea of breaking up monopolies along national lines seriously…

This post starts from Haggart's proposal to break up social media along national lines, assuming it is a good proposal. In fact, I do this not for rhetorical purposes or for the sake of setting up a straw man, but because I actually think it is a good proposal. So the following lines aim to take the proposal seriously and consider different aspects of it, discussing what potential drawbacks/problems we should keep in mind.

How to do this??

The first key problem is: who on Earth can convince companies such as Facebook or Twitter to "break along national lines"? These companies spend fortunes on lobbying the US government and they are US national champions. Why would the US support breaking them up along national lines? (As a matter of fact, the question of how is also a notable problem in Deibert's "Reset" – his idea that hacktivism, civil disobedience, and whistleblowers' pressure can make private monopolists exercise restraint is very much wishful thinking.) There are historical precedents for the nationalization of companies, but they seem to have involved either a violent revolution or a massive indebtedness of these companies, making it necessary for the state to step in and save them with public money. Are there any precedents for nationalizing a company and then revealing how it operates to other states in order to make these states create their respective national versions of it? Maybe. But it seems highly unlikely that anyone in the US would want to do this.

Which leaves us with the rather utopian option two: all big democratic states get together and develop interoperable social media. The project is such a success that people fed up with Facebook and Google decide to join and the undue influence of private monopolists finally comes to an end. But this utopian vision itself opens up a series of new questions.

Okay, assuming we can have state platforms operating along national lines…

Inscribing values in design is not always as straightforward as it seems, as discussed in the fascinating conversation between Solon Barocas, Seda Gurses, Arvind Narayanan and Vincent Toubiana on decentralized personal data architectures. But, assuming that states can build and maintain (or hire someone to build and maintain) such platforms that don't crash, are not easy to hack and are user-friendly, the next question is: who is going to own the infrastructure and the data?

Who will own the infrastructure and the data?

One option would be for each individual citizen to own their data, but this might be too risky and impractical. Another option would be to treat the data as public data – the same way we treat data from surveys and national statistics. The personal data from current social media platforms is used for online advertising and for training machine learning models. If states own their citizens' data, we might go back to a stage in which the best research was done by state bodies and universities, rather than what we have now, where the most cutting-edge research is done in private companies, often in secret from the public. Mike Savage described this process of increased privatization of research in his brilliant piece The Coming Crisis of Empirical Sociology. If anything, the recent case of Google firing AI researcher Timnit Gebru reveals the need for independent public research that is not in-house research by social media giants or funded by them. It would be naive to think independent academics can do such research in the current situation, when the bulk of interesting data to be analysed is privately owned.

How to prevent authoritarian censorship and surveillance?

Finally, if we assume that states will own their own online public networks – fulfilling the same functions as Facebook, but without the advertising – the million-dollar question is how to prevent censorship, overreach and surveillance. As Ron Deibert discusses in "Reset", most states are currently involved in some sort of hacking and surveillance operations against foreign as well as domestic citizens. What can be done about this? Here Haggart's argument about the need for democratic accountability reveals its true importance and relevance. State-owned online public networks would have to abide by standards that have been democratically discussed and be accountable to the public.

But what Haggart means when discussing democratic accountability should be expanded. Democracy, and satisfaction with it, have been declining in many Western nations, with more and more decision-making power delegated to technocratic bodies. Yet what the protests of the 2010s in the US and the EU clearly showed is that people are dissatisfied with democracy not because they want authoritarianism but because they want more democracy, that is, democratic deepening. Or in the words of the Spanish Indignados protesters:

“Real democracy, now”

Thus, to bring the utopia of state public networks to its conclusion, the decisions about their governance should be made not by technocratic bodies or with "democratic accountability" used as a form of window-dressing, which sadly is often the case now. Instead, policy decisions should be discussed broadly through a combination of public consultations, assemblies and debates in already existing national and regional assemblies, in order to ensure people have ownership of the policies decided. State public networks should be not only democratically accountable but also democratically governed. Such a scenario would be one of what I call "democratic digital sovereignty", which goes beyond the arbitrariness of decisions by private CEOs but also escapes the pitfalls of state censorship and authoritarianism.

To sum up: we need state-owned, interoperable online public networks. Citizen data gathered from the use of these media would be owned by the state and would be available for public academic research (which would be open access in order to encourage both transparency and innovation). The moderation policies of these public platforms would be democratically discussed and decided. In short, these would be platforms of the people and for the people. Nothing more, nothing less.

Democratizing digital sovereignty: an impossible task?

Julia Rone

The concept of digital sovereignty has increasingly gained traction in the last decade. A study by the Canadian scholars Stephan Couture and Sophie Toupin of the ProQuest database has shown that while the term appeared only 6 times in general publications before 2008, it was used almost 240 times between 2015 and 2018. Like every new trendy term, "digital sovereignty" has been used in a variety of fields in multiple, often conflicting ways. It has been "mobilized by a diversity of actors, from heads of states to indigenous scholars, to grassroots movements, and anarchist-oriented "tech collectives," with very diverse conceptualizations, to promote goals as diverse as state protectionism, multistakeholder Internet governance or protection against state surveillance".

Within the EU, Germany has been a champion of "digital sovereignty" — promoted in domestic discourse as a panacea, a magic solution that can at the same time increase the competitiveness of German digital industries, allow individuals to control their data and give power to the state to manage vulnerabilities in critical infrastructures. As Daniel Lambach and Kai Opperman have found, German domestic players have used the term in very vague ways, which has made it easier to organize coalitions around it to apply for funding or push for particular policies. Furthermore, the German Federal Foreign Office has made considerable efforts to promote the term in European policy debates. Germany has been more cautious on the international scene, where the US has promoted an open Internet (which completely suits its economic and geopolitical interests, one must add) and has been very suspicious of notions of digital sovereignty, associated above all with Chinese and Russian doctrines. Attempting to avoid characterisations of sovereignty as necessarily authoritarian, French President Emmanuel Macron proposed, in a 2018 speech at the Internet Governance Forum, a vision of the return of the democratic state to Internet governance, distinct from both the Chinese model of control and the Californian model of private self-regulation. This unfortunately turned out to be easier said than done.

What all of this shows is that, beyond the fact that more and more political and economic players talk about "digital sovereignty", the term itself is up for grabs and there is no single accepted meaning for it. This might seem confusing, but I argue it is liberating, since it allows us to imagine digital sovereignty as we want it to be rather than encountering it as a stable, ossified reality. Drawing on a recent discussion on conflicts of sovereignty in the European Union, I claim that discussions about digital sovereignty have been dominated by the same tension as more general discussions on sovereignty – namely the tension between national and supranational sovereignty. Yet, as Brack, Crespy and Coman convincingly argue, the more important sovereignty conflicts in recent European Union politics have in fact been between the people and parliaments, as bearers of democratic sovereignty, on the one hand, and executives at both the national and supranational level, on the other. The demand for "real democracy now" that informed the Spanish Indignados protests reverberated strongly across Europe, and in a decade of protests against both austerity and free trade, protesters and civil society alike made strong claims for democratic deepening. Sovereignty is ultimately bound up with the question of "who rules", and since the French Revolution the answer to this question in Europe, at least normatively, has been "the people". Of course, how "the people" rule and who constitutes "the people" are questions that have sparked both theoretical and practical, sometimes extremely violent, debates over centuries. Yet the democratic impulse behind the contemporary notion of sovereignty remains, and has become increasingly prominent in the aftermath of the 2008 financial crisis, in which the insulation of markets from democratic control became painfully visible.

What is remarkable is that none of these debates on sovereignty as, ultimately, democratic sovereignty has reached the field of digital policy. Talk about digital sovereignty in policy circles has often presupposed either an authoritarian omnipotent state — as evidenced in Russian and Chinese doctrines of digital sovereignty — or a democratic state in which all decisions are made by the executive, as in Macron's vision of the 'return of the state' in Internet policy. Yet almost all interesting issues of Internet regulation are issues that deserve proper democratic debate and participation. States such as France, attempting to regulate disinformation without even a basic consultation with citizens, have rightly been accused of censorship and stifling political speech.

Who can decide what constitutes disinformation, hate speech or online harms? There is no easy answer to this question, but certainly greater democratic involvement and discussion in decisions about silencing political messages would be appropriate. This democratic involvement can take the form of parliamentary debates, hearings and resolutions. But it can also take the form of debates at democratic neighbourhood assemblies or organized mini-publics events. It can take place at the European level, with more involvement of the European Parliament and innovative uses of so-far 'blunt' instruments such as online public consultations or the European Citizens' Initiative. Or it can take place at the national level, with parliaments even of small EU member states building up their capacity to monitor and debate Internet policy proposals. National citizens can also get involved in debates on Internet policy through petitions, referenda, and public consultations. Such initiatives would not only promote awareness of specific digital policies but would also increase their legitimacy, and potentially their effectiveness, if citizens have a sense of "ownership" of new laws and regulations and have taken part in shaping them.

Some of this might sound utopian. Some of it might sound painfully banal and obvious. But the truth is that while our democracies are struggling with the challenges posed by big tech, a lot of proposals for regulation have been shaped by the presence and power of private companies themselves, or have been put forward by illiberal leaders with authoritarian tendencies. In such a context, demands for more digital democratic sovereignty could emancipate us from excessive private and executive power and allow us to reimagine digital content, data and infrastructures as something that is collectively owned and governed.

The early years of the Internet were marked by the techno-deterministic promise that digital tech would democratise politics. What happened instead was the immense concentration of power and influence in the hands of a few tech giants. The solution to this is not to take power from the private companies and give it back to powerful states acting as Big Brothers but instead to democratize both. We can use democracy as a technology, or what the ancient Greeks would call techne, to make both private corporations and states more open, participative and accountable. This is certainly not what Putin, Macron or Merkel would mean when they talk about digital sovereignty. But it is something that we as citizens should push for. Is it possible to democratize digital sovereignty? Or is such a vision bound to end up as the toothless reality of an occasional public consultation whose results decision makers ignore? This is ultimately a political question not a conceptual one. The notion of “digital sovereignty” is up for grabs. So is our democratic future.
