
Silencing Trump and authoritarian tech power

John Naughton:

It was eerily quiet on social media last week. That’s because Trump and his cultists had been “deplatformed”. By banning him, Twitter effectively took away the megaphone he’s been masterfully deploying since he ran for president. The shock of the 6 January assault on the Capitol was seismic enough to convince even Mark Zuckerberg that the plug finally had to be pulled. And so it was, even to the point of Amazon Web Services terminating the hosting of Parler, a Twitter alternative for alt-right extremists.

The deafening silence that followed these measures was, however, offset by an explosion of commentary about their implications for freedom, democracy and the future of civilisation as we know it. Wading knee-deep through such a torrent of opinion about the first amendment, free speech, censorship, tech power and “accountability” (whatever that might mean), it was sometimes hard to keep one’s bearings. But what came to mind continually was H L Mencken’s astute insight that “for every complex problem there is an answer that is clear, simple and wrong”. The air was filled with people touting such answers.

In the midst of the discursive chaos, though, some general themes could be discerned. The first highlighted cultural differences, especially between the US, with its sacred first amendment, on the one hand, and European and other societies, with their more ambivalent histories of moderating speech, on the other. The obvious problem with this line of discussion is that the first amendment is about government regulation of speech and has nothing whatsoever to do with tech companies, which are free to do as they like on their platforms.

A second theme viewed the root cause of the problem as the lax regulatory climate in the US over the last three decades, which led to the emergence of a few giant tech companies that effectively became the hosts for much of the public sphere. If there were many Facebooks, YouTubes and Twitters, so the counter-argument runs, then censorship would be less effective and problematic because anyone denied a platform could always go elsewhere.

Then there were arguments about power and accountability. In a democracy, those who make decisions about which speech is acceptable and which isn’t ought to be democratically accountable. “The fact that a CEO can pull the plug on Potus’s loudspeaker without any checks and balances,” fumed EU commissioner Thierry Breton, “is not only confirmation of the power of these platforms, but it also displays deep weaknesses in the way our society is organised in the digital space.” Or, to put it another way, who elected the bosses of Facebook, Google, YouTube and Twitter?

What was missing from the discourse was any consideration of whether the problem exposed by the sudden deplatforming of Trump and his associates and camp followers is actually soluble – at least in the way it has been framed until now. The paradox that the internet is a global system but law is territorial (and culture-specific) has traditionally been a way of stopping conversations about how to get the technology under democratic control. And it was running through the discussion all week like a length of barbed wire that snagged anyone trying to make progress through the morass.

All of which suggests that it’d be worth trying to reframe the problem in more productive ways. One interesting suggestion for how to do that came last week in a thoughtful Twitter thread by Blayne Haggart, a Canadian political scientist. Forget about speech for a moment, he suggests, and think about an analogous problem in another sphere – banking. “Different societies have different tolerances for financial risk,” he writes, “with different regulatory regimes to match. Just like countries are free to set their own banking rules, they should be free to set strong conditions, including ownership rules, on how platforms operate in their territory. Decisions by a company in one country should not be binding on citizens in another country.”

In those terms, HSBC may be a “global” bank, but when it’s operating in the UK it has to obey British regulations. Similarly, when operating in the US, it follows that jurisdiction’s rules. Translating that to the tech sphere, it suggests that the time has come to stop accepting the tech giants’ claims to be hyper-global corporations, when in fact they are US companies operating in many jurisdictions across the globe, paying as little local tax as possible and resisting local regulation with all the lobbying resources they can muster. Facebook, YouTube, Google and Twitter can bleat as sanctimoniously as they like about freedom of speech and the first amendment in the US, but when they operate here, as Facebook UK, say, then they’re merely British subsidiaries of an American corporation incorporated in California. And these subsidiaries obey British laws on defamation, hate speech and other statutes that have nothing to do with the first amendment. Oh, and they should also pay taxes on their local revenues.


Public networks instead of social networks?

We need state-owned, interoperable, democratically governed online public networks. From the people for the people.

posted by Julia Rone

The conversation so far

The following comments on Trump being banned from Twitter and the removal of Parler from the Android and iOS app stores were, somewhat aptly, inspired by two threads on Twitter itself: the first by the British-Canadian blogger Cory Doctorow, the second by the Canadian scholar Blayne Haggart. The point of this post is to pick up the conversation where Doctorow and Haggart left off and involve more people from our team. Ideally, nobody will be censored in the process :p

Doctorow insists that the big problem with Apple and Google removing Parler is not so much censorship – ultimately, different app stores can have different rules, and this should be the case – but rather the fact that there are no alternative app stores. The core of his argument is thus that the US needs to enforce antitrust laws that would allow fair competition among multiple app stores. The same argument can be extended to breaking up social media monopolists such as Facebook and Twitter. What we need is more competition.

Haggart attacks this argument in three ways:

First, he reminds us that “market regulation of the type that @doctorow wants requires perfect competition. This is unlikely to happen for a number of reasons (e.g, low consumer understanding of platform issues, tendency to natural monopoly)”. Thus, the most likely outcome is the establishment of “a few more corporate oligarchs”. This basically leaves the state as a key regulator – much to the disappointment of cyber-libertarians who have argued against state regulation for decades.

The problem is, and this is Haggart’s second key point, that “as a non-American, it’s beyond frustrating that this debate (like so many internet policy debates) basically amounts to Americans arguing with other Americans about how to run the world. Other countries need to assert their standing in this debate”. This point was also made years ago in Martin Hardie’s great paper “Foreigner in a free land”, in which he noticed how most debates about copyright law focused on the US. Even progressive figures such as Larry Lessig built their whole argument on references to the US constitution. But what about all of us – the poor souls from the rest of the world who don’t live in the US?

Of course, Facebook, Twitter, Alphabet, Amazon, etc. are all US tech companies. But they do operate globally. So even if the US state steps in to regulate them, the regulation it imposes might not chime well with people in, say, France or Germany. The famous American prudishness about nudity is the oft-quoted example of differing standards in content regulation. No French person would be horrified by the sight of a bare breast (at least if we believe the stereotypes), so why should nude photos be removed from French social media? If we want platform governance to be truly democratic, the people affected by it should “have a say in that decision”. But, as Haggart notes, “This cannot happen so long as platforms are global, or decisions about them are made only in DC”.

So what does Haggart offer? Simple: break up the social media giants not along market lines but along national lines. Well, maybe not that simple…

If we take the idea of breaking up monopolies along national lines seriously…

This post starts from Haggart’s proposal to break up social media along national lines and assumes it is a good proposal. In fact, I do this not for rhetorical purposes or for the sake of setting up a straw man, but because I actually think it is a good proposal. So the following lines aim to take the proposal seriously and consider different aspects of it, discussing what potential drawbacks and problems we should keep in mind.

How to do this??

The first key problem is: who on Earth can convince companies such as Facebook or Twitter to “break up along national lines”? These companies spend fortunes on lobbying the US government, and they are US national champions. Why would the US support breaking them up along national lines? (As a matter of fact, the question of how is also a notable problem in Deibert’s “Reset” – his idea that hacktivism, civil disobedience and whistleblowers’ pressure can make private monopolists exercise restraint is very much wishful thinking). There are historical precedents for the nationalization of companies, but they seem to have involved either a violent revolution or such massive indebtedness that the state had to step in and save the companies with public money. Are there any precedents for nationalizing a company and then revealing how it operates to other states, so that those states can create their respective national versions of it? Maybe. But it seems highly unlikely that anyone in the US would want to do this.

Which leaves us with the rather utopian option two: all big democratic states get together and develop interoperable social media. The project is such a success that people fed up with Facebook and Google decide to join and the undue influence of private monopolists finally comes to an end. But this utopian vision itself opens up a series of new questions.

Okay, assuming we can have state platforms operating along national lines…

Inscribing values in design is not always as straightforward as it seems, as discussed in the fascinating conversation between Solon Barocas, Seda Gurses, Arvind Narayanan and Vincent Toubiana on decentralized personal data architectures. But, assuming that states can build and maintain (or hire someone to build and maintain) such platforms – platforms that don’t crash, are not easy to hack and are user-friendly – the next question is: who is going to own the infrastructure and the data?

Who will own the infrastructure and the data?

One option would be for each individual citizen to own their data, but this might be too risky and impractical. Another option would be to treat the data as public data – the same way we treat data from surveys and national statistics. The personal data from current social media platforms is used for online advertising and for training machine learning models. If states owned their citizens’ data, we might go back to a stage in which the best research was done by state bodies and universities, rather than the current situation, in which the most cutting-edge research is done in private companies, often in secret from the public. Mike Savage described this process of increased privatization of research in his brilliant piece “The Coming Crisis of Empirical Sociology”. If anything, the recent case of Google firing AI researcher Timnit Gebru reveals the need for independent public research that is not done in-house by social media giants or funded by them. It would be naive to think independent academics can do such research in the current situation, when the bulk of the interesting data to be analysed is privately owned.

How to prevent authoritarian censorship and surveillance?

Finally, if we assume that states will own their own online public networks – fulfilling the same functions as Facebook, but without the advertising – the million-dollar question is how to prevent censorship, overreach and surveillance. As Ron Deibert discusses in “Reset”, most states are currently involved in some sort of hacking and surveillance operations targeting foreign as well as domestic citizens. What can be done about this? Here Haggart’s argument about the need for democratic accountability reveals its true importance and relevance. State-owned online public networks would have to abide by standards that have been democratically discussed and be accountable to the public.

But what Haggart means when discussing democratic accountability should be expanded. Democracy, and satisfaction with it, has been declining in many Western nations, with more and more decision-making power delegated to technocratic bodies. Yet what the protests of the 2010s in the US and the EU clearly showed is that people are dissatisfied with democracy not because they want authoritarianism but because they want more democracy, that is, democratic deepening. Or, in the words of the Spanish Indignados protesters:

“Real democracy, now”

Thus, to bring the utopia of state public networks to its conclusion: decisions about their governance should be made not by technocratic bodies, nor with “democratic accountability” used as a form of window-dressing, as is sadly often the case now. Instead, policy decisions should be discussed broadly through a combination of public consultations, assemblies and debates in already existing national and regional assemblies, in order to ensure that people have ownership of the policies decided. State public networks should be not only democratically accountable but also democratically governed. Such a scenario would be one of what I call “democratic digital sovereignty”, which moves beyond the arbitrariness of decisions by private CEOs while also escaping the pitfalls of state censorship and authoritarianism.

To sum up: we need state-owned, interoperable online public networks. Citizen data gathered from the use of these networks would be owned by the state and would be available for public academic research (which would be open access in order to encourage both transparency and innovation). The moderation policies of these public platforms would be democratically discussed and decided. In short, these would be platforms of the people and for the people. Nothing more, nothing less.
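A brief technical aside on the “interoperable” part of this summary: interoperability would not have to be invented from scratch. One existing open standard that illustrates what it can mean in practice is the W3C’s ActivityPub protocol, which already lets independently run servers (Mastodon instances, for example) exchange posts by delivering JSON “activities” to each other’s inboxes. The sketch below is purely illustrative and is not part of Haggart’s proposal or this post; the server domains and user names are hypothetical.

```python
import json
import urllib.request

# Hypothetical domains: a German and a French public network that federate.
ACTOR = "https://social.example.de/users/alice"                 # assumed actor URI
RECIPIENT_INBOX = "https://reseau.example.fr/users/bob/inbox"   # assumed inbox URI

# A minimal ActivityStreams "Create" activity wrapping a short post ("Note").
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": ACTOR,
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
    "object": {
        "type": "Note",
        "attributedTo": ACTOR,
        "content": "Hello from a national public network!",
    },
}

# Server-to-server delivery is an HTTP POST of the activity to the recipient's
# inbox. (A real implementation would also sign the request, e.g. with HTTP
# Signatures, and handle errors and retries.)
request = urllib.request.Request(
    RECIPIENT_INBOX,
    data=json.dumps(activity).encode("utf-8"),
    headers={"Content-Type": "application/activity+json"},
    method="POST",
)
# urllib.request.urlopen(request)  # uncomment to actually deliver
```

The design point is simply that separately owned and separately governed servers can federate through a shared protocol, with no single corporate owner sitting in the middle.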

Is the UK really going to innovate in regulation of Big Tech?

On Tuesday last week the UK Competition and Markets Authority (CMA) outlined plans for an innovative approach to regulating powerful tech firms, one that overcomes the procedural treacle-wading implicit in competition law designed for an analogue era.

The proposals emerged from an urgent investigation by the Digital Markets Taskforce, an ad-hoc body set up in March and led by the CMA with input from the Information Commissioner’s Office and Ofcom, the telecommunications and media regulator. The Taskforce was charged with providing advice to the government on the design and implementation of a pro-competition regime for digital markets. It was set up following the publication of the Treasury’s Furman Review on ‘Unlocking digital competition’, which reported in March 2019 and drew on evidence from the CMA’s previous market study into online platforms and digital advertising.

This is an intriguing development in many ways. First of all it seems genuinely innovative. Hitherto, competition laws have been framed to cover market domination or monopolistic abuse without mentioning any particular company, but the new UK approach for tech companies could set specific rules for named companies — Facebook and Google, say. More importantly, the approach bypasses the sterile arguments we have had for years about whether antique conceptions of ‘monopoly’ actually apply to firms which adroitly argue that they don’t meet the definition — while at the same time patently functioning as monopolies. Witness the disputes about whether Amazon really is a monopoly in retailing.

Rather than being lured down that particular rabbit-hole, the CMA proposes instead to focus attention on firms with what it calls ‘Strategic Market Status’ (SMS), i.e. firms with dominant positions in digital markets where there’s not much actual competition. That is to say, markets where entry or expansion by potential rivals is effectively blocked by factors like network effects, economies of scale, consumer passivity (i.e. learned helplessness), the power of default settings, unequal (and possibly illegal) access to user data, lack of transparency, vertical integration and conflicts of interest.

At the heart of the new proposals is the establishment of a powerful, statutory Digital Markets Unit (DMU) located within the Competition and Markets Authority. This would have the power to impose legally-enforceable Codes of Conduct on SMS firms. The codes would, according to the proposals, be based on relatively high-level principles like ‘fair trading’, ‘open choices’ and ‘trust and transparency’ — all of which are novel ideas for tech firms. Possible remedies for specific companies (think Facebook and Google) could include mandated data access and interoperability to address Facebook’s dominance in social media or Google’s market power in general search.

It would be odd if, in due course, Amazon, Apple and Microsoft did not also fall into the ‘strategic’ SMS category. Indeed, it’s inconceivable that Amazon would not, given that it has morphed into critical infrastructure for many locked-down economies.

The government says that it is going to consult on these radical proposals early next year and will then legislate to put the DMU on a statutory basis “when Parliamentary time allows”.

Accordingly, we can now look forward to a period of intensive corporate lobbying from Facebook & Co as they seek to derail or emasculate the proposals. Given recent history and the behaviour of which these outfits are capable, it would be prudent for journalists and civil society organisations to keep their guard up until this stuff is on the statute book.

The day after the CMA proposals were published (and after a prolonged legal battle), the Bureau of Investigative Journalism was finally able to publish the minutes of a secret meeting that Matt Hancock had with the Facebook boss, Mark Zuckerberg, in May 2018. Hancock was at that time Secretary of State for DCMS, the department charged with combating digital harms. According to the Bureau’s report, he had sought “increased dialogue” with Zuckerberg, so he could “bring forward the message that he has support from Facebook at the highest level”. The meeting took place at the VivaTech conference in Paris. It was arranged “after several days of wrangling” by Matthew Gould, the former culture department civil servant whom Hancock later made chief executive of NHSX. Civil servants had to give Zuckerberg “explicit assurances” that the meeting would be positive and that Hancock would not simply demand that the Facebook boss attend the DCMS Select Committee inquiry into the Cambridge Analytica scandal (which he had refused to do).

The following month Hancock had a follow-up meeting with Elliot Schrage, Facebook’s top lobbyist, who afterwards wrote to the minister thanking him for setting out his thinking on “how we can work together on building a model for sensible co-regulation on online safety issues”.

Now that the UK government is intent on demonstrating its independence from foreign domination, perhaps the time has come to explain to tech companies a couple of novel ideas. Sovereign nations do regulation, not ‘co-regulation’; and companies obey the law.

……………………..

A version of this post was published in the Observer on Sunday, 13 December, 2020.

Great expectations: the role of digital media for protest diffusion in the 2010s

The decade after the 2008 economic crisis started with great expectations about the empowering potential of digital media for social movements. The wave of contention that started in Iceland and the MENA countries also swept Europe, where hundreds of thousands of Spanish protesters took part in the Indignados protests in 2011 and a smaller but dedicated group organized Occupy London – the British version of the US Occupy movement that shook US politics for years to come. Protesters during the Arab Spring often carried posters and placards with the logos and names of Facebook, Twitter and similar platforms, or even sprayed them as graffiti on walls.

It was a period of ubiquitous enthusiasm, with some scholars even claiming that the Internet was a necessary and sufficient condition for democratization. What is more, a number of scholars saw in the rise of digital platforms a great opportunity for the diffusion of protests within nations and transnationally at an unprecedented speed – leading political journalists and researchers noted that digital media had played a key role in ‘Occupy protests spreading like wildfire’ and in spreading information during the Arab Spring.

Photo by Essam Sharaf

Already back in the early 2010s, at the beginning of this techno-utopian decade, researchers emphasized that in Egypt protests, and information about them, in fact spread in more traditional ways – through the interpersonal networks of cab drivers, labour unions and football hooligans, among others. What is more, protests in the aftermath of the 2008 economic crisis spread much more slowly than the 1848 Spring of the Peoples protests, due to the need for laborious cultural translation from one region to another. Ultimately, in spite of the major promises of social media, most protest mobilization and diffusion still depends on face-to-face interactions and established protest traditions.

Yet the trend of expecting too much from digital media is countered by an equally dangerous trend – claiming that they haven’t changed anything in the world of mobilization. The media ecology approach of Emiliano Treré and Alice Mattoni escapes the pitfalls of both by studying how activists use digital media in combination and interaction with a number of other types of media in hybrid media ecologies.

In a book that I have just published, I apply the media ecology approach to study the diffusion of Green and left-wing protests against austerity and free trade in the EU after 2008. One of the greatest things about focusing on media beyond Facebook and Twitter is the multiple unexpected angles it gives on events we all thought we knew well. While activists and researchers alike have been fascinated with the promise of digital media, looking at the empirical material with unbiased eyes revealed a great deal about the key role of other types of media in protest diffusion.

To begin with: books! The very name of the Indignados protests came from the title of Stéphane Hessel’s book “Indignez-vous!”. But books by authors such as Joseph Stiglitz, Wolfgang Streeck, Ernesto Laclau and Yanis Varoufakis have been no less important for spreading ideas and informing protesters across the EU. In his recent book “Translating the Crisis”, the Spanish scholar Fruela Fernandez notes the boom in publishing houses translating political books in Spain in the period surrounding the birth and eruption into public space of the Indignados movement.

Similarly, mainstream media have been of crucial importance in spreading information on protests, protest ideas and tactics across the EU in the last decade. Mainstream media such as The Guardian, the BBC and El País reported in much detail on the use of digital media by social movements such as Occupy and the Indignados, even sharing Twitter and Facebook hashtags, links to Facebook groups and live-streams in their articles. Mainstream media thus popularized the message (and media practices) of protesters further than the protesters could have possibly imagined. In fact, mainstream media’s fascination with the digital practices of new social movements goes a long way to explaining their largely favourable attitude to the protests of the early 2010s. Such favourable coverage contradicts the expectation of most social movement scholars that the media would largely ignore or misrepresent protesters.

Another type of protest diffusion that has remained woefully neglected but played a key role in the spread of progressive economic protests in the EU was face-to-face communication and, as simple as it may sound, walking! During the Spanish Indignados protests, hundreds of protesters marched from all parts of Spain to gather in Madrid. A smaller group continued marching to Brussels, where they staged theatre plays and discussions, and then headed to Greece. These marches took weeks and involved protesters stopping in villages and cities along the way and engaging local people in discussions. Sharing a physical space and sharing food have been among the most efficient ways to diffuse a message and reach more people with it. Of course, the marchers kept live blogs and diaries of their journeys (which in themselves constitute rich material for future research), but it is the combination of traveling, meeting people in person and using digital media that is truly interesting.

In my book, I give many more examples of how progressive protesters used various types of media to spread protest. Beyond providing a richer and more accurate picture of progressive economic protests in the 2010s, the book can hopefully also serve as a useful reminder for researchers of the radical right. The 2010s, which started with research on social movements and democratization, ended with a major academic trend for studying the far right, and especially the way the far right has blossomed in the digital sphere.

If there is one thing to be learned from my book, it is that digital media are not the only tool activists use to spread protest. Thus, if one wants to understand the diffusion of far right campaigns and ideas, one also needs to focus on the blossoming of far right publishing houses, the increasing mainstreaming of far right ideas in the mainstream press and, last but not least, the ways in which far right activists make inroads into civil society organizations and travel to share experiences – it is well known, for example, that during the refugee crisis far right activists from Western Europe carried out several joint actions with activists from Eastern Europe to patrol borders together.

Understanding how protests, protest ideas and repertoires diffuse is crucial for activists who want to help spread progressive causes, but also for those who are worried about the spread of dangerous and anti-democratic ideas. After a decade of great expectations about the potential of digital media to democratize our societies, we find ourselves politically in an era of backlash. Yet, at least analytically we are now past the naive enthusiasm of the early 2010s and have a much better instrumentarium to understand how protest diffusion works. To rephrase Gramsci, we are now entering a period of pessimism of the will and optimism of the intellect.

It is not what we wished for. But shedding our illusions and utopian expectations about the potential of digital media is an important step for moving beyond techno-fetishism and understanding better the processes of mobilization that currently define our society.

Seeing Like a Social Media Site

The Anarchist’s Approach to Facebook

When John Perry Barlow published “A Declaration of the Independence of Cyberspace” nearly twenty-five years ago, he was expressing an idea that seemed almost obvious at the time: the internet was going to be a powerful tool to subvert state control. As Barlow explained to the “governments of the Industrial World”, those “weary giants of flesh and steel”, cyberspace did not lie within their borders. Cyberspace was a “civilization of the mind.” States might be able to control individuals’ bodies, but their weakness lay in their inability to capture minds.

In retrospect, this is a rather peculiar reading of states’ real weakness, which has always been space. Literal, physical space—the endlessly vast terrain of the physical world—has historically been the friend of those attempting to avoid the state. As the scholar James Scott documented in The Art of Not Being Governed, in the early stages of state formation, if the central government got too overbearing, the population simply could—and often did—move. Similarly, John Torpey noted in The Invention of the Passport that individuals wanting to avoid eighteenth-century France’s system of passes could simply walk from town to town, and passes were often “lost” (or, indeed, actually lost). As Richard Cobb noted, “there is no one more difficult to control than the pedestrian.” More technologically advanced ways of traveling—the bus, the boat, the airplane—actually made it easier for the state to track and control movement.

Cyberspace may be the easiest space of all in which to track people. It is, by definition, a mediated space. To visit, you must be in possession of hardware, which must be connected to a network, which is connected to other hardware, and other networks, and so on and so forth. Every single thing in the digital world is owned, controlled or monitored by someone else. It is impossible to be a pedestrian in cyberspace—you never walk alone.

States have always attempted to make their populations more trackable, and thus more controllable. Scott calls this the process of making things “legible.” It includes “the creation of permanent last names, the standardization of weights and measures, the establishment of cadastral surveys and population registers, the invention of freehold tenure, the standardization of language and legal discourse, the design of cities, and the organization of transportation.” These things make previously complicated, complex and unstandardized facts knowable to the center, and thus easier to administer. If the state knows who you are, and where you are, then it can design systems to control you. What is legible is manipulable.

Cyberspace—and the associated processing of data—offers exciting new possibilities for the administrative center to make individuals more legible precisely because, as Barlow noted, it is “a space of the mind.” Only now, it’s not just states that have the capacity to do this—but sites. As Shoshana Zuboff documented in her book The Age of Surveillance Capitalism, sites like Facebook collect data about us in an attempt to make us more legible and, thus, more manipulable. This is not, however, the first time that “technologically brilliant” centralized administrators have attempted to engineer society.

Scott uses the term “high modernism” to characterize schemes—attempted by planners across the political spectrum—that possess a “self-confidence about scientific and technical progress, the expansion of production, the growing satisfaction of human needs, the mastery of nature (including human nature), and, above all, the rational design of social order commensurate with the scientific understanding of natural laws.” In Seeing Like a State, Scott examines a number of these “high modernist” attempts to engineer forests in eighteenth-century Prussia and Saxony, cities in Paris and Brasilia, rural populations in ujamaa villages, and agricultural production in Soviet collective farms (to name a few). Each time, central administrators attempted to make complex, complicated processes—from people to nature—legible, and then engineer them into rational, organized systems based on scientific principles. It usually ended up going disastrously wrong—or, at least, not at all the way central authorities had planned it.

The problem, Scott explained, is that “certain forms of knowledge and control require a narrowing of vision. . . designed or planned social order is necessarily schematic; it always ignores essential features of any real, functioning social order.” For example, mono-cropped forests became more vulnerable to disease and depleted the soil structure—not to mention destroying the diversity of the flora, insect, mammal and bird populations, which took generations to restore. The streets of Brasilia had not been designed with any local, community spaces where neighbors might interact; and, anyway, the planners forgot—ironically—to plan for construction workers, who subsequently founded their own settlement on the outskirts of the city, organized to defend their land and demanded urban services and secure titles. By 1980, Scott explained, “seventy-five percent of the population of Brasilia lived in settlements that had never been anticipated, while the planned city had reached less than half of its projected population of 557,000.” Contrary to Zuboff’s assertion that we are losing “the right to a future tense,” individuals and organic social processes have shown a remarkable capacity to resist and subvert otherwise brilliant plans to control them.

And yet this high modernism characterizes most approaches to “regulating” social media, whether self-regulatory or state-imposed. And, precisely because cyberspace is so mediated, it is more difficult for users to resist or subvert the centrally controlled processes imposed upon them. Misinformation on Facebook proliferates—and so the central administrators of Facebook try to engineer better algorithms, or hire legions of content moderators, or make centralized decisions about labeling posts, or simply kick off users. It is, in other words, a classic high-modernist approach to socially engineering the space of Facebook, and all it does is result in the platform’s ruler—Mark Zuckerberg—consolidating more power. (Coincidentally, fellow Power-Shift contributor Jennifer Cobbe argued something quite similar in her recent article about the power of algorithmic censorship.) Like previous attempts to engineer society, this one probably will not work well in practice—and there may be disastrous, authoritarian consequences as a result.

So what is the anarchist approach to social media? Consider this description of an urban community by twentieth-century activist Jane Jacobs, as recounted by Scott:

“The public peace—the sidewalk and street peace—of cities . . . is kept by an intricate, almost unconscious network of voluntary controls and standards among the people themselves, and enforced by the people themselves. . . . [an] incident that occurred on [Jacobs’] mixed-use street in Manhattan when an older man seemed to be trying to cajole an eight or nine-year-old girl to go with him. As Jacobs watched this from her second-floor window, wondering if she should intervene, the butcher’s wife appeared on the sidewalk, as did the owner of the deli, two patrons of a bar, a fruit vendor, and a laundryman, and several other people watched openly from their tenement windows, ready to frustrate a possible abduction. No “peace officer” appeared or was necessary. . . . There are no formal public or voluntary organizations of urban order here—no police, no private guards or neighborhood watch, no formal meetings or officeholders. Instead, the order is embedded in the logic of daily practice.”

How do we make social media sites more like Jacobs’ Manhattan, where people—not police or administrators—on “sidewalk terms” are empowered to shape their own cyber spaces? 

There may already be one example: Wikipedia. 

Wikipedia is not often thought of as an example of a social media site—but, as many librarians will tell you, it is not an encyclopedia. Yet Wikipedia is not only a remarkable repository of user-generated content, it has also been incredibly resilient to misinformation and extremist content. Indeed, while debates around Facebook wonder whether the site has eroded public discourse to such an extent that democracy itself has been undermined, debates around Wikipedia center on whether it is as accurate as the expert-generated content of Encyclopedia Britannica. (Encyclopedia Britannica says no; Wikipedia says it’s close.)

The difference is that Wikipedia empowers users. Anyone, absolutely anyone, can update Wikipedia. Everyone can see who has edited what, allowing users to self-regulate—which is how users identified that suspected Russian agent Maria Butina was probably editing her own Wikipedia page, and changed it back. This radical transparency and empowerment produces organic social processes where, much like on the Manhattan street, individuals collectively mediate their own space. And, most importantly, it is dynamic—Wikipedia changes all the time. Instead of a static ruling (such as Facebook’s determination that the iconic photo of Napalm Girl would be banned for child nudity), Wikipedia’s process produces dialogue and deliberation, where communities constantly socially construct meaning and knowledge. Finally, because cyberspace is ultimately mediated space—individuals cannot just “walk” or “wander” across sidewalks, like in the real world—Wikipedia is mission-driven. It does not have the amorphous goal of “connecting the global community”, but rather aims “to create a world in which everyone can freely share in the sum of all knowledge.”
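That transparency is not only a community norm; it is also exposed programmatically. As a small illustrative sketch (my addition, not something discussed in the original post), anyone can pull the recent edit history of an article through the public MediaWiki API and see exactly who changed what, when, and with what edit summary:

```python
import json
import urllib.parse
import urllib.request

# Fetch the most recent revisions of a Wikipedia article via the public
# MediaWiki API. Any reader can do this; no account or API key is required.
params = urllib.parse.urlencode({
    "action": "query",
    "prop": "revisions",
    "titles": "Wikipedia",            # example article; any title works
    "rvprop": "user|timestamp|comment",
    "rvlimit": "10",
    "format": "json",
})
url = f"https://en.wikipedia.org/w/api.php?{params}"

# A descriptive User-Agent is good etiquette for the Wikimedia APIs.
req = urllib.request.Request(url, headers={"User-Agent": "edit-history-example/0.1"})
with urllib.request.urlopen(req) as response:
    data = json.load(response)

# The response is keyed by internal page id; print who edited what, and when.
for page in data["query"]["pages"].values():
    for rev in page.get("revisions", []):
        print(rev["timestamp"], rev["user"], "-", rev.get("comment", ""))
```

No account, API key or special permission is needed; that is the kind of sidewalk-level visibility the Jacobs analogy points to.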

This suggests that no amount of design updates or changes to terms of service will ever “fix” Facebook—whether they are imposed by the US government, Mark Zuckerberg or Facebook’s Oversight Board. Instead, it is high modernism itself that is the problem. The anarchist’s approach would prioritize building designs that empower people and communities—so why not adopt the wiki approach to the public square functions that social media currently serves, like wiki-newspapers or wiki-newsfeeds?

It might be better to take the anarchist’s approach. No algorithms are needed.

by Alina Utrata

Review: ‘The Social Dilemma’ — Take #2

In “More than tools: who is responsible for the social dilemma?”, Microsoft researcher Niall Docherty has an original take on the thinking that underpins the film. If we are to pursue more productive discussions of the issues raised by the film, he argues, we need to reframe social media as something more than a mere “tool”. After all, “when have human beings ever been fully and perfectly in control of the technologies around them? Is it not rather the case that technologies, far from being separate from human will, are intrinsically involved in its activation?”

The French philosopher Bruno Latour famously uses the example of the gun to advance this idea, which he calls mediation. We are all aware of the platitude, “Guns don’t kill people, people kill people”. By this logic, the gun is simply a tool that allows the person, as the primary agent, to kill another. The gun exists only as an object, through which the person’s desire to kill flows. For Latour, this view is deeply misleading.

Instead, Latour draws our attention to the way the gun, in translating a human desire to kill into action, materializes that desire in the world: “you are a different person with the gun in your hand”, and the gun, by being in your hand, is different than if it were left snugly in its rack. Only when the human intention and the capacities of the gun are brought together can a shooting, as an observably autonomous action, actually take place. It is impossible to neatly distinguish the primary agents of the scene. Responsibility for the shooting, which can only occur through the combination of human and gun – and, by proxy, those who produced and provided it – is thus shared.

With this in mind, we must question how useful it is to think about social media in terms of manipulation and control. Social media, far from being a malicious yet inanimate object (like a weapon), is something more profound and complex: a generator of human will. Our interactions on social media platforms, our likes, our shares, our comments, are not raw resources to be mined – they simply could not have occurred without their technical mediation. Neither are they mere expressions of our autonomy nor, conversely, of manipulation: the user does not, and cannot, act alone.

The implication of taking this Latourian view is that “neither human individuals, nor the manipulative design of platforms, seductive they may be, can be the sole causes of the psychological and political harm of social media”. Rather, it is the coming together of human users and user interfaces, in specific historical settings, that co-produces the activity that occurs upon them. We, as users, as much as the technology itself, therefore share responsibility for the issues that rage online today.

Trust in/distrust of public sector data repositories

Posted by JN

My eye was caught by an ad for a PhD internship in the Social Media Collective, an interesting group of scholars in Microsoft Research’s NYC lab. What’s significant is the background they cite for the project.

Microsoft Research NYC is looking for an advanced PhD student to conduct an original research project on a topic under the rubric of “(dis)trust in public-sector data infrastructures.” MSR internships provide PhD students with an opportunity to work on an independent research project that advances their intellectual development while collaborating with a multi-disciplinary group of scholars. Interns typically relish the networks that they build through this program. This internship will be mentored by danah boyd; the intern will be part of both the NYC lab’s cohort and a member of the Social Media Collective. Applicants for this internship should be interested in conducting original research related to how trust in public-sector data infrastructures is formed and/or destroyed.

Substantive Context: In the United States, federal data infrastructures are under attack. Political interference has threatened the legitimacy of federal agencies and the data infrastructures they protect. Climate science relies on data collected by NOAA, the Department of Energy, NASA, and the Department of Agriculture. Yet, anti-science political rhetoric has restricted funding, undermined hiring, and pushed for the erasure of critical sources of data. And then there was Sharpie-gate. In the midst of a pandemic, policymakers in government and leaders in industry need to trust public health data to make informed decisions. Yet, the CDC has faced such severe attacks on its data infrastructure and organization that non-governmental groups have formed to create shadow sources of data. The census is democracy’s data infrastructure, yet it too has been plagued by political interference.

Data has long been a source of political power and state legitimacy, as well as a tool to argue for specific policies and defend core values. Yet, the history of public-sector data infrastructures is fraught, in no small part because state data has long been used to oppress, colonize, and control. Numbers have politics and politics has numbers.  Anti-colonial and anti-racist movements have long challenged what data the state collects, about whom, and for what purposes. Decades of public policy debates about privacy and power have shaped public-sector data infrastructures. Amidst these efforts to ensure that data is used to ensure equity — and not abuse — there have been a range of adversarial forces who have invested in polluting data for political, financial, or ideological purposes.

The legitimacy of public-sector data infrastructures is socially constructed. It is not driven by either the quality or quantity of data, but how the data — and the institution that uses its credibility to guarantee the data —  is perceived. When data are manipulated or political interests contort the appearance of data, data infrastructures are at risk. As with any type of infrastructure, data infrastructures must be maintained as sociotechnical systems. Data infrastructures are rendered visible when they break, but the cracks in the system should be negotiated long before the system has collapsed.

At the moment, I suspect that this is a problem that’s mostly confined to the US.  But the stresses of the pandemic and of alt-right disruption may mean that it’s coming to Europe (and elsewhere) soon.

Davids can sometimes really upset tech Goliaths

John Naughton

The leading David at the moment is Max Schrems, the Austrian activist and founder of NOYB, the most formidable data-privacy campaigning organisation outside the US. As a student, he launched the campaign that eventually led to the Court of Justice of the European Union ruling that the ‘Safe Harbour’ agreement negotiated between the EU and the US to regulate transatlantic data transfers was invalid. NOYB was established as a European non-profit that works on strategic litigation to ensure that the GDPR is upheld. It started with a concept, a website and a crowdfunding tool, and within two months it had acquired thousands of “supporters”, which allowed it to begin operations with basic funding of €250,000 per year. A quick survey of its website suggests that it’s been very busy. And Schrems’s dispute with the Irish Data Protection Commissioner (DPC) over her failure to regulate Facebook’s handling of European users’ data has led to the Irish High Court ordering the DPC to cover the costs of Schrems’s legal team in relation to the Court of Justice ruling on EU-US data transfers.

What’s interesting about this story is the way it challenges the “learned helplessness” that has characterised much of the public response to abuses of power by tech giants. The right kind of strategic litigation, precisely targeted and properly researched, can bring results.

The European Commission launches Amazon probe

John Naughton

The European Commission has opened an antitrust investigation of Amazon, on the grounds that the company has breached EU antitrust rules against distorting competition in online retail markets. Amazon, says the commission, has been using its privileged access to non-public data of independent sellers who sell on its marketplace to benefit the parts of its own retail business that directly compete with those third-party sellers. The commission has also opened a second investigation into the possible preferential treatment of Amazon’s own retail offers compared with those of marketplace sellers that use Amazon’s logistics and delivery services.

The good news about this is not so much that the EU is taking action as that it is doing so in an intelligently targeted manner. Too much of the discourse about tech companies in the last two years has been about “breaking them up”. But “break ’em up” is a slogan, not a policy, and it has a kind of Trumpian ring to it. The commission is avoiding that.

It is also avoiding another trap – that of generally labelling Amazon as a ‘monopoly’. As the analyst Benedict Evans never tires of pointing out, a monopoly in what market, exactly? In the US, Amazon has about 40% of e-commerce. That looks like near dominance, in competitive terms. But e-commerce is only 16-20% of all retail. “So,” asks Evans, “does Amazon have 40% of e-commerce or 10% of retail? Amazon’s lawyers would argue, entirely reasonably, that Amazon competes with Walmart, Costco, Macy’s and Safeway – that it competes with other large retailers, not just ‘online’ retailers. On that basis, Amazon’s market is ‘retail’ and its market share in the US is between 5% and 10%.”
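Evans’s point is easy to check with back-of-the-envelope arithmetic: 40% of a channel that is itself only 16-20% of retail comes to roughly 6-8% of retail overall, which is where the “between 5% and 10%” figure comes from. A trivial sketch, using only the numbers quoted above:

```python
# Back-of-the-envelope check of the market-definition argument quoted above.
amazon_share_of_ecommerce = 0.40          # "about 40% of e-commerce" (US)
ecommerce_share_of_retail = (0.16, 0.20)  # "e-commerce is only 16-20% of all retail"

low, high = (amazon_share_of_ecommerce * s for s in ecommerce_share_of_retail)
print(f"Implied share of all US retail: {low:.1%} to {high:.1%}")
# Implied share of all US retail: 6.4% to 8.0%
```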

On the other hand, if you’re a book publisher, then Amazon definitely looks like a monopoly, with more than half of all book sales and probably three-quarters of all ebook sales. The moral for regulators, therefore, is that if you want to go after a monopolist then choose the market carefully. And this is what the commission has done, because in Amazon’s own online ‘marketplace’, where third parties sell stuff on its platform, it very definitely is a monopoly. And, according to the US House of Representatives’ recent inquiry, it is abusing its power in that particular marketplace. The EU inquiry will be into whether that is also happening in Europe.

The traditional response to such charges is that if people want to trade in Amazon’s hyper-efficient online marketplace then they have to play by Amazon’s rules. After all, nobody’s forcing them to be there. (The same argument is made about Apple’s app store.) That might work if there were dozens of alternative marketplaces, but network effects have led to a situation where a winner has taken all. In the online world, Amazon is a giant while all others are minnows. And the pandemic has further reinforced its dominance. So it really matters if the company is indeed abusing its monopoly in its own marketplace. What makes it worse is that Amazon is both a player in that marketplace and the adjudicator of complaints about its behaviour. Judge and jury and all that.

Breaking Amazon up is unlikely to be an effective remedy to this kind of problem. What is probably needed are laws that regulate behaviour in online marketplaces, which, for example, make it illegal both to run a market and trade in it on your own account. That’s not to say that break-up might not be appropriate in some cases. Maybe Facebook should be forced to disgorge Instagram and WhatsApp and Google to liberate YouTube. Even then, though, history provides some cautionary tales.

Take AT&T, for example, which for many decades was a lightly regulated monopoly with total control over the US telephone network. This had benefits, in the sense that the country had a pretty good analogue phone system. But it also had grievous downsides, because it meant that AT&T controlled the pace of innovation in communications technology, which effectively gave it the power to apply the brakes to the future. The company rejected the idea of packet-switching (the underpinning technology of the internet), for example, when it was first proposed in the early 1960s. Worse still, in the mid-1930s, after a researcher at Bell Labs invented a method of recording audio signals on to magnetised wire reels, he was forced to stop the research and lock away his notebooks because AT&T feared that it would damage the telephone business. So a technology that proved essential for the digital computing industry was hidden away for 20-plus years.

Eventually, though, the “break ’em up” mania took hold, and in the early 1980s AT&T was dismantled into seven companies – the “baby bells”. You can guess what happened: some of the babies grew and grew and swallowed up others, with the result that there are now two giant corporations – AT&T and Verizon. So even if WhatsApp, YouTube and Instagram were liberated from their existing parents, network effects and capitalist concentration would make them into a new generation of tech giants, and we would be back here in 20 years wondering how to regulate them. The truth is that regulation is hard, and focused, intelligent regulation is even harder. So maybe the way the EU is going about it is the path to follow.

[A version of this post appeared in The Observer, 15.11.2020]

Democratizing digital sovereignty: an impossible task?

Julia Rone

The concept of digital sovereignty has gained increasing traction over the last decade. A study of the ProQuest database by the Canadian scholars Stéphane Couture and Sophie Toupin has shown that while the term appeared only 6 times in general publications before 2008, it was used almost 240 times between 2015 and 2018. As with every trendy new term, “digital sovereignty” has been used in a variety of fields in multiple, often conflicting, ways. It has been “mobilized by a diversity of actors, from heads of states to indigenous scholars, to grassroots movements, and anarchist-oriented “tech collectives,” with very diverse conceptualizations, to promote goals as diverse as state protectionism, multistakeholder Internet governance or protection against state surveillance”.

Within the EU, Germany has been a champion of “digital sovereignty” — promoted in domestic discourse as a panacea, a magic solution that can simultaneously increase the competitiveness of German digital industries, allow individuals to control their data and give the state the power to manage vulnerabilities in critical infrastructures. As Daniel Lambach and Kai Oppermann have found, German domestic players have used the term in very vague ways, which has made it easier to organize coalitions around it to apply for funding or push for particular policies. Furthermore, the German Federal Foreign Office has made considerable efforts to promote the term in European policy debates. It has been more cautious on the international scene, where the US has promoted an open Internet (which completely suits its economic and geopolitical interests, one must add) and has been very suspicious of notions of digital sovereignty, associated above all with Chinese and Russian doctrines. Attempting to avoid characterizations of sovereignty as necessarily authoritarian, the French President Emmanuel Macron proposed, in a 2018 speech at the Internet Governance Forum, a vision of the return of the democratic state in Internet governance, as distinct from both the Chinese model of control and the Californian model of private self-regulation. This unfortunately turned out to be easier said than done.

What all of this shows is that, beyond the fact that more and more political and economic players talk about “digital sovereignty”, the term itself is up for grabs and there is no single accepted meaning for it. This might seem confusing, but I argue it is liberating, since it allows us to imagine digital sovereignty as we want it to be rather than encountering it as a stable, ossified reality. Drawing on a recent discussion of conflicts of sovereignty in the European Union, I claim that discussions about digital sovereignty have been dominated by the same tension as more general discussions of sovereignty – namely the tension between national and supranational sovereignty. Yet, as Brack, Crespy and Coman convincingly argue, the more important sovereignty conflicts in recent European Union politics have in fact been between the people and parliaments, as bearers of democratic sovereignty, on the one hand, and executives at both the national and supranational level, on the other. The demand for “real democracy now” that informed the Spanish Indignados protests reverberated strongly across Europe, and in a decade of protests against both austerity and free trade, protesters and civil society alike made strong claims for democratic deepening. Sovereignty is ultimately bound up with the question of “who rules”, and in Europe since the French Revolution the answer to this question, at least normatively, has been “the people”. Of course, how “the people” rule and who constitutes “the people” are questions that have sparked both theoretical and practical, sometimes extremely violent, debates over centuries. Yet the democratic impulse behind the contemporary notion of sovereignty remains, and it has become increasingly prominent in the aftermath of the 2008 financial crisis, in which the insulation of markets from democratic control became painfully visible.

What is remarkable is that none of these debates on sovereignty as, ultimately, democratic sovereignty has reached the field of digital policy. Talk about digital sovereignty in policy circles has often presupposed either an authoritarian, omnipotent state — as evidenced in Russian and Chinese doctrines of digital sovereignty — or a democratic state in which all decisions are nevertheless made by the executive, as in Macron’s vision of the ‘return of the state’ in Internet policy. Yet almost all interesting issues of Internet regulation deserve proper democratic debate and participation. States such as France, attempting to regulate disinformation without even a basic consultation with citizens, have rightly been accused of censorship and of stifling political speech.

Who can decide what constitutes disinformation, hate speech or online harms? There is no easy answer to this question, but certainly greater democratic involvement and discussion in decisions about silencing political messages would be appropriate. This democratic involvement can take the form of parliamentary debates, hearings and resolutions. But it can also take the form of debates in democratic neighbourhood assemblies or organized mini-publics. It can take place at the European level, with more involvement of the European Parliament and innovative uses of so-far ‘blunt’ instruments such as online public consultations or the European Citizens’ Initiative. Or it can take place at the national level, with the parliaments even of small EU member states building up their capacity to monitor and debate Internet policy proposals. Citizens can also get involved in debates on Internet policy through petitions, referenda and public consultations. Such initiatives will not only promote awareness of specific digital policies but will also increase their legitimacy, and potentially their effectiveness, if citizens have a sense of “ownership” of new laws and regulations and have taken part in shaping them.

Some of this might sound utopian. Some of it might sound painfully banal and obvious. But the truth is that while our democracies are struggling with the challenges posed by big tech, a lot of proposals for regulation have been shaped by the presence and power of private companies themselves or have been put forward by illiberal leaders with authoritarian tendencies. In such a context, demands for more democratic digital sovereignty could emancipate us from excessive private and executive power and allow us to reimagine digital content, data and infrastructures as something that is collectively owned and governed.

The early years of the Internet were marked by the techno-deterministic promise that digital tech would democratize politics. What happened instead was the immense concentration of power and influence in the hands of a few tech giants. The solution to this is not to take power from the private companies and give it back to powerful states acting as Big Brothers, but instead to democratize both. We can use democracy as a technology, or what the ancient Greeks would call techne, to make both private corporations and states more open, participative and accountable. This is certainly not what Putin, Macron or Merkel would mean when they talk about digital sovereignty. But it is something that we as citizens should push for. Is it possible to democratize digital sovereignty? Or is such a vision bound to end up as the toothless reality of an occasional public consultation whose results decision makers ignore? This is ultimately a political question, not a conceptual one. The notion of “digital sovereignty” is up for grabs. So is our democratic future.