Some lessons of Trump’s short career as a blogger

By John Naughton

The same unaccountable power that deprived Donald J. Trump of his online megaphones could easily be deployed to silence other prominent figures, including those of whom liberals approve.

‘From the Desk of Donald J. Trump’ lasted just 29 days. It’s tempting to gloat over this humiliating failure of a politician hitherto regarded as an omnipotent master of the online universe.

Tempting but unwise, because Trump’s failure should alert us to a couple of unpalatable realities.

The first is that the eerie silence that descended after the former President was ‘deplatformed’ by Twitter and Facebook provided conclusive evidence of the power of these two private companies to control the networked public sphere.

Those who loathed Trump celebrated his silencing because they regarded him — rightly — as a threat to democracy.

But on the other hand nearly half of the American electorate voted for him. And the same unaccountable power that deprived him of his online megaphones could easily be deployed to silence other prominent figures, including those of whom liberals approve.

The other unpalatable reality is that Trump’s failure to build an online base from scratch should alert us to the way the utopian promise of the early Internet — that it would be the death of the ‘couch potato’, the archetypal passive media consumer — has not been realised. Trump, remember, had 88.9m followers on Twitter and over 33m fans on Facebook.

“The failure of Trump’s blog is not just a confirmation of the unaccountable power of those who own and control social media, but also a reflection of the way Internet users enthusiastically embraced the ‘push’ model of the Web over the ‘pull’ model that we techno-utopians once hoped might be the network’s future.”

Yet when he started his own blog they didn’t flock to it. In fact they were nowhere to be seen. Forbes reported that the blog had “less traffic than pet adoption site Petfinder and food site Eat This Not That.” And it was reported that he had shuttered it because “low readership made him look small and irrelevant”. Which it did.

What does this tell us? The answer, says Philip Napoli in an insightful essay in Wired,

“lies in the inescapable dynamics of how today’s online media ecosystem operates and how audiences have come to engage with content online. Many of us who study media have long distinguished between “push” media and “pull” media.

“Traditional broadcast television is a classic “push” medium, in which multiple content streams are delivered to a user’s device with very little effort required on the user’s part, beyond flipping the channels. In contrast, the web was initially the quintessential “pull” medium, where a user frequently needed to actively search to locate content interesting to them.

“Search engines and knowing how to navigate them effectively were central to locating the most relevant content online. Whereas TV was a “lean-back” medium for “passive” users, the web, we were told, was a “lean-forward” medium, where users were “active.” Though these generalizations no longer hold up, the distinction is instructive for thinking about why Trump’s blog failed so spectacularly.

“In the highly fragmented web landscape, with millions of sites to choose from, generating traffic is challenging. This is why early web startups spent millions of dollars on splashy Super Bowl ads on tired, old broadcast TV, essentially leveraging the push medium to inform and encourage people to pull their online content.

“Then social media helped to transform the web from a pull medium to a push medium...”

Credit: Adem AY for Unsplash

This theme was nicely developed by Cory Doctorow in a recent essay, “Recommendation engines and ‘lean-back’ media”.  The optimism of the early Internet era, he mused, was indeed best summarized in that taxonomy.

“Lean-forward media was intensely sociable: not just because of the distributed conversation that consisted of blog-reblog-reply, but also thanks to user reviews and fannish message-board analysis and recommendations.

“I remember the thrill of being in a hotel room years after I’d left my hometown, using Napster to grab rare live recordings of a band I’d grown up seeing in clubs, and striking up a chat with the node’s proprietor that ranged fondly and widely over the shows we’d both seen.

“But that sociability was markedly different from the “social” in social media. From the earliest days of Myspace and Facebook, it was clear that this was a sea-change, though it was hard to say exactly what was changing and how.

“Around the time Rupert Murdoch bought Myspace, a close friend had a blazing argument with a TV executive who insisted that the internet was just a passing fad: that the day would come when all these online kids grew up, got beaten down by work and just wanted to lean back.

“To collapse on the sofa and consume media that someone else had programmed for them, anaesthetizing themselves with passive media that didn’t make them think too hard.

“This guy was obviously wrong – the internet didn’t disappear – but he was also right about the resurgence of passive, linear media.”

This passive media, however, wasn’t the “must-see TV” of the 80s and 90s. Rather, it was the passivity of the recommendation algorithm, which created a per-user linear media feed, coupled with mechanisms like “endless scroll” and “autoplay,” that obliterated any trace of an active role for the aptly-named Web “consumer”.

As Napoli puts it,

“Social media helped to transform the web from a pull medium to a push medium. As platforms like Twitter and Facebook generated massive user bases, introduced scrolling news feeds, and developed increasingly sophisticated algorithmic systems for curating and recommending content in these news feeds, they became a vital means by which online attention could be aggregated.

“Users evolved, or devolved, from active searchers to passive scrollers, clicking on whatever content that their friends, family, and the platforms’ news feed algorithms put in front of them. This gave rise to the still-relevant refrain “If the news is important, it will find me.” Ironically, on what had begun as the quintessential pull medium, social media users had reached a perhaps unprecedented degree of passivity in their media consumption. The leaned-back “couch potato” morphed into the hunched-over “smartphone zombie.””

So the failure of Trump’s blog is not just a confirmation of the unaccountable power of those who own and control social media, but also a reflection of the way Internet users enthusiastically embraced the ‘push’ model of the Web over the ‘pull’ model that we techno-utopians once hoped might be the network’s future.

In Review: Bellingcat and the unstoppable Mr Higgins

By John Naughton

Review of We are Bellingcat: An Intelligence Agency for the People, by Eliot Higgins, Bloomsbury, 255pp

On the face of it, this book tells an implausible story. It’s about how an ordinary guy – a bored administrator in Leicester, to be precise – becomes a skilled Internet sleuth solving puzzles and crimes which appear to defeat some of the world’s intelligence agencies. And yet it’s true. Eliot Higgins was indeed a bored administrator, out of a job and looking after his young daughter in 2011 while his wife went out to work. He was an avid watcher of YouTube videos, especially of those emanating from the Syrian civil war, and one day had an epiphany: “If you searched online you could find facts that neither the press nor the experts knew.”

Higgins realised that one reason why mainstream media were ignoring the torrent of material from the war zone that was being uploaded to YouTube and other social media channels was that these outlets were unable to verify or corroborate it. So he started a blog — the Brown Moses blog — and discovered that a smattering of other people had had a similar realisation, which was the seed crystal for the emergence of an online community that converged around news events that had left clues on YouTube, Facebook, Twitter and elsewhere.

This community of sleuths now sails under the flag of Bellingcat, a name taken from the children’s story about the ingenious mice who twig that the key to obtaining early warning of a cat’s approach is to put a bell round its neck. This has led to careless journalists calling members of the community “Bellingcats” — which leads them indignantly to point out that they are the mice, not the predators!

The engaging name belies a formidable little operation which has had a series of impressive scoops. One of the earliest involved confirming Russian involvement in the downing of MH17, the Malaysia Airlines aircraft brought down by a missile while flying over Ukraine. Other impressive scoops included identification of the Russian FSB agents responsible for the Skripal poisonings and finding the FSB operative who tried to assassinate Alexei Navalny, the Russian democratic campaigner and Putin opponent who is now imprisoned — and, reportedly, seriously ill — in a Russian gaol.

‘We are Bellingcat’ is a low-key account of how this remarkable outfit evolved and of the role that Mr Higgins played in its development. The deadpan style reflects the author’s desire to project himself as an ordinary Joe who stumbled on something significant and worked at it in collaboration with others. This level of understatement is admirable but not entirely persuasive for the simple reason that Higgins is no ordinary Joe. After all, one doesn’t make the transition from a bored, low-level administrator to become a Research Fellow at U.C. Berkeley’s Human Rights Center and a member of the International Criminal Court’s Technology Advisory Board without having some exceptional qualities.

“One could say that the most seminal contribution Bellingcat has made so far is to explore and disseminate the tools needed to convert user-generated content into more credible information — and maybe, sometimes, into the first draft of history.”

One of the most striking things about Bellingcat’s success is that — at least up to this stage — its investigative methodology is (to use a cliché) not rocket science. It’s a combination of determination, stamina, cooperation, Internet-savviness, geolocation (where did something happen?), chronolocation (when did it happen?) and an inexhaustible appetite for social-media-trawling. There is, in other words, a Bellingcat methodology — and any journalist can learn it, provided his or her employer is prepared to provide the time and opportunity to do so. In response, Bellingcat has been running ‘boot camps’ for journalists — first in Germany, Britain and France, and — hopefully — soon in the US. And the good news is that some mainstream news outlets, including the New York Times, the Wall Street Journal and the BBC, have been setting up journalistic units working in similar ways.

In the heady days of the so-called ‘Arab spring’ there was a lot of excited hype about the way the smartphone had launched a new age of ‘Citizen Journalism’. This was a kind of category error which confused user-generated content badged as ‘witnessing’ with the scepticism, corroboration, verification, etc. that professional journalism requires. So in that sense one could say that the most seminal contribution Bellingcat has made so far is to explore and disseminate the tools needed to convert user-generated content into more credible information — and maybe, sometimes, into the first draft of history.

Mr Higgins makes continuous use of the phrase “open source” to describe information that he and his colleagues find online, when what he really means is that the information — because it is available online — is in the public domain. It is not ‘open source’ in the sense that the term is used in the computer industry, but I guess making that distinction is now a lost cause because mainstream media have re-versioned the phrase.

The great irony of the Bellingcat story is that the business model that finances the ‘free’ services (YouTube, Twitter, Facebook, Reddit, Instagram et al) that are polluting the public sphere and undermining democracy is also what provides Mr Higgins and his colleagues with the raw material from which their methodology extracts so many scoops and revelations. Mr Higgins doesn’t have much time for those of us who are hyper-critical of the tech industry. He sees it as a gift horse whose teeth should not be too carefully examined. And I suppose that, in his position, I might think the same.

Forthcoming in British Journalism Review, vol. 32, No 2, June 2021.

Worried about data overload or AI overlords? Here’s how the CDH Social Data School can help

By Anne Alexander

Ahead of the CDH Social Data School application Q&A on May 4, Dr Anne Alexander, Director of Learning at Cambridge Digital Humanities (CDH), explains how the programme provides the digital research tools necessary for the data-driven world.

The world we live in has long been shaped by the proliferation of data – companies, governments and even our fellow citizens all collect and create data about us every day of our lives.

Much of our communication is relayed digitally, the buildings we live in and the urban spaces we pass through have been turned into sensors, and we work, play and even sleep with our digital devices. Particularly over the past year, as the pandemic has dramatically reduced in-person interactions for many, the data overload has come to seem overwhelming. 

The CDH Social Data School (June 16-29), which Cambridge Digital Humanities is organising in collaboration with the Minderoo Centre for Technology and Democracy, is aimed at people working with data in the media, NGOs, civil society organisations and education who want to equip themselves with new skills in designing and carrying out digital research projects, but who don’t enjoy easy access to education in data collection, management and analysis.

We want to make available the methods of inquiry and the technical skills we teach to students and staff at the University of Cambridge to a much wider audience. 

This year’s CDH Social Data School will include modules exploring the ethical and societal implications of new applications in Machine Learning, with a specific focus on the problems of structural injustice which permeate the computer vision techniques underpinning technologies such as facial recognition and image-based demographic profiling. 

We are keen to hear from participants whose work supports public interest journalism, human rights advocacy, trade unionism and campaigns for social justice, environmental sustainability and the decolonisation of education. 

Although criticism of the deployment of these technologies is now much more widespread than in the past, it often focuses on the problems with specific use cases rather than more general principles.

In the CDH Social Data School we will take a “bottom-up” approach by providing an accessible introduction to the technical fundamentals of machine learning systems, in order to equip participants with a better understanding of what can (and usually does) go wrong when such systems are deployed in wider society. 

We will also engage with these ideas through an experimental approach to learning, giving participants access to easy-to-use tools and methods allowing them to pose the questions which are most relevant to their own work. 

Participants are not expected to have any prior knowledge of programming to take part – familiarity with working with basic office tools such as spreadsheets will be helpful. We will be using free or open source software to reduce barriers to participation. 

We are particularly interested in applications from participants from countries, communities and groups which suffer from under-resourcing, marginalization and discrimination.

The CDH Social Data School will run online from June 16-29.

Apply now for the CDH Social Data School 2021

Please join us for a Q&A session with the teaching team:

Tuesday 4 May 2 – 2.45pm BST

Registration essential: Sign up here

Read more on the background and apply for your place at the School here.

In Review: Democracy, law and controlling tech platforms

By John Naughton

Notes on a discussion of two readings

  1. Paul Nemitz: “Constitutional democracy and technology in the age of artificial intelligence”, Philosophical Transactions of the Royal Society A, 15 October 2018. https://doi.org/10.1098/rsta.2018.0089
  2. Daniel Hanley: “How Antitrust Lost its Bite”, Slate, April 6, 2021 – https://tinyurl.com/2ht4h8wf

I had proposed these readings because (a) Nemitz’s provided a vigorous argument for resisting the ‘ethics-theatre’ currently being orchestrated by the tech industry as a pre-emptive strike against regulation by law; and (b) the Hanley article argued the need for firm rules in antitrust legislation rather than the latitude currently offered to US judges by the so-called “rule of reason”.

Most of the discussion revolved around the Nemitz article. Here are my notes of the conversation, using the Chatham House Rule as a reporting principle.

  • Nemitz’s assertion that “The Internet and its failures have thrived on a culture of lawlessness and irresponsibility” was challenged as an “un-nuanced and uncritical view of how law operates in the platform economy”. The point was that platform companies do of course ignore and evade law as and when it suits them, but they also, at a corporate level, rely on it and use it as both ‘a sword and a shield’; as a result, law has played a major role in structuring the internet that now exists and producing the dominant platform companies we have today, and has been leveraged very successfully to their advantage. Even the egregious abuse of personal data (which may seem closest to being “lawless”) largely still runs within the law’s overly permissive framework. Where it doesn’t, it generally tries to evade the law by skirting around gaps created within the law, so even this seemingly extra-legal processing is itself shaped by the law (and cannot therefore be “lawless”). So any respect for the law that the companies profess is indeed disingenuous, but describing the internet as a “lawless” space – as Nemitz does – misses a huge part of the dynamic that got us here and is a real problem if we’re going to talk about the potential role of law in getting us out. Legal reform is needed, but if it’s going to work then we have to be aware of and account for these things.
  • This critique stemmed from the view that law is both produced by society and in turn reproduces society, and in that sense always functions essentially as an instrument of power — so it has historically been (and remains) a tool of dominance, of hierarchy, of exclusion and marginalisation, of capital and of colonialism. In that sense, the embryonic Silicon Valley giants slotted neatly into that paradigm. And so, could Nemitz’s insistence on the rule of law — without a critical understanding of what that actually means — itself be a problem?

“They [tech companies] employ the law when it suits them and do so very strategically – as both a ‘sword’ and a ‘shield’ – and that’s played a major role in getting the platform ecosystem to where it is now.”

  • On the one hand, laws are the basic tools that liberal democracies have available for bringing companies under democratic (i.e. accountable) control. On the other hand, large companies have always been adept (and, in liberal democracies, very successful) at using the law to further their interests and cement their power.
  • This point is particularly relevant to tech companies. They’ve used law to bring users within their terms of service and thereby to hold on to assets (e.g. exabytes of user data) that they probably wouldn’t have been able to do otherwise. They use law to enable the pretence that click-through EULAs are, in fact, contracts. So they employ the law when it suits them and do so very strategically — as both a ‘sword’ and a ‘shield’ — and that’s played a major role in getting the platform ecosystem to where it is now.
  • Also, law plays a big role in driving and shaping technological development. Technologies don’t emerge in a vacuum; they’re a product of their context, and law is a major part of that context. So the platform business models and what’s happening on the internet aren’t outside of the law; they’re constructed through, and depend upon, it. So it’s misleading when people argue (like Nemitz??) that we need to use law to change things — as if the law isn’t there already and may actually be partially enabling things that are societally damaging. So unless we properly understand the role of law in getting us to our current problematique, talking about how law can help us is like talking about using a tool to fix a problem without realising that the tool itself is part of the problem.

“It’s the primacy of democracy, not of law that’s crucial.”

  • There was quite a lot of critical discussion of the GDPR on two fronts — its ‘neoliberal’ emphasis on individual rights; and things that are missing from it. Those omissions and gaps are not necessarily mistakes; they may be the result of political choices.
  • One question is whether there is a deficit of law around who owns property in the cloud. If you upload a photo to Facebook or whatever, it’s unclear whether you have property rights over it or whether the cloud-computing provider does. General consensus seems to be that that’s a tricky question! (Questions about who owns your data generally are.)
  • Even if laws exist, enforcement looks like a serious problem. Sometimes legal coercion of companies is necessary but difficult. And because of the ‘placelessness’ of the internet, it seems possible that a corporation or an entity could operate in a place where there’s no nexus to coerce it. Years ago Tim Wu and Jack Goldsmith’s book recounted how Yahoo discovered that they couldn’t just do whatever they wanted in France, because they had assets in that jurisdiction and France seized them. Would that be the case with, say, Facebook, now? (Just think of why all of the tech giants have their European HQs in Ireland.)
  • It’s the primacy of democracy, not of law that’s crucial. If the main argument of the Nemitz paper is interpreted as the view that law will solve our problems, that’s untenable. But if we take as the main argument that we need to democratically discuss what the laws are, then we all agree with this. (But isn’t that just vacuous motherhood and apple pie?)
  • More on GDPR… it sets up a legal framework in which consent can be regulated; that, at least, is a good thing that most people can agree on. But the way that GDPR is constructed is extremely individualistic. For example, it disempowers data subjects even in the name of giving them rights, because it individualises them. So the way it is constructed actually goes some way towards undermining its good effects. It’s based on the assumption that if we give people rights then everything will be fine. (Shades of the so-called “Right to be Forgotten”.)

As for the much-criticised GDPR, one could see it as an example of ‘trickle-down’ regulation, in that GDPR has become a kind of gold standard for other jurisdictions.

  • Why hasn’t academic law been a more critical discipline in these areas? The answer seems to be that legal academia (at least in the UK, with some honourable exceptions) seems exceptionally uncritical of tech, and any kind of critical thinking is relatively marginalised within the discipline compared to other social sciences. Also most students want to go into legal practice, so legal teaching and scholarship tends to be closely tied to law as a profession and, accordingly, the academy tends to be oriented around ‘producing’ practising lawyers.
  • There was some dissent from the tenor of the preceding discourse about the centrality of law and especially about the improbability of overturning such a deeply embedded cognitive and professional system. This has echoes of a venerable strand in political thinking which says that in order to change anything you have to change everything and it’s worse to change a little bit than it is to change everything — which means nothing actually changes. This is the doctrine that it’s quite impossible to do any good at all unless you do the ultimate good, which is to change everything. (Which meant, capitalism and colonialism and original sin, basically!) On the other hand, there is pragmatic work — making tweaks and adjustments — which though limited in scope might be beneficial and appeal to liberal reformers (and are correspondingly disdained by lofty adherents to the Big Picture).
  • There were some interesting perspectives based on the Hanley article. Conversations with people across disciplines show that technologists seem to suggest a technical solution for everything (solutionism rules OK?), while lawyers view the law as a solution for everything. But discussions with political scientists and sociologists mostly involve “fishing for ideas” — which is a feature, not a bug, because it suggests that minds are not set in silos — yet. But one of the problems with the current discourse — and with these two articles — is that the law currently seems to be filling the political void. And the discourse seems to reflect public approval of the judicial approach compared with the trade-offs implicit in Congress. But the Slate article shows the pernicious influence, or even interference, of an over-politicised judiciary in politics and policy enforcement. (The influence of Robert Bork’s 1978 book and the Chicago School is still astonishing to contemplate.)
  • The Slate piece seems to suffer from a kind of ‘neocolonial governance syndrome’ — the West and the Rest. We all know section 230 by heart. And now it’s the “rule of reason” and the consumer welfare criterion of Bork. It’s important to understand the US legal and political context. But we should also understand: the active role of the US administration; what happened recently in Australia (where the government intervened, both through diplomatic means and directly on behalf of the Facebook platform); and in Ireland (where the government went to the European Court to oppose a ruling that Apple had underpaid tax to the tune of 13 billion Euros). So the obsession with the US doesn’t say much about the rest of the world’s capacity to intervene and dictate the rules of the game. And yet China, India and Turkey have been busy in this space recently.
  • And as for the much-criticised GDPR, one could see it as an example of ‘trickle-down’ regulation, in that GDPR has become a kind of gold standard for other jurisdictions. Something like 12 countries have adopted GDPR-like legislation, including many in Latin America and beyond: Chile, Brazil, South Africa, Japan, Canada and so forth.

The flight from WhatsApp

John Naughton:

Not surprisingly, Signal has been staggering under the load of refugees from WhatsApp following Facebook’s ultimatum about sharing their data with other companies in its group. According to data from Sensor Tower, Signal was downloaded 8.8m times worldwide in the week after the WhatsApp changes were first announced on January 4. Compare that with 246,000 downloads the week before and you get some idea of the step-change. I guess the tweet — “Use Signal” — from Elon Musk on January 7 probably also added a spike.

In contrast, WhatsApp downloads during the period showed the reverse pattern — 9.7m downloads in the week after the announcement, compared with 11.3m before, a 14 per cent decrease.

This isn’t a crisis for Facebook — yet. But it’s a more serious challenge than the June 2020 advertising boycott. Evidence that Zuckerberg & Co are taking it seriously comes from announcements that Facebook has cancelled the February 8 deadline in its ultimatum to users. It now says that it will instead “go to people gradually to review the policy at their own pace before new business options are available on May 15.” As Charles Arthur has pointed out, the contrast between the leisurely pace at which Facebook has moved on questions of hate speech posted by alt-right outfits and its lightning response to the exodus from WhatsApp is instructive. It shows what really matters to the top brass.

Signal seems an interesting outfit, incidentally, and not just because of its technology. It’s a not-for-profit organisation, for one thing. Its software is open source — which means it can be independently assessed. And it’s been created by interesting people. Brian Acton, for example, is one of the two co-founders of WhatsApp, which Facebook bought in 2014 for $19B. He pumped $50m of that into Signal, and no doubt there’s a lot more where that came from. And Moxie Marlinspike, the CEO, is not only a cryptographer but also a hacker, a shipwright, and a licensed mariner. The New Yorker had a nice profile of him a while back.

Silencing Trump and authoritarian tech power

John Naughton:

It was eerily quiet on social media last week. That’s because Trump and his cultists had been “deplatformed”. By banning him, Twitter effectively took away the megaphone he’s been masterfully deploying since he ran for president. The shock of the 6 January assault on the Capitol was seismic enough to convince even Mark Zuckerberg that the plug finally had to be pulled. And so it was, even to the point of Amazon Web Services terminating the hosting of Parler, a Twitter alternative for alt-right extremists.

The deafening silence that followed these measures was, however, offset by an explosion of commentary about their implications for freedom, democracy and the future of civilisation as we know it. Wading knee-deep through such a torrent of opinion about the first amendment, free speech, censorship, tech power and “accountability” (whatever that might mean), it was sometimes hard to keep one’s bearings. But what came to mind continually was H L Mencken’s astute insight that “for every complex problem there is an answer that is clear, simple and wrong”. The air was filled with people touting such answers.

In the midst of the discursive chaos, though, some general themes could be discerned. The first highlighted cultural differences, especially between the US, with its sacred first amendment, on the one hand, and European and other societies, which have more ambivalent histories of moderating speech, on the other. The obvious problem with this line of discussion is that the first amendment is about government regulation of speech and has nothing whatsoever to do with tech companies, which are free to do as they like on their platforms.

A second theme viewed the root cause of the problem as the lax regulatory climate in the US over the last three decades, which led to the emergence of a few giant tech companies that effectively became the hosts for much of the public sphere. If there were many Facebooks, YouTubes and Twitters, so the counter-argument runs, then censorship would be less effective and problematic because anyone denied a platform could always go elsewhere.

Then there were arguments about power and accountability. In a democracy, those who make decisions about which speech is acceptable and which isn’t ought to be democratically accountable. “The fact that a CEO can pull the plug on Potus’s loudspeaker without any checks and balances,” fumed EU commissioner Thierry Breton, “is not only confirmation of the power of these platforms, but it also displays deep weaknesses in the way our society is organised in the digital space.” Or, to put it another way, who elected the bosses of Facebook, Google, YouTube and Twitter?

What was missing from the discourse was any consideration of whether the problem exposed by the sudden deplatforming of Trump and his associates and camp followers is actually soluble – at least in the way it has been framed until now. The paradox that the internet is a global system but law is territorial (and culture-specific) has traditionally been a way of stopping conversations about how to get the technology under democratic control. And it was running through the discussion all week like a length of barbed wire that snagged anyone trying to make progress through the morass.

All of which suggests that it’d be worth trying to reframe the problem in more productive ways. One interesting suggestion for how to do that came last week in a thoughtful Twitter thread by Blayne Haggart, a Canadian political scientist. Forget about speech for a moment, he suggests, and think about an analogous problem in another sphere – banking. “Different societies have different tolerances for financial risk,” he writes, “with different regulatory regimes to match. Just like countries are free to set their own banking rules, they should be free to set strong conditions, including ownership rules, on how platforms operate in their territory. Decisions by a company in one country should not be binding on citizens in another country.”

In those terms, HSBC may be a “global” bank, but when it’s operating in the UK it has to obey British regulations. Similarly, when operating in the US, it follows that jurisdiction’s rules. Translating that to the tech sphere suggests that the time has come to stop accepting the tech giants’ claims to be hyper-global corporations, whereas in fact they are US companies operating in many jurisdictions across the globe, paying as little local tax as possible and resisting local regulation with all the lobbying resources they can muster. Facebook, YouTube, Google and Twitter can bleat as sanctimoniously as they like about freedom of speech and the first amendment in the US, but when they operate here, as Facebook UK, say, then they’re merely British subsidiaries of an American corporation headquartered in California. And these subsidiaries must obey British laws on defamation, hate speech and other statutes that have nothing to do with the first amendment. Oh, and they should also pay taxes on their local revenues.

Great expectations: the role of digital media for protest diffusion in the 2010s

The decade after the 2008 economic crisis started with great expectations about the empowering potential of digital media for social movements. The wave of contention that started from Iceland and the MENA countries also swept Europe, where hundreds of thousands of Spanish protesters took part in the Indignados protests in 2011 and a smaller but dedicated group organized Occupy London – the British version of the US Occupy movement that shook US politics for years to come. Protesters during the Arab Spring often carried posters and placards with the logos and names of Facebook, Twitter and similar platforms, or even sprayed them as graffiti on walls.

It was a period of ubiquitous enthusiasm, with some scholars even claiming that the Internet was a necessary and sufficient condition for democratization. What is more, a number of scholars saw in the rise of digital platforms a great opportunity for the diffusion of protests within nations and transnationally at an unprecedented speed – leading political journalists and researchers noted that digital media had a key role in ‘Occupy protests spreading like wildfire’ and in spreading information during the Arab Spring.

Photo by Essam Sharaf

Already back in the early 2010s, the beginning of this techno-utopian decade, researchers emphasized that in Egypt, protests and information about them in fact spread in more traditional ways – through the interpersonal networks of cab drivers, labour unions, and football hooligans, among others. What is more, protests in the aftermath of the 2008 economic crisis spread much more slowly than the 1848 Spring of the Peoples protests, due to the need for laborious cultural translation from one region to another. Ultimately, in spite of the major promises of social media, most protest mobilization and diffusion still depends on face-to-face interactions and established protest traditions.

Yet the trend of expecting too much from digital media is countered by an equally dangerous trend – claiming they haven’t changed anything in the world of mobilization. The media ecology approach of Emiliano Treré and Alice Mattoni escapes the pitfalls of both approaches by studying how activists use digital media in combination and interaction with a number of other types of media in hybrid media ecologies.

In a book that I just published, I apply the media ecology approach to study the diffusion of Green and left-wing protests against austerity and free trade in the EU after 2008. One of the greatest things about focusing on media beyond Facebook and Twitter is the multiple unexpected angles it gives to events we all thought we knew well. While activists and researchers alike have been fascinated with the promise of digital media, looking at the empirical material with unbiased eyes revealed a great deal about the key role of other types of media for protest diffusion.

To begin with: books! The very name of the Indignados protests came from the title of Stéphane Hessel’s book “Indignez-vous!”. But books by authors such as Joseph Stiglitz, Wolfgang Streeck, Ernesto Laclau and Yannis Varoufakis have been no less important for spreading ideas and informing protesters across the EU. In his recent book “Translating the Crisis”, the Spanish scholar Fruela Fernandez notes a boom in publishing houses translating political books in Spain in the period surrounding the birth and eruption into public space of the Indignados movement.

Similarly, mainstream media have been of crucial importance for spreading information on protests, protest ideas and tactics across the EU in the last decade. Mainstream media such as The Guardian, BBC, El País, etc. reported in much detail on the use of digital media by social movements such as Occupy or Indignados, even sharing Twitter and Facebook hashtags, links to Facebook groups and live-streams in articles. Mainstream media thus popularized the message (and media practices) of protesters further than they could have possibly imagined. In fact, mainstream media’s fascination with the digital practices of new social movements goes a long way to explain their largely favourable attitude to the protests of the early 2010s. Such favourable coverage by mainstream media indeed contradicts the expectation of most social movement scholars that media would largely ignore or misrepresent protesters.

Another type of protest diffusion that has remained woefully neglected but played a key role in the spread of progressive economic protests in the EU was face-to-face communication and, as simple as it may sound, walking! During the Spanish Indignados protests hundreds of protesters marched from all parts of Spain to gather in Madrid. A small part of them continued marching to Brussels, where they staged theatre plays and discussions, and then headed to Greece. These marches took weeks and involved protesters stopping in villages and cities on the way and engaging local people in discussions. Sharing a physical space and sharing food have been among the most efficient ways to diffuse a message and reach more people with it. Of course, the marchers kept live blogs and diaries of their journeys (which in themselves constitute rich materials for future research), but it is the combination of traveling, meeting people in person, and using digital media that is truly interesting.

In my book, I give many more examples of how progressive protesters used various types of media to spread protest. Beyond providing a richer and more accurate picture of progressive economic protests in the 2010s, the book can hopefully serve also as a useful reminder for researchers of the radical right. The 2010s that started with research on social movements and democratization end with a major academic trend for studying the far right, and especially the way the far right has blossomed in the digital sphere.

If there is one thing to be learned from my book, it is that digital media are not the only tool activists use to spread protest. Thus, if one needs to understand the diffusion of far right campaigns and ideas, one needs to focus also on the blossoming of far right publishing houses, the increasing mainstreaming of far right ideas in mainstream press, and last but not least, the ways in which far right activists make inroads into civil society organizations and travel to share experiences – it is well-known, for example, that during the refugee crisis far right activists from Western Europe made several joint actions with activists from Eastern Europe to patrol borders together.

Understanding how protests, protest ideas and repertoires diffuse is crucial for activists who want to help spread progressive causes, but also for those who are worried about the spread of dangerous and anti-democratic ideas. After a decade of great expectations about the potential of digital media to democratize our societies, we find ourselves politically in an era of backlash. Yet, at least analytically we are now past the naive enthusiasm of the early 2010s and have a much better instrumentarium to understand how protest diffusion works. To rephrase Gramsci, we are now entering a period of pessimism of the will and optimism of the intellect.

It is not what we wished for. But shedding our illusions and utopian expectations about the potential of digital media is an important step for moving beyond techno-fetishism and understanding better the processes of mobilization that currently define our society.

Seeing Like a Social Media Site

The Anarchist’s Approach to Facebook

When John Perry Barlow published “A Declaration of the Independence of Cyberspace” nearly twenty-five years ago, he was expressing an idea that seemed almost obvious at the time: the internet was going to be a powerful tool to subvert state control. As Barlow explained to the “governments of the Industrial World,” those “weary giants of flesh and steel”—cyberspace does not lie within your borders. Cyberspace was a “civilization of the mind.” States might be able to control individuals’ bodies, but their weakness lay in their inability to capture minds.

In retrospect, this is a rather peculiar perspective on states’ real weakness, which has always been space. Literal, physical space—the endlessly vast terrain of the physical world—has historically been the friend of those attempting to avoid the state. As the scholar James Scott documented in The Art of Not Being Governed, in early stages of state formation, if the central government got too overbearing, the population simply could—and often did—move. Similarly, John Torpey noted in The Invention of the Passport that individuals wanting to avoid eighteenth-century France’s system of passes could simply walk from town to town, and passes were often “lost” (or, indeed, actually lost). As Richard Cobb noted, “there is no one more difficult to control than the pedestrian.” More technologically savvy ways of traveling—the bus, the boat, the airplane—actually made it easier for the state to track and control movement.

Cyberspace may be the easiest space of all in which to track people. It is, by definition, a mediated space. To visit, you must be in possession of hardware, which must be connected to a network, which is connected to other hardware, and other networks, and so on and so forth. Every single thing in the digital world is owned, controlled or monitored by someone else. It is impossible to be a pedestrian in cyberspace—you never walk alone.

States have always attempted to make their populations more trackable, and thus more controllable. Scott calls this the process of making things “legible.” It includes “the creation of permanent last names, the standardization of weights and measures, the establishment of cadastral surveys and population registers, the invention of freehold tenure, the standardization of language and legal discourse, the design of cities, and the organization of transportation.” These things make previously complicated, complex and unstandardized facts knowable to the center, and thus easier to administer. If the state knows who you are, and where you are, then it can design systems to control you. What is legible is manipulable.

Cyberspace—and the associated processing of data—offers exciting new possibilities for the administrative center to make individuals more legible precisely because, as Barlow noted, it is “a space of the mind.” Only now, it’s not just states that have the capacity to do this—but sites. As Shoshana Zuboff documented in her book The Age of Surveillance Capitalism, sites like Facebook collect data about us in an attempt to make us more legible and, thus, more manipulable. This is not, however, the first time that “technologically brilliant” centralized administrators have attempted to engineer society.

Scott uses the term “high modernism” to characterize schemes—attempted by planners across the political spectrum—that possess a “self-confidence about scientific and technical progress, the expansion of production, the growing satisfaction of human needs, the mastery of nature (including human nature), and, above all, the rational design of social order commensurate with the scientific understanding of natural laws.” In Seeing Like a State, Scott examines a number of these “high modernist” attempts to engineer forests in eighteenth-century Prussia and Saxony, urban spaces in Paris and Brasilia, rural populations in ujamaa villages, and agricultural production in Soviet collective farms (to name a few). Each time, central administrators attempted to make complex, complicated processes—from people to nature—legible, and then engineer them into rational, organized systems based on scientific principles. It usually ended up going disastrously wrong—or, at least, not at all the way central authorities had planned it.

The problem, Scott explained, is that “certain forms of knowledge and control require a narrowing of vision. . . designed or planned social order is necessarily schematic; it always ignores essential features of any real, functioning social order.” For example, mono-cropped forests became more vulnerable to disease, depleted the soil structure, and destroyed the diversity of the flora, insect, mammal, and bird populations, which took generations to restore. The streets of Brasilia had not been designed with any local, community spaces where neighbors might interact; and, anyway, the planners forgot—ironically—to plan for construction workers, who subsequently founded their own settlement on the outskirts of the city, organized to defend their land and demanded urban services and secure titles. By 1980, Scott explained, “seventy-five percent of the population of Brasilia lived in settlements that had never been anticipated, while the planned city had reached less than half of its projected population of 557,000.” Contrary to Zuboff’s assertion that we are losing “the right to a future tense,” individuals and organic social processes have shown a remarkable capacity to resist and subvert otherwise brilliant plans to control them.

And yet this high modernism characterizes most approaches to “regulating” social media, whether self-regulatory or state-imposed. And, precisely because cyberspace is so mediated, it is more difficult for users to resist or subvert the centrally controlled processes imposed upon them. Misinformation on Facebook proliferates—and so the central administrators of Facebook try to engineer better algorithms, or hire legions of content moderators, or make centralized decisions about labeling posts, or simply kick off users. It is, in other words, a classic high-modernist approach to socially engineer the space of Facebook, and all it does is result in the platform’s ruler—Mark Zuckerberg—consolidating more power. (Coincidentally, fellow Power-Shift contributor Jennifer Cobbe argued something quite similar in her recent article about the power of algorithmic censorship.) Like previous attempts to engineer society, this one probably will not work well in practice—and there may be disastrous, authoritarian consequences as a result.

So what is the anarchist approach to social media? Consider this description of an urban community by twentieth-century activist Jane Jacobs, as recounted by Scott:

“The public peace-the sidewalk and street peace-of cities . . . is kept by an intricate, almost unconscious network of voluntary controls and standards among the people themselves, and enforced by the people themselves. . . . [an] incident that occurred on [Jacobs’] mixed-used street in Manhattan when an older man seemed to be trying to cajole an eight or nine-year-old girl to go with him. As Jacobs watched this from her second-floor window, wondering if she should intervene, the butcher’s wife appeared on the sidewalk, as did the owner of the deli, two patrons of a bar, a fruit vendor, and a laundryman, and several other people watched openly from their tenement windows, ready to frustrate a possible abduction. No “peace officer” appeared or was necessary. . . . There are no formal public or voluntary organizations of urban order here—no police, no private guards or neighborhood watch, no formal meetings or officeholders. Instead, the order is embedded in the logic of daily practice.”

How do we make social media sites more like Jacobs’ Manhattan, where people—not police or administrators—on “sidewalk terms” are empowered to shape their own cyber spaces? 

There may already be one example: Wikipedia. 

Wikipedia is not often thought of as an example of a social media site—but, as many librarians will tell you, it is not really an encyclopedia either. Wikipedia is not only a remarkable repository of user-generated content; it has also been incredibly resilient to misinformation and extremist content. Indeed, while debates around Facebook wonder whether the site has eroded public discourse to such an extent that democracy itself has been undermined, debates around Wikipedia center on whether it is as accurate as the expert-generated content of Encyclopedia Britannica. (Encyclopedia Britannica says no; Wikipedia says it’s close.)

The difference is that Wikipedia empowers users. Anyone, absolutely anyone, can update Wikipedia. Everyone can see who has edited what, allowing users to self-regulate—which is how users identified that suspected Russian agent Maria Butina was probably changing her own Wikipedia page, and changed it back. This radical transparency and empowerment produces organic social processes where, much like in the Manhattan street, individuals collectively mediate their own space. And, most importantly, it is dynamic—Wikipedia changes all the time. Instead of a static ruling (such as Facebook’s determination that the iconic photo of Napalm Girl would be banned for child nudity), Wikipedia’s process produces dialogue and deliberation, where communities constantly socially construct meaning and knowledge. Finally, because cyberspace is ultimately mediated space—individuals cannot just “walk” or “wander” across sidewalks, like in the real world—Wikipedia is mission-driven. It does not have the amorphous goal of “connecting the global community”, but rather aims “to create a world in which everyone can freely share in the sum of all knowledge.”

This suggests that no amount of design updates or changes to terms of service will ever “fix” Facebook—whether they are imposed by the US government, Mark Zuckerberg or Facebook’s Oversight Board. Instead, it is the high-modernism that is the problem. The anarchist’s approach would prioritize building designs that empower people and communities—so why not adopt the wiki-approach to the public square functions that social media currently serves, like wiki-newspapers or wiki-newsfeeds?

It might be better to take the anarchist’s approach. No algorithms are needed.

by Alina Utrata

Review: ‘The Social Dilemma’ – Take #1

The Social Dilemma is an interesting — and much-discussed — docudrama about the impact of social media on society. We thought it’d be interesting to have a series in which we gather different takes on the film. Here’s Take #1…

Spool forward a couple of centuries. A small group of social historians drawn from the survivors of climate catastrophe are picking through the documentary records of what we are currently pleased to call our civilisation, and they come across a couple of old movies. When they’ve managed to find a device on which they can view them, it dawns on them that these two films might provide an insight into a great puzzle: how and why did the prosperous, apparently peaceful societies of the early 21st century implode?

The two movies are The Social Network, which tells the story of how a po-faced Harvard dropout named Mark Zuckerberg created a powerful and highly profitable company; and The Social Dilemma, which is about how the business model of this company – as ruthlessly deployed by its po-faced founder – turned out to be an existential threat to the democracy that 21st-century humans once enjoyed.

Both movies are instructive and entertaining, but the second one leaves one wanting more. Its goal is admirably ambitious: to provide a compelling, graphic account of what the business model of a handful of companies is doing to us and to our societies. The intention of the director, Jeff Orlowski, is clear from the outset: to reuse the strategy deployed in his two previous documentaries on climate change – nicely summarised by one critic as “bring compelling new insight to a familiar topic while also scaring the absolute shit out of you”.

For those of us who have for years been trying – without notable success – to spark public concern about what’s going on in tech, it’s fascinating to watch how a talented movie director goes about the task. Orlowski adopts a two-track approach. In the first, he assembles a squad of engineers and executives – people who built the addiction-machines of social media but have now repented – to talk openly about their feelings of guilt about the harms they inadvertently inflicted on society, and explain some of the details of their algorithmic perversions.

They are, as you might expect, almost all males of a certain age and type. The writer Maria Farrell, in a memorable essay, describes them as examples of the prodigal techbro – tech executives who experience a sort of religious awakening and “suddenly see their former employers as toxic, and reinvent themselves as experts on taming the tech giants. They were lost and are now found.”

Biblical scholars will recognise the reference from Luke 15. The prodigal son returns having “devoured his living with harlots” and is welcomed with open arms by his old dad, much to the dismay of his more dutiful brother. Farrell is not so welcoming. “These ‘I was lost but now I’m found, please come to my Ted Talk’ accounts,” she writes, “typically miss most of the actual journey, yet claim the moral authority of one who’s ‘been there’ but came back. It’s a teleportation machine, but for ethics.”

It is, but Orlowski welcomes these techbros with open arms because they suit his purpose – which is to explain to viewers the terrible things that the surveillance capitalist companies such as Facebook and Google do to their users. And the problem with that is that when he gets to the point where we need ideas about how to undo that damage, the boys turn out to be a bit – how shall I put it? – incoherent.

The second expository track in the film – which is interwoven with the documentary strand – is a fictional account of a perfectly normal American family whose kids are manipulated and ruined by their addiction to social media. This is Orlowski’s way of persuading non-tech-savvy viewers that the documentary stuff is not only real, but is inflicting tangible harm on their teenagers. It’s a way of saying: Pay attention: this stuff really matters!

And it works, up to a point. The fictional strand is necessary because the biggest difficulty facing critics of an industry that treats users as lab rats is that of explaining to the rats what’s happening to them while they are continually diverted by the treats (in this case dopamine highs) being delivered by the smartphones that the experimenters control.

Where the movie fails is in its inability to accurately explain the engine driving this industry that harnesses applied psychology to exploit human weaknesses and vulnerabilities.

A few times it wheels on Prof Shoshana Zuboff, the scholar who gave this activity a name – “surveillance capitalism”, a mutant form of our economic system that mines human experience (as logged in our data trails) in order to produce marketable predictions about what we will do/read/buy/believe next. Most people seem to have twigged the “surveillance” part of the term, but overlooked the second word. Which is a pity because the business model of social media is not really a mutant version of capitalism: it’s just capitalism doing its thing – finding and exploiting resources from which profit can be extracted. Having looted, plundered and denuded the natural world, it has now turned to extracting and exploiting what’s inside our heads. And the great mystery is why we continue to allow it to do so.

John Naughton

Should you have a right to a Facebook account?

Alina Utrata

Now that the 2020 US presidential election has concluded, the post-mortem evaluation of how well social media platforms performed will begin. Since the content moderation debate has mostly focused on platforms’ willingness or unwillingness to remove content or accounts, the post-election coverage will almost inevitably center on who and what was removed or labeled.

In 2019, the New York Times published a feature story about individuals who had had their Facebook accounts suspended—possibly because they had been misidentified as fake accounts in a general security sweep. However, these users do not know for certain. Individuals were only told that their accounts had been disabled because of “suspicious activity”—the appeal process to restore suspended Facebook accounts is not a transparent one, and cases frequently drag on for extended periods with no resolution. (Facebook, as the article documents, has quite sophisticated techniques for catching individuals attempting to make multiple accounts, foreclosing the solution of a “second Facebook account” for suspended users.)

Facebook CEO Mark Zuckerberg has frequently said that he does not want the platform to become the “arbiter of free speech.” Constitutional free speech protections, however, restrict only governments. As the existence of these suspended accounts shows, in reality Facebook can limit speech or ban users for almost any reason it cares to put in its terms of service. It is a private corporation, not a government.

The problem of the exclusionary policies of private corporations might be less acute in a competitive marketplace. For example, it could be inconvenient if a personal feud with your local corner store leads to your being banned from the shop; but it is always possible to buy milk from another store down the road. It is different, of course, if you happen to live in Lawton, Oklahoma—or one of the hundreds of communities across the US where Walmart is the dominant monopoly, capturing 70% or more of the grocery market. Being banned from Walmart (whether for using its electric scooters while intoxicated or for violating the store’s policy against carrying guns) might be far more significant for your life and livelihood.

Facebook’s form of monopoly power means that being banned from the platform can have significant consequences for individuals’ lives: loss of the data hosted on the platform (like photos or old messages), of the ability to use Messenger to connect with friends and family, or of participation in professional or social groups organized only on Facebook. Some people depend on Facebook for their livelihoods, communicating with customers or selling on Facebook’s marketplace; or for political campaigns, reaching out to voters in a run for local city council, for example. The same dynamics are true for other digital monopolies, like Amazon. The recent House Judiciary report found that Amazon can, and often does, arbitrarily lower third-party sellers’ products in its search ranking, lengthen their shipping times, or kick them off the site entirely. About 850,000 businesses, or 37% of third-party sellers on Amazon, rely on Amazon as their sole source of income. Monopolies can be, as Zephyr Teachout argues, a form of economic tyranny.

There are two general approaches floated to remedying this monopoly power. The first is to “break them up.” Facebook’s or Amazon’s policies might be less important if there were many e-commerce or social networking sites in town—and perhaps their policies would improve if they had to compete with other platforms for users or sellers. On the other hand, Facebook might argue that the value of social networking sites lies in the fact that they are consolidated. As the sudden surge in popularity of the app Parler may soon demonstrate: there’s very little point in being on a social networking site if the people you want to reach aren’t there too. Alternative social networking sites may simply be complementary, rather than competitive. Similarly, Amazon might argue that it is convenient, and beneficial, to both consumers and sellers that e-commerce is located all in one place. Instead of searching online (by using another monopoly, Google) through hundreds of webpages with no guide as to quality, you can go to one portal at Amazon and find exactly what you want.

A second approach to tackling monopolies is regulation. For example, the state can and does get involved if a private corporation excludes you on the basis of a protected identity, such as race or sexual orientation. US Senator Elizabeth Warren’s call for Amazon, Apple or Google to choose whether they want to be “market platforms” or “market participators” is another example of the state’s attempt to impose regulations in order to make sure that these monopolies are more fair. The government also gets involved when product safety is at stake. For example, the Forum on Information and Democracy just published a report outlining recommended principles for regulating quality standards for digital platforms, in the same way that governments might require standards for food or medicine sold on the market. In this approach, the state imposes limits or controls on corporations to try to curb or reform their power over consumers. However, this approach requires active government enforcement and involvement. As the House Judiciary report documented, even though they are equipped with anti-trust laws, many US regulatory agencies have been slow or unwilling to take on the Big Tech monopolies. Corporations also point out that government involvement can stifle innovation and entrepreneurship.

However, there might be a third approach: democratization. Mark Zuckerberg has said that, “in a lot of ways Facebook is more like a government than a traditional company.” If that is the case, then it has been a long time since the United States tolerated a government with the kind of absolute power Mark Zuckerberg exerts over Facebook (as CEO and founder, Zuckerberg retains majority voting shares). So could we democratize Facebook, and make it a company ruled by consent of the governed rather than fiat of the king? Could Facebook users appoint representatives to a “Constitutional Convention” to draft Facebook’s terms of service, or adopt a Bill of Rights to guide design and algorithmic principles? Facebook’s Oversight Board has already been compared to a Supreme Court, so why not add a legislative branch too? Could we have elections of representatives to a Facebook legislature, which would pass “laws” about how the online community should be governed? (A Facebook legislature would arguably be more effective than the referendum process Facebook tried the last time it experimented with democratization.)

Crucially, however, any democratization process would have to be coupled with genuine democratic reform of Facebook’s corporate governance: a Facebook Parliament in name only wouldn’t achieve much if Mark Zuckerberg retained absolute control of the company. True democratization would require a change not just in who we think represents Facebook, but in who owns Facebook—or, rather, who ought to own Facebook. Mark Zuckerberg? Or we, its users? If the answer is Mark Zuckerberg, a Facebook account will always be a privilege, not a right.
