Some lessons of Trump’s short career as a blogger

By John Naughton

The same unaccountable power that deprived Donald J. Trump of his online megaphones could easily be deployed to silence other prominent figures, including those of whom liberals approve.

‘From the Desk of Donald J. Trump’ lasted just 29 days. It’s tempting to gloat over this humiliating failure of a politician hitherto regarded as an omnipotent master of the online universe.

Tempting but unwise, because Trump’s failure should alert us to a couple of unpalatable realities.

The first is that the eerie silence that descended after the former President was ‘deplatformed’ by Twitter and Facebook provided conclusive evidence of the power of these two private companies to control the networked public sphere.

Those who loathed Trump celebrated his silencing because they regarded him — rightly — as a threat to democracy.

But on the other hand nearly half of the American electorate voted for him. And the same unaccountable power that deprived him of his online megaphones could easily be deployed to silence other prominent figures, including those of whom liberals approve.

The other unpalatable reality is that Trump’s failure to build an online base from scratch should alert us to the way the utopian promise of the early Internet — that it would be the death of the ‘couch potato’, the archetypal passive media consumer — has not been realised. Trump, remember, had 88.9m followers on Twitter and over 33m fans on Facebook.

“The failure of Trump’s blog is not just a confirmation of the unaccountable power of those who own and control social media, but also a reflection of the way Internet users enthusiastically embraced the ‘push’ model of the Web over the ‘pull’ model that we techno-utopians once hoped might be the network’s future.”

Yet when he started his own blog they didn’t flock to it. In fact they were nowhere to be seen. Forbes reported that the blog had “less traffic than pet adoption site Petfinder and food site Eat This Not That.” And it was reported that he had shuttered it because “low readership made him look small and irrelevant”. Which it did.

What does this tell us? The answer, says Philip Napoli in an insightful essay in Wired,

“lies in the inescapable dynamics of how today’s online media ecosystem operates and how audiences have come to engage with content online. Many of us who study media have long distinguished between “push” media and “pull” media.

“Traditional broadcast television is a classic “push” medium, in which multiple content streams are delivered to a user’s device with very little effort required on the user’s part, beyond flipping the channels. In contrast, the web was initially the quintessential “pull” medium, where a user frequently needed to actively search to locate content interesting to them.

“Search engines and knowing how to navigate them effectively were central to locating the most relevant content online. Whereas TV was a “lean-back” medium for “passive” users, the web, we were told, was a “lean-forward” medium, where users were “active.” Though these generalizations no longer hold up, the distinction is instructive for thinking about why Trump’s blog failed so spectacularly.

“In the highly fragmented web landscape, with millions of sites to choose from, generating traffic is challenging. This is why early web startups spent millions of dollars on splashy Super Bowl ads on tired, old broadcast TV, essentially leveraging the push medium to inform and encourage people to pull their online content.

“Then social media helped to transform the web from a pull medium to a push medium...”


Credit: Adem AY for Unsplash

This theme was nicely developed by Cory Doctorow in a recent essay, “Recommendation engines and ‘lean-back’ media”.  The optimism of the early Internet era, he mused, was indeed best summarized in that taxonomy.

“Lean-forward media was intensely sociable: not just because of the distributed conversation that consisted of blog-reblog-reply, but also thanks to user reviews and fannish message-board analysis and recommendations.

“I remember the thrill of being in a hotel room years after I’d left my hometown, using Napster to grab rare live recordings of a band I’d grown up seeing in clubs, and striking up a chat with the node’s proprietor that ranged fondly and widely over the shows we’d both seen.

“But that sociability was markedly different from the “social” in social media. From the earliest days of Myspace and Facebook, it was clear that this was a sea-change, though it was hard to say exactly what was changing and how.

“Around the time Rupert Murdoch bought Myspace, a close friend had a blazing argument with a TV executive who insisted that the internet was just a passing fad: that the day would come when all these online kids grew up, got beaten down by work and just wanted to lean back.

“To collapse on the sofa and consume media that someone else had programmed for them, anaesthetizing themselves with passive media that didn’t make them think too hard.

“This guy was obviously wrong – the internet didn’t disappear – but he was also right about the resurgence of passive, linear media.”

This passive media, however, wasn’t the “must-see TV” of the 80s and 90s. Rather, it was the passivity of the recommendation algorithm, which created a per-user linear media feed, coupled with mechanisms like “endless scroll” and “autoplay,” that obliterated any trace of an active role for the aptly-named Web “consumer”.

As Napoli puts it,

“Social media helped to transform the web from a pull medium to a push medium. As platforms like Twitter and Facebook generated massive user bases, introduced scrolling news feeds, and developed increasingly sophisticated algorithmic systems for curating and recommending content in these news feeds, they became a vital means by which online attention could be aggregated.

“Users evolved, or devolved, from active searchers to passive scrollers, clicking on whatever content that their friends, family, and the platforms’ news feed algorithms put in front of them. This gave rise to the still-relevant refrain “If the news is important, it will find me.” Ironically, on what had begun as the quintessential pull medium, social media users had reached a perhaps unprecedented degree of passivity in their media consumption. The leaned-back “couch potato” morphed into the hunched-over “smartphone zombie.””

So the failure of Trump’s blog is not just a confirmation of the unaccountable power of those who own and control social media, but also a reflection of the way Internet users enthusiastically embraced the ‘push’ model of the Web over the ‘pull’ model that we techno-utopians once hoped might be the network’s future.

Apple clearly has power, but it isn’t accountable

By John Naughton

The only body that has, to date, been able to exert real control over the data-tracking industry is a giant private company which itself is subject to serious concerns about its monopolistic behaviour. Where is democracy in all this?

A few weeks ago, Apple dropped its long-promised bombshell on the data-tracking industry.

The latest version (14.5) of iOS — the operating system of the iPhone — included a provision that required app users explicitly to confirm that they wished to be tracked across the Internet in their online activities.

At the heart of the switch is an identifier known as the “Identifier for Advertisers”, or IDFA. It turns out that every iPhone comes with one of these identifiers, the object of which is to provide participants in the hidden real-time bidding system with data about the user’s interests.

For years, iPhone users had had the option to switch it off by digging into the privacy settings of their devices; but, because they’re human, very few had bothered to do that.

From 14.5 onwards, however, they couldn’t avoid making a decision, and you didn’t have to be a Nobel laureate to guess that most iPhone users would opt out.

Which explains why those who profit from the data-tracking racket had for months been terminally anxious about Apple’s perfidy.

Some of the defensive PR mounted on their behalf — for example Facebook’s weeping about the impact on small, defenceless businesses — defied parody.

“We have evidence of its [real-time bidding] illegitimacy, and a powerful law on the statute book which in principle could bring it under control — but which we appear unable to enforce.”

Other counter-offensives included attacks on Apple’s monopolistic control over its Apps store, plus charges of rank hypocrisy – that changes in version 14.5 were not motivated by Apple’s concerns for users’ privacy but by its own plans to enter the advertising business. And so on.

It’ll be a while until we know for sure whether the apocalyptic fears of the data-trackers were accurate.

It takes time for most iPhone users to install operating system updates, and so these are still relatively early days. But the first figures are promising. One data-analytics company, for example, found that in the early weeks the daily opt-out rate for American users was around 94 per cent.

This is much higher than surveys conducted in the run-up to the change had suggested — one had estimated an opt-out rate closer to 60 per cent.

If the opt-out rate is as high as we’ve seen so far, then it’s bad news for the data-tracking racket and good news for humanity. And if you think that description of what the Financial Times estimates to be a $350B industry is unduly harsh, then a glance at a dictionary may be helpful.

Merriam-Webster, for example, defines ‘racket’ as “a fraudulent scheme, enterprise, or activity” or “a usually illegitimate enterprise made workable by bribery or intimidation”.

It’s not clear whether the computerised, high-speed auction system in which online ads are traded benefits from ‘bribery or intimidation’, but it is certainly illegal — and currently unregulated.

That is the conclusion of a remarkable recent investigation by two legal scholars, Michael Veale and Frederik Zuiderveen Borgesius, who set out to examine whether this ‘real-time bidding’ (RTB) system conforms to European data-protection law.

“The irony in this particular case is that there’s no need for such an overhaul: Europe already has the law in place.”

They asked whether RTB complies with three rules of the GDPR (General Data Protection Regulation) — the requirement for a legal basis, transparency, and security. They showed that for each of the requirements, most RTB practices do not comply. “Indeed”, they wrote, “it seems close to impossible to make RTB comply”. So, they concluded, it needs to be regulated.

It does.

Often the problem with tech regulation is that our legal systems need to be overhauled to deal with digital technology. But the irony in this particular case is that there’s no need for such an overhaul: Europe already has the law in place.

It’s the GDPR, which is part of the legal code of every EU country and has provision for swingeing punishments of infringers. The problem is it’s not being effectively enforced.

Why not? The answer is that the EU delegates regulatory power to the relevant institutions — in this case Data Protection Authorities — of its member states. And these local outfits are overwhelmed by the scale of the task – and are lamentably under-resourced for it.

Half of Europe’s DPAs have only five technical experts or fewer. And the Irish Data Protection Authority, on whose patch most of the tech giants have their European HQs, has the heaviest enforcement workload in Europe and is clearly swamped.

So here’s where we are: an illegal online system has been running wild for years, generating billions of profits for its participants.

We have evidence of its illegitimacy, and a powerful law on the statute book which in principle could bring it under control — but which we appear unable to enforce.

And the only body that has, to date, been able to exert real control over the aforementioned racket is… a giant private company which itself is subject to serious concerns about its monopolistic behaviour. And the question for today: where is democracy in all this? You only have to ask to know the answer.


A version of this post appeared in The Observer on 23 May, 2021.

In Review: Bellingcat and the unstoppable Mr Higgins

By John Naughton

Review of We are Bellingcat: An Intelligence Agency for the People, by Eliot Higgins, Bloomsbury, 255pp

On the face of it, this book tells an implausible story. It’s about how an ordinary guy – a bored administrator in Leicester, to be precise – becomes a skilled Internet sleuth solving puzzles and crimes which appear to defeat some of the world’s intelligence agencies. And yet it’s true. Eliot Higgins was indeed a bored administrator, out of a job and looking after his young daughter in 2011 while his wife went out to work. He was an avid watcher of YouTube videos, especially of those emanating from the Syrian civil war, and one day had an epiphany: “If you searched online you could find facts that neither the press nor the experts knew.”

Higgins realised that one reason why mainstream media were ignoring the torrent of material from the war zone that was being uploaded to YouTube and other social media channels was that these outlets were unable to verify or corroborate it. So he started a blog — the Brown Moses blog — and discovered that a smattering of other people had had a similar realisation, which was the seed crystal for the emergence of an online community that converged around news events that had left clues on YouTube, Facebook, Twitter and elsewhere.

This community of sleuths now sails under the flag of Bellingcat, a name taken from the children’s story about the ingenious mice who twig that the key to obtaining early warning of a cat’s approach is to put a bell round its neck. This has led to careless journalists calling members of the community “Bellingcats” — which leads them indignantly to point out that they are the mice, not the predators!

The engaging name belies a formidable little operation which has had a series of impressive scoops. One of the earliest involved confirming Russian involvement in the downing of MH17, the Malaysia Airlines aircraft brought down by a missile while flying over Ukraine. Others included the identification of the Russian FSB agents responsible for the Skripal poisonings, and the discovery of the FSB operative who tried to assassinate Alexei Navalny, the Russian democracy campaigner and Putin opponent who is now imprisoned — and, reportedly, seriously ill — in a Russian gaol.

‘We are Bellingcat’ is a low-key account of how this remarkable outfit evolved and of the role that Mr Higgins played in its development. The deadpan style reflects the author’s desire to project himself as an ordinary Joe who stumbled on something significant and worked at it in collaboration with others. This level of understatement is admirable but not entirely persuasive for the simple reason that Higgins is no ordinary Joe. After all, one doesn’t make the transition from a bored, low-level administrator to become a Research Fellow at U.C. Berkeley’s Human Rights Center and a member of the International Criminal Court’s Technology Advisory Board without having some exceptional qualities.

“One could say that the most seminal contribution Bellingcat has made so far is to explore and disseminate the tools needed to convert user-generated content into more credible information — and maybe, sometimes, into the first draft of history.”

One of the most striking things about Bellingcat’s success is that — at least up to this stage — its investigative methodology is (to use a cliché) not rocket science. It’s a combination of determination, stamina, cooperation, Internet-savviness, geolocation (where did something happen?), chronolocation (when did it happen?) and an inexhaustible appetite for social-media-trawling. There is, in other words, a Bellingcat methodology — and any journalist can learn it, provided his or her employer is prepared to provide the time and opportunity to do so. In response, Bellingcat has been running ‘boot camps’ for journalists — first in Germany, Britain and France, and — hopefully — soon in the US. And the good news is that some mainstream news outlets, including the New York Times, the Wall Street Journal and the BBC, have been setting up journalistic units working in similar ways.

In the heady days of the so-called ‘Arab spring’ there was a lot of excited hype about the way the smartphone had launched a new age of ‘Citizen Journalism’. This was a kind of category error which confused user-generated content badged as ‘witnessing’ with the scepticism, corroboration, verification, etc. that professional journalism requires. So in that sense one could say that the most seminal contribution Bellingcat has made so far is to explore and disseminate the tools needed to convert user-generated content into more credible information — and maybe, sometimes, into the first draft of history.

Mr Higgins makes continuous use of the phrase “open source” to describe information that he and his colleagues find online, when what he really means is that the information — because it is available online — is in the public domain. It is not ‘open source’ in the sense that the term is used in the computer industry, but I guess making that distinction is now a lost cause because mainstream media have re-versioned the phrase.

The great irony of the Bellingcat story is that the business model that finances the ‘free’ services (YouTube, Twitter, Facebook, Reddit, Instagram et al) that are polluting the public sphere and undermining democracy is also what provides Mr Higgins and his colleagues with the raw material from which their methodology extracts so many scoops and revelations. Mr Higgins doesn’t have much time for those of us who are hyper-critical of the tech industry. He sees it as a gift horse whose teeth should not be too carefully examined. And I suppose that, in his position, I might think the same.

Forthcoming in British Journalism Review, vol. 32, No 2, June 2021.

In Review: Democracy, law and controlling tech platforms

By John Naughton

Notes on a discussion of two readings

  1. Paul Nemitz: “Constitutional democracy and technology in the age of artificial intelligence”, Philosophical Transactions of the Royal Society A, 15 October 2018. https://doi.org/10.1098/rsta.2018.0089
  2. Daniel Hanley: “How Antitrust Lost its Bite”, Slate, April 6, 2021 – https://tinyurl.com/2ht4h8wf

I had proposed these readings because (a) the Nemitz paper provided a vigorous argument for resisting the ‘ethics-theatre’ currently being orchestrated by the tech industry as a pre-emptive strike against regulation by law; and (b) the Hanley article argued the need for firm rules in antitrust legislation rather than the latitude currently offered to US judges by the so-called “rule of reason”.

Most of the discussion revolved around the Nemitz article. Here are my notes of the conversation, using the Chatham House Rule as a reporting principle.

  • Nemitz’s assertion that “The Internet and its failures have thrived on a culture of lawlessness and irresponsibility” was challenged as an “un-nuanced and uncritical view of how law operates in the platform economy”. The point was that platform companies do of course ignore and evade law as and when it suits them, but at a corporate level they also rely on it and use it as both ‘a sword and a shield’; law has as a result played a major role in structuring the internet that now exists and producing the dominant platform companies we have today, and has been leveraged very successfully to their advantage. Even the egregious abuse of personal data (which may seem closest to being “lawless”) largely still runs within the law’s overly permissive framework. Where it doesn’t, it generally tries to evade the law by skirting around gaps within it, so even this seemingly extra-legal processing is itself shaped by the law (and cannot therefore be “lawless”). So any respect for the law that the companies profess is indeed disingenuous, but describing the internet as a “lawless” space – as Nemitz does – misses a huge part of the dynamic that got us here, and is a real problem if we’re going to talk about the potential role of law in getting us out. Legal reform is needed, but if it’s going to work then we have to be aware of and account for these things.
  • This critique stemmed from the view that law is both produced by society and in turn reproduces society, and in that sense always functions essentially as an instrument of power — so it has historically been (and remains) a tool of dominance, of hierarchy, of exclusion and marginalisation, of capital and of colonialism. In that sense, the embryonic Silicon Valley giants slotted neatly into that paradigm. And so, could Nemitz’s insistence on the rule of law — without a critical understanding of what that actually means — itself be a problem?

“They [tech companies] employ the law when it suits them and do so very strategically – as both a ‘sword’ and a ‘shield’ – and that’s played a major role in getting the platform ecosystem to where it is now.”

  • On the one hand, laws are the basic tools that liberal democracies have available for bringing companies under democratic (i.e. accountable) control. On the other hand, large companies have always been adept (and, in liberal democracies, very successful) at using the law to further their interests and cement their power.
  • This point is particularly relevant to tech companies. They’ve used law to bring users within their terms of service and thereby to hold on to assets (e.g. exabytes of user data) that they probably wouldn’t have been able to do otherwise. They use law to enable the pretence that click-through EULAs are, in fact, contracts. So they employ the law when it suits them and do so very strategically — as both a ‘sword’ and a ‘shield’ — and that’s played a major role in getting the platform ecosystem to where it is now.
  • Also, law plays a big role in driving and shaping technological development. Technologies don’t emerge in a vacuum; they’re a product of their context, and law is a major part of that context. So the platform business models and what’s happening on the internet aren’t outside of the law; they’re constructed through, and depend upon, it. So it’s misleading when people argue (like Nemitz?) that we need to use law to change things — as if the law isn’t there already and may actually be partially enabling things that are societally damaging. So unless we properly understand the role of law in getting us to our current problematique, talking about how law can help us is like talking about using a tool to fix a problem without realising that the tool itself is part of the problem.

“It’s the primacy of democracy, not of law that’s crucial.”

  • There was quite a lot of critical discussion of the GDPR on two fronts — its ‘neoliberal’ emphasis on individual rights; and things that are missing from it. Those omissions and gaps are not necessarily mistakes; they may be the result of political choices.
  • One question is whether there is a deficit of law around who owns property in the cloud. If you upload a photo to Facebook or whatever, it’s unclear whether you have property rights over it or whether the cloud-computing provider does. The general consensus seems to be that that’s a tricky question! (Questions about who owns your data generally are.)
  • Even if laws exist, enforcement looks like a serious problem. Sometimes legal coercion of companies is necessary but difficult. And because of the ‘placelessness’ of the internet, it seems possible that a corporation or an entity could operate in a place where there’s no nexus through which to coerce it. Years ago, Tim Wu and Jack Goldsmith’s book Who Controls the Internet? recounted how Yahoo discovered that it couldn’t just do whatever it wanted in France, because it had assets in that jurisdiction and France seized them. Would that be the case with, say, Facebook, now? (Just think of why all of the tech giants have their European HQs in Ireland.)
  • It’s the primacy of democracy, not of law that’s crucial. If the main argument of the Nemitz paper is interpreted as the view that law will solve our problems, that’s untenable. But if we take as the main argument that we need to democratically discuss what the laws are, then we all agree with this. (But isn’t that just vacuous motherhood and apple pie?)
  • More on GDPR… it sets up a legal framework in which we can regulate consent — that’s a good thing that most people can agree on. But the way that GDPR is constructed is extremely individualistic. For example, it disempowers data subjects even in the name of giving them rights, because it individualises them. So even the way that it’s constructed actually goes some way towards undermining its good effects. It’s based on the assumption that if we give people rights then everything will be fine. (Shades of the so-called “Right to be Forgotten”.)

As for the much-criticised GDPR, one could see it as an example of ‘trickle-down’ regulation, in that GDPR has become a kind of gold standard for other jurisdictions.

  • Why hasn’t academic law been a more critical discipline in these areas? The answer seems to be that legal academia (at least in the UK, with some honourable exceptions) seems exceptionally uncritical of tech, and any kind of critical thinking is relatively marginalised within the discipline compared to other social sciences. Also most students want to go into legal practice, so legal teaching and scholarship tends to be closely tied to law as a profession and, accordingly, the academy tends to be oriented around ‘producing’ practising lawyers.
  • There was some dissent from the tenor of the preceding discourse about the centrality of law and especially about the improbability of overturning such a deeply embedded cognitive and professional system. This has echoes of a venerable strand in political thinking which says that in order to change anything you have to change everything and it’s worse to change a little bit than it is to change everything — which means nothing actually changes. This is the doctrine that it’s quite impossible to do any good at all unless you do the ultimate good, which is to change everything. (Which meant, capitalism and colonialism and original sin, basically!) On the other hand, there is pragmatic work — making tweaks and adjustments — which though limited in scope might be beneficial and appeal to liberal reformers (and are correspondingly disdained by lofty adherents to the Big Picture).
  • There were some interesting perspectives based on the Hanley article. Conversations with people across disciplines show that technologists seem to suggest a technical solution for everything (solutionism rules OK?), while lawyers view the law as a solution for everything. But discussions with political scientists and sociologists mostly involve “fishing for ideas” — which is a feature, not a bug, because it suggests that minds are not set in silos — yet. But one of the problems with the current discourse — and with these two articles — is that the law currently seems to be filling the political void. And the discourse seems to reflect public approval of the judicial approach compared with the trade-offs implicit in Congress. But the Slate article shows the pernicious influence, or even interference, of an over-politicised judiciary in politics and policy enforcement. (The influence of Robert Bork’s 1978 book and the Chicago School is still astonishing to contemplate.)
  • The Slate piece seems to suffer from a kind of ‘neocolonial governance syndrome’ — the West and the Rest. We all know section 230 by heart. And now it’s the “rule of reason” and the consumer welfare criterion of Bork. It’s important to understand the US legal and political context. But we should also understand: the active role of the US administration; what happened recently in Australia (where the government intervened, both through diplomatic means and directly on behalf of the Facebook platform); and in Ireland (where the government went to the European Court to oppose a ruling that Apple had underpaid tax to the tune of 13 billion Euros). So the obsession with the US doesn’t say much about the rest of the world’s capacity to intervene and dictate the rules of the game. And yet China, India and Turkey have been busy in this space recently.
  • And as for the much-criticised GDPR, one could see it as an example of ‘trickle-down’ regulation, in that GDPR has become a kind of gold standard for other jurisdictions. Something like a dozen countries — including Chile, Brazil, South Africa, Japan and Canada — have adopted GDPR-like legislation.

Mail-In Voter Fraud: Anatomy of a Disinformation Campaign

John Naughton:

Yochai Benkler and a team from the Berkman-Klein Centre have published an interesting study which comes to conclusions that challenge conventional wisdom about the power of social media.

“Contrary to the focus of most contemporary work on disinformation”, they write,

our findings suggest that this highly effective disinformation campaign, with potentially profound effects for both participation in and the legitimacy of the 2020 election, was an elite-driven, mass-media led process. Social media played only a secondary and supportive role.

This chimes with the study on networked propaganda that Benkler, Robert Faris and Hal Roberts conducted in 2015-16 and published in 2018 in Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics. They argued that the right-wing media ecosystem in the US operates fundamentally differently from the rest of the media environment. Their view was that longstanding institutional, political, and cultural patterns in American politics interacted with technological change since the 1970s to create a propaganda feedback loop in American conservative media. This dynamic has, they thought, marginalised centre-right media and politicians, radicalised the right-wing ecosystem, and rendered it susceptible to propaganda efforts, foreign and domestic.

The key insight in both studies is that we are dealing with an ecosystem, not a machine, which is why focussing exclusively on social media as a prime explanation for the political upheavals of the last decade is unduly reductionist. In that sense, much of the public (and academic) commentary on social media’s role brings to mind the cartoon of the drunk looking for his car keys under a lamppost, not because he lost them there, but because at least there’s light. Because social media are relatively new arrivals on the scene, it’s (too) tempting to over-estimate their impact. Media ecology provides a better analytical lens because it means being alert to factors like diversity, symbiosis, feedback loops and parasitism rather than to uni-causal explanations.

(Footnote: there’s a whole chapter on this — with case-studies — in my book From Gutenberg to Zuckerberg — published way back in 2012!)

The flight from WhatsApp

John Naughton:

Not surprisingly, Signal has been staggering under the load of refugees from WhatsApp following Facebook’s ultimatum about sharing their data with other companies in its group. According to data from Sensor Tower, Signal was downloaded 8.8m times worldwide in the week after the WhatsApp changes were first announced on January 4. Compare that with 246,000 downloads the week before and you get some idea of the step-change. I guess the tweet — “Use Signal” — from Elon Musk on January 7 probably also added a spike.

In contrast, WhatsApp downloads during the period showed the reverse pattern — 9.7m downloads in the week after the announcement, compared with 11.3m before, a 14 per cent decrease.
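Those proportions are easy to sanity-check. A quick sketch, using only the Sensor Tower estimates quoted above (the variable names are mine):

```python
# Weekly download figures, as reported by Sensor Tower (see above)
signal_before, signal_after = 246_000, 8_800_000
whatsapp_before, whatsapp_after = 11_300_000, 9_700_000

# Signal's week-on-week multiple: roughly a 36-fold jump
signal_multiple = signal_after / signal_before

# WhatsApp's week-on-week decline, as a percentage of the earlier week
whatsapp_drop = (whatsapp_before - whatsapp_after) / whatsapp_before * 100

print(f"Signal grew ~{signal_multiple:.0f}x; WhatsApp fell ~{whatsapp_drop:.0f} per cent")
```

Which confirms the 14 per cent figure in the text, and puts Signal’s surge at around thirty-six times its previous week.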

This isn’t a crisis for Facebook — yet. But it’s a more serious challenge than the June 2020 advertising boycott. Evidence that Zuckerberg & Co are taking it seriously comes from announcements that Facebook has cancelled the February 8 deadline in its ultimatum to users. It now says that it will instead “go to people gradually to review the policy at their own pace before new business options are available on May 15.” As Charles Arthur has pointed out, the contrast between the leisurely pace at which Facebook has moved on questions of hate speech posted by alt-right outfits and its lightning response to the exodus from WhatsApp is instructive. It shows what really matters to the top brass.

Signal seems an interesting outfit, incidentally, and not just because of its technology. It’s a not-for-profit organisation, for one thing. Its software is open source — which means it can be independently assessed. And it’s been created by interesting people. Brian Acton, for example, is one of the two co-founders of WhatsApp, which Facebook bought in 2014 for $19B. He pumped $50m of that into Signal, and no doubt there’s a lot more where that came from. And Moxie Marlinspike, the CEO, is not only a cryptographer but also a hacker, a shipwright, and a licensed mariner. The New Yorker had a nice profile of him a while back.

Silencing Trump and authoritarian tech power

John Naughton:

It was eerily quiet on social media last week. That’s because Trump and his cultists had been “deplatformed”. By banning him, Twitter effectively took away the megaphone he’d been masterfully deploying since he first ran for president. The shock of the 6 January assault on the Capitol was seismic enough to convince even Mark Zuckerberg that the plug finally had to be pulled. And so it was, even to the point of Amazon Web Services terminating the hosting of Parler, a Twitter alternative for alt-right extremists.

The deafening silence that followed these measures was, however, offset by an explosion of commentary about their implications for freedom, democracy and the future of civilisation as we know it. Wading knee-deep through such a torrent of opinion about the first amendment, free speech, censorship, tech power and “accountability” (whatever that might mean), it was sometimes hard to keep one’s bearings. But what came to mind continually was H L Mencken’s astute insight that “for every complex problem there is an answer that is clear, simple and wrong”. The air was filled with people touting such answers.

In the midst of the discursive chaos, though, some general themes could be discerned. The first highlighted cultural differences between the US, with its sacred first amendment, on the one hand, and, on the other, European and other societies with more ambivalent histories of moderating speech. The obvious problem with this line of discussion is that the first amendment is about government regulation of speech and has nothing whatsoever to do with tech companies, which are free to do as they like on their platforms.

A second theme viewed the root cause of the problem as the lax regulatory climate in the US over the last three decades, which led to the emergence of a few giant tech companies that effectively became the hosts for much of the public sphere. If there were many Facebooks, YouTubes and Twitters, so the counter-argument runs, then censorship would be less effective and problematic because anyone denied a platform could always go elsewhere.

Then there were arguments about power and accountability. In a democracy, those who make decisions about which speech is acceptable and which isn’t ought to be democratically accountable. “The fact that a CEO can pull the plug on Potus’s loudspeaker without any checks and balances,” fumed EU commissioner Thierry Breton, “is not only confirmation of the power of these platforms, but it also displays deep weaknesses in the way our society is organised in the digital space.” Or, to put it another way, who elected the bosses of Facebook, Google, YouTube and Twitter?

What was missing from the discourse was any consideration of whether the problem exposed by the sudden deplatforming of Trump and his associates and camp followers is actually soluble – at least in the way it has been framed until now. The paradox that the internet is a global system but law is territorial (and culture-specific) has traditionally been a way of stopping conversations about how to get the technology under democratic control. And it was running through the discussion all week like a length of barbed wire that snagged anyone trying to make progress through the morass.

All of which suggests that it’d be worth trying to reframe the problem in more productive ways. One interesting suggestion for how to do that came last week in a thoughtful Twitter thread by Blayne Haggart, a Canadian political scientist. Forget about speech for a moment, he suggests, and think about an analogous problem in another sphere – banking. “Different societies have different tolerances for financial risk,” he writes, “with different regulatory regimes to match. Just like countries are free to set their own banking rules, they should be free to set strong conditions, including ownership rules, on how platforms operate in their territory. Decisions by a company in one country should not be binding on citizens in another country.”

In those terms, HSBC may be a “global” bank, but when it’s operating in the UK it has to obey British regulations. Similarly, when operating in the US, it follows that jurisdiction’s rules. Translating that to the tech sphere, it suggests that the time has come to stop accepting the tech giants’ claims to be hyper-global corporations, whereas in fact they are US companies operating in many jurisdictions across the globe, paying as little local tax as possible and resisting local regulation with all the lobbying resources they can muster. Facebook, YouTube, Google and Twitter can bleat as sanctimoniously as they like about freedom of speech and the first amendment in the US, but when they operate here, as Facebook UK, say, then they’re merely British subsidiaries of an American corporation incorporated in California. And these subsidiaries obey British laws on defamation, hate speech and other statutes that have nothing to do with the first amendment. Oh, and they should also pay taxes on their local revenues.

Is the UK really going to innovate in regulation of Big Tech?

On Tuesday last week the UK Competition and Markets Authority (CMA) outlined plans for an innovative way of regulating powerful tech firms, one that overcomes the procedural treacle-wading implicit in competition law designed for an analogue era.

The proposals emerged from an urgent investigation by the Digital Markets Taskforce, an ad-hoc body set up in March and led by the CMA with inputs from the Information Commissioner’s Office and OFCOM, the telecommunications and media regulator. The Taskforce was charged with providing advice to the government on the design and implementation of a pro-competition regime for digital markets. It was set up following the publication of the Treasury’s Furman Review on ‘Unlocking digital competition’ which reported in March 2019 and drew on evidence from the CMA’s previous market study into online platforms and digital advertising.

This is an intriguing development in many ways. First of all it seems genuinely innovative. Hitherto, competition laws have been framed to cover market domination or monopolistic abuse without mentioning any particular company, but the new UK approach for tech companies could set specific rules for named companies — Facebook and Google, say. More importantly, the approach bypasses the sterile arguments we have had for years about whether antique conceptions of ‘monopoly’ actually apply to firms which adroitly argue that they don’t meet the definition — while at the same time patently functioning as monopolies. Witness the disputes about whether Amazon really is a monopoly in retailing.

Rather than being lured down that particular rabbit-hole, the CMA proposes instead to focus attention on firms with what it calls ‘Strategic Market Status’ (SMS), i.e. firms with dominant presences in digital markets where there’s not much actual competition. That is to say, markets where entry or expansion by potential rivals is effectively blocked by factors like network effects, economies of scale, consumer passivity (i.e. learned helplessness), the power of default settings, unequal (and possibly illegal) access to user data, lack of transparency, vertical integration and conflicts of interest.

At the heart of the new proposals is the establishment of a powerful, statutory Digital Markets Unit (DMU) located within the Competition and Markets Authority. This would have the power to impose legally-enforceable Codes of Conduct on SMS firms. The codes would, according to the proposals, be based on relatively high-level principles like ‘fair trading’, ‘open choices’ and ‘trust and transparency’ — all of which are novel ideas for tech firms. Possible remedies for specific companies (think Facebook and Google) could include mandated data access and interoperability to address Facebook’s dominance in social media or Google’s market power in general search.

It would be odd if, in due course, Amazon, Apple and Microsoft don’t also fall into the SMS category of “strategic”. Indeed it’s inconceivable that Amazon would not, given that it has morphed into critical infrastructure for many locked-down economies.

The government says that it is going to consult on these radical proposals early next year and will then legislate to put the DMU on a statutory basis “when Parliamentary time allows”.

Accordingly, we can now look forward to a period of intensive corporate lobbying from Facebook & Co as they seek to derail or emasculate the proposals. Given recent history and the behaviour of which these outfits are capable, it would be prudent for journalists and civil society organisations to keep their guard up until this stuff is on the statute book.

The day after the CMA proposals were published (and after a prolonged legal battle) the Bureau of Investigative Journalism was finally able to publish the minutes of a secret meeting that Matt Hancock had with the Facebook boss, Mark Zuckerberg, in May 2018. Hancock was at that time Secretary of State for DCMS, the department charged with combating digital harms. According to the Bureau’s report, he had sought “increased dialogue” with Zuckerberg, so he could “bring forward the message that he has support from Facebook at the highest level”. The meeting took place at the VivaTech conference in Paris. It was arranged “after several days of wrangling” by Matthew Gould, the former culture department civil servant that Hancock later made chief executive of NHS X. Civil servants had to give Zuckerberg “explicit assurances” that the meeting would be positive and that Hancock would not simply demand that the Facebook boss attend the DCMS Select Committee inquiry into the Cambridge Analytica scandal (which he had refused to do).

The following month Hancock had a follow-up meeting with Elliot Schrage, Facebook’s top lobbyist, who afterwards wrote to the minister thanking him for setting out his thinking on “how we can work together on building a model for sensible co-regulation on online safety issues”.

Now that the UK government is intent on demonstrating its independence from foreign domination, perhaps the time has come to explain to tech companies a couple of novel ideas. Sovereign nations do regulation, not ‘co-regulation’; and companies obey the law.

……………………..

A version of this post was published in the Observer on Sunday, 13 December, 2020.

Review: ‘The Social Dilemma’ — Take #2

In “More than tools: who is responsible for the social dilemma?”, Microsoft researcher Niall Docherty has an original take on the thinking that underpins the film. If we are to pursue more productive discussions of the issues raised by the film, he argues, we need to re-frame social media as something more than a mere “tool”. After all, “when have human beings ever been fully and perfectly in control of the technologies around them? Is it not rather the case that technologies, far from being separate from human will, are intrinsically involved in its activation?”

French philosopher Bruno Latour famously uses the example of the gun to advance this idea, which he calls mediation. We are all aware of the platitude, “Guns don’t kill people, people kill people”. In its logic, the gun is simply a tool that allows the person, as the primary agent, to kill another. The gun exists only as an object, through which the person’s desire to kill flows. For Latour, this view is deeply misleading.

Instead, Latour draws our attention to the way the gun, in translating a human desire for killing into action, materializes that desire in the world: “you are a different person with the gun in your hand”, and the gun, by being in your hand, is different than if it were left snugly in its rack. Only when the human intention and the capacities of the gun are brought together can a shooting, as an observably autonomous action, actually take place. It is impossible to neatly distinguish the primary agents of the scene. Responsibility for the shooting, which can only occur through the combination of human and gun, and by proxy, those who produced and provided it, is thus shared.

With this in mind, we must question how useful it is to think about social media in terms of manipulation and control. Social media, far from being a malicious yet inanimate object (like a weapon) is something more profound and complex: a generator of human will. Our interactions on social media platforms, our likes, our shares, our comments, are not raw resources to be mined – they simply could not have occurred without their technical mediation. Neither are they mere expressions of our autonomy, or, conversely, manipulation: the user does not, and cannot, act alone.

The implication of taking this Latourian view is that “neither human individuals, nor the manipulative design of platforms, seductive as they may be, can be the sole causes of the psychological and political harm of social media”. Rather, it is the coming together of human users and user-interfaces, in specific historical settings, that co-produces the activity that occurs upon them. We, as users, as much as the technology itself, therefore, share responsibility for the issues that rage online today.

A ‘New School’ for geeks?

Posted by JN:

Interesting and novel. An online course for techies to get them thinking about the kind of world they are creating. Runs for twelve weeks from March 8 through May 31, 2021.

Its theme of “creative protest” covers issues of justice in tech through a multitude of approaches — whether it’s organizing in the workplace, contributing to an existing data visualization project or worker-owned platform, or building a new platform to nurture creative activism. Participants will work towards a final project as part of the program. Logic School meets for a live, two-hour session each week. Before a session, participants are expected to complete self-guided work: listening, reading, reflecting, and making progress on their final projects.

It’s free (funded by the Omidyar Network) and requires a commitment of five hours a week.
