
Review: What Tech Calls Reading

A Review of the FSG x Logic Series

by Alina Utrata


Publisher Farrar, Straus and Giroux (FSG) and the tech magazine Logic teamed up to produce four books that capture “technology in all its contradictions and innovation, across borders and socioeconomic divisions, from history through the future, beyond platitudes and PR hype, and past doom and gloom.” In this, the FSG x Logic series succeeded beyond its wildest dreams. These books are some of the best-researched, most thought-provoking and—dare I say it—innovative takes on how technology is shaping our world.

Here’s my review of three of the four—Blockchain Chicken Farm, Subprime Attention Crisis and What Tech Calls Thinking—but I highly recommend you read them all. (They average 200 pages each, so you could probably get through the whole series in the time it takes to finish Shoshana Zuboff’s The Age of Surveillance Capitalism.)


Blockchain Chicken Farm: And Other Stories of Tech in China’s Countryside

Xiaowei Wang

“Famine has its own vocabulary,” Xiaowei Wang writes, “a hungry language that haunts and lingers. My ninety-year-old great-uncle understands famine’s words well.” Wang writes as beautifully as they think, effortlessly weaving between ruminations on Chinese history, personal and family anecdotes, modern political and economic theory and first-hand research into the technological revolution sweeping rural China. Contradiction is a watchword in this book, as is contrast—they describe the difference between rural and urban life, of the East and the West, of family and the globe, of history and the present and the potential future. And yet, it all seems familiar. Wang invites us to think slowly about an industry that wants us to think fast—about whether any of this is actually about technology, or whether it is about capitalism, about globalization, about our politics and our communities—or, perhaps, about what it means to live a good life.

On blockchain chicken farms:

“The GoGoChicken project is a partnership between the village government and Lianmo Technology, a company that applies blockchain to physical objects, with a focus on provenance use cases—that is, tracking where something originates from. When falsified records and sprawling supply chains lead to issues of contamination and food safety, blockchain seems like a clear, logical solution. . . These chickens are delivered to consumers’ doors, butchered and vacuum sealed, with the ankle bracelet still attached, so customers can scan the QR code before preparing the chicken . . .”

On a Blockchain Chicken Farm in the Middle of Nowhere, pg 40

“A system of record keeping used to be textual, readable, and understandable to everyone. The technical component behind it was as simple as paper and pencil. That system was prone to falsification, but it was widely legible. Under governance by blockchain, records are tamperproof, but the technical systems are legible only to a select few. . . blockchain has yet to answer the question: If it takes power away from a central authority, can it truly put power back in the hands of the people, and not just a select group of people? Will it serve as an infrastructure that amplifies trust, rather than increasing both mistrust and a singular reliance on technical infrastructure? Will it provide ways to materially organize and enrich a community, rather than further accelerating financial systems that serve a select few?”

On a Blockchain Chicken Farm in the Middle of Nowhere, pg 48
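The tamper-resistance Wang describes comes from hash-chaining: each record commits to the hash of the record before it, so altering any old entry invalidates every link that follows. A minimal sketch in Python (the record fields are hypothetical, not taken from the GoGoChicken system):

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with its predecessor's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records: list[dict]) -> list[str]:
    """Return the running chain of hashes for a list of records."""
    hashes, prev = [], ""
    for rec in records:
        prev = record_hash(rec, prev)
        hashes.append(prev)
    return hashes

def verify_chain(records: list[dict], hashes: list[str]) -> bool:
    """Recompute every link; an edited record breaks the chain."""
    return build_chain(records) == hashes

records = [
    {"chicken_id": "A17", "event": "hatched", "farm": "example-farm"},
    {"chicken_id": "A17", "event": "free-range check", "steps": 12000},
]
hashes = build_chain(records)
assert verify_chain(records, hashes)

records[0]["farm"] = "elsewhere"  # tamper with an early record
assert not verify_chain(records, hashes)
```

This is also why Wang’s legibility worry bites: the paper ledger’s guarantee was visible to anyone who could read, while here the guarantee lives in the hash function, legible only to those who understand it.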

On AI pig farming:

“In these large-scale farms, pigs are stamped with a unique identity mark on their bodies, similar to a QR code. That data is fed into a model made by Alibaba, and the model has the information it needs to monitor the pigs in real time, using video, temperature, and sound sensors. It’s through these channels that the model detects any sudden signs of fever or disease, or if pigs are crushing one another in their pens. If something does happen, the system recognizes the unique identifier on the pig’s body and gives an alert.”

When AI Farms Pigs, pg 63

“Like so many AI projects, ET Agricultural Brain naively assumes that the work of a farmer is to simply produce food for people in cities, and to make the food cheap and available. In this closed system, feeding humans is no different from feeding swaths of pigs on large farms. The project neglects the real work of smallholder farmers throughout the world. For thousands of years, the work of these farmers has been stewarding and maintaining the earth, rather than optimizing agricultural production. They use practices that yield nutrient-dense food, laying a foundation for healthy soils and rich ecology in an uncertain future. Their work is born out of commitment and responsibility: to their communities, to local ecology, to the land. Unlike machines, these farmers accept the responsibility of their actions with the land. They commit to the path of uncertainty.”

When AI Farms Pigs, pg 72

“After all, life is defined not by uncertainty itself but by a commitment to living despite it. In a time of economic and technological anxiety, the questions we ask cannot center on the inevitability of a closed system built by AI, and how to simply make those closed systems more rational or “fair.” What we face are the more difficult questions about the meaning of work, and the ways we commit, communicate, and exist in relation to each other. Answering these questions means looking beyond the rhetoric sold to us by tech companies. What we stand to gain is nothing short of true pleasure, a recognition that we are not isolated individuals, floating in a closed world.”

When AI Farms Pigs, pg 72

Subprime Attention Crisis: Advertising and the Time Bomb at the Heart of the Internet

Tim Hwang


In Subprime Attention Crisis, Tim Hwang argues that the terrifying thing about digital platforms is not how effective they are at manipulating behavior—it’s that they might not be very effective at all. Hwang documents, with precise and technical detail, how digital advertising markets work and how tech giants may be deliberately attempting to inflate their value, even as the actual effectiveness of online ads declines. If you think you’ve seen this film before, Hwang draws parallels to the subprime mortgages and financial systems that triggered the 2008 financial crash. He makes a compelling case that, sooner or later, the digital advertising bubble may burst—and the business model of the internet will explode overnight (not to mention all the things tech money subsidizes, from philanthropy to navigation maps to test and trace). Are Google and Facebook too big to fail? 

On potential systems breakdown:

“Whether underwriting a massive effort to scan the world’s books or enabling the purchase of leading robotics companies, Google’s revenue from programmatic advertising has, in effect, reshaped other industries. Major scientific breakthroughs, like recent advances in artificial intelligence and machine learning, have largely been made possible by a handful of corporations, many of which derive the vast majority of their wealth from online programmatic advertising. The fact that these invisible, silent programmatic marketplaces are critical to the continued functioning of the internet—and the solvency of so much more—begs a somewhat morbid thought experiment: What would a crisis in this elaborately designed system look like?”

The Plumbing, pg 25

“Intense dysfunction in the online advertising markets would threaten to create a structural breakdown of the classic bargain at the core of the information economy: services can be provided for free online to consumers, insofar as they are subsidized by the revenue generated from advertising. Companies would be forced to shift their business models in the face of a large and growing revenue gap, necessitating the rollout of models that require the consumer to pay directly for services. Paywalls, paid tiers of content, and subscription models would become more commonplace. Within the various properties owned by the dominant online platforms, services subsidized by advertising that are otherwise unprofitable might be shut down. How much would you be willing to pay for these services? What would you shell out for, and what would you leave behind? The ripple effects of a crisis in online advertising would fundamentally change how we consume and navigate the web.”

The Plumbing, pg 27

On fraud in digital advertising:

“One striking illustration is the subject of an ongoing lawsuit around claims that Facebook made in 2015 promoting the attractiveness of video advertising on its platform. At the time, the company was touting online video—and the advertising that could be sold alongside it—as the future of the platform, noting that it was “increasingly seeing a shift towards visual content on Facebook.” . . . But it turned out that Facebook overstated the level of attention being directed to its platform on the order of 60 to 80 percent. By undercounting the viewers of videos on Facebook, the platform overstated the average time users spent watching videos. . . . These inconsistencies have led some to claim that Facebook deliberately misled the advertising industry, a claim that Facebook has denied. Plaintiffs in a lawsuit against Facebook say that, in some cases, the company inflated its numbers by as much as 900 percent. Whatever the reasons for these errors in measurement, the “pivot to video” is a sharp illustration of how the modern advertising marketplace can leave buyers and sellers beholden to dominant platform decisions about what data to make available.”

Opacity, pg 70

On specific types of ad fraud:

“Click fraud is a widespread practice that uses automated scripts or armies of paid humans in “click farms” to deliver click-throughs on an ad. The result is that the advertising captures no real attention for the marketer. It is shown either to a human who was hired to click on the ad or to no one at all. The scale of this problem is enormous. A study conducted by Adobe in 2018 concluded that about 28 percent of website traffic showed “non-human signals,” indicating that it originated in automated scripts or in click farms. One study predicted that the advertising industry would lose $19 billion to click fraud in 2018—a loss of about $51 million per day. Some place this loss even higher. One estimate claims that $1 of every $3 spent on digital advertising is lost to click fraud.”

Subprime Attention, pg 85

What Tech Calls Thinking: An Inquiry into the Intellectual Bedrock of Silicon Valley

Adrian Daub


What Tech Calls Thinking is “about the history of ideas in a place that likes to pretend its ideas don’t have any history.” Daub has good reason to know this, as a professor of comparative literature at Stanford University (I never took a class with him, a fact I regretted more and more as the book went on). His turns of phrase do have the lyricism one associates with a literature seminar—e.g. “old motifs playing dress-up in a hoodie”—as he explores the ideas that run amok in Silicon Valley. He exposes delightful contradictions: thought leaders who engage only superficially with thoughts. CEOs who reject the university (drop out!), then build corporate campuses that look just like the university. As Daub explains the ideas of thinkers such as Abraham Maslow, René Girard, Ayn Rand, Jürgen Habermas, Karl Marx, Marshall McLuhan and Samuel Beckett, you get the sense, as Daub says, that these ideas “aren’t dangerous ideas in themselves. Their danger lies in the fact that they will probably lead to bad thinking.” The book is a compelling rejection of the pseudo-philosophy that has underpinned much of the Valley’s techno-determinism. “Quite frequently,” Daub explains, “these technologies are truly novel—but the companies that pioneer them use that novelty to suggest that traditional categories of understanding don’t do them justice, when in fact standard analytic tools largely apply just fine.” Daub’s analysis demonstrates the point well.

On tech dropouts:

“You draw a regular salary and know what you’re doing with your life earlier than your peers, but you subsist on Snickers and Soylent far longer. You are prematurely self-directed and at the same time infantilized in ways that resemble college life for much longer than almost anyone in your age cohort. . . .  Dropping out is still understood as a rejection of a certain elite. But it is an anti-elitism whose very point is to usher you as quickly as possible into another elite—the elite of those who are sufficiently tuned in, the elite of those who get it, the ones who see through the world that the squares are happy to inhabit . . .  All of this seems to define the way tech practices dropping out of college: It’s a gesture of risk-taking that’s actually largely drained of risk. It’s a gesture of rejection that seems stuck on the very thing it’s supposedly rejecting.”

Dropping Out, pg 37

On platforms versus content creation:

“The idea that content is in a strange way secondary, even though the platforms Silicon Valley keeps inventing depend on it, is deeply ingrained. . . . To create content is to be distracted. To create the “platform” is to focus on the true structure of reality. Shaping media is better than shaping the content of such media. It is the person who makes the “platform” who becomes a billionaire. The person who provides the content—be it reviews on Yelp, self-published books on Amazon, your own car and waking hours through Uber—is a rube distracted by a glittering but pointless object.”

Content, pg 47

On gendered labor:

“Cartoonists, sex workers, mommy bloggers, book reviewers: there’s a pretty clear gender dimension to this division of labor. The programmers at Yelp are predominantly men. Its reviewers are mostly female . . . The problem isn’t that the act of providing content is ignored or uncompensated but rather that it isn’t recognized as labor. It is praised as essential, applauded as a form of civic engagement. Remunerated it is not. . . . And deciding what is and isn’t work has a long and ignominious history in the United States. They are “passionate,” “supportive” volunteers who want to help other people. These excuses are scripts, in other words, developed around domestic, especially female, labor. To explain why being a mom isn’t “real” work. To explain why women aren’t worth hiring, or promoting, or paying, or paying as much.”

Content, pg 51

On gendered data:

“There is the idea that running a company resembles being a sexual predator. But there is also the idea that data—resistant, squirrelly, but ultimately compliant—is a feminine resource to be seized, to be made to yield by a masculine force. . . .To grab data, to dispose of it, to make oneself its “boss”—the constant onslaught of highly publicized data breaches may well be a downstream effect of this kind of thinking. There isn’t very much of a care ethic when it comes to our data on the internet or in the cloud. Companies accumulate data and then withdraw from it, acting as though they have no responsibility for it—until the moment an evil hacker threatens said data. Which sounds, in other words, not too different from the heavily gendered imagery relied on by Snowflake. There is no sense of stewardship or responsibility for the data that you have “grabbed,” and the platform stays at a cool remove from the creaturely things that folks get up to when they go online and, wittingly or unwittingly, generate data.”

Content, pg 55

On disruption:

“There is an odd tension in the concept of “disruption,” and you can sense it here: disruption acts as though it thoroughly disrespects whatever existed previously, but in truth it often seeks to simply rearrange whatever exists. It is possessed of a deep fealty to whatever is already given. It seeks to make it more efficient, more exciting, more something, but it never wants to dispense altogether with what’s out there. This is why its gestures are always radical but its effects never really upset the apple cart: Uber claims to have “revolutionized” the experience of hailing a cab, but really that experience has stayed largely the same. What it managed to get rid of were steady jobs, unions, and anyone other than Uber’s making money on the whole enterprise.”

Desire, pg 104

Public networks instead of social networks?

We need state-owned, interoperable, democratically governed online public networks. From the people, for the people.

posted by Julia Rone

The conversation so far

The following comments on Trump’s ban from Twitter and the removal of Parler from the Android and iOS app stores were, somewhat aptly, inspired by two threads on Twitter itself: the first by the British-Canadian blogger Cory Doctorow, the second by the Canadian scholar Blayne Haggart. The point of this post is to pick up the conversation where Doctorow and Haggart left off and involve more people from our team. Ideally, nobody will be censored in the process :p

Doctorow insists that the big problem with Apple and Google removing Parler is not so much censorship (ultimately, different app stores can have different rules, and this should be the case) but rather the fact that there are no alternative app stores. Thus the core of his argument is that the US needs to enforce antitrust laws that would allow for fair competition among a number of rivals. The same argument can be extended to breaking up social media monopolists such as Facebook and Twitter. What we need is more competition.

Haggart attacks this argument in three ways:

First, he reminds us that “market regulation of the type that @doctorow wants requires perfect competition. This is unlikely to happen for a number of reasons (e.g., low consumer understanding of platform issues, tendency to natural monopoly)”. Thus the most likely outcome is the establishment of “a few more corporate oligarchs”. This basically leaves the state as the key regulator, much to the disappointment of the cyber-libertarians who have argued against state regulation for decades.

The problem is, and this is Haggart’s second key point, that “as a non-American, it’s beyond frustrating that this debate (like so many internet policy debates) basically amounts to Americans arguing with other Americans about how to run the world. Other countries need to assert their standing in this debate”. This point was also made years ago in Martin Hardie’s great paper “Foreigner in a Free Land”, in which he noted that most debates about copyright law focused on the US. Even progressive figures such as Larry Lessig built their whole argument on references to the US Constitution. But what about all of us, the poor souls from the rest of the world who don’t live in the US?

Of course, Facebook, Twitter, Alphabet, Amazon, etc. are all US tech companies. But they operate globally. So even if the US state steps in to regulate them, the regulation it imposes might not chime well with people in, say, France or Germany. The famous American prudishness about nudity is the oft-quoted example of differing standards when it comes to content regulation. No French person would be horrified by the sight of a bare breast (at least if we believe the stereotypes), so why should nude photos be removed from French social media? If we want platform governance to be truly democratic, the people affected by it should “have a say in that decision”. But as Haggart notes, “This cannot happen so long as platforms are global, or decisions about them are made only in DC”.

So what does Haggart offer? Simple: break up the social media giants not along market lines but along national lines. Well, maybe not that simple…

If we take the idea of breaking up monopolies along national lines seriously…

This post starts from Haggart’s proposal to break up social media along national lines, assuming it is a good proposal. In fact, I do this not for rhetorical purposes or for the sake of setting up a straw man, but because I actually think it is a good proposal. So the following lines aim to take the proposal seriously and consider its different aspects, discussing the potential drawbacks and problems we should keep in mind.

How to do this?

The first key problem is: who on Earth can convince companies such as Facebook or Twitter to “break along national lines”? These companies spend fortunes on lobbying the US government, and they are US national champions. Why would the US support breaking them up along national lines? (As a matter of fact, the question of how is also a notable problem in Deibert’s “Reset”: his idea that hacktivism, civil disobedience and whistleblowers’ pressure can make private monopolists exercise restraint is very much wishful thinking.) There are historical precedents for the nationalization of companies, but they seem to have involved either a violent revolution or such massive indebtedness that the state had to step in and save the companies with public money. Are there any precedents for nationalizing a company and then revealing how it operates to other states, so that those states can create their respective national versions of it? Maybe. But it seems highly unlikely that anyone in the US would want to do this.

Which leaves us with the rather utopian option two: all the big democratic states get together and develop interoperable social media. The project is such a success that people fed up with Facebook and Google decide to join, and the undue influence of private monopolists finally comes to an end. But this utopian vision itself opens up a series of new questions.

Okay, assuming we can have state platforms operating along national lines…

Inscribing values in design is not always as straightforward as it seems, as discussed in the fascinating conversation between Solon Barocas, Seda Gürses, Arvind Narayanan and Vincent Toubiana on decentralized personal data architectures. But assuming that states can build and maintain (or hire someone to build and maintain) platforms that don’t crash, are not easy to hack and are user-friendly, the next question is: who is going to own the infrastructure and the data?

Who will own the infrastructure and the data?

One option would be for each individual citizen to own their data, but this might be too risky and impractical. Another option would be to treat the data as public data, the same way we treat data from surveys and national statistics. The personal data from current social media platforms is used for online advertising and for training machine-learning models. If states owned their citizens’ data, we might go back to a stage in which the best research was done by state bodies and universities, rather than the current situation, in which the most cutting-edge research is done in private companies, often in secret from the public. Mike Savage described this process of increasing privatization of research in his brilliant piece “The Coming Crisis of Empirical Sociology”. If anything, the recent case of Google firing AI researcher Timnit Gebru reveals the need for independent public research that is neither done in-house by social media giants nor funded by them. It would be naive to think independent academics can do such research in the current situation, when the bulk of interesting data to be analysed is privately owned.

How to prevent authoritarian censorship and surveillance?

Finally, if we assume that states will own their own online public networks, fulfilling the same functions as Facebook but without the advertising, the million-dollar question is how to prevent censorship, overreach and surveillance. As Ron Deibert discusses in “Reset”, most states are currently involved in some sort of hacking and surveillance operations against foreign but also domestic citizens. What can be done about this? Here Haggart’s argument about the need for democratic accountability reveals its true importance and relevance. State-owned online public networks would have to abide by standards that have been democratically discussed, and would have to be accountable to the public.

But what Haggart means by democratic accountability should be expanded. Democracy, and satisfaction with it, has been declining in many Western nations, with more and more decision-making power delegated to technocratic bodies. Yet what the protests of the 2010s in the US and the EU clearly showed is that people are dissatisfied with democracy not because they want authoritarianism but because they want more democracy, that is, democratic deepening. Or, in the words of the Spanish Indignados protesters:

“Real democracy, now”

Thus, to bring the utopia of state public networks to its conclusion: decisions about their governance should not be made by technocratic bodies, or with “democratic accountability” used as a form of window-dressing, as is sadly often the case now. Instead, policy decisions should be discussed broadly, through a combination of public consultations, assemblies and the already existing national and regional assemblies, in order to ensure people have ownership of the policies decided. State public networks should be not only democratically accountable but also democratically governed. Such a scenario would be one of what I call “democratic digital sovereignty”: it goes beyond the arbitrariness of decisions by private CEOs but also escapes the pitfalls of state censorship and authoritarianism.

To sum up: we need state-owned, interoperable online public networks. Citizen data gathered from the use of these networks would be owned by the state and would be available for public academic research (which would be open access, in order to encourage both transparency and innovation). The moderation policies of these public platforms would be democratically discussed and decided. In short, these would be platforms of the people and for the people. Nothing more, nothing less.

Is the UK really going to innovate in regulation of Big Tech?

On Tuesday last week the UK Competition and Markets Authority (CMA) outlined plans for an innovative way of regulating powerful tech firms, one that overcomes the procedural treacle-wading implicit in competition law designed for an analogue era.

The proposals emerged from an urgent investigation by the Digital Markets Taskforce, an ad-hoc body set up in March and led by the CMA with inputs from the Information Commissioner’s Office and Ofcom, the telecommunications and media regulator. The Taskforce was charged with advising the government on the design and implementation of a pro-competition regime for digital markets. It was set up following the publication of the Treasury’s Furman Review on ‘Unlocking digital competition’, which reported in March 2019 and drew on evidence from the CMA’s previous market study into online platforms and digital advertising.

This is an intriguing development in many ways. First of all it seems genuinely innovative. Hitherto, competition laws have been framed to cover market domination or monopolistic abuse without mentioning any particular company, but the new UK approach for tech companies could set specific rules for named companies — Facebook and Google, say. More importantly, the approach bypasses the sterile arguments we have had for years about whether antique conceptions of ‘monopoly’ actually apply to firms which adroitly argue that they don’t meet the definition — while at the same time patently functioning as monopolies. Witness the disputes about whether Amazon really is a monopoly in retailing.

Rather than being lured down that particular rabbit-hole, the CMA proposes instead to focus attention on firms with what it calls ‘Strategic Market Status’ (SMS), i.e. firms with dominant presences in digital markets where there’s not much actual competition. That is to say, markets where difficulty of entry or expansion by potential rivals is effectively undermined by factors like network effects, economies of scale, consumer passivity (i.e. learned helplessness), the power of default settings, unequal (and possibly illegal) access to user data, lack of transparency, vertical integration and conflicts of interest.

At the heart of the new proposals is the establishment of a powerful, statutory Digital Markets Unit (DMU) located within the Competition and Markets Authority. This would have the power to impose legally-enforceable Codes of Conduct on SMS firms. The codes would, according to the proposals, be based on relatively high-level principles like ‘fair trading’, ‘open choices’ and ‘trust and transparency’ — all of which are novel ideas for tech firms. Possible remedies for specific companies (think Facebook and Google) could include mandated data access and interoperability to address Facebook’s dominance in social media or Google’s market power in general search.

It would be odd if, in due course, Amazon, Apple and Microsoft don’t also fall into the SMS category of “strategic”. Indeed it’s inconceivable that Amazon would not, given that it has morphed into critical infrastructure for many locked-down economies.

The government says that it is going to consult on these radical proposals early next year and will then legislate to put the DMU on a statutory basis “when Parliamentary time allows”.

Accordingly, we can now look forward to a period of intensive corporate lobbying from Facebook & Co as they seek to derail or emasculate the proposals. Given recent history and the behaviour of which these outfits are capable, it would be prudent for journalists and civil society organisations to keep their guard up until this stuff is on the statute book.

The day after the CMA proposals were published (and after a prolonged legal battle) the Bureau of Investigative Journalism was finally able to publish the minutes of a secret meeting that Matt Hancock had with the Facebook boss, Mark Zuckerberg, in May 2018. Hancock was at that time Secretary of State for DCMS, the department charged with combating digital harms. According to the Bureau’s report, he had sought “increased dialogue” with Zuckerberg, so he could “bring forward the message that he has support from Facebook at the highest level”. The meeting took place at the VivaTech conference in Paris. It was arranged “after several days of wrangling” by Matthew Gould, the former culture department civil servant whom Hancock later made chief executive of NHSX. Civil servants had to give Zuckerberg “explicit assurances” that the meeting would be positive and that Hancock would not simply demand that the Facebook boss attend the DCMS Select Committee inquiry into the Cambridge Analytica scandal (which he had refused to do).

The following month Hancock had a follow-up meeting with Elliot Schrage, Facebook’s top lobbyist, who afterwards wrote to the minister thanking him for setting out his thinking on “how we can work together on building a model for sensible co-regulation on online safety issues”.

Now that the UK government is intent on demonstrating its independence from foreign domination, perhaps the time has come to explain to tech companies a couple of novel ideas. Sovereign nations do regulation, not ‘co-regulation’; and companies obey the law.

……………………..

A version of this post was published in the Observer on Sunday, 13 December, 2020.

Great expectations: the role of digital media for protest diffusion in the 2010s

The decade after the 2008 economic crisis started with great expectations about the empowering potential of digital media for social movements. The wave of contention that began in Iceland and the MENA countries also swept Europe, where hundreds of thousands of Spanish protesters took part in the Indignados protests in 2011, and a smaller but dedicated group organized Occupy London, the British version of the US Occupy movement that shook US politics for years to come. Protesters during the Arab Spring often carried posters and placards with the logos and names of Facebook, Twitter and similar platforms, or even sprayed them as graffiti on walls.

It was a period of ubiquitous enthusiasm, with some scholars even claiming that the Internet was a necessary and sufficient condition for democratization. What is more, a number of scholars saw in the rise of digital platforms a great opportunity for the diffusion of protest within nations and transnationally at unprecedented speed – leading political journalists and researchers noted the key role digital media played in ‘Occupy protests spreading like wildfire’ and in spreading information during the Arab Spring.

Photo by Essam Sharaf

Already back in the early 2010s, at the beginning of this techno-utopian decade, researchers emphasized that in Egypt, protests and information about them in fact spread in more traditional ways – through the interpersonal networks of cab drivers, labour unions, and football hooligans, among others. What is more, protests in the aftermath of the 2008 economic crisis spread much more slowly than the 1848 Spring of the Peoples protests, due to the need for laborious cultural translation from one region to another. Ultimately, in spite of the major promises of social media, most protest mobilization and diffusion still depends on face-to-face interactions and established protest traditions.

Yet the trend of expecting too much from digital media is countered by an equally dangerous trend: claiming that they haven’t changed anything in the world of mobilization. The media ecology approach of Emiliano Treré and Alice Mattoni escapes the pitfalls of both by studying how activists use digital media in combination and interaction with a number of other types of media in hybrid media ecologies.

In a book I have just published, I apply the media ecology approach to study the diffusion of Green and left-wing protests against austerity and free trade in the EU after 2008. One of the great things about focusing on media beyond Facebook and Twitter is the multiple unexpected angles it gives to events we all thought we knew well. While activists and researchers alike have been fascinated by the promise of digital media, looking at the empirical material with unbiased eyes revealed a great deal about the key role of other types of media in protest diffusion.

To begin with: books! The very name of the Indignados protests came from the title of Stéphane Hessel’s book “Indignez-vous!”. But books by authors such as Joseph Stiglitz, Wolfgang Streeck, Ernesto Laclau and Yanis Varoufakis have been no less important for spreading ideas and informing protesters across the EU. In his recent book “Translating the Crisis”, the Spanish scholar Fruela Fernández notes the boom in publishing houses translating political books in Spain in the period surrounding the birth of the Indignados movement and its eruption into public space.

Similarly, mainstream media have been of crucial importance for spreading information about protests, protest ideas and tactics across the EU in the last decade. Mainstream outlets such as The Guardian, the BBC and El País reported in great detail on the use of digital media by social movements such as Occupy or the Indignados, even sharing Twitter and Facebook hashtags, links to Facebook groups and live-streams in their articles. Mainstream media thus popularized the message (and media practices) of protesters further than the protesters could have possibly imagined. In fact, mainstream media’s fascination with the digital practices of new social movements goes a long way towards explaining their largely favourable attitude to the protests of the early 2010s. Such favourable coverage contradicts the expectation of most social movement scholars that the media would largely ignore or misrepresent protesters.

Another type of protest diffusion that has remained woefully neglected but played a key role in the spread of progressive economic protests in the EU was face-to-face communication and, as simple as it may sound, walking! During the Spanish Indignados protests hundreds of protesters marched from all parts of Spain to gather in Madrid. A smaller group continued marching to Brussels, where they staged theater plays and discussions, and then headed to Greece. These marches took weeks and involved protesters stopping in villages and cities along the way and engaging local people in discussions. Sharing a physical space and sharing food have been among the most efficient ways to diffuse a message and reach more people with it. Of course, the marchers kept live blogs and diaries of their journeys (which in themselves constitute rich material for future research), but it is the combination of diffusion through traveling, meeting people in person, and using digital media that is truly interesting.

In my book, I give many more examples of how progressive protesters used various types of media to spread protest. Beyond providing a richer and more accurate picture of progressive economic protests in the 2010s, the book can hopefully also serve as a useful reminder for researchers of the radical right. The 2010s, which started with research on social movements and democratization, ended with a major academic trend of studying the far right, and especially the way the far right has blossomed in the digital sphere.

If there is one thing to be learned from my book, it is that digital media are not the only tool activists use to spread protest. Thus, if one wants to understand the diffusion of far right campaigns and ideas, one needs to look also at the blossoming of far right publishing houses, the increasing mainstreaming of far right ideas in the mainstream press, and, last but not least, the ways in which far right activists make inroads into civil society organizations and travel to share experiences – it is well known, for example, that during the refugee crisis far right activists from Western Europe joined activists from Eastern Europe in several joint actions to patrol borders together.

Understanding how protests, protest ideas and repertoires diffuse is crucial for activists who want to help spread progressive causes, but also for those who are worried about the spread of dangerous and anti-democratic ideas. After a decade of great expectations about the potential of digital media to democratize our societies, we find ourselves politically in an era of backlash. Yet, at least analytically we are now past the naive enthusiasm of the early 2010s and have a much better instrumentarium to understand how protest diffusion works. To rephrase Gramsci, we are now entering a period of pessimism of the will and optimism of the intellect.

It is not what we wished for. But shedding our illusions and utopian expectations about the potential of digital media is an important step for moving beyond techno-fetishism and understanding better the processes of mobilization that currently define our society.

Seeing Like a Social Media Site

The Anarchist’s Approach to Facebook

When John Perry Barlow published “A Declaration of the Independence of Cyberspace” nearly twenty-five years ago, he was expressing an idea that seemed almost obvious at the time: the internet was going to be a powerful tool to subvert state control. As Barlow explained to the “governments of the Industrial World,” those “weary giants of flesh and steel”—cyberspace does not lie within your borders. Cyberspace was a “civilization of the mind.” States might be able to control individuals’ bodies, but their weakness lay in their inability to capture minds.

In retrospect, this is a rather peculiar perspective on states’ real weakness, which has always been space. Literal, physical space—the endlessly vast terrain of the physical world—has historically been the friend of those attempting to avoid the state. As the scholar James Scott documented in The Art of Not Being Governed, in the early stages of state formation, if the central government got too overbearing the population simply could—and often did—move. Similarly, John Torpey noted in The Invention of the Passport that individuals wanting to avoid eighteenth-century France’s system of passes could simply walk from town to town, and passes were often “lost” (or, indeed, actually lost). As Richard Cobb noted, “there is no one more difficult to control than the pedestrian.” More technologically savvy ways of traveling—the bus, the boat, the airplane—actually made it easier for the state to track and control movement.

Cyberspace may be the easiest place to track of all. It is, by definition, a mediated space. To visit, you must be in possession of hardware, which must be connected to a network, which is connected to other hardware, and other networks, and so on and so forth. Every single thing in the digital world is owned, controlled or monitored by someone else. It is impossible to be a pedestrian in cyberspace—you never walk alone. 

States have always attempted to make their populations more trackable, and thus more controllable. Scott calls this the process of making things “legible.” It includes “the creation of permanent last names, the standardization of weights and measures, the establishment of cadastral surveys and population registers, the invention of freehold tenure, the standardization of language and legal discourse, the design of cities, and the organization of transportation.” These things make previously complicated, complex and unstandardized facts knowable to the center, and thus more easy to administrate. If the state knows who you are, and where you are, then it can design systems to control you. What is legible is manipulable.

Cyberspace—and the associated processing of data—offers exciting new possibilities for the administrative center to make individuals more legible precisely because, as Barlow noted, it is “a space of the mind.” Only now, it’s not just states that have the capacity to do this—but sites. As Shoshana Zuboff documented in her book The Age of Surveillance Capitalism, sites like Facebook collect data about us in an attempt to make us more legible and, thus, more manipulable. This is not, however, the first time that “technologically brilliant” centralized administrators have attempted to engineer society.

Scott uses the term “high modernism” to characterize schemes—attempted by planners across the political spectrum—that possess a “self-confidence about scientific and technical progress, the expansion of production, the growing satisfaction of human needs, the mastery of nature (including human nature), and, above all, the rational design of social order commensurate with the scientific understanding of natural laws.” In Seeing Like a State, Scott examines a number of these “high modernist” attempts to engineer forests in eighteenth-century Prussia and Saxony, urban planning in Paris and Brasilia, rural populations in ujamaa villages, and agricultural production in Soviet collective farms (to name a few). Each time, central administrators attempted to make complex, complicated processes—from people to nature—legible, and then engineer them into rational, organized systems based on scientific principles. It usually ended up going disastrously wrong—or, at least, not at all the way central authorities had planned.

The problem, Scott explained, is that “certain forms of knowledge and control require a narrowing of vision. . . designed or planned social order is necessarily schematic; it always ignores essential features of any real, functioning social order.” For example, mono-cropped forests became more vulnerable to disease and depleted the soil structure—not to mention destroying the diversity of the flora, insect, mammal, and bird populations, which took generations to restore. The streets of Brasilia had not been designed with any local, community spaces where neighbors might interact; and, anyway, the planners forgot—ironically—to plan for the construction workers, who subsequently founded their own settlement on the outskirts of the city, organized to defend their land and demanded urban services and secure titles. By 1980, Scott explained, “seventy-five percent of the population of Brasilia lived in settlements that had never been anticipated, while the planned city had reached less than half of its projected population of 557,000.” Contrary to Zuboff’s assertion that we are losing “the right to a future tense,” individuals and organic social processes have shown a remarkable capacity to resist and subvert otherwise brilliant plans to control them.

And yet this high-modernism characterizes most approaches to “regulating” social media, whether self-regulatory or state-imposed. And, precisely because cyberspace is so mediated, it is more difficult for users to resist or subvert the centrally-controlled processes imposed upon them. Misinformation on Facebook proliferates—and so the central administrators of Facebook try to engineer better algorithms, or hire legions of content moderators, or make centralized decisions about labeling posts, or simply kick off users. It is, in other words, a classic high-modernist approach to socially engineer the space of Facebook, and all it does is result in the platforms’ ruler—Mark Zuckerberg—consolidating more power. (Coincidentally, fellow Power-Shift contributor Jennifer Cobbe argued something quite similar in her recent article about the power of algorithmic censorship). Like previous attempts to engineer society, this one probably will not work well in practice—and there may be disastrous, authoritarian consequences as a result.

So what is the anarchist approach to social media? Consider this description of an urban community by twentieth-century activist Jane Jacobs, as recounted by Scott:

“The public peace—the sidewalk and street peace—of cities . . . is kept by an intricate, almost unconscious network of voluntary controls and standards among the people themselves, and enforced by the people themselves. . . . [an] incident that occurred on [Jacobs’] mixed-use street in Manhattan when an older man seemed to be trying to cajole an eight or nine-year-old girl to go with him. As Jacobs watched this from her second-floor window, wondering if she should intervene, the butcher’s wife appeared on the sidewalk, as did the owner of the deli, two patrons of a bar, a fruit vendor, and a laundryman, and several other people watched openly from their tenement windows, ready to frustrate a possible abduction. No “peace officer” appeared or was necessary. . . . There are no formal public or voluntary organizations of urban order here—no police, no private guards or neighborhood watch, no formal meetings or officeholders. Instead, the order is embedded in the logic of daily practice.”

How do we make social media sites more like Jacobs’ Manhattan, where people—not police or administrators—on “sidewalk terms” are empowered to shape their own cyber spaces? 

There may already be one example: Wikipedia. 

Wikipedia is not often thought of as an example of a social media site—but, as many librarians will tell you, it is not an encyclopedia either. Yet Wikipedia is not only a remarkable repository of user-generated content; it has also been incredibly resilient to misinformation and extremist content. Indeed, while debates around Facebook ask whether the site has eroded public discourse to such an extent that democracy itself has been undermined, debates around Wikipedia center on whether it is as accurate as the expert-generated content of Encyclopedia Britannica. (Encyclopedia Britannica says no; Wikipedia says it’s close.)

The difference is that Wikipedia empowers users. Anyone, absolutely anyone, can update Wikipedia. Everyone can see who has edited what, allowing users to self-regulate—which is how users identified that suspected Russian agent Maria Butina was probably editing her own Wikipedia page, and changed it back. This radical transparency and empowerment produces organic social processes where, much like on the Manhattan street, individuals collectively mediate their own space. And, most importantly, it is dynamic—Wikipedia changes all the time. Instead of a static ruling (such as Facebook’s determination that the iconic photo of Napalm Girl would be banned for child nudity), Wikipedia’s process produces dialogue and deliberation, where communities constantly socially construct meaning and knowledge. Finally, because cyberspace is ultimately mediated space—individuals cannot just “walk” or “wander” across sidewalks, as in the real world—Wikipedia is mission-driven. It does not have the amorphous goal of “connecting the global community”, but rather aims “to create a world in which everyone can freely share in the sum of all knowledge.”

This suggests that no amount of design updates or changes to terms of service will ever “fix” Facebook—whether they are imposed by the US government, Mark Zuckerberg or Facebook’s Oversight Board. Instead, it is the high-modernism that is the problem. The anarchist’s approach would prioritize building designs that empower people and communities—so why not adopt the wiki-approach to the public square functions that social media currently serves, like wiki-newspapers or wiki-newsfeeds?

It might be better to take the anarchist’s approach. No algorithms are needed.

by Alina Utrata

Review: ‘The Social Dilemma’ — Take #2

In “More than tools: who is responsible for the social dilemma?”, Microsoft researcher Niall Docherty offers an original take on the thinking that underpins the film. If we are to pursue more productive discussions of the issues raised by the film, he argues, we need to re-frame social media as something more than a mere “tool”. After all, “when have human beings ever been fully and perfectly in control of the technologies around them? Is it not rather the case that technologies, far from being separate from human will, are intrinsically involved in its activation?”

The French philosopher Bruno Latour famously uses the example of the gun to advance this idea, which he calls mediation. We are all aware of the platitude, “Guns don’t kill people, people kill people”. In its logic, the gun is simply a tool that allows the person, as the primary agent, to kill another. The gun exists only as an object, through which the person’s desire to kill flows. For Latour, this view is deeply misleading.

Instead, Latour draws our attention to the way the gun, in translating a human desire to kill into action, materializes that desire in the world: “you are a different person with the gun in your hand”, and the gun, by being in your hand, is different than if it were left snugly in its rack. Only when the human intention and the capacities of the gun are brought together can a shooting, as an observably autonomous action, actually take place. It is impossible to neatly distinguish the primary agents of the scene. Responsibility for the shooting—which can only occur through the combination of human and gun and, by proxy, those who produced and provided it—is thus shared.

With this in mind, we must question how useful it is to think about social media in terms of manipulation and control. Social media, far from being a malicious yet inanimate object (like a weapon), is something more profound and complex: a generator of human will. Our interactions on social media platforms—our likes, our shares, our comments—are not raw resources to be mined; they simply could not have occurred without their technical mediation. Neither are they mere expressions of our autonomy or, conversely, of manipulation: the user does not, and cannot, act alone.

The implication of taking this Latourian view is that “neither human individuals, nor the manipulative design of platforms, seductive [as] they may be, can be the sole causes of the psychological and political harm of social media”. Rather, it is the coming together of human users and user-interfaces, in specific historical settings, that co-produces the activity that occurs upon them. We as users, as much as the technology itself, therefore share responsibility for the issues that rage online today.

A ‘New School’ for geeks?

Posted by JN:

Interesting and novel. An online course for techies to get them thinking about the kind of world they are creating. Runs for twelve weeks from March 8 through May 31, 2021.

Its theme of “creative protest” covers issues of justice in tech through a multitude of approaches — whether it’s organizing in the workplace, contributing to an existing data visualization project or worker-owned platform, or building a new platform to nurture creative activism. Participants will work towards a final project as part of the program. Logic School meets for a live, two-hour session each week. Before a session, participants are expected to complete self-guided work: listening, reading, reflecting, and making progress on their final projects.

It’s free (funded by the Omidyar Network) and requires a commitment of five hours a week.

Review: ‘The Social Dilemma’ – Take #1

The Social Dilemma is an interesting — and much-discussed — docudrama about the impact of social media on society. We thought it’d be interesting to have a series in which we gather different takes on the film. Here’s Take #1…

Spool forward a couple of centuries. A small group of social historians drawn from the survivors of climate catastrophe are picking through the documentary records of what we are currently pleased to call our civilisation, and they come across a couple of old movies. When they’ve managed to find a device on which they can view them, it dawns on them that these two films might provide an insight into a great puzzle: how and why did the prosperous, apparently peaceful societies of the early 21st century implode?

The two movies are The Social Network, which tells the story of how a po-faced Harvard dropout named Mark Zuckerberg created a powerful and highly profitable company; and The Social Dilemma, which is about how the business model of this company – as ruthlessly deployed by its po-faced founder – turned out to be an existential threat to the democracy that 21st-century humans once enjoyed.

Both movies are instructive and entertaining, but the second one leaves one wanting more. Its goal is admirably ambitious: to provide a compelling, graphic account of what the business model of a handful of companies is doing to us and to our societies. The intention of the director, Jeff Orlowski, is clear from the outset: to reuse the strategy deployed in his two previous documentaries on climate change – nicely summarised by one critic as “bring compelling new insight to a familiar topic while also scaring the absolute shit out of you”.

For those of us who have for years been trying – without notable success – to spark public concern about what’s going on in tech, it’s fascinating to watch how a talented movie director goes about the task. Orlowski adopts a two-track approach. In the first, he assembles a squad of engineers and executives – people who built the addiction-machines of social media but have now repented – to talk openly about their feelings of guilt about the harms they inadvertently inflicted on society, and explain some of the details of their algorithmic perversions.

They are, as you might expect, almost all males of a certain age and type. The writer Maria Farrell, in a memorable essay, describes them as examples of the prodigal techbro – tech executives who experience a sort of religious awakening and “suddenly see their former employers as toxic, and reinvent themselves as experts on taming the tech giants. They were lost and are now found.”

Biblical scholars will recognise the reference from Luke 15. The prodigal son returns having “devoured his living with harlots” and is welcomed with open arms by his old dad, much to the dismay of his more dutiful brother. Farrell is not so welcoming. “These ‘I was lost but now I’m found, please come to my Ted Talk’ accounts,” she writes, “typically miss most of the actual journey, yet claim the moral authority of one who’s ‘been there’ but came back. It’s a teleportation machine, but for ethics.”

It is, but Orlowski welcomes these techbros with open arms because they suit his purpose – which is to explain to viewers the terrible things that the surveillance capitalist companies such as Facebook and Google do to their users. And the problem with that is that when he gets to the point where we need ideas about how to undo that damage, the boys turn out to be a bit – how shall I put it? – incoherent.

The second expository track in the film – which is interwoven with the documentary strand – is a fictional account of a perfectly normal American family whose kids are manipulated and ruined by their addiction to social media. This is Orlowski’s way of persuading non-tech-savvy viewers that the documentary stuff is not only real, but is inflicting tangible harm on their teenagers. It’s a way of saying: Pay attention: this stuff really matters!

And it works, up to a point. The fictional strand is necessary because the biggest difficulty facing critics of an industry that treats users as lab rats is that of explaining to the rats what’s happening to them while they are continually diverted by the treats (in this case dopamine highs) being delivered by the smartphones that the experimenters control.

Where the movie fails is in its inability to accurately explain the engine driving this industry that harnesses applied psychology to exploit human weaknesses and vulnerabilities.

A few times it wheels on Prof Shoshana Zuboff, the scholar who gave this activity a name – “surveillance capitalism”, a mutant form of our economic system that mines human experience (as logged in our data trails) in order to produce marketable predictions about what we will do/read/buy/believe next. Most people seem to have twigged the “surveillance” part of the term, but overlooked the second word. Which is a pity because the business model of social media is not really a mutant version of capitalism: it’s just capitalism doing its thing – finding and exploiting resources from which profit can be extracted. Having looted, plundered and denuded the natural world, it has now turned to extracting and exploiting what’s inside our heads. And the great mystery is why we continue to allow it to do so.

John Naughton

Davids can sometimes really upset tech Goliaths

John Naughton

The leading David at the moment is Max Schrems, the Austrian activist and founder of NOYB (“None of Your Business”), the most formidable data-privacy campaigning organisation outside the US. As a student, he launched the campaign that eventually led the Court of Justice of the European Union to rule that the ‘Safe Harbour’ agreement negotiated between the EU and the US to regulate transatlantic data transfers was invalid. NOYB was established as a European non-profit that pursues strategic litigation to ensure that the GDPR is upheld. It started with a concept, a website and a crowdfunding tool, and within two months had acquired thousands of “supporters”, allowing it to begin operations with basic funding of €250,000 per year. A quick survey of its website suggests that it has been very busy. And Schrems’s dispute with the Irish Data Protection Commissioner (DPC) over her failure to regulate Facebook’s handling of European users’ data has led to the Irish High Court ordering the DPC to cover the costs of Schrems’s legal team in relation to the Court of Justice ruling on EU-US data transfers.

What’s interesting about this story is the way it challenges the “learned helplessness” that has characterised much of the public response to abuses of power by tech giants. The right kind of strategic litigation, precisely targeted and properly researched, can bring results.

The political arguments against digital monopolies in the House Judiciary Report

Alina Utrata

         The House Judiciary Committee’s report on digital monopolies (all 449 pages of it) is a meticulously researched dossier on why the Big Four tech companies—Google, Apple, Amazon and Facebook—should be considered monopolies. However, leaving the nitty-gritty details aside, it’s worth examining how the report frames the political arguments for why monopolies are bad.

         It’s important to distinguish between the economic and political anti-monopoly arguments, although they are related. Economically, the report’s reasoning is very strong. No doubt this is in part because one of its authors is Lina Khan, the brilliant lawyer whose innovative and compelling case for why Amazon should be considered a monopoly went viral in 2017 and laid the legal groundwork for this report. The authors reason that monopolies are fundamentally anti-competitive, not conducive to entrepreneurship and innovation, and inevitably lead to fewer choices for consumers and worse quality in products and services, including a lack of privacy protections. In particular, the report draws on Khan’s theory that anti-competitive behavior should not be defined merely as conduct resulting in high consumer prices (à la Bork), but should also encompass firms’ ability to use predatory pricing and their control of market infrastructure to harm competitors.

         However, as former FTC chairman Robert Pitofsky pointed out, “It is bad history, bad policy, and bad law to exclude certain political values in interpreting the antitrust laws.”[1] The report explicitly acknowledges that monopolies do not just threaten the economy, stating, “Our economy and democracy are at stake.”[2] So what, politically, does the report say is the problem?

         Firstly, there is the effect that these digital platforms have on journalism. The report noted that “a free and diverse press is essential to a vibrant democracy . . . independent journalism sustains our democracy by facilitating public discourse.” In particular, it points to the death of local news, and the fact that many communities effectively no longer have a fourth estate to hold local government accountable. The report also notes the power imbalance between the platforms and news organizations—the shift to content aggregation, and the fact that most online traffic to digital publications is mediated through the platforms, means that small tweaks in algorithms can have major consequences for newspapers’ readership. While the report frames this in terms of newspapers’ bargaining power, it stops short of articulating the fundamental political issue at stake: unaccountable, private corporations have the power to determine what content we see and don’t see online.

         The second argument is that monopoly corporations infringe on the “economic liberty” of citizens. The report, both implicitly and explicitly, references the 1890 Congressional debates on anti-trust, in which US Senator Sherman proclaimed, “If we will not endure a king as a political power we should not endure a king over the production, transportation, and sale of any of the necessaries of life. If we would not submit to an emperor we should not submit to an autocrat of trade.”[3] This reasoning asserts that monopoly corporations exert a tyrannical power over individuals’ economic lives, directly analogous to the type of tyranny states exert over individuals’ political lives. As Khan pointed out in a previous publication, in the 1890 debates “what was at stake in keeping markets open—and keeping them free from industrial monarchs—was freedom.”[4]

         Repeatedly, the report notes that the committee had encountered a “prevalence of fear among market participants who depend on the dominant platforms.” It maintains that this was because of the economic dependence their monopoly power had created. For example, 37% of third-party sellers on Amazon—about 850,000 businesses—rely on Amazon as their sole source of income. Because of Amazon’s position as the gateway to e-commerce—Amazon controls about 65 to 70% of all U.S. online marketplace sales—it has the power to force sellers (or “internal competitors”) into arbitration. Amazon can kick sellers off the site, or lower the rankings of their products, or lengthen their shipping times—or, as happened to one third-party seller, refuse to release the products stored in Amazon warehouses, while still charging rent. Amazon forces sellers to give up their right to make a complaint in court as a condition for using its platform. Because of Amazon’s dominance, sellers cannot walk away. The report explicitly compares this marketplace power to the power of the state: 

Because of the severe financial repercussions associated with suspension or delisting, many Amazon third-party sellers live in fear of the company. For sellers, Amazon functions as a “quasi-state,” and many “[s]ellers are more worried about a case being opened on Amazon than in actual court.” This is because Amazon’s internal dispute resolution system is characterized by uncertainty, unresponsiveness, and opaque decision-making processes.[5]

          In this argument, monopolies are a threat to the economic liberty of individuals because they can use their dominance to subject those who depend on their markets to their own private law, as well as being able to pick “winners and losers.” The rise of this type of corporate law has been discussed before, specifically in reference to technology corporations. Frank Pasquale has predicted a shift from territorial to functional sovereignty, explaining, “in functional arenas from room-letting to transportation to commerce, persons will be increasingly subject to corporate, rather than democratic, control. For example: Who needs city housing regulators when AirBnB can use data-driven methods to effectively regulate room-letting, then house-letting, and eventually urban planning generally?”[6] Rory Van Loo wrote about the phenomenon more generally in “The Corporation as Courthouse,” examining the marketplace for dispute resolution ranging from credit card companies to the Apple App Store.[7]

         Finally, the report repeats Supreme Court Justice Louis Brandeis’s famous quote that, “We may have democracy, or we may have wealth concentrated in the hands of a few, but we cannot have both.” (Funnily enough, there is no documentation that Brandeis ever actually said that, although he certainly would have agreed with the sentiment.) It points out that “the growth in the platforms’ market power has coincided with an increase in their influence over the policymaking process.” The authors explicitly noted the corporations’ use of political lobbyists and their investments in think-tanks and non-profit advocacy groups to steer policy discussions. (Notably, Mohamed Abdalla and Moustafa Abdalla have just published a new paper entitled “The Grey Hoodie Project” about how Big Tech uses the strategies of Big Tobacco in order to influence academic research.) However, it’s not clear why monopolists’ power to influence the political process is any different from the ability of any wealthy individual or corporation. In fact, political theorist Rob Reich wrote a book, Just Giving, arguing that philanthropy can subvert democratic processes. (An interesting real-world example is when Facebook donated $11 million to the city of Menlo Park with the understanding that it would be used to establish and maintain a new police unit near Facebook’s headquarters.)

         A final political argument, not included in the report, comes from an unlikely source: Mark Zuckerberg (and, given his new role at Facebook, possibly former UK deputy prime minister Nick Clegg too).[8]  Zuckerberg argued during the committee hearings that breaking up companies like Facebook would allow other competitors, especially companies from China, to dominate the market in Facebook’s place. These companies, Zuckerberg claimed, don’t have the same values as the US—including democracy, competition, inclusion and free expression. Along with a dose of protectionism, the implicit argument is that it is better for private American corporations like Facebook to make decisions about who is allowed to say what online—and how to prioritize distributing that content—than it is to cede that power to authoritarian states. 

         The interesting thing is that Zuckerberg’s argument taps into a second strain of anti-monopoly political reasoning: that the state is scarier than corporations. Take the discourse around monopolies in 1950 during the debate on the Celler–Kefauver Act. As journalist Marquis Childs wrote, big corporations are “in reality collectivism—a kind of private socialism. . . [and] private socialism will sooner or later in a democracy become public socialism.”[9] In the shadow of the Cold War, the argument went that Big Firms will inevitably create a Big Government to regulate them, and Big Government will inevitably become fascism, communism, or other authoritarian forms of centralized state control. As Robert Pitofsky summed up, the argument asserts that “monopolies create economic conditions conducive to totalitarianism.”[10] It’s the all-dominant state that citizens should be worried about, not necessarily the all-dominant corporation. (Tim Wu has written about this history of anti-monopoly in the US in his book, The Curse of Bigness: Antitrust in the New Gilded Age.)

         To me, the most interesting thing to note is that the report did not mention the state’s reliance on these Big Tech corporations—particularly in new areas, like cloud computing. As the report documents, Amazon Web Services (AWS) dominates the cloud computing market, making up about half of global spending on cloud infrastructure services (and three times the market share of its closest competitor, Microsoft). An estimated 6,500 government agencies use AWS—including NASA and the CIA. If Target and Netflix are worried about using AWS, should the US government be worried about its dependence on Amazon Web Services? Does this type of consolidated infrastructure risk creating fragility in the system by becoming too big to fail?

         This question will continue to have relevance, especially as AWS and Microsoft’s Azure continue their battle for the Pentagon’s $10 billion cloud computing contract. Notably, US President-elect Joe Biden has appointed Mark Schwartz, an Enterprise Strategist at Amazon Web Services, to the Agency Review team for the critical Office of Management and Budget (along with a number of other individuals connected to Big Tech). Anti-trust and digital monopolies will certainly be a major issue for the future Biden Administration.


[1] Pitofsky, Robert. “Political Content of Antitrust.” University of Pennsylvania Law Review 127, no. 4 (January 1, 1979): 1051. 

[2] Emphasis added.

[3] 21 Cong. Rec. 2459 (1890). Quoted in Pitofsky, Robert. “Political Content of Antitrust.” University of Pennsylvania Law Review 127, no. 4 (January 1, 1979): 1051. 

[4] Khan, Lina. “Amazon’s Antitrust Paradox.” Yale Law Journal 126, no. 3 (January 1, 2016). https://digitalcommons.law.yale.edu/ylj/vol126/iss3/3.

[5] Subcommittee on Antitrust, Commercial and Administrative Law of the Committee on the Judiciary. “Investigation of Competition in Digital Markets: Majority Staff Report and Recommendations.” US House of Representatives, October 6, 2020. Emphasis added.

[6] Pasquale, Frank. “From Territorial to Functional Sovereignty: The Case of Amazon.” Open Democracy. January 5, 2018.

[7] Van Loo, Rory. “The Corporation as Courthouse.” Yale Journal on Regulation 33 (2016): 56.

[8] Many thanks to John Naughton for pointing this out to me.

[10] Pitofsky, Robert. “Political Content of Antitrust.” University of Pennsylvania Law Review 127, no. 4 (January 1, 1979): 1051.