Seeing Like a Social Media Site

The Anarchist’s Approach to Facebook

When John Perry Barlow published “A Declaration of the Independence of Cyberspace” nearly twenty-five years ago, he was expressing an idea that seemed almost obvious at the time: the internet was going to be a powerful tool for subverting state control. As Barlow told the “governments of the Industrial World,” those “weary giants of flesh and steel,” cyberspace “does not lie within your borders.” Cyberspace was a “civilization of the Mind.” States might be able to control individuals’ bodies, but their weakness lay in their inability to capture minds.

In retrospect, this is a rather peculiar reading of states’ real weakness, which has always been space. Literal, physical space—the endlessly vast terrain of the physical world—has historically been the friend of those attempting to avoid the state. As the scholar James Scott documented in The Art of Not Being Governed, in the early stages of state formation, if the central government became too overbearing, the population simply could—and often did—move. Similarly, John Torpey noted in The Invention of the Passport that individuals wanting to avoid eighteenth-century France’s system of passes could simply walk from town to town, and passes were often “lost” (or, indeed, actually lost). As Richard Cobb noted, “there is no one more difficult to control than the pedestrian.” More technologically sophisticated ways of traveling—the bus, the boat, the airplane—actually made it easier for the state to track and control movement.

Cyberspace may be the easiest place of all in which to track people. It is, by definition, a mediated space. To visit, you must be in possession of hardware, which must be connected to a network, which is connected to other hardware, and other networks, and so on. Every single thing in the digital world is owned, controlled or monitored by someone else. It is impossible to be a pedestrian in cyberspace—you never walk alone.

States have always attempted to make their populations more trackable, and thus more controllable. Scott calls this the process of making things “legible.” It includes “the creation of permanent last names, the standardization of weights and measures, the establishment of cadastral surveys and population registers, the invention of freehold tenure, the standardization of language and legal discourse, the design of cities, and the organization of transportation.” These things make previously complicated, complex and unstandardized facts knowable to the center, and thus easier to administer. If the state knows who you are, and where you are, then it can design systems to control you. What is legible is manipulable.

Cyberspace—and the associated processing of data—offers exciting new possibilities for the administrative center to make individuals more legible precisely because, as Barlow noted, it is “a space of the mind.” Only now, it is not just states that have this capacity—but sites. As Shoshana Zuboff documented in her book The Age of Surveillance Capitalism, sites like Facebook collect data about us in an attempt to make us more legible and, thus, more manipulable. This is not, however, the first time that “technologically brilliant” centralized administrators have attempted to engineer society.

Scott uses the term “high modernism” to characterize schemes—attempted by planners across the political spectrum—that possess a “self-confidence about scientific and technical progress, the expansion of production, the growing satisfaction of human needs, the mastery of nature (including human nature), and, above all, the rational design of social order commensurate with the scientific understanding of natural laws.” In Seeing Like a State, Scott examines a number of these “high modernist” attempts to engineer forests in eighteenth-century Prussia and Saxony, urban planning in Paris and Brasilia, rural populations in ujamaa villages, and agricultural production on Soviet collective farms (to name a few). Each time, central administrators attempted to make complex, complicated processes—from people to nature—legible, and then to engineer them into rational, organized systems based on scientific principles. It usually ended up going disastrously wrong—or, at least, not at all the way the central authorities had planned.

The problem, Scott explained, is that “certain forms of knowledge and control require a narrowing of vision. . . designed or planned social order is necessarily schematic; it always ignores essential features of any real, functioning social order.” For example, mono-cropped forests became more vulnerable to disease, depleted the soil, and destroyed the diversity of the flora, insect, mammal, and bird populations, which took generations to restore. The streets of Brasilia had not been designed with any local, community spaces where neighbors might interact; and, anyway, the planners forgot—ironically—to plan for construction workers, who subsequently founded their own settlement on the outskirts of the city, organized to defend their land, and demanded urban services and secure titles. By 1980, Scott explained, “seventy-five percent of the population of Brasilia lived in settlements that had never been anticipated, while the planned city had reached less than half of its projected population of 557,000.” Contrary to Zuboff’s assertion that we are losing “the right to the future tense,” individuals and organic social processes have shown a remarkable capacity to resist and subvert otherwise brilliant plans to control them.

And yet this high modernism characterizes most approaches to “regulating” social media, whether self-regulatory or state-imposed. And, precisely because cyberspace is so mediated, it is more difficult for users to resist or subvert the centrally controlled processes imposed upon them. Misinformation on Facebook proliferates—and so the central administrators of Facebook try to engineer better algorithms, or hire legions of content moderators, or make centralized decisions about labeling posts, or simply kick off users. It is, in other words, a classic high-modernist approach to socially engineering the space of Facebook, and all it does is result in the platform’s ruler—Mark Zuckerberg—consolidating more power. (Coincidentally, fellow Power-Shift contributor Jennifer Cobbe argued something quite similar in her recent article about the power of algorithmic censorship.) Like previous attempts to engineer society, this one probably will not work well in practice—and there may be disastrous, authoritarian consequences as a result.

So what is the anarchist approach to social media? Consider this description of an urban community by twentieth-century activist Jane Jacobs, as recounted by Scott:

“The public peace—the sidewalk and street peace—of cities . . . is kept by an intricate, almost unconscious network of voluntary controls and standards among the people themselves, and enforced by the people themselves. . . . [an] incident that occurred on [Jacobs’] mixed-use street in Manhattan when an older man seemed to be trying to cajole an eight or nine-year-old girl to go with him. As Jacobs watched this from her second-floor window, wondering if she should intervene, the butcher’s wife appeared on the sidewalk, as did the owner of the deli, two patrons of a bar, a fruit vendor, and a laundryman, and several other people watched openly from their tenement windows, ready to frustrate a possible abduction. No “peace officer” appeared or was necessary. . . . There are no formal public or voluntary organizations of urban order here—no police, no private guards or neighborhood watch, no formal meetings or officeholders. Instead, the order is embedded in the logic of daily practice.”

How do we make social media sites more like Jacobs’ Manhattan street, where people on “sidewalk terms”—not police or administrators—are empowered to shape their own cyberspaces?

There may already be one example: Wikipedia. 

Wikipedia is not often thought of as an example of a social media site—but, as many librarians will tell you, it is not really an encyclopedia. Yet Wikipedia is not only a remarkable repository of user-generated content; it has also been incredibly resilient to misinformation and extremist content. Indeed, while debates around Facebook ask whether the site has eroded public discourse to such an extent that democracy itself has been undermined, debates around Wikipedia center on whether it is as accurate as the expert-generated content of Encyclopedia Britannica. (Encyclopedia Britannica says no; Wikipedia says it’s close.)

The difference is that Wikipedia empowers users. Anyone, absolutely anyone, can update Wikipedia. Everyone can see who has edited what, allowing users to self-regulate—which is how users identified that suspected Russian agent Maria Butina was probably editing her own Wikipedia page, and changed it back. This radical transparency and empowerment produces organic social processes where, much like on the Manhattan street, individuals collectively mediate their own space. And, most importantly, it is dynamic—Wikipedia changes all the time. Instead of a static ruling (such as Facebook’s determination that the iconic photo of Napalm Girl would be banned for child nudity), Wikipedia’s process produces dialogue and deliberation, where communities constantly socially construct meaning and knowledge. Finally, because cyberspace is ultimately mediated space—individuals cannot just “walk” or “wander” across sidewalks, as in the real world—Wikipedia is mission-driven. It does not have the amorphous goal of “connecting the global community”, but rather aims “to create a world in which everyone can freely share in the sum of all knowledge.”

This suggests that no amount of design updates or changes to terms of service will ever “fix” Facebook—whether they are imposed by the US government, Mark Zuckerberg or Facebook’s Oversight Board. Instead, it is the high modernism itself that is the problem. The anarchist’s approach would prioritize building designs that empower people and communities—so why not adopt the wiki-approach to the public square functions that social media currently serves, like wiki-newspapers or wiki-newsfeeds?

It might be better to take the anarchist’s approach. No algorithms are needed.

by Alina Utrata

Review: ‘The Social Dilemma’ — Take #2

In “More than tools: who is responsible for the social dilemma?”, Microsoft researcher Niall Docherty has an original take on the thinking that underpins the film. If we are to pursue more productive discussions of the issues raised by the film, he argues, we need to re-frame social media as something more than a mere “tool”. After all, “when have human beings ever been fully and perfectly in control of the technologies around them? Is it not rather the case that technologies, far from being separate from human will, are intrinsically involved in its activation?”

French philosopher Bruno Latour famously uses the example of the gun to advance this idea, which he calls mediation. We are all aware of the platitude, “Guns don’t kill people, people kill people”. By its logic, the gun is simply a tool that allows the person, as the primary agent, to kill another. The gun exists only as an object through which the person’s desire to kill flows. For Latour, this view is deeply misleading.

Instead, Latour draws our attention to the way the gun, in translating a human desire for killing into action, materializes that desire in the world: “you are a different person with the gun in your hand”, and the gun, by being in your hand, is different than if it were left snugly in its rack. Only when the human intention and the capacities of the gun are brought together can a shooting, as an observably autonomous action, actually take place. It is impossible to neatly distinguish the primary agents of the scene. Responsibility for the shooting, which can only occur through the combination of human and gun (and, by proxy, those who produced and provided it), is thus shared.

With this in mind, we must question how useful it is to think about social media in terms of manipulation and control. Social media, far from being a malicious yet inanimate object (like a weapon), is something more profound and complex: a generator of human will. Our interactions on social media platforms, our likes, our shares, our comments, are not raw resources to be mined – they simply could not have occurred without their technical mediation. Neither are they mere expressions of our autonomy, or, conversely, of manipulation: the user does not, and cannot, act alone.

The implication of taking this Latourian view is that “neither human individuals, nor the manipulative design of platforms, seductive they may be, can be the sole causes of the psychological and political harm of social media”. Rather, it is the coming together of human users and user-interfaces, in specific historical settings, that co-produces the activity that occurs upon them. We, as users, as much as the technology itself, therefore share responsibility for the issues that rage online today.

A ‘New School’ for geeks?

Posted by JN:

Interesting and novel. An online course for techies to get them thinking about the kind of world they are creating. Runs for twelve weeks from March 8 through May 31, 2021.

Its theme of “creative protest” covers issues of justice in tech through a multitude of approaches — whether it’s organizing in the workplace, contributing to an existing data visualization project or worker-owned platform, or building a new platform to nurture creative activism. Participants will work towards a final project as part of the program. The course, Logic School, meets for a live, two-hour session each week. Before each session, participants are expected to complete self-guided work: listening, reading, reflecting, and making progress on their final projects.

It’s free (funded by the Omidyar Network) and requires a commitment of five hours a week.

Trust in/distrust of public sector data repositories

Posted by JN

My eye was caught by an ad for a PhD internship in the Social Media Collective, an interesting group of scholars in Microsoft Research’s NYC lab. What’s significant is the background they cite for the project.

Microsoft Research NYC is looking for an advanced PhD student to conduct an original research project on a topic under the rubric of “(dis)trust in public-sector data infrastructures.” MSR internships provide PhD students with an opportunity to work on an independent research project that advances their intellectual development while collaborating with a multi-disciplinary group of scholars. Interns typically relish the networks that they build through this program. This internship will be mentored by danah boyd; the intern will be part of both the NYC lab’s cohort and a member of the Social Media Collective. Applicants for this internship should be interested in conducting original research related to how trust in public-sector data infrastructures is formed and/or destroyed.

Substantive Context: In the United States, federal data infrastructures are under attack. Political interference has threatened the legitimacy of federal agencies and the data infrastructures they protect. Climate science relies on data collected by NOAA, the Department of Energy, NASA, and the Department of Agriculture. Yet, anti-science political rhetoric has restricted funding, undermined hiring, and pushed for the erasure of critical sources of data. And then there was Sharpie-gate. In the midst of a pandemic, policymakers in government and leaders in industry need to trust public health data to make informed decisions. Yet, the CDC has faced such severe attacks on its data infrastructure and organization that non-governmental groups have formed to create shadow sources of data. The census is democracy’s data infrastructure, yet it too has been plagued by political interference.

Data has long been a source of political power and state legitimacy, as well as a tool to argue for specific policies and defend core values. Yet, the history of public-sector data infrastructures is fraught, in no small part because state data has long been used to oppress, colonize, and control. Numbers have politics and politics has numbers.  Anti-colonial and anti-racist movements have long challenged what data the state collects, about whom, and for what purposes. Decades of public policy debates about privacy and power have shaped public-sector data infrastructures. Amidst these efforts to ensure that data is used to ensure equity — and not abuse — there have been a range of adversarial forces who have invested in polluting data for political, financial, or ideological purposes.

The legitimacy of public-sector data infrastructures is socially constructed. It is not driven by either the quality or quantity of data, but how the data — and the institution that uses its credibility to guarantee the data —  is perceived. When data are manipulated or political interests contort the appearance of data, data infrastructures are at risk. As with any type of infrastructure, data infrastructures must be maintained as sociotechnical systems. Data infrastructures are rendered visible when they break, but the cracks in the system should be negotiated long before the system has collapsed.

At the moment, I suspect that this is a problem that’s mostly confined to the US.  But the stresses of the pandemic and of alt-right disruption may mean that it’s coming to Europe (and elsewhere) soon.

Review: ‘The Social Dilemma’ – Take #1

The Social Dilemma is an interesting — and much-discussed — docudrama about the impact of social media on society. We thought it’d be interesting to have a series in which we gather different takes on the film. Here’s Take #1…

Spool forward a couple of centuries. A small group of social historians drawn from the survivors of climate catastrophe are picking through the documentary records of what we are currently pleased to call our civilisation, and they come across a couple of old movies. When they’ve managed to find a device on which they can view them, it dawns on them that these two films might provide an insight into a great puzzle: how and why did the prosperous, apparently peaceful societies of the early 21st century implode?

The two movies are The Social Network, which tells the story of how a po-faced Harvard dropout named Mark Zuckerberg created a powerful and highly profitable company; and The Social Dilemma, which is about how the business model of this company – as ruthlessly deployed by its po-faced founder – turned out to be an existential threat to the democracy that 21st-century humans once enjoyed.

Both movies are instructive and entertaining, but the second one leaves one wanting more. Its goal is admirably ambitious: to provide a compelling, graphic account of what the business model of a handful of companies is doing to us and to our societies. The intention of the director, Jeff Orlowski, is clear from the outset: to reuse the strategy deployed in his two previous documentaries on climate change – nicely summarised by one critic as “bring compelling new insight to a familiar topic while also scaring the absolute shit out of you”.

For those of us who have for years been trying – without notable success – to spark public concern about what’s going on in tech, it’s fascinating to watch how a talented movie director goes about the task. Orlowski adopts a two-track approach. In the first, he assembles a squad of engineers and executives – people who built the addiction-machines of social media but have now repented – to talk openly about their feelings of guilt about the harms they inadvertently inflicted on society, and explain some of the details of their algorithmic perversions.

They are, as you might expect, almost all males of a certain age and type. The writer Maria Farrell, in a memorable essay, describes them as examples of the prodigal techbro – tech executives who experience a sort of religious awakening and “suddenly see their former employers as toxic, and reinvent themselves as experts on taming the tech giants. They were lost and are now found.”

Biblical scholars will recognise the reference from Luke 15. The prodigal son returns having “devoured his living with harlots” and is welcomed with open arms by his old dad, much to the dismay of his more dutiful brother. Farrell is not so welcoming. “These ‘I was lost but now I’m found, please come to my Ted Talk’ accounts,” she writes, “typically miss most of the actual journey, yet claim the moral authority of one who’s ‘been there’ but came back. It’s a teleportation machine, but for ethics.”

It is, but Orlowski welcomes these techbros with open arms because they suit his purpose – which is to explain to viewers the terrible things that the surveillance capitalist companies such as Facebook and Google do to their users. And the problem with that is that when he gets to the point where we need ideas about how to undo that damage, the boys turn out to be a bit – how shall I put it? – incoherent.

The second expository track in the film – which is interwoven with the documentary strand – is a fictional account of a perfectly normal American family whose kids are manipulated and ruined by their addiction to social media. This is Orlowski’s way of persuading non-tech-savvy viewers that the documentary stuff is not only real, but is inflicting tangible harm on their teenagers. It’s a way of saying: Pay attention: this stuff really matters!

And it works, up to a point. The fictional strand is necessary because the biggest difficulty facing critics of an industry that treats users as lab rats is that of explaining to the rats what’s happening to them while they are continually diverted by the treats (in this case dopamine highs) being delivered by the smartphones that the experimenters control.

Where the movie fails is in its inability to accurately explain the engine driving this industry that harnesses applied psychology to exploit human weaknesses and vulnerabilities.

A few times it wheels on Prof Shoshana Zuboff, the scholar who gave this activity a name – “surveillance capitalism”, a mutant form of our economic system that mines human experience (as logged in our data trails) in order to produce marketable predictions about what we will do/read/buy/believe next. Most people seem to have twigged the “surveillance” part of the term, but overlooked the second word. Which is a pity because the business model of social media is not really a mutant version of capitalism: it’s just capitalism doing its thing – finding and exploiting resources from which profit can be extracted. Having looted, plundered and denuded the natural world, it has now turned to extracting and exploiting what’s inside our heads. And the great mystery is why we continue to allow it to do so.

John Naughton

Davids can sometimes really upset tech Goliaths

John Naughton

The leading David at the moment is Max Schrems, the Austrian activist and founder of NOYB, the most formidable data-privacy campaigning organisation outside the US. As a student, he launched the campaign that eventually led to the Court of Justice of the European Union ruling that the ‘Safe Harbour’ agreement negotiated between the EU and the US to regulate transatlantic data transfers was invalid. NOYB was established as a European non-profit that works on strategic litigation to ensure that the GDPR is upheld. It started with a concept, a website and a crowdfunding tool, and within two months it had acquired thousands of “supporters”, which allowed it to begin operations with basic funding of €250,000 per year. A quick survey of its website suggests that it’s been very busy. And Schrems’s dispute with the Irish Data Protection Commissioner (DPC) about her failure to regulate Facebook’s handling of European users’ data has led to the Irish High Court ordering the DPC to cover the costs of Schrems’s legal team in relation to the Court of Justice ruling on EU-US data transfers.

What’s interesting about this story is the way it challenges the “learned helplessness” that has characterised much of the public response to abuses of power by tech giants. The right kind of strategic litigation, precisely targeted and properly researched, can bring results.

The political arguments against digital monopolies in the House Judiciary Report

Alina Utrata

The House Judiciary Committee’s report on digital monopolies (all 449 pages) was a meticulously researched dossier on why the Big Four tech companies—Google, Apple, Amazon and Facebook—should be considered monopolies. However, leaving the nitty-gritty details aside, it’s worth examining how the report frames the political arguments for why monopolies are bad.

It’s important to distinguish economic and political anti-monopoly arguments, although they are related. Economically, the report’s reasoning is very strong. No doubt this is in part because one of its authors is Lina Khan, the brilliant lawyer whose innovative and compelling case for why Amazon should be considered a monopoly went viral in 2017 and laid the legal groundwork for this report. The authors reason that monopolies are fundamentally anti-competitive, not conducive to entrepreneurship and innovation, and inevitably lead to fewer choices for consumers and worse quality in products and services, including a lack of privacy protections. In particular, the report draws on Khan’s theory that anti-competitive behavior should not be defined merely as behavior that results in high consumer prices (à la Bork), but also through firms’ ability to use predatory pricing and to exploit competitors’ reliance on their market infrastructure to cause harm.

However, as former FTC chairman Robert Pitofsky pointed out, “It is bad history, bad policy, and bad law to exclude certain political values in interpreting the antitrust laws.”[1] The report explicitly acknowledges that monopolies do not just threaten the economy, stating, “Our economy and democracy are at stake.”[2] So what, politically, does the report say is the problem?

The first is the effect that these digital platforms have on journalism. The report noted that “a free and diverse press is essential to a vibrant democracy . . . independent journalism sustains our democracy by facilitating public discourse.” In particular, it points to the death of local news, and the fact that many communities effectively no longer have a fourth estate to hold local government accountable. The report also notes the power imbalance between the platforms and news organizations—the shift to content aggregation, and the fact that most online traffic to digital publications is mediated through the platforms, means that small tweaks in algorithms can have major consequences for newspapers’ readership. While the report frames this in terms of newspapers’ bargaining power, it stops short of articulating the fundamental political issue at stake: unaccountable, private corporations have the power to determine what content we see and don’t see online.

The second argument is that monopoly corporations infringe on the “economic liberty” of citizens. The report, both implicitly and explicitly, references the 1890 Congressional debates on anti-trust, in which US Senator Sherman proclaimed, “If we will not endure a king as a political power we should not endure a king over the production, transportation, and sale of any of the necessaries of life. If we would not submit to an emperor we should not submit to an autocrat of trade.”[3] This reasoning asserts that monopoly corporations exert a tyrannical power over individuals’ economic lives, directly analogous to the type of tyranny states can exert over individuals’ political lives. As Khan pointed out in an earlier publication, in the 1890 debates “what was at stake in keeping markets open—and keeping them free from industrial monarchs—was freedom.”[4]

Repeatedly, the report notes that the committee encountered a “prevalence of fear among market participants who depend on the dominant platforms.” It maintains that this fear stems from the economic dependence the platforms’ monopoly power has created. For example, 37% of third-party sellers on Amazon—about 850,000 businesses—rely on Amazon as their sole source of income. Because of Amazon’s position as the gateway to e-commerce—Amazon controls about 65 to 70% of all U.S. online marketplace sales—it has the power to force sellers (or “internal competitors”) into arbitration. Amazon can kick sellers off the site, lower the rankings of their products, or lengthen their shipping times—or, as happened to one third-party seller, refuse to release the products stored in Amazon warehouses while still charging rent. Amazon forces sellers to give up their right to make a complaint in court as a condition of using its platform. Because of Amazon’s dominance, sellers cannot walk away. The report explicitly compares this marketplace power to the power of the state:

“Because of the severe financial repercussions associated with suspension or delisting, many Amazon third-party sellers live in fear of the company. For sellers, Amazon functions as a “quasi-state,” and many “[s]ellers are more worried about a case being opened on Amazon than in actual court.” This is because Amazon’s internal dispute resolution system is characterized by uncertainty, unresponsiveness, and opaque decision-making processes.”[5]

In this argument, monopolies are a threat to the economic liberty of individuals because they can use their dominance to subject those who depend on their markets to their own private law, as well as to pick “winners and losers.” The rise of this type of corporate law has been discussed before, specifically in reference to technology corporations. Frank Pasquale has predicted a shift from territorial to functional sovereignty, explaining, “in functional arenas from room-letting to transportation to commerce, persons will be increasingly subject to corporate, rather than democratic, control. For example: Who needs city housing regulators when AirBnB can use data-driven methods to effectively regulate room-letting, then house-letting, and eventually urban planning generally?”[6] Rory Van Loo has written about the phenomenon more generally in “The Corporation as Courthouse,” which examines the marketplace for dispute resolution run by companies ranging from credit card issuers to the Apple app store.[7]

Finally, the report repeats Supreme Court Justice Louis Brandeis’s famous quote that “We may have democracy, or we may have wealth concentrated in the hands of a few, but we cannot have both.” (Funnily enough, there is no documentation that Brandeis ever actually said that, although he certainly would have agreed with the sentiment.) It points out that “the growth in the platforms’ market power has coincided with an increase in their influence over the policymaking process.” The authors explicitly note the corporations’ use of political lobbyists and their investments in think-tanks and non-profit advocacy groups to steer policy discussions. (Notably, Mohamed Abdalla and Moustafa Abdalla have just published a new paper, “The Grey Hoodie Project,” about how Big Tech uses the strategies of Big Tobacco to influence academic research.) However, it’s not clear why monopolists’ power to influence the political process is any different from that of any wealthy individual or corporation. In fact, political theorist Rob Reich has written a book, Just Giving, arguing that philanthropy can subvert democratic processes. (An interesting real-world example: Facebook donated $11 million to the city of Menlo Park with the understanding that the money would be used to establish and maintain a new police unit near Facebook’s headquarters.)

A final political argument, not included in the report, comes from an unlikely source: Mark Zuckerberg (and, given his new role at Facebook, possibly former UK deputy prime minister Nick Clegg too).[8] Zuckerberg argued during the committee hearings that breaking up companies like Facebook would allow other competitors, especially companies from China, to dominate the market in Facebook’s place. These companies, Zuckerberg claimed, don’t have the same values as the US—including democracy, competition, inclusion and free expression. Along with a dose of protectionism, the implicit argument is that it is better for private American corporations like Facebook to make decisions about who is allowed to say what online—and how to prioritize distributing that content—than it is to cede that power to authoritarian states.

The interesting thing is that Zuckerberg’s argument taps into a second strain of anti-monopoly political reasoning: that the state is scarier than corporations. Take the discourse around monopolies in 1950, during the debate on the Celler–Kefauver Act. As journalist Marquis Childs wrote, big corporations are “in reality collectivism—a kind of private socialism. . . [and] private socialism will sooner or later in a democracy become public socialism.”[9] In the shadow of the Cold War, the argument went that Big Firms would inevitably create a Big Government to regulate them, and Big Government would inevitably become fascism, communism, or some other authoritarian form of centralized state control. As Robert Pitofsky summed up, the argument asserts that “monopolies create economic conditions conducive to totalitarianism.”[10] It’s the all-dominant state that citizens should be worried about, not necessarily the all-dominant corporation. (Tim Wu has written about this history of anti-monopoly thinking in the US in his book The Curse of Bigness: Antitrust in the New Gilded Age.)

To me, the most interesting thing to note is that the report did not mention the state’s reliance on these Big Tech corporations—particularly in new areas, like cloud computing. As the report documents, Amazon Web Services (AWS) dominates the cloud computing market, making up about half of global spending on cloud infrastructure services (and three times the market share of its closest competitor, Microsoft). An estimated 6,500 government agencies use AWS—including NASA and the CIA. If Target and Netflix are worried about relying on AWS, should the US government be worried about its own dependence on Amazon Web Services? Does this type of consolidated infrastructure risk creating fragility in the system by becoming too big to fail?

This question will continue to have relevance, especially as AWS and Microsoft’s Azure continue their battle for the Pentagon’s $10 billion cloud computing contract. Notably, US President-elect Joe Biden has appointed Mark Schwartz, an Enterprise Strategist at Amazon Web Services, to the Agency Review Team for the critically important Office of Management and Budget (along with a number of other individuals connected to Big Tech). Anti-trust and digital monopolies will certainly be a major issue for the incoming Biden Administration.


[1] Pitofsky, Robert. “Political Content of Antitrust.” University of Pennsylvania Law Review 127, no. 4 (January 1, 1979): 1051. 

[2] Emphasis added.

[3] 21 CONG. REC. 2459. Pitofsky, Robert. “Political Content of Antitrust.” University of Pennsylvania Law Review 127, no. 4 (January 1, 1979): 1051. 

[4] Khan, Lina. “Amazon’s Antitrust Paradox.” Yale Law Journal 126, no. 3 (January 1, 2016). https://digitalcommons.law.yale.edu/ylj/vol126/iss3/3.

[5] Subcommittee on Antitrust, Commercial and Administrative Law of the Committee on the Judiciary. “Investigation of Competition in Digital Markets: Majority Staff Report and Recommendations.” US House of Representatives, October 6, 2020. Emphasis added.

[6] Pasquale, Frank. “From Territorial to Functional Sovereignty: The Case of Amazon.” Open Democracy. January 5, 2018.

[7] Van Loo, Rory. “The Corporation as Courthouse.” Yale Journal on Regulation 33 (2016): 56.

[8] Many thanks to John Naughton for pointing this out to me.

[10] Pitofsky, Robert. “Political Content of Antitrust.” University of Pennsylvania Law Review 127, no. 4 (January 1, 1979): 1051. 

 

The European Commission launches Amazon probe

John Naughton

The European commission has opened an antitrust investigation of Amazon, on the grounds that the company has breached EU antitrust rules against distorting competition in online retail markets. Amazon, says the commission, has been using its privileged access to non-public data of independent sellers who sell on its marketplace to benefit the parts of its own retail business that directly compete with those third-party sellers. The commission has also opened a second investigation into the possible preferential treatment of Amazon’s own retail offers compared with those of marketplace sellers that use Amazon’s logistics and delivery services.

The good news about this is not so much that the EU is taking action as that it is doing so in an intelligently targeted manner. Too much of the discourse about tech companies in the last two years has been about “breaking them up”. But “break ’em up” is a slogan, not a policy, and it has a kind of Trumpian ring to it. The commission is avoiding that.

It is also avoiding another trap – that of generally labelling Amazon as a “monopoly”. As the analyst Benedict Evans never tires of pointing out: a monopoly in what market, exactly? In the US, Amazon has about 40% of e-commerce. That looks like near dominance, in competitive terms. But e-commerce is only 16-20% of all retail. “So,” asks Evans, “does Amazon have 40% of e-commerce or 10% of retail? Amazon’s lawyers would argue, entirely reasonably, that Amazon competes with Walmart, Costco, Macy’s and Safeway – that it competes with other large retailers, not just ‘online’ retailers. On that basis, Amazon’s market is ‘retail’ and its market share in the US is between 5% and 10%.”
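Evans’s arithmetic is easy to check. Here is a rough back-of-the-envelope sketch in Python, using only the approximate figures quoted above (illustrative round numbers, not precise market data):

```python
# Back-of-the-envelope check of Evans's point, using the rough figures
# quoted above (approximations for illustration, not precise market data).

amazon_share_of_ecommerce = 0.40          # Amazon: roughly 40% of US e-commerce
ecommerce_share_of_retail = (0.16, 0.20)  # e-commerce: roughly 16-20% of all US retail

low, high = (amazon_share_of_ecommerce * s for s in ecommerce_share_of_retail)
print(f"Implied share of total US retail: {low:.1%} to {high:.1%}")
# Prints roughly 6.4% to 8.0%, i.e. within Evans's "between 5% and 10%" of retail,
# even though Amazon holds about 40% of e-commerce.
```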

On the other hand, if you’re a book publisher, then Amazon definitely looks like a monopoly, with more than half of all book sales and probably three-quarters of all ebook sales. The moral for regulators, therefore, is that if you want to go after a monopolist, then choose the market carefully. And this is what the commission has done, because in Amazon’s own online “marketplace”, where third parties sell stuff on its platform, it very definitely is a monopoly. And, according to the US House of Representatives’ recent inquiry, it is abusing its power in that particular marketplace. The EU inquiry will be into whether that is also happening in Europe.

The traditional response to such charges is that if people want to trade in Amazon’s hyper-efficient online marketplace then they have to play by Amazon’s rules. After all, nobody’s forcing them to be there. (The same argument is made about Apple’s app store.) That might work if there were dozens of alternative marketplaces, but network effects have led to a situation where a winner has taken all. In the online world, Amazon is a giant while all others are minnows. And the pandemic has further reinforced its dominance. So it really matters if the company is indeed abusing its monopoly in its own marketplace. What makes it worse is that Amazon is both a player in that marketplace and the adjudicator of complaints about its behaviour. Judge and jury and all that.

Breaking Amazon up is unlikely to be an effective remedy to this kind of problem. What is probably needed is legislation that regulates behaviour in online marketplaces and, for example, makes it illegal both to run a market and to trade in it on your own account. That’s not to say that break-up might not be appropriate in some cases. Maybe Facebook should be forced to disgorge Instagram and WhatsApp, and Google to liberate YouTube. Even then, though, history provides some cautionary tales.

Take AT&T, for example, which for many decades was a lightly regulated monopoly with total control over the US telephone network. This had benefits, in the sense that the country had a pretty good analogue phone system. But it also had grievous downsides, because it meant that AT&T controlled the pace of innovation on communications technology, which effectively gave it the power to apply the brakes to the future. The company rejected the idea of packet-switching (the underpinning technology of the internet), for example, when it was first proposed in the early 1960s. Worse still, in the mid-1930s, after a researcher at Bell Labs invented a method of recording audio signals on to magnetised wire reels, he was forced to stop the research and lock away his notebooks because AT&T feared that it would damage the telephone business. So a technology that proved essential for the digital computing industry was hidden away for 20-plus years.

Eventually, though, the “break ’em up” mania took hold, and in the early 1980s AT&T was dismantled into seven companies – the “baby bells”. You can guess what happened: some of the babies grew and grew and swallowed up others, with the result that there are now two giant corporations – AT&T and Verizon. So even if WhatsApp, YouTube and Instagram were liberated from their existing parents, network effects and capitalist concentration would make them into a new generation of tech giants and we would be back here in 20 years wondering how to regulate them. The truth is that regulation is hard, and focused, intelligent regulation is even harder. So maybe the way the EU is going about it is the path to follow.

[A version of this post appeared in The Observer, 15.11.2020]

Should you have a right to a Facebook account?

Alina Utrata

Now that the 2020 US presidential election has concluded, the post-mortem evaluation of how well social media platforms performed will begin. Since the content moderation debate has mostly focused on platforms’ willingness or unwillingness to remove content or accounts, the post-election coverage will almost inevitably center on who and what was removed or labeled.

In 2019, the New York Times published a feature story about individuals who had had their Facebook accounts suspended—possibly because they had been misidentified as fake accounts in a general security sweep. However, these users do not know for certain. They were told only that their accounts had been disabled because of “suspicious activity”—the appeal process for restoring suspended Facebook accounts is not a transparent one, and cases frequently drag on for extended periods with no resolution. (Facebook, as the article documents, has quite sophisticated techniques for catching individuals attempting to create multiple accounts, foreclosing the option of a “second Facebook account” for suspended users.)

Facebook CEO Mark Zuckerberg has frequently said that he does not want the platform to become the “arbiter of free speech.” The constitutional guarantee of free speech, however, is a restriction that applies only to governments. As the existence of these suspended accounts shows, in reality Facebook can limit speech or ban users for almost any reason it cares to put in its terms of service. It is a private corporation, not a government.

The problem of the exclusionary policies of private corporations might be less acute in a competitive marketplace. For example, it could be inconvenient if a personal feud with your local corner store got you banned from the shop; but it is always possible to buy milk from another store down the road. It is different, of course, if you happen to live in Lawton, Oklahoma—or one of the hundreds of communities across the US where Walmart is the dominant monopoly, capturing 70% or more of the grocery market. Being banned from Walmart (whether for using its electric scooters while intoxicated or for violating the store’s policy against carrying guns) might be far more significant for your life and livelihood.

Facebook’s form of monopoly power means that being banned from the platform can have significant consequences for individuals’ lives: loss of the data hosted on the platform (like photos or old messages), of the ability to use Messenger to connect with friends and family, or of access to professional or social groups organized only on Facebook. Some people depend on Facebook for their livelihoods, communicating with customers or selling on Facebook’s marketplace; others depend on it for political campaigns, reaching out to voters in a run for local city council, for example. The same dynamics are true for other digital monopolies, like Amazon. The recent House Judiciary report found that Amazon can, and often does, arbitrarily lower third-party sellers’ products in its search rankings, lengthen their shipping times, or kick them off the site entirely. About 850,000 businesses, or 37% of third-party sellers on Amazon, rely on Amazon as their sole source of income. Monopolies can be, as Zephyr Teachout argues, a form of economic tyranny.

There are two general approaches floated for remedying this monopoly power. The first is to “break them up.” Facebook’s or Amazon’s policies might be less important if there were many e-commerce or social networking sites in town—and perhaps their policies would improve if they had to compete with other platforms for users or sellers. On the other hand, Facebook might argue that the value of social networking sites lies in the fact that they are consolidated. As the sudden surge in popularity of the app Parler may soon demonstrate, there is very little point in being on a social networking site if the people you want to reach aren’t there too. Alternative social networking sites may simply be complementary, rather than competitive. Similarly, Amazon might argue that it is convenient, and beneficial, to both consumers and sellers that e-commerce is located all in one place. Instead of searching online (by using another monopoly, Google) through hundreds of webpages with no guide as to quality, you can go to one portal at Amazon and find exactly what you want.

A second approach to tackling monopolies is regulation. For example, the state can and does get involved if a private corporation excludes you on the basis of a protected identity, such as race or sexual orientation. US Senator Elizabeth Warren’s call for Amazon, Apple or Google to choose whether they want to be “market platforms” or “market participants” is another example of the state attempting to impose regulations in order to make sure that these monopolies are more fair. The state also gets involved in questions of product safety. For example, the Forum on Information and Democracy has just published a report recommending principles for regulating quality standards on digital platforms, in the same way that governments might require standards for food or medicine sold on the market. In this approach, the state imposes limits or controls on corporations to try to curb or reform their power over consumers. However, this approach requires active government enforcement and involvement. As the House Judiciary report documented, even though they are equipped with anti-trust laws, many US regulatory agencies have been slow or unwilling to take on the Big Tech monopolies. Corporations also point out that government involvement can stifle innovation and entrepreneurship.

However, there might be a third approach: democratization. Mark Zuckerberg has said that “in a lot of ways Facebook is more like a government than a traditional company.” If that is the case, then it has been a long time since the United States tolerated a government with the kind of absolute power Mark Zuckerberg exerts over Facebook (as CEO and founder, Zuckerberg retains majority voting shares). So could we democratize Facebook, and make it a company ruled by consent of the governed rather than by fiat of the king? Could Facebook users appoint representatives to a “Constitutional Convention” to draft Facebook’s terms of service, or adopt a Bill of Rights to guide design and algorithmic principles? Facebook’s Oversight Board has already been compared to a Supreme Court, so why not add a legislative branch too? Could we hold elections for representatives to a Facebook legislature, which would pass “laws” about how the online community should be governed? (A Facebook legislature would arguably be more effective than the referendum process Facebook tried the last time it experimented with democratization.)

Crucially, however, any democratization process would have to be coupled with genuine democratic reform of Facebook’s corporate governance: a Facebook Parliament in name only wouldn’t achieve much if Mark Zuckerberg retained absolute control of the company. True democratization would require a change not just in who we think represents Facebook, but in who owns Facebook—or, rather, who ought to own Facebook. Mark Zuckerberg? Or we, its users? If the answer is Mark Zuckerberg, a Facebook account will always be a privilege, not a right.

Democratizing digital sovereignty: an impossible task?

Julia Rone

The concept of digital sovereignty has gained increasing traction in the last decade. A study by the Canadian scholars Stéphane Couture and Sophie Toupin using the ProQuest database has shown that while the term appeared only 6 times in general publications before 2008, it was used almost 240 times between 2015 and 2018. Like every trendy new term, “digital sovereignty” has been used in a variety of fields in multiple, often conflicting ways. It has been “mobilized by a diversity of actors, from heads of states to indigenous scholars, to grassroots movements, and anarchist-oriented ‘tech collectives,’ with very diverse conceptualizations, to promote goals as diverse as state protectionism, multistakeholder Internet governance or protection against state surveillance”.

Within the EU, Germany has been a champion of “digital sovereignty” — promoted in domestic discourse as a panacea, a magic solution that can at once increase the competitiveness of German digital industries, allow individuals to control their data, and give the state the power to manage vulnerabilities in critical infrastructures. As Daniel Lambach and Kai Oppermann have found, German domestic players have used the term in very vague ways, which has made it easier to organize coalitions around it to apply for funding or push for particular policies. Furthermore, the German Federal Foreign Office has made considerable efforts to promote the term in European policy debates. Germany has been more cautious on the international scene, where the US has promoted an open Internet (which entirely suits its economic and geopolitical interests, one must add) and has been very suspicious of notions of digital sovereignty, associated above all with Chinese and Russian doctrines. Attempting to avoid characterizations of sovereignty as necessarily authoritarian, French President Emmanuel Macron proposed in a 2018 speech at the Internet Governance Forum a vision of the return of the democratic state in Internet governance, different from both the Chinese model of control and the Californian model of private self-regulation. This unfortunately turned out to be easier said than done.

What all of this shows is that, beyond the fact that more and more political and economic players talk about “digital sovereignty”, the term itself is up for grabs: there is no single accepted meaning for it. This might seem confusing, but I argue it is liberating, since it allows us to imagine digital sovereignty as we want it to be rather than encountering it as a stable, ossified reality. Drawing on a recent discussion of conflicts of sovereignty in the European Union, I claim that discussions about digital sovereignty have been dominated by the same tension as more general discussions of sovereignty – namely, the tension between national and supranational sovereignty. Yet, as Brack, Crespy and Coman convincingly argue, the more important sovereignty conflicts in recent European Union politics have in fact been between the people and parliaments, as bearers of democratic sovereignty, on the one hand, and executives at both the national and supranational level, on the other. The demand for “real democracy now” that informed the Spanish Indignados protests reverberated strongly across Europe, and in a decade of protests against both austerity and free trade, protesters and civil society alike made strong claims for democratic deepening. Sovereignty is ultimately bound up with the question of “who rules”, and since the French Revolution the answer to this question in Europe, at least normatively, has been “the people”. Of course, how “the people” rule and who constitutes “the people” are questions that have sparked both theoretical and practical, sometimes extremely violent, debates over centuries. Yet the democratic impulse behind the contemporary notion of sovereignty remains, and it has become increasingly prominent in the aftermath of the 2008 financial crisis, in which the insulation of markets from democratic control became painfully visible.

What is remarkable is that none of these debates on sovereignty as, ultimately, democratic sovereignty has reached the field of digital policy. Talk of digital sovereignty in policy circles has often presupposed either an authoritarian, omnipotent state — as evidenced in Russian and Chinese doctrines of digital sovereignty — or a democratic state in which all decisions are made by the executive, as in Macron’s vision of the ‘return of the state’ in Internet policy. Yet almost all interesting issues of Internet regulation are issues that deserve proper democratic debate and participation. States such as France, attempting to regulate disinformation without even a basic consultation with citizens, have rightly been accused of censorship and stifling political speech.

Who can decide what constitutes disinformation, hate speech or online harms? There is no easy answer to this question, but certainly greater democratic involvement and discussion in decisions about silencing political messages would be appropriate. This democratic involvement can take the form of parliamentary debates, hearings and resolutions. But it can also take the form of debates at democratic neighbourhood assemblies or organized mini-publics. It can take place at the European level, with more involvement of the European Parliament and innovative uses of so-far ‘blunt’ instruments such as online public consultations or the European Citizens’ Initiative. Or it can take place at the national level, with the parliaments of even small EU member states building up their capacity to monitor and debate Internet policy proposals. National citizens can also get involved in debates on Internet policy through petitions, referenda, and public consultations. Such initiatives would not only promote awareness of specific digital policies but would also increase their legitimacy, and potentially their effectiveness, if citizens have a sense of “ownership” of new laws and regulations and have taken part in shaping them.

Some of this might sound utopian. Some of it might sound painstakingly banal and obvious. But the truth is that while our democracies are struggling with the challenges posed by big tech, a lot of proposals for regulation have been shaped by the presence and power of private companies themselves or have been put forward by illiberal leaders with authoritarian tendencies. In such a context, demands for more digital democratic sovereignty could emancipate us from excessive private and executive power and allow us to reimagine digital content, data and infrastructures as something that is collectively owned and governed.

The early years of the Internet were marked by the techno-deterministic promise that digital tech would democratize politics. What happened instead was an immense concentration of power and influence in the hands of a few tech giants. The solution is not to take power from the private companies and hand it to powerful states acting as Big Brothers, but to democratize both. We can use democracy as a technology, or what the ancient Greeks would call techne, to make both private corporations and states more open, participative and accountable. This is certainly not what Putin, Macron or Merkel mean when they talk about digital sovereignty. But it is something that we as citizens should push for. Is it possible to democratize digital sovereignty? Or is such a vision bound to end up as the toothless reality of an occasional public consultation whose results decision makers ignore? This is ultimately a political question, not a conceptual one. The notion of “digital sovereignty” is up for grabs. So is our democratic future.
