Clubhouse in China shows that even “harmless” apps may put individuals in harm’s way

by Alina Utrata

Up until last week, most people hadn’t heard of the voice chatroom app Clubhouse, popular mostly with LA celebrities and Silicon Valley elites (the venture capital firm Andreessen Horowitz has invested tens of millions of dollars in the platform). The app gave off such an air of exclusivity because not just anyone could join: new users required an invite from a current user in order to sign up. Clubhouse’s growth has surged recently, especially after Elon Musk tweeted about it, with some invites auctioned off for up to $125. But the recent uptick in user growth has underscored a privacy nightmare that is almost certainly putting Clubhouse users and non-users at risk—and in some cases even putting them in danger. 

The problem centers on Clubhouse invites. In order to invite someone onto the platform, you must allow the app access to your contact list. The app then uploads your entire contact list—and lets you know how many Clubhouse users also have each of your contacts in their phones. Because of Clubhouse’s aggressive contact list collection policy (you literally cannot be invited onto the app unless one of your contacts has allowed Clubhouse access to their contacts), the app has the capacity to quickly accumulate a “social graph.”

Some, like Will Oremus, have pointed out that this is a little bit “creepy.” You can suddenly see how many people have your therapist (or drug dealer) in their contact list, revealing networks or connections that were previously invisible. This “invite a friend” practice is also, as Alexander Hanff pointed out, almost certainly illegal under GDPR. By providing your contact list to Clubhouse, you have shared your contacts’ personal information with the company without their consent. And while you may know that a friend shared your phone number with Clubhouse if you received an invite, Clubhouse also knows how many of its users have your phone number in their contacts—whether or not you are on the app at all. 

For places where GDPR does not apply, it is unclear whether this type of non-consensual data collection is illegal. As a California resident, I have the right to request that companies delete my personal information under California’s Consumer Privacy Act of 2018—and I wrote to the company to specifically request that Clubhouse delete my phone number that other users have shared. Clubhouse (or its parent company, Alpha Exploration Co) has 45 days to respond or to request a 90-day extension.

While I personally find this type of data collection irritating, it may literally be a matter of life or death for others. Just this week, some articles have triumphantly proclaimed that “Clubhouse cracked the Great Firewall.” Chat rooms titled “Does Xinjiang have concentration camps?” and “Friends from Tibet and Xinjiang, we want to invite you over for a chat” appeared on the platform, and were reportedly attended by individuals from mainland China (who downloaded the app via VPN, as Clubhouse is not available in China’s Apple app store). Clubhouse has since been blocked by China’s censors.

However, a report by the Stanford Internet Observatory pointed out that there are major security flaws in the Clubhouse app that almost certainly have put Chinese users and non-users of the app at risk. The SIO report found multiple problems with Clubhouse’s security—including a lack of encryption for sensitive information; reliance on back-end infrastructure supplied by a company based in Shanghai, which may therefore be legally obligated to share information with the Chinese government; and unclear policies surrounding Clubhouse’s storage and retention of chatroom audio. The report noted that “in at least one instance, SIO observed room metadata being relayed to servers we believe to be hosted in the PRC, and audio to servers managed by Chinese entities and distributed around the world via Anycast. It is also likely possible to connect Clubhouse IDs with user profiles.”

Clubhouse has almost certainly put attendees of those discussions who live in or have connections to mainland China at risk of retaliation from the Chinese government. It may also, by implication, have put non-Clubhouse users at risk, depending on how securely the users’ contact list data was stored. Even individuals who did not join the app or participate in the chatrooms may find themselves implicated if their phone number is found in the contact lists of individuals who are now associated with politically sensitive issues by virtue of participating in the Clubhouse chatrooms. 

This issue extends beyond the China context. As Dr Matt Mahmoudi discussed with me on my most recent podcast episode, data collection can be a death sentence. If individuals who work with undocumented immigrants, for instance, join Clubhouse, the network effects of their combined contact lists can reveal the phone numbers, and therefore the identities, of undocumented individuals—and, if shared with or demanded by ICE, lead to deportations. 

In the wake of the Stanford Internet Observatory report, Clubhouse has said that it is reviewing its data protection practices. But the fact remains that this comes only after individuals have already been put at risk by Clubhouse’s poor data practices. These issues could easily have been predicted and prevented, underscoring the need for a duty of care and risk assessment for even seemingly “harmless” apps. And—as a rule of thumb—the less data collected, the better.

Review: What Tech Calls Reading

A Review of FSG x Logic Series

by Alina Utrata

Publisher Farrar, Straus and Giroux (FSG) and the tech magazine Logic teamed up to produce four books that capture “technology in all its contradictions and innovation, across borders and socioeconomic divisions, from history through the future, beyond platitudes and PR hype, and past doom and gloom.” In that, the FSG x Logic series succeeded beyond its wildest ambitions. These books are some of the most well-researched, thought-provoking and—dare I say it—innovative takes on how technology is shaping our world. 

Here’s my review of three of the four—Blockchain Chicken Farm, Subprime Attention Crisis and What Tech Calls Thinking—but I highly recommend you read them all. (They average 200 pages each, so you could probably get through the whole series in the time it takes to finish Shoshana Zuboff’s Surveillance Capitalism.)

Blockchain Chicken Farm: And Other Stories of Tech in China’s Countryside

Xiaowei Wang

“Famine has its own vocabulary,” Xiaowei Wang writes, “a hungry language that haunts and lingers. My ninety-year-old great-uncle understands famine’s words well.” Wang writes as beautifully as they think, effortlessly weaving between ruminations on Chinese history, personal and family anecdotes, modern political and economic theory and first-hand research into the technological revolution sweeping rural China. Contradiction is a watchword in this book, as is contrast—they describe the differences between rural and urban life, between East and West, between family and the globe, between history, the present and the potential future. And yet, it all seems familiar. Wang invites us to think slowly about an industry that wants us to think fast—about whether any of this is actually about technology, or whether it is about capitalism, about globalization, about our politics and our communities—or, perhaps, about what it means to live a good life.

On blockchain chicken farms:

“The GoGoChicken project is a partnership between the village government and Lianmo Technology, a company that applies blockchain to physical objects, with a focus on provenance use cases—that is, tracking where something originates from. When falsified records and sprawling supply chains lead to issues of contamination and food safety, blockchain seems like a clear, logical solution. . . These chickens are delivered to consumers’ doors, butchered and vacuum sealed, with the ankle bracelet still attached, so customers can scan the QR code before preparing the chicken . . .”

On a Blockchain Chicken Farm in the Middle of Nowhere, pg 40

“A system of record keeping used to be textual, readable, and understandable to everyone. The technical component behind it was as simple as paper and pencil. That system was prone to falsification, but it was widely legible. Under governance by blockchain, records are tamperproof, but the technical systems are legible only to a select few. . . blockchain has yet to answer the question: If it takes power away from a central authority, can it truly put power back in the hands of the people, and not just a select group of people? Will it serve as an infrastructure that amplifies trust, rather than increasing both mistrust and a singular reliance on technical infrastructure? Will it provide ways to materially organize and enrich a community, rather than further accelerating financial systems that serve a select few?”

On a Blockchain Chicken Farm in the Middle of Nowhere, pg 48

On AI pig farming:

“In these large-scale farms, pigs are stamped with a unique identity mark on their bodies, similar to a QR code. That data is fed into a model made by Alibaba, and the model has the information it needs to monitor the pigs in real time, using video, temperature, and sound sensors. It’s through these channels that the model detects any sudden signs of fever or disease, or if pigs are crushing one another in their pens. If something does happen, the system recognizes the unique identifier on the pig’s body and gives an alert.”

When AI Farms Pigs, pg 63

“Like so many AI projects, ET Agricultural Brain naively assumes that the work of a farmer is to simply produce food for people in cities, and to make the food cheap and available. In this closed system, feeding humans is no different from feeding swaths of pigs on large farms. The project neglects the real work of smallholder farmers throughout the world. For thousands of years, the work of these farmers has been stewarding and maintaining the earth, rather than optimizing agricultural production. They use practices that yield nutrient-dense food, laying a foundation for healthy soils and rich ecology in an uncertain future. Their work is born out of commitment and responsibility: to their communities, to local ecology, to the land. Unlike machines, these farmers accept the responsibility of their actions with the land. They commit to the path of uncertainty.”

When AI Farms Pigs, pg 72

“After all, life is defined not by uncertainty itself but by a commitment to living despite it. In a time of economic and technological anxiety, the questions we ask cannot center on the inevitability of a closed system built by AI, and how to simply make those closed systems more rational or “fair.” What we face are the more difficult questions about the meaning of work, and the ways we commit, communicate, and exist in relation to each other. Answering these questions means looking beyond the rhetoric sold to us by tech companies. What we stand to gain is nothing short of true pleasure, a recognition that we are not isolated individuals, floating in a closed world.”

When AI Farms Pigs, pg 72

Subprime Attention Crisis: Advertising and the Time Bomb at the Heart of the Internet

Tim Hwang


In Subprime Attention Crisis, Tim Hwang argues that the terrifying thing about digital platforms is not how effective they are at manipulating behavior—it’s that they might not be very effective at all. Hwang documents, with precise and technical detail, how digital advertising markets work and how tech giants may be deliberately attempting to inflate their value, even as the actual effectiveness of online ads declines. If you think you’ve seen this film before, you have: Hwang draws parallels to the subprime mortgages and financial instruments that triggered the 2008 financial crash. He makes a compelling case that, sooner or later, the digital advertising bubble may burst—and the business model of the internet will explode overnight (not to mention all the things tech money subsidizes, from philanthropy to navigation maps to test and trace). Are Google and Facebook too big to fail? 

On potential systems breakdown:

“Whether underwriting a massive effort to scan the world’s books or enabling the purchase of leading robotics companies, Google’s revenue from programmatic advertising has, in effect, reshaped other industries. Major scientific breakthroughs, like recent advances in artificial intelligence and machine learning, have largely been made possible by a handful of corporations, many of which derive the vast majority of their wealth from online programmatic advertising. The fact that these invisible, silent programmatic marketplaces are critical to the continued functioning of the internet—and the solvency of so much more—begs a somewhat morbid thought experiment: What would a crisis in this elaborately designed system look like?”

The Plumbing, pg 25

“Intense dysfunction in the online advertising markets would threaten to create a structural breakdown of the classic bargain at the core of the information economy: services can be provided for free online to consumers, insofar as they are subsidized by the revenue generated from advertising. Companies would be forced to shift their business models in the face of a large and growing revenue gap, necessitating the rollout of models that require the consumer to pay directly for services. Paywalls, paid tiers of content, and subscription models would become more commonplace. Within the various properties owned by the dominant online platforms, services subsidized by advertising that are otherwise unprofitable might be shut down. How much would you be willing to pay for these services? What would you shell out for, and what would you leave behind? The ripple effects of a crisis in online advertising would fundamentally change how we consume and navigate the web.”

The Plumbing, pg 27

On fraud in digital advertising:

“One striking illustration is the subject of an ongoing lawsuit around claims that Facebook made in 2015 promoting the attractiveness of video advertising on its platform. At the time, the company was touting online video—and the advertising that could be sold alongside it—as the future of the platform, noting that it was “increasingly seeing a shift towards visual content on Facebook.” . . . But it turned out that Facebook overstated the level of attention being directed to its platform on the order of 60 to 80 percent. By undercounting the viewers of videos on Facebook, the platform overstated the average time users spent watching videos. . . . These inconsistencies have led some to claim that Facebook deliberately misled the advertising industry, a claim that Facebook has denied. Plaintiffs in a lawsuit against Facebook say that, in some cases, the company inflated its numbers by as much as 900 percent. Whatever the reasons for these errors in measurement, the “pivot to video” is a sharp illustration of how the modern advertising marketplace can leave buyers and sellers beholden to dominant platform decisions about what data to make available.”

Opacity, pg 70

On specific types of ad fraud:

“Click fraud is a widespread practice that uses automated scripts or armies of paid humans in “click farms” to deliver click-throughs on an ad. The result is that the advertising captures no real attention for the marketer. It is shown either to a human who was hired to click on the ad or to no one at all. The scale of this problem is enormous. A study conducted by Adobe in 2018 concluded that about 28 percent of website traffic showed “non-human signals,” indicating that it originated in automated scripts or in click farms. One study predicted that the advertising industry would lose $19 billion to click fraud in 2018—a loss of about $51 million per day. Some place this loss even higher. One estimate claims that $1 of every $3 spent on digital advertising is lost to click fraud.”

Subprime Attention, pg 85

What Tech Calls Thinking: An Inquiry into the Intellectual Bedrock of Silicon Valley

Adrian Daub


What Tech Calls Thinking is “about the history of ideas in a place that likes to pretend its ideas don’t have any history.” Daub has good reason to know this, as a professor of comparative literature at Stanford University (I never took a class with him, a fact I regretted more and more as the book went on). His turns of phrase do have the lyricism one associates with a literature seminar—e.g. “old motifs playing dress-up in a hoodie”—as he explores the ideas that run amok in Silicon Valley. He exposes delightful contradictions: thought leaders who engage only superficially with thoughts. CEOs who reject the university (drop out!), then build corporate campuses that look just like the university. As Daub explains the ideas of thinkers such as Abraham Maslow, René Girard, Ayn Rand, Jürgen Habermas, Karl Marx, Marshall McLuhan and Samuel Beckett, you get the sense, as Daub says, that these ideas “aren’t dangerous ideas in themselves. Their danger lies in the fact that they will probably lead to bad thinking.” The book is a compelling rejection of the pseudo-philosophy that has underpinned much of the Valley’s techno-determinism. “Quite frequently,” Daub explains, “these technologies are truly novel—but the companies that pioneer them use that novelty to suggest that traditional categories of understanding don’t do them justice, when in fact standard analytic tools largely apply just fine.” Daub’s analysis demonstrates the point well. 

On tech drop outs:

“You draw a regular salary and know what you’re doing with your life earlier than your peers, but you subsist on Snickers and Soylent far longer. You are prematurely self-directed and at the same time infantilized in ways that resemble college life for much longer than almost anyone in your age cohort. . . .  Dropping out is still understood as a rejection of a certain elite. But it is an anti-elitism whose very point is to usher you as quickly as possible into another elite—the elite of those who are sufficiently tuned in, the elite of those who get it, the ones who see through the world that the squares are happy to inhabit . . .  All of this seems to define the way tech practices dropping out of college: It’s a gesture of risk-taking that’s actually largely drained of risk. It’s a gesture of rejection that seems stuck on the very thing it’s supposedly rejecting.”

Dropping Out, pg 37

On platforms versus content creation:

“The idea that content is in a strange way secondary, even though the platforms Silicon Valley keeps inventing depend on it, is deeply ingrained. . . . To create content is to be distracted. To create the “platform” is to focus on the true structure of reality. Shaping media is better than shaping the content of such media. It is the person who makes the “platform” who becomes a billionaire. The person who provides the content—be it reviews on Yelp, self-published books on Amazon, your own car and waking hours through Uber—is a rube distracted by a glittering but pointless object.”

Content, pg 47

On gendered labor:

“Cartoonists, sex workers, mommy bloggers, book reviewers: there’s a pretty clear gender dimension to this division of labor. The programmers at Yelp are predominantly men. Its reviewers are mostly female . . . The problem isn’t that the act of providing content is ignored or uncompensated but rather that it isn’t recognized as labor. It is praised as essential, applauded as a form of civic engagement. Remunerated it is not. . . . And deciding what is and isn’t work has a long and ignominious history in the United States. They are “passionate,” “supportive” volunteers who want to help other people. These excuses are scripts, in other words, developed around domestic, especially female, labor. To explain why being a mom isn’t “real” work. To explain why women aren’t worth hiring, or promoting, or paying, or paying as much.”

Content, pg 51

On gendered data:

“There is the idea that running a company resembles being a sexual predator. But there is also the idea that data—resistant, squirrelly, but ultimately compliant—is a feminine resource to be seized, to be made to yield by a masculine force. . . .To grab data, to dispose of it, to make oneself its “boss”—the constant onslaught of highly publicized data breaches may well be a downstream effect of this kind of thinking. There isn’t very much of a care ethic when it comes to our data on the internet or in the cloud. Companies accumulate data and then withdraw from it, acting as though they have no responsibility for it—until the moment an evil hacker threatens said data. Which sounds, in other words, not too different from the heavily gendered imagery relied on by Snowflake. There is no sense of stewardship or responsibility for the data that you have “grabbed,” and the platform stays at a cool remove from the creaturely things that folks get up to when they go online and, wittingly or unwittingly, generate data.”

Content, pg 55

On disruption:

“There is an odd tension in the concept of “disruption,” and you can sense it here: disruption acts as though it thoroughly disrespects whatever existed previously, but in truth it often seeks to simply rearrange whatever exists. It is possessed of a deep fealty to whatever is already given. It seeks to make it more efficient, more exciting, more something, but it never wants to dispense altogether with what’s out there. This is why its gestures are always radical but its effects never really upset the apple cart: Uber claims to have “revolutionized” the experience of hailing a cab, but really that experience has stayed largely the same. What it managed to get rid of were steady jobs, unions, and anyone other than Uber’s making money on the whole enterprise.”

Desire, pg 104

Seeing Like a Social Media Site

The Anarchist’s Approach to Facebook

When John Perry Barlow published “A Declaration of the Independence of Cyberspace” nearly twenty-five years ago, he was expressing an idea that seemed almost obvious at the time: the internet was going to be a powerful tool to subvert state control. As Barlow explained to the “governments of the Industrial World,” those “weary giants of flesh and steel”—cyberspace does not lie within your borders. Cyberspace was a “civilization of the mind.” States might be able to control individuals’ bodies, but their weakness lay in their inability to capture minds.

In retrospect, this is a rather peculiar perspective on states’ real weakness, which has always been space. Literal, physical space—the endlessly vast terrain of the physical world—has historically been the friend of those attempting to avoid the state. As the scholar James Scott documented in The Art of Not Being Governed, in the early stages of state formation, if the central government became too overbearing, the population simply could—and often did—move. Similarly, John Torpey noted in The Invention of the Passport that individuals wanting to avoid eighteenth-century France’s system of passes could simply walk from town to town, and passes were often “lost” (or, indeed, actually lost). As Richard Cobb noted, “there is no one more difficult to control than the pedestrian.” More technologically savvy ways of traveling—the bus, the boat, the airplane—actually made it easier for the state to track and control movement.

Cyberspace may be the easiest place to track of all. It is, by definition, a mediated space. To visit, you must be in possession of hardware, which must be connected to a network, which is connected to other hardware, and other networks, and so on and so forth. Every single thing in the digital world is owned, controlled or monitored by someone else. It is impossible to be a pedestrian in cyberspace—you never walk alone. 

States have always attempted to make their populations more trackable, and thus more controllable. Scott calls this the process of making things “legible.” It includes “the creation of permanent last names, the standardization of weights and measures, the establishment of cadastral surveys and population registers, the invention of freehold tenure, the standardization of language and legal discourse, the design of cities, and the organization of transportation.” These things make previously complicated, complex and unstandardized facts knowable to the center, and thus easier to administer. If the state knows who you are, and where you are, then it can design systems to control you. What is legible is manipulable.

Cyberspace—and the associated processing of data—offers exciting new possibilities for the administrative center to make individuals more legible precisely because, as Barlow noted, it is “a space of the mind.” Only now, it’s not just states that have the capacity to do this—but sites. As Shoshana Zuboff documented in her book The Age of Surveillance Capitalism, sites like Facebook collect data about us in an attempt to make us more legible and, thus, more manipulable. This is not, however, the first time that “technologically brilliant” centralized administrators have attempted to engineer society. 

Scott uses the term “high modernism” to characterize schemes—attempted by planners across the political spectrum—that possess a “self-confidence about scientific and technical progress, the expansion of production, the growing satisfaction of human needs, the mastery of nature (including human nature), and, above all, the rational design of social order commensurate with the scientific understanding of natural laws.” In Seeing Like a State, Scott examines a number of these “high modernist” attempts to engineer forests in eighteenth-century Prussia and Saxony, urban space in Paris and Brasilia, rural populations in ujamaa villages, and agricultural production in Soviet collective farms (to name a few). Each time, central administrators attempted to make complex, complicated processes—from people to nature—legible, and then engineer them into rational, organized systems based on scientific principles. It usually ended up going disastrously wrong—or, at least, not at all the way central authorities had planned. 

The problem, Scott explained, is that “certain forms of knowledge and control require a narrowing of vision. . . designed or planned social order is necessarily schematic; it always ignores essential features of any real, functioning social order.” For example, mono-cropped forests became more vulnerable to disease and depleted the soil structure—not to mention destroying the diversity of the flora, insect, mammal and bird populations, which took generations to restore. The streets of Brasilia had not been designed with any local, community spaces where neighbors might interact; and, in any case, the planners ironically forgot to plan for the construction workers, who subsequently founded their own settlement on the outskirts of the city, organized to defend their land and demanded urban services and secure titles. By 1980, Scott explained, “seventy-five percent of the population of Brasilia lived in settlements that had never been anticipated, while the planned city had reached less than half of its projected population of 557,000.” Contrary to Zuboff’s assertion that we are losing “the right to a future tense,” individuals and organic social processes have shown a remarkable capacity to resist and subvert otherwise brilliant plans to control them.

And yet this high modernism characterizes most approaches to “regulating” social media, whether self-regulatory or state-imposed. And, precisely because cyberspace is so mediated, it is more difficult for users to resist or subvert the centrally-controlled processes imposed upon them. Misinformation on Facebook proliferates—and so the central administrators of Facebook try to engineer better algorithms, or hire legions of content moderators, or make centralized decisions about labeling posts, or simply kick off users. It is, in other words, a classic high-modernist approach to socially engineering the space of Facebook, and all it does is result in the platform’s ruler—Mark Zuckerberg—consolidating more power. (Coincidentally, fellow Power-Shift contributor Jennifer Cobbe argued something quite similar in her recent article about the power of algorithmic censorship.) Like previous attempts to engineer society, this one probably will not work well in practice—and there may be disastrous, authoritarian consequences as a result.

So what is the anarchist approach to social media? Consider this description of an urban community by twentieth-century activist Jane Jacobs, as recounted by Scott:

“The public peace—the sidewalk and street peace—of cities . . . is kept by an intricate, almost unconscious network of voluntary controls and standards among the people themselves, and enforced by the people themselves. . . . [an] incident that occurred on [Jacobs’] mixed-use street in Manhattan when an older man seemed to be trying to cajole an eight- or nine-year-old girl to go with him. As Jacobs watched this from her second-floor window, wondering if she should intervene, the butcher’s wife appeared on the sidewalk, as did the owner of the deli, two patrons of a bar, a fruit vendor, and a laundryman, and several other people watched openly from their tenement windows, ready to frustrate a possible abduction. No “peace officer” appeared or was necessary. . . . There are no formal public or voluntary organizations of urban order here—no police, no private guards or neighborhood watch, no formal meetings or officeholders. Instead, the order is embedded in the logic of daily practice.”

How do we make social media sites more like Jacobs’ Manhattan, where people—not police or administrators—on “sidewalk terms” are empowered to shape their own cyber spaces? 

There may already be one example: Wikipedia. 

Wikipedia is not often thought of as an example of a social media site—but, as many librarians will tell you, it is not an encyclopedia. Wikipedia is not only a remarkable repository of user-generated content, it has also been incredibly resilient to misinformation and extremist content. Indeed, while debates around Facebook ask whether the site has eroded public discourse to such an extent that democracy itself has been undermined, debates around Wikipedia center on whether it is as accurate as the expert-generated content of Encyclopedia Britannica. (Encyclopedia Britannica says no; Wikipedia says it’s close.)

The difference is that Wikipedia empowers users. Anyone, absolutely anyone, can update Wikipedia. Everyone can see who has edited what, allowing users to self-regulate—which is how users identified that suspected Russian agent Maria Butina was probably editing her own Wikipedia page, and changed it back. This radical transparency and empowerment produces organic social processes where, much like on the Manhattan street, individuals collectively mediate their own space. And, most importantly, it is dynamic—Wikipedia changes all the time. Instead of a static ruling (such as Facebook’s determination that the iconic photo of Napalm Girl would be banned for child nudity), Wikipedia’s process produces dialogue and deliberation, where communities constantly socially construct meaning and knowledge. Finally, because cyberspace is ultimately mediated space—individuals cannot just “walk” or “wander” across sidewalks, like in the real world—Wikipedia is mission-driven. It does not have the amorphous goal of “connecting the global community”, but rather aims “to create a world in which everyone can freely share in the sum of all knowledge.”

This suggests that no amount of design updates or changes to terms of service will ever “fix” Facebook—whether they are imposed by the US government, Mark Zuckerberg or Facebook’s Oversight Board. Instead, it is the high-modernism that is the problem. The anarchist’s approach would prioritize building designs that empower people and communities—so why not adopt the wiki-approach to the public square functions that social media currently serves, like wiki-newspapers or wiki-newsfeeds?

It might be better to take the anarchist’s approach. No algorithms are needed.


The political arguments against digital monopolies in the House Judiciary Report

by Alina Utrata

         The House Judiciary Committee’s report on digital monopolies (all 449 pages) is a meticulously researched dossier on why the Big Four tech companies—Google, Apple, Amazon and Facebook—should be considered monopolies. Leaving the nitty-gritty details aside, however, it’s worth examining how the report frames the political arguments for why monopolies are bad.

         It’s important to distinguish economic from political anti-monopoly arguments, although they are related. Economically, the report’s reasoning is very strong. No doubt this is in part because one of its authors is Lina Khan, the brilliant lawyer whose innovative and compelling 2017 case for why Amazon should be considered a monopoly went viral and built the legal argument on which this report rests. The authors reason that monopolies are fundamentally anti-competitive, not conducive to entrepreneurship and innovation, and inevitably lead to fewer choices for consumers and worse quality in products and services, including a lack of privacy protections. In particular, the report draws on Khan’s theory that anti-competitive behavior should not be defined merely as behavior resulting in high consumer prices (à la Bork), but through firms’ ability to use predatory pricing and their control of market infrastructure to harm competitors.

         However, as former FTC chairman Robert Pitofsky pointed out, “It is bad history, bad policy, and bad law to exclude certain political values in interpreting the antitrust laws.”[1] The report explicitly acknowledges that monopolies do not just threaten the economy, stating, “Our economy and democracy are at stake.”[2] So what, politically, does the report say is the problem?

         First, there is the effect that these digital platforms have on journalism. The report noted that “a free and diverse press is essential to a vibrant democracy . . . independent journalism sustains our democracy by facilitating public discourse.” In particular, it points out the death of local news, and the fact that many communities effectively no longer have a fourth estate to hold local government accountable. The report also notes the power imbalance between the platforms and news organizations—the shift to content-aggregation, and the fact that most online traffic to digital publications is mediated through the platforms, means that small tweaks in algorithms can have major consequences for newspapers’ readership. While the report frames this in terms of newspapers’ bargaining power, it stops short of articulating the fundamental political issue at stake: unaccountable, private corporations have the power to determine what content we see and don’t see online.

         The second argument is that monopoly corporations infringe on the “economic liberty” of citizens. The report, both implicitly and explicitly, references the 1890 Congressional debates on anti-trust, in which US Senator Sherman proclaimed, “If we will not endure a king as a political power we should not endure a king over the production, transportation, and sale of any of the necessaries of life. If we would not submit to an emperor we should not submit to an autocrat of trade.”[3] This reasoning asserts that monopoly corporations exert a tyrannical power over individuals’ economic lives, directly analogous to the type of tyranny states exert over individuals’ political lives. As Khan pointed out in a previous publication, in the 1890 debates, “what was at stake in keeping markets open—and keeping them free from industrial monarchs—was freedom.”[4]

         Repeatedly, the report notes that the committee had encountered a “prevalence of fear among market participants who depend on the dominant platforms.” It maintains that this was because of the economic dependence their monopoly power had created. For example, 37% of third-party sellers on Amazon—about 850,000 businesses—rely on Amazon as their sole source of income. Because of Amazon’s position as the gateway to e-commerce—Amazon controls about 65 to 70% of all U.S. online marketplace sales—it has the power to force sellers (or “internal competitors”) into arbitration. Amazon can kick sellers off the site, or lower the rankings of their products, or lengthen their shipping times—or, as happened to one third-party seller, refuse to release the products stored in Amazon warehouses, while still charging rent. Amazon forces sellers to give up their right to make a complaint in court as a condition for using its platform. Because of Amazon’s dominance, sellers cannot walk away. The report explicitly compares this marketplace power to the power of the state: 

“Because of the severe financial repercussions associated with suspension or delisting, many Amazon third-party sellers live in fear of the company. For sellers, Amazon functions as a “quasi-state,” and many “[s]ellers are more worried about a case being opened on Amazon than in actual court.” This is because Amazon’s internal dispute resolution system is characterized by uncertainty, unresponsiveness, and opaque decision-making processes.”[5]

          In this argument, monopolies are a threat to the economic liberty of individuals because they can use their dominance to subject those who depend on their markets to their own private law, as well as to pick “winners and losers.” The rise of this type of corporate law has been discussed before, specifically in reference to technology corporations. Frank Pasquale has predicted a shift from territorial to functional sovereignty, explaining, “in functional arenas from room-letting to transportation to commerce, persons will be increasingly subject to corporate, rather than democratic, control. For example: Who needs city housing regulators when AirBnB can use data-driven methods to effectively regulate room-letting, then house-letting, and eventually urban planning generally?”[6] Rory Van Loo wrote about the phenomenon more generally in “The Corporation as Courthouse,” which examines the marketplace for dispute resolution, ranging from credit card companies to the Apple app store.[7]

         Finally, the report repeats Supreme Court Justice Louis Brandeis’s famous quote that, “We may have democracy, or we may have wealth concentrated in the hands of a few, but we cannot have both.” (Funnily enough, there is no documentation that Brandeis ever actually said that, although he certainly would have agreed with the sentiment.) It points out that “the growth in the platforms’ market power has coincided with an increase in their influence over the policymaking process.” The authors explicitly noted the corporations’ use of political lobbyists and their investments in think-tanks and non-profit advocacy groups to steer policy discussions. (Notably, Mohamed Abdalla and Moustafa Abdalla have just published a new paper entitled “The Grey Hoodie Project” about how Big Tech uses the strategies of Big Tobacco in order to influence academic research.) However, it’s not clear why monopolists’ power to influence the political process is any different from that of any wealthy individual or corporation. In fact, political theorist Rob Reich wrote a book, Just Giving, arguing that philanthropy can subvert democratic processes. (An interesting real-world example is when Facebook donated $11 million to the city of Menlo Park with the understanding that it would be used to establish and maintain a new police unit near Facebook’s headquarters.)

         A final political argument, not included in the report, comes from an unlikely source: Mark Zuckerberg (and, given his new role at Facebook, possibly former UK deputy prime minister Nick Clegg too).[8]  Zuckerberg argued during the committee hearings that breaking up companies like Facebook would allow other competitors, especially companies from China, to dominate the market in Facebook’s place. These companies, Zuckerberg claimed, don’t have the same values as the US—including democracy, competition, inclusion and free expression. Along with a dose of protectionism, the implicit argument is that it is better for private American corporations like Facebook to make decisions about who is allowed to say what online—and how to prioritize distributing that content—than it is to cede that power to authoritarian states. 

         The interesting thing is that Zuckerberg’s argument taps into a second strain of anti-monopoly political reasoning: that the state is scarier than corporations. Take the discourse around monopolies in 1950 during the debate on the Celler–Kefauver Act. As journalist Marquis Childs wrote, big corporations are “in reality collectivism—a kind of private socialism. . . [and] private socialism will sooner or later in a democracy become public socialism.”[9] In the shadow of the Cold War, the argument went that Big Firms will inevitably create a Big Government to regulate them, and Big Government will inevitably become fascism, communism, or other authoritarian forms of centralized state control. As Robert Pitofsky summed up, the argument asserts that “monopolies create economic conditions conducive to totalitarianism.”[10]  It’s the all-dominant state that citizens should be worried about, not necessarily the all-dominant corporation. (Tim Wu has written about this history of anti-monopoly in the US in his book, The Curse of Bigness: Antitrust in the New Gilded Age.)

         To me, the most interesting thing to note is that the report did not mention the state’s reliance on these Big Tech corporations—particularly in new areas, like cloud computing. As the report documents, Amazon Web Services (AWS) dominates the cloud computing market, making up about half of global spending on cloud infrastructure services (and three times the market share of its closest competitor, Microsoft). An estimated 6,500 government agencies use AWS—including NASA and the CIA. If Target and Netflix are worried about using AWS, should the US government be worried about its dependency on Amazon Web Services? Does this type of consolidated infrastructure risk creating fragility in the system by becoming too big to fail?

         This question will continue to have relevance, especially as AWS and Microsoft’s Azure continue their battle for the Pentagon’s $10 billion cloud computing contract. Notably, US President-elect Joe Biden has appointed Mark Schwartz, an Enterprise Strategist at Amazon Web Services, to the Agency Review team for the critically important Office of Management and Budget (along with a number of other individuals connected to Big Tech). Anti-trust and digital monopolies will certainly be a major issue for the future Biden Administration.

[1] Pitofsky, Robert. “Political Content of Antitrust.” University of Pennsylvania Law Review 127, no. 4 (January 1, 1979): 1051. 

[2] Emphasis added.

[3] 21 CONG. REC. 2459. Pitofsky, Robert. “Political Content of Antitrust.” University of Pennsylvania Law Review 127, no. 4 (January 1, 1979): 1051. 

[4] Khan, Lina. “Amazon’s Antitrust Paradox.” Yale Law Journal 126, no. 3 (January 1, 2016).

[5] Subcommittee on Antitrust, Commercial and Administrative Law of the Committee on the Judiciary. “Investigation of Competition in Digital Markets: Majority Staff Report and Recommendations.” US House of Representatives, October 6, 2020. Emphasis added.

[6] Pasquale, Frank. “From Territorial to Functional Sovereignty: The Case of Amazon.” Open Democracy. January 5, 2018.

[7] Van Loo, Rory. “The Corporation as Courthouse.” Yale Journal on Regulation 33 (2016): 56.

[8] Many thanks to John Naughton for pointing this out to me.

[10] Pitofsky, Robert. “Political Content of Antitrust.” University of Pennsylvania Law Review 127, no. 4 (January 1, 1979): 1051. 


Should you have a right to a Facebook account?

by Alina Utrata

Now that the 2020 US presidential election has concluded, the post-mortem evaluation of how well social media platforms performed will begin. Since the content moderation debate has mostly focused on platforms’ willingness or unwillingness to remove content or accounts, the post-election coverage will almost inevitably center around who and what was removed or labeled.

In 2019, the New York Times published a feature story about individuals who had had their Facebook accounts suspended—possibly because they had been misidentified as fake accounts in a general security sweep. However, these users do not know for certain. Individuals were only told that their accounts had been disabled because of “suspicious activity”—the appeal process to restore suspended Facebook accounts is not a transparent one, and cases frequently drag on for extended periods with no resolution. (Facebook, as the article documents, has quite sophisticated techniques for catching individuals attempting to make multiple accounts, foreclosing the solution of a “second Facebook account” for suspended users.)

Facebook CEO Mark Zuckerberg has frequently said that he does not want the platform to become the “arbiter of free speech.” Free speech, however, is a constraint that applies only to governments. As the existence of these suspended accounts shows, in reality Facebook can limit speech or ban users for almost any reason it cares to put in its terms of service. It is a private corporation, not a government.

The problem of the exclusionary policies of private corporations might be less acute in a competitive marketplace. For example, it could be inconvenient if a personal feud with your local corner store leads to your being banned from the shop; but it is always possible to buy milk from another store down the road. It is different, of course, if you happen to live in Lawton, Oklahoma—or one of the hundreds of communities across the US where Walmart is the dominant monopoly, capturing 70% or more of the grocery store market. Being banned from Walmart (whether for using their electric scooters while intoxicated or for violating the store’s policy against carrying guns) might be far more significant for your life and livelihood.

Facebook’s form of monopoly power means that being banned from the platform can have significant consequences for individuals’ lives: loss of the data hosted on the platform (like photos or old messages), of the ability to use Messenger to connect with friends and family, or of access to professional or social groups organized only on Facebook. Some people depend on Facebook for their livelihoods, communicating with customers or selling on Facebook’s marketplace; or for political campaigns, reaching out to voters in a run for local city council, for example. The same dynamics are true for other digital monopolies, like Amazon. The recent House Judiciary report found that Amazon can, and often does, arbitrarily lower third-party sellers’ products in their search ranking, lengthen their shipping times, or kick them off the site entirely. About 850,000 businesses, or 37% of third-party sellers on Amazon, rely on Amazon as their sole source of income. Monopolies can be, as Zephyr Teachout argues, a form of economic tyranny.

There are two general approaches floated to remedying this monopoly power. The first is to “break them up.” Facebook or Amazon’s policies might be less important if there are many e-commerce or social networking sites in town—and perhaps their policies would improve if they had to compete with other platforms for users or sellers. On the other hand, Facebook might argue that the value of social networking sites is precisely that they are consolidated. As the sudden surge in popularity of the app Parler may soon demonstrate: there’s very little point in being on a social networking site if the people you want to reach aren’t there too. Alternative social networking sites may simply be complementary, rather than competitive. Similarly, Amazon might argue that it is convenient, and beneficial, to both consumers and sellers that e-commerce is located all in one place. Instead of searching online (by using another monopoly, Google) through hundreds of webpages with no guide as to quality, you can go to one portal at Amazon and find exactly what you want.

A second approach to tackling monopolies is regulation. For example, the state can and does get involved if a private corporation excludes you on the basis of a protected identity, such as race or sexual orientation. US Senator Elizabeth Warren’s call for Amazon, Apple or Google to choose whether they want to be “market platforms” or “market participants” is another example of the state’s attempt to impose regulations in order to make these monopolies fairer. The state also gets involved on matters of product safety. For example, the Forum on Information and Democracy just published a report recommending principles for regulating quality standards for digital platforms, in the same way that governments might require standards for food or medicine sold on the market. In this approach, the state imposes limits or controls on corporations to try to curb or reform their power over consumers. However, this approach requires active government enforcement and involvement. As the House Judiciary report documented, even though they are equipped with anti-trust laws, many US regulatory agencies have been slow or unwilling to take on the Big Tech monopolies. Corporations also point out that government involvement can stifle innovation and entrepreneurship.

However, there might be a third approach: democratization. Mark Zuckerberg has said that, “in a lot of ways Facebook is more like a government than a traditional company.” If that is the case, then it has been a long time since the United States tolerated a government with the kind of absolute power Mark Zuckerberg exerts over Facebook (as CEO and founder, Zuckerberg retains majority voting shares). So could we democratize Facebook, and make it a company ruled by consent of the governed rather than fiat of the king? Could Facebook users appoint representatives to a “Constitutional Convention” to draft Facebook’s terms of service, or adopt a Bill of Rights to guide design and algorithmic principles? Facebook’s Oversight Board has already been compared to a Supreme Court, so why not add a legislative branch too? Could we hold elections for representatives to a Facebook legislature, which would pass “laws” about how the online community should be governed? (A Facebook legislature would arguably be more effective than the referendum process Facebook tried the last time it experimented with democratization.)

Crucially, however, any democratization process would have to be coupled with genuine democratic reform of Facebook’s corporate governance: a Facebook Parliament in name only wouldn’t achieve much if Mark Zuckerberg retained absolute control of the company. True democratization would require a change not just in who we think represents Facebook, but in who owns Facebook—or, rather, who ought to own Facebook. Mark Zuckerberg? Or we, its users? If the answer is Mark Zuckerberg, a Facebook account will always be a privilege, not a right.
