Yochai Benkler and a team from the Berkman Klein Center have published an interesting study which comes to conclusions that challenge conventional wisdom about the power of social media.
“Contrary to the focus of most contemporary work on disinformation”, they write,
our findings suggest that this highly effective disinformation campaign, with potentially profound effects for both participation in and the legitimacy of the 2020 election, was an elite-driven, mass-media led process. Social media played only a secondary and supportive role.

This chimes with the study on networked propaganda that Yochai, Robert Faris and Hal Roberts conducted in 2015-16 and published in 2018 in Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics. They argued that the right-wing media ecosystem in the US operates fundamentally differently from the rest of the media environment. Their view was that longstanding institutional, political and cultural patterns in American politics interacted with technological change since the 1970s to create a propaganda feedback loop in American conservative media. This dynamic has, they thought, marginalised centre-right media and politicians, radicalised the right-wing ecosystem, and rendered it susceptible to propaganda efforts, foreign and domestic.
The key insight in both studies is that we are dealing with an ecosystem, not a machine, which is why focussing exclusively on social media as a prime explanation for the political upheavals of the last decade is unduly reductionist. In that sense, much of the public (and academic) commentary on social media’s role brings to mind the cartoon of the drunk looking for his car keys under a lamppost, not because he lost them there, but because at least there’s light. Because social media are relatively new arrivals on the scene, it’s (too) tempting to over-estimate their impact. Media ecology provides a better analytical lens because it means being alert to factors like diversity, symbiosis, feedback loops and parasitism, rather than to uni-causal explanations.
Not surprisingly, Signal has been staggering under the load of refugees from WhatsApp following Facebook’s ultimatum about sharing their data with other companies in its group. According to data from Sensor Tower, Signal was downloaded 8.8m times worldwide in the week after the WhatsApp changes were first announced on January 4. Compare that with 246,000 downloads the week before and you get some idea of the step-change. I guess the tweet — “Use Signal” — from Elon Musk on January 7 probably also added a spike.
In contrast, WhatsApp downloads during the period showed the reverse pattern — 9.7m downloads in the week after the announcement, compared with 11.3m before, a 14 per cent decrease.
This isn’t a crisis for Facebook — yet. But it’s a more serious challenge than the June 2020 advertising boycott. Evidence that Zuckerberg & Co are taking it seriously comes from announcements that Facebook has cancelled the February 8 deadline in its ultimatum to users. It now says that it will instead “go to people gradually to review the policy at their own pace before new business options are available on May 15.” As Charles Arthur has pointed out, the contrast between the leisurely pace at which Facebook has moved on questions of hate speech posted by alt-right outfits and its lightning response to the exodus from WhatsApp is instructive. It shows what really matters to the top brass.
Signal seems an interesting outfit, incidentally, and not just because of its technology. It’s a not-for-profit organisation, for one thing. Its software is open source — which means it can be independently assessed. And it’s been created by interesting people. Brian Acton, for example, is one of the two co-founders of WhatsApp, which Facebook bought in 2014 for $19B. He pumped $50m of that into Signal, and no doubt there’s a lot more where that came from. And Moxie Marlinspike, the CEO, is not only a cryptographer but also a hacker, a shipwright, and a licensed mariner. The New Yorker had a nice profile of him a while back.
It was eerily quiet on social media last week. That’s because Trump and his cultists had been “deplatformed”. By banning him, Twitter effectively took away the megaphone he’s been masterfully deploying since he ran for president. The shock of the 6 January assault on the Capitol was seismic enough to convince even Mark Zuckerberg that the plug finally had to be pulled. And so it was, even to the point of Amazon Web Services terminating the hosting of Parler, a Twitter alternative for alt-right extremists.
The deafening silence that followed these measures was, however, offset by an explosion of commentary about their implications for freedom, democracy and the future of civilisation as we know it. Wading knee-deep through such a torrent of opinion about the first amendment, free speech, censorship, tech power and “accountability” (whatever that might mean), it was sometimes hard to keep one’s bearings. But what came to mind continually was H L Mencken’s astute insight that “for every complex problem there is an answer that is clear, simple and wrong”. The air was filled with people touting such answers.
In the midst of the discursive chaos, though, some general themes could be discerned. The first highlighted cultural differences, especially between the US, with its sacred first amendment, on the one hand, and European and other societies, which have more ambivalent histories of moderating speech, on the other. The obvious problem with this line of discussion is that the first amendment is about government regulation of speech and has nothing whatsoever to do with tech companies, which are free to do as they like on their platforms.
A second theme viewed the root cause of the problem as the lax regulatory climate in the US over the last three decades, which led to the emergence of a few giant tech companies that effectively became the hosts for much of the public sphere. If there were many Facebooks, YouTubes and Twitters, so the counter-argument runs, then censorship would be less effective and problematic because anyone denied a platform could always go elsewhere.
Then there were arguments about power and accountability. In a democracy, those who make decisions about which speech is acceptable and which isn’t ought to be democratically accountable. “The fact that a CEO can pull the plug on Potus’s loudspeaker without any checks and balances,” fumed EU commissioner Thierry Breton, “is not only confirmation of the power of these platforms, but it also displays deep weaknesses in the way our society is organised in the digital space.” Or, to put it another way, who elected the bosses of Facebook, Google, YouTube and Twitter?
What was missing from the discourse was any consideration of whether the problem exposed by the sudden deplatforming of Trump and his associates and camp followers is actually soluble – at least in the way it has been framed until now. The paradox that the internet is a global system but law is territorial (and culture-specific) has traditionally been a way of stopping conversations about how to get the technology under democratic control. And it was running through the discussion all week like a length of barbed wire that snagged anyone trying to make progress through the morass.
All of which suggests that it’d be worth trying to reframe the problem in more productive ways. One interesting suggestion for how to do that came last week in a thoughtful Twitter thread by Blayne Haggart, a Canadian political scientist. Forget about speech for a moment, he suggests, and think about an analogous problem in another sphere – banking. “Different societies have different tolerances for financial risk,” he writes, “with different regulatory regimes to match. Just like countries are free to set their own banking rules, they should be free to set strong conditions, including ownership rules, on how platforms operate in their territory. Decisions by a company in one country should not be binding on citizens in another country.”
In those terms, HSBC may be a “global” bank, but when it’s operating in the UK it has to obey British regulations. Similarly, when operating in the US, it follows that jurisdiction’s rules. Translating that to the tech sphere suggests that the time has come to stop accepting the tech giants’ claims to be hyper-global corporations, when in fact they are US companies operating in many jurisdictions across the globe, paying as little local tax as possible and resisting local regulation with all the lobbying resources they can muster. Facebook, YouTube, Google and Twitter can bleat as sanctimoniously as they like about freedom of speech and the first amendment in the US, but when they operate here, as Facebook UK, say, then they’re merely British subsidiaries of an American corporation incorporated in California. And these subsidiaries must obey British laws on defamation, hate speech and other statutes that have nothing to do with the first amendment. Oh, and they should also pay taxes on their local revenues.
Publisher Farrar, Straus and Giroux (FSG) and the tech magazine Logic teamed up to produce four books that capture “technology in all its contradictions and innovation, across borders and socioeconomic divisions, from history through the future, beyond platitudes and PR hype, and past doom and gloom.” In that, the FSG x Logic series succeeded beyond its wildest imagination. These books are some of the most well-researched, thought-provoking and—dare I say it—innovative takes on how technology is shaping our world.
Here’s my review of three of the four—Blockchain Chicken Farm, Subprime Attention Crisis and What Tech Calls Thinking—but I highly recommend you read them all. (They average 200 pages each, so you could probably get through the whole series in the time it takes to finish Shoshana Zuboff’s Surveillance Capitalism.)
“Famine has its own vocabulary,” Xiaowei Wang writes, “a hungry language that haunts and lingers. My ninety-year-old great-uncle understands famine’s words well.” Wang writes as beautifully as they think, effortlessly weaving between ruminations on Chinese history, personal and family anecdotes, modern political and economic theory and first-hand research into the technological revolution sweeping rural China. Contradiction is a watchword in this book, as is contrast—they describe the difference between rural and urban life, of the East and the West, of family and the globe, of history and the present and the potential future. And yet, it all seems familiar. Wang invites us to think slowly about an industry that wants us to think fast—about whether any of this is actually about technology, or whether it is about capitalism, about globalization, about our politics and our communities—or, perhaps, about what it means to live a good life.
On blockchain chicken farms:
“The GoGoChicken project is a partnership between the village government and Lianmo Technology, a company that applies blockchain to physical objects, with a focus on provenance use cases—that is, tracking where something originates from. When falsified records and sprawling supply chains lead to issues of contamination and food safety, blockchain seems like a clear, logical solution. . . These chickens are delivered to consumers’ doors, butchered and vacuum sealed, with the ankle bracelet still attached, so customers can scan the QR code before preparing the chicken . . .”
On a Blockchain Chicken Farm in the Middle of Nowhere, pg 40
“A system of record keeping used to be textual, readable, and understandable to everyone. The technical component behind it was as simple as paper and pencil. That system was prone to falsification, but it was widely legible. Under governance by blockchain, records are tamperproof, but the technical systems are legible only to a select few. . . blockchain has yet to answer the question: If it takes power away from a central authority, can it truly put power back in the hands of the people, and not just a select group of people? Will it serve as an infrastructure that amplifies trust, rather than increasing both mistrust and a singular reliance on technical infrastructure? Will it provide ways to materially organize and enrich a community, rather than further accelerating financial systems that serve a select few?”
On a Blockchain Chicken Farm in the Middle of Nowhere, pg 48
On AI pig farming:
“In these large-scale farms, pigs are stamped with a unique identity mark on their bodies, similar to a QR code. That data is fed into a model made by Alibaba, and the model has the information it needs to monitor the pigs in real time, using video, temperature, and sound sensors. It’s through these channels that the model detects any sudden signs of fever or disease, or if pigs are crushing one another in their pens. If something does happen, the system recognizes the unique identifier on the pig’s body and gives an alert.”
When AI Farms Pigs, pg 63
“Like so many AI projects, ET Agricultural Brain naively assumes that the work of a farmer is to simply produce food for people in cities, and to make the food cheap and available. In this closed system, feeding humans is no different from feeding swaths of pigs on large farms. The project neglects the real work of smallholder farmers throughout the world. For thousands of years, the work of these farmers has been stewarding and maintaining the earth, rather than optimizing agricultural production. They use practices that yield nutrient-dense food, laying a foundation for healthy soils and rich ecology in an uncertain future. Their work is born out of commitment and responsibility: to their communities, to local ecology, to the land. Unlike machines, these farmers accept the responsibility of their actions with the land. They commit to the path of uncertainty.”
When AI Farms Pigs, pg 72
“After all, life is defined not by uncertainty itself but by a commitment to living despite it. In a time of economic and technological anxiety, the questions we ask cannot center on the inevitability of a closed system built by AI, and how to simply make those closed systems more rational or “fair.” What we face are the more difficult questions about the meaning of work, and the ways we commit, communicate, and exist in relation to each other. Answering these questions means looking beyond the rhetoric sold to us by tech companies. What we stand to gain is nothing short of true pleasure, a recognition that we are not isolated individuals, floating in a closed world.”
In Subprime Attention Crisis, Tim Hwang argues that the terrifying thing about digital platforms is not how effective they are at manipulating behavior—it’s that they might not be very effective at all. Hwang documents, with precise and technical detail, how digital advertising markets work and how tech giants may be deliberately attempting to inflate their value, even as the actual effectiveness of online ads declines. If you think you’ve seen this film before, Hwang draws parallels to the subprime mortgages and financial systems that triggered the 2008 financial crash. He makes a compelling case that, sooner or later, the digital advertising bubble may burst—and the business model of the internet will explode overnight (not to mention all the things tech money subsidizes, from philanthropy to navigation maps to test and trace). Are Google and Facebook too big to fail?
On potential systems breakdown:
“Whether underwriting a massive effort to scan the world’s books or enabling the purchase of leading robotics companies, Google’s revenue from programmatic advertising has, in effect, reshaped other industries. Major scientific breakthroughs, like recent advances in artificial intelligence and machine learning, have largely been made possible by a handful of corporations, many of which derive the vast majority of their wealth from online programmatic advertising. The fact that these invisible, silent programmatic marketplaces are critical to the continued functioning of the internet—and the solvency of so much more—begs a somewhat morbid thought experiment: What would a crisis in this elaborately designed system look like?”
The Plumbing, pg 25
“Intense dysfunction in the online advertising markets would threaten to create a structural breakdown of the classic bargain at the core of the information economy: services can be provided for free online to consumers, insofar as they are subsidized by the revenue generated from advertising. Companies would be forced to shift their business models in the face of a large and growing revenue gap, necessitating the rollout of models that require the consumer to pay directly for services. Paywalls, paid tiers of content, and subscription models would become more commonplace. Within the various properties owned by the dominant online platforms, services subsidized by advertising that are otherwise unprofitable might be shut down. How much would you be willing to pay for these services? What would you shell out for, and what would you leave behind? The ripple effects of a crisis in online advertising would fundamentally change how we consume and navigate the web.”
The Plumbing, pg 27
On fraud in digital advertising:
“One striking illustration is the subject of an ongoing lawsuit around claims that Facebook made in 2015 promoting the attractiveness of video advertising on its platform. At the time, the company was touting online video—and the advertising that could be sold alongside it—as the future of the platform, noting that it was “increasingly seeing a shift towards visual content on Facebook.” . . . But it turned out that Facebook overstated the level of attention being directed to its platform on the order of 60 to 80 percent. By undercounting the viewers of videos on Facebook, the platform overstated the average time users spent watching videos. . . . These inconsistencies have led some to claim that Facebook deliberately misled the advertising industry, a claim that Facebook has denied. Plaintiffs in a lawsuit against Facebook say that, in some cases, the company inflated its numbers by as much as 900 percent. Whatever the reasons for these errors in measurement, the “pivot to video” is a sharp illustration of how the modern advertising marketplace can leave buyers and sellers beholden to dominant platform decisions about what data to make available.”
Opacity, pg 70
On specific types of ad fraud:
“Click fraud is a widespread practice that uses automated scripts or armies of paid humans in “click farms” to deliver click-throughs on an ad. The result is that the advertising captures no real attention for the marketer. It is shown either to a human who was hired to click on the ad or to no one at all. The scale of this problem is enormous. A study conducted by Adobe in 2018 concluded that about 28 percent of website traffic showed “non-human signals,” indicating that it originated in automated scripts or in click farms. One study predicted that the advertising industry would lose $19 billion to click fraud in 2018—a loss of about $51 million per day. Some place this loss even higher. One estimate claims that $1 of every $3 spent on digital advertising is lost to click fraud.”
What Tech Calls Thinking is “about the history of ideas in a place that likes to pretend its ideas don’t have any history.” Its author, Adrian Daub, has good reason to know this, as a professor of comparative literature at Stanford University (I never took a class with him, a fact I regretted more and more as the book went on). His turns of phrase do have the lyricism one associates with a literature seminar—e.g. “old motifs playing dress-up in a hoodie”—as he explores the ideas that run amok in Silicon Valley. He exposes delightful contradictions: thought leaders who engage only superficially with thoughts. CEOs who reject the university (drop out!), then build corporate campuses that look just like the university. As Daub explains the ideas of thinkers such as Abraham Maslow, René Girard, Ayn Rand, Jürgen Habermas, Karl Marx, Marshall McLuhan and Samuel Beckett, you get the sense, as Daub says, that these ideas “aren’t dangerous ideas in themselves. Their danger lies in the fact that they will probably lead to bad thinking.” The book is a compelling rejection of the pseudo-philosophy that has underpinned much of the Valley’s techno-determinism. “Quite frequently,” Daub explains, “these technologies are truly novel—but the companies that pioneer them use that novelty to suggest that traditional categories of understanding don’t do them justice, when in fact standard analytic tools largely apply just fine.” Daub’s analysis demonstrates the point well.
On tech drop outs:
“You draw a regular salary and know what you’re doing with your life earlier than your peers, but you subsist on Snickers and Soylent far longer. You are prematurely self-directed and at the same time infantilized in ways that resemble college life for much longer than almost anyone in your age cohort. . . . Dropping out is still understood as a rejection of a certain elite. But it is an anti-elitism whose very point is to usher you as quickly as possible into another elite—the elite of those who are sufficiently tuned in, the elite of those who get it, the ones who see through the world that the squares are happy to inhabit . . . All of this seems to define the way tech practices dropping out of college: It’s a gesture of risk-taking that’s actually largely drained of risk. It’s a gesture of rejection that seems stuck on the very thing it’s supposedly rejecting.”
Dropping Out, pg 37
On platforms versus content creation:
“The idea that content is in a strange way secondary, even though the platforms Silicon Valley keeps inventing depend on it, is deeply ingrained. . . . To create content is to be distracted. To create the “platform” is to focus on the true structure of reality. Shaping media is better than shaping the content of such media. It is the person who makes the “platform” who becomes a billionaire. The person who provides the content—be it reviews on Yelp, self-published books on Amazon, your own car and waking hours through Uber—is a rube distracted by a glittering but pointless object.”
Content, pg 47
On gendered labor:
“Cartoonists, sex workers, mommy bloggers, book reviewers: there’s a pretty clear gender dimension to this division of labor. The programmers at Yelp are predominantly men. Its reviewers are mostly female . . . The problem isn’t that the act of providing content is ignored or uncompensated but rather that it isn’t recognized as labor. It is praised as essential, applauded as a form of civic engagement. Remunerated it is not. . . . And deciding what is and isn’t work has a long and ignominious history in the United States. They are “passionate,” “supportive” volunteers who want to help other people. These excuses are scripts, in other words, developed around domestic, especially female, labor. To explain why being a mom isn’t “real” work. To explain why women aren’t worth hiring, or promoting, or paying, or paying as much.”
Content, pg 51
On gendered data:
“There is the idea that running a company resembles being a sexual predator. But there is also the idea that data—resistant, squirrelly, but ultimately compliant—is a feminine resource to be seized, to be made to yield by a masculine force. . . .To grab data, to dispose of it, to make oneself its “boss”—the constant onslaught of highly publicized data breaches may well be a downstream effect of this kind of thinking. There isn’t very much of a care ethic when it comes to our data on the internet or in the cloud. Companies accumulate data and then withdraw from it, acting as though they have no responsibility for it—until the moment an evil hacker threatens said data. Which sounds, in other words, not too different from the heavily gendered imagery relied on by Snowflake. There is no sense of stewardship or responsibility for the data that you have “grabbed,” and the platform stays at a cool remove from the creaturely things that folks get up to when they go online and, wittingly or unwittingly, generate data.”
Content, pg 55
“There is an odd tension in the concept of “disruption,” and you can sense it here: disruption acts as though it thoroughly disrespects whatever existed previously, but in truth it often seeks to simply rearrange whatever exists. It is possessed of a deep fealty to whatever is already given. It seeks to make it more efficient, more exciting, more something, but it never wants to dispense altogether with what’s out there. This is why its gestures are always radical but its effects never really upset the apple cart: Uber claims to have “revolutionized” the experience of hailing a cab, but really that experience has stayed largely the same. What it managed to get rid of were steady jobs, unions, and anyone other than Uber’s making money on the whole enterprise.”
We need state-owned, interoperable, democratically governed online public networks. From the people for the people.
posted by Julia Rone
The conversation so far
The following comments on Trump being banned from Twitter and the removal of Parler from the Android and iOS app stores were, somewhat aptly, inspired by two threads on Twitter itself: the first by the British-Canadian blogger Cory Doctorow and the other by the Canadian scholar Blayne Haggart. The point of this post is to pick up the conversation where Doctorow and Haggart left it and involve more people from our team. Ideally, nobody will be censored in the process :p
Doctorow insists that the big problem with Apple and Google removing Parler is not so much censorship – ultimately, different app stores can have different rules, and this should be the case – but rather the fact that there are no alternative app stores. Thus, the core of his argument is that the US needs to enforce antitrust laws that would allow for fair competition between a number of competitors. The same argument can be extended to breaking up social media monopolists such as Facebook and Twitter. What we need is more competition.
Haggart attacks this argument in three ways:
First, he reminds us that “market regulation of the type that @doctorow wants requires perfect competition. This is unlikely to happen for a number of reasons (e.g, low consumer understanding of platform issues, tendency to natural monopoly)”. Thus, the most likely outcome becomes the establishment of “a few more corporate oligarchs”. This basically leaves the state as a key regulator – much to the disappointment of cyber-libertarians who have argued against state regulation for decades.
The problem is, and this is Haggart’s second key point, that “as a non-American, it’s beyond frustrating that this debate (like so many internet policy debates) basically amounts to Americans arguing with other Americans about how to run the world. Other countries need to assert their standing in this debate”. This point was also made years ago in Martin Hardie’s great paper “Foreigner in a free land”, in which he noted how most debates about copyright law focused on the US. Even progressive people such as Larry Lessig built their whole argument on references to the US constitution. But what about the rest of us – the poor souls from the rest of the world who don’t live in the US?
Of course, Facebook, Twitter, Alphabet, Amazon, etc. are all US tech companies. But they operate globally. So even if the US state intervenes to regulate them, the regulation it imposes might not sit well with people in, say, France or Germany. American prudishness about nudity is the oft-quoted example of differing standards of content regulation. No French person would be horrified by the sight of a bare breast (at least if we believe the stereotypes), so why should nude photos be removed from French social media? If we want platform governance to be truly democratic, the people affected by it should “have a say in that decision”. But as Haggart notes, “This cannot happen so long as platforms are global, or decisions about them are made only in DC”.
So what does Haggart offer? Simple: break up social media giants not along market lines but along national lines. Well, maybe not that simple…
If we take the idea of breaking up monopolies along national lines seriously…
This post starts from Haggart’s proposal to break up social media along national lines, assuming it is a good proposal. I do this not for rhetorical purposes or to set up a straw man, but because I actually think it is a good one. The following lines aim to take the proposal seriously and consider its different aspects, discussing the potential drawbacks and problems we should keep in mind.
How to do this?
The first key problem is: who on Earth can convince companies such as Facebook and Twitter to “break along national lines”? These companies spend fortunes on lobbying the US government, and they are US national champions. Why would the US support breaking them up along national lines? (As a matter of fact, the question of how is also a notable problem in Deibert’s “Reset” – his idea that hacktivism, civil disobedience and whistleblowers’ pressure can make private monopolists exercise restraint is very much wishful thinking.) There are historical precedents for the nationalization of companies, but they seem to have involved either a violent revolution or such massive indebtedness that the state had to step in and save the companies with public money. Are there any precedents for nationalizing a company and then revealing how it operates to other states, so that those states can create their respective national versions of it? Maybe. But it seems highly unlikely that anyone in the US would want to do this.
Which leaves us with the rather utopian option two: all big democratic states get together and develop interoperable social media. The project is such a success that people fed up with Facebook and Google decide to join and the undue influence of private monopolists finally comes to an end. But this utopian vision itself opens up a series of new questions.
Okay, assuming we can have state platforms operating along national lines, who owns the data?
One option would be for each individual citizen to own their own data, but this might be too risky and impractical. Another option would be to treat the data as public data – the same way we treat data from surveys and national statistics. The personal data from current social media platforms is used for online advertising and for training machine-learning models. If states owned their citizens’ data, we might go back to a stage in which the best research was done by state bodies and universities, rather than what we have now, where the most cutting-edge research is done in private companies, often in secret from the public. Mike Savage described this process of increasing privatization of research in his brilliant piece “The Coming Crisis of Empirical Sociology”. If anything, the recent case of Google firing the AI researcher Timnit Gebru reveals the need for independent public research that is neither in-house research by social media giants nor funded by them. It would be naive to think independent academics can do such research in the current situation, when the bulk of the interesting data to be analysed is privately owned.
How to prevent authoritarian censorship and surveillance?
Finally, if we assume that states will own their own online public networks – fulfilling the same functions as Facebook, but without the advertising – the million-dollar question is how to prevent censorship, overreach and surveillance. As Ron Deibert discusses in “Reset”, most states are currently involved in some sort of hacking and surveillance operations against foreign, but also domestic, citizens. What can be done about this? Here Haggart’s argument about the need for democratic accountability reveals its true importance and relevance. State-owned online public networks would have to abide by standards that have been democratically discussed, and to be accountable to the public.
But what Haggart means when discussing democratic accountability should be expanded. Democracy, and satisfaction with it, have been declining in many Western nations, with more and more decision-making power delegated to technocratic bodies. Yet what the protests of the 2010s in the US and the EU clearly showed is that people are dissatisfied with democracy not because they want authoritarianism but because they want more democracy – that is, democratic deepening. Or, in the words of the Spanish Indignados protesters:
“Real democracy, now”
Thus, to bring the utopia of state public networks to a conclusion: decisions about their governance should be made not by technocratic bodies, nor with “democratic accountability” used as a form of window-dressing, which sadly is often the case now. Instead, policy decisions should be discussed broadly, through a combination of public consultations, citizens’ assemblies and already existing national and regional assemblies, in order to ensure that people have ownership of the policies decided. State public networks should be not only democratically accountable but also democratically governed. Such a scenario would be one of what I call “democratic digital sovereignty”: one that goes beyond the arbitrariness of decisions by private CEOs but also escapes the pitfalls of state censorship and authoritarianism.
To sum up: we need state-owned, interoperable online public networks. Citizen data gathered from the use of these networks would be owned by the state and would be available for public academic research (which would be open access, in order to encourage both transparency and innovation). The moderation policies of these public platforms would be democratically discussed and decided. In short, these would be platforms of the people and for the people. Nothing more, nothing less.