Can we imagine a better Internet?

The convenience of thinking together

by Alina Utrata and Julia Rone

Reflecting on our recent tech and environment workshop, two of our workshop hosts, Alina Utrata and Julia Rone, explore the questions from the event that are still making them think.

On June 17, over 40 participants from all over the world joined our workshop exploring “the cost of convenience” and the opaque impact that digital technology has on the environment.

Instead of having academics presenting long papers in front of Zoom screens with switched-off cameras, we opted for a more dialogic, interactive and (conveniently) short format.

We invited each participant (or team of participants) to share a provocation on the environmental impact of technology or the political economy of the environment/technology nexus, which we then discussed in small groups. Then, in panel sessions, we discussed the provocations (what we know already), the known unknowns (what we don’t know yet), and ideas for an action plan (what could we be doing).

Below are our reflections on the workshop.

A visual representation of the workshop, produced by artist Tom Mclean.

There is no real technical or technological “fix” for the climate crisis

By Alina Utrata

I am currently working on the relationships between technology corporations and states.

For me, what stood out about the discussions was the sense among all participants that there was no real technical or technological “fix” for the climate crisis.

Instead, the conversations often revolved around globally embedded systems and structures of power—and asking why a certain technology is being deployed, by whom, for whom and how, rather than whether they could “fix” anything.

“I was inspired by how participants immediately recognised the importance of these systems, and instead focused our conversations on how to change them.”

Alina Utrata

In fact, it was pointed out that often the creators of these technological innovations deliberately promoted certain kinds of narratives about how they wanted the technology to be thought of—for example, the “cloud” as a kind of abstract, other place in the sky, rather than a real, tangible infrastructure with real costs.

The same could be said of the metaphors of “carbon footprint” or “carbon neutral”—the idea that as long as discrete, individual corporate entities were not personally responsible for a certain amount of emissions, then they could not be held culpable for a system that was failing the planet. 

Credit: Alex Machado for Unsplash

I was inspired by how participants immediately recognized the importance of these systems, and instead focused our conversations on how to change them.

Although many political concepts today are so commonplace that they seem timeless, we discussed how they are often really quite modern or Western in origin.

By contrast, the idea of the shared, communal commons is an ancient one, and can be used as a political framework to tackle some of the harmful systems humans have put in place on our earth.

Finally, we acknowledged that we all have a role to play in this fight for our future—but not all of us have or need to play the same role.

Some of us will be activists outside these systems of power, and some of us will be sympathetic voices from within.

The participants reaffirmed the need to both communicate and coordinate across disciplines within academia, and more broadly across sectors of the wider world.


Should we abolish the Internet?

By Julia Rone

Credit: Denny Müller for Unsplash

I am currently working on the democratic contestation of data centre construction.

John Naughton often says during our weekly meetings that the most interesting conversations are those that finish before you want them to end. That was definitely the case for me at the workshop, since each of the sessions I hosted ended with a question that could be discussed for hours and that still lingers in my mind.

Concepts and conceptual problems

If I have to identify the key common threads running through the three sessions I hosted, the first one has to do with concepts and conceptual problems.

Several participants posed the crucial question of how we think of “progress”.

Is progress necessarily synonymous with growth, increased efficiency, better performance?

What are we sacrificing in the name of “progress”?

One participant asked the painfully straight-to-the-point question: “Should we abolish the Internet?” (considering the massive environmental toll of tech companies, the rise of hate speech, cyber-bullying, polarization, etc.).

Do we feel loss at the thought? 

“Yes!” I immediately said to myself. “How could I talk to my family and my friends?”

This question really provoked me to think further.

If I can’t live in a world without the Internet, can we think of a different Internet?

How can we reinvent the Internet to become more caring, more accessible, more Earth-based and less extractive (as one of the provocations suggested)?

Credit: Ehimetalor Akhere Unuabona for Unsplash

What does it mean to be sustainable?

Another, similarly important conceptual question was posed at the very end of the second session by a colleague who asked: “What does it mean to be sustainable?” Why do we want to be sustainable? What and whom are we sustaining?

Should we not rather think of ways to radically change the system?

Our time ran out before we could discuss this in depth, so this question too has been bothering me ever since.

Ultimately, as another participant emphasised, research on the environmental impact of tech is most problematic and underdeveloped at two levels: the level of concepts (how do we think of abstraction and extraction, for example?) and the level of what individuals and communities actually do on the ground.

This latter question about on-the-ground labor, work and action is actually the second common thread between several of the contributions in the sessions I attended.

“It is difficult to disentangle the economic aspects of repair from the environmental ones.” 

A colleague studying workers who do repair for their livelihood (not as a hipster exercise) rightly pointed out that when discussing the environmental consequences of tech, and practices such as repair in particular, it is difficult to disentangle the economic aspects of repair from the environmental ones. 

Indeed, in a different context, scholars of the environmental impact of tech have clearly shown how tech companies’ extractive practices towards nature go hand-in-hand with dispossession, economic exploitation and extraction of value and profit from marginalised communities.

“In order to understand and better address the environmental consequences of digital tech, we need to be more open to the experiences of individuals and communities on the ground who often “know better” since they live (and occasionally also cause) the very consequences of tech we research.”

Julia Rone

Another colleague had studied the ways in which local leaders participate in decision-making about data centres in Thailand and controversies around water use – a topic very relevant to my own current project on data centres in the Netherlands.

Yet another participant had studied how participatory map-making not only consumes electricity but also changes the very way we see nature.

The reason I found all these contributions so fascinating is that they challenged simplistic narratives of Big Tech vs the Environment and showed how many players (with how many different intentions, principles and economic interests) are actually involved in the increasingly complex assemblage of humans, nature and tech.

So to sum up: in order to understand and better address the environmental consequences of digital tech, we need to be clearer about the concepts we use as researchers, but also more open to the experiences of individuals and communities on the ground who often “know better”, since they live (and occasionally also cause) the very consequences of tech we research.

To summarise…

Ultimately, each of us who attended (and hosted) the sessions has a rich but still incomplete overview of the workshop.

Because we each attended different sessions, there were provocations that individually we missed, as sessions intertwined and overlapped (a bit like tectonic plates, readjusting meaning, ideas and new perspectives for research).

We would love to hear from other workshop attendees about the ideas that struck them most during the sessions.

Luckily, some participants have submitted their provocation to our Zine, a unique document that we will share soon to help guide us forward in our thinking.

We can’t wait to share the Zine with you… stay tuned.

Apple clearly has power, but it isn’t accountable

By John Naughton

The only body that has, to date, been able to exert real control over the data-tracking industry is a giant private company which itself is subject to serious concerns about its monopolistic behaviour. Where is democracy in all this?

A few weeks ago, Apple dropped its long-promised bombshell on the data-tracking industry.

The latest version (14.5) of iOS — the operating system of the iPhone — included a provision that required app users explicitly to confirm that they wished to be tracked across the Internet in their online activities.

At the heart of the switch is a code known as “the identifier for advertisers”, or IDFA. It turns out that every iPhone comes with one of these identifiers, the object of which is to provide participants in the hidden real-time bidding system with aggregate data about the user’s interests.

For years, iPhone users had had the option to switch it off by digging into the privacy settings of their devices; but, because they’re human, very few had bothered to do that.

From 14.5 onwards, however, they couldn’t avoid making a decision, and you didn’t have to be a Nobel laureate to guess that most iPhone users would opt out.

Which explains why those who profit from the data-tracking racket had for months been terminally anxious about Apple’s perfidy.

Some of the defensive PR mounted on their behalf — for example Facebook’s weeping about the impact on small, defenceless businesses — defied parody.

“We have evidence of its [real-time bidding] illegitimacy, and a powerful law on the statute book which in principle could bring it under control — but which we appear unable to enforce.”

Other counter-offensives included attacks on Apple’s monopolistic control over its App Store, plus charges of rank hypocrisy – that the changes in version 14.5 were motivated not by Apple’s concern for users’ privacy but by its own plans to enter the advertising business. And so on.

It’ll be a while until we know for sure whether the apocalyptic fears of the data-trackers were accurate.

It takes time for most iPhone users to install operating system updates, and so these are still relatively early days. But the first figures are promising. One data-analytics company, for example, has found that in the early weeks the daily opt-out rate for American users has been around 94 per cent.

This is much higher than surveys conducted in the run-up to the change had suggested — one had estimated an opt-out rate closer to 60 per cent.

If the opt-out rate is as high as we’ve seen so far, then it’s bad news for the data-tracking racket and good news for humanity. And if you think that description of what the Financial Times estimates to be a $350B industry is unduly harsh, then a glance at a dictionary may be helpful.

Merriam-Webster, for example, defines ‘racket’ as “a fraudulent scheme, enterprise, or activity” or “a usually illegitimate enterprise made workable by bribery or intimidation”.

It’s not clear whether the computerised, high-speed auction system in which online ads are traded benefits from ‘bribery or intimidation’, but it is certainly illegal — and currently unregulated.

That is the conclusion of a remarkable recent investigation by two legal scholars, Michael Veale and Frederik Zuiderveen Borgesius, who set out to examine whether this ‘real-time bidding’ (RTB) system conforms to European data-protection law.

“The irony in this particular case is that there’s no need for such an overhaul: Europe already has the law in place.”

They asked whether RTB complies with three rules of the GDPR (General Data Protection Regulation) — the requirement for a legal basis, transparency, and security. They showed that for each of the requirements, most RTB practices do not comply. “Indeed”, they wrote, “it seems close to impossible to make RTB comply”. So, they concluded, it needs to be regulated.

It does.

Often the problem with tech regulation is that our legal systems need to be overhauled to deal with digital technology. But the irony in this particular case is that there’s no need for such an overhaul: Europe already has the law in place.

It’s the GDPR, which is part of the legal code of every EU country and has provision for swingeing punishments of infringers. The problem is it’s not being effectively enforced.

Why not? The answer is that the EU delegates regulatory power to the relevant institutions — in this case Data Protection Authorities — of its member states. And these local outfits are overwhelmed by the scale of the task – and are lamentably under-resourced for it.

Half of Europe’s DPAs have only five technical experts or fewer. And the Irish Data Protection Authority, on whose patch most of the tech giants have their European HQs, has the heaviest enforcement workload in Europe and is clearly swamped.

So here’s where we are: an illegal online system has been running wild for years, generating billions of profits for its participants.

We have evidence of its illegitimacy, and a powerful law on the statute book which in principle could bring it under control — but which we appear unable to enforce.

And the only body that has, to date, been able to exert real control over the aforementioned racket is… a giant private company which itself is subject to serious concerns about its monopolistic behaviour. And the question for today: where is democracy in all this? You only have to ask to know the answer.


A version of this post appeared in The Observer on 23 May, 2021.

In Review: How do we avoid the ‘racket’ of sustainable development and green tech?

By Mallika Balakrishnan

Ahead of COP26, can the narrative be shifted away from what Camila Nobrega and Joana Varon describe as a “dangerous mix of ‘green economy’ and techno-solutionism”? Mallika Balakrishnan explores the Minderoo Centre for Technology and Democracy’s reading and discussion of Camila Nobrega and Joana Varon, “Big tech goes green(washing): feminist lenses to unveil new tools in the master’s houses”, GISWatch: Technology, the Environment, and a Sustainable World, 2021.

On June 17, the Minderoo Centre will be hosting thinkers from academia, civil society, and industry for our workshop on Technology & the Environment.

In the lead up to COP26, we’re keen to spark discussion and amplify action at the nexus of technology and its impact on the environment.

One of the themes we’re hoping to explore more is the environmental cost of technological convenience. 

Frankly, critiques of convenience are often the place my brain starts to tune out: “convenience” frequently serves as shorthand for a framework of climate destruction via individual consumption choices.

Several, though not all, of these analyses are ableist and anti-poor, and they refuse to start from a commitment to decoloniality. 

Nevertheless, the environmental and social costs of convenience are staggering, and will be crucial to understand on the road to environmental justice.

I proposed reading Joana Varon and Camila Nobrega’s recently published article because I resonated strongly with their feminist, power-based analysis of technology and the environment, specifically around the role of big tech companies and intergovernmental meetings such as COP.

Their work articulates the dissonance between big tech’s stated commitments to climate justice and actual consolidation of power, in a way that helped me start to think about convenience at a level of analysis that doesn’t feel disingenuous. 

“Especially in high-level fora such as COP26, it might be difficult to shift the narrative away from what the authors call a ‘dangerous mix of “green economy” and techno-solutionism’.”

Some themes and remarks that surfaced in our discussion: 

When it comes to the environment, Big Tech companies are eager to centre themselves in policy-setting debates.

This article highlighted how tech companies have already positioned themselves as having useful tools to help solve the climate crisis, sweeping under the rug the ways they are exacerbating environmental destruction. As brought up in our discussion, this feels reminiscent of tobacco companies’ roles in shaping narratives around the risk of lung cancer. Especially in high-level fora such as COP26, it might be difficult to shift the narrative away from what the authors call a “dangerous mix of ‘green economy’ and techno-solutionism.” 

Solidarity with local resistance reminds us to avoid consumer/market-centric framing.

So how might MCTD work to address the gap between policy discussions and tangible justice for impacted communities? We discussed the importance of amplifying—and not tokenizing—voices in movement, recognizing many who have been doing this work for years.

There’s a connection to be made to the twin logics of extraction and abstraction (as highlighted in Kate Crawford’s Atlas of AI). The relationship between technology and the environment is easily abstracted to technocratic language or boiled down to carbon footprint. This abstraction eschews an explicitly anti-accumulation, structural analysis, and in turn makes it easier for tech companies to position themselves as “green” solutioneers.

We should be in solidarity with real-time resistance and reject framing issues in ways that suggest:

1) the only relevant harms are consumer harms

2) the only relevant solutions are market solutions

3) everything is consumable and replaceable.

As far as tactics for socio-environmental justice go, planting a tree for every square mile of land destroyed leaves a lot to be desired. And as Varon and Nobrega remind us in this article, we should be thinking about the human, social, and environmental costs of environmental destruction as linked.

We also talked about the relationship between environmental destruction and the destruction of the commons: while there were some reservations around the concept of the commons, folks discussed the emancipatory potential of bienes comunes in challenging companies’ privatization and ownership of (often unceded) land. 

We need to look beyond “effectiveness” and remember structures of power.

How do we avoid the “racket” of sustainable development and green tech?

At one level, we need to push back on the claim that Big Tech can effectively parachute in and solve problems of environmental injustice. But whether or not a tech company’s proposed solutions do what they promise, we should remember that the consolidation of power to these companies is the broader context in which this is taking place. 

Drawing on insights from online advertising ecosystems, we discussed how a lack of transparency can make it difficult to hold power to account, especially in terms of regulation. Nevertheless, we emphasized that whether or not a company’s tech solution works is incidental to the power the company has: rather, it’s about how Big Tech companies have consolidated and restructured capacity and centered themselves infrastructurally.

Convenience is costly. We need to be asking why, and for whom.

When we think about convenience, it’s worth remembering to question what is convenient for companies, for workers, and for frontline communities—we should think beyond convenience as ascribed only to the individual consumer. Analyses that treat people as totally separate individuals forego possibilities for power through collective action. 

Have a different perspective to add? There’s still time to submit your provocation to our Technology & the Environment Workshop before the May 15 deadline!

Read our call for provocations (no set format; we just want bold questions) here.

Call for provocations – Technology and The Environment Workshop

Every day, consumers around the world utilise digital technology with unprecedented convenience, but at what environmental cost?

The Minderoo Centre for Technology and Democracy is examining the environmental impact of digital technology to acquire and disseminate an informed, independent assessment of the planetary consequences of the industry’s continued rate of expansion.

Using the resources of leading academic research, we want to expose the tremendous environmental impact of our relationship with digital technology. For example, what is the carbon footprint of a Google search? What are the real-world ramifications for our communities and our planet of the click-to-delivery process of an Amazon order? How does tech ‘progress’ drive planned obsolescence in the smartphone market?

Call for provocations – Technology and the Environment workshop – 17th June – 12pm BST (7pm AWST/7am EDT)

The Minderoo Centre for Technology and Democracy is calling for participants to provide provocations for a workshop to further explore the ‘cost of convenience’ and the opaque impact that digital technology has on the environment.

The workshop aims to provide a forum for emerging researchers to enter into speculation, critique, exchange, and dialogue on the topic. Although it is primarily aimed at international academic researchers and PhD students, the workshop is also open to journalists, tech workers and those pursuing research outside an academic context. 

Apply now by email – Technology and the Environment workshop

Applicants are asked to produce a 150-word provocation, on a topic relating to the environmental impact of technology or the political economy of the environment/technology nexus, that they would like to discuss at the workshop.

To submit a 150-word provocation or to ask any questions ahead of application, please email: minderoo@crassh.cam.ac.uk

Applications are accepted until May 15.

Silencing Trump and authoritarian tech power

John Naughton:

It was eerily quiet on social media last week. That’s because Trump and his cultists had been “deplatformed”. By banning him, Twitter effectively took away the megaphone he’d been masterfully deploying since he ran for president. The shock of the 6 January assault on the Capitol was seismic enough to convince even Mark Zuckerberg that the plug finally had to be pulled. And so it was, even to the point of Amazon Web Services terminating the hosting of Parler, a Twitter alternative for alt-right extremists.

The deafening silence that followed these measures was, however, offset by an explosion of commentary about their implications for freedom, democracy and the future of civilisation as we know it. Wading knee-deep through such a torrent of opinion about the first amendment, free speech, censorship, tech power and “accountability” (whatever that might mean), it was sometimes hard to keep one’s bearings. But what came to mind continually was H L Mencken’s astute insight that “for every complex problem there is an answer that is clear, simple and wrong”. The air was filled with people touting such answers.

In the midst of the discursive chaos, though, some general themes could be discerned. The first highlighted cultural differences, especially between the US, with its sacred first amendment, on the one hand, and European and other societies, which have more ambivalent histories of moderating speech, on the other. The obvious problem with this line of discussion is that the first amendment is about government regulation of speech and has nothing whatsoever to do with tech companies, which are free to do as they like on their platforms.

A second theme viewed the root cause of the problem as the lax regulatory climate in the US over the last three decades, which led to the emergence of a few giant tech companies that effectively became the hosts for much of the public sphere. If there were many Facebooks, YouTubes and Twitters, so the counter-argument runs, then censorship would be less effective and problematic because anyone denied a platform could always go elsewhere.

Then there were arguments about power and accountability. In a democracy, those who make decisions about which speech is acceptable and which isn’t ought to be democratically accountable. “The fact that a CEO can pull the plug on Potus’s loudspeaker without any checks and balances,” fumed EU commissioner Thierry Breton, “is not only confirmation of the power of these platforms, but it also displays deep weaknesses in the way our society is organised in the digital space.” Or, to put it another way, who elected the bosses of Facebook, Google, YouTube and Twitter?

What was missing from the discourse was any consideration of whether the problem exposed by the sudden deplatforming of Trump and his associates and camp followers is actually soluble – at least in the way it has been framed until now. The paradox that the internet is a global system but law is territorial (and culture-specific) has traditionally been a way of stopping conversations about how to get the technology under democratic control. And it was running through the discussion all week like a length of barbed wire that snagged anyone trying to make progress through the morass.

All of which suggests that it’d be worth trying to reframe the problem in more productive ways. One interesting suggestion for how to do that came last week in a thoughtful Twitter thread by Blayne Haggart, a Canadian political scientist. Forget about speech for a moment, he suggests, and think about an analogous problem in another sphere – banking. “Different societies have different tolerances for financial risk,” he writes, “with different regulatory regimes to match. Just like countries are free to set their own banking rules, they should be free to set strong conditions, including ownership rules, on how platforms operate in their territory. Decisions by a company in one country should not be binding on citizens in another country.”

In those terms, HSBC may be a “global” bank, but when it’s operating in the UK it has to obey British regulations. Similarly, when operating in the US, it follows that jurisdiction’s rules. Translating that to the tech sphere suggests that the time has come to stop accepting the tech giants’ claims to be hyper-global corporations, when in fact they are US companies operating in many jurisdictions across the globe, paying as little local tax as possible and resisting local regulation with all the lobbying resources they can muster. Facebook, YouTube, Google and Twitter can bleat as sanctimoniously as they like about freedom of speech and the first amendment in the US, but when they operate here, as Facebook UK, say, then they’re merely British subsidiaries of an American corporation incorporated in California. And these subsidiaries obey British laws on defamation, hate speech and other statutes that have nothing to do with the first amendment. Oh, and they should also pay taxes on their local revenues.

Review: What Tech Calls Reading

A Review of FSG x Logic Series

by Alina Utrata


Publisher Farrar, Straus and Giroux (FSG) and the tech magazine Logic teamed up to produce four books that capture “technology in all its contradictions and innovation, across borders and socioeconomic divisions, from history through the future, beyond platitudes and PR hype, and past doom and gloom.” In that, the FSG x Logic series succeeded beyond its wildest imagination. These books are some of the most well-researched, thought-provoking and—dare I say it—innovative takes on how technology is shaping our world. 

Here’s my review of three of the four—Blockchain Chicken Farm, Subprime Attention Crisis and What Tech Calls Thinking—but I highly recommend you read them all. (They average 200 pages each, so you could probably get through the whole series in the time it takes to finish Shoshana Zuboff’s Surveillance Capitalism.)


Blockchain Chicken Farm: And Other Stories of Tech in China’s Countryside

Xiaowei Wang

“Famine has its own vocabulary,” Xiaowei Wang writes, “a hungry language that haunts and lingers. My ninety-year-old great-uncle understands famine’s words well.” Wang writes as beautifully as they think, effortlessly weaving between ruminations on Chinese history, personal and family anecdotes, modern political and economic theory and first-hand research into the technological revolution sweeping rural China. Contradiction is a watchword in this book, as is contrast—they describe the difference between rural and urban life, of the East and the West, of family and the globe, of history and the present and the potential future. And yet, it all seems familiar. Wang invites us to think slowly about an industry that wants us to think fast—about whether any of this is actually about technology, or whether it is about capitalism, about globalization, about our politics and our communities—or, perhaps, about what it means to live a good life.

On blockchain chicken farms:

“The GoGoChicken project is a partnership between the village government and Lianmo Technology, a company that applies blockchain to physical objects, with a focus on provenance use cases—that is, tracking where something originates from. When falsified records and sprawling supply chains lead to issues of contamination and food safety, blockchain seems like a clear, logical solution. . . These chickens are delivered to consumers’ doors, butchered and vacuum sealed, with the ankle bracelet still attached, so customers can scan the QR code before preparing the chicken . . .”

On a Blockchain Chicken Farm in the Middle of Nowhere, pg 40

“A system of record keeping used to be textual, readable, and understandable to everyone. The technical component behind it was as simple as paper and pencil. That system was prone to falsification, but it was widely legible. Under governance by blockchain, records are tamperproof, but the technical systems are legible only to a select few. . . blockchain has yet to answer the question: If it takes power away from a central authority, can it truly put power back in the hands of the people, and not just a select group of people? Will it serve as an infrastructure that amplifies trust, rather than increasing both mistrust and a singular reliance on technical infrastructure? Will it provide ways to materially organize and enrich a community, rather than further accelerating financial systems that serve a select few?”

On a Blockchain Chicken Farm in the Middle of Nowhere, pg 48

On AI pig farming:

“In these large-scale farms, pigs are stamped with a unique identity mark on their bodies, similar to a QR code. That data is fed into a model made by Alibaba, and the model has the information it needs to monitor the pigs in real time, using video, temperature, and sound sensors. It’s through these channels that the model detects any sudden signs of fever or disease, or if pigs are crushing one another in their pens. If something does happen, the system recognizes the unique identifier on the pig’s body and gives an alert.”

When AI Farms Pigs, pg 63

“Like so many AI projects, ET Agricultural Brain naively assumes that the work of a farmer is to simply produce food for people in cities, and to make the food cheap and available. In this closed system, feeding humans is no different from feeding swaths of pigs on large farms. The project neglects the real work of smallholder farmers throughout the world. For thousands of years, the work of these farmers has been stewarding and maintaining the earth, rather than optimizing agricultural production. They use practices that yield nutrient-dense food, laying a foundation for healthy soils and rich ecology in an uncertain future. Their work is born out of commitment and responsibility: to their communities, to local ecology, to the land. Unlike machines, these farmers accept the responsibility of their actions with the land. They commit to the path of uncertainty.”

When AI Farms Pigs, pg 72

“After all, life is defined not by uncertainty itself but by a commitment to living despite it. In a time of economic and technological anxiety, the questions we ask cannot center on the inevitability of a closed system built by AI, and how to simply make those closed systems more rational or “fair.” What we face are the more difficult questions about the meaning of work, and the ways we commit, communicate, and exist in relation to each other. Answering these questions means looking beyond the rhetoric sold to us by tech companies. What we stand to gain is nothing short of true pleasure, a recognition that we are not isolated individuals, floating in a closed world.”

When AI Farms Pigs, pg 72

Subprime Attention Crisis: Advertising and the Time Bomb at the Heart of the Internet

Tim Hwang


In Subprime Attention Crisis, Tim Hwang argues that the terrifying thing about digital platforms is not how effective they are at manipulating behavior—it’s that they might not be very effective at all. Hwang documents, in precise technical detail, how digital advertising markets work and how tech giants may be deliberately inflating their value, even as the actual effectiveness of online ads declines. If you think you’ve seen this film before, Hwang draws parallels to the subprime mortgages and financial systems that triggered the 2008 financial crash. He makes a compelling case that, sooner or later, the digital advertising bubble may burst—and the business model of the internet will implode overnight (not to mention all the things tech money subsidizes, from philanthropy to navigation maps to test and trace). Are Google and Facebook too big to fail?

On potential systems breakdown:

“Whether underwriting a massive effort to scan the world’s books or enabling the purchase of leading robotics companies, Google’s revenue from programmatic advertising has, in effect, reshaped other industries. Major scientific breakthroughs, like recent advances in artificial intelligence and machine learning, have largely been made possible by a handful of corporations, many of which derive the vast majority of their wealth from online programmatic advertising. The fact that these invisible, silent programmatic marketplaces are critical to the continued functioning of the internet—and the solvency of so much more—begs a somewhat morbid thought experiment: What would a crisis in this elaborately designed system look like?”

The Plumbing, pg 25

“Intense dysfunction in the online advertising markets would threaten to create a structural breakdown of the classic bargain at the core of the information economy: services can be provided for free online to consumers, insofar as they are subsidized by the revenue generated from advertising. Companies would be forced to shift their business models in the face of a large and growing revenue gap, necessitating the rollout of models that require the consumer to pay directly for services. Paywalls, paid tiers of content, and subscription models would become more commonplace. Within the various properties owned by the dominant online platforms, services subsidized by advertising that are otherwise unprofitable might be shut down. How much would you be willing to pay for these services? What would you shell out for, and what would you leave behind? The ripple effects of a crisis in online advertising would fundamentally change how we consume and navigate the web.”

The Plumbing, pg 27

On fraud in digital advertising:

“One striking illustration is the subject of an ongoing lawsuit around claims that Facebook made in 2015 promoting the attractiveness of video advertising on its platform. At the time, the company was touting online video—and the advertising that could be sold alongside it—as the future of the platform, noting that it was “increasingly seeing a shift towards visual content on Facebook.” . . . But it turned out that Facebook overstated the level of attention being directed to its platform on the order of 60 to 80 percent. By undercounting the viewers of videos on Facebook, the platform overstated the average time users spent watching videos. . . . These inconsistencies have led some to claim that Facebook deliberately misled the advertising industry, a claim that Facebook has denied. Plaintiffs in a lawsuit against Facebook say that, in some cases, the company inflated its numbers by as much as 900 percent. Whatever the reasons for these errors in measurement, the “pivot to video” is a sharp illustration of how the modern advertising marketplace can leave buyers and sellers beholden to dominant platform decisions about what data to make available.”

Opacity, pg 70

On specific types of ad fraud:

“Click fraud is a widespread practice that uses automated scripts or armies of paid humans in “click farms” to deliver click-throughs on an ad. The result is that the advertising captures no real attention for the marketer. It is shown either to a human who was hired to click on the ad or to no one at all. The scale of this problem is enormous. A study conducted by Adobe in 2018 concluded that about 28 percent of website traffic showed “non-human signals,” indicating that it originated in automated scripts or in click farms. One study predicted that the advertising industry would lose $19 billion to click fraud in 2018—a loss of about $51 million per day. Some place this loss even higher. One estimate claims that $1 of every $3 spent on digital advertising is lost to click fraud.”

Subprime Attention, pg 85

What Tech Calls Thinking: An Inquiry into the Intellectual Bedrock of Silicon Valley

Adrian Daub


What Tech Calls Thinking is “about the history of ideas in a place that likes to pretend its ideas don’t have any history.” Daub has good reason to know this, as a professor of comparative literature at Stanford University (I never took a class with him, a fact I regretted more and more as the book went on). His turns of phrase have the lyricism one associates with a literature seminar—e.g. “old motifs playing dress-up in a hoodie”—as he explores the ideas that run amok in Silicon Valley. He exposes delightful contradictions: thought leaders who engage only superficially with thoughts; CEOs who reject the university (drop out!), then build corporate campuses that look just like the university. As Daub explains the ideas of thinkers such as Abraham Maslow, René Girard, Ayn Rand, Jürgen Habermas, Karl Marx, Marshall McLuhan and Samuel Beckett, you get the sense, as Daub says, that these ideas “aren’t dangerous ideas in themselves. Their danger lies in the fact that they will probably lead to bad thinking.” The book is a compelling rejection of the pseudo-philosophy that has underpinned much of the Valley’s techno-determinism. “Quite frequently,” Daub explains, “these technologies are truly novel—but the companies that pioneer them use that novelty to suggest that traditional categories of understanding don’t do them justice, when in fact standard analytic tools largely apply just fine.” Daub’s analysis demonstrates the point well.

On tech dropouts:

“You draw a regular salary and know what you’re doing with your life earlier than your peers, but you subsist on Snickers and Soylent far longer. You are prematurely self-directed and at the same time infantilized in ways that resemble college life for much longer than almost anyone in your age cohort. . . .  Dropping out is still understood as a rejection of a certain elite. But it is an anti-elitism whose very point is to usher you as quickly as possible into another elite—the elite of those who are sufficiently tuned in, the elite of those who get it, the ones who see through the world that the squares are happy to inhabit . . .  All of this seems to define the way tech practices dropping out of college: It’s a gesture of risk-taking that’s actually largely drained of risk. It’s a gesture of rejection that seems stuck on the very thing it’s supposedly rejecting.”

Dropping Out, pg 37

On platforms versus content creation:

“The idea that content is in a strange way secondary, even though the platforms Silicon Valley keeps inventing depend on it, is deeply ingrained. . . . To create content is to be distracted. To create the “platform” is to focus on the true structure of reality. Shaping media is better than shaping the content of such media. It is the person who makes the “platform” who becomes a billionaire. The person who provides the content—be it reviews on Yelp, self-published books on Amazon, your own car and waking hours through Uber—is a rube distracted by a glittering but pointless object.”

Content, pg 47

On gendered labor:

“Cartoonists, sex workers, mommy bloggers, book reviewers: there’s a pretty clear gender dimension to this division of labor. The programmers at Yelp are predominantly men. Its reviewers are mostly female . . . The problem isn’t that the act of providing content is ignored or uncompensated but rather that it isn’t recognized as labor. It is praised as essential, applauded as a form of civic engagement. Remunerated it is not. . . . And deciding what is and isn’t work has a long and ignominious history in the United States. They are “passionate,” “supportive” volunteers who want to help other people. These excuses are scripts, in other words, developed around domestic, especially female, labor. To explain why being a mom isn’t “real” work. To explain why women aren’t worth hiring, or promoting, or paying, or paying as much.”

Content, pg 51

On gendered data:

“There is the idea that running a company resembles being a sexual predator. But there is also the idea that data—resistant, squirrelly, but ultimately compliant—is a feminine resource to be seized, to be made to yield by a masculine force. . . .To grab data, to dispose of it, to make oneself its “boss”—the constant onslaught of highly publicized data breaches may well be a downstream effect of this kind of thinking. There isn’t very much of a care ethic when it comes to our data on the internet or in the cloud. Companies accumulate data and then withdraw from it, acting as though they have no responsibility for it—until the moment an evil hacker threatens said data. Which sounds, in other words, not too different from the heavily gendered imagery relied on by Snowflake. There is no sense of stewardship or responsibility for the data that you have “grabbed,” and the platform stays at a cool remove from the creaturely things that folks get up to when they go online and, wittingly or unwittingly, generate data.”

Content, pg 55

On disruption:

“There is an odd tension in the concept of “disruption,” and you can sense it here: disruption acts as though it thoroughly disrespects whatever existed previously, but in truth it often seeks to simply rearrange whatever exists. It is possessed of a deep fealty to whatever is already given. It seeks to make it more efficient, more exciting, more something, but it never wants to dispense altogether with what’s out there. This is why its gestures are always radical but its effects never really upset the apple cart: Uber claims to have “revolutionized” the experience of hailing a cab, but really that experience has stayed largely the same. What it managed to get rid of were steady jobs, unions, and anyone other than Uber’s making money on the whole enterprise.”

Desire, pg 104