The A.I. Backlash Backlash
Greetings from Read Max HQ! This week’s newsletter delves into the current state of A.I. discourse.
This tweet from Times reporter Kevin Roose (quote-tweeting Matt Yglesias, who screen-shot an Ezra Klein column) hit my desk on Tuesday:
I don’t mean to single out Roose, but the sentiments expressed here—both in the tweet and the quoted paragraphs—exemplify a new development in the never-ending online A.I. discourse: the backlash to the A.I. backlash. Since the release of ChatGPT in 2022, A.I. discourse has gone through at least two distinct cycles, especially on social media and, to a lesser extent, in the popular press.
First came the hype cycle, which persisted through much of 2023. During this phase, the loudest voices predicted near-term chaos and global societal transformation due to unstoppable artificial intelligence. Twitter was dominated by LinkedIn-style A.I. hustlers claiming that “AI is going to nuke the bottom third of performers in jobs done on computers — even creative ones — in the next 24 months.”
When the anticipated economic transformation failed to materialize within the promised timeframe—and when many of the highly visible A.I. implementations proved to be less than useful—a backlash cycle emerged, marked by strong anti-A.I. sentiment. For many, A.I. became symbolic of a wayward and powerful tech industry, and those who admitted to or encouraged A.I. usage, particularly in creative fields, faced intense criticism.
But now, this backlash cycle is encountering its own backlash. Last December, Casey Newton, the prominent tech columnist (and co-host of the Hard Fork podcast with Roose), wrote a piece titled “The phony comforts of AI skepticism,” suggesting that critics were willfully ignoring the advancing power and importance of A.I. systems.
there is an enormous disconnect between external critics of AI, who post about it on social networks and in their newsletters, and internal critics of AI — people who work on it directly, either for companies like OpenAI or Anthropic or researchers who study it. […]

There is a… rarely stated conclusion… which goes something like: Therefore, superintelligence is unlikely to arrive any time soon, if ever. LLMs are a Silicon Valley folly like so many others, and will soon go the way of NFTs and DAOs. […]

This is the ongoing blind spot of the “AI is fake and sucks” crowd. This is the problem with telling people over and over again that it’s all a big bubble about to pop. They’re staring at the floor of AI’s current abilities, while each day the actual practitioners are successfully raising the ceiling.
In January, Nate Silver wrote a similar post, “It’s time to come to grips with AI,” which specifically criticized “the left” for its A.I. skepticism.
The problem is that the left (as opposed to the technocratic center) isn’t holding up its end of the bargain when it comes to AI. It is totally out to lunch on the issue.

For the real leaders of the left, the issue simply isn’t on the radar. Bernie Sanders has only tweeted about “AI” once in passing, and AOC’s concerns have been limited to one tweet about “deepfakes.”

Meanwhile, the vibe from lefty public intellectuals has been smug dismissiveness.
And this week, Klein’s interview with Ben Buchanan, Biden’s special advisor for artificial intelligence, arrived with the headline “The Government Knows A.G.I. Is Coming.” Klein’s not as direct as Newton or Silver, but he’s obviously aiming his introduction to the interview at what Newton calls “the ‘A.I. is fake and sucks’ crowd”:
If you’ve been telling yourself this isn’t coming, I really think you need to question that. It’s not web3. It’s not vaporware. A lot of what we’re talking about is already here, right now.

I think we are on the cusp of an era in human history that is unlike any of the eras we have experienced before. And we’re not prepared in part because it’s not clear what it would mean to prepare. We don’t know what this will look like, what it will feel like. We don’t know how labor markets will respond. We don’t know which country is going to get there first. We don’t know what it will mean for war. We don’t know what it will mean for peace.

And while there is so much else going on in the world to cover, I do think there’s a good chance that, when we look back on this era in human history, A.I. will have been the thing that matters.
The core of the anti-backlash position is something like this: A.I. is actually quite powerful and useful, and even if you don’t like that, significant resources are being invested in it, so it’s crucial to take it seriously instead of dismissing it.
It’s not entirely clear who these columns are responding to; the targets of criticism are somewhat vague. Newton mentions Gary Marcus, the cognitive scientist and prolific blogger, while acknowledging that Marcus “doesn’t say that AI is fake and sucks, exactly.” Silver seems to be responding to a few tweets. Klein doesn’t specify anyone at all. The ripostes are aimed less at rigorous A.I.-critical voices than at a general, dismissive anti-A.I. sentiment on social media, the kind that leads Roose to say he suffers a “social penalty for taking AGI progress seriously.”
However, I also don’t think this backlash-to-the-backlash is limited to Big Accounts complaining about their Twitter mentions. Anecdotally, I see more pushback than I used to against some of the more vocal A.I. critics, and more defenses of A.I. usage from people who are otherwise quite critical of the tech industry. It’s not a new hype cycle yet, but it’s clear the discursive ground has shifted slightly in favor of A.I.
Why the Shift?
Some proponents of a new hype cycle invoke little more than rumors and whispers from insiders. (Klein, in his column this week: “Person after person… has been coming to me saying… We’re about to get to artificial general intelligence”; Roose, a few months ago: “it is hard to impress… how much the vibe has shifted here… twice this month I’ve been asked at a party, ‘are you feeling the AGI?’”). The problem with relying on “A.I. insiders” to gauge A.I. progress is that these insiders have been whispering similar things for years, and we need to admit that even the smartest people in the industry don’t have great credibility when it comes to timelines.
However, it’s not all whispers at parties. There are public developments that help explain some of the renewed enthusiasm and the pushback against aggressive skepticism. Chief among them is the new popularity of “Deep Research” and visible “chain-of-thought” models: OpenAI, Google, and xAI now have products that generate reports based on internet searches, with a step-by-step “chain of thought” visible to the user. As Arvind Narayanan puts it:
We’re seeing the same form-versus-function confusion with Deep Research now that we saw in the early days of chatbots. Back then people were wowed by chatbots’ conversational abilities and mimicry of linguistic fluency, and hence underappreciated the limitations. Now people are wowed by the ‘report’ format of Deep Research outputs and its mimicry of authoritativeness, and underappreciate the difficulty of fact-checking 10-page papers.
This format makes it easy to believe that LLMs are improving quickly, even as evidence suggests progress may be slowing. Klein even mentions Deep Research in his interview:
“I asked Deep Research to do this report on the tensions between the Madisonian Constitutional system and the highly polarized nationalized parties we now have. And what it produced in a matter of minutes was at least the median of what any of the teams I’ve worked with on this could produce within days.”
Since ChatGPT’s debut, the format and “character” of an LLM’s text generation have heavily influenced how much users understand and trust the output. As Simon Willison and Benedict Evans—both bloggers who are not reflexively A.I.-critical—have pointed out, Deep Research has the same flaws as previous LLM apps.
“It’s absolutely worth spending time exploring,” Willison writes, “but be careful not to fall for its surface-level charm.”
Beyond Deep Research, a broader development has softened the ground for renewed A.I. hype: generative A.I. output has become more reliable since 2022, and more people have incorporated A.I. into their work in seemingly useful ways. The latest models are more dependable than the GPT-3-era models, and many provide citations that let you double-check the work.
It’s no longer necessarily the case that asking an LLM a factual question is strictly worse than Googling it, and it’s much easier to double-check the answers and understand their sourcing than it was just a year ago. These improvements coincide with more people integrating A.I. into their work and finding more productive applications for the models than the ones suggested by A.I. influencers.
Consumer Adoption & A.I. Skepticism
I don’t use A.I. much for writing or research (old habits die hard), but I’ve found it extremely useful for creating and cleaning up audio transcriptions, or for finding tip-of-my-tongue words and phrases. (It’s possible that all these people, myself included, are fooling themselves about the amount of time they’re saving, or about the actual quality of the work being produced—but what matters in the question of hype and backlash is whether people feel as though the A.I. is useful.)
Aggressive A.I. integration into products like Google Search and Apple notifications over the last couple years has mostly been a flop, and cheating on homework appears to be the most widespread use for ChatGPT. But it’s harder to argue that A.I. products are categorically useless and damaging when so many people seem to use them to supplement tasks like writing code, doing research, or translating with no apparent harm.
I tend to think that more widespread consumer adoption is, on balance, a good development for A.I. skepticism. I’m desperate for A.I. to be demystified and to shed its worrying reputation as a universal solution, and I hope that widespread use may help accomplish that demystification:
The more people use A.I. with some regularity, the more broad familiarity they’ll develop with its specific and consistent shortcomings; the more people understand how LLMs work from practical experience, the more they can recognize A.I. as an impressive but flawed technology, rather than as some inevitable and unchallengeable godhead.
The Problem with A.G.I.
In many ways I’m sympathetic to the backlash-to-the-backlash. But where I draw the line on A.I. openness, personally, is “artificial general intelligence.” I don’t like this phrase, and I wish journalists would stop using it. It tends to be casually used as though it refers to a widely understood technical benchmark, but there’s no universal test or widely accepted, non-tautological definition.
“A canonical definition of A.G.I.,” Klein’s interview subject Buchanan says, “is a system capable of doing almost any cognitive task a human can do.” 🆗.
Nor, I think we should be clear, could there be: “Intelligence” is not an empirical quality or a threshold that can be achieved but a socio-cultural concept whose definition emerges at any given point in time from complex and overlapping scientific, cultural, and political processes. In practice, “A.G.I.” is used to refer to dozens of distinct scenarios, from “apocalyptic science-fiction singularity” to “particularly powerful new LLM.” This collapse tends to obfuscate more than it clarifies:
When you say “A.G.I. is coming soon,” do you mean we’re about to flip the switch and birth a super-intelligence? Or do you mean that computers are going to do email jobs? Or do you just mean that pretty soon A.I. companies will stop losing money?
One of the only verifiable definitions of “A.G.I.” is the contractual one between Microsoft and OpenAI: OpenAI will have achieved A.G.I. only when OpenAI has developed systems that have the “capability” to generate the maximum total profits to which its earliest investors, including Microsoft, are entitled, according to documents OpenAI distributed to investors. Those profits total about $100 billion, the documents showed.
There is something very funny about A.G.I.’s meaning shifting from a groovy “technosingularity” to a lawyerly “$100b in profits,” but it’s worth noting that at least one person, Sam Altman, stands to benefit directly from this particular use of “A.G.I.”
But I think what really gets to me about the overuse of “A.G.I.” is not the vagueness or the fact that Sam Altman might profit from it, but the teleology it imposes—the way its use compresses the already and always ongoing process of A.I. progress, development, and disruption into a single point of inflection. Instead of treating A.I. like a normal technological development, we’re left anxiously awaiting a kind of announcement, a state of anticipation that serves only those who profit from cultivating it.
1 It’s an off-the-cuff interview and I wouldn’t want to make too much of it, but I thought it was interesting that Klein’s immediate illustrative example didn’t involve a way that A.I. might replace him, but a way it might replace people who work for him. Whatever else you can say about this technology, it has a way of making people think like bosses.