“Girl Feeding Pigs” (1800), Richard Westall

You don’t need me to tell you that artificial intelligence is making our lives worse. The new tools we did not ask for but have been ordered to enjoy are bit by bit degrading our culture, both online and IRL, with fake images, fake videos, lazy and clichéd writing, buggy code, and, sometime very soon, a wave of job losses. AI bad, people!

Of course, I love bad things! I’m a negative guy, as we all know. That is, I really do like to look at why and how bad things are bad, to wallow in the negativity in order to extract something of meaning—I won’t say “something good”—from the experience.

Which is why a recent New York Times headline leapt out at me:

“AI sludge”? Didn’t they mean “AI slop,” the phenomenon to which John Oliver recently dedicated an episode of Last Week Tonight?

Are AI slop and AI sludge the same thing? When I brought this up with some of my colleagues—who work at a large website dedicated to covering Silicon Valley, including many AI companies—they sort of shrugged. Two different ways to say the same thing, more or less: AI-generated crap.

But the more I thought about it, the more the subtle distinctions came into focus, and that’s what I want to dive into today!

What is AI slop?

AI slop is imagery characterized by a few things:

  1. It’s produced at scale. There’s tons of it—lots of still images, lots of videos, lots of everything, so its creators can put it on every social media platform, every single day. Sorry, hour. Sorry, minute—every single minute.

  2. It’s often weird and over the top. Think of our friend shrimp Jesus up there. John Oliver called attention to animals carved from wood. There are sad-eyed refugee children and patriotic trucks and buff babies and oh god:

    I found this image on this Reddit thread.

  3. It’s meant to be gawked at. This is crucial: These images are designed for people to consume on social networks. That’s why they’re mixing and remixing these engagement-bait themes ad infinitum, so that human beings will stare and share—often, tragically, believing the images are real—and the creators will pocket more money from the platforms.

There’s also a more expansive definition of slop that includes any hastily concocted AI imagery, like ad campaigns or streaming-TV credit sequences. And as slop spreads, and we’re served an ever-bigger helping of soulless, mass-produced mediocrity, this may become the default definition. For the moment, be glad that slop is still mostly confined to the social networks, which you can avoid if you really try.

Be glad, too, that slop is such a damn good term! Not only is it sloppily produced, with no consideration given to concept or refinement, it’s also fed to us as slop, poured unceremoniously into a trough where we piggishly gobble it up. It may taste bad, it may leave us feeling ill at ease, but at least it sates for a moment our insatiable hunger for entertainment and outrage. Slop it is!

Okay, so what’s AI sludge?

Let’s start with that New York Times article, which looks at the insanity of the current job market:

The number of applications submitted on LinkedIn has surged more than 45 percent in the past year. The platform is clocking an average of 11,000 applications per minute, and generative artificial intelligence tools are contributing to the deluge.

With a simple prompt, ChatGPT, the chatbot developed by OpenAI, will insert every keyword from a job description into a résumé. Some candidates are going a step further, paying for A.I. agents that can autonomously find jobs and apply on their behalf. Recruiters say it’s getting harder to tell who is genuinely qualified or interested, and many of the résumés look suspiciously similar.

Like slop, sludge is generated en masse by AI tools in order to game a system. For slop, the system is the social algorithms and our own in-built craving for stimulation. For sludge, it’s the AI filters that so many employers—or job-vetting platforms—use to winnow down the already unmanageable volumes of applicants.

The difference, I think, is that sludge is not meant to be consumed. No one is creating AI sludge because someone out there likes wading through it. Instead, people are using AI to juice their resumes: a) to get through an arbitrary filter, and b) so that, on the other side of the filter, a human will pay attention to the details of their work history beyond the requisite keywords.

Sludge, in fact, is a by-product of our dueling attempts to streamline the process. The filtering systems were installed so that hiring managers could look only at proper candidates—which, as a hiring manager myself, I like the sound of. But applicants want a streamlined process, too—to cut through the sludge the filters create—so their AI tools reverse-engineer things, building a new wall of sludge on the hiring side.
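For the nerds among us, here is a toy sketch, in Python, of how that arms race plays out. The keywords, the resume snippets, and the filter itself are all invented for illustration; real screening systems are fancier, but the dynamic is the same.

    import re

    # Toy employer-side screen: keep a resume only if it mentions enough
    # of the job description's keywords. Everything here is invented.
    JOB_KEYWORDS = {"python", "kubernetes", "stakeholder", "agile"}

    def passes_filter(resume_text, required=JOB_KEYWORDS, threshold=0.75):
        words = set(re.findall(r"[a-z]+", resume_text.lower()))
        return len(required & words) / len(required) >= threshold

    honest = "Built data pipelines in Python; led an agile team of four."
    # Applicant-side countermove: stuff in the missing keywords.
    stuffed = honest + " Kubernetes. Stakeholder."

    print(passes_filter(honest))   # False: hits 2 of 4 keywords
    print(passes_filter(stuffed))  # True: hits 4 of 4 keywords

Multiply that countermove by 11,000 applications a minute and you get sludge.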

What’s amazing about AI sludge is that it’s intended as a weapon: It’s a way of telling people to GO AWAY, WE DON’T WANT YOU! Think beyond the realm of job applications to AI chatbots, which seem like they might be designed to answer your urgent questions but are really just a way to fend you off, to make you do the work yourself or, ideally for the large corporations and governmental institutions that deploy sludge so widely, give up completely.

In the 2008 best seller Nudge, the legal scholar Cass R. Sunstein and the economist Richard H. Thaler marshaled behavioral-science research to show how small tweaks could help us make better choices. An updated version of the book includes a section on what they called “sludge”—tortuous administrative demands, endless wait times, and excessive procedural fuss that impede us in our lives.

The whole idea of sludge struck a chord. In the past several years, the topic has attracted a growing body of work. Researchers have shown how sludge leads people to forgo essential benefits and quietly accept outcomes they never would have otherwise chosen. Sunstein had encountered plenty of the stuff working with the Department of Homeland Security and, before that, as administrator of the Office of Information and Regulatory Affairs. “People might want to sign their child up for some beneficial program, such as free transportation or free school meals, but the sludge might defeat them,” he wrote in the Duke Law Journal.

The defeat part rang darkly to me. When I started talking with people about their sludge stories, I noticed that almost all ended the same way—with a weary, bedraggled Fuck it. Beholding the sheer unaccountability of the system, they’d pay that erroneous medical bill or give up on contesting that ticket. And this isn’t happening just here and there. Instead, I came to see this as a permanent condition. We are living in the state of Fuck it.

Sludge, Colin points out, is all over the place: “gym-quitting labyrinths, Airbnb hijinks, illogical conversations with the permitting office, confounding interactions with the IRS.” And it’s not necessarily all the fault of AI. But what makes AI sludge different is that it’s easier and cheaper to create and deploy, which means that the human sludge you’ve become accustomed to—“customer service,” ha!—is about to get exponentially worse, once every business and agency finds a way to replace its unhelpful humans with even less-helpful robots.

(Of course, you will probably be encouraged to pay an additional fee to access that once-derided human level of unhelpfulness. And many of us will probably pay up. Pay up or give up!)

What’s especially awful is that everyone hates sludge. It’s good for no one: all of us, at some point, have to deal with something that has gone wrong, doesn’t work, or is simply unclear but important, and all of us will one day need help from people and systems to resolve it. Even sludge-slingers will have to deal with sludge, whether their own or another slinger’s.

The only ones who won’t have to care are the rich, who have “earned” yet more of their wealth from ensludging the rest of us. They can afford the VIP service, they can foist the problem-solving on assistants, and they can afford to just write off anything that is easier to replace than to fix.

In this context, I’m almost delighted at the current LinkedIn mess—an epic sludge-slinging battle that can only really be resolved if employers disable their sadistic filters and recommit to, you know, actually reading the resumes that come in. Imagine if you really did have a fair shot at a job because you knew that a person—an unhurried, somewhat qualified hiring manager—was going to give your work history its proper consideration. But that would require human intelligence, which the wealthy no longer care to invest in.

You know what? I’m going to give my good buddy Zohran Mamdani a call about this. If New York City can mandate that companies list salary ranges in job ads, if it can require that freelancers get paid within 30 days of finishing their work, then maybe outlawing or otherwise restricting AI filters in resume scanning could be put on the table. If Mr. Cardamom can’t do it, I guess there’s always … Curtis Sliwa? 🪨🪨🪨

Read a Previous Attempt: Dear Mark Zuckerberg

1 I’ll allow that in a few cases, specifically those that involve searching through and synthesizing large amounts of data, the AI tools are useful. But those involve interpretation rather than creation, and it’s in the realm of creation that AI is poison.
