
We All Need to Quit AI Right Now

Because it's bad: for the earth, for society, and for our very ability to think critically.

In partnership with

Today’s advertiser is good ol’ Authory, the portfolio system for writers and other people who make things on the Internet. For each of you who clicks the ad, I’ll get $2! And remember: This is a service I actually like and subscribe to myself. Authory = good. Ad clicks = also good.

Hello! It’s been a while, hasn’t it? More than ten whole days since I last sent you a depressing yet beautifully written email full of obscure lit-theory references, ‘80s pop-music quotations, fart jokes, and essential wisdom for surviving life on Earth, whether or not there’s an apocalypse going on. During my time off (which I’ll write about soon), I did a tiny little bit of reflecting on where to take this project called Trying! 

For those of you just now joining, Trying! began as a way to force myself to start writing again. Every day for 100 days (yes, including weekends), I sent out what I hoped was a decent piece of writing—usually essays but occasionally Real Reporting™ and, once, short fiction—just to see if I could do it. (Here are some very explicit instructions on how I wrote each one.) Along the way, I produced 150,000 words, published 11 of the essays in this handsome print edition, and amassed an army of subscribers whose lives have been transformed by my work and who therefore now heed my every command. They—you—are my own personal, existentialist Proud Boys. Stand back and stand by, kiddos!

Writing at that pace, however, is unsustainable. During that first stretch, I had to sideline many parts of my life to squeeze in a couple of hours of writing a day: I ran less, I read less, I spent less IRL time with my friends and family. While I still have loads of ideas, I now want to space them out a bit more, just so I have the freedom to do other things. From here on out, I will probably write two or three pieces a week, ideally on Mondays and Thursdays and, uh, some other day. But don’t worry: They will be of precisely the same quality as the dailies—no better, no worse. That’s because having more or less time to write and revise doesn’t matter that much to me. I get an idea, I start writing it, and where that leads depends on what I’ve been pondering lately and all kinds of random, sudden inspiration. An extra few hours or days doesn’t change that—doesn’t necessarily help me refine the initial idea or expand on the follow-through. I think, I write, I do a quick edit, I click send. In this new era, that won’t change.

But I still haven’t figured out what else this second stage will bring. I don’t know what benefits paid subscribers should get versus what free subscribers receive. Maybe you should all just upgrade to paid subscriptions, and I won’t have to think about it? Honestly, the rates are quite reasonable!

I do want to bring you more Real Reporting™ (the big issue is time, as always), and I do want to monkey with the format: There will likely be some quick takes on a range of subjects, or deep dives into books and movies that, whether they are good or bad, I happen to love. How often those will happen remains a mystery.

Two things I do know, however: I will continue to put a single, short paragraph before the ad, and I am giving up on AI entirely (and you should, too).

More after the ad, which you definitely need to click on to support me, whether you’re a paid subscriber or not…

🪨

Writers, don’t let your work disappear!

Imagine losing years of articles because a site shut down. What would you do if all your work samples disappeared?

With Authory, that’s a nightmare you’ll never have to face. Authory automatically creates a portfolio that backs up everything you’ve ever written and will write, so your work is always safe.

That’s right: Authory finds and backs up all your past work and saves every new piece you publish, wherever it appears.

Join thousands of writers who already trust Authory to protect their work and never lose a piece again.

🪨

When I began Trying!, I knew I’d need some kind of illustration to sit atop each essay. The problem, of course, is that I cannot draw. Stick figures stymie me. I loathe erasers. What I see in my head is never what appears on paper. I do not want to draw. So, I figured, I’d plug into the various AI engines and have them produce images of Sisyphus pushing his boulder up Mount Tartarus in various scenarios to match my themes. The illustrations were often amusing, particularly in how often they were wrong (AI doesn’t fully understand the concepts of uphill and downhill, let alone push), but I can say they were never good. They were weird, and fascinating, and occasionally apt, but they always lacked the intentionality and the wit to qualify as good. They were accidents, meaningless end products of a faulty, unthinking algorithm.

This was to be expected! I’ve been playing with AI models for about two years now, mostly to solve problems at work. Can I create an AI chatbot that will function as the front end to an encyclopedic 800-page book about gardening? Can I get AI to produce intelligent articles based on end-of-day financial reports or Google News alerts? Can I get it to extract information from dozens of dense paragraphs and build that into a structured database? Can I make the AI do these things reliably, so that we, its human overseers, don’t have to constantly check and refine its processes?

The answer has almost always been no. AI is, at this point, fundamentally unreliable. It gets things wrong, and it makes things up, and it can’t even tell the difference between the two. It’s not even good for first drafts, especially in a professional context. My colleagues and I are able (or should be able) to create our own work in less time than it takes to correct the output of ChatGPT or Gemini or whatever. The only people who are impressed by what AI creates are those who cannot read or write well in the first place. Which is, unfortunately, a lot of people—but not you, my dear paid subscribers!

But you already know all this. You know the AI helpbots are garbage, and the imagery is sloppy, and the energy consumption is criminal. I’ve known it all for a long time, too, and still I continued to monkey with AI. Part of that was to continue to prove that AI could not do what AI boosters keep promising it can do. I needed evidence of its failures, you know?

No, the real, final reason for me—and you—to give up on AI right now is this: AI bad for brain! This is based on two recent studies: one of 666 people in the UK (“AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking”) and the other of 319 “knowledge workers” (“The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers”). Both involved surveys and a lot of self-reporting of AI use and how the respondents acted, thought, or adapted, but the UK one seems to have included some actual measurement of people’s abilities. Here are the money quotes, first from the UK study:

The findings revealed a significant negative correlation between frequent AI tool usage and critical thinking abilities, mediated by increased cognitive offloading. Younger participants exhibited higher dependence on AI tools and lower critical thinking scores compared to older participants.

And now from the U.S. study, which was done by researchers at Microsoft and Carnegie Mellon:

Specifically, higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking.

In other words, the more you trust AI, the worse your own ability to think. There’s probably a bit of a feedback loop happening here: Since you’re losing the ability to think critically, you outsource more of it to the AI you mistakenly trust, which further erodes your critical thinking.

Honestly, this explains a lot. It explains why the Powers That Be are foisting AI upon us—so that, forced to use it, we lose our ability to understand what they’re up to. And it also explains their apparently earnest belief that AI will somehow save us all (and earn them trillions); they have outsourced their own critical thinking to shit-machines so fully that they believe their own lies.²

But we cannot do the same. Your ability to think, and think critically, is the one thing they cannot take away from you—unless they can get you to give it away freely.

Now is the time to reject AI as fully as you are able. Ask every chatbot to connect you with a human—do everything you can to bypass its poorly programmed matrix of useless help links. If you are a Chrome user, follow these instructions to remove AI summaries from your search results:

  1. Go to chrome://settings/searchEngines

  2. Click the “edit” button next to Google

  3. Copy-paste this code {google:baseURL}/search?udm=14&q=%s into the field called “URL with %s in place of query”

  4. Click save

  5. Now your search results will have no AI summaries!
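If you use a different browser, or just want the trick to travel, the same parameter works in any Google search URL: udm=14 requests the plain “Web” results tab, which is what strips out the AI summaries. Here’s a minimal Python sketch (the function name is my own invention, not anything official):

```python
from urllib.parse import quote_plus

def web_only_search_url(query: str) -> str:
    # udm=14 asks Google for the plain "Web" results tab,
    # the same parameter the Chrome setting above injects.
    return f"https://www.google.com/search?udm=14&q={quote_plus(query)}"

print(web_only_search_url("sisyphus boulder"))
# https://www.google.com/search?udm=14&q=sisyphus+boulder
```

Paste the resulting URL into any browser and you get search results without the AI blather up top.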

Look, that’s just the easy part. The harder part is, um, two parts. One of them is resisting the temptation to use AI at all. Don’t fuck around on ChatGPT and have it write crude song lyrics, in the style of Paul Simon but filthy, about FBI director Kash Patel. Or, more seriously, don’t use it to summarize work emails, to generate PowerPoint slides, or to transform the transcripts of your co-op board meetings into coherent, properly formatted minutes. Just stay the fuck away from it.

The temptation, I know, can be overwhelming. We’re starting to see job ads for “prompt engineers,” and we’re hearing everyone from analysts to educators say that we all, even kids, need to at least learn how to use these new technologies. For those of us who want to stay up on such developments, it sounds persuasive. Shouldn’t we understand the capabilities of these “tools” before fully dismissing them? All I can say is that I have been doing that for two years now, and I should have fully dismissed them long ago. They are bad and they make you worse.

The second, and harder, part of the harder parts¹ is this: Think! Right now the goal of the U.S. government and much of the business sector is to make us stop thinking. The more they can erode our ability to understand the world, the more they can get us to offload our imaginations, our calculations, and our idle ponderings, the worse we will get at using our brains and the less we can do to fight back.

I’m low-key obsessed with Thimk!, a MAD Magazine copycat from the 1950s.

How can you make yourself think? (Besides becoming a paid subscriber, of course!) I don’t know! Read books, maybe? Do everyday math on scraps of paper with whatever half-dead Muji pen is lying around? Just sit and stare out the window at the street or the backyard or the airshaft? The answer, obviously, is going to depend on you.

For me, it comes down to this: Can I choose a more difficult path? That might mean forcing myself to finish reading a novel I’m not all that into, or calculating fractions during a run (at 13 minutes in, should I consider myself one-third finished—or one-quarter finished?), or attempting to write an essay every day for 100 days. What I want is to develop my own abilities, to enhance my own self-confidence, so that I don’t ever need to turn to the hallucinating tin cans for assistance. The more we can all do that, whatever our abilities, the better off we’ll all be.

Back at the end of December, I wrote about an international study of adult skills that found that, duh, Americans are not only bad at reading but that:

American adults’ literacy has in fact declined over the past decade. Back in the 2012/2015 surveys, just 18 percent scored at or below Level 1 (while the Level 4+ cohort has, luckily, remained constant). Overall, we are less able to make sense of the written word than we were during Obama’s second term. We were dumb; we’ve grown dumberer.

And you know who’s leading the dumbening? Older adults. Yes, Americans born from 1958 to 1988 showed the sharpest declines in reading scores, with the oldest cohort—older Gen X and younger Baby Boomers—worsening the worstest, even though one of the reading-comp passages was about bus fares.

This terrified me, but what terrifies me even more is that the survey was performed before the current AI boom. If we were already getting dumber, less able to comprehend the world in front of us, without the AI tools that we now know wreck us, how much stupider will America be five years hence?

The answer is a lot. America will be so much more stupider.

Okay, that’s enough. You use your brain, I’ll use mine, we’ll both skip the AI, and who knows, maybe I’ll try drawing again one day. 🪨🪨🪨


¹ Heh heh!

² Maybe this is also why the scientists behind the studies think of this as a wake-up call to improve AI tools rather than dump them entirely.
