The Mass Hysteria Around AI
Up is down, left is right, cats and dogs living together...

One of my worst fears is this: I wake up underwater and in the dark, with both lungs full of air and the knowledge that the surface is within reach, yet no sense of which direction might be up, and which down. Which way do I swim? Will I choose unwisely and drown myself? This is a stupid fear not only because it is unlikely to happen¹ but also because that’s not how the human body works. We have what scientists call the vestibular system in our inner ears, whose otolith organs, according to scienceoffalling.com, “contain tiny crystals that move when you tilt your head or when your body accelerates, sending signals to your brain via hair cells to let it know you’re moving or tilting.” In other words, we have structures in our heads and brains that react to gravity and tell us which way is up. It’s innate, even in a nightmare scenario.
I bring this up because the AI models I’ve been using to illustrate Trying! for the past month still struggle with this basic concept. Nearly every day since October 31, I’ve asked ChatGPT’s Dall•e model to show Sisyphus pushing his boulder uphill, and every once in a while, Dall•e has responded with Sisyphus pushing the boulder downhill.

Which way is up?
At times, I’ve told Dall•e it’s gotten this wrong, and asked it to correct the direction, and still it’s gotten things wrong. I had thought—maybe even hoped—that when I switched from Dall•e to Midjourney for this month, things would improve. Alas, as you can tell from the image that tops this newsletter, Midjourney has no vestibular system, either.
Never mind the much-discussed shortcomings of AI image generation, like its difficulty rendering hands or lettering. This seems even more basic: AI cannot tell up from down! If you are an AI optimist, this seems like a major problem.
But I’m not an AI optimist in any form.
Oh God, AI Sucks
For a year and a half now, I’ve been doing AI experiments at work, and for a year and a half they’ve been failing. The various versions of AI (ChatGPT, Gemini, etc.) are unable to produce a simple news article that would pass muster with even the most lackadaisical editor. They have no sense of fact; they have no sense of context. When tasked with creating words, they operate in a vacuum—no, an alternate universe—where reality and quality do not exist. They blather on, sometimes convincingly, almost like a newsletter writer who’s whimsically chosen to churn out daily prose but ultimately has nothing to say worth knowing. The most I’ve been able to get the AIs to do is extract and summarize and reorganize text from existing, human-written works; this they can do far faster and more reliably than your average intern, who doesn’t want to be doing such menial tasks anyway. Interns, meanwhile, now get assigned big investigative features.
And still I don’t trust the AIs to get these things right. We still need to have a layer of human beings look over the AI work, again and again, and make sure no errors or “hallucinations” have been introduced.
Really, though, the hallucinations we attribute to AI are our hallucinations: We see what AI produces—its flowing and grammatically correct sentences, its smooth and instantaneous illustrations—and hallucinate not only intelligence but intention behind them. And this tells us far more about how we read and see than about how AIs do.
I can sum that up even without ChatGPT: Because most of us are barely literate and can’t draw, we think AI is fucking amazing—but that’s only because most of us are barely literate and can’t draw. I include myself among that latter group (and you’d be within your rights to include me in the former). I can’t draw at all. Maybe when you signed up for Trying! you saw that image of the stick-figure Sisyphus pushing the boulder uphill; that was seriously the best I could do. When I look at even the worst things Dall•e produced for me—the images where I asked it to draw like a kindergartner—I despair of ever being able to achieve such a feat. Here’s one:

I could never make this myself.
Look, art is difficult. And we are easily impressed. The blogger Scott Alexander last month published the results of his “AI Art Turing Test,” in which he asked “11,000 people to classify fifty pictures as either human art or AI-generated images.” The median score was about 60%, which isn’t that far off from 50-50, meaning that most people couldn’t tell what was human-made and what was AI-generated. The newslettererer Max Read had a nice take on this: “People prefer A.I. art because people prefer bad art.” Clearly, most people haven’t heard of my good–bad/like–hate matrix. Or omg, what if they have?!?
Often, I look at the results AI has produced for me, and I’m agog: How did a pile of microscopic transistors, fueled by the electrical output of a small city, create this wonderful insanity? There is beauty in some of these images, and wit, however accidental. I keep using them because the pictures are arresting and eye-catching, and I simply cannot make something like that myself. Also, I can’t afford—yet—to hire human illustrators.
But I never mistake them for art. I never mistake them for being even good. They are expressions of a pattern-matching algorithm, one that exists without a basic sense of good and bad or up and down, so the question of quality is nonsensical. One can only ask, “Is it appropriate?” Or “Is it surprising?” But even when the answer to both is yes, it should once again remind us of our own inadequacies in this department: If we were more talented, if we were more discerning, we would not need this at all.
Because you, my subscribers, are sophisticated and intelligent people—especially the paying subscribers—I don’t need to tell you this, but I will: Whenever you encounter AI art or writing that seduces you with its superficially professional slickness, you have a choice. You can either be impressed, and hail AI as a miracle for mankind. Or you can take the opportunity to raise your own goddamn standards.
One of the things I learned in the magazine business, which I’ll attribute to my time at Condé Nast and Bon Appétit, is that talent is just the beginning. You can hire great writers, photographers, designers, and that will take you far. But when they file their work, however genius it might be, you can always ask this simple but vital question: “How can we make it better?” It’s a difficult question to ask in a media world where time and money are never abundant, but it works. It pushes us human beings to come up with new, more original ideas, images, paragraphs, and it tells us that our standards are not fixed and immutable. We can always ask more of ourselves and our arts (whatever “arts” means), and we can get more, as long as we’re willing to ask the question—this question I posed to Midjourney in response to the header image: Can you make that, but better? Here’s what I got:

Is that better? I don’t know. But at least, in three of the four images it produced for me, it got the directionality correct. That’s a start. But, like Sisyphus, AI has an uphill journey ahead of it. 🪨🪨🪨
Notes
¹ At some point, I will write about justified versus unjustified fears.