You aren’t hallucinating: AI can lead you astray

Frustrated by, as she put it, the AI-generated “redundant drivel” that we are seeing in pitches and news releases from home furnishings companies, Courtney Porter, our editor in chief, wrote a great column last week about the need for companies to build artificial intelligence protocols — and the need to do it quickly.

If you haven’t seen her piece, you can check it out here. I recommend you give it a read.

I wanted to explore one issue she raised in her column a little more deeply: “The more you experiment with generative AI large language models, the more you familiarize yourself with their capabilities and limitations,” Porter wrote. “LLMs learn and adapt quickly and, sometimes, they hallucinate. They can summarize and they can paraphrase. They can outline and they can edit. They get the gist. But they cannot make your point for you. This is where taste comes in.”

Making it all up

The problem of AI hallucinations — in other words, the systems making up information and presenting it as factual — is a worrying one.

The New York Times looked at the issue recently and what it found was not reassuring: “More than two years after the arrival of ChatGPT, tech companies, office workers and everyday consumers are using AI bots for an increasingly wide array of tasks. But there is still no way of ensuring that these systems produce accurate information,” the Times reported in a May 5 article. “The newest and most powerful technologies — so-called reasoning systems from companies like OpenAI, Google and the Chinese start-up DeepSeek — are generating more errors, not fewer. As their math skills have notably improved, their handle on facts has gotten shakier. It is not entirely clear why.” (Emphasis mine.)

The Times reports that one test showed error rates as high as 79% for one of the new AI systems; other tests peg the average error rate across generative AI systems at about 20%.

That’s just one of several recent articles I’ve read about the problem. As The Washington Post reported on June 3: “Lawyers using AI keep citing fake cases in court. Judges aren’t happy.”

A few days later, Washington Post sports columnist Sally Jenkins wrote about her experience with an AI-controlled bot she called Sage. “In just a few minutes of chatting on the subject of tennis, Sage spewed so many manifest falsehoods, untruths and bad fictions about subjects from João Fonseca to Frances Tiafoe to Coco Gauff that I recoiled from the laptop as if a cobra had spit from it.” When Jenkins asked Sage to evaluate a column she’d written, Sage just flat out made things up, quoting “facts” to Jenkins that she’d never written.

The dangers of such mistakes vary. An error in a legal brief could have serious repercussions in terms of how a judge rules in a case. A mistake in an article about tennis — or in a news release about a new furniture collection, for that matter — is not nearly as serious.

And, some argue, humans make mistakes all the time. What’s the big deal about ChatGPT getting a few things wrong? (I may have made an error in this article. I hope not, but it happens.)

I think the issue is that we expect humans to make mistakes, but the way AI is being sold to us leads us to believe that it’s infallible.

In reality, says Kevin Roose, co-host of The New York Times’ podcast “Hard Fork,” AI-powered bots are more like “a very smart assistant who has a dozen Ph.D.s but is also high on ketamine like 30% of the time.”

You still matter

What does all this mean? AI is an aid, but it requires you to bring your judgment, your experience, your skills, your knowledge and, as Courtney wrote, your taste to the effort.

AI can be great for getting your project started, for brainstorming and generating ideas, for writing rough drafts and making suggestions for improvements, or for summarizing documents. It can, ironically, even be great for fact-checking.

And there are ways to help ensure you’re getting higher-quality information from services like ChatGPT (a brief sketch of these tips in action follows the list):

* Enter detailed, specific prompts. This also saves time, since the chatbot is more likely to give you the information you’re looking for in fewer attempts.

* Request that the chatbot be honest. You can tell it, “If you don’t know the answer, it’s OK. Just say, ‘I don’t know.’”

* Ask for proof. You can request that a chatbot provide sources for its information — or ask it to fact-check itself.
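
For readers comfortable with a little code, here’s a rough sketch of what all three tips might look like rolled into a single request to a chatbot API. This is an assumption-laden illustration, not a recipe: it presumes the official openai Python package, and the model name and prompt wording are placeholders to adapt.

```python
# A minimal sketch of the three tips in one API call, assuming the official
# `openai` Python package and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: substitute whatever model you use
    messages=[
        {
            # Tips 2 and 3: permission to admit ignorance, and a demand for sources
            "role": "system",
            "content": (
                "If you don't know the answer, it's OK. Just say 'I don't know.' "
                "Cite a verifiable source for every factual claim you make."
            ),
        },
        {
            # Tip 1: a detailed, specific prompt rather than a vague one
            "role": "user",
            "content": (
                "Summarize, in three bullet points, the main durability "
                "differences between kiln-dried hardwood and engineered wood "
                "frames for upholstered sofas. Flag anything you are unsure of."
            ),
        },
    ],
)

print(response.choices[0].message.content)
```

The same ideas apply if you’re typing into a chat window rather than writing code: be specific, give the bot permission to admit ignorance, and ask it to show its sources.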

Even if you’re confident the information your AI assistant provides is correct, you’ll want to review it carefully. That’s partly to avoid the “redundant drivel” Courtney lamented, but also to inject your own sense of style and personality, your own ideas, into the work.

Interior design and home furnishings are creative fields, filled with creative people. Use AI for drudgery or when you get stuck, but don’t let it take the joy, art and imagination out of what we do.

And don’t let it go wrong when you need something just right.
