Seductive Answers
So there I was, manually cleaning up old files, trying to free up some space on my drive to squeeze in another backup. You know how it goes. Delete, delete, keep, delete... wait, what's this?
Buried in a backup from 2014, I found this note about simple answers and stupid algorithms. Reading it now, in 2025, it's both funny and a bit unsettling how spot-on some of it was.
Here's what I wrote back then:
* People asking for simple answers
* Highly complex world – and growing in complexity
* There is nothing wrong with simple answers as long as you understand the problem with all the variables.
* Smart algorithms do make our lives a little easier... That's good. But at the same time I don't want to end up as a simple boolean expression in some crazy complex algorithm indicating that I'm a terrorist because I read the wrong books and wrote the wrong tweets.
* It's not a technology problem but a cultural one. We need to question every answer, even seductive simple answers from geniuses or algorithms. That's way more work than simply trusting those answers, but it's the only way to avoid stupid actions based on stupid answers based on stupid algorithms based on unspecific questions.
Remember 2014? Amazon would, seemingly at random, suggest you buy a refrigerator after you'd bought some book, because item-item collaborative filtering1 had computed a high similarity, even though the last thing you wanted was a refrigerator.
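The core of that item-item approach is nothing mystical: score two items as "similar" if the same users bought both, typically via cosine similarity between their purchase vectors. A minimal sketch with made-up data (the items and purchase history are illustrative, not Amazon's actual pipeline):

```python
from math import sqrt

# Toy purchase history: for each item, a vector over four users
# (1 = bought, 0 = not bought). Items and data are invented for illustration.
book   = [1, 1, 1, 0]
fridge = [1, 1, 0, 0]

def cosine_similarity(a, b):
    """Cosine similarity between two item vectors: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Two of the three book buyers also happened to buy the fridge, so the
# items come out "similar" even though the products have nothing in common.
print(round(cosine_similarity(book, fridge), 2))  # 0.82
```

That's the whole trick: co-purchase counts, not understanding. Which is exactly how you end up recommending refrigerators to book buyers.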
Fast forward to today, and oh shit, have things changed. Now we've got ChatGPT writing our emails, Midjourney creating art, and AI assistants that can explain quantum physics while helping you plan dinner. The stupid algorithms got smart. Really smart.
But here's the thing, that note wasn't really about the algorithms being stupid. It was about us being lazy. And that part? That hasn't changed one bit. If anything, it's gotten worse.
From Stupid to Sophisticated
Let's be honest about what AI looked like in 2014. It was mostly about predictions and classifications. "Customers who bought this also bought that." "This email is probably spam." "You might like this movie." Simple stuff, and often hilariously wrong.
Then in 2016, AlphaGo beat the world champion at Go, a game roughly a googol (1 followed by 100 zeros) times more complex than chess. Using reinforcement learning, the AI had learned to play creative moves that won games.
In 2017 came the transformer architecture2, and the trajectory changed. Transformers could be trained in parallel at far larger scale than LSTMs and other recurrent architectures.
And then in 2022, ChatGPT entered the scene and everything changed. A hundred million users in two months. Suddenly, everyone had access to an AI that could write, code, explain, and create3. It was confident. It was articulate. It was often completely wrong, but in such a convincing way that you'd never know unless you already knew.
The problem isn't that the algorithms are stupid anymore. The problem is that they're smart enough to give us exactly what we want: simple, confident answers to complex questions. No hedging. No "it depends." Just answers, delivered with the certainty of a know-it-all at a dinner party.
We Are the Perfect Suckers
Here's where it gets interesting. Our brains are basically lazy. I don't mean that as an insult – it's a feature, not a bug. Evolution favors efficiency in a world of scarcity, so we conserve energy where we can, and thinking hard burns calories. So we've developed all these shortcuts.
Daniel Kahneman called it System 1 and System 2 thinking4. System 1 is fast, automatic, intuitive. It's what you use to recognize a face or know that 2 + 2 = 4. System 2 is slow, deliberate, logical. It's what you need for 17 * 24 or for figuring out why your code isn't working. Or you can reach for a calculator and a debugger and, again, save energy.
Guess which one we prefer to use? Yep, the easy one.
But even Kahneman, who wrote the book on cognitive biases, fell for it. Parts of his book "Thinking, Fast and Slow" cited studies that couldn't be replicated. The chapter on priming? Turns out a lot of that research is underpowered. In 2017, he admitted he'd "placed too much faith in underpowered studies" and that "there is a special irony in [his] mistake because the first paper that Amos Tversky and [he] published was about the belief in the 'law of small numbers'" making researchers trust the results of underpowered studies with unreasonably small samples.5
Think about that for a second. The world's expert on how we fool ourselves... fooled himself. If that doesn't make you humble about your own thinking, nothing will.
We've got all these built-in biases:
- Confirmation bias: We love information that agrees with us
- Availability bias: If we can easily remember something, we think it's common
- Authority bias: If someone (or something) seems expert-ish, we trust it
All in all, these are really just tools for conserving thinking power. Why waste precious calories questioning anything when you can trust an expert who's saying what you already believe?
Now here comes modern AI, serving up answers that hit all these buttons. It's confident. It tells us what we want to hear because it's trained on what we've clicked on before. And it's always available with instant answers.
The Cost of Easy Answers
A few weeks ago, a post went viral on r/analytics. The title: "We just found out our AI has been making up analytics data for 3 months and I'm gonna throw up."
The story was visceral. A company had been using an AI agent since November to answer leadership questions about metrics. Fast answers, detailed explanations, everyone loved it. The VP of sales had made territory decisions based on data that didn't exist. The CFO showed the board a deck with fake insights. The AI had been inventing plausible-sounding percentages the whole time. The person only caught it by accident, when someone asked them to double-check something.
The post got deleted. And it was most certainly made up.
But here's the thing, it went viral because it felt true. Someone left this comment6:
People trust their tooling. I don't go around to every app URL and see by myself if it works, I open dashboard and look if stuff is green. If AI would make the dashboard forever green, regardless of what happens IRL, I am sure my boss would be in heaven for a week or two, before something major would happen. I can guarantee you NOBODY would notice, at least UNTIL something major would happen.
That's it. That's the whole thing. We don't verify because verification is work. Not stories, not green dashboards, not anything, really.
That's what I was worried about in 2014.
How to Not Be a Sucker
Alright, so we're lazy thinkers using tools that enable our laziness. What do we do about it?
Here are a few strategies that help:
- Create Distance: When using AI to evaluate something you've made, don't frame it as "your" work. Instead of asking an AI "can you review my code?" ask it "can you review this code?" and ask it to be blunt. When the response discusses some work rather than your work, you create a little distance, which makes it easier to hear criticism. This works with your own thinking too. Instead of "why do I believe this?" try "why would someone believe this?" It's the same question, but the second one is easier to answer honestly.
- Three-Source Rule: For anything that matters, find three different sources. Not three Google results that all cite the same original source. Three actually different perspectives. Yes, it's more work. That's the point.
- Flip Test: Whatever answer you get, try to argue the opposite. Ask the AI to argue the opposite. If you can't make a decent case for the other side, you don't understand the issue well enough.
- "So What?" Check: Simple answers often fall apart when you push on consequences. "AI will replace all jobs!" Okay, so what happens next? "Universal basic income!" Okay, who pays for it? Keep pushing until the simple answer reveals its complexity.
- Specifics Hunt: Vague answers hide ignorance. "Many experts agree..." Which experts? "Studies show..." Which studies? "It's well known that..." Known by whom? Demand specifics.
- Time Delay: For non-urgent decisions, wait 24 hours after getting an answer before acting on it. It's amazing how different things look after your brain has had time to process in the background.
Here's what I'm not saying: throw away your iPhone, delete ChatGPT, go live in the woods. That's just another simple answer to a complex problem.
AI is incredible. It really does make our lives easier. I use it all the time. But try to use it with respect for what it can do and awareness of what it can't.
The sweet spot is using AI to enhance, not replace, thinking. Let it find information, suggest options, check your logic, play devil's advocate. But you still need to be the one asking the questions, evaluating the answers, making the connections. You need to stay in the driver's seat, even if you're using cruise control.
The Question Is the Answer
One more thought: that note from 2014 ended with a call to "question every answer even seductive simple answers."
We live in a world of infinite answers. Any question you can think of, there's an AI ready to respond in milliseconds. But good questions? Those are becoming rare. And the ability to sit with uncertainty, to say "I don't know" or "it's complicated" – that's becoming downright countercultural.
But here's the thing: every major advance in human understanding came from someone refusing to accept a simple answer. "The sun goes around the Earth" was a simple answer. "Heavy objects fall faster" was a simple answer.
It starts with each of us deciding that convenience isn't worth competence. That fast isn't always better than right. That "I need to think about that" is a perfectly valid response in a world that demands instant takes.
So next time you get a simple answer to a complex question, pause. Poke at it. Question it. Make it prove itself. Your brain might grumble about the extra calories, but your future self will thank you.
After all, in a world of infinite answers, the real power isn't in knowing things. It's in knowing how to think about things. And that's one job we definitely shouldn't outsource.
References
1. I'm guessing Amazon was still using item-based collaborative filtering recommendation algorithms in 2014, or at least some variant of it with a few things layered on top.
2. Vaswani, Ashish, et al. "Attention Is All You Need." Advances in Neural Information Processing Systems, 2017.
3. ChatGPT reportedly reached 100 million users within two months of launch, making it one of the fastest-growing consumer applications in history.
4. Kahneman, Daniel. Thinking, Fast and Slow. 2011. See also Farnam Street's summary "Daniel Kahneman Explains the Machinery of Thought."
5. Daniel Kahneman's response to "Reconstruction of a Train Wreck: How Priming Research Went off the Rail."
6. Comment by Forward_Ad_356 on "We just found out AI has been making up analytics data for three months and I'm gonna throw up."