Agency Device
The first time I came across a useless machine, I felt like it was mocking me, and I knew I wanted one.
There it sat, a simple wooden box with a single switch. I flipped it on. A mechanical hand emerged, deliberately flipped the switch back off, and retreated. I tried again. Same result. And again. Each time, that little hand seemed more determined, more stubborn.
But here's the thing: there is no determination. No stubbornness. Just a motor doing exactly what it was designed to do — turn itself off. The most basic circuit imaginable. Yet I couldn't shake the feeling that this box was actively resisting me. Mocking me, even. Arthur C. Clarke, after encountering one at Bell Labs, wrote that there is "something unspeakably sinister about a machine that does nothing — absolutely nothing — except switch itself off." 1
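To see just how little is going on inside, here's the machine's entire behavioral repertoire sketched in a few lines of Python. It's a toy model with names I made up, not any real device's firmware:

```python
# Toy model of a useless machine. The class and method names are
# illustrative only; a real build would drive a servo from a
# microcontroller pin instead of toggling a boolean.

class UselessMachine:
    def __init__(self):
        self.switch_on = False

    def flip_switch(self):
        """The user's only possible move: turn the switch on."""
        self.switch_on = True
        self._respond()

    def _respond(self):
        # The whole mechanism: if the switch is on, turn it off.
        # No state, no memory, no preference. The "determination"
        # exists entirely in the observer.
        if self.switch_on:
            self.switch_on = False


box = UselessMachine()
for attempt in range(1, 4):
    box.flip_switch()
    state = "on" if box.switch_on else "off"
    print(f"attempt {attempt}: switch is {state}")
# Prints "off" three times. Same circuit, same result, no stubbornness.
```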
That strange feeling is what hooked me. I probably spent too much time watching videos of different designs, looked into building one out of Lego, and started sketching a circuit on a breadboard. There was something captivating about this useless opponent, a machine that existed solely to disagree with its user. Something irrational. Deep and instinctive.
Agent Reflex
We're essentially "hardwired" to detect agency even where none exists. Think about it evolutionarily: our ancestors who assumed that rustle in the grass was caused by an agent — a predator — survived more often than those who assumed it was just the wind. Better safe than sorry.
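The asymmetry behind "better safe than sorry" is easy to put in numbers. Here's a back-of-envelope sketch with made-up costs (the figures are purely illustrative, not from any study):

```python
# Error-management arithmetic with invented numbers: why a detector
# biased toward false alarms can beat an "accurate" one.

p_predator = 0.01               # a rustle is a real predator 1% of the time
cost_false_alarm = 1.0          # energy wasted fleeing from the wind
cost_missed_predator = 1000.0   # cost of ignoring a real predator

# Expected cost per rustle for two extreme policies:
always_flee = cost_false_alarm                  # pay a small cost every time
never_flee = p_predator * cost_missed_predator  # rarely pay a huge one

print(f"always flee: {always_flee:.1f}")  # 1.0
print(f"never flee:  {never_flee:.1f}")   # 10.0
# Jumping at every rustle is ten times cheaper on average, so selection
# favors over-detecting agents, even at the price of constant false alarms.
```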
Psychologists call this the Hyperactive Agency Detection Device, or HADD 2. It's a cognitive system that evolved to over-detect intentional agents in our environment. It fires constantly — when we describe storms as "angry," when our car "refuses" to start on cold mornings, when a box with a switch seems to be fighting us.
But we don't just detect agency. We can't help but theorize about intentions. We project beliefs, desires, emotions. After detecting action, we attribute reasons for the action — we build a theory of what the agent wants. 3
This second step is what separates us from other animals. A rabbit might detect agency and flee. We detect agency and create a narrative.
Shapes and Souls
Fritz Heider and Marianne Simmel demonstrated just how powerful this tendency is back in 1944. They showed people a simple animation: geometric shapes moving around a screen. Just triangles and a circle. No faces, no limbs, nothing remotely human-like.
Watch it yourself. Even knowing it's just shapes following predetermined paths, the narratives unfold anyway — stories of bullying, romance, heroism. You can't not see them. 4
That's your theory of mind in action. The same cognitive machinery that helped our ancestors distinguish friend from foe now compels us to see intention in triangles bouncing around a screen.
And it gets stronger the less we understand something. Epley, Waytz, and Cacioppo's Three-Factor Theory of Anthropomorphism points to "effectance motivation" — our drive to make sense of and predict our environment. 5 When something behaves in ways we can't easily explain through mechanical means, our brain defaults to intentional explanations. It wants something. The useless machine is a perfect trigger: what tool exists solely to turn itself off? Unable to find a functional answer, we supply a psychological one. The machine doesn't want to be on.
This is also why people find randomly failing technology more infuriating than consistently broken technology. A car that never starts is just broken. A car that starts sometimes feels stubborn — like it's choosing not to cooperate. Unpredictability triggers agency detection. It's easier to deal with a stubborn machine than a random one, because at least stubbornness implies something you can reason with.
The Consciousness Question
I've been thinking about all of this because the current conversation around AI consciousness reminded me of that useless machine.
There's a real debate happening right now about whether large language models might be conscious, might have experiences, might understand. One study found that only a third of participants thought ChatGPT definitely did not have subjective experience; the remaining two thirds attributed to it at least some degree of phenomenal consciousness. 6
I don't want to dismiss that debate entirely. It touches on genuinely hard problems in philosophy. But I do think it's worth pausing to notice how much of it might be us. Our HADD, firing. Our theory of mind, constructing narratives. The same machinery that makes geometric shapes into characters and a wooden box into an opponent.
The strength of our attribution scales with the sophistication of the trigger, not with any actual evidence of inner experience. Simple shapes get simple stories. A box that flips a switch gets stubbornness. A system that produces fluent language gets... consciousness?
Each step up in behavioral complexity produces a proportional increase in our projection. That's exactly what you'd expect if the attribution is coming from us. It's not clear that it tells us anything about what's happening on the other side.
As Peter et al. put it: when technology exhibits inherently humanlike qualities that make telling the difference difficult, we must recognize that the technology has itself become anthropomorphic — not because it is human, but because it pushes our buttons so effectively that the distinction becomes hard to maintain. 7
The Device
I never did build that useless machine. At some point I realized the machine wasn't the point. The point was what I was feeling: the split-second conviction that a motor on a switch was fighting me.
It wasn't. There is nothing there but a simple circuit.
But knowing that didn't stop the feeling. That's the thing about our agency detection — it's not a belief you can argue yourself out of. It's a reflex. You can know exactly how the trick works and still feel the trick land.
That, I think, is worth sitting with. Especially now. Because the machines are getting much, much better at triggering that reflex. And the danger was never that they'd become conscious. The danger is that we'll keep forgetting they aren't.
References
1. The quote comes from Clarke's visit to Bell Labs. It is cited in Pesta, A. (2013, March 12). Looking for Something Useful to Do With Your Time? Don't Try This. The Wall Street Journal. I believe it was originally published in Harper's Magazine. ↩
2. Barrett, J. L. (2000). Exploring the natural foundations of religion. Trends in Cognitive Sciences, 4(1), 29–34. https://doi.org/10.1016/S1364-6613(99)01419-9 ↩
3. Barrett, H. C. (2005). Adaptations to predators and prey. In D. M. Buss (Ed.), The Handbook of Evolutionary Psychology. Wiley. ↩
4. Heider, F., & Simmel, M. (1944). An experimental study of apparent behavior. The American Journal of Psychology, 57(2), 243–259. https://doi.org/10.2307/1416950 ↩
5. Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114(4), 864–886. https://doi.org/10.1037/0033-295X.114.4.864 ↩
6. Colombatto, C., & Fleming, S. M. (2024). Folk psychological attributions of consciousness to large language models. Neuroscience of Consciousness, 2024(1), niae013. https://doi.org/10.1093/nc/niae013 ↩
7. Peter, S., Riemer, K., & West, J. D. (2025). The benefits and dangers of anthropomorphic conversational agents. Proceedings of the National Academy of Sciences, 122(22). https://doi.org/10.1073/pnas.2415898122 ↩