Lost in AI

When everything around AI is introduced with such fanfare, when every week brings a model update or a new agent framework or another "this changes everything" announcement, you don't notice the things that just happen. The quiet shifts. The ones nobody announces because nobody decided them. They creep up on you.

What I noticed, eventually, is that the conversations have gotten thinner. Not gone — people still talk, standups still happen, PRs still get filed. But the texture has changed. The "hey, can I pick your brain for a minute?" pings. The "how would you go about this?" conversations. The "I'm thinking about doing X, but I'm not sure about Y" exchanges. All of it is fading.

The pre-build conversations — the messy, exploratory, sometimes unproductive ones — were how a lot of thinking happened in the open. They were how teams built a shared understanding of problems before anyone committed to a solution. And they were implicit. Nobody scheduled them. They were just part of how people worked together.

That implicit layer is what's disappearing. Not because anyone decided to get rid of it, but because the tools moved the conversations into private sessions with an AI agent.

What Used to Be on Autopilot

Before agents, a lot of thinking happened in the open whether anyone wanted to or not.

Pair programming made reasoning visible by default. You couldn't solve a problem without narrating your thought process to the person next to you. Code review wasn't just about the diff — it was the "why did you go with X instead of Y?" conversations that taught both people something. Even the hallway chat after a tricky debugging session — "you won't believe what turned out to be causing it" — was knowledge transfer that nobody had to schedule or incentivize.

These weren't formal processes. They were side effects of how work happened to be structured. Mentorship was ambient. I learned a lot from more experienced engineers by working with them, not by watching their output from the sidelines. Information spread across the team without anyone calling it "knowledge sharing." It was just how things worked.

John Cutler calls this the difference between "single-player" and "multiplayer" mode, and notes that most AI workflows are fundamentally single-player: the collaborative dimension isn't just missing, it's being actively replaced by something that feels collaborative but produces isolation.1

To that point, JetBrains recently announced they're sunsetting Code With Me — their real-time collaborative coding feature.2 Demand, they said, had "peaked during the pandemic and has since shifted." They're now focusing on AI.

When Everyone Plays Solo

Open source runs on craft and community. Pride in the code matters — the meritocracy, the standards, the expectation that what you ship is good. But so do the norms around it: how you file issues, how you structure a PR, how you communicate intent, how you earn trust over time. Hendrik Erz, after dealing with the first vibe-coded PR to Zettlr, framed it well: these are social norms, built up over decades, that govern how strangers collaborate on shared infrastructure.3 They're what make open source work.

Those norms are falling apart. Not because anyone attacks them directly, but because the tools make it possible to skip them entirely. When you can generate a thousand-line PR without understanding the codebase, without engaging with the project's conventions, without talking to anyone — the norms become optional. And once enough people treat them as optional, they stop functioning.

Mitchell Hashimoto's Ghostty project went from requiring AI disclosure to banning drive-by AI PRs. tldraw auto-closes all external PRs. cURL killed its bug bounty program after AI-generated submissions swamped it.4 Mitchell didn't mince words: "It’s a fucking war zone out here man. Maintainer morale at an all time low. I totally empathize with the projects that flip the table and ban all AI. I’m getting close to saying only maintainers and accepted issues can have any AI."5

What's breaking isn't code quality — that's fixable, even if today's AI-generated code still comes with plenty of its own problems.

What's breaking is the community itself. When everyone plays solo, throwing shit over the fence without engaging with the humans on the other side, you don't have a community anymore. Eventually you depend on code maintained by no one, or by a single person who might get sick, burn out, or simply move on.

But continued maintenance is only part of it. What you also lose is the community friction, and the friction is where a lot of value comes from. When different people with different perspectives work on the same problem, they argue. They disagree about approaches. They push back on each other's assumptions. Sometimes it leads to forks. Often it leads to better software. The shaping that happens when humans have different ideas and opinions — that isn't overhead. It's how projects develop depth, how contributors grow, how standards evolve.

And I'll bet every engineering leader has thought at least once: if they'd just do exactly as I told them, we'd already be done. That's the impulse. And it's the wrong one! It's the difference between sitting there wishing everyone would just follow your commands and actually understanding that the world is more complex and that you need other perspectives to find good solutions. The friction of people is annoying (at times). It's also what produces quality that no one can produce alone.6

And we don't have to skip any of it just because we're using AI. Case in point: Ghostty contributions by someone who didn't know Zig, didn't know macOS development, didn't know terminals. They'd used AI to decode crash files and analyze root causes. They came into the Discord, disclosed everything — the AI use, the gaps in their knowledge — and asked whether the team would accept contributions. Mitchell looked at the first one, was impressed, said send them all. Four real crashing bugs got fixed.7 That's not throwing shit over the fence. That's a human engaging with other humans, using AI as a tool rather than a replacement for participation.

The difference matters, and not just for open source. There's a vast amount of proprietary software behind company walls. The same dynamics apply: bus factor creeping toward one, understanding narrowing where it used to be diverse, shared context that doesn't accumulate because nobody shares how they think anymore — only what they ship.

Solo Players Don't Get Feedback

We've lost the team as a unit. We're all solo players now, and we're getting worse at the thing that made teams valuable.

The solo-player mode doesn't just mean we're not sharing context; it means we're not getting feedback on the thing we're spending most of our working hours doing now: interacting with AI. The agents aren't going to provide it — they're sycophantic machines, absorbing whatever you say and responding with "Great idea!" regardless of whether it actually is one.

I don't know whether I do my AI work well. There's no one watching me. No one to notice that I don't use planning mode often enough, or that I'm doing too much back-and-forth where I could frontload the context, or that I keep making the same framing mistakes. And from conversations with others, I know some people are reluctant to even discuss their struggles — afraid they're using the tools wrong, unsure whether the back-and-forth that took them hours would have taken someone else a single well-framed prompt.

All we have is a handful of good blog posts and then a mountain of slop. LinkedIn posts about "the only prompt you'll ever need." PDF libraries sold as secret knowledge. Influencer threads mostly generated by the same solo players who aren't getting feedback themselves. The actual thing that would teach you how to work with these systems — watching someone experienced navigate a real, messy problem with an agent — is the thing we're not sharing. We're flooded with packaging and starved of the real artifact.

Russinovich and Hanselman named this in a recent Communications of the ACM paper: "AI drag" — agentic coding tools boost more experienced engineers while actually slowing early-career developers who lack the judgment to steer AI output.8 Their proposed fix is a formal preceptor program: experienced engineers explicitly mentoring juniors in the context of AI-assisted work. It's a great paper, and the word they land on is exactly right: we need to externalize this judgment. Make visible the thinking that our current workflows hide.

That's a challenge for everyone, not just juniors. Nobody has five years of experience with Claude Code. We're all figuring this out, and we're all doing it alone.

What Used to Be on Autopilot Needs a New Engine

Some things that used to happen automatically — the knowledge sharing, the ambient mentorship, the shared understanding — are no longer on autopilot. The way we work now moved them off the default path. The time and effort were always there; they just ran in the background. They don't anymore, and we need to build the replacement.

Mitchell has floated the idea of "prompt blame" — an equivalent of git blame that shows not just who wrote a line, but also the prompts, the reasoning, and the iterations that produced it.9 Tools are appearing in this space.10 The instinct is right: if the thinking is happening in private sessions, we need infrastructure that captures it and gives the reasoning somewhere to live after the session ends.
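To make the idea concrete, here is a minimal sketch of what prompt-blame plumbing could look like, built on nothing more than git's built-in notes mechanism, reportedly the same primitive git-ai uses (see note 10). Everything specific below is an assumption for illustration: the refs/notes/prompts namespace, the JSON payload, and the function names are invented, not the format of any shipping tool.

    # prompt_blame.py: illustrative sketch only. The notes ref and the
    # JSON schema are invented; real tools (git-ai, blameprompt) define
    # their own formats.
    import json
    import subprocess

    # Hypothetical namespace for prompt receipts, kept separate from the
    # default refs/notes/commits so it never collides with ordinary notes.
    NOTES_REF = "refs/notes/prompts"

    def attach_prompt(commit: str, prompt: str, model: str) -> None:
        """Record the prompt that produced a commit as a git note."""
        payload = json.dumps({"model": model, "prompt": prompt})
        subprocess.run(
            ["git", "notes", f"--ref={NOTES_REF}", "add", "-f",
             "-m", payload, commit],
            check=True,
        )

    def show_prompt(commit: str):
        """Read the prompt note back; returns None if the commit has none."""
        result = subprocess.run(
            ["git", "notes", f"--ref={NOTES_REF}", "show", commit],
            capture_output=True, text=True,
        )
        return json.loads(result.stdout) if result.returncode == 0 else None

    if __name__ == "__main__":
        # Run inside a git repository. The prompt text and model name here
        # are placeholders.
        attach_prompt("HEAD", "Refactor the crash-log parser to ...", "some-model")
        print(show_prompt("HEAD"))

Because notes are ordinary refs, they can be pushed and fetched alongside the code (git push origin "refs/notes/*"), which is exactly the property prompt blame needs: the reasoning survives the session and travels with the repository, the way a commit message does.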

I've been through a version of this before. At Mesosphere, I managed a team split between offices in San Francisco and Hamburg, plus fully remote folks. Within each office, information flowed without anyone designing it — the water-cooler chats, the overheard conversations, the coffee huddles. But information barely traveled across the pond, and it rarely reached the remote folks at all.

Recognizing that dynamic meant reshaping how the team worked. We experimented with treating everyone as remote and established new norms — forcing exchanges that would have happened over a water cooler into Slack channels and PR comments. When someone had an offline discussion that produced a decision, they'd summarize it and put it where the rest of the team could see it. It took deliberate effort. But it worked — not by recreating what we'd lost, but by designing information flows that hadn't needed to be designed before.

That's roughly where we are with AI-assisted work. We need to look at what's no longer flowing — the context, the reasoning, the shared understanding — and deliberately design it back in. The tools, the processes, the norms.

A growing number of people are thinking about this. Cutler's single-player / multiplayer framing. The Russinovich and Hanselman preceptor model. Mitchell's prompt blame concept. The tools starting to emerge around AI attribution and session capture. These are all different angles on the same problem, and we're going to need solutions (plural) to actually benefit from AI-assisted work in the long term.

The thinking matters. The process matters. The messy, uncertain, human part of working with AI — all of that is not the embarrassing prelude to the real work.

It is the work.

References

  1. John Cutler has been writing a series on single-player vs. multiplayer AI in his newsletter The Beautiful Mess. His framing: "Single-player AI, through this lens, is accelerated isolation that feels like collaboration." The series is worth reading in full.

  2. JetBrains announced the sunsetting of Code With Me in March 2026: "Sunsetting Code With Me," JetBrains Platform Blog. The feature will be unbundled from IDEs in version 2026.1, with public relay infrastructure shutting down Q1 2027.

  3. Hendrik Erz, "Vibe Coding: The Final Form of Hyper-Individualism," November 2025. Erz conceptualizes vibe coding as hyper-individualism that disrupts the social norms governing open-source collaboration — what he calls "the tragedy of the lone producer." Erz is a sociologist and maintainer of Zettlr.

  4. Kate Holterhoff's "AI Slopageddon and the OSS Maintainers" at RedMonk covers the Ghostty, tldraw, and cURL policy shifts comprehensively.

  5. The tweet from Mitchell Hashimoto is a response to Thorsten (@thorstenball), a former colleague of mine who's currently working on Amp, which BTW has a beautiful Threads feature that allows one to share the human/AI back-and-forth... anyway, I'm getting off track — here's the original thread: https://x.com/mitchellh/status/2011958999034601839

  6. The landmark study here is Woolley et al., "Evidence for a Collective Intelligence Factor in the Performance of Human Groups," Science, 2010 — showing that a group's collective intelligence predicts performance better than individual members' abilities, and that social sensitivity is a key driver. For software teams specifically, see Diegmann & Rosenkranz's work on how team diversity and collective intelligence shape agile development efficiency.

  7. Described in Continue.dev's "We're Losing Open Contribution", January 2026, and corroborated in multiple Mitchell Hashimoto interviews.

  8. Mark Russinovich and Scott Hanselman, "Redefining the Software Engineering Profession for AI," Communications of the ACM, February 2026.

  9. Mitchell introduced the "prompt blame" idea in this tweet.

  10. Tools emerging in this space include git-ai, which tracks AI authorship via Git Notes, and blameprompt, which attaches prompt receipts to commits. Mitchell launched Vouch — a trust management system for open-source projects that addresses the contribution side of the problem. Full disclosure: I'm also working on a tool to address this problem — nothing ready yet, though.
