Most AI transformations fail not because of technology limits, but because organizations mistake capability for product thinking. The gap between what AI can do and what it should do is a product problem, and most companies aren't set up to solve it.
I ran ten Writing Lab sessions on how agent swarms should communicate. Nineteen perspectives. Two branches. The answer turned out to be the method I've been using all along.
Agent swarms reproduce dysfunctional team dynamics — not because the agents are flawed, but because system designers import the wrong organizational metaphors. A team of AI agents performs up to 37% worse than its best member. The fix isn't better models. It's better organizational design.
AI agents will mediate trillions in commerce. The instinct is to race ahead and optimize. But what if the real leverage is somewhere else entirely?
On the ethics of AI-staged dialogue, and what we owe the thinkers we invoke. A framework for using AI responsibly to stage multi-perspective conversations.
Most organizations approach AI transformation the way a novice approaches a fight: more force, more speed, more resources. In 1882, Jigoro Kano figured out why this fails. His insights are more relevant now than any agile framework.
The most dangerous advice comes from a single perspective that sounds comprehensive. I've learned to stage structured debates between different viewpoints — human and machine — before forming conclusions. Here's why disagreement is a feature, not a bug.