The most dangerous advice isn’t wrong advice. It’s advice that’s right from one angle and catastrophically blind to others.

I learned this the hard way, years ago, giving a client a recommendation that made perfect sense from a product perspective and completely ignored the political reality that would kill it. The technical analysis was sound. The organizational reading was absent. The advice was useless.

Since then, I’ve developed a practice that sounds strange when I describe it: before I advise on anything significant, I stage arguments. Structured debates between different perspectives, different frameworks, different voices. Some human. Some machine.

The goal isn’t consensus. It’s collision.

The Problem with Single Perspectives

Every framework sees the world through its lens. A product manager sees user problems and solutions. A finance leader sees cost structures and returns. An engineer sees systems and constraints. A change manager sees stakeholder dynamics and adoption curves.

Each perspective is true. Each is also incomplete. And the incompleteness is invisible from inside the framework.

This is why smart people give bad advice. They’re not wrong about what they see. They’re wrong about what they don’t see. And because their analysis is internally coherent, the gaps are hidden.

The same problem applies to AI systems. A language model trained on business strategy literature will give you strategy-shaped answers. One trained on technical documentation will give you engineering-shaped answers. Both will sound confident. Neither will flag what’s missing from its training data.

Structured Disagreement as Method

My approach inverts the usual pattern. Instead of seeking the best single perspective, I deliberately create conflict between multiple perspectives.

In practice, this means orchestrating debates. I might have a product strategist perspective argue with an organizational psychologist perspective. A technologist with a philosopher. A practitioner with a theorist. Sometimes these are human experts in a room. Sometimes they’re AI systems prompted to embody different viewpoints.

The magic happens in the collision. When two coherent frameworks disagree, the disagreement illuminates the assumptions each framework takes for granted. The product strategist’s blind spots become visible when the organizational psychologist objects. The technologist’s hidden assumptions surface when the philosopher pushes back.

This is different from “getting multiple opinions” or “consulting stakeholders.” Those processes often smooth over disagreement, seeking the comfortable middle. I’m doing the opposite: engineering productive collision, then watching carefully what emerges.

How It Works with AI

Language models make this practice scalable in ways that weren’t possible before. I can stage a debate between perspectives that would be difficult or impossible to convene in person. A discussion between a deceased philosopher and a contemporary strategist. A dialogue between a practitioner from one domain and a theorist from another. A multi-voice exploration where five different frameworks respond to the same problem.

The key is structure. Not “what do you think?” but “here is the question, here is your perspective, here is your opponent’s perspective, now engage.” The prompting matters enormously. A poorly structured debate produces heat without light. A well-structured one reveals things I couldn’t see alone.
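That structure can be sketched in code. A minimal sketch in Python, purely illustrative: `ask_model` is a hypothetical stand-in for whatever LLM API you use (here it just echoes, so the scaffolding stays runnable). The point is the prompt scaffold — question, your frame, the opposing frame, the transcript so far — not the client library.

```python
# Sketch of a structured two-perspective debate.
# `ask_model` is a placeholder, not a real API: swap in an actual
# model call to run a live debate.

def ask_model(prompt: str) -> str:
    # Stand-in for a real LLM call; echoes so the example runs offline.
    return f"[response to {len(prompt)}-char prompt]"

def debate_turn(question, perspective, opponent, transcript):
    """Build the structured prompt: the question, your perspective,
    your opponent's perspective, and the debate so far — then engage."""
    history = "\n".join(transcript) if transcript else "(opening statement)"
    prompt = (
        f"Question under debate: {question}\n"
        f"Your perspective: {perspective}\n"
        f"Your opponent's perspective: {opponent}\n"
        f"Debate so far:\n{history}\n"
        "Engage directly with your opponent's strongest point. "
        "Name the assumption their framework takes for granted."
    )
    return ask_model(prompt)

def run_debate(question, side_a, side_b, rounds=3):
    """Alternate turns between two perspectives, accumulating a transcript."""
    transcript = []
    for _ in range(rounds):
        for speaker, opponent in ((side_a, side_b), (side_b, side_a)):
            reply = debate_turn(question, speaker, opponent, transcript)
            transcript.append(f"{speaker}: {reply}")
    return transcript

log = run_debate(
    "Should we ship the redesign this quarter?",
    "product strategist",
    "organizational psychologist",
)
```

Note what the scaffold enforces: each voice must respond to the other’s position rather than monologue past it. That constraint is what separates a structured debate from two parallel opinions.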

I’ve had AI-staged debates surface assumptions I’d held for years without examining them. I’ve watched machine perspectives collide in ways that exposed gaps in my own thinking. The debates aren’t oracles delivering truth. They’re pressure tests revealing weakness.

The Same Skill, Different Contexts

This isn’t a trick I invented for working with AI. It’s a facilitation skill I’ve used for years in executive workshops.

When you put 40 managers in a room to discuss strategy, the default mode is convergence. People seek agreement. Conflict feels uncomfortable. The loudest voice or highest rank tends to win.

Effective facilitation inverts this. You design for productive disagreement. You create structures where different perspectives must collide. You protect space for the minority view that might be seeing something the majority misses.

The AI debates are the same skill applied differently. Instead of managing human dynamics in a conference room, I’m managing perspective dynamics in a conversation with machines. The goal is identical: surface what single viewpoints hide.

Why This Matters for Advisory Work

Most advisory work is reductive. A consultant arrives, applies their framework, delivers their conclusion. The power comes from the framework’s apparent comprehensiveness. The danger comes from the same source.

I’ve learned to distrust comprehensive-sounding answers, including my own. The question I now ask before any significant advice: What perspective is this missing? What would a smart person with a different framework object to?

The structured debates are how I answer that question rigorously. Not by trying to imagine objections — imagination is limited by my own blind spots — but by actually generating them from different viewpoints.

The advice that emerges isn’t the average of the perspectives. It’s shaped by having survived their collision. The recommendations are stronger because they’ve been stress-tested. The gaps are smaller because they’ve been illuminated.

The Deeper Point

There’s a reason I call this section of my practice “The Laboratory.”

A laboratory is a space for experiments. For testing hypotheses against reality. For being wrong in controlled ways that teach you something.

The debates I stage are experiments in thinking. Each one tests whether a conclusion can survive contact with a different perspective. Some survive. Some don’t. Both outcomes are valuable.

The alternative is what most of us do most of the time: think from one perspective, reach a conclusion, stop. It’s faster. It feels more decisive. And it’s how most bad advice gets generated.

Human judgment, structured well, is the most powerful technology available. The structure is what makes the difference. And sometimes the best structure is one that forces us to argue with ourselves before we advise anyone else.


This approach has a name: the Writing Lab. If you want to see it in action, explore The Laboratory — where I publish selected debates between AI perspectives on questions that matter to transformation and AI product work.

A good place to start: The Reckoning — six practitioners autopsy Agile honestly. There’s a facilitator. There are rules. And there’s the kind of collision that reveals what everyone’s too polite to say.