It started with a question about Agile.

I wanted to understand what had gone wrong — not the surface complaint that everyone makes, but the structural failure underneath. So I did what I do in executive workshops: I assembled a panel of thinkers whose perspectives would create productive tension. A complexity scientist’s framework against a leadership philosopher’s. A product strategist’s pragmatism against a strategic mapper’s long view. A relational dynamics expert’s lens cutting through all of it.

The difference is that none of them were in the room. The discussion was staged using AI — large language models prompted with specific thinkers’ published work, orchestrated by a facilitator (also AI-generated), and directed by me.
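
For the technically curious, here is a minimal sketch of what "staged" means in practice, stripped to its skeleton. The `Panelist` structure, the `complete()` stand-in, and the round-robin turn order are illustrative assumptions, not my actual orchestration setup.

```python
# A minimal sketch, assuming a generic chat-completion API.
# Names, prompts, and turn order are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class Panelist:
    label: str           # who speaks (at the generation layer, a real thinker's name)
    system_prompt: str   # grounding in that thinker's published work

def complete(system_prompt: str, context: str) -> str:
    """Stand-in for a call to whatever LLM provider you use."""
    raise NotImplementedError("wire up a model client here")

def stage_round(facilitator: Panelist, panel: list[Panelist],
                question: str, transcript: str = "") -> str:
    """One round: the facilitator frames the question, then each
    panelist responds with the running transcript as shared context."""
    transcript += f"{facilitator.label}: {complete(facilitator.system_prompt, question)}\n"
    for speaker in panel:
        transcript += f"{speaker.label}: {complete(speaker.system_prompt, transcript)}\n"
    return transcript
```

The point of the sketch is the shared transcript: each voice responds to everything said so far, which is what lets tensions between frameworks surface instead of producing parallel monologues.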

What emerged was something I hadn’t expected: not just useful insight about organizational transformation, but a genuinely multi-vocal exploration that preserved the tensions between worldviews rather than collapsing them into a single authoritative take. It read like a real conversation. In many ways, it functioned like one.

And that’s precisely what made me uncomfortable.


When the Agile discussion produced such rich results, I went further. I invited Stanisław Lem, Frank Herbert, Isaac Asimov, and Octavia Butler to reflect on what the first panel had said — to look at the death of a methodology as a symptom of how civilizations process ideas. Then I brought in Lem as Trurl, alongside Douglas Adams, Terry Pratchett, and Kurt Vonnegut, to write a short story about the whole thing.

Each layer deepened the thinking. Each layer also deepened the ethical question.

Because the thinkers whose published work I used to generate those first-layer perspectives didn’t agree to be in that room. Neither did Lem. I constructed representations of their thinking using AI and published the result. The fact that I did it with care and genuine admiration for their work doesn’t automatically make it right.


I’ve spent time sitting with this question rather than rushing past it, because I think the rush is where most people go wrong. The typical response to “Is it ethical to use AI this way?” falls into one of two reflexes: either unexamined enthusiasm (“AI can do amazing things!”) or reflexive prohibition (“You shouldn’t put words in people’s mouths”). Neither is adequate.

The more honest answer lives in the tension between three competing values.

The first is intellectual attribution. When I stage a discussion about complexity and the AI applies the Cynefin framework, I’m doing something that most AI-generated content doesn’t bother with: naming where the ideas come from. The alternative — absorbing a thinker’s decades of work into undifferentiated “AI knowledge” and presenting it without attribution — is arguably less respectful. At least the staged discussion tells the reader: this perspective exists because a specific person spent their career developing it. Go find their work.

The second is consent. None of the thinkers whose work informed these discussions consented to having their ideas interpreted by AI. I can argue that commentary, interpretation, and creative engagement with public intellectuals’ work have a long tradition — that what I’m doing is structurally similar to a philosopher writing a dialogue between Aristotle and Nietzsche. But the AI adds a dimension that traditional commentary doesn’t have: the verisimilitude of the result. When the AI sounds like a specific person, the reader’s brain processes it differently than a clearly attributed paraphrase would. The simulation of voice creates a simulation of presence, and presence implies participation.

The third is dignity. There’s a difference between engaging with someone’s ideas and puppeteering their persona. The distinction is sometimes obvious and sometimes vanishingly subtle. I believe the line runs through intention and attention: Am I invoking this person’s work because their framework is essential to the inquiry I’m conducting? Or am I borrowing their name to lend authority to my own conclusions? The first is scholarship. The second is appropriation.


These three tensions led me to a practice I didn’t start with but arrived at through deliberation: a two-layer system that separates the generation of ideas from the presentation of voices.

Here’s how it works.

When I create a writing lab discussion, I use real thinkers’ names and published work to prompt the AI. The names matter at this stage — they give the model specific, coherent intellectual frameworks to work from. An AI prompted with Dave Snowden’s body of work produces something meaningfully different from one prompted to be “a complexity scientist.” The specificity of the source produces the distinctiveness of the voice.

But when I publish, I don’t attach those names to the output. Instead, each thinker becomes a descriptive persona: The Complexity Scientist. The Leadership Philosopher. The Product Strategist. The perspectives remain vivid and distinct — they carry the intellectual shape of their source. But the reader encounters a role, not a performance of a specific person.

The intellectual lineage is preserved through attribution. Every discussion includes a section that credits the real thinkers whose work informed each persona, names their key works, and points readers toward the originals. The reader knows exactly where the ideas come from. They just don’t encounter a simulation of a specific person saying them.
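
In code terms, the publication layer is little more than a name map plus an attribution footer. A minimal sketch, assuming the staged transcript labels each turn with the generation-layer name; the mapping entries and the `publish` helper are hypothetical, and only the Snowden pairing is the one this article itself describes.

```python
# A minimal sketch of the publication layer. The PERSONAS mapping is
# illustrative; extend it with one entry per generation-layer thinker.
PERSONAS = {
    "Dave Snowden": "The Complexity Scientist",
}

def publish(transcript: str, personas: dict[str, str]) -> str:
    """Swap generation-layer names for descriptive personas, then
    append the attribution section that credits the real sources."""
    for real_name, persona in personas.items():
        transcript = transcript.replace(f"{real_name}:", f"{persona}:")
    attribution = "\n\nAttribution\n" + "\n".join(
        f"{persona} is informed by the published work of {real_name}."
        for real_name, persona in personas.items()
    )
    return transcript + attribution
```

The separation is the whole design: the specific name does its work at generation time, and only the role plus the credit reaches the reader.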

This approach resolves the consent problem substantially. I’m no longer performing someone’s voice without permission — I’m engaging with their published ideas and attributing that engagement. That’s within the normal bounds of intellectual discourse. It also resolves the dignity problem: thinkers are credited as sources of ideas, not performed as characters.

And it does something I didn’t anticipate. It creates the conditions for the real thinkers to engage. A discussion that simulates someone’s voice is an imposition — the real person can only object or tolerate it. A discussion that draws on someone’s ideas, credits them properly, and presents the result through an anonymous persona is an invitation. The real person can engage, respond, disagree, extend — without the awkwardness of reacting to a puppet version of themselves.


There’s one important exception to the persona system, and it illuminates the principle underneath.

The literary discussions — Lem as Trurl, Douglas Adams, Terry Pratchett, Kurt Vonnegut — keep their real names. Not because dead people can’t object, which would be a pragmatic standard rather than a principled one. They keep their names because they serve a fundamentally different function.

When a living professional thinker appears in a discussion, the reader trusts the content partly because of that person’s authority. That’s the mechanism that requires protection — the borrowing of credibility through simulation.

When Lem-as-Trurl says “I built a machine that could create anything beginning with the letter N,” the reader isn’t trusting a professional claim. They’re enjoying a literary allusion. The name evokes a fictional universe, not a professional reputation. Replace Lem with “The Cybernetician Satirist” and the allusion becomes a riddle — the reader who doesn’t know Lem can’t decode it, and the reader who does wonders why you’re being coy.

The distinction isn’t alive versus dead. It’s authority-borrowing versus creative-imaginative. When I invoke a thinker’s professional framework to lend weight to an argument about organizational transformation, that’s authority-borrowing — and it deserves the protection of persona treatment, whether the thinker is living or deceased. When I invoke a novelist’s literary universe to add creative richness to an exploration of civilizational patterns, that’s creative-imaginative — and the name is the reference.

This means a figure like Lem could fall on either side depending on the discussion. Lem-the-novelist, channeling the Cyberiad? His name stays — it’s a literary tribute. Lem-the-cybernetician, making theoretical claims about information degradation? Persona treatment — because that’s authority-borrowing.

The test is simple: Is this invocation asking the reader to enjoy a literary reference, or to trust a professional claim?


Here is what I’ve committed to — not as a one-time declaration, but as an evolving practice that I expect to revisit as AI capabilities change and cultural norms develop.

Every discussion is unmistakably identified as AI-staged. Not in a footnote. Not in small type. In the primary framing that every reader encounters before engaging with the content. You will always know — before you read a single exchange — that this is an AI-generated conversation directed by me, not a transcript of real people talking.

The two-layer system is disclosed. I explain that real thinkers’ work informs the generation of perspectives, that descriptive personas replace names in publication, and why. I don’t hide the fact that names were involved in the process — I explain the choice to remove them as a matter of principle.

The perspectives are interpretive, not authoritative. I work to represent each thinker’s published positions as faithfully as I can. The AI is directed, not unleashed; I guide it toward accuracy based on my own engagement with each person’s work. But any AI interpretation is necessarily incomplete and potentially inaccurate. What you’re reading is my best understanding of how these frameworks might engage with each other, filtered through a technology that generates plausible text based on patterns. It’s a starting point for your own thinking, not a substitute for the original sources.

Every discussion points you toward the real work. The attribution section names the thinkers, their key works, and their frameworks. The discussion should function as a gateway — something that makes you curious enough to go find the original writing. If the discussion replaces the source rather than directing you toward it, I’ve failed.

I’m accountable. My name is on this work. If a thinker whose work has informed a discussion believes their ideas have been misrepresented, I want to hear about it. I commit to responding with respect and speed — including modifying or removing content. The method only works if the relationship between the creator and the source thinkers is one of good faith, and good faith requires accountability.

The method creates invitation, not imposition. The persona system is designed so that thinkers whose work I draw on can engage with the discussion — respond, disagree, extend — without the awkwardness of reacting to a simulation of themselves. The goal is to create conditions for intellectual exchange, not barriers that exclude the very people whose ideas are being explored.


There’s a larger question underneath all of this, and I don’t want to pretend I’ve resolved it.

AI is going to make it trivially easy to generate synthetic dialogue using anyone’s name and likeness. Most of what gets produced will be careless, exploitative, or simply indifferent to the people it represents. The question isn’t whether that world is coming — it’s here. The question is what responsible practice looks like inside it.

I don’t think the answer is prohibition. A world where nobody can engage with public intellectuals’ ideas through interpretive, creative formats — including AI-assisted ones — is a world with less intellectual exchange, not more. But I also don’t think the answer is the current default, which is no standards, no disclosure, and no accountability.

What I’m trying to build at Innomada is a third path: AI-staged dialogue practiced as a form of intellectual respect. A two-layer system that uses real work to generate real insight, and publishes it in a way that protects real people. A framework that distinguishes between borrowing someone’s authority and engaging with their imagination. A practice that evolves as conditions change — because any ethical framework that claims to be finished has already failed.

The method is called a writing lab. It’s how I develop insight — the same way I facilitate transformation with executive teams, by orchestrating competing perspectives until something emerges that no single viewpoint could produce alone. The difference is that some of those perspectives are generated by AI, and I believe you deserve to know that.

Whether this constitutes a responsible use of other people’s intellectual legacies is not a question I can answer alone. It requires ongoing conversation — including, ideally, with the thinkers themselves. The persona system is designed to make that conversation possible. This article is my opening move.


Every staged discussion published on Innomada is AI-generated. Perspectives are informed by named thinkers’ published work, presented through descriptive personas, and credited in full. Literary discussions may retain authors’ real names when the function is creative-imaginative rather than authority-borrowing. No real person participated in, reviewed, or endorsed any conversation. Readers are encouraged to engage with each thinker’s original writing. Learn about the Writing Lab methodology.