I Hope You Like Cavendish
Biological monoculture and the infrastructure of thought
A Hundred Bananas You’ll Never Taste
There are over a hundred varieties of banana, but you probably only eat one: the Cavendish. The others have names like Ladyfinger, Red Dacca, Bluggoe, and Manzano, the apple banana, which tastes a little like strawberry. There are plantains that anchor entire cuisines in the Caribbean. There are also tiny wild bananas in Southeast Asia that are more seed than fruit.
For most of human history, the banana was not a single thing. It was a sprawling, genetically messy, regionally adapted family of fruits. Different climates grew different varieties. Different cultures cooked them differently. If you traveled, bananas surprised you.
But the logistics of global shipping demanded uniformity. You needed a banana that ripened predictably, survived weeks in a cargo hold, and looked the same everywhere. The industry found one type that fit the slot, built a supply chain around it, and quietly abandoned everything else. Not because the other varieties were worse. Because they didn't scale.
Today, the Cavendish accounts for roughly 99% of banana exports to the developed world. It is, by any objective measure, fine, but it is also a clone. Every Cavendish on earth is genetically identical, propagated from cuttings, incapable of sexual reproduction. Which is bad news, because a soil fungus (Panama disease, Tropical Race 4) is moving through the global Cavendish supply, and there is no replacement banana waiting in the wings.
The banana gives us one of the best metaphors for the dangers of monoculture, and even if you've heard the setup before, it's still a useful lens for talking about AI.
The Model We All Eat
I love Claude — sometimes I worry I love it too much. I use Anthropic’s products every day, and I think the work the company is doing on alignment is some of the most important research happening. This essay isn’t a critique of any particular model or company.
It’s a question about what happens when the ecosystem narrows.
The LLM market right now looks a lot like the banana export market circa 1960. There are technically many options — open source models, specialized fine-tunes, regional players, research experiments. But in practice, the overwhelming majority of usage flows through a tiny handful of products. One dominant player, a couple of strong runners-up, and a long tail that most people never touch.
This isn’t unusual for early-stage technology markets. But LLMs aren’t search engines or social networks. They’re something new: infrastructure for thought. They help people write, reason, code, decide, form opinions, and understand the world. The surface area of influence is unlike anything we’ve built before.
So for the sake of argument, imagine a near future where there are really only two frontier models that matter — maybe three if we’re generous. The majority of knowledge workers, students, creators, and developers funnel their cognitive work through one of these systems.
What does that world look like?
The Narrowing
Researchers have started calling it “generative monoculture” — a measurable narrowing of output diversity relative to the data these models were trained on. When you ask LLMs to write book reviews, they skew overwhelmingly positive, even for books that actual readers found divisive or mediocre. When you ask them to write code, solutions converge on the same patterns — far more similar to each other than solutions humans write for the same problems.
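The studies behind this measure convergence with diversity metrics computed over many sampled outputs. Here is a minimal sketch of the idea using only Python's standard library; the two toy corpora are invented for illustration, not real model or reader output, and real studies use stronger similarity measures than `difflib`:

```python
from difflib import SequenceMatcher
from itertools import combinations

def mean_pairwise_similarity(texts):
    """Average string similarity across all pairs of texts.

    Higher values mean the corpus is more homogeneous; a perfectly
    diverse corpus scores near 0, identical texts score 1.
    """
    pairs = list(combinations(texts, 2))
    return sum(SequenceMatcher(None, a, b).ratio() for a, b in pairs) / len(pairs)

# Invented toy corpora standing in for sampled book reviews.
model_reviews = [
    "A thoughtful, engaging read that rewards patient readers.",
    "A thoughtful and engaging read that rewards careful readers.",
    "An engaging, thoughtful read that rewards patient attention.",
]
human_reviews = [
    "Couldn't put it down, finished it in one sitting.",
    "Honestly a slog. The middle third drags badly.",
    "Fine, I guess? The hype oversold it for me.",
]

print(mean_pairwise_similarity(model_reviews))  # high: near-duplicate phrasing
print(mean_pairwise_similarity(human_reviews))  # low: divergent voices
```

A "generative monoculture" result is just this gap at scale: the model-generated corpus clusters tightly around one register while the human corpus sprawls.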
If most of the world’s knowledge workers are routing their thinking through two or three models that have all been trained to converge on similar patterns of “good” output, you get something like a Cavendish situation. Not a single point of failure exactly — but a narrowing of the cognitive supply chain.
Good Enough?
When everyone’s co-thinker shares the same training data, the same alignment tendencies, and the same optimization targets, the range of ideas in circulation contracts. Not because the models are wrong, but because they’re optimized for a particular band of “good enough” that gradually crowds out the weird, the contrarian, the genuinely novel.
Anyone who reads enough LLM output recognizes the voice. The careful hedging. The numbered lists. The “Great question!” opener. As more writing, code, and creative work passes through these systems, a kind of stylistic regression to the mean takes hold. Not because the models can’t be diverse, but because at scale, the median output wins.
However, in most ways, software is more malleable than biology. LLMs can be updated, retrained, and forked in ways that banana genomes can’t. The feedback loops are faster, and course corrections can happen at the speed of a model release rather than a growing season. The open source ecosystem, when taken as a whole, represents some diversity at the architectural level, even if consumer usage is concentrated.
But I think the deeper structural parallel holds: when you optimize a complex system for a single axis of performance, you become fragile along every other axis. The Cavendish was optimized for shipping durability and resistance to one specific disease. LLMs are optimized for helpfulness and safety as measured by current alignment techniques. Both optimizations are good and necessary. Both create blind spots that won’t be visible until the environment shifts.
What We Might Do
The landscape is too dynamic and unpredictable for me to offer a tidy answer to model monoculture, partly because I don't understand the whole stack well enough, and partly because nobody can see where it's headed.
I try to keep my judgments sharp and my palate flexible. I do love Claude, but at some point I might have to walk away from that relationship and never look back. Open source models, small fine-tuned models, local models, models trained on different data with different values — these are the heirloom varieties. They might not be as polished as the frontier products, but they carry the genetic diversity the ecosystem needs to stay resilient.
The moment most of the training data for the next generation comes from the current generation, we’re in a cycle that’s very hard to break. Maybe we need to actively preserve and curate human-generated work the way seed banks preserve crop diversity. Unpolished blog posts, half-finished novels, unedited rants — that stuff might matter more than we think. We can’t be afraid of the weird or the rough around the edges.
Because the edges are still where originality lies. The weird is not a bug. It’s the immune system.
Every contrarian take, every genuinely novel idea, every voice that doesn’t fit neatly into the median — these are the things that keep a culture’s thinking from collapsing into a single point. That’s exactly why it matters.
Joaquin Perez writes Loopcraft, a newsletter about individual power in the age of AI. If this resonated, subscribe at loopcraft.io.

