ESSAY · 3 min read

Taste.

LLMs flattened the cost of everything AI can do. What's left is deciding what to do. That judgment is what separates real leverage from slop.

The cost of doing a thing with AI collapsed. A year ago, pulling structure out of a Slack message took a custom parser and a miserable afternoon. Now it's a prompt. A year before that, writing a plausible 500-word briefing took a specialist or a contractor. Now it's a prompt. That collapse is the whole story of the last eighteen months, and it isn't reversing.

Everyone has the same toolkit now. That's the part people keep missing. The question used to be "can we do this with AI?", and the answer was usually no, or yes-but-not-well-enough, and whoever figured it out first won. The new question is "should we do this with AI, and if so, where, and how?" The answer is almost always "in a very specific way, for a very specific reason." Most teams skip that step.

What taste actually is here

Taste, in this context, isn't vibes. It's the judgment that tells you which loop the model should run, which loop the human should still own, and where the handoff belongs. It's knowing that the feature your team is excited about is a toy, and the feature that looks boring is the one that compounds. It's knowing that the draft your model just produced is 80% of the way to something real, and the last 20% is the entire reason anyone would read it.

It's also knowing when not to use the model at all. A regex is still faster than a prompt. A real database query is still more reliable. A human seller still books a better enterprise deal than any agent will for another decade. The tell of taste is how often someone picks the un-sexy answer.

The gap between "we shipped AI" and "we shipped useful AI" is the next moat. It's not technical. It's taste.

The slop trap

The org without it looks the same every time. Someone demos a slick prompt chain, leadership gets excited, a feature ships, and now every rep has to read six AI-generated paragraphs before every call. None of it is wrong. Almost none of it is useful. The volume goes up. The signal-to-noise collapses. The team's trust in every AI-touching surface drops a little with each new "smart" thing that isn't.

Slop isn't bad output. Slop is output nobody asked for, in a moment nobody needed it, that costs more attention than it returns. Once people start filtering, the features are dead even if the model is fine.

What actually works

The loops I've seen pay off the most have three things in common. They have a narrow, legible problem — not "help the rep" but "summarize the last three calls on this opportunity." They degrade gracefully, meaning the worst-case output is ignorable instead of harmful. And they build institutional memory as a byproduct, so every correction a human makes becomes a rule the system learns to apply next time.

Those three properties map back to one question: is there a human who would miss this if it disappeared? If yes, the loop is earning its keep. If not, the loop is generating slop and probably needs to be turned off.

The compounding skill

Taste doesn't scale by reading more AI think-pieces. It scales by shipping, watching, killing the things that didn't work, and doubling down on the things that did. The cost of running that loop is lower than it's ever been, which means the people who run it get disproportionately good at this over the course of a year.

The flip side is also true. The people who ship the first five AI features their team asks for, without asking whether any of them are worth shipping, quietly train the team to tune out every AI feature that follows. "We tried AI and it didn't work" usually means "we shipped slop, and our team learned to filter the next round on sight."

What to do on Monday

If you're picking a loop to close this quarter, ask: would someone miss this if it disappeared? Is there a human currently doing this by hand who hates it? Does the worst-case output fail quietly or fail loudly? Can a correction here become a rule everywhere?

If the answers read yes, yes, quietly, and yes, build it. If they read "I think so", "it's a nice-to-have", "I don't know", and "not really", don't. Ship what you're sure of. Skip the rest until you are.
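The Monday checklist above can be sketched as a simple gate. This is a toy illustration, not anything from the essay's tooling: the four questions are the author's, but the function and its parameter names are hypothetical.

```python
# Hypothetical sketch of the four-question checklist as a build/skip gate.
# Each parameter is one of the essay's Monday questions; anything short of
# an unambiguous yes ("I think so", "nice-to-have", "I don't know")
# should be passed in as False.

def should_build(missed_if_gone: bool,
                 hated_manual_work: bool,
                 fails_quietly: bool,
                 corrections_become_rules: bool) -> bool:
    """Return True only when every answer is an unambiguous yes."""
    return all([missed_if_gone, hated_manual_work,
                fails_quietly, corrections_become_rules])

# Four clear yeses: build it.
print(should_build(True, True, True, True))   # True
# "It's a nice-to-have" counts as False: skip it.
print(should_build(True, False, True, True))  # False
```

The point of the all-or-nothing `all()` is the essay's rule: ship what you're sure of, skip the rest until you are.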

The people with taste in AI right now are the ones willing to not ship most of it. That's the whole job.