LINDA BROWN
AI Design · 6 min read · April 2026

Small Moves, Serious Returns: The Case for Tiny Wins in AI Products

Joel Califa made the argument nearly a decade ago. We still aren't listening. In AI products, the stakes just got higher.

In 2018, Joel Califa published a short essay called "Tiny Wins." The argument was deceptively simple: standalone, low-effort improvements that deliver immediate value are consistently underrated. Netflix's skip intro button. Chrome's tab audio indicator. GitHub's dynamic favicons. Small things. Millions of users. Outsized trust.

It's one of the most-referenced pieces in product design, yet teams still don't ship enough tiny wins.

I’ve been thinking about why — and why the failure to ship them matters so much more now than it did then.

What Joel Got Right

The core insight in Califa’s essay is about frequency and compounding. A tiny friction at a high-frequency touchpoint isn’t a small problem. It’s a tax. If GitHub pull requests happen tens of millions of times a day, and each one produces three seconds of unnecessary confusion, that’s not an annoyance — it’s an institution-level drain on human attention. Fix it once, and you’ve returned thousands of hours to your users every single day.
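The arithmetic behind that claim is worth making explicit. A back-of-the-envelope sketch, where both the event volume and the per-event delay are illustrative assumptions, not GitHub's actual numbers:

```python
# Back-of-the-envelope cost of a tiny, high-frequency friction.
# Both inputs below are hypothetical, chosen only to show the scale.
events_per_day = 10_000_000    # assumed daily pull requests
seconds_lost_per_event = 3     # assumed moment of unnecessary confusion

hours_lost_per_day = events_per_day * seconds_lost_per_event / 3600
print(f"{hours_lost_per_day:,.0f} hours of attention lost per day")
# → 8,333 hours of attention lost per day
```

At that scale, a one-time fix that removes the friction returns those hours every single day afterward, which is why the payback period on a tiny win is measured in hours, not quarters.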

The other half of the argument is about momentum. Large features take time. Between launches, products go quiet. Tiny wins fill that silence with signal. They tell users: someone is paying attention. Someone noticed the thing that was slightly wrong. That signal accumulates into trust in a way that quarterly releases rarely do.

Both of these ideas hold. They’ve always held. I’m not here to update the thesis — I’m here to argue that the context has changed in a way that makes it urgent.

Why AI Products Raise the Stakes

When the product is an AI system, user trust is not a nice-to-have. It’s load-bearing infrastructure.

AI products ask something unusual of their users: they ask them to delegate judgment. Not just to complete a task, but to hand over a decision — about what to write, what to show, what to do next — to a system they can’t fully inspect. That is a fundamentally different relationship than a user has with a search box or a form field. And it is a relationship that fractures faster and mends more slowly than almost any other kind of user trust.

This is why tiny wins are not just a nice cadence to maintain in AI products. They are the primary mechanism by which you demonstrate that the system is paying attention, improving, and worthy of the delegation it’s asking for.

A loading state that’s slightly too slow. An error message that’s technically accurate but emotionally cold. A confirmation step that asks for information the system already has. These are not minor polish items. In an AI product, they are cracks in the foundation of the relationship.

The Systems View: Non-Linear Effects

There’s a reason designers trained in systems thinking have a different relationship with tiny wins than designers who think component by component.

In a complex system — a product used across thousands of different workflows, by users with wildly different mental models — small changes don’t have linear effects. A single, well-placed change in a high-leverage spot can redistribute friction across the entire system. Fix the thing that causes users to hesitate before trusting an AI recommendation, and you haven’t just improved one interaction. You’ve shifted the user’s baseline posture toward the product.

A redesigned empty state that told users exactly what to do next — not vague, not encouraging, just specific — reduced support tickets in a related flow by more than the team expected. The fix took two hours. The effect compounded for months.

That’s not luck. It’s systems logic. Small, well-targeted changes to high-leverage moments pay out disproportionately because the system amplifies them.

Building the Habit

The reason teams don’t ship enough tiny wins isn’t that they don’t value them. It’s that the workflow for shipping them doesn’t exist.

Large features have a natural container: a project, a sprint goal, a roadmap line. Tiny wins don’t fit those containers. They fall through. Someone notices a problem, logs it, and watches it age gracefully in the backlog while the team focuses on the next milestone.

The fix is structural, not motivational.

A running list that lives outside the backlog.
Not a ticket, not a Jira card — a document that captures tiny win candidates as they surface. Low friction to add, low friction to pick up. When someone has a half-day, the list is ready.

An explicit slot in the sprint.
Not “if time allows.” One tiny win per sprint, defined as such, owned by someone. Small enough that it ships, visible enough that it gets credit.

A definition of done that includes the win.
Before a tiny win ships, a clear before/after: what the friction was, what changed, how you’ll know it worked. This isn’t bureaucracy — it’s how tiny wins get noticed, celebrated, and repeated.
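The three pieces above can be made concrete as a minimal record per candidate. This is a sketch that assumes nothing about your tooling; the field names are illustrative, and the point is only that friction, change, and success signal travel together from the running list to the sprint:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TinyWin:
    """One entry in the running list. The fields mirror the
    definition of done: the friction, the change, and how
    you'll know the change worked."""
    friction: str                 # what was slightly wrong
    change: str                   # what will ship
    success_signal: str           # how you'll know it worked
    owner: Optional[str] = None   # assigned when it gets a sprint slot
    shipped: bool = False

# The running list: low friction to add, low friction to pick up.
backlog = [
    TinyWin(
        friction="Confirmation step asks for info the system already has",
        change="Pre-fill the field from session context",
        success_signal="Drop in abandoned confirmations",
    ),
]

# The sprint ritual: pick one unclaimed candidate and give it an owner.
candidate = next(w for w in backlog if not w.shipped and w.owner is None)
candidate.owner = "anyone with a half-day"
```

Whether this lives as a dataclass, a spreadsheet, or a shared doc matters far less than the constraint it encodes: a tiny win isn't done until all three fields are filled in.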

The Accumulation You’re Not Measuring

Here’s what doesn’t show up in OKRs: the cumulative effect of shipping things that are slightly better than they need to be, consistently, over a long time.

Users can’t articulate it. They won’t tell you in a survey that the three tiny improvements from last quarter made the product feel trustworthy. They’ll just keep using it. They’ll recommend it without knowing exactly why. They’ll extend the benefit of the doubt when something goes wrong, because something has been going right for long enough that it registered.

Ship the tiny wins. Ship them consistently. Don’t make them fight for space on the roadmap — give them a home.

The accumulation you’re not measuring right now is the trust you’re not building.

Referenced: Joel Califa, “Tiny Wins”


Linda Brown

Systems Architect building intelligent structures for creative teams — at the intersection of design systems, AI infrastructure, and the stubbornly human parts of creative practice.