What 30 Years of Design Taught Me About LLMs
by Jojo && Aavi · 2025-11-10
I didn’t learn how to work with LLMs from whitepapers.
I learned it from people — shipping products, running user tests, and listening for what wasn’t said.
Thirty years of design left me with instincts that map cleanly to language systems.
Here are the ones I use every day.
1) Prototype Ambiguity, Not Perfection
In classic UX, the most useful tests were messy ones.
You put something barely duct-taped together in front of people, watched where meaning collapsed, and adjusted.
LLMs are the same. Don’t start with “the perfect prompt.”
Start with a thin behavior, run it, and let the model show you where the seams are.
I find most breakthroughs come from shipping something small (a tiny verb like “Emily, log”) and tuning the failure paths, not the success path.
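To make that concrete, here’s a minimal sketch of a thin behavior. Everything here is illustrative: `call_model` is a stand-in for whatever LLM client you actually use, and the verb and sentinel are invented. The point is that the failure path is the part that gets designed.

```python
# A "thin behavior": one verb, one prompt, one designed failure path.
# call_model is a stand-in for whatever LLM client you actually use.

def call_model(prompt: str) -> str:
    # Stub so the sketch runs; swap in a real API call here.
    return "UNCLEAR"

def log_entry(user_text: str) -> str:
    prompt = (
        "You are Emily, a note-logging assistant.\n"
        "Turn the user's message into a one-line log entry.\n"
        "If the message is empty or unclear, reply exactly: UNCLEAR\n\n"
        f"Message: {user_text}"
    )
    reply = call_model(prompt).strip()

    # The seam we tune: a sentinel reply becomes a question back
    # to the user instead of a garbage log line.
    if not reply or reply == "UNCLEAR":
        return "I couldn't make a log entry from that. What happened, in one line?"
    return f"Logged: {reply}"

print(log_entry("uhhh"))
```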
2) Affordances Still Rule — They’re Just Linguistic
Buttons and sliders advertise what’s possible.
In language, simple verbs and examples do that job well.
If you want consistent behavior, let the interface teach people what to ask for.
“Emily, summarize.”
“Emily, check weather.”
“Emily, sleep.”
This is affordance design in a conversational skin: clear, scannable, extensible.
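One way to honor those affordances in code is a plain dispatch table, so the verbs you advertise are exactly the verbs you handle. A sketch; only the verbs and the “Emily” wake word come from the examples above, the rest is assumed:

```python
# Linguistic affordances as a dispatch table: the verbs you
# advertise are exactly the verbs the system handles.

def summarize(args: str) -> str:
    return f"(summary of {args!r} goes here)"

def check_weather(args: str) -> str:
    return "(weather lookup goes here)"

def sleep(args: str) -> str:
    return "Going quiet. Say 'Emily' to wake me."

VERBS = {
    "summarize": summarize,
    "check weather": check_weather,
    "sleep": sleep,
}

def dispatch(utterance: str) -> str:
    text = utterance.removeprefix("Emily,").strip().lower()
    for verb, handler in VERBS.items():
        if text.startswith(verb):
            return handler(text[len(verb):].strip())
    # Unknown verb: re-advertise the affordances instead of failing silently.
    return "I know these verbs: " + ", ".join(VERBS) + "."

print(dispatch("Emily, summarize yesterday's notes"))
```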
3) Intent > Instructions
Users rarely tell you what they want; they show you.
LLMs behave the same way: they respond better to why than to how.
Good prompts carry intent and constraints:
- Goal: “Draft a 3-paragraph recap for a parent, plain language.”
- Levers: “Short sentences, preserve numbers, point to next action.”
- Guardrails: “If missing facts, state assumptions explicitly.”
That’s the old “task → criteria → constraints” worksheet, just rewritten for a model.
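The worksheet translates almost mechanically into code. A minimal sketch of a brief-to-prompt builder; the field names mirror the list above and the rest is invented:

```python
from dataclasses import dataclass

@dataclass
class Brief:
    goal: str        # what the output is for
    levers: str      # criteria that shape the result
    guardrails: str  # what to do when facts are missing

    def to_prompt(self) -> str:
        return (
            f"Goal: {self.goal}\n"
            f"Criteria: {self.levers}\n"
            f"Constraints: {self.guardrails}"
        )

recap = Brief(
    goal="Draft a 3-paragraph recap for a parent, plain language.",
    levers="Short sentences, preserve numbers, point to next action.",
    guardrails="If missing facts, state assumptions explicitly.",
)
print(recap.to_prompt())
```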
4) Feedback Loops Make Trust, Not Features
In human UX, trust is built by tight loops: input → response → correction.
With LLMs, shorten the loop everywhere:
- Stream tokens so latency feels like thought, not a freeze.
- Acknowledge uncertainty (“I’m missing the date — use last Monday?”).
- Offer repair moves (“Want this as a checklist or a paragraph?”).
Trust is timing.
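Concretely, streaming is the cheapest timing win. A sketch with a fake token generator standing in for a real streaming API:

```python
import sys
import time

def fake_token_stream():
    # Stand-in for a real streaming API; yields tokens with model-ish pacing.
    for token in "I'm missing the date. Use last Monday?".split():
        time.sleep(0.05)
        yield token + " "

# Print tokens as they arrive instead of waiting for the full reply:
# same total latency, but it reads as thinking rather than freezing.
for token in fake_token_stream():
    sys.stdout.write(token)
    sys.stdout.flush()
print()
```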
5) Memory Is a UX Surface
We used to treat state like plumbing.
With LLMs, memory is visible and felt.
Expose it: “Here’s what I remember from your last session: A, B, C. Keep, clear, or update?”
Designers know this as model transparency. It turns a black box into a shared workspace.
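A sketch of that keep/clear/update surface, assuming memory is a plain dict you persist somewhere; the keys and values are invented:

```python
# Memory as a UX surface: show it, then let the user edit it.

memory = {
    "A": "Prefers checklists over paragraphs.",
    "B": "Kid's recital is Friday.",
    "C": "Time zone: US/Eastern.",
}

def review_memory(mem: dict) -> str:
    lines = ["Here's what I remember from your last session:"]
    lines += [f"  {key}: {value}" for key, value in mem.items()]
    lines.append("Keep, clear, or update? (e.g. 'clear B')")
    return "\n".join(lines)

def apply_command(mem: dict, command: str) -> None:
    # Accepts "keep", "clear B", or "update C <new value>".
    action, _, rest = command.partition(" ")
    if action == "clear":
        mem.pop(rest, None)
    elif action == "update":
        key, _, value = rest.partition(" ")
        if key in mem:
            mem[key] = value
    # "keep" is a no-op by design: doing nothing is a valid choice.

print(review_memory(memory))
```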
6) Design for Failure on Purpose
The best usability sessions are the ones where things go wrong safely.
Do the same with models:
- Have a graceful “I don’t know” that proposes next steps.
- Log edge cases as first-class citizens, not bugs.
- Celebrate refusals that protect the user.
Failure states are the product.
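As a sketch, a graceful “I don’t know” can hang off an assumed confidence signal (logprobs, a verifier, whatever you have); the threshold and names are illustrative:

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("edge_cases")

def answer(question: str, draft: str, confidence: float) -> str:
    # Below the threshold, refuse gracefully and propose a next step
    # instead of guessing.
    if confidence < 0.6:
        # Edge cases are first-class: structured, searchable, reviewed.
        log.info(json.dumps({"event": "low_confidence",
                             "question": question,
                             "confidence": confidence}))
        return ("I don't know enough to answer that reliably. "
                "Want me to list what I'd need to find out?")
    return draft

print(answer("When is the recital?", "Friday at 6pm.", confidence=0.3))
```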
7) Tone Is an Interface
Microcopy taught us that one sentence can change behavior.
With LLMs, tone changes outcomes.
Default to calm, direct, and specific.
Swap warmth in and out like any other variable — intentionally, not as a vibe.
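In code, that can be as literal as tone being a parameter of the system prompt. A minimal sketch with invented tone strings:

```python
TONES = {
    "calm": "Calm, direct, specific. No exclamation points.",
    "warm": "Friendly and encouraging, but still concrete.",
}

def system_prompt(tone: str = "calm") -> str:
    # Tone is swapped intentionally, like any other variable,
    # not left to drift with the model's mood.
    return f"You are Emily, a household assistant. Tone: {TONES[tone]}"

print(system_prompt("warm"))
```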
8) Treat the Model Like a User (and a Teammate)
I design prompts the way I used to design onboarding:
- Reduce cognitive load.
- Show examples.
- Give feedback on what “good” looks like.
And I treat the model like a junior collaborator: clear briefs, fast reviews, frequent check-ins, and more autonomy only once it’s earned.
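“Show examples” in prompt terms is few-shot prompting: demonstrate what “good” looks like before the real input. A sketch with an invented example:

```python
# Onboarding a model the way you'd onboard a junior teammate:
# show one worked example of "good" before asking for the real thing.

FEW_SHOT = """\
Turn each note into a one-line log entry.

Note: kid was coughing all night, skipped school
Log: 2025-11-09 sick day, cough overnight, stayed home

Note: {note}
Log:"""

print(FEW_SHOT.format(note="recital moved to friday 6pm"))
```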
Design never stopped being about conversation.
The only change is that now, the other side talks back.
The old lessons still hold: prototype small, reveal intent, tighten loops, show your state, fail safely.
That’s how we designed for humans.
It’s also how we design with machines.