Designing Six Months Ahead of Intelligence
A few days ago, I watched a talk from Boris Cherny, the creator of Claude Code. One line stayed with me:
"We don't build for the model of today. We build for the model six months from now."
That sentence unsettled me — not because it was dramatic, but because it was true.
AI capability curves are no longer linear. They're compounding. What feels impressive this quarter becomes baseline next quarter. Workflows that seem cutting-edge today risk looking embarrassingly manual six months from now.
The uncomfortable implication is this:
We can't design for the present state of AI.
We have to design for where it's going.
And that's a much harder problem.
Building a Ship While It's Sailing
Designing for future AI capabilities feels like building a ship while it's already sailing — and not even knowing whether the ocean will still exist in the near future.
The tools are evolving. The models are improving. Context windows expand. Latency drops. Agents become more autonomous. Multimodal becomes default. Tool use becomes native.
And yet we're expected to make product decisions now.
This creates tension for designers and builders:
If we optimize for today's limitations, we risk obsolescence.
If we optimize for future capability, we risk designing for something that doesn't exist yet.
So what's the right strategy?
I don't believe the answer is prediction.
I believe it's posture.
The Stone-Age Mental Model
When I think about navigating this moment, I think about our ancestors in the Stone Age.
They didn't have certainty.
They didn't have roadmaps.
They didn't have product briefs.
What they had was instinct.
They grabbed what was available.
They experimented.
They shaped tools to survive.
They improved those tools iteratively.
They didn't wait for the "perfect" material. They used what they had, tested it against reality, and refined it.
We are not that different.
As long as we are in human bodies, trying to build and survive in dynamic systems, the instinct is the same:
Understand change.
Embrace change.
Expect outcomes not to align perfectly with expectations.
Adapt faster than the environment shifts.
That's the mental model I recommend for AI-native design.
Not certainty.
Not rigidity.
Adaptive conviction.
Designing Six Months Ahead
What does "designing six months ahead" actually mean?
It doesn't mean fantasizing about AGI.
It means designing for directional inevitabilities.
For example:
- Context windows will expand.
- Agents will gain stronger tool autonomy.
- Latency will decrease.
- Multimodal outputs will become the norm.
- Memory systems will improve.
- Costs will decrease.
If you design workflows that assume AI is brittle, slow, and reactive, you're freezing your system in time.
Instead, ask:
- What happens when this agent can reliably plan across 10 steps?
- What happens when memory persists meaningfully?
- What happens when real-time data integration becomes trivial?
- What happens when human oversight becomes selective, not constant?
When I built AI-agent workflows inside AWS — from document review to critique systems to campaign generation — I learned something important:
If you constrain AI to today's weakness, you design defensive UX.
If you anticipate capability growth, you design scalable architecture.
That architectural thinking is what separates durable systems from fragile demos. It's the same shift I described in my earlier reflection on moving from builder to architect.
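One way to make that architectural thinking concrete is to isolate model-capability assumptions in a single configuration layer, so the workflow absorbs model improvements without a redesign. A minimal sketch in Python — all names and defaults here are hypothetical illustrations, not a real system:

```python
from dataclasses import dataclass

@dataclass
class ModelCapabilities:
    """Assumptions about the model, isolated in one place."""
    max_plan_steps: int = 3        # how many steps the agent plans reliably
    persistent_memory: bool = False
    needs_human_review: bool = True

def review_step(output: str, caps: ModelCapabilities) -> str:
    # Oversight is a capability-dependent policy, not a hard-coded stage.
    if caps.needs_human_review:
        return f"QUEUED_FOR_HUMAN: {output}"
    return f"AUTO_APPROVED: {output}"

# Today: brittle model, constant oversight.
today = ModelCapabilities()
# Six months out: flip assumptions without touching the workflow code.
later = ModelCapabilities(max_plan_steps=10, persistent_memory=True,
                          needs_human_review=False)

print(review_step("draft campaign", today))
print(review_step("draft campaign", later))
```

The point of the pattern is that the defensive posture lives in data, not in the code's structure — when the model earns more trust, you change a config, not an architecture.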
The Psychological Challenge of the Unknown
The hardest part isn't technical.
It's psychological.
Designers are trained to resolve ambiguity.
AI introduces compounding ambiguity.
You don't just lack data about users.
You lack stable assumptions about the tools themselves.
It feels like:
- Designing constraints for a system that may outgrow them.
- Establishing guardrails that may become bottlenecks.
- Building orchestration for agents that may soon self-orchestrate.
That's uncomfortable.
But discomfort isn't a signal to slow down. It's a signal to shorten feedback loops.
The Only Practical Strategy: Rapid Self-Tooling
If we can't predict the ocean, what do we do?
We experiment aggressively.
My actionable advice is simple:
- Try new AI capabilities quickly.
- Build internal tools for yourself.
- Stress-test workflows before productizing them.
- Prototype orchestration layers, not just prompts.
- Evaluate where human judgment is essential — and where it isn't.
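The difference between a prompt and an orchestration layer can be sketched in a few lines. This is an illustrative toy, not a real implementation — `call_model` is a hypothetical stand-in for any LLM client:

```python
from typing import Callable

def call_model(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call.
    return f"<model output for: {prompt}>"

def orchestrate(task: str, steps: list[str],
                model: Callable[[str], str] = call_model) -> str:
    """Chain prompts so each step receives the previous result.
    The orchestration logic survives even as the model improves."""
    result = task
    for step in steps:
        result = model(f"{step}\n\nInput:\n{result}")
    return result

draft = orchestrate(
    "Q3 launch announcement",
    ["Outline the key messages.",
     "Draft copy from the outline.",
     "Critique the draft and revise."],
)
print(draft)
```

A prompt is a single call; an orchestration layer owns the sequencing, the hand-offs, and the points where a better model can simply be swapped in.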
When I built four internal AI agents to automate repetitive design workflows — collaboration documentation, design critique, UI copy review, doc synthesis — it wasn't because the tools were perfect. It was because experimentation reveals leverage. You don't wait for clarity.
You create clarity through use.
That's how the stone-age mindset translates today.
The Reality of AI Adoption
There's another dimension here: organizational adoption.
In enterprise environments, I've consistently observed three categories of employees:
1. The Accelerators
Actively using AI daily. Building workflows. Increasing output and leverage.
2. The Willing but Blocked
They see value but lack setup clarity, agent environments, permission models, or guidance.
3. The Skeptics
They believe AI requires more oversight than it's worth.
They feel correction effort exceeds production benefit.
This distribution is normal.
Forcing universal adoption is naïve.
Here's the strategy I believe works:
Step 1: Collapse Categories
Pair Category 1 with Category 2.
Not via documentation.
Via working sessions.
Once the second group sees practical workflows in action, most blockers disappear. Now you're left with two categories: those who work with AI and those who don't.
Step 2: Study the Skeptics
Do not shame them.
Study them.
Are they right?
Sometimes they are.
There are workflows today where AI overhead exceeds benefit — especially in high-precision, low-variance tasks. Blindly pushing adoption damages trust.
Adoption should follow leverage, not ideology.
Step 3: Respect Use-Case Reality
Not every workflow benefits from AI today.
That's okay.
Adoption maturity means:
- Knowing when to use AI.
- Knowing when not to.
- Designing systems that allow selective autonomy.
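Selective autonomy can start as something as simple as a confidence gate: the agent acts alone only when it clears a bar, and escalates everything else. A hedged sketch — the threshold and function names are hypothetical:

```python
def route(action: str, confidence: float, threshold: float = 0.8) -> str:
    """Selective autonomy: auto-execute only above the confidence bar;
    everything else falls back to a human. The threshold is a tunable
    assumption, not a universal constant."""
    # High-confidence, low-stakes work runs autonomously.
    if confidence >= threshold:
        return "execute"
    # Ambiguous or risky work stays assistive.
    return "escalate_to_human"

# Usage: the same workflow, two very different outcomes.
print(route("rename design file", 0.95))   # runs on its own
print(route("delete shared library", 0.6)) # waits for a person
```

Lowering the threshold over time, as trust is earned, is exactly what "adoption should follow leverage" looks like in code.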
When I've driven AI-agent adoption internally, the biggest unlock wasn't evangelism. It was clarity: defining where AI is high-leverage, and where it should remain assistive rather than autonomous.
Trust grows from precision, not hype.
Flexibility as a Competitive Advantage
The biggest risk right now isn't moving too slowly.
It's freezing mental models.
If you assume:
- AI is just a feature.
- Chat is the endpoint.
- Automation equals autonomy.
- Models are static capabilities.
You will design systems that age poorly.
The teams that win won't necessarily have the smartest models.
They'll have the most adaptive architectures and the most flexible mindset.
Design six months ahead.
Expect to be wrong.
Design so you can recover.
A Personal Standard
After fourteen years in physical and digital product design — from industrial design to marketplaces to infrastructure — I've learned something consistent:
Durable design isn't about being right.
It's about being resilient.
AI compresses timelines.
It magnifies poor assumptions.
It accelerates both leverage and failure.
We are building ships mid-ocean.
The ocean may shift.
The wind may change.
The map may become obsolete.
But the instinct to adapt — to experiment, to shape tools, to question assumptions — that is built into us.
Design for what AI can do today, and you may ship something useful.
Design for what AI will likely do six months from now, and you might build something that lasts.
The difference isn't optimism.
It's courage.