Designing Six Months Ahead of Intelligence


A few weeks ago, I listened to a talk by Boris Cherny, the creator of Claude Code. One line stayed with me: "If you're designing for what AI can do today, you're already behind."

That sentence unsettled me—not because it was dramatic, but because it was true.

AI capability curves are no longer linear. They are compounding. What feels impressive this quarter becomes baseline next quarter. Workflows that seem cutting-edge today risk looking embarrassingly manual six months from now.

The uncomfortable implication is this: we cannot design for the present state of AI. We have to design for where it's going. And that is a much harder problem.

/ Building a Ship While It's Sailing

Designing for future AI capabilities feels like building a ship while it's already sailing, without even knowing whether the ocean will still exist in the near future.

The tools are evolving. The models are improving. Context windows expand, latency drops, agents become more autonomous, multimodal capabilities become standard, and tool use becomes increasingly native. Yet we are expected to make product decisions now.

This creates a real tension for designers and builders. If we optimize for today's limitations, we risk obsolescence. But if we optimize for future capability, we risk designing for something that does not yet exist.

So what is the right strategy?

I do not believe the answer is prediction. I believe it is posture.

/ The Stone-Age Mental Model

When I think about navigating this moment, I often return to our ancestors in the Stone Age.

They did not have certainty. They did not have roadmaps or product briefs. What they had was instinct.

They grabbed whatever materials were available. They experimented. They shaped tools to improve their ability to survive. And then they improved those tools over time.

They did not wait for the perfect material. They used what they had, tested it against reality, and refined it through iteration.

We are not that different.

As long as we are humans trying to build and survive within complex systems, the instinct is the same: understand change, embrace it, and accept that outcomes will not always align with expectations.

That is the mental model I recommend for AI-native design. Not certainty. Not rigid planning. But adaptive conviction.

/ Designing Six Months Ahead

So what does designing six months ahead actually mean?

It does not mean fantasizing about artificial general intelligence. It means designing for directional inevitabilities.

Context windows will expand. Agents will gain stronger autonomy. Latency will continue to decrease. Multimodal outputs will become standard. Memory systems will improve. Costs will fall.

If we design workflows that assume AI will always be brittle, slow, and reactive, we are effectively freezing our systems in time.

Instead, we should ask questions like: what happens when an agent can reliably plan across ten steps? What happens when memory becomes persistent and meaningful? What happens when real-time data integration becomes trivial? What happens when human oversight becomes selective rather than constant?

When I built AI-agent workflows inside AWS—from document review automation to critique systems and campaign generation—I learned that constraining AI to today's weaknesses leads to defensive UX. Anticipating capability growth, on the other hand, leads to scalable architecture.
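
To make that distinction concrete, here is a minimal sketch of what capability-anticipating architecture can look like. Everything in it is hypothetical: the `ModelCapabilities` profile and the agent calls are stand-ins for whatever a real system would use, not anything I shipped. The point is that the workflow is parameterized by what the model can currently do, so when capabilities grow, only the profile changes.

```python
from dataclasses import dataclass

@dataclass
class ModelCapabilities:
    """Hypothetical capability profile, refreshed as models improve."""
    reliable_plan_depth: int        # steps an agent can plan without drifting
    needs_constant_oversight: bool  # does every step require human review?

def review_documents(documents: list[str], caps: ModelCapabilities) -> None:
    # Batch size scales with how far the agent can reliably plan,
    # instead of being hardcoded to today's limits.
    batch = max(1, caps.reliable_plan_depth)
    for i in range(0, len(documents), batch):
        chunk = documents[i:i + batch]
        run_agent_review(chunk)  # placeholder for the real agent call
        # Oversight is selective, not permanent: this checkpoint
        # disappears on its own once the profile says it can.
        if caps.needs_constant_oversight:
            request_human_checkpoint(chunk)

def run_agent_review(chunk: list[str]) -> None:
    print(f"agent reviews {len(chunk)} documents")

def request_human_checkpoint(chunk: list[str]) -> None:
    print(f"human checkpoint on {len(chunk)} documents")

# Today: shallow planning, constant oversight.
review_documents(["a", "b", "c"], ModelCapabilities(2, True))
# Six months from now: update the profile, not the architecture.
review_documents(["a", "b", "c"], ModelCapabilities(10, False))
```

The defensive alternative hardcodes the batch size and the review step. That version works today and quietly becomes the bottleneck tomorrow.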

That architectural thinking is what separates durable systems from fragile demos. It reflects the same shift from builder to architect that I described in an earlier reflection on designing AI systems that hold up over time.

/ The Psychological Challenge of the Unknown

The hardest part of this shift is not technical. It is psychological.

Designers are trained to resolve ambiguity. AI introduces compounding ambiguity.

You are not just designing for uncertain user behavior. You are designing for tools whose capabilities are also evolving. It can feel like establishing constraints for a system that may soon outgrow them, creating guardrails that might become bottlenecks, or orchestrating agents that may eventually orchestrate themselves.

That uncertainty is uncomfortable. But discomfort should not slow us down. It should encourage us to shorten our feedback loops.

/ Rapid Self-Tooling

If we cannot predict the ocean, the only viable strategy is experimentation.

My practical advice is simple: try new AI capabilities quickly, experiment with the technologies around you, build tools for yourself, and test them until they become genuinely useful.

This approach creates learning through action. Instead of waiting for perfect clarity, you generate signals through building.

When I built six internal AI agents to automate repetitive workflows—such as documentation review, research synthesis, and design critique—the goal was not perfection. It was discovery. Experimentation revealed where the real leverage existed and where human judgment still mattered. Clarity rarely arrives before experimentation. More often, experimentation creates clarity.

/ The Reality of AI Adoption

There is another dimension to this shift: adoption within organizations.

In most enterprise environments, I consistently see three categories of employees when it comes to AI.

The first category actively uses AI tools to enhance productivity and build new workflows. The second category is open to using AI but faces blockers—such as unclear setup processes, lack of proper environments for agents, or uncertainty about how to integrate the tools into existing workflows. The third category remains skeptical. They often believe AI requires more oversight than it is worth, or that correcting AI-generated outputs takes more effort than completing the work themselves.

This distribution is normal.

The mistake many organizations make is trying to push universal adoption immediately.

A more practical strategy is to start by pairing the first two categories. When experienced users work directly with those who are curious but blocked, the barriers tend to disappear quickly. As those users gain confidence, the organization gradually shifts from three categories to two.

The next step is to understand the skeptics—not dismiss them. Sometimes their concerns are valid. There are still workflows where AI introduces more friction than value. Forcing adoption in those cases damages trust.

Adoption should follow genuine leverage, not ideology.

/ Respecting Workflow Reality

Another important point is that not every workflow benefits from AI today.

That is perfectly acceptable.

Mature adoption means knowing when to use AI and when not to. It means designing systems that allow selective autonomy instead of forcing automation everywhere.

When I worked on introducing AI agents into internal workflows, the most important breakthrough was not evangelism. It was clarity—clearly defining where AI delivered meaningful leverage and where human expertise remained essential. Trust grows from precision, not hype.
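
One way to express that precision is as an explicit routing rule. This is only an illustrative sketch; the scores and thresholds are hypothetical stand-ins for whatever evidence a team actually gathers about each workflow:

```python
def route_task(workflow: str, ai_leverage: float, error_cost: float) -> str:
    """Selective autonomy: delegate only where AI has demonstrated leverage.

    ai_leverage and error_cost are hypothetical 0-1 scores a team might
    maintain per workflow from its own experiments, not industry metrics.
    """
    if ai_leverage >= 0.7 and error_cost <= 0.3:
        return f"{workflow}: agent"                 # proven leverage, cheap mistakes
    if ai_leverage >= 0.4:
        return f"{workflow}: agent + human review"  # AI drafts, a person approves
    return f"{workflow}: human"                     # forcing AI here erodes trust

print(route_task("research synthesis", ai_leverage=0.8, error_cost=0.2))
print(route_task("design critique", ai_leverage=0.5, error_cost=0.5))
print(route_task("final legal sign-off", ai_leverage=0.3, error_cost=0.9))
```

The rule itself matters less than the habit it encodes: every workflow earns its level of autonomy from observed results, and the routing changes as the evidence does.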

/ Flexibility as a Competitive Advantage

The biggest risk right now is not moving too slowly. The real risk is freezing our mental models.

If we assume AI is just another feature, that chat is the final interface, or that automation always equals autonomy, we will design systems that age poorly.

The teams that succeed will not necessarily be those with the most powerful models. They will be the ones with the most adaptive architectures and the most flexible thinking.

Design six months ahead. Expect to be wrong. And design systems that allow recovery when you are.

/ A Personal Standard

After more than fourteen years in product and UX design—from industrial design to marketplaces to identity infrastructure—I have learned something consistent: durable design is not about always being right. It is about being resilient. AI compresses timelines. It magnifies poor assumptions. And it accelerates both leverage and failure.

In many ways, we are building ships in the middle of the ocean. The wind may change. The map may become obsolete. The ocean itself may evolve.

But the instinct to adapt—to experiment, question assumptions, and shape new tools—is deeply human.

Designing for what AI can do today may produce something useful.

Designing for what AI will likely be able to do six months from now might produce something that lasts.

The difference is not optimism.

It is courage.