🍸 Shaken, Not Stirred


There's a reason James Bond orders his martini shaken, not stirred.

It's not about flair. It's about intent.

Shaking breaks things apart. Stirring preserves what already exists. When you're trying to understand what something is really made of—and whether it still works—you don't gently mix. You disrupt.

That framing became unexpectedly useful when I started reexamining how our design team actually worked.

Not how we said we worked, and not how our process diagrams claimed we worked. I wanted to understand how decisions, communication, and execution actually moved through the system.

/ Shaken: Breaking the Workflow Apart

When teams talk about improving design workflow, the instinct is often additive: introduce new tools, new rituals, or new meetings. That approach assumes the underlying structure is already sound.

I wasn't convinced it was.

Instead of optimizing the system as it existed, I decided to shake it. I took our current workflows—design-to-design collaboration, design-to-engineering handoffs, cross-functional alignment, async reviews, and decision loops—and broke them down into their smallest units.

I mapped every touchpoint, every handoff, every moment where context moved or decayed, and every place where effort quietly repeated itself.

Once everything was laid out as individual nodes instead of a clean process diagram, patterns emerged quickly. Repetitive work was masquerading as rigor. Communication pipelines were optimized for availability rather than clarity. Designers were compensating manually for gaps the system refused to own.

What looked like a tooling problem was actually a structural one.

/ Envisioning the Immediate Future

Rather than imagining a distant future of fully autonomous AI systems, I focused on a much more practical question: what does the immediate future of design work look like if AI is treated as infrastructure rather than novelty?

The goal wasn't to replace designers or simply move faster. The goal was to understand where cognitive load could safely be removed, where context consistently broke down, and which decisions truly required human judgment.

Framing the problem this way led to several workflow inventions—concrete changes we could test immediately with real work rather than leave as speculative ideas.

/ Adding Ingredients: Agents, People, and Reality

This work didn't happen in isolation. Alongside my design partners and managers, I conducted team-wide and cross-team interviews to understand where friction actually lived.

The goal wasn't just to collect complaints. I wanted to understand why people had normalized certain kinds of friction and which parts of the workflow had quietly become manual workarounds.

From that research, I introduced and deployed four AI agents designed specifically for our workflow, then shared them with the design team and close cross-functional partners to experiment with in their day-to-day work.

The goal wasn't adoption. It was exposure. I wanted people to experience what it felt like to collaborate with systems that could carry context, enforce constraints, and reduce repetitive effort.

The result was bittersweet.

People genuinely liked the agents. Many started using them quickly. But some also became stuck—not because the tools were difficult to use, but because the broader ecosystem was already crowded. There were too many AI tools available, many of them overlapping in capability.

More power didn't necessarily create more clarity.

/ Not Stirred: The Discipline of Restraint

That realization led me to give a piece of advice that felt counterintuitive in an AI-saturated environment:

Don't stir.

Don't mix multiple AI tools casually. Don't stack systems for the same purpose. And don't chase novelty in the middle of a workflow.

Instead, start by defining the outcome you want to achieve and the deliverable you need to produce. Then work backward to determine which tool best supports that goal.

If two tools solve the same problem, choose one. If a tool doesn't have a clear role in the workflow, it probably doesn't belong there.

This guidance wasn't about restricting experimentation. It was about maintaining coherence.

Systems break down when responsibility becomes ambiguous. AI workflows are no different.

/ Closing the Loop

Advice alone doesn't create sustainable change. To make the system work, I also established a feedback loop to capture how people were actually using the agents.

When teammates encountered friction or confusion, those moments became signals rather than failures. We tracked usage patterns, surfaced recurring issues, and iterated deliberately based on real behavior.

Overrides weren't treated as mistakes. Confusion wasn't framed as resistance. Both were design inputs.

Over time, this loop allowed the agents—and the workflows around them—to evolve in a way that remained understandable to the team.
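As a sketch of what that loop amounted to in practice—the agent names, signal names, and threshold below are purely illustrative, not our actual tooling—the tracking was little more than counting friction signals per agent and surfacing the ones that recurred:

```python
from collections import Counter

# Hypothetical feedback events captured when a teammate overrides an
# agent or reports confusion. Each event is an (agent, signal) pair.
events = [
    ("handoff-agent", "override"),
    ("handoff-agent", "confusion"),
    ("review-agent", "override"),
    ("handoff-agent", "override"),
]

def recurring_issues(events, threshold=2):
    """Return (agent, signal) pairs seen at least `threshold` times.

    Anything below the threshold is treated as noise; anything at or
    above it becomes a design input for the next iteration.
    """
    counts = Counter(events)
    return {pair: n for pair, n in counts.items() if n >= threshold}

print(recurring_issues(events))  # {('handoff-agent', 'override'): 2}
```

The point of the sketch is the framing, not the code: overrides and confusion enter the loop as counted signals, and only the recurring ones trigger a deliberate change.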

/ What This Changed for Me

This work reinforced something I've come to believe strongly about senior design leadership.

Impact doesn't come from introducing more tools. It comes from deciding what not to mix.

Shaking a system reveals its structure. Stirring too early hides it.

AI makes it tempting to keep layering capabilities, chaining tools together, and accumulating complexity in the name of progress. But durable systems rarely emerge from accumulation. They emerge from intent, boundaries, and restraint.

When I think about how design teams should evolve in an AI-native environment, I don't think first about speed.

I think about clarity.

Shaken, not stirred.