I’ve been there before

I remember what it felt like when mobile apps changed everything. I was working in a digital products company, right in the middle of a fast-changing world of user experience. We moved from static pages to responsive, personalised interfaces. Suddenly, we weren’t just designing screens. We were shaping how people interacted, made decisions, and accessed services anytime, from their pockets.

It feels like that again now. But this time, it’s not the screen that’s changing. It’s what powers it.

Over the last few months, I’ve been building and experimenting with AI-powered Copilots and small agents at Hitachi Solutions. Some are just for me, like a meal planner that creates healthier, more sustainable shopping lists. But many support the way I work:

  • 📝 Writing more inclusive job descriptions
  • 📚 Planning internal training for consultants
  • ⚠️ Exploring risk dependencies across complex delivery programmes
  • 🧱 Simulating design sprints to train teams
  • ✏️ Speeding up workshop planning and content creation
  • 🎯 Automating follow-ups and actions from research sessions

Each one began the same way: not with tech, but with a need.

Designing the right AI experiences starts with the human, not the system

At Hitachi Solutions, we design AI solutions to solve real problems for our teams, clients, and the people who use public services.

You can’t treat AI like a visual interface. Designing AI is about shaping intent, defining interaction, and ensuring the outcome creates value. It’s about how well the system supports the human, not just what it does.

Some of the best tools are incredibly focused. They don’t try to do everything. They do one job clearly and confidently, whether that’s preparing a capability review summary, generating follow-up emails after a meeting, or suggesting research questions from a brief.

Think about how widely adopted apps like Google Calendar or Apple Notes succeed by not trying to do everything. They focus on doing one thing well, with minimal friction.

That clarity makes them easier to pick up and stick with. People understand what they’re for. They try it. They recommend it to others. And they keep coming back.

In my experience, the same principle applies to generative AI agents. When they’re designed to address a single, meaningful problem and do it well, adoption follows naturally. They’re easier to understand, easier to trust, and ultimately, more satisfying to use.

And if working in entertainment taught me anything, it’s this: people return to things they enjoy.

Shaping the behaviour, not just the interface

From dozens of small Copilot experiments across the business, we’ve landed on a simple, reliable approach:

🔍 Start with a real need

Not a trend or a wishlist. Start with a job someone struggles with – a time-consuming task or a frustrating process.

✍️ “Write” the experience before building it

Before I prototype, I script. In plain text. I shape the journey, how the agent should respond, what tone it should take, and what guardrails or frameworks it needs.
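To make this concrete, here is a minimal, hypothetical sketch of what a plain-text script like this might look like once captured as data and turned into a single system prompt. The fields and wording are illustrative, not a real API or the exact format used at Hitachi Solutions.

```python
# Hypothetical example: an agent experience scripted in plain text before
# any prototyping. Job, tone, guardrails, and journey are all illustrative.
script = {
    "job": "Suggest research questions from a project brief",
    "tone": "curious, concise, and neutral",
    "guardrails": [
        "Never invent facts about the client",
        "Ask for the brief if none is provided",
    ],
    "journey": [
        "Greet and ask for the brief",
        "Propose three to five open research questions",
        "Offer to refine based on feedback",
    ],
}

def to_system_prompt(s: dict) -> str:
    """Flatten the scripted experience into one system prompt."""
    lines = [f"Your job: {s['job']}", f"Tone: {s['tone']}", "Guardrails:"]
    lines += [f"- {g}" for g in s["guardrails"]]
    lines += ["Journey:"] + [f"{i}. {step}" for i, step in enumerate(s["journey"], 1)]
    return "\n".join(lines)

print(to_system_prompt(script))
```

Writing the script first, in plain language, means the tone and guardrails are agreed before anyone touches a model.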

🧪 Test with ourselves first

I begin with real users from our teams: consultants, designers, and delivery leads. If it’s not useful after two or three uses, I iterate or stop.

🧩 Build modular tools, not magic boxes

Instead of relying on one large assistant that tries to cover everything, I focus on creating small, targeted agents – each solving a clear problem within a larger journey. These agents are easier to test, improve, and integrate into real workflows. Most importantly, this approach offers faster time-to-value: people can use them quickly, see the benefit immediately, and build trust over time.
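The modular idea can be sketched in a few lines. This is a hedged illustration, not the actual architecture: each agent is a named, single-purpose unit (which in practice would wrap a model call), and a pipeline chains them into a larger journey. All names here are made up.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    """One small, targeted agent: a name, a clear purpose, one job."""
    name: str
    purpose: str
    run: Callable[[str], str]

# Hypothetical single-purpose agents; real ones would call a model.
summarise = Agent("summariser", "Summarise research notes",
                  lambda text: f"Summary of: {text}")
follow_up = Agent("follow-up", "Draft follow-up actions",
                  lambda summary: f"Actions based on: {summary}")

def pipeline(text: str, agents: list[Agent]) -> str:
    """Chain small agents into a larger journey, one clear step at a time."""
    for agent in agents:
        text = agent.run(text)
    return text

result = pipeline("Notes from the research session...", [summarise, follow_up])
print(result)
```

Because each agent does one job, any step can be tested, swapped, or improved without rebuilding the whole assistant.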

♻️ Evolve based on feedback

We rely on qualitative feedback to spot friction, then evolve the tool through fast iterations.

How we rapidly prototype AI experiences

We developed our own version of the Design Sprint focused on AI because we saw the gap: teams want to explore AI, but their levels of understanding vary widely. Some think of automation, others imagine actual robots, and many don't know where to begin.

Our AI Design Sprint creates space for that uncertainty. We give teams the context, the boundaries, and the ethical foundations they need to generate real, practical ideas and not just abstract ambitions.

It’s a focused 4-day process that helps teams:

  • Understand where AI can truly help users or staff.
  • Identify the most valuable problems to solve.
  • Design how the AI should behave (not just what model to use).
  • Prototype the experience, typically as a low-fidelity chatbot built in a no-code tool, to test how the idea lands and whether it delivers value.
  • Test with real people and learn what works (and what doesn’t).

We’ve used this with public sector clients, internal teams, and cross-disciplinary delivery groups. It’s become a go-to tool to take AI from buzzword to trusted solution.

This is the work, and we know how to do it

The tools are new. But the design mindset isn’t. We already know how to listen, explore, and create something real. Designing AI is just the next evolution of human-centred design, and at Hitachi Solutions, we’re well into it.

We’re not waiting for permission. We’re not stuck in strategy decks.

We’re building, testing, and learning. And we’re helping others do the same.

Final thought: Just start

If you’re curious about what AI can do, start small.

Build a tool that helps you.
Test it.
Adapt it.
Then build the next one.