Practice 3: Pluxee Stakeholder Workshop

SEMINAR 3

A short reflection on building a workshop that teaches Pluxee stakeholders to separate client wishes from real user needs through evidence, not impressions.

MUNI · Interview · University · Design Thinking

The problem with "the client wants a button"

In product development at Pluxee, requirements often arrive wrapped in subjective conviction. A sales rep forwards an email: "The client is screaming that they want a button." Engineering capacity gets spent on features no one uses, while critical UX problems quietly persist. I wanted to design a learning object, a workshop, that would teach stakeholders to tell the difference between a want and a need, and to do it with evidence rather than instinct.

Who the workshop is for

The audience was deliberately non-technical: Pluxee Sales, Account Managers, and Product Owners. These are people who talk to clients every day but rarely sit with analytics dashboards or support ticket queues. The goal was not to turn them into data analysts. The goal was to give them a method for triangulating three sources before anything reaches Jira: what the client says, what the support data shows, and what users actually do.
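
A minimal sketch of that triangulation as a decision rule, assuming a function name and a two-of-three threshold I invented for this write-up (not a Pluxee policy):

```python
def triage(client_says_it: bool, support_data_shows_it: bool,
           users_actually_do_it: bool) -> str:
    """Toy decision rule for a feature request before it reaches Jira.

    The two-of-three threshold is an assumption for illustration: a
    request earns backlog space only when independent sources agree.
    """
    votes = sum([client_says_it, support_data_shows_it, users_actually_do_it])
    if votes >= 2:
        return "candidate need: write it up with the evidence attached"
    return "impression: investigate before spending engineering time"
```

By this rule, Eda's button, with only the client's word behind it, stays an impression.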

The case-study format

I built the workshop as a case study in Miro: a simulated working situation with one fictional villain at its center, an email from a colleague named "Eda," full of strong opinions and vague claims. "The portal is slow. Clients hate it. We need a speed-up button." From there, participants worked through three evidence cards (a support report, Google Analytics data, and field notes from real users) and had to decide what was actually going on.

The two "aha" moments

The workshop pivoted on two confrontations with the data. The first: Eda's email claimed the portal was technically slow. The Analytics card told a different story: eight minutes on page, frequent back-navigation, repeated clicks. The portal wasn't slow to load. The flow was inefficient. Participants killed the "speed-up button" request and proposed a redesign of how data was displayed in the table instead.
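
To make that distinction concrete, here is a toy sketch of how the two diagnoses separate. The session fields and all thresholds (3 seconds for load, 5 minutes of dwell, 3 back-navigations) are assumed values for illustration, not the actual Pluxee analytics:

```python
from statistics import median

# Each session is a hypothetical record, e.g.:
# {"load_ms": 900, "seconds_on_page": 480, "back_navs": 6}
def diagnose(sessions: list[dict]) -> str:
    load = median(s["load_ms"] for s in sessions)
    dwell = median(s["seconds_on_page"] for s in sessions)
    backtracking = median(s["back_navs"] for s in sessions)
    if load > 3000:                       # pages genuinely take too long
        return "technical slowness: optimise loading"
    if dwell > 300 and backtracking > 3:  # pages load fine, users wander
        return "flow inefficiency: redesign the layout, not the server"
    return "no clear signal: keep gathering evidence"
```

On the workshop's numbers (fast loads, eight minutes on page, frequent back-navigation), this returns the flow diagnosis, which is exactly the turn the participants made.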


The second came from the field notes. A request for bulk-ordering features in the portal looked reasonable until participants realized they had been conflating two different people. The HR manager who pays is not the employee who uses the card. The employee in the field doesn't care about administration; they care about how fast the payment goes through and where the nearest partner restaurant is. Bulk-ordering dropped down the priority list. Geolocation moved up.

Evaluating with Kirkpatrick

I used the Kirkpatrick model to evaluate the workshop across four levels. Reaction was strong: the "investigation" framing kept engagement high, and the visual contrast between impressions and facts in Miro worked as a genuine eye-opener. Learning was visible in the moments described above: participants moved from defending Eda's email to dismantling it. Behavior change showed up in the final exercise, where every participant filled in a structured Jira template (Problem, Evidence, Proposed solution) instead of the usual "I want feature X." And at the results level, even as a pilot, the workshop prevented at least two unfounded features from entering the backlog: an Excel export used by 2% of users, and a color change requested by a single account.
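
To show what the template enforces, a minimal sketch assuming a hypothetical FeatureRequest shape; the field names mirror the template, but the class itself is not part of any Pluxee tooling:

```python
from dataclasses import dataclass

@dataclass
class FeatureRequest:
    problem: str            # what is going wrong, in the user's terms
    evidence: list[str]     # support data, analytics, field notes
    proposed_solution: str  # one concrete, testable change

    def ready_for_jira(self) -> bool:
        # The workshop's checkpoint: no evidence, no ticket.
        return bool(self.problem and self.evidence and self.proposed_solution)

request = FeatureRequest(
    problem="Finding a transaction in the portal table takes ~8 minutes",
    evidence=["GA: long dwell time and frequent back-navigation",
              "Support report: recurring complaints about the table"],
    proposed_solution="Redesign how data is displayed in the table",
)
assert request.ready_for_jira()
```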


What I would change

The "Eda email" framing turned out to be the most powerful design decision. People love critiquing someone else's brief — and that distance lets them see their own habits more clearly. Splitting the room into impressions (the email) and facts (the cards) gave the whole exercise a clean visual logic.

The card I would rewrite is the mobile-app one. The original brief didn't separate the payer from the user clearly enough, and a few groups conflated them on the first pass. In the next version, I'm splitting that persona explicitly — HR as payer, employee as user — so the difference is built into the materials, not something participants have to discover by accident.

Closing thought

The workshop is not really about Jira templates or Miro boards. It's about installing a small mental checkpoint before a feature request leaves someone's mouth: Is this an impression, or is this evidence? If stakeholders ask that question even once before forwarding the next email, the workshop has paid for itself.

