There’s a weird myth in digital teams that user research exists in a separate dimension to delivery.
It’s not blocked by delivery. It’s not accountable to it. It’s just… hovering nearby. Observing. Judging. Occasionally descending from the clouds, with supreme judgement, to drop a 60-slide deck on people who are just trying to get a button to work.
Then sprint planning happens, the room goes quiet, and someone inevitably asks: “So… when does the research bit happen? What do the tickets say?”
This is how I avoid that conversation entirely. I put user research on the backlog, in the sprint, and into the delivery flow. No fake tickets. No pretending insight arrives on a rigid, magical schedule.
I break it into five chunks. They’re not sexy. They are, however, honest and perfectly cromulent tickets.
1. Recruitment & Research Ops
(The work everyone forgets is actually work)
If recruitment isn’t on the board, it doesn’t exist. And if it doesn’t exist, it magically becomes “Jason’s fault” when a study slips because we couldn't find the right people.
I ticket the admin. I ticket the hell that is research ops.
Defining participant criteria: Actually getting the team to agree on who we’re talking to before I start looking.
The Approach: Are we using a panel? Are we being scrappy? Are we begging people on LinkedIn? Spoiler: I don’t recruit via LinkedIn; that’s just for shovelling my memes.
The Ethics & Logistics: Consent, incentives, accessibility needs, and scheduling that works for real humans, not just our calendars.
The sign-up log: I create a centralised sign-up sheet with timings and links to all the notes and prototypes.
This makes the invisible work visible. It also forces trade-offs early. When the PO asks for 12 users by next Friday, I point at the board and ask: “OK, what type of person do you want to speak to, and who is helping me with this?”
2. Planning
(Where research becomes useful or stays vague)
This is the bit I protect the hardest. Before we talk to anyone, I want total alignment on the “North Star” questions. WHAT ARE WE ACTUALLY TRYING TO FIND OUT?
If we can’t explain why we’re doing the research in two sentences, we aren’t ready to do it.
Planning tickets usually cover:
The Decision Map: “If we learn X, we’ll do Y.”
The Discussion Guide: Not a script, but a thinking tool. I know some people like beastly scripts, but I avoid them like the plague.
The Design Triangle: Lining up with my content and interaction designers to ensure everything makes sense and we’re all comfortable with what we’re doing.
This avoids the dreaded “interesting but not actionable” feedback. I don’t want trivia; I want to unblock a developer. Although I will listen to the trivia.
3. Interviews & The "Wash-Up"
(Don't just run sessions, ANALYSE AS YOU GO!)
I never use a ticket that just says “Run interviews -> Done.” That’s a trap and it’s reductive.
Each interview ticket includes the session itself, but also the immediate wash-up. After a session, I want a 15-minute debrief with whoever was observing and anyone in the team who wants to hear what we saw.
What surprised us?
What assumptions are currently wobbling?
What didn't land at all?
Didn’t Jason ask some brilliant questions?
By the time we get to the final analysis, the team already feels the research. They’ve heard the quotes. They’ve seen the struggle. I’m not dragging them to an "insight session" later; they were there when it happened.
4. Analysis
(Not a solo sport)
Analysis is where research either earns trust or quietly dies in a folder. I break it up because analysis isn’t one singular “moment”; it’s sustained, supported learning shared by the team.
Synthesis: Me pattern-spotting and sense-making with a brilliant soundtrack from 1999-2012.
The Sanity Check: Reviewing with the UCD team to make sure I’m not just seeing what I want to see and what I’ve found passes the sniff test.
Meaning-Making: Sitting with product and design to decide what the research means and how we move forward.
I’m not chasing a perfect academic model. I’m chasing the confidence to act. If the output doesn’t change a decision, what’s the point?
5. Playback
(Same research, different shapes)
Playback isn’t “The Presentation.” It’s a series of translations.
On the board, I split this into different cuts for different audiences:
The Sprint Review: What we learned this sprint.
The Design Crit: Presenting concepts and findings to design nerds so they can rake them over the coals.
The Leadership Summary: Shorter than you think, focused on why it matters and what we’re going to do about it.
The Repository: For future humans who join the project in six months when people inevitably move on.
If you only do one playback, you’re optimising for yourself, not the real world.
Why I Bother Doing It This Way
Making research visible on the board stops it from being mysterious. It sets realistic expectations and makes the trade-offs explicit.
But mostly, it makes research feel like delivery, not an abstract aside.
I don’t want to be the person who turns up at the end with a list of "insights" that everyone is too busy to implement. I want to be part of how the team makes better decisions under pressure.
It’s not clever. It’s just sharing research by making it visible.