I test product-market fit (PMF) the hard way: not by running expensive cohort studies or waiting for months of traction, but by getting prototypes and ideas in front of real people fast. Over the years I’ve leaned on guerrilla usability sessions — short, focused interviews and hands-on trials in informal settings — combined with a small set of actionable metrics. This combo tells you whether people understand, value and will pay for what you’re building before you invest heavily.
Why guerrilla usability + metrics works
Guerrilla usability is cheap, fast and brutally revealing. You learn how users interpret your product, where they get stuck, and whether the core value proposition lands — often within a few hours. Metrics add rigor: they let you track changes, compare variants and avoid making decisions based on gut feeling alone.
What I like about this approach is that it forces you to validate three things early:

- Comprehension: do people understand what the product is and what it does for them?
- Value: does the core value proposition land, and how quickly?
- Willingness to pay: once they see the value, would they actually convert or pay?
Designing a guerrilla session
Start with a one-page test plan. I always include: objective, prototype fidelity, scripted tasks, recruiting criteria, success measures and time per session (I aim for 15–30 minutes).
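If it helps to keep plans consistent between tests, those same fields can live in a tiny structured record instead of free text. A minimal sketch in Python, with purely illustrative field names and values:

```python
from dataclasses import dataclass, field

@dataclass
class GuerrillaTestPlan:
    """One-page plan for a guerrilla usability day (illustrative fields, not a standard)."""
    objective: str                       # the single question this day should answer
    prototype_fidelity: str              # e.g. "paper sketch", "clickable Figma", "live beta"
    scripted_tasks: list[str] = field(default_factory=list)    # core tasks, in order
    recruiting_criteria: str = ""        # who counts as a representative participant
    success_measures: list[str] = field(default_factory=list)  # e.g. "first-task success >= 70%"
    minutes_per_session: int = 20        # aim for 15-30 minutes

plan = GuerrillaTestPlan(
    objective="Do first-time users reach the core 'aha' within 3 minutes?",
    prototype_fidelity="clickable Figma prototype",
    scripted_tasks=["Sign up", "Create a first project", "Share it with a teammate"],
    recruiting_criteria="People who have never seen the product before",
    success_measures=["First-task success >= 70%", "TTFV < 3 minutes"],
    minutes_per_session=20,
)
```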
Recruiting participants fast
I once recruited ten participants for a six-hour guerrilla day by posting a short message on my local university Slack, offering a £10 coffee card. Other quick channels:
Session script and what to say
Always start with a short consent and context statement: “This is not a test of you; it’s a test of this product. I’m trying to learn how to improve it.” Then give them a task, ask them to think aloud, and avoid leading questions.
Key metrics to collect during and after sessions
Not all metrics are created equal. Focus on a small set that maps to comprehension, value and early behavioral intent. I track these in a shared spreadsheet and update them immediately after each session.
| Metric | What it measures | Early target |
|---|---|---|
| First-Task Success Rate | Can the user complete the core task without help? | ≥ 70% |
| Time to First Value (TTFV) | Time until user reaches an “aha” moment | < 3 minutes (consumer); < 10 minutes (complex B2B) |
| Willingness-to-Pay (WTP) | Expressed likelihood to pay or convert | 20–30% positive in early tests |
| Net Promoter Cue | “Would you recommend?” proxy (qualitative) | Multiple enthusiastic responses per 10 users |
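The numbers themselves are simple to compute once each session is a row. A minimal sketch, assuming each session is logged with a success flag, a time-to-first-value and a willingness-to-pay answer (the field names and example rows are illustrative):

```python
from statistics import median

# Illustrative per-session rows; in practice these come from the shared spreadsheet.
sessions = [
    {"success": True,  "ttfv_seconds": 140,  "would_pay": True,  "would_recommend": True},
    {"success": True,  "ttfv_seconds": 95,   "would_pay": False, "would_recommend": True},
    {"success": False, "ttfv_seconds": None, "would_pay": False, "would_recommend": False},
]

def first_task_success_rate(rows):
    """Share of participants who completed the core task without help."""
    return sum(r["success"] for r in rows) / len(rows)

def median_ttfv(rows):
    """Median time to first value, ignoring sessions that never reached it."""
    times = [r["ttfv_seconds"] for r in rows if r["ttfv_seconds"] is not None]
    return median(times) if times else None

def willingness_to_pay(rows):
    """Share of participants who said they would pay or convert."""
    return sum(r["would_pay"] for r in rows) / len(rows)

print(f"First-task success: {first_task_success_rate(sessions):.0%}")  # target >= 70%
print(f"Median TTFV: {median_ttfv(sessions)} s")                       # target < 180 s (consumer)
print(f"WTP: {willingness_to_pay(sessions):.0%}")                      # target 20-30% early on
```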
How to run the session: a play-by-play
Here’s my preferred 20-minute session structure:

1. Consent and context (2 minutes): the “this is a test of the product, not of you” statement, plus permission to record.
2. Warm-up (3 minutes): a couple of questions about how they handle the problem today.
3. Core tasks (10 minutes): the scripted tasks from the test plan, thinking aloud, while I time how long it takes them to reach first value.
4. Value and pricing (3 minutes): would they use it, would they pay, would they recommend it.
5. Wrap-up (2 minutes): thanks, the incentive, and any final comments.
Record sessions (with consent) or take timestamped notes. I use a simple rubric: Success/Failure, Confusion Points, Quotes, and Suggested Fixes. The quotes often carry more weight than aggregated percentages.
Interpreting results — what to look for
Patterns > individual opinions. One disgruntled user is noise; several doing the same unexpected action is a red flag. Key signals:

- Repeated failure or hesitation at the same step of the core task.
- Users who reach the “aha” moment quickly and unprompted versus those who need the value explained to them.
- Spontaneous willingness-to-pay or “when can I use this?” comments.
- The same workaround or unexpected action showing up across multiple sessions.
From insights to action
After a guerrilla day, I hold a 30-minute synthesis session with the team. We map issues into quick fixes (copy, button labels, flow changes) and experiments (A/B tests, pricing anchor changes). Prioritize by impact and ease of implementation.
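For the prioritization step, a crude impact-over-effort score is usually enough. A minimal sketch, using an illustrative 1–5 scale rather than any formal framework:

```python
# Crude impact-vs-effort ranking for the post-session synthesis (illustrative 1-5 scale).
fixes = [
    {"name": "Rewrite onboarding headline",   "impact": 4, "effort": 1},
    {"name": "Reorder signup steps",          "impact": 5, "effort": 3},
    {"name": "Add pricing anchor experiment", "impact": 3, "effort": 2},
]

# Highest impact per unit of effort first; quick copy fixes tend to float to the top.
for fix in sorted(fixes, key=lambda f: f["impact"] / f["effort"], reverse=True):
    print(f'{fix["name"]}: score {fix["impact"] / fix["effort"]:.1f}')
```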
When to scale testing
Once the guerrilla sessions hit your targets consistently — improved first-task success, lower TTFV, rising WTP — move to slightly larger, more quantitative tests. Run a 1–2 week landing page validation, gated beta with email capture, or an ad-driven micro-conversion funnel using Google Ads or Meta Ads to test acquisition cost against LTV assumptions.
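The acquisition-cost check behind that funnel is back-of-the-envelope arithmetic. A minimal sketch, with placeholder spend, sign-up and LTV numbers standing in for your own assumptions:

```python
# Back-of-the-envelope check of acquisition cost against an assumed LTV (placeholder numbers).
ad_spend = 500.0           # total spend on the test campaign
signups = 40               # micro-conversions (e.g. email captures) from that spend
assumed_ltv = 60.0         # rough lifetime value assumption per paying user
expected_conversion = 0.3  # share of sign-ups you expect to eventually convert and pay

cac = ad_spend / signups                         # cost to acquire one sign-up
value_per_signup = assumed_ltv * expected_conversion

print(f"CAC per sign-up: {cac:.2f}")
print(f"Expected value per sign-up: {value_per_signup:.2f}")
print("Worth scaling" if value_per_signup > cac else "Revisit the funnel or pricing")
```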
Tools I use
Guerrilla usability sessions aren’t a silver bullet, but they expose fatal flaws early and cheaply. Combine them with a lean metric set and you’ll avoid the classic startup trap: building something people don’t actually understand or want. Run the sessions, collect the metrics, and iterate ruthlessly — your product will thank you for it.