DIY Market Research: Learn What Your Customers Really Want
Audience note: This guide is written for late-20s to early-30s builders, creators, and indie founders who can already record video, capture screen shares, and edit content. You don’t need a research degree—just a plan, a mic, and a few hours.
Why do DIY market research?
– It de-risks decisions. You replace “I think” with “customers showed.”
– It speeds you up. A lightweight study today can save a month spent building the wrong thing.
– It sharpens your content and product positioning. Clear problems make clear messages.
What “market research” really means
– It’s systematic listening. You articulate a decision, gather evidence from the right people, and make a call you’re willing to defend.
– Think in two buckets:
– Qualitative: small N, deep understanding (interviews, usability tests, diary studies).
– Quantitative: larger N, pattern validation (surveys, landing page tests, pricing experiments).
Start here: define your decision and hypotheses
– Decision examples:
– Which feature do we build first?
– Which audience do we prioritize this quarter?
– Which pricing tier should we launch?
– Write 3–5 hypotheses in this format:
– We believe [audience] needs [outcome] because they struggle with [pain].
– We’ll know we’re right when we see [behavioral metric].
– Example: We believe remote video editors want a faster rough-cut workflow because assembly cuts are tedious. We’ll know we’re right if 40%+ of trial users complete a 3-video rough cut in week 1.
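If you want to keep a metric like that honest, script the check before the data arrives. A minimal Python sketch; the field name and threshold are placeholders for whatever your analytics actually records:

    # Check the hypothesis metric: did 40%+ of trial users finish a
    # 3-video rough cut in week 1? The field name is illustrative.
    def activation_rate(trial_users):
        if not trial_users:
            return 0.0
        activated = sum(1 for u in trial_users if u.get("rough_cuts_week1", 0) >= 3)
        return activated / len(trial_users)

    users = [{"rough_cuts_week1": 3}, {"rough_cuts_week1": 1}, {"rough_cuts_week1": 4}]
    print(f"Activation: {activation_rate(users):.0%} (target: 40%+)")  # Activation: 67% (target: 40%+)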
Profile the audience and recruit participants
– Define 1–2 segments with must-have traits (job to be done, tools used, company size, region, budget, behaviors).
– Recruiting channels you can tap fast:
– Personal network: past coworkers, classmates, Twitter/LinkedIn followers.
– Communities: relevant subreddits, Discords/Slack groups, Facebook Groups, niche forums.
– Customer lists: newsletter subscribers, waitlists, previous buyers.
– Local: coworking spaces, meetups, coffee shops (post a flyer with QR sign-up).
– Incentives (United States norms, digital gift cards or cash):
– 20–30 minutes: $25–$50
– 45–60 minutes: $50–$100
– Diary study (3–5 entries across a week): $100–$200
– Screener basics (5–8 questions, mobile-friendly; a code sketch for scoring answers follows this list):
– Confirm segment (role, company type, usage frequency).
– Disqualify professionals who sell services to your target unless they are your target.
– Ask a behavior question (e.g., “In the past month, how many times did you [do task]?”).
– Collect availability and consent to be recorded.
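Screener responses pile up quickly; a few lines of code can separate qualified participants from the rest before you schedule anyone. A minimal Python sketch, with hypothetical field names standing in for whatever your form tool exports:

    # Screen out non-fits before scheduling. All field names are illustrative.
    def qualifies(resp):
        if resp.get("role") not in {"video editor", "content creator"}:
            return False                      # wrong segment
        if resp.get("tasks_last_month", 0) < 2:
            return False                      # fails the behavior question
        if resp.get("sells_to_target") and not resp.get("is_target"):
            return False                      # sells services to your target
        return bool(resp.get("consents_to_recording"))

    responses = [{"role": "video editor", "tasks_last_month": 5,
                  "sells_to_target": False, "consents_to_recording": True}]
    qualified = [r for r in responses if qualifies(r)]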
Choose the right methods for your decision
1) Problem interviews (30–45 minutes, remote video)
– Goal: Understand pains, current workarounds, and triggers that create demand.
– Setup: Video call; record locally and to the cloud; ask for permission on the recording itself so consent is captured.
– Script outline:
– Warm-up: Tell me about your role/day. What does success look like?
– Context: Walk me through the last time you [job]. What led up to it?
– Pain depth: What made that hard? What did you try? What failed? Cost of the problem?
– Workarounds: Which tools or hacks do you use? What’s ugly but works?
– Triggers: What signals it’s time to fix this? Who else is involved?
– Close: If you had a magic wand, what would the solution do for you next week?
– Tips:
– Ask for stories, not opinions. “Tell me about the last time…” beats “Would you use this?”
– Silence is a tool. Let them think; count to three before rescuing the pause.
2) Usability tests (20–30 minutes, screen share)
– Goal: See where people struggle with your prototype, site, or content flow.
– Tasks: “Sign up and complete X,” “Find pricing,” “Export a project.”
– Protocol: Give tasks, not instructions; ask them to think aloud; don’t explain UI.
– Success metrics: Task completion, errors, time on task, places of hesitation, quotes.
3) Diary studies (3–7 days)
– Goal: Capture real-world use and emotions over time.
– Prompts: “Record a 2-minute selfie when you attempt [task],” “Screenshot the moment something breaks,” “Rate your frustration 1–5.”
– Deliverable: A highlight reel with timestamped clips and a summary of patterns.
4) Surveys (5–10 minutes)
– Use after interviews to quantify patterns.
– Question examples (adapt as needed):
– Frequency: In the last 30 days, how often did you [task]?
– Importance vs satisfaction: How important is [outcome]? How satisfied are you today?
– Top pains: Which of these slowed you down most last week? (single choice)
– Willingness to pay: At $X per month, how would you rate the price? (too cheap/cheap/expensive/too expensive)
– Prioritization: If we built only one of these, which matters most?
– Avoid leading, double-barreled, and jargon-heavy questions. Keep answer scales consistent.
5) Social listening and review mining (asynchronous)
– Skim subreddits, app store reviews, competitor comment sections, GitHub issues, YouTube comments.
– Capture verbatim quotes with links, tags (pain, workaround, desired outcome), and frequency.
6) Landing page smoke tests and “fake door” tests
– Goal: Measure real interest before building.
– Steps:
– Create a simple page with a clear promise, benefits, and a single CTA (join waitlist, request access).
– Run small-budget ads ($50–$200) to the page or post organically in relevant communities (follow community rules).
– Track CTR (clicks/impressions), CVR (sign-ups/visits), and email quality (valid/invalid).
– Optional fake door in product: Add a disabled button for a not-yet-built feature with a “Join the beta” modal; track clicks.
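If you are wiring the fake door yourself instead of firing an analytics event, the tracking can be a counter behind a single endpoint. A minimal sketch using Flask; the route name and in-memory list are stand-ins for a real event pipeline:

    # Minimal fake-door click logger. Requires: pip install flask
    from datetime import datetime, timezone
    from flask import Flask, jsonify

    app = Flask(__name__)
    clicks = []  # swap for a database or your analytics tool

    @app.route("/fake-door/beta-signup", methods=["POST"])
    def log_click():
        clicks.append(datetime.now(timezone.utc).isoformat())
        return jsonify({"message": "Thanks! You're on the beta list.",
                        "total_clicks": len(clicks)})

    if __name__ == "__main__":
        app.run(port=5000)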
7) Pricing experiments (fast, directional)
– Van Westendorp (price sensitivity meter): Ask at what price the product would feel too cheap (quality feels suspect), a bargain, getting expensive, and too expensive; plot the cumulative answers to find an acceptable price window (a code sketch follows this list).
– Gabor-Granger (purchase intent at different price points): Randomly show one price and ask likelihood to buy; repeat across participants.
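Once the four Van Westendorp answers are in a spreadsheet, the window takes only a few lines to estimate. A rough Python sketch with made-up responses; it scans candidate prices for where the cumulative curves cross, so treat the output as directional at small sample sizes:

    # Van Westendorp price window (simplified). Responses are illustrative.
    responses = [
        {"too_cheap": 5, "cheap": 10, "expensive": 25, "too_expensive": 40},
        {"too_cheap": 8, "cheap": 15, "expensive": 30, "too_expensive": 50},
        {"too_cheap": 4, "cheap": 12, "expensive": 20, "too_expensive": 35},
    ]
    n = len(responses)

    def share(key, price, at_or_below):
        """Fraction of respondents whose answer puts `price` in that zone."""
        if at_or_below:
            return sum(r[key] <= price for r in responses) / n
        return sum(r[key] >= price for r in responses) / n

    prices = range(1, 61)  # candidate price points to scan
    # Lower bound: where "feels expensive" overtakes "too cheap to trust".
    lower = next((p for p in prices
                  if share("expensive", p, True) > share("too_cheap", p, False)), None)
    # Upper bound: where "too expensive" overtakes "still a bargain".
    upper = next((p for p in prices
                  if share("too_expensive", p, True) > share("cheap", p, False)), None)
    if lower and upper:
        print(f"Acceptable window: ${lower}-${upper}")  # $20-$35 for this data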
8) Competitor teardown (60 minutes)
– Create a grid with one row per competitor and five columns: value props, onboarding friction, feature depth, pricing, social proof.
– Record a screen share while you sign up for 2–3 competitors; time the first wow moment and the first “ugh” moment.
Run your studies: scripts and checklists
Interview checklist
– Tech check: mic, camera, stable internet, local backup recording.
– Consent: “With your permission, I’ll record to capture details; your name won’t be shared outside our team.”
– Opening: “This is about your experience; there are no right answers. If something’s unclear, that’s on our design, not you.”
– Close: Ask for referrals to 1–2 peers like them; send the incentive immediately.
Usability test task examples
– “Create an account and set up your first project.”
– “Import a 2-minute clip and generate a rough cut.”
– “Find the pricing page and explain the differences between plans.”
– “Export your project and share a link.”
Diary study prompt pack
– Day 1: “Record a 2-minute clip showing your current setup and tools.”
– Midweek: “Show the last time you got stuck. What did you try?”
– Final day: “If you could change one thing about your workflow next week, what is it?”
Survey starter set (10 questions)
– Role/experience
– Frequency of the job/task
– Top pain (single choice)
– Secondary pain (multiple choice)
– Importance of key outcome (1–5)
– Satisfaction with current solution (1–5)
– Willingness to pay at $X (too cheap/cheap/expensive/too expensive)
– Must-have feature (choose one)
– Likelihood to recommend current solution (0–10)
– Open-ended: “Describe the last time this really frustrated you.”
Analyze fast without getting lost
Create a research board (use any doc or spreadsheet)
– Columns: Participant, Segment, Method, Scenario, Pain (quote), Workaround (quote), Desired outcome, Triggers, Ideas, Severity (H/M/L), Frequency, Evidence link/timecode.
– Drop in short clips or timestamps. Your future self will thank you.
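If the board lives in a spreadsheet, seeding it from code keeps the columns consistent across studies. A minimal Python sketch that writes the header above plus one illustrative row:

    # Seed the research board as a CSV with the columns listed above.
    import csv

    COLUMNS = ["Participant", "Segment", "Method", "Scenario", "Pain (quote)",
               "Workaround (quote)", "Desired outcome", "Triggers", "Ideas",
               "Severity (H/M/L)", "Frequency", "Evidence link/timecode"]

    with open("research_board.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=COLUMNS)  # missing cells stay blank
        writer.writeheader()
        writer.writerow({"Participant": "P01", "Segment": "Remote video editor",
                         "Method": "Interview",
                         "Pain (quote)": "Assembly cuts eat my evenings",
                         "Severity (H/M/L)": "H",
                         "Evidence link/timecode": "interview_P01.mp4 @ 12:40"})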
Affinity mapping (1 hour)
– Write one insight per sticky (digital is fine).
– Cluster by theme (e.g., onboarding confusion, export reliability, pricing opacity).
– Name each cluster with a short phrase from a verbatim quote.
Jobs-to-be-Done snapshots
– Use: When I [situation], I want to [motivation], so I can [expected outcome].
– Example: When I get raw footage from a client late at night, I want to assemble a decent rough cut in 15 minutes, so I can deliver a morning preview without pulling an all-nighter.
Prioritize with a simple score
– ICE: Impact (1–5) × Confidence (1–5) × Ease (1–5). Sort descending (a scoring sketch follows this list).
– Or Kano (for features):
– Ask paired questions: How do you feel if the feature is present? Absent?
– Classify: Must-have, Performance, Delighter. Build must-haves first; sprinkle one delighter.
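Here is the ICE scoring as a minimal Python sketch; the ideas and ratings are made up:

    # Score and rank opportunities with ICE (Impact x Confidence x Ease).
    opportunities = [
        {"idea": "Faster rough-cut workflow", "impact": 5, "confidence": 4, "ease": 2},
        {"idea": "One-click export presets", "impact": 3, "confidence": 5, "ease": 5},
        {"idea": "Team commenting", "impact": 4, "confidence": 2, "ease": 3},
    ]
    for o in opportunities:
        o["ice"] = o["impact"] * o["confidence"] * o["ease"]
    for o in sorted(opportunities, key=lambda o: o["ice"], reverse=True):
        print(f'{o["ice"]:>3}  {o["idea"]}')
    # 75  One-click export presets
    # 40  Faster rough-cut workflow
    # 24  Team commenting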
Sample size heuristics (good enough for scrappy teams)
– Interviews: 5–8 participants per segment often surfaces the main issues.
– Usability tests: 5 participants per round; fix issues; test again.
– Surveys: 100–200 responses for directional insights; ~400 for a ±5% margin of error at 95% confidence.
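The ~400 figure falls out of the standard sample-size formula for a proportion, n = z² · p(1−p) / e². Here is the arithmetic so you can adapt it to other margins:

    # Sample size for a proportion at 95% confidence, worst-case p = 0.5.
    z, p, e = 1.96, 0.5, 0.05   # e is the margin of error you can tolerate
    n = (z**2 * p * (1 - p)) / e**2
    print(round(n))  # 384, so ~400 respondents for ±5% at 95% confidence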
Make the decision visible
– One-page research report template:
– Decision we needed to make
– Who we spoke to and how many
– What we saw/heard (3–5 key findings with 1–2 video quotes each)
– What we’re doing next and why
– What we’re not doing (for now) and what would change our mind
– When we’ll revisit the decision
Turn insights into experiments
A simple two-week sprint plan
– Day 1: Define decision, hypotheses, segments, screener, and schedule.
– Days 2–4: Conduct 6–10 interviews (30 minutes each) and 5 usability tests.
– Day 5: Affinity map, write one-pager, pick top 2–3 ideas.
– Days 6–9: Build landing page or prototype; set up tracking; draft survey.
– Day 10: Launch smoke test, send survey, recruit for a short diary study.
– Days 11–12: Monitor metrics; capture early patterns; cut a 90-second highlight reel.
– Day 13: Synthesize quant results; revisit hypotheses; score opportunities.
– Day 14: Decision meeting: build, pivot, or park; plan the next sprint.
Shoestring budget example (about $300)
– Incentives: 8 x $25 = $200
– Ads to the landing page: $75
– Misc (transcription, snacks for a local session): $25
Stronger signal budget example (about $1,000)
– Incentives: 12 x $50 = $600
– Ads: $200
– Transcription or translation: $100
– Prototype tools/misc: $100
Practical tips for video-native researchers
– Always get on-record consent at the start: “Is it okay if I record to reference later? We won’t share your name publicly.”
– Back up recordings—local and cloud. Label files: YYYY-MM-DD_Segment_FirstName.
– Mark moments live: clap or say “timestamp” when something notable happens to create a waveform spike you can find in editing.
– Create a 60–90 second highlight reel per theme with captions; share with your team or community to build alignment.
– Use over-the-shoulder screen capture for usability tests and picture-in-picture for emotion cues.
Landing page and metric cheat sheet
– Core elements: headline (problem + promise), 3 benefits with outcomes, social proof (even if a pilot quote), single CTA.
– Benchmarks to start the conversation (these vary by niche; use as rough filters, not gospel):
– Ad CTR: 0.8–2% acceptable; >2% promising.
– Landing page conversion (to email): 10–30% acceptable; >30% strong.
– Waitlist quality: >70% valid emails; if >30% respond to a follow-up question within 48 hours, that suggests real intent.
– Basic formulas:
– CTR = clicks / impressions
– CVR = sign-ups / visits
– Drop-off = 1 – completion rate
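In code, the whole cheat sheet is three lines, which is handy if you are checking numbers daily during a smoke test. The sample counts are made up:

    # Smoke-test metrics from raw counts (sample numbers are illustrative).
    impressions, clicks = 10_000, 150
    visits, signups = 140, 28
    started, completed = 28, 21

    ctr = clicks / impressions          # 0.015 -> 1.5%, within the 0.8-2% band
    cvr = signups / visits              # 0.20  -> 20%, within the 10-30% band
    drop_off = 1 - completed / started  # 0.25  -> 25% abandon the flow
    print(f"CTR {ctr:.1%} | CVR {cvr:.1%} | Drop-off {drop_off:.1%}")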
Ethics and privacy, the lightweight way
– Be transparent: who you are, what you’re studying, what you’ll do with data.
– Get permission to record and to use anonymized quotes.
– Store recordings securely; restrict access; remove identifiers in clips you share.
– For U.S.-based participants, avoid collecting unnecessary PII; if you do, state how long you’ll keep it and how to request deletion.
Common pitfalls and how to avoid them
– Asking for opinions about hypotheticals. Fix: Ask for stories about the last time.
– Recruiting only friends. Fix: Mix known contacts with strangers from relevant communities.
– Skipping incentives. Fix: Respect their time; even small gift cards boost show-up rates.
– Leading questions. Fix: Replace “Would you use X?” with “How do you do X today?”
– Overbuilding before testing demand. Fix: Run a smoke test or fake door first.
What to deliver at the end of your first DIY sprint
– A one-page brief with decisions and evidence.
– A highlight reel with 5–7 clips that show pains and desired outcomes.
– A prioritized list of top 3 opportunities with ICE scores.
– A go/no-go on your next experiment (prototype, pricing test, or launch).
Quick-start kit (copy this into your doc or notes)
– Hypothesis: We believe [audience] needs [outcome] because [pain]. We’ll know via [metric].
– Screener must-haves: [role], [tool used], [frequency], [region], [budget].
– Interview script: Context → Last time → Pain → Workarounds → Triggers → Magic wand.
– Metrics: CTR, CVR, completion rate, 1–2 activation metrics tied to value.
– Decision log: Decision, date, evidence, confidence, next review date.
Final thought
You don’t need a big budget to learn what customers really want. You need focused questions, the right five to ten conversations, one simple experiment, and the discipline to choose based on evidence. Start small this week, share the clips, make a call, and keep the loop tight.