Research Lifecycle
Every Rubikn engagement follows a 6-phase, decision-driven research lifecycle. The process is time-boxed (never open-ended), tied to a single business decision (never “exploration”), and ends with deployable assets and a decision workshop — not just findings.
This is the same lifecycle whether the engagement is a Competitive Proof Sprint, an ICP study, a Positioning Sprint, or a Growth Diagnostic. The phases flex in depth and duration, but the structure never changes. Consistency is how you get quality at speed.
Why a decision-driven lifecycle
Most research sprawls. A question leads to another question, the scope expands, the timeline slips, and the output is a 60-page deck that nobody acts on. Research without a decision anchor is just expensive curiosity.
The Rubikn Research Lifecycle exists to prevent that. Every engagement begins with a single question: What decision are we trying to make? Not “what do we want to learn” — what are we deciding, and by when? This constraint shapes everything downstream: which hypotheses to test, what data to collect, how much depth is enough, and what the deliverable needs to say.
The lifecycle draws on established research design principles — hypothesis-driven inquiry, triangulation of sources, structured sampling, pilot testing, and evidence grading — adapted for the realities of B2B SaaS: small teams, fast timelines, imperfect data, and decisions that can't wait for academic rigor.
The result is research that ships in 2–4 weeks, produces assets the team uses immediately (battlecards, positioning briefs, ICP profiles, growth scorecards), and ties every finding to a specific recommended action. No open-ended exploration. No research for research's sake.
The 6-phase research lifecycle
This lifecycle is the core of every engagement. Each phase has defined inputs, outputs, and exit criteria, and work does not move to the next phase until those criteria are met.
Phase 1: Decision & Hypotheses
Every engagement starts with a Decision Brief — a short document that names the specific business decision this research must inform, why it matters now, what constraints exist (timeline, budget, organizational politics), and what “success” looks like. No research begins until this document is signed off.
From the Decision Brief, we build a Hypotheses Map: a set of testable beliefs tied to the decision, ranked by risk. These are not vague themes — they're specific, falsifiable claims that the research will prove or refute. The hypotheses determine what we study, what we ignore, and how deep we go.
Research questions are then crafted to test each hypothesis without leading or biasing the findings. This is where most internal research goes wrong: questions are written to confirm what the team already believes. Rubikn's questions are designed to challenge assumptions, not validate them.
- Decision Brief — what we're deciding, why now, constraints, success criteria
- Hypotheses Map — testable beliefs ranked by risk
- Research Questions — designed to prove or refute, never to lead
- Scope Guardrails — what's in, what's out, and why
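For teams that keep the Hypotheses Map in a shared tool, a minimal sketch of how it might be structured as data follows. The fields, the risk scale, and the example claims are illustrative assumptions, not a prescribed Rubikn schema.

```python
from dataclasses import dataclass
from enum import Enum

class Risk(Enum):
    HIGH = 3    # if wrong, the decision changes completely
    MEDIUM = 2  # if wrong, the decision changes in part
    LOW = 1     # if wrong, the decision barely moves

@dataclass
class Hypothesis:
    claim: str            # specific, falsifiable statement
    decision_link: str    # which part of the decision it informs
    risk: Risk            # how costly it is to be wrong
    evidence_needed: str  # what would prove or refute it

def ranked(hypotheses: list[Hypothesis]) -> list[Hypothesis]:
    """Order the map by risk so fieldwork tests the riskiest beliefs first."""
    return sorted(hypotheses, key=lambda h: h.risk.value, reverse=True)

# Hypothetical example entries, invented for illustration:
hypotheses_map = ranked([
    Hypothesis(
        claim="Churned accounts left over onboarding friction, not price.",
        decision_link="Where the next two product quarters get invested.",
        risk=Risk.HIGH,
        evidence_needed="Churn-interview quotes plus CRM cancellation reasons.",
    ),
    Hypothesis(
        claim="Power users discover us via peer referral, not paid search.",
        decision_link="Whether to shift budget from SEM to community.",
        risk=Risk.MEDIUM,
        evidence_needed="Attribution data triangulated with interview recall.",
    ),
])
```

Ranking by risk is what keeps fieldwork honest: the beliefs that would most change the decision if wrong get tested first, not last.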
Phase 2: Research Design
The Research Plan specifies methods (interviews, CRM analysis, competitive audits, ad platform review, content analysis, customer surveys), sources, analysis approach, and timeline. Method selection is driven by the hypotheses — not by what's easiest or most familiar.
The Sampling Plan defines which segments, roles, or data populations to study, with quotas and inclusion/exclusion criteria. In B2B, sampling is rarely random — it's purposive. We sample for maximum variation within the relevant population (e.g., churned customers AND power users, not just whoever responds first).
If interviews or surveys are involved, we build a Recruiting Plan with a screener to ensure participants actually match the criteria. If the research is secondary (competitive analysis, CRM pattern mining, ad intelligence), we build a Source Plan documenting where evidence will come from and how recency/reliability will be evaluated.
- Research Plan — methods, sources, analysis approach, timeline
- Sampling Plan — segments, quotas, inclusion/exclusion criteria
- Recruiting Plan & Screener — if primary research
- Source Plan — where evidence comes from, recency/reliability criteria
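To show how quota tracking against a purposive Sampling Plan could work in practice, here is a small sketch. The segment names, inclusion criteria, and quota numbers are invented for the example.

```python
# Purposive sampling plan: quotas per segment, sampled for maximum
# variation rather than convenience. All values are illustrative.
SAMPLING_PLAN = {
    "churned_customer": {"quota": 4, "include": "cancelled in last 6 months"},
    "power_user":       {"quota": 4, "include": "weekly active, 12+ months tenure"},
    "recent_buyer":     {"quota": 2, "include": "closed-won in last 90 days"},
}

def coverage_gaps(completed: dict[str, int]) -> dict[str, int]:
    """Return how many participants each segment still needs."""
    return {
        segment: plan["quota"] - completed.get(segment, 0)
        for segment, plan in SAMPLING_PLAN.items()
        if completed.get(segment, 0) < plan["quota"]
    }

# e.g. after the first week of recruiting:
print(coverage_gaps({"churned_customer": 1, "power_user": 4}))
# {'churned_customer': 3, 'recent_buyer': 2}
```

A check like this is what surfaces the "whoever responds first" failure mode early, while there is still time to recruit the under-represented segments.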
Phase 3: Build & Pilot
Instruments are the actual tools used to collect evidence: interview guides, survey questionnaires, competitive claim extraction sheets, CRM analysis templates, evidence tagging schemas. Every instrument is designed to test specific hypotheses without introducing bias.
Before full fieldwork, we run a pilot — a small-scale test of the instrument with 1–2 participants or data sources. The pilot catches ambiguous questions, broken logic, missing response options, and workflow issues before they contaminate the full dataset.
The Pilot Summary documents what broke, what changed, and what's ready for production fieldwork. The Fieldwork Protocol establishes consent procedures, recording standards, versioning rules, and naming conventions so the evidence library is organized from day one — not cleaned up retroactively.
- Research Instruments — interview guides, extraction sheets, tagging schemas
- Data Capture Templates — evidence logs, claim sheets, scoring rubrics
- Pilot Summary — what broke, what changed, what's ready
- Fieldwork Protocol — consent, recording, versioning, naming conventions
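As one way a tagging schema and naming convention could be enforced before fieldwork begins, consider a sketch like the following. The tag vocabulary, hypothesis keys, and filename pattern are hypothetical.

```python
import re

# Controlled tag vocabulary so every researcher codes evidence the same way.
# Vocabulary values are illustrative assumptions.
TAG_VOCABULARY = {
    "source_type": {"interview", "crm", "competitor_site", "ad_platform", "survey"},
    "hypothesis":  {"H1", "H2", "H3"},          # keys from the Hypotheses Map
    "sentiment":   {"supports", "refutes", "mixed"},
}

# Naming convention: <date>_<source-type>_<participant-or-source-id>,
# e.g. 2024-05-14_interview_P03
FILENAME_PATTERN = re.compile(r"^\d{4}-\d{2}-\d{2}_[a-z_]+_[A-Za-z0-9-]+$")

def validate_tags(tags: dict[str, str]) -> list[str]:
    """Return a list of tagging errors; an empty list means the record is clean."""
    return [
        f"{field}={value!r} not in vocabulary"
        for field, value in tags.items()
        if value not in TAG_VOCABULARY.get(field, set())
    ]
```

Running validation at capture time, rather than during analysis, is what keeps the evidence library organized from day one instead of cleaned up retroactively.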
Phase 4: Fieldwork
Fieldwork is where research meets reality. Whether we're conducting customer interviews, mining CRM data, auditing competitor messaging, or analyzing ad performance — every data point is captured with consistent tagging, timestamped, and linked to its source.
We track fieldwork status and sample health throughout — monitoring which segments are covered, which are under-represented, and where emerging patterns might warrant adjusting the approach (theoretical sampling). Patterns are flagged as they surface, but conclusions wait until analysis.
Every piece of evidence is stored in an organized evidence library with source attribution, timestamps, and context notes. This traceability is what separates rigorous research from “we looked at some stuff” — and it's what allows the client to audit any finding back to its source.
- Fieldwork Tracker — status, sample coverage, completion rates
- Evidence Library — tagged, timestamped, source-attributed raw data
- Emerging Pattern Flags — early signals documented but not concluded
- Source Documentation — links, screenshots, transcripts, recordings
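A minimal sketch of what one traceable evidence record could look like appears below; the field names and example values are assumptions for illustration, not Rubikn's internal format.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class EvidenceRecord:
    """One data point in the evidence library; illustrative fields only."""
    record_id: str         # stable ID so findings can cite it later
    captured_at: datetime  # timestamp at capture, not at write-up
    source: str            # link, transcript reference, or screenshot path
    tags: dict             # coded against the shared tag vocabulary
    excerpt: str           # the raw quote, metric, or claim
    context_note: str      # what was happening around this data point

# Hypothetical example record:
record = EvidenceRecord(
    record_id="EV-0042",
    captured_at=datetime.now(timezone.utc),
    source="transcripts/2024-05-14_interview_P03.txt",
    tags={"source_type": "interview", "hypothesis": "H1", "sentiment": "supports"},
    excerpt="We almost cancelled in month two because setup took weeks.",
    context_note="Churned-customer interview; onboarding raised unprompted.",
)
```

The stable `record_id` plus the `source` path is the traceability mechanism: any finding in the final readout can cite the record, and the record points back to the raw transcript or screenshot.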
Phase 5: QA & Analysis
Before analysis begins, every dataset goes through a QA pass. The QA Log records checks run, issues found, and fixes applied. Common checks include: duplicate detection, source verification, recency validation, and cross-referencing claims against multiple sources (triangulation).
Cleaned data is delivered with a codebook explaining fields, tags, and any transformations applied. Nothing is a black box.
Analysis follows a structured synthesis process. The Insight Map lays out themes, the evidence supporting each theme, and a confidence rating tied back to the original hypotheses. Confidence is graded on a 3-level scale: High (multiple independent sources confirm), Medium (supported but not fully triangulated), Low (single source or indirect evidence).
The Analysis Memo translates findings into implications and clear, prioritized recommendations for the decision at hand. Every recommendation is tied to a specific finding with an explicit confidence level — so the decision-maker knows what's certain, what's probable, and what's a bet.
- QA Log — checks run, issues found, fixes applied
- Clean Dataset + Codebook — fields, tags, transformations documented
- Insight Map — themes, supporting evidence, confidence ratings per hypothesis
- Analysis Memo — implications, prioritized recommendations, confidence levels
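To make the QA checks and confidence grading concrete, here is a small self-contained sketch. The record shape and the source-count thresholds are one illustrative way to operationalize the three-level scale described above, not Rubikn's internal tooling.

```python
# Minimal QA and confidence-grading sketch; records are plain dicts with
# "id", "excerpt", and "source_type" keys (an assumption for this example).
def find_duplicates(records: list[dict]) -> list[str]:
    """Flag records whose excerpt repeats verbatim (simple duplicate check)."""
    seen: dict[str, str] = {}
    dupes = []
    for r in records:
        key = r["excerpt"].strip().lower()
        if key in seen:
            dupes.append(f'{r["id"]} duplicates {seen[key]}')
        else:
            seen[key] = r["id"]
    return dupes

def grade_confidence(supporting: list[dict]) -> str:
    """Grade a finding by how many independent source types support it."""
    source_types = {r["source_type"] for r in supporting}
    if len(source_types) >= 3:
        return "High"    # multiple independent sources confirm
    if len(source_types) == 2:
        return "Medium"  # supported but not fully triangulated
    return "Low"         # single source or indirect evidence
```

Counting distinct source types, rather than raw record counts, is the point of triangulation: ten quotes from one interview are still a single source.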
Phase 6: Deliverables & Workshop
Research that ends with “interesting findings” is research that failed. Every Rubikn engagement ends with three things: an Executive Readout, an Asset Package, and a Decision Workshop.
The Executive Readout is a tight deck or memo that states the decision, the evidence, and the recommended move — written for the person who needs to say yes, not the person who did the research.
The Asset Package contains tangible tools the team uses immediately: Reality vs. Rhetoric matrices, battlecards, positioning briefs, ICP profiles, growth scorecards, messaging architectures, landing page blueprints — whatever the engagement calls for. These aren't recommendations about what to build. They're the finished assets.
The Source of Truth is a single repository with datasets, transcripts, screenshots, and links so the record is fully traceable. Any finding can be audited back to its source.
The Decision Workshop is a focused session (typically 60–90 minutes) with a structured agenda, facilitation, and an action plan that names owners and next steps. The workshop isn't a presentation — it's a working session where the team makes the decision the research was designed to inform.
- Executive Readout — decision, evidence, recommended move
- Asset Package — battlecards, positioning briefs, ICP profiles, scorecards (varies by engagement)
- Source of Truth — full evidence repository, traceable and auditable
- Decision Workshop — facilitated, 60–90 min, with action plan and owners
Governing principles
Decision-anchored, not exploration-driven.
Every engagement is tied to a specific business decision. If we can't name the decision, we don't start the research. This prevents scope creep and ensures the output is actionable.
Hypothesis-first, not question-first.
We don't ask “what should we learn?” We ask “what do we believe, and what evidence would change our mind?” This keeps research focused and prevents the infinite-scroll problem.
Time-boxed by design.
Sprints run 2–4 weeks. The lifecycle is designed for this cadence. Unlimited timelines produce unlimited scope. Constraints produce clarity.
Triangulation over single-source.
No finding rests on one data point. We cross-reference across CRM data, customer interviews, competitive audits, and public evidence. Confidence ratings reflect how well-triangulated each finding is.
Assets, not just insights.
The output of research is a tool someone uses — a battlecard, a messaging brief, an ICP profile, a growth scorecard — not a report someone reads once and files.
Traceability end-to-end.
Every finding can be traced back to its source evidence. Every recommendation is tied to a specific insight with an explicit confidence level. The client can audit any claim.
Pilot before you scale.
Instruments are tested before fieldwork. This catches bias, ambiguity, and broken logic before they contaminate the full dataset. It costs a day and saves a week.
Confidence, not certainty.
Research rarely produces absolutes. We grade findings as High, Medium, or Low confidence — so decision-makers know what's solid and what's a bet. Honest uncertainty is more useful than false certainty.
Same lifecycle, different depth
Every Rubikn engagement follows the same 6-phase lifecycle. What changes is the depth and duration of each phase. A Competitive Proof Sprint emphasizes Phase 4 (fieldwork — deep competitive claim extraction) and Phase 6 (battlecard and messaging delivery). An ICP Sprint emphasizes Phase 2 (sampling — customer interview recruitment) and Phase 5 (analysis — behavioral segmentation). A Growth Diagnostic emphasizes Phase 4 (full-funnel data collection) and Phase 5 (constraint-based analysis).
| Phase | Proof Sprint | ICP & GTM | Positioning | Growth Diagnostic |
|---|---|---|---|---|
| 1. Decision & Hypotheses | Standard | Standard | Standard | Standard |
| 2. Research Design | Light | Deep (interview recruitment) | Medium | Medium |
| 3. Build & Pilot | Medium (claim sheets) | Medium (interview guide) | Medium (workshop prep) | Light |
| 4. Fieldwork | Deep (competitive extraction) | Deep (6–10 interviews + CRM) | Medium (buyer language mining) | Deep (full-funnel data) |
| 5. QA & Analysis | Medium | Deep (behavioral segmentation) | Deep (3-Lens framework) | Deep (constraint diagnosis) |
| 6. Deliverables & Workshop | Deep (battlecards, matrix) | Deep (ICP profiles, playbook) | Deep (positioning, messaging) | Deep (scorecard, roadmap) |
What makes this different
| Typical research | The Rubikn lifecycle |
|---|---|
| Open-ended exploration that expands with every finding. | Scope locked to a single decision, with guardrails set in Phase 1. If a new question surfaces, it goes on the backlog; it doesn't expand the current sprint. |
| A 50-page report with “findings” and “recommendations.” | Finished assets the team deploys immediately (battlecards, messaging briefs, ICP profiles, scorecards), plus a decision workshop where the team decides, not just reviews. |
| Findings presented as conclusions, with source evidence buried or missing. | Every finding links to source evidence with a confidence rating. The client can audit any claim back to the raw data via the Source of Truth repository, which ships with every engagement. |
Questions about how we work
If you want to see how the lifecycle applies to your specific situation — or want to review our methodology before engaging — we’re happy to walk through it.