Every claim tied to a source. Every source tied to a date.
Rubikn’s work is built to be auditable. Every competitive claim and “what they actually ship” statement is tied to a source, a capture date, and a confidence level — so your team can defend it in demos, reviews, and procurement.
This page documents how we control quality across every engagement. It’s the same standard whether the deliverable is a battlecard, a positioning brief, or an ICP profile.
Three principles that govern every deliverable.
Every claim links to a source, a screenshot, and a capture date. When pages disappear or campaigns change, the proof still holds. Your procurement team can trace any statement back to the original evidence.
Run the same data through the same taxonomy twice and get the same answer. Reproducibility is how you know the findings are solid enough to stake decisions on — not dependent on one analyst’s judgment.
Not all evidence carries the same weight. Every finding is tagged with a confidence level — High, Medium, or Low — tied directly to the strength of the underlying evidence. No ambiguity, no hedging.
Lock scope and define what matters.
Every engagement starts by naming the decision this research serves. Not “learn about the market” — a specific decision: which competitor to target, which segment to enter, which positioning to defend. The scope determines what we collect, what we ignore, and how we measure success.
Document evidence as it surfaces.
Fieldwork demands discipline. Every claim needs its source link, a screenshot, and the date it was found — so the proof holds even when pages disappear or campaigns shift.
Clean the record, then triangulate.
After fieldwork closes, the real work begins. Strip out what doesn’t hold — outdated pages, unverifiable claims, sources too thin to trust. Then cross-check what remains against multiple signals so no single source carries the whole weight.
Removal Audit
Document what was removed and why it didn’t survive scrutiny. Every exclusion is logged — not silent.
Signal Cross-Check
Verify claims against at least two independent sources when possible. No single source carries a finding.
Secondary Validation
Test findings against reviews, analyst reports, and public documentation. If the claim only lives on the competitor’s marketing page, it’s flagged — not accepted.
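The cross-check rules above reduce to a simple test: a claim needs at least two sources, and at least one of them must be independent of the competitor's own pages. A minimal sketch of that rule (function names, flag wording, and the domain-substring check are illustrative assumptions, not Rubikn's actual tooling):

```python
def triangulation_flags(source_urls: list[str], competitor_domain: str) -> list[str]:
    """Return the reasons a claim should be flagged rather than accepted.

    A claim passes only if it has at least two sources and at least one
    of them is independent of the competitor's own pages.
    """
    independent = [u for u in source_urls if competitor_domain not in u]
    flags = []
    if not independent:
        flags.append("marketing-only: no independent confirmation")
    if len(source_urls) < 2:
        flags.append("single-source: no cross-check possible")
    return flags
```

A pricing claim sourced only from the competitor's own pricing page would collect both flags; the same claim backed by a review-site listing passes clean.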
Verify the work holds up under pressure.
Before sign-off, the same data goes through the same taxonomy a second time, and the answer has to match. Reproducibility is how you know the findings are solid enough to stake decisions on.
Not all evidence carries the same weight.
Rubikn uses a four-tier evidence hierarchy to classify every source. The tier determines how much weight a finding carries in the final deliverable — and what confidence level it earns.
Direct Product Proof
Docs, release notes, pricing pages, integration directories, UI screenshots. What they actually ship. This is the hardest evidence.
Third-Party Validation
Review sites (G2, TrustRadius), analyst notes, customer case studies, security and compliance artifacts. Independent confirmation.
Market Signals & Positioning
Ads, webinars, social media, job postings. Shows what they emphasize and invest in — but it’s what they say, not necessarily what they deliver.
Community & Forum Chatter
Forums, Reddit, community posts, secondhand reports. Works only when triangulated with stronger sources. Never stands alone.
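Put together, the audit trail and the evidence hierarchy describe a record shape: every claim carries its source, its screenshot, its capture date, and its tier. A sketch of that record (field names, the example claim, and the URLs are illustrative assumptions, not an actual schema):

```python
from dataclasses import dataclass
from datetime import date
from enum import IntEnum

class EvidenceTier(IntEnum):
    DIRECT_PRODUCT_PROOF = 1    # docs, release notes, pricing pages, UI screenshots
    THIRD_PARTY_VALIDATION = 2  # review sites, analyst notes, case studies
    MARKET_SIGNAL = 3           # ads, webinars, social media, job postings
    COMMUNITY_CHATTER = 4       # forums, Reddit, secondhand reports

@dataclass
class SourcedClaim:
    """One competitive claim and the evidence trail behind it."""
    statement: str      # the claim as it will appear in the deliverable
    source_url: str     # where the evidence lives
    screenshot: str     # path to the capture, for when the page disappears
    captured_on: date   # when the evidence was collected
    tier: EvidenceTier  # how hard the evidence is

claim = SourcedClaim(
    statement="Ships a native Salesforce integration",
    source_url="https://competitor.example/docs/integrations",
    screenshot="captures/2024-05-12-integrations.png",
    captured_on=date(2024, 5, 12),
    tier=EvidenceTier.DIRECT_PRODUCT_PROOF,
)
```

The point of the shape: no field is optional, so a claim without a source, a screenshot, or a date can't enter the record at all.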
Three confidence levels, one honest answer.
Confidence lives on a scale tied to evidence. No ambiguity, no hedging — just clear language so your team knows what they can move on and what still needs watching.
High: Tier 1 proof stands alone, or Tier 1 backed by Tier 2 signals. This is what you defend when the stakes are real. Use it in demos, procurement decks, and board conversations.
Medium: Tier 2 evidence plus corroborating signals, but no direct product proof. Strong enough to shape strategy — not strong enough to bet everything on. Flag for monitoring.
Low: Tier 3 or Tier 4 evidence only — market chatter, forum posts, secondhand reports. Mark these as hypotheses or watch items. Never present them as settled fact.
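The three rules above amount to a deterministic mapping from evidence tiers to a confidence level, which is what makes the classification reproducible. A sketch of that mapping (the treatment of a lone, uncorroborated Tier 2 source as Low is our assumption, following the rule that no single source carries a finding):

```python
def confidence_level(evidence_tiers: set[int]) -> str:
    """Map the evidence tiers behind a finding (1 = direct product proof,
    2 = third-party validation, 3 = market signals, 4 = community chatter)
    to the confidence level it earns."""
    if 1 in evidence_tiers:
        return "High"    # Tier 1 alone, or Tier 1 backed by Tier 2 signals
    if 2 in evidence_tiers and len(evidence_tiers) > 1:
        return "Medium"  # Tier 2 plus corroborating signals, no direct proof
    return "Low"         # Tier 3/4 only: hypothesis or watch item
```

Because the mapping takes only the set of tiers as input, two analysts classifying the same evidence get the same answer — the reproducibility standard in code form.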
Strip the bias, keep the facts.
No judgment language
We remove words that assume or evaluate — “struggling,” “failing,” “best-in-class.” The deliverable reports what’s observable: what they say, what they ship, where the evidence sits or doesn’t.
Let gaps speak
A competitor’s missing proof is reported as a gap, not an accusation. “No public case study found for [claim]” is evidence. “They’re lying about [claim]” is opinion. We deliver the first.
Blind scoring where possible
When scoring competitive claims, the analyst doesn’t know which competitor is being scored until the evidence is classified. This removes favoritism and anchoring bias.
How quality control shows up in your deliverables.
This isn’t a theoretical standard — it’s embedded in every asset we ship. Here’s how it maps to the deliverables your team actually uses.
Battlecards
Every competitor claim links to a source and a confidence level. Your sales team knows which claims are backed by Tier 1 evidence and defensible, and which rest on Tier 3 signals and are directional only.
Reality vs. Rhetoric Matrix
Each cell in the matrix is scored using the evidence hierarchy. The matrix shows where competitor claims match their proof — and where the gaps are.
ICP Profiles
Interview quotes are tagged by theme, validated against CRM data, and confidence-rated. No single interview carries a finding.
Positioning Briefs
Every positioning claim is tested against competitive evidence. If a competitor already owns the position, the brief says so — it doesn’t pretend the position is available.
Questions about how we work.
If your team has specific questions about our quality control process — or wants to review our methodology before engaging — we’re happy to walk through it.