TrustPoint is a defined evidence standard issued and maintained by Digital Scorecard. This page sets out the conditions under which a TrustPoint may be issued and when it may not. TrustPoints are not marketing claims. They are time-bound, referenceable evidence artefacts.
A TrustPoint is a verified insight derived from structured research conducted across a company's existing customer base.
It reflects repeatable customer patterns, not individual opinions or selectively presented examples.
If these conditions are not met, no TrustPoint exists.
TrustPoint is issued and governed by Digital Scorecard. It is not a review platform, a testimonial service, or a marketing tool. It is a research standard — a formally structured, independently administered, and publicly verifiable record of customer outcomes.
All TrustPoints are filed with Digital Scorecard and listed in the TrustPoint Registry.
A TrustPoint is not an anecdote, a testimonial, or a marketing claim. If an insight is based on anecdotal evidence, unverified sources, or selective reporting, it does not qualify.
The critical distinction is independence of administration. A TrustPoint finding is only valid when the research is designed, administered, and verified by Digital Scorecard — not by the business whose customers are being surveyed. This independence is what gives TrustPoint findings their evidential weight.
A TrustPoint is issued only when all of the following conditions are met.
Data is collected strictly from verified customers. No panels, purchased respondents, or anonymous sources are used. Verification is conducted against client CRM or transactional systems.
The sample must be large enough to identify repeatable patterns across the customer base. Insufficient sample size results in non-issuance. Minimum thresholds are assessed relative to business type and customer base size.
Results must meet significance thresholds. The standard threshold is statistical significance at the 95% confidence level. A 90% threshold may be applied in specific B2B contexts and is explicitly disclosed when used.
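As an illustration of what a check against these thresholds can look like, here is a minimal sketch of a two-sided one-sample proportion z-test in Python. The choice of test, the null proportion, and the figures are assumptions for illustration, not TrustPoint's documented procedure.

```python
import math

def proportion_z_test(successes: int, n: int, p0: float = 0.5) -> float:
    """Two-sided one-sample z-test for a proportion.

    Returns the p-value. Significance at the 95% level means
    p < 0.05; at the 90% level, p < 0.10.
    (Illustrative sketch only; the actual statistical procedure
    behind TrustPoint is not specified here.)
    """
    p_hat = successes / n
    se = math.sqrt(p0 * (1 - p0) / n)          # standard error under H0
    z = (p_hat - p0) / se
    return math.erfc(abs(z) / math.sqrt(2))    # two-sided normal p-value

# Hypothetical figures: 96 of 120 verified customers report the outcome,
# tested against a 50% null proportion.
significant = proportion_z_test(96, 120) < 0.05
```

With those hypothetical figures the finding clears the 95% threshold; a weaker pattern (say 55 of 100) would not.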
The research design must include mechanisms to reduce sampling and response bias. This includes use of client CRM systems for sampling, balanced question design, and data weighting where appropriate.
Every TrustPoint report must state its boundaries, including segment differences, time-based limitations, and non-universal outcomes. A TrustPoint that does not acknowledge its own constraints is not issued.
Findings must be consistent across respondents and segments. A single strong response or a finding isolated to one customer segment does not qualify. The pattern must be repeatable.
Customer-reported observations are documented separately from interpretation or positioning conclusions. What customers said is kept distinct from what it means commercially. This preserves the evidentiary integrity of the finding.
Not all customer proof is equal. TrustPoint occupies a specific and deliberate position in the hierarchy of evidence — above consumer review platforms in methodological rigour and independence, below peer-reviewed academic research and clinical trials in formal academic standing.
This positioning is deliberate. TrustPoint gives businesses access to a standard of verified customer proof that was previously only achievable through academic or clinical research, without requiring the time or cost of a clinical study.
| Evidence Type | Independence | Methodology | Verifiability | AI Readability |
|---|---|---|---|---|
| Clinical trials | Full | Highest | Full | High |
| Peer-reviewed academic research | Full | Rigorous | Full | High |
| TrustPoint | Named external administrator | Structured, statistically validated | Full — public report URL + schema.org Dataset | High |
| Consumer review platforms (e.g. Trustpilot) | Platform-level | None (unstructured) | Partial | Moderate |
| On-site testimonials | None | None | None | Low |
| Marketing claims | None | None | None | Low |
TrustPoint reports are published with schema.org Dataset structured data embedded in the page source. This is not a cosmetic feature; it is a fundamental part of the standard.
Schema.org Dataset markup provides typed, structured identification of the data as a Dataset (not a marketing page), a named external creator organisation, a documented measurement technique, sample size, publication date, and a public report URL as an independent reference point.
This means AI research tools including Perplexity, Microsoft Copilot, ChatGPT with web access, and enterprise procurement AI systems can read and interpret TrustPoint data as structured, independently verified evidence rather than as unstructured marketing text.
A marketing claim that says “96% of customers sleep better” is text. A TrustPoint result embedded as a schema.org Dataset with a named external creator, documented methodology, verified sample size, and public URL is a machine-readable data record with provenance. These are categorically different signals to AI systems performing vendor due diligence.
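As a sketch of what such markup can look like, the following Python snippet builds an illustrative JSON-LD record covering the elements named above (typed Dataset, named external creator, measurement technique, sample size, publication date, public report URL). All field values and the exact property mapping, including placing the sample size in the description, are assumptions, not the published TrustPoint schema.

```python
import json

# Hypothetical JSON-LD sketch of a TrustPoint report's schema.org
# Dataset markup; names, dates, and URLs are placeholders.
trustpoint_jsonld = {
    "@context": "https://schema.org",
    "@type": "Dataset",                       # typed as data, not marketing
    "name": "Example TrustPoint customer-outcome study",
    "creator": {
        "@type": "Organization",
        "name": "Digital Scorecard",          # named external administrator
    },
    "measurementTechnique": "Structured survey of CRM-verified customers",
    "description": "Verified sample of n=250 customers (illustrative).",
    "datePublished": "2024-01-01",            # placeholder date
    "url": "https://example.com/trustpoint/report",  # placeholder URL
}

# Embedded in the page source inside:
# <script type="application/ld+json"> ... </script>
markup = json.dumps(trustpoint_jsonld, indent=2)
```

Because the record is typed and carries a named creator, an AI system parsing the page can treat it as a dataset with provenance rather than as free text.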
The Differential Comparison Index (DCI) is a benchmark that measures how well a product performs compared with the alternatives customers have tried.
It is calculated from verified customer responses comparing their experience with similar products on the market.
DCI is scored from 0 to 100:
– 60–100: Strong comparative performance — customers rate the product as better or much better than alternatives
– 40–59: Comparable performance — performs similarly to competing products
– 0–39: Underperformance — customers rate the product lower than alternatives
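The bands above can be expressed as a simple classifier. This is an illustrative sketch of the published thresholds only; how the underlying score is computed from customer responses is not specified here.

```python
def dci_band(score: float) -> str:
    """Map a DCI score (0-100) to its performance band,
    following the thresholds stated above."""
    if not 0 <= score <= 100:
        raise ValueError("DCI is scored from 0 to 100")
    if score >= 60:
        return "Strong comparative performance"
    if score >= 40:
        return "Comparable performance"
    return "Underperformance"

band = dci_band(72)  # hypothetical score
```

Note that the band boundaries are inclusive at the lower edge, so a score of exactly 60 counts as strong comparative performance and exactly 40 as comparable.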
A high DCI score indicates that the product not only satisfies customers but outperforms other options they have used. This provides an independent competitiveness measure beyond standard satisfaction ratings.
Establishing your DCI is a game changer for how you market to your ideal clients: it is based on what your current customers actually experience and value, not on what you claim.