Every letter grade you see on HowSociable is derived deterministically from cited signals. No editor tweaks the score by hand. This page documents the exact formula: what feeds in, how each source is weighted, and where the letter-grade thresholds sit.
A “trust signal” is a single externally verifiable data point: a Trustpilot aggregate, a Reddit thread, a BBB complaint count, a news article, or a result from our own 30-day test. Every signal an editor records has a direct link to its origin, the date it was observed, and a polarity (positive, neutral, or negative).
Nothing is paraphrased or generated. If we can't link to the origin, we don't cite it.
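As a sketch, a recorded signal might be shaped like the TypeScript interface below. The field names are hypothetical illustrations, except `observedAt`, which appears later on this page:

```typescript
// Hypothetical sketch of a recorded trust signal. Field names are
// illustrative; observedAt is the freshness date mentioned later on
// this page.
interface TrustSignal {
  sourceUrl: string;   // direct link to the origin; no link, no citation
  observedAt: string;  // ISO date the editor captured the signal
  polarity: "positive" | "neutral" | "negative";
  weight: number;      // source reliability on the 0-10 scale below
}

// Example: a Trustpilot aggregate recorded as a positive signal.
const example: TrustSignal = {
  sourceUrl: "https://www.trustpilot.com/review/example.com",
  observedAt: "2024-01-15",
  polarity: "positive",
  weight: 8,
};
```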
Not all sources are created equal. A moderated review on a regulated platform counts for more than an anonymous tweet. The rule of thumb: the harder it is to fake and the stricter the moderation, the higher the weight. Our own 30-day tests top the scale because we can guarantee they happened.
| Source | Default weight |
|---|---|
| HowSociable 30-day test | 10/10 |
| Better Business Bureau | 8/10 |
| Trustpilot | 8/10 |
| G2 | 8/10 |
| Trusted Reviews | 7/10 |
| Sitejabber | 6/10 |
| Reviews.io | 6/10 |
| Google Reviews | 6/10 |
| News / press | 6/10 |
|  | 5/10 |
| Published refund policy | 5/10 |
| Independent blog | 4/10 |
| Forum | 4/10 |
| Domain WHOIS | 4/10 |
| Social media | 3/10 |
| Search engine | 3/10 |
| SSL certificate | 3/10 |
| Other source | 2/10 |
Admins may override any single signal's weight (capped at 10) when context warrants — for example, reducing a Trustpilot row if the profile has obvious fake-review patterns.
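Under those rules, a signal's effective weight might be computed as in this minimal sketch. The weight map is abridged from the table above, and the function name is hypothetical:

```typescript
// Sketch only: abridged default weights from the table above, plus an
// admin override capped at 10. Names are hypothetical.
const DEFAULT_WEIGHTS: Record<string, number> = {
  "howsociable-30-day-test": 10,
  "trustpilot": 8,
  "sitejabber": 6,
  "social-media": 3,
};

// An override replaces the default but can never exceed the 10-point cap.
function effectiveWeight(source: string, override?: number): number {
  const base = DEFAULT_WEIGHTS[source] ?? 2; // "Other source" default
  if (override === undefined) return base;
  return Math.min(10, Math.max(0, override));
}
```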
A positive signal contributes weight × 1.0 to the score. A negative signal contributes weight × −1.3. Neutral signals add context but don't move the score.
The ~30% penalty mirrors the practice at Trustpilot, Sitejabber, and the BBB: critical reviews are costlier to fabricate, and satisfied buyers rarely write one. Weighting negatives harder is how these platforms keep a flood of five-star reviews from drowning out legitimate red flags.
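The contribution arithmetic can be sketched directly from the multipliers above (the type and function names are hypothetical):

```typescript
// Minimal sketch of a single signal's score contribution, using the
// multipliers stated above: positive ×1.0, negative ×−1.3, neutral ×0.
type Polarity = "positive" | "neutral" | "negative";

const POLARITY_MULTIPLIER: Record<Polarity, number> = {
  positive: 1.0,
  neutral: 0,      // adds context, never moves the score
  negative: -1.3,  // the ~30% penalty on critical signals
};

function contribution(weight: number, polarity: Polarity): number {
  return weight * POLARITY_MULTIPLIER[polarity];
}
```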
Every signal is tagged with one category. We aggregate signals within each category (weighted sum, then normalized to 0-100) to produce the spider-chart you see on every Trust Report. This makes it obvious whether a service is strong on retention but weak on support, for example.
- **Authenticity:** Are followers / engagement real humans, or bots that will be swept away?
- **Accuracy:** How closely does what's delivered match what's advertised?
- **Value:** Is the price fair for what you actually receive?
- **Support:** Can you reach a human, and do they help?
- **Retention:** Do the followers / likes stick, or drop off within days?
- **Delivery:** Does it arrive on time and at the promised volume?
- **Security:** How are your account credentials and personal data handled?
- **Refunds:** If something goes wrong, do they honour the refund policy?
- **Transparency:** How transparent is the company about who they are?
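One plausible reading of the per-category aggregation is sketched below: a weighted sum of polarity contributions, rescaled so that an all-positive category maps to 100, an all-negative category maps to 0, and no data sits at the neutral midpoint of 50. The exact normalization is an assumption, and the names are hypothetical:

```typescript
// Sketch of per-category aggregation under one plausible normalization:
// raw ratio lands in [-1.3, 1.0] because negatives carry a ×−1.3
// multiplier, so negatives are rescaled by 1.3 before mapping to 0-100.
interface Signal {
  category: string;
  weight: number;     // effective weight, 0-10
  multiplier: number; // +1.0, 0, or -1.3
}

function categoryScore(signals: Signal[], category: string): number {
  const rows = signals.filter((s) => s.category === category);
  const totalWeight = rows.reduce((sum, s) => sum + s.weight, 0);
  if (totalWeight === 0) return 50; // no data: neutral midpoint
  const raw = rows.reduce((sum, s) => sum + s.weight * s.multiplier, 0);
  const ratio = raw / totalWeight;                 // in [-1.3, 1.0]
  const scaled = ratio >= 0 ? ratio : ratio / 1.3; // rescale to [-1, 1]
  return 50 + 50 * scaled;                         // map to 0-100
}
```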
The overall score is the weighted average across every signal — each signal contributes effectiveWeight × polarityMultiplier. The result is normalized to a 0-100 scale where 50 = neutral, 100 = fully positive, and 0 = fully negative. That number maps to a letter:
| Letter | Score range |
|---|---|
| A+ | 95–100 |
| A | 90–94 |
| A- | 85–89 |
| B+ | 80–84 |
| B | 75–79 |
| B- | 70–74 |
| C+ | 65–69 |
| C | 60–64 |
| C- | 55–59 |
| D+ | 50–54 |
| D | 40–49 |
| F | 0–39 |
Thresholds mirror the spread used by Trustpilot / Sitejabber / BBB: most “good” companies cluster in the B to A- range. A+ is reserved for standout performers.
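Assuming a conventional ladder from A+ down to F across those ranges (the exact labels below the C range are an assumption), the mapping might look like:

```typescript
// Sketch of the score-to-letter mapping. Thresholds are lower bounds,
// checked from highest to lowest; anything under 40 falls through to F.
const THRESHOLDS: Array<[number, string]> = [
  [95, "A+"], [90, "A"], [85, "A-"],
  [80, "B+"], [75, "B"], [70, "B-"],
  [65, "C+"], [60, "C"], [55, "C-"],
  [50, "D+"], [40, "D"],
];

function scoreToLetter(score: number): string {
  for (const [min, letter] of THRESHOLDS) {
    if (score >= min) return letter;
  }
  return "F";
}
```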
Below a minimum combined weight (roughly the equivalent of one high-reliability signal plus one mid-tier signal), we publish “Gathering data” instead of assigning a letter. Inventing a grade from one or two data points would be misleading — a statistical accident, not a judgment. Products in this state show up as Pending on the Trust Reports index.
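A sketch of that gate, assuming a combined-weight threshold of 16 (one 10/10 high-reliability signal plus one 6/10 mid-tier signal, read off the weight table above); the names and the exact threshold are assumptions:

```typescript
// Sketch of the minimum-evidence gate: below the assumed combined
// weight of 16, publish "Gathering data" instead of a letter grade.
const MIN_COMBINED_WEIGHT = 16;

function publishedGrade(weights: number[], letter: string): string {
  const combined = weights.reduce((sum, w) => sum + w, 0);
  return combined >= MIN_COMBINED_WEIGHT ? letter : "Gathering data";
}
```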
Editorial opinion lives in the review itself. The Trust Grade is a deterministic signal aggregate, not an opinion piece.
No vendor can buy a grade. No affiliate relationship moves the score. Ad disclosures live on every affected page.
Trustpilot, Sitejabber and BBB are inputs into our grade — weighted alongside independent sources, news coverage, and our own tests. We never treat any single platform as ground truth.
Every grade recomputes whenever a signal is added, retracted, or re-weighted. The observedAt date on every source tells you how fresh each input is.
Every reviewed service, ranked by its cited Trust Grade.