A rating on a review site is only worth the process behind it. This page documents ours in enough detail that a reader, a competitor, or a regulator could reconstruct how we arrived at any individual score.
Rating new casinos is harder than rating established ones, because most of the evidence that normally carries the most weight, particularly long-run payout history and resolved complaint records, does not exist yet. We have built our framework around that reality rather than ignoring it. The result is a method that is more cautious than those of most sites in this segment and explicit about what it does and does not know.
The Problem With Rating New Casinos
A review published a month after a casino launches cannot claim, honestly, that the operator pays out reliably at scale. Nobody has requested a five-figure withdrawal yet. A review published six months in can start to say something meaningful, but only if the reviewer has actually looked.
Most sites in this segment skip that distinction. They publish the same rating format on day one that they use on year five, with the same language, the same confidence, and the same commercial urgency. We treat this as the core methodological problem of the niche, and the rest of this page describes how we try to solve it.
The Evidence Hierarchy
Every claim in a review is backed by one of four types of evidence. We use them in this order, with the strongest first.
Tier 1: Direct testing
Where practical, we register accounts, make deposits, complete identity verification, request withdrawals at multiple sizes, open support tickets across the available channels, and document the responses. This is the strongest evidence available to us because it is ours, it is timestamped, and it is repeatable.
Direct testing is not possible at every operator. Some restrict access by geography, some require commercial relationships we have declined, and some cannot be tested at meaningful withdrawal sizes without costs we are unwilling to bear. Where we have not tested directly, we say so in the review.
Tier 2: Verifiable documents
Licences with a registration number that can be checked against the issuing authority’s public register. Terms and conditions captured on a specific date. Payment method lists documented at the time of review. These rank below direct testing because they describe what an operator says, not what it does, but they are stable and checkable.
Tier 3: Corroborated reports
Player experiences sourced from multiple independent places (specialist forums, review aggregators, direct reader contact) that describe consistent patterns. One player saying a withdrawal was slow is not evidence. Thirty players across four platforms describing the same withdrawal pattern over the same weeks is.
Tier 4: Single-source claims
Operator self-description, marketing copy, and uncorroborated individual reports. We read this material because it has to be read, but we do not let it drive ratings.
A rating that leans heavily on Tier 3 and Tier 4 evidence is explicitly flagged as provisional. That flag is not a marketing decoration. It changes the review’s language, its recommendation strength, and its position in our ranking.
The Five Scoring Dimensions
Every operator is scored on the same five dimensions. Weights are set according to which factors cause the most harm when they go wrong, not which are easiest to assess.
1. Licensing credibility (25%)
Scored against four specific checks: whether the licence number resolves to an active entry on the issuing authority’s public register; whether the registered company on that entry matches the brand or its disclosed parent; whether the licence has been subject to any published sanction or suspension; and the enforcement record of the jurisdiction itself. A Malta Gaming Authority or Isle of Man licence starts higher than a Curacao sub-licence, which starts higher than a licence from a jurisdiction with no accessible enforcement history.
Operators fail this dimension outright if the licence cannot be verified on a public register, if the registered company does not match the brand, or if the licence is under active sanction. Failure on any of these triggers the licensing floor (see below).
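The hard-fail logic above reduces to a single predicate. A minimal sketch, with illustrative parameter names (the fourth check, jurisdiction enforcement record, affects the score but not the floor):

```python
def licensing_floor_failed(licence_on_public_register: bool,
                           company_matches_brand: bool,
                           under_active_sanction: bool) -> bool:
    # Any single hard failure is enough to trigger the licensing floor;
    # strong performance on the other checks cannot compensate.
    return (not licence_on_public_register
            or not company_matches_brand
            or under_active_sanction)
```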
2. Payout evidence (25%)
Split into three sub-components, with the final score dominated by whichever is weakest.
- Stated policy: withdrawal processing times, minimum and maximum withdrawal amounts, method restrictions, and verification requirements as documented in the terms.
- First-hand testing: where possible, we run at least one small withdrawal and one larger withdrawal through identity verification and observe the elapsed time, any requests for additional documentation not disclosed upfront, and any unexplained holds.
- Reader and forum reports: pattern of complaints about withdrawal times, pending status durations, and cancellations, weighted by volume and independence of sources.
A casino can score well on stated policy and still fail the other two. The reverse is rarer but treated identically.
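Under the simplest reading of "dominated by whichever is weakest", the combination is a hard minimum over the three sub-scores. A sketch, assuming each sub-score is already on a 0–100 scale:

```python
def payout_evidence_score(stated_policy: float,
                          first_hand_testing: float,
                          corroborated_reports: float) -> float:
    # The weakest sub-component sets the dimension score: a generous
    # written policy cannot offset observed withdrawal problems, and
    # clean testing cannot offset a corroborated complaint pattern.
    return min(stated_policy, first_hand_testing, corroborated_reports)
```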
3. Terms and conditions (20%)
Scored on four specific components:
- Bonus fairness: wagering requirement against industry norms (35x is typical, 60x+ is flagged), maximum win caps on bonuses and free spins, game weighting disclosure.
- Withdrawal fairness: weekly and monthly withdrawal limits, pending periods, method restrictions tied to deposit method.
- Account provisions: dormancy rules, closure grounds, confiscation clauses.
- Clarity: whether the most restrictive terms are findable without navigating through multiple documents.
Terms that are fair but obscured score lower than terms that are fair and prominent. Terms that are unfair on any of the first three components cap the overall rating, regardless of performance elsewhere.
4. Operator group and transparency (15%)
Most rating frameworks omit this. For new launches it carries real weight, because the parent group’s track record is often the most reliable predictor of how the new brand will behave. We score on three specific things:
- Ownership disclosure: is the operating company identified in the terms, and does it match the licence registration? Hidden or deliberately obscured ownership caps this score.
- Group history: what other brands does the operator group run, and what is their combined payout and complaint record? A new brand from a group with a clean record starts higher than an identical brand from a group with unresolved complaints.
- White-label signals: is the site running on a platform shared with other operators, and is that disclosed? White-label is not disqualifying, but undisclosed white-label is.
Full marks require clear ownership, a documented group history, and open disclosure of the technical setup. Sites that meet none of these score zero on this dimension.
5. Player protection (15%)
Voluntary deposit limits, session timers, self-exclusion, cooling-off periods, and reality checks. None are mandated in the non-GamStop segment, which means their presence or absence is itself a signal about how the operator views its players. Operators with no protection tools at all score zero, not a low score.
Scored on a simple checklist: one point for each of deposit limits (daily, weekly, monthly), loss limits, session time limits, self-exclusion (at least one month, six months, and permanent options), cooling-off periods, reality check notifications, and prominent responsible gambling links in the footer. The total is normalised to a 15% weighting.
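The checklist arithmetic can be sketched as follows, treating each named item as one point; the page does not say whether daily, weekly, and monthly deposit limits count separately, so this assumes they are grouped into a single item:

```python
CHECKLIST = (
    "deposit_limits",       # daily, weekly and monthly options
    "loss_limits",
    "session_time_limits",
    "self_exclusion",       # 1-month, 6-month and permanent options
    "cooling_off_periods",
    "reality_checks",
    "rg_links_in_footer",   # prominent responsible gambling links
)

def player_protection_score(features: set) -> float:
    # One point per checklist item present, normalised to the
    # dimension's 15-point share of the overall 100-point score.
    points = sum(1 for item in CHECKLIST if item in features)
    return 15 * points / len(CHECKLIST)
```

An operator offering none of the tools scores zero by construction, matching the rule above.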
How Scores Translate Into Ratings
The weighted sum produces a raw score out of 100. That score is then subject to three overrides:
- Licensing floor. An operator that fails any of the hard licensing checks (an unverifiable licence, a registered company that does not match the brand, or an active sanction) cannot receive an overall recommendation, regardless of how it performs elsewhere.
- Payout floor. Verified non-payment, or a pattern of withdrawal delays corroborated across multiple independent sources, caps the overall rating at the lowest tier.
- New-casino provisional cap. Operators in their first six months of operation cannot reach the top rating tier regardless of raw score. The rating is explicitly flagged as provisional, and the review states which evidence is missing.
These overrides exist because an averaged score can disguise serious failures. A casino with a clean licence, generous terms, and polite protection tools that nonetheless pays slowly is still a casino that pays slowly, and the rating should reflect that.
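Putting the weights and overrides together, the pipeline can be sketched as below. The weights match the percentages above; the two tier-boundary constants are illustrative assumptions, not values stated on this page:

```python
WEIGHTS = {"licensing": 0.25, "payout": 0.25, "terms": 0.20,
           "transparency": 0.15, "protection": 0.15}

LOWEST_TIER_CAP = 20   # assumed ceiling for the lowest rating tier
TOP_TIER_FLOOR = 90    # assumed threshold for the top rating tier

def overall_rating(scores, licensing_failed, payout_floor, months_live):
    """Weighted raw score out of 100, then the three overrides in order.

    scores: dict of 0-100 sub-scores keyed like WEIGHTS.
    Returns (score, flags); 'no-recommendation' overrides the number.
    """
    raw = sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
    flags = []
    if licensing_failed:                  # licensing floor
        flags.append("no-recommendation")
    if payout_floor:                      # payout floor
        raw = min(raw, LOWEST_TIER_CAP)
        flags.append("lowest-tier")
    if months_live < 6:                   # new-casino provisional cap
        raw = min(raw, TOP_TIER_FLOOR - 1)
        flags.append("provisional")
    return round(raw, 1), flags
```

Note that the overrides are applied to the finished weighted sum rather than folded into the average, which is what keeps a serious failure from being diluted by strong scores elsewhere.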
What Happens After Publication
A rating is a snapshot. Operators in this segment change terms, change ownership, change payment processors, and change support providers without announcement. We monitor active listings continuously, with heavier attention to:
- Operators in their first year of trading, where behaviour is most likely to shift
- Operators with pending reader reports under investigation
- Operators whose licensing jurisdiction has issued recent enforcement actions
- Operators that have recently changed terms, bonus structures, or payment methods
When new evidence materially affects a rating, the listing is updated. When the change is significant enough that existing account holders should know about it, we say so at the top of the review.
Why Operators Get Removed
Removal happens when an operator falls below the minimum standard for inclusion, not because of a commercial dispute or a single negative report. The specific triggers:
- Verified non-payment of legitimate withdrawals
- A consistent pattern of withdrawal delays beyond stated processing times, corroborated across independent sources
- Licence suspension, revocation, or unresolved regulatory action
- Material changes to terms that introduce unfair conditions after sign-up
- Operator failure to respond to serious reader complaints we have escalated
- Evidence of identity misrepresentation, hidden ownership changes, or relaunches designed to shed prior complaint histories
When an operator is removed, the review is not deleted. Readers with existing accounts need to be able to find out what happened. The review is marked as delisted, the cause is stated, and it is left accessible.
What Our Framework Does Not Measure
It is worth being direct about what this framework does not capture. It does not measure game quality, software provider prestige, or aesthetic factors. It does not score bonus size, free spin count, or loyalty programme value. Those things matter to players, and we write about them in individual reviews, but they do not enter the safety rating. A casino can offer the best welcome bonus in the market and still be unsafe. A casino can offer a mediocre bonus and still be the right recommendation for a careful player.
The rating on this site answers one question: how much risk this operator is likely to expose you to, relative to other operators in the same segment. Anything beyond that, the individual review covers.
Holding Ourselves to This
Publishing a methodology is a commitment. A rating framework that looks rigorous on this page but is not applied consistently to every review is worse than no methodology at all, because it trades on credibility it has not earned.
Two checks on that risk:
- Every published review is tagged with which evidence tiers it draws on and whether it is provisional under the new-casino cap. If a review does not show those tags, it has not been produced under this framework and should not be treated as having been.
- If a reader believes a specific rating is not consistent with what this page describes, we want to know. Challenges to specific ratings, sent to our contact address, are read and responded to, and where the challenge is correct, the rating is updated.
In a market without a UK regulator, without mandatory dispute resolution, and without a minimum standard operators must meet before they open, the quality of the information available to players before they deposit is one of the only meaningful protections they have. A methodology page is only useful if it is honoured. We try.