Last updated: 14 April 2026

How We Rate Casinos

By James Mitchell · 200+ casinos rated since 2018

The Fortunica score is a weighted average across eight criteria, each judged from data collected during the testing protocol on the How We Test page. Output is a number between 1.0 and 5.0. The number matters less than the breakdown — a 4.2 with fast withdrawals and weak support tells you something different from a 4.2 with the reverse pattern, and the review carries both.

The Eight Criteria, With Weights and Honest Reasoning

Withdrawal speed and reliability (20%). The single thing readers most need to know. Slow or refused withdrawals are the failure mode that costs players real money.
Bonus terms and value (15%). Headline marketing rarely matches the maths. This is where most casinos cheat.
Game library and providers (15%). Not just count: provider mix and RTP transparency matter more than raw numbers.
Licensing and player protection (15%). Determines whether you have any recourse when things go wrong.
Customer support (10%). Less weight than people expect, because by the time you need support badly, the licence and withdrawal speed already told you what you needed to know.
Payment options (10%). Method count, GBP support, fees, crypto handling.
Mobile experience (10%). Most UK gambling traffic is mobile. Browser parity with desktop is the bar.
UX and security (5%). Lowest weight because most operators are technically adequate. Differentiation here is rare.

The Formula

Final score = Σ (criterion_score × weight) / 100, where each criterion is scored 1–5 from the test data and the weights are the percentages above, which sum to 100. The output is a single number to one decimal place. We don't run separate scores for "casual players" versus "high rollers" or "by payment preference" — one operator, one number. That's deliberate: the alternative is a scoring matrix that lets us never give a low rating because every operator is "ideal for someone".

Worked example. An operator scores: withdrawals 4 (good but not exceptional), bonus 3 (40x wagering with £5 max bet, average), library 5, licensing 4 (Curaçao with no major issues), support 3, payments 4, mobile 4, security 3. Weighted total: (4×0.20) + (3×0.15) + (5×0.15) + (4×0.15) + (3×0.10) + (4×0.10) + (4×0.10) + (3×0.05) = 0.80 + 0.45 + 0.75 + 0.60 + 0.30 + 0.40 + 0.40 + 0.15 = 3.85. Final score: 3.9.
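The formula and worked example above can be sketched in a few lines of code. This is an illustration of the arithmetic, not Fortunica's actual tooling; the criterion keys are shorthand for the table above, and Decimal with half-up rounding is used so a total like 3.85 reliably rounds up to 3.9 instead of falling foul of binary-float rounding.

```python
from decimal import Decimal, ROUND_HALF_UP

# Weights are the percentages from the criteria table; they sum to 100.
WEIGHTS = {
    "withdrawals": 20,
    "bonus": 15,
    "library": 15,
    "licensing": 15,
    "support": 10,
    "payments": 10,
    "mobile": 10,
    "security": 5,
}

def fortunica_score(scores: dict[str, int]) -> float:
    """Weighted average of 1-5 criterion scores, to one decimal place."""
    total = sum(scores[c] * w for c, w in WEIGHTS.items())  # 100..500
    # Decimal(total) / 100 is exact, so 385 -> 3.85 -> 3.9 (half-up).
    return float((Decimal(total) / 100).quantize(Decimal("0.1"),
                                                 rounding=ROUND_HALF_UP))

# The worked example from the text:
example = {
    "withdrawals": 4, "bonus": 3, "library": 5, "licensing": 4,
    "support": 3, "payments": 4, "mobile": 4, "security": 3,
}
print(fortunica_score(example))  # 3.9
```

A perfect sheet (all criteria at 5) gives exactly 5.0, and all 1s gives 1.0, matching the stated output range.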

What Each Score Range Means

4.5–5.0: Top tier. Fast withdrawals, fair bonuses, support that actually solves problems. I'd recommend without much qualification.
4.0–4.4: Strong. One or two specific friction points, but the core experience is reliable. Read the cons in the review carefully.
3.5–3.9: Acceptable for casual play. Watch for the specific weakness called out in the review.
3.0–3.4: Below average. Real friction in payments, support or bonus terms. Consider the alternatives we cover.
Under 3.0: Avoid. Major issues with at least one critical area: slow withdrawals, hostile T&Cs, opaque licence, or all three.
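The ranges above are a straightforward threshold lookup. A minimal sketch, with verdict labels abbreviated from the table (the function name and labels are illustrative, not part of Fortunica's published system):

```python
def tier(score: float) -> str:
    """Map a final 1.0-5.0 score to its verdict band."""
    if score >= 4.5:
        return "Top tier"
    if score >= 4.0:
        return "Strong"
    if score >= 3.5:
        return "Acceptable for casual play"
    if score >= 3.0:
        return "Below average"
    return "Avoid"

print(tier(3.9))  # Acceptable for casual play
```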

Automatic Score Caps for Specific Failures

Some failures cap the maximum possible score regardless of how the rest of the test went. These are the cases where a high overall score would mislead readers about a specific risk.

Documented withdrawal refusal in the past 12 months caps at 2.5. This includes operator-side refusals beyond reasonable KYC delays, partial-payment patterns that look like cashflow management, and any "winnings cap" enforcement that wasn't disclosed at deposit. We pull this from operator complaint pages on Casino Guru, AskGamblers and CasinoMeister, weighted by complaint volume relative to estimated player base.

Active CMS player complaints with no operator response after 30 days caps at 3.0. We accept that complaints get filed in bad faith sometimes; what we don't accept is operators ignoring complaint mediators entirely, because that's the system that actually protects players.

Hidden bonus terms enforced at withdrawal caps at 3.5. This is the specific case where a clause exists in the T&Cs but isn't visible at activation, and gets used to void winnings. We test for this by claiming the bonus, deliberately edge-casing one clause we're uncertain about, and seeing whether support cites a rule that wasn't visible to us as a player at that stage.

Failed KYC requests with no specific reason caps at 3.0. KYC can be slow for legitimate reasons. KYC that says "we cannot verify your documents" without specifying which document or why, and then fails to respond to follow-up, is a different problem.
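Mechanically, the caps act as a ceiling applied after the weighted average: the published score is the lower of the computed score and the strictest triggered cap. A sketch under that reading (the flag names are illustrative shorthand for the four failure cases above):

```python
# Maximum publishable score when a given failure is documented.
CAPS = {
    "withdrawal_refusal_12mo": 2.5,     # documented refusal in past 12 months
    "cms_complaint_ignored_30d": 3.0,   # mediator complaint unanswered 30+ days
    "hidden_bonus_term_enforced": 3.5,  # undisclosed clause used to void winnings
    "kyc_failed_no_reason": 3.0,        # KYC failure with no specific reason given
}

def apply_caps(weighted_score: float, flags: set[str]) -> float:
    """Return the weighted score, capped by the strictest triggered failure."""
    triggered = [CAPS[f] for f in flags if f in CAPS]
    return min([weighted_score] + triggered)

print(apply_caps(3.9, {"hidden_bonus_term_enforced"}))  # 3.5
```

So the 3.9 operator from the worked example would publish at 3.5 if it had enforced a hidden bonus term, however well it tested elsewhere.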

Rating Disagreements — How We Resolve Them

This section exists because no two-person editorial team agrees on every score. We try to make our resolution process visible so readers can see how subjective judgement actually enters the system.

Here's the most common case. I run a test, I score the eight criteria, and Hareem (our fact-checker) reads the draft and disagrees on one criterion (usually support or mobile). Disagreement happens roughly once every fifteen reviews. The resolution rule we use: whichever of us has the more specific, citable evidence wins. If I rated support a 4 because my live chat queue was 90 seconds, and Hareem says it's actually 3 because the email response on her parallel ticket took six days, the email evidence wins because it's a longer, more representative sample.

For larger gaps — once or twice a year, a full point of disagreement on overall score — we re-test. Different reviewer, different time of day, different deposit method. Whichever pattern repeats wins.

One disagreement I lost in 2025 was worth flagging here. I'd rated an operator 4.0 for withdrawals based on an 18-hour cashout. Hareem pointed out that her test the same week showed 50 hours, and that the operator's pattern across the AskGamblers complaint thread suggested 18 hours was the lucky tail. Final published score for withdrawals: 3.0. The casino's review notes the variance explicitly.

Score Updates

Reviews get retested every six months as standard. The score updates and gets a "score updated [date]" note at the top of the article, with one or two sentences explaining what changed. If an operator changes ownership, licence, or T&Cs in a material way, we run a fresh full test and the article may shift up or down significantly.

Reviews older than 18 months without a re-test get flagged "stale" until updated. We try not to let many sit in that state, because outdated reviews are the failure mode that hurts readers most: operators change quickly, and a four-star rating from 2024 says little about today's experience.
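The freshness rules above reduce to two age thresholds per review. A minimal sketch, with the day counts as rough equivalents of "six months" and "eighteen months" (the function and status labels are illustrative, not Fortunica's actual pipeline):

```python
from datetime import date

RETEST_DUE_DAYS = 183   # roughly six months
STALE_DAYS = 548        # roughly eighteen months

def review_status(last_tested: date, today: date) -> str:
    """Classify a review by the age of its last full test."""
    age = (today - last_tested).days
    if age > STALE_DAYS:
        return "stale"        # flagged on the article until re-tested
    if age > RETEST_DUE_DAYS:
        return "retest due"   # scheduled for the standard six-month re-test
    return "current"

print(review_status(date(2024, 9, 1), date(2026, 4, 14)))  # stale
```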