In financial services, online identity verification is only “good” if it does two jobs at once: it reduces risk and it keeps onboarding moving. Product teams need predictable conversion. Risk and Compliance teams need evidence, consistency, and controls that stand up in audit.
The figures paint a clear picture. UK Finance reports that APP fraud losses reached £257.5m in the first half of 2025, up 12% year-on-year, with over £600 million stolen by fraudsters in total over the same period. Cifas also reports large volumes of fraud filings and flags the growing role of AI-enabled tactics.

Automated KYC checks: measure outcomes, not just "pass rates"
If your automated KYC checks are working, you can explain, in numbers, what happens to every cohort:
Good programmes track completion rate, time-to-decision (pass/fail/refer), and the manual review rate. They also track “why” a user was referred, using consistent reason codes that Compliance can map to policy.
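As a rough sketch of what "tracking outcomes" can look like in practice, the snippet below computes completion rate, median time-to-decision, manual review rate, and a reason-code breakdown from journey records. The field names (`outcome`, `seconds_to_decision`, `reason_code`) are illustrative assumptions, not a real provider schema.

```python
from statistics import median
from collections import Counter

# Hypothetical journey records; field names are illustrative only.
journeys = [
    {"outcome": "pass",      "seconds_to_decision": 42,   "reason_code": None},
    {"outcome": "refer",     "seconds_to_decision": 310,  "reason_code": "ADDRESS_MISMATCH"},
    {"outcome": "fail",      "seconds_to_decision": 55,   "reason_code": "DOC_EXPIRED"},
    {"outcome": "abandoned", "seconds_to_decision": None, "reason_code": None},
]

started = len(journeys)
# A journey "completes" when it reaches any decision: pass, fail, or refer.
completed = [j for j in journeys if j["outcome"] in ("pass", "fail", "refer")]

completion_rate = len(completed) / started
manual_review_rate = sum(j["outcome"] == "refer" for j in completed) / len(completed)
median_time_to_decision = median(j["seconds_to_decision"] for j in completed)

# Consistent reason codes let Compliance map referrals back to policy.
referral_reasons = Counter(
    j["reason_code"] for j in completed if j["outcome"] == "refer"
)
```

The important design choice is that abandonment is measured against journeys started, while review rate is measured against journeys decided, so the two metrics cannot mask each other.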
Together, these metrics create a clear, audit-ready trail that shows what data was used, when it was checked, and which rule produced the decision (a practical interpretation of e-KYC frameworks and assurance thinking).
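One way to make that trail concrete is to log a structured record per decision. The shape below is an assumption for illustration, not a standard or a OneID® schema; the point is that every decision captures the data used, the timestamp, and the rule that fired.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional

# Illustrative audit record; all names are assumptions, not a real standard.
@dataclass
class KycDecisionRecord:
    journey_id: str
    checked_at: str               # when the data was checked
    data_sources: list            # which independent sources were consulted
    attributes_checked: list      # what data was used
    rule_id: str                  # which policy rule produced the decision
    outcome: str                  # pass / fail / refer
    reason_code: Optional[str] = None

record = KycDecisionRecord(
    journey_id="j-1001",
    checked_at=datetime.now(timezone.utc).isoformat(),
    data_sources=["bank_network", "credit_bureau"],
    attributes_checked=["name", "date_of_birth", "address"],
    rule_id="KYC-RULE-4.2",
    outcome="refer",
    reason_code="ADDRESS_MISMATCH",
)

# asdict(record) can then be serialised into an append-only audit log.
```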
Synthetic identities are easily missed by single, one-off checks. Instead, they are more likely to show up in patterns: thin files, inconsistent attributes, or identities that do not resolve across independent sources.
Therefore, “good” includes coverage and match depth: how often can you corroborate key attributes across independent data sources, and how often do you return a clean “no match” that can be routed into stronger step-up. (This is also where perpetual KYC solutions become practical - repeatable checks that refresh from authoritative sources when risk changes, rather than re-running full onboarding each time.)
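A minimal sketch of match-depth routing, assuming each key attribute carries a count of the independent sources that corroborated it (the threshold and attribute names are assumptions):

```python
def route(attribute_matches: dict, min_sources: int = 2) -> str:
    """attribute_matches maps each key attribute to the number of
    independent sources that corroborated it."""
    if all(n == 0 for n in attribute_matches.values()):
        return "step_up"   # clean "no match": route into stronger step-up checks
    if all(n >= min_sources for n in attribute_matches.values()):
        return "pass"      # every attribute corroborated across enough sources
    return "refer"         # partial corroboration: send to manual review

route({"name": 2, "dob": 3, "address": 2})  # -> "pass"
route({"name": 0, "dob": 0, "address": 0})  # -> "step_up"
```

The same function can be re-run whenever risk changes, which is the essence of the perpetual-KYC point above: refresh the corroboration counts from authoritative sources rather than repeating full onboarding.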
A good online identity verification setup collects the minimum data needed to make a decision, then proves it can stick to that standard in day-to-day operations. The ICO is explicit that personal data should be “adequate, relevant and limited to what is necessary”, and not collected “on the off-chance” it might be useful later.
That gives you a practical set of metrics: average number of attributes requested per journey, the percentage of journeys that complete without ID upload (where your risk model and policy allow), and the proportion of users asked for documents or biometrics after an initial low-friction check.
On the back end, measure retention discipline: how many records sit beyond your defined retention periods, how quickly data is deleted when it no longer has a purpose, and how often sensitive identity data is accessed internally (access frequency is a strong proxy for whether “minimised” is real).
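Retention discipline can be checked with a simple sweep over stored records. The sketch below flags records held past a defined retention period; the 5-year period and field names are assumptions for illustration, not regulatory advice.

```python
from datetime import datetime, timedelta, timezone

# Assumed retention period; set this from your own retention policy.
RETENTION = timedelta(days=5 * 365)
now = datetime(2025, 6, 1, tzinfo=timezone.utc)

# Hypothetical stored identity records.
records = [
    {"id": "r1", "created": datetime(2019, 1, 1, tzinfo=timezone.utc), "deleted": None},
    {"id": "r2", "created": datetime(2024, 3, 1, tzinfo=timezone.utc), "deleted": None},
]

# Records still held beyond the retention period: these are the ones to report.
overdue = [
    r for r in records
    if r["deleted"] is None and now - r["created"] > RETENTION
]
overdue_rate = len(overdue) / len(records)
```

Reporting `overdue_rate` on a schedule, alongside deletion latency and internal access counts, is what turns "minimised" from a policy line into an operational number.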
If you cannot report these consistently, you are both carrying privacy risk and creating operational risk when Legal, Compliance, or auditors ask what you collect, why, and for how long.
The EDPB’s guidance on data protection by design and by default backs the same idea: minimisation is something you implement in the system, not a policy line you point to after the fact.
OneID® helps teams meet these benchmarks by verifying via banks and trusted networks, returning only the attributes needed, and supporting high-assurance, auditable decisions without building a document-first experience into every journey.
If you need a sanity check against your current provider, book a short session with OneID® and we’ll help you define the metrics that matter for your risk appetite and onboarding funnel.