
Research from the local Ombudsman dataset

Why most published CIFAS complaints still fail at the Ombudsman

The dataset behind this project contains 1,657 raw records matching CIFAS marker disputes. After duplicate case references were removed, 1,313 unique published decisions remained. Of those, 936 were not upheld and 377 were upheld.

1,657 raw records
1,313 unique published decisions
71.3% not upheld in the deduped set
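The headline figures are simple arithmetic over the deduplicated set. A minimal sketch of the calculation, using the outcome counts from this article (the toy record list and its DRN-style references are hypothetical, purely to illustrate the dedup step):

```python
# Hypothetical raw export: the same decision reference can appear more than once.
raw_records = ["DRN-0001", "DRN-0002", "DRN-0001", "DRN-0003"]

# Deduplicate by case reference to get the unique published decisions.
unique_decisions = set(raw_records)
print(len(unique_decisions))  # 3 unique references in this toy sample

# Outcome counts reported in the article for the real deduped set.
not_upheld, upheld = 936, 377
total = not_upheld + upheld          # 1,313 unique published decisions
rate = not_upheld / total            # share of complaints not upheld

print(f"{rate:.1%}")  # 71.3%
```

The same dedup-then-divide step reproduces the 71.3% figure quoted above; the real dataset would carry richer fields, but the arithmetic is no more than this.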

The public record is not hopeless. It is just stricter than most people expect.

External reporting points in much the same direction as the local dataset. Which? reported that the Ombudsman had received 1,155 complaints referencing CIFAS and that uphold rates had been fairly consistent at around 31%. That is not the same universe as the published-decision set, but it lands close enough to tell the same story: removal is possible, but it is not routine.

01

Evidence usually beats explanation

Published fraud-marker decisions often turn on whether the institution can justify the filing with evidence that actually matches the standard. Where a bank can point to a coherent fraud report, the complaint becomes much harder. Where the evidence does not get beyond suspicion, the filing becomes vulnerable.

02

The weakest cases often attack the wrong thing

A lot of failed complaints are written as moral defences rather than evidence challenges. The more effective line is not simply 'I am innocent' but 'what material did you rely on, what case type did you choose, and does the record satisfy the published standard of proof?'

03

Process matters as much as the underlying facts

Published decisions show repeated problems around wrong categories, thin investigations, and confusion between victim and suspect. Director and company-linked disputes also run into jurisdiction issues if the complaint arises from a business relationship rather than a personal consumer one.

04

Preparation still decides a lot of outcomes

Some institutions plainly hold material the complainant never sees in full. But many files also reach the Ombudsman in poor shape: weak first complaints, missing timelines, no DSAR trail, and no focused challenge to the filing standard or data-accuracy issue.

What the published decisions actually reward

Decision pattern

Cases tend to improve when the complaint focuses on the filing itself: evidence quality, the chosen marker category, whether the institution investigated properly, and whether the data can still be described as accurate.

Ombudsman warning

The Ombudsman now says AI-generated, legal-sounding complaints can be inaccurate or irrelevant. That is a reminder that the point is not to sound theatrical. It is to be clear, relevant, and evidence-led.

Useful reading: Monzo decision DRN-6003188 shows how strong institutional evidence can defeat a customer's explanation. TSB decision DRN-2936455 and Monzo decision DRN-4936783 show the opposite pattern: where the record does not get beyond suspicion or fails the "clear, relevant and rigorous" test, the marker can come off.

The next question is not "How do I explain myself?"

It is "What evidence did they rely on, what category did they file, and does the record actually satisfy the standard they were supposed to meet?" That is the point at which a serious removal challenge begins.