A few weeks ago the Centre for Health Economics at the University of York produced a report looking at some of the statistics in QOF. It looks in some detail at disease prevalence and, to a lesser degree, at exception reporting. The authors are particularly interested in the differences in behaviour between high scoring practices and lower scoring ones, although they also look at social and societal differences between practices.
They only looked at Scottish practices due to the rather better data that was available for them, which has got to be a pat on the back for ISD Scotland.
I won't go into detail about the mechanics of the analysis - you can read it yourself, although I would warn you that some knowledge of statistics is needed. It is not a light read. Health economics papers rarely are. Most of the really interesting findings relate to the differences between 2005 and 2006 in practices that did, and did not, get maximum points in a given area.
The results are interesting. In general terms, those practices that hit the top indicator thresholds in the first year increased their prevalences in the second year relative to those practices that did not. Conversely, those practices that did not reach the top thresholds tended to increase the amount of exception reporting they did.
Now there is probably nothing too surprising in that. It would be a rather worrying situation if an incentive scheme did not lead to changes in behaviour in the direction of the incentive, and that is exactly what is happening here. Practices tend to do the most work in the areas that attract the greatest incentive. There are certainly issues with the underdiagnosis of chronic diseases, and there are probably many people who could be exception reported but are not.
The report talks a lot about "gaming". It does not define this, however, and I struggle to find a good definition on the Internet. Perhaps the most benign definition, in this context, would be "undertaking actions to increase revenue that would not improve patient care". Actually this would encompass all exception reporting. This is not a bad definition, as they define altruism as precisely the converse (personally I think that is professionalism, but let's not get bogged down in semantics).
The authors of the report do not look so kindly on gaming. They define it thus:
However, exception reporting also gives GPs the opportunity to exclude patients who should in fact be treated in order to achieve higher financial rewards. This is inappropriate use of exception reporting or "gaming".
You can see where we are going here, can't you? By page 15 they are just calling it cheating.
That is not to say that I disagree with their mathematical analysis. I actually think it is rather brilliant, and represents an attempt to model QOF mathematically in a way that has not been seen before - in public at least.
However, they fall over in the conclusions. They cannot see any reason for these variations except cheating and dishonesty. Now that is one possible explanation for their findings, but it is by no means the only one. They seem to have very little idea of how exceptions are actually used. They don't see practices as living organisations with priorities. If you incentivise them to look for more patients, they will find them - there certainly seem to be plenty undiagnosed with diabetes and hypertension. If they are going to get extra cash for a more efficient exception reporting system, then they are likely to build one. It could simply be an indication of priorities.
None of this needs dishonest exception reporting or fraudulent diagnosis, simply an understanding of where the statistics come from. So are GPs cheating, lying scoundrels? Well, some might be, but there is no solid evidence of this on a large scale. It is reassuring (as a GP) to read their first conclusion:
The fact that practices could have treated substantially fewer patients (12.5%) without falling below the upper thresholds for indicators and thereby reducing practice revenue is compatible with altruistic motivation.
Not so bad after all!