Welsh data now online

The QOF data for Wales in 2006/7 is now available. It actually came out about six weeks ago but I missed it at the time and heard via a reader last week.

This completes the data for 2006/7 although I do still need to update the downloads section of the site over the next couple of days.

What's the point?

A little nihilistic maybe as questions go, but when applied to QOF it would be nice to think that all this effort is doing patients some good. Paying GPs and keeping administrators gainfully employed is all very well, but ideally it would actually be achieving some health outcome.

Well there is, as yet, very little evidence of actual improvements in patient outcomes, and at least some evidence of very little improvement. It is simply too early to say for sure. An article in this week's BMJ (subscription required outside of the NHS) goes rather further and suggests that harm may actually be coming about because of the targets.

The quality and outcomes framework diminishes the responsibility of doctors to think, to the potential detriment of patients, and encourages a focus on points scored, threshold met, and income generated.

Pretty severe stuff, but it is a feeling anecdotally shared by a reasonable number of GPs and indeed some patients (not suitable for those offended by swearing). Indeed there are quite a number of points made that I would broadly agree with. There are weaknesses in the approach of QOF, in particular in the application of treatment to groups rather than to individual circumstances, although that is a problem Evidence Based Medicine has been struggling with for years - and to describe the QOF as fully evidence based is to rather push the definition.

This debate has some time to run.

Exception reporting in England - all new!

In all of the general excitement(!) of the release of the 2006/7 QOF data it would be quite easy to miss the QOF exception bulletin produced by the Information Centre for England for the same year. Not perhaps the most gripping of documents, but very useful nonetheless. It is rather dry, with plenty of statistics but relatively little comment and no exploration of the reasons behind individual indicators. If you are not familiar with exception reporting in QOF it may be worth looking back at past exception articles.

I am not going to repeat the data here; rather I will try to provide a little background to help understand what is going on. Page 11 (and to their credit the page numbered 11 is also the 11th page of the PDF - certainly not universal) shows a table of the top ten excepted indicators. The bottom ten are also shown, but I will concentrate, as I imagine most people will, on the highest figures.

Top of the list is CKD 3 (CKD and hypertension with BP less than 140/85), which has an exception rate of nearly 30%. The equivalent indicator for hypertension alone (BP5) does not even reach the top ten. What is going on here? Well, firstly, hypertension is very difficult to control in kidney disease, so maximum tolerated treatment can quite easily be reached. There is, however, a bigger and more technical issue. Following diagnosis of a condition a patient is automatically excepted for the next nine months if they don't meet the target. This was a new indicator this year, and the diagnosis was not commonly made before. On the simple assumption that practices started work on this indicator at the start of the QOF year (April 2006), up to three quarters of the patients could have been excepted if they did not hit the target (9/12). Suddenly 30% seems fairly good. We can expect to see this drop next year.
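The 9/12 arithmetic above can be sketched as a quick back-of-envelope calculation. This is purely illustrative and assumes diagnoses are spread evenly through the QOF year, which real practices will not match exactly:

```python
def auto_exceptable_fraction(window_months: float, year_months: float = 12.0) -> float:
    """Fraction of newly diagnosed patients still inside the automatic
    exception window at year end, assuming diagnoses are spread evenly
    through the year (a deliberate simplification)."""
    return min(window_months / year_months, 1.0)

# Outcome indicators carry a nine-month automatic window, so a brand new
# indicator could see up to 9/12 = 75% of patients automatically exceptable.
print(auto_exceptable_fraction(9))  # 0.75
# The three-month 'process' window gives a quarter.
print(auto_exceptable_fraction(3))  # 0.25
```

Against that 75% ceiling, an observed exception rate of 30% for a first-year indicator does indeed look unremarkable.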

Next is CHD 10 (beta blockers in CHD), which has always had a high exception reporting component. The rise this year may be due to the advice that beta blockers are not much use beyond a year after a heart attack. They are also used much less as first line treatment for hypertension than previously, due to new research. QOF is looking a bit dated here. Expect a rise again next year.

Third is AF 02 (ECG to diagnose atrial fibrillation) at 21%. Once again this indicator covers quite a short period - looking back over a year - so in this case 25% could be excepted automatically. Still fairly high though.

The timescale issue also applies to Asthma 8 (reversibility) at 20%, Stroke 11 (referred for investigation) at 18% and Dep 2 (depression scoring) at 17%. Again these only apply since the first of April 2006.

MH 6 (comprehensive care plan) actually seems quite low at 17% due to the mental health register containing everyone who has ever had a psychosis or bipolar disorder - whether they still have the condition or not. MH 9 (annual review) is much the same at 15%.

Finally in the top ten is Epilepsy 8 (fit free for a year) at 17%. This reflects the difficulty of controlling some forms of epilepsy, combined with occasional fits simply not being seen as a problem by some patients.

What is interesting is that only the epilepsy and beta blocker indicators have some clinical relevance in their exception reporting. All of the others (eight out of ten) say more about the business rules and the administrative nature of the indicator than about patients or practices. So the take home message has to be: don't place too much importance on exception reporting rates.

English data now online

The English data is now on the QOF database, joining the Scottish and Northern Ireland data, which has also been tidied up a little. The English data was a little delayed by the postal strike but eventually arrived safely.

There are a couple of "virtual" indicators, largely relating to prevalence. I have created two depression indicators relating to the prevalence of people requiring screening for depression and those who have a history of depression recorded in the past. I would avoid putting too much weight on the latter as historical coding may be really quite variable between practices. In fact as practices were not specifically working towards these virtual indicators they should all be used with some caution.

I have also been asked about smoking prevalence. There is therefore a virtual indicator here too. Here it relates to the number of smokers amongst those covered by the smoking area (those with CHD, LVD, stroke, asthma, COPD and hypertension) who have been asked. This is not the only way to do it and is purely a judgement call on my part. In particular it may not correlate with some of the "official" registers and is not the one used for payment.

You may also notice that I do not put prevalence information for the palliative care domain on the main prevalence list. For one thing this prevalence is not used for payment; secondly the numbers are generally so small as to be unreliable; and thirdly they are so small that in many cases they are suppressed for confidentiality reasons by the departments of health.

There are about half a dozen English practices without names or addresses. There was not a comprehensive look up table included with the English data this time so I have used several different sources. I will try to correct these in time.

Finally we are still awaiting the Welsh data. I have heard nothing official but will keep asking!

Resources for Primary Care Research

I have had a few emails over the last few months about using QOF data for research and trying to break down some of the data. Unfortunately QOF is quite limited in what can be divined about individual patient treatment. There is a little more potential for breaking down populations with some of the composite registers this year but things are still pretty limited.

For those looking at using primary care data there is an excellent report on all of the sources of primary care data available. A user’s guide to data collected in primary care in England is a summary of all of the data sources, including QOF, with details of their uses and limitations. It is published by the Eastern Region Public Health Laboratory - one of the rather unsung chain of public health laboratories.

This has to be essential reading for anyone conducting or even contemplating doing research or analysis on primary care data. I can't actually see that a printed version is available or I would get a copy for my bookshelf - but get it on your computer now!

UK Prevalence Data

Although we don't have full practice level data for Wales and England yet there is some national level data. We can work out prevalence in all four of the countries and for the UK as a whole. They are listed below. Smoking is not in the table as it is not listed at the national level but should be available when the practice level data comes through.

On the subject of practice level data there is some more information on the information centre website. They are planning to send out CDs so I will apply for one. Unfortunately there is a postal strike over the next week which may affect delivery somewhat. There should certainly be some demand. The 2006 full data database has been downloaded from this site over eight hundred times.

No news from Wales as yet.

England Scotland N Ireland Wales UK
Asthma 5.78% 5.48% 5.75% 6.53% 5.79%
Atrial fibrillation 1.29% 1.27% 1.25% 1.61% 1.30%
Cancer 0.91% 0.92% 0.79% 0.93% 0.91%
Chronic kidney disease 2.39% 1.82% 2.44% 2.28% 2.34%
COPD 1.43% 1.86% 1.53% 1.94% 1.49%
Coronary heart disease 3.54% 4.55% 4.18% 4.28% 3.67%
Dementia 0.40% 0.55% 0.52% 0.42% 0.41%
Depression Screening 7.24% 7.50% 7.56% 7.39%
Depression Ever 6.25% 6.13% 7.27% 6.55%
Diabetes mellitus 3.66% 3.52% 3.17% 4.21% 3.66%
Epilepsy 0.60% 0.72% 0.74% 0.73% 0.62%
Heart failure 0.78% 0.88% 0.81% 0.51% 0.78%
Hypertension 12.51% 12.61% 11.68% 14.26% 12.58%
Hypothyroid 2.55% 3.14% 2.90% 3.13% 2.63%
Learning disabilities 0.26% 0.41% 0.32% 0.30% 0.28%
Mental health 0.71% 0.79% 0.75% 0.72% 0.72%
Obesity 7.42% 7.01% 8.38% 9.64% 7.53%
Palliative care 0.09% 0.10% 0.10% 0.10%
Stroke and TIA 1.61% 1.97% 1.62% 1.97% 1.66%

Scottish and Irish data ... and that's it.

The data for Scotland and Northern Ireland was released last Monday and is now on the site. It has been a little more awkward uploading the data this year due to the changes in the areas and the appearance of areas without prevalences (palliative care) and depression having two different prevalences. I hope this makes some sense when viewing the data but nothing is set in stone and bright ideas welcome!

Wales also released some data this week but this did not go down to practice level and is therefore not particularly useful for many purposes. The statistical release was described as release one so there may be more although the site also suggests that there will not be an update for a further year. I am enquiring about further data.

Even odder is the English data. The Information Centre has spreadsheets of data at national, SHA and PCT level, but not practice level. Practice level data is available, but only one practice at a time through their own web interface. I have emailed them asking about spreadsheets but have not yet heard back. In fairness they were probably quite busy on Friday.

I will keep you informed about their replies.

New Business Rules (v10)

There is presumably some schedule behind the production of new business rules for QOF. These are the rules that govern the data extraction from practice systems, and they are negotiated across all four countries. For this reason they tend to be a bit of a camel. They pop up every six months or so, and the version numbers seem to increase by 0.5 each time. Counterintuitively it is the ones ending in .5 that are the big ones, but with version ten of the business rules recently released, what is new?

Well, not a lot. This has its downsides: mental health is still a bit of a mess with its Hotel California register (once you are on it you can never leave). For the most part, though, this will be something of a relief to practices who don't fancy changing all of their codes again.

There are a few changes worth noting. Firstly smoking exception codes have disappeared, but only for Records 22. The exception codes (for informed dissent and unsuitability) are still there for high risk groups counted in the smoking indicators.

Also in relation to smoking, patients under 20 with asthma are no longer in the high risk group. I don't know why, especially as patients of that age with diabetes, heart disease or strokes are still in there, but there you go.

More important changes have been made to dementia assessment. There is now a specific code for annual review ( 6AB ) and the old, vaguer, codes no longer count.

In a similar vein the old LVD exception codes no longer apply (those starting 9h1 ) and have been superseded with 9hH codes.

My suggested action plan for practices would be

  • Check the review codes for dementia (especially on templates) since April and make sure they are 6AB
  • Check the exception codes for heart failure (templates again) and make sure you are using 9hH codes

Happy coding!

Osteoporosis and Crystal Balls

Waiting, waiting. We are waiting for this year's data, but just around the corner is the report from the review group on what they would like to see in next year's QOF.

Well, a rather heavy hint has arrived in the form of Evaluation of standards of care for osteoporosis and falls in primary care, commissioned by the Information Centre from the King's Fund. (It was published coincidentally with the National Library for Health's Osteoporosis & Fragility Fractures National Knowledge Week, which I seem to have missed.)

The King's Fund document is a very thorough review of current information in practice systems about osteoporosis (basically there is not a lot) and the possibilities of generating some useful QOF targets. It seems to be possible. It is, however, a relentlessly practical document - for which its authors deserve a lot of credit. It is acknowledged that it is very difficult to separate differences in coding from differences in practice. New codes and a proper definition of treatment are required. The huge (and probably undefinable) strain on investigative resources in secondary care is also highlighted. One final conclusion stands out as showing an understanding of the problems with QOF.

A preferred set of codes would need to be agreed and disseminated to GPs at least three months before implementation.

You would not normally think that you needed to point out that design needs to come before implementation, but in the wake of last year's mental health mess apparently you do.

Only one problem remains - what goes out for this to come in? No word yet and very little time if it is to be implemented properly next year.

2006/7 Data Publication Dates

I do get asked quite a bit when the new data is due to come onto the site. Well all the data comes from the various departments of health in the four countries. The current plans from England and Scotland are to release the data sometime in September. Something of a relief for me, at least, as I have to get all of the new data into the database.

As an aside the English GP patient survey 2007 has been released. As this is down to practice level and is in a reasonably friendly format I will try to put this onto the site in addition. It also includes interesting figures such as rurality (really horrible word!) and deprivation factors. For largely presentation reasons however this is unlikely to precede the full QOF data (it will be linked from the 2006/7 QOF data).

Gaming, and report writing

A few weeks ago the Centre for Health Economics at York University produced a report looking at some of the statistics in QOF. It looks in some detail at both disease prevalence and, to some degree, exception reporting. They are particularly interested in the difference in behaviour between high scoring practices and lower scoring ones, although they also look at social and societal differences between practices.

They only looked at Scottish practices due to the rather better data that was available for them, which has got to be a pat on the back for ISD Scotland.

I won't go into detail about the mechanics of the analysis - you can read it yourself, although I would warn you that some knowledge of statistics is needed. It is not a light read; health economics papers rarely are. Most of the really interesting findings relate to the differences between 2005 and 2006 in practices that did, and did not, get maximum points in a given area.

The results are interesting. In general terms those practices who hit the top indicator thresholds in the first year increased their prevalences in the second year relative to those practices which did not. Conversely those practices who did not reach the top thresholds tended to increase the amount of exception reporting they did.

Now there is probably nothing too surprising in that. It would be a rather worrying situation for an incentive scheme not to lead to changes in behaviour in the direction of the incentive, and that is exactly what is happening here. Practices tend to work most in the areas that carry the greatest incentive. There are certainly issues with the underdiagnosis of chronic diseases, and there are probably many people who could be exception reported and are not.

The report talks a lot about "gaming". It does not define this, however, and I struggled to find a good definition on the Internet. Perhaps the most benign definition, in this context, would be "undertaking actions to increase revenue that do not improve patient care". Actually this would encompass all exception reporting. It is not a bad definition, as they define altruism as precisely the converse (personally I think that is professionalism, but let's not get bogged down in semantics).

The authors of the report do not look so kindly on gaming. They define it thus:

However, exception reporting also gives GPs the opportunity to exclude patients who should in fact be treated in order to achieve higher financial rewards. This is inappropriate use of exception reporting or "gaming".

You can see where we are going here, can't you? By page 15 they are just calling it cheating.

That is not to say that I disagree with their mathematical analysis. I actually think it is rather brilliant and represents an attempt to model QOF mathematically in a way that has not been seen before - in public at least.

However they fall over in the conclusions. They cannot see any reason for these variations except cheating and dishonesty. Now that is one possible explanation for their findings, but it is not the only one by any means. They seem to have very little idea of how exceptions are actually used. They don't see practices as living organisations with priorities. If you incentivise them to look for more patients they will find them - there certainly seem to be plenty undiagnosed with diabetes and hypertension. If they are going to get extra cash for a more efficient exception reporting system then they are likely to build one. It could simply be an indication of priorities.

None of this needs dishonest exception reporting or fraudulent diagnosis, simply an understanding of where the statistics come from. So are GPs cheating, lying scoundrels? Well, some might be, but there is no solid evidence of this on a large scale. It is reassuring (as a GP) to read their first conclusion.

The fact that practices could have treated substantially fewer patients (12.5%) without falling below the upper thresholds for indicators and thereby reducing practice revenue is compatible with altruistic motivation.

Not so bad after all!

Encouraging news from Leamington Spa

Readers outside the Leamington Spa area may not have seen this article in the local paper giving the rather good news that more practices than last year had gained maximum points (1,000 this year, 1,050 last year).

This is a little surprising as there are certainly more targets to reach this year. If this were repeated across the country it would certainly be a major achievement for practices.

Full details will be released by the NHS in September. Last years details are on this site. In the meantime congratulations to practices in Warwickshire.

QOF reduces admissions - or does it?

I like to be positive here. It is nice to find positive things about the QOF. I was very interested to see reports that higher QOF scores in asthma were associated with a reduction in emergency asthma admissions. Good news - or was it?

The original report (1.7M pdf) was produced by Asthma UK. To be fair, the report is a glossy affair putting across a political message rather than a scientific paper. There are virtually no figures, although some, partially processed, have been put in a couple of appendices. There are some graphs, but even these do not seem to support some of the conclusions given.

There is undoubtedly great variation in the number of emergency admissions with asthma. The greatest factor appears to be latitude, with the number going up as you go north, and pages six and seven make this clear. So far, so good. There is then a brief pause for a full page photograph of a nurse clinging to a bag and mask, and a name and shame list for PCTs. The high admitters tend to be city PCTs and the lower admitters leafy southern PCTs, a fact not commented on. The next page is titled "Why the Divide?". It starts with the sentence "The difference in hospital admissions across England is unlikely to reflect differences in the number of people with asthma." Asthma UK appears to be saying, without offering any real justification, that the number of people admitted with asthma is unrelated to the number of people with asthma. Intuitively this seems incredible, and unfortunately no evidence is given to back up this bold statement. In fact it is printed above a graph showing pretty much the opposite.

Lastly we get to the correlation with QOF points. There certainly seems to be a weak correlation between QOF score and asthma admissions in 2004/5 - the first year of QOF. This may be something of an underestimate, as they use QOF score rather than total QOF achievement. Why should that make a difference? Well, QOF scores are capped at 70% achievement. Any extra achievement above this is not counted. In 2004/5 over a third of practices got every single point in the asthma section of QOF. The extra achievement of these practices has been thrown away in the analysis.
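The capping effect is easiest to see in the shape of the points formula. The sketch below is a simplified illustration, not the official payment calculation: points scale linearly between a lower and an upper threshold, and the 25%/70% thresholds used here are illustrative (they vary by indicator and year):

```python
def qof_points(achieved: float, lower: float = 0.25, upper: float = 0.70,
               max_points: float = 1.0) -> float:
    """Points earned for a given achievement fraction. Nothing below the
    lower threshold, linear in between, and capped at the upper threshold
    so extra achievement above it earns nothing. Thresholds illustrative."""
    if achieved <= lower:
        return 0.0
    scaled = (achieved - lower) / (upper - lower)
    return max_points * min(scaled, 1.0)

# A practice at exactly 70% and one at 100% score identically,
# which is why score alone understates the best practices' achievement:
print(qof_points(0.70), qof_points(1.00))  # 1.0 1.0
```

Once a third of practices sit on the cap, their real achievement is indistinguishable in the score, and any correlation analysis based on score loses that information.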

In any event all that we can say is there is a correlation. Cause and effect is impossible to suggest without at least some data from previous years.

I would love to see some data showing that QOF is making a difference. I was disappointed that this report shows little other than a large variation in asthma admissions around the country. It does not answer the question of why half as well as a proper peer reviewed study does (though that makes no mention of QOF!).

Prevalence data for England and Northern Ireland

Prevalence data is starting to get out! In the table below you can see the data from England and Northern Ireland. The English data was taken from QMAS at the start of April and the NI data from their official prevalence bulletin. I would recommend the NI bulletin for further reading as there are a lot of nice charts showing the spread of the prevalence. When comparing the data with previous years it is important to remember that there have been big rule changes in mental health and a smaller one in LVD. Also of note is that the palliative care prevalence is for information only and does not change the cash value of points as the others do.

There are a couple of figures in the NI bulletin I don't understand - mainly the depression 2 and LVD 3 listings. I can't quite see the relevance, but I will ask!

Prevalence Area England Northern Ireland
CHD 3.551 4.196
LVD 0.790 0.818
Stroke 1.615 1.619
Hypertension 12.466 11.651
Diabetes 3.629 3.138
COPD 1.425 1.533
Epilepsy 0.590 0.745
Thyroid 2.490 2.872
Cancer 0.897 0.778
Palliative Care 0.087 0.090
Mental Health 0.716 0.753
Asthma 5.771 5.78
Dementia 0.400 0.526
Depression* 7.004 6.5
Kidney Disease 2.242 2.307
Atrial Fibrillation 1.295 1.252
Obesity 7.223 7.989
Learning disabilities 0.256 0.316
Smoking - for recording** 19.557 18.55

* I am not sure exactly what this depression figure means. I think it is the number of people eligible for depression screening.

** This is the number of people eligible to be asked regularly about their smoking. It is the combined prevalence of diabetes, hypertension, heart disease, COPD and stroke

Measuring Prevalence

One of the questions often asked by GPs about QOF data is what their prevalence data should be. There are three ways to measure the prevalence of a condition. First you can ask doctors, second ask patients and thirdly you can get out there and thoroughly examine a random group of people.

The data from QOF on this site is definitely in the first camp. I have come across some interesting disease prevalence models which try to compare QOF data against data from the Health Survey for England, which is largely, as the name suggests, a survey of the asking patients variety. It does, however, feature some objective measurement by nurses as well.

There is analysis only for heart disease and hypertension. In heart disease there is a small reduction in prevalence in QOF compared with the HSE estimate. This is probably down to a lack of coding. The differences in hypertension prevalence are much larger, with the HSE prevalence over double the QOF prevalence.

Now I have to admit that I boggled at this for a while. Could it really be that a quarter of all my patients had hypertension? Well the answer by strict interpretation seems to be "Yes". In fact the data, including the difference between the diagnosed and the actively treated has been observed for some time.

Now I don't propose to go through the rights and wrongs of this, but the fact remains that prevalence varies widely, though possibly predictably, depending on who you ask. We don't have the official figures for this year's prevalence, but kidney disease seems certain to come in well under expectations.

So before making comparisons make sure that data is all coming from the same sources. We await the official figures (traditionally Wales has been early with them but not this year).

Waiting for the Data

So far this month the data has been submitted to QMAS - which stumbled and recovered a couple of days later. PCTs should have received the final data and I have had a short holiday.

I came back from my holiday to an email asking about the status of the data currently showing on QMAS. This is not available to the public as yet as it has not been finalised. We can expect the final data, for England at least, around September. I can see from emails and server logs however that many users of this site work in the NHS itself and so may have access to the QMAS data. Can it be relied upon?

It is useful to know the stages the data passes through. Data from practices should have been signed off on QMAS by now. The signing off process signifies the practice's agreement with the submitted data, and the guidance describes it as a legal declaration that the submission is accurate. In the vast majority of cases this will lead directly to payment without further changes, often within a few weeks.

In some cases there is disagreement from the PCT, most often querying a claim. This normally relates to some disagreement over the interpretation of an indicator although occasionally it may be used to customise QOF to some local and special circumstance - particularly in the clinical areas where the data is automatically extracted. It is this process which must be completed by June, although it is rare that it takes this long.

In any case PCTs should be aware by now of any disputed figures. If you are working in the PCT it is hopefully a simple matter to find out who is dealing with these disputes, make them a coffee and ask if they are aware of any possible changes to the data. If there is no dispute the data can probably be considered as final.

The rest of us will just have to wait.

QMAS trips up

Now if there was one thing that seemed to be reliable in the new world of NHS IT it was the QMAS computer. It was hailed as a great success by Connecting For Health. It was an early win. Yes, it wasn't in their original brief and it was not an especially complex system, but it did have over 10,000 users and it worked well. Until today.

There have been no problems for the last two years, but today it has failed under load for much of the day. Nine thousand practices are trying to sign off their data. All this has snowballed for the simple reason that the more errors appear, the more people press OK and try again. Users are also managing to lock themselves out of the system.

In terms of getting data published there should not be a significant delay, but the timetables for payment are a lot tighter. We can hope for a better day tomorrow and perhaps an explanation of why it worked well for two years and then went wrong today. Now if I can just get my Glastonbury tickets..

Site downtime

The site will be down for about half an hour after 10pm on the 26th of March for those nice people at Mythic Beasts to do some essential work on the server. Sorry!

Making exception: 3 - Is analysis possible?

Scotland and England have both now published some data on exception reporting. It is no coincidence that these two countries published the data, as both use the QMAS software. For the second year running this collected data on exception reporting from practices. Indeed, on the QMAS website practices and PCTs could see the breakdown of exceptions by reason, which could be compared with the national average.

Now this was a fairly positive development. Where people can see what others are doing they tend to fall into line, and I have always been of the opinion that exception levels should be unexceptional. However, in the published data England does not get down to practice level and Scotland does not break down the reasons for exceptions. There are a couple of reasons for this, one being that each patient can only be counted under one exception reason - e.g. a newly registered patient who also dissents would be counted as only one of these. This could be dealt with at the analysis stage, and would at least produce comparable results, were it not for a more significant problem. The English document states

The testing of patient exceptions on national QOF systems (such as QMAS) is primarily focused on ensuring that data values used for achievement calculations are accurate for payment purposes. Therefore any testing of the order of sequencing (ie the order whereby […]) Different GP clinical information systems may follow different sequencing without this impacting on payment accuracy.

To translate into English this is simply to state that the method of deciding which exception applies was not actually tested on systems deployed to GP surgeries. Different computer systems may work this out differently. There is no way of checking as there is no set of business rules published for exception finding.

This hits plans for looking at individual practices quite hard. It becomes impossible to see whether the exceptions for an individual practice are entirely down to rapid practice turnover or to mass patient dissent.

Analysis is still possible though. The Scottish data goes down to practice level and gives figures for both exemptions and exceptions. Exemptions are simply those on the register to whom a particular criterion does not apply: an example would be a non-smoker, who would be exempt from smoking cessation advice. In theory the denominator of an indicator plus the exceptions plus the exemptions should add up to the register size. In practice it doesn't exactly, due to the difference in the dates they are measured, but it does get there roughly!
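That reconciliation can be turned into a quick sanity check when working through the practice-level figures. The numbers below are entirely hypothetical, just to show the arithmetic:

```python
def register_gap(register: int, denominator: int,
                 exceptions: int, exemptions: int) -> int:
    """How far denominator + exceptions + exemptions falls short of the
    register size. Zero in theory; small non-zero gaps appear in practice
    because the figures are measured on slightly different dates."""
    return register - (denominator + exceptions + exemptions)

# Hypothetical practice: 200 on the register, 150 in the denominator,
# 20 excepted and 28 exempted leaves a gap of 2 patients.
print(register_gap(register=200, denominator=150, exceptions=20, exemptions=28))  # 2
```

A gap of a handful of patients is to be expected; a large one would be worth querying before drawing any conclusions about a practice.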

I expect we will see more advanced analysis of the Scottish data in the coming weeks and months, but it is certainly possible to identify practices at either end of the exception spectrum. Being out of the ordinary does not automatically mean bad, though.

The published English data does not give a practice breakdown, but instead looks at the individual indicators. Unsurprisingly there is more exception reporting in achievement indicators than in monitoring ones. There are, of course, more possible exceptions in these areas. Top of the list for exceptions is the use of beta blockers in CHD. Simply put, they are contraindicated in asthma, COPD and peripheral vascular disease - all more common in patients with CHD. Next at 18.8% was flu jabs in asthma, probably reflecting guidance from the chief medical officer that they are not indicated in large numbers of asthmatics. Epilepsy 4 at 16.8% reflects the fact that it is not always possible to completely control epilepsy no matter how many drugs you can persuade the patient to take. The rest of the top ten is more about flu jabs and getting to target.

In short there does not seem to be any evidence of systematic manipulation of exception reporting. More than that is difficult to say, other than that the whole of the exception data is much less exciting than many people hoped, or possibly feared!

Making exception: 2 - How?

Last week I looked at the reasons for exception reporting. In this entry I will go into some detail about how exception reporting actually works in practice - in particular, how the business rules work out the exceptions and how practices decide what codes to enter.

When it comes to the business rules the exceptions fall into three main groups. Firstly, patients who are recently registered with the practice or recently diagnosed are automatically excepted. There is no need for practice intervention in this - the number of potential exceptions is simply dependent on the practice turnover and the number of new diagnoses. The length of the exception is three months for most 'process' areas (e.g. blood pressure measurement) and nine months for 'outcome' measures (e.g. blood pressure below 150/90).

Secondly, there are exceptions which apply to a whole domain. These are generally speaking for reasons of patient dissent or unsuitability (e.g. hypertension in the terminally ill). Patient dissent is taken as being either actively expressed or a failure to respond to three invitations to review.

Thirdly exceptions may apply to a specific indicator. Patients may decline to have a flu jab or be allergic to a particular drug. Alternatively they may be on the maximum possible dose of treatment drugs and there simply is no further treatment.

To add to the complication, each of the exceptions only counts if the target is missed. If the patient subsequently makes the target in an area then the exception is ignored in that area. Thus a new patient will only be excepted from a target about having their blood pressure measured until it is actually measured or three months have passed, whichever comes sooner.

In actually applying the codes there are further complications around whether the codes need to be repeatedly entered each year or not, but the above explanation should be enough to understand the basic process.
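
The basic process can be sketched in code. This is illustrative only - as there are no published business rules for exception finding, the field names and the way the three groups are combined here are my own assumptions, not the actual QMAS logic.

```python
from datetime import date, timedelta

# Illustrative sketch of the three exception groups for one 'process'
# indicator. All field names and windows are assumptions for the example.
PROCESS_WINDOW = timedelta(days=91)    # ~3 months, 'process' indicators
OUTCOME_WINDOW = timedelta(days=274)   # ~9 months, 'outcome' indicators

def is_excepted(patient, year_end, window=PROCESS_WINDOW):
    # An exception only counts if the target was missed.
    if patient["target_met"]:
        return False
    # 1. Automatic: recently registered or recently diagnosed.
    if year_end - max(patient["registered"], patient["diagnosed"]) < window:
        return True
    # 2. Whole-domain: dissent or unsuitability (e.g. terminal illness).
    if patient["dissented"] or patient["unsuitable"]:
        return True
    # 3. Indicator-specific: e.g. allergy, maximum tolerated dose.
    return patient["indicator_exception"]

year_end = date(2007, 3, 31)
new_patient = dict(target_met=False, registered=date(2007, 2, 1),
                   diagnosed=date(2005, 1, 1), dissented=False,
                   unsuitable=False, indicator_exception=False)
controlled = dict(new_patient, target_met=True)

print(is_excepted(new_patient, year_end))   # recently registered, target missed
print(is_excepted(controlled, year_end))    # target met, so exception ignored
```

The `controlled` case shows the point made above: once the target is met, any potential exception simply stops counting.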

The latter two types of exception are controlled by the insertion of Read codes by the practice. Now it would be nice to think that the practice sat down in April, worked out who it would be inappropriate to test or treat, and entered the codes accordingly. They might invite all their patients to the surgery for review and code those who declined.

In reality, of course, it doesn't work like that - certainly not in my surgery or any that I know of. Most GPs don't particularly enjoy exception coding; it somehow feels like failure. Explicit dissent is recorded throughout the year until about January, when the figures are looked at closely. It is then that unsuitable patients are coded and the letters are sent out. If the maximum threshold is crossed then we can all relax and stop exception coding.

So much for anecdote, but is there any sign that this is happening over a wide area? The answer is yes, at least in Brighton. A study there showed everyone getting much the same level of achievement in the areas that were looked at. The difference was that deprived areas had a much higher level of exception reporting. This could be interpreted as an increased level of exception reporting in reaction to targets being more difficult to reach. The alternative, and less politically correct, interpretation would be that patients in deprived areas are more resistant to treatment.

In the model presented here the two drivers to exception reporting are thus the practice list turnover and the practice's desire to seek out codes - the latter may be driven by likely achievement levels. There is also likely to be a direct population consent effect similar to that we see with immunisation uptake around the country.

Next time I will turn to the currently published data and, using what we have explored so far, consider how it can be analysed and at what level of detail.

Making exception: 1 - Why?

Exceptions have to be one of the most contentious issues in the QOF. They are considered essential by many practices and are built into the very fabric of the QOF, yet it seems that few people other than GPs actually like them.

PCTs hate them as they have the vague and ultimately unprovable feeling that they may be being cheated. Statisticians hate them because they make it very difficult to say what the real results are for the practice population. Certainly this latter argument misses the point somewhat - the point, of course, being that they make the QOF rather saner than it would be otherwise.

This is not to say that there are not things that can be learnt through exception reporting, and it is those issues that I will explore over a series of articles. The actual nitty gritty of dealing with them I will leave to another day and for the moment concentrate on the question of "What are they for?"

  1. Patients who have been recorded as refusing to attend review who have been invited on at least three occasions during the preceding twelve months.
  2. Patients for whom it is not appropriate to review the chronic disease parameters due to particular circumstances e.g. terminal illness, extreme frailty.
  3. Patients newly diagnosed within the practice or who have recently registered with the practice, who should have measurements made within three months and delivery of clinical standards within nine months e.g. blood pressure or cholesterol measurements within target levels.
  4. Patients who are on maximum tolerated doses of medication whose levels remain sub-optimal.
  5. Patients for whom prescribing a medication is not clinically appropriate e.g. those who have an allergy, another contraindication or have experienced an adverse reaction.
  6. Where a patient has not tolerated medication.
  7. Where a patient does not agree to investigation or treatment (informed dissent), and this has been recorded in their medical records.
  8. Where the patient has a supervening condition which makes treatment of their condition inappropriate e.g. cholesterol reduction where the patient has liver disease.
  9. Where an investigative service or secondary care service is unavailable.

The current criteria are listed in the box. Their actual number seems to vary from source to source, but this is more about layout than content. What is quite apparent is that they are designed to keep the QOF relevant. Some are about not penalising practices for patients' informed decisions, something that had controversially not been included in the childhood vaccination targets. In a similar vein, other exceptions are there to make sure that GPs are not encouraged to give treatments that are inappropriate or even harmful. Finally, some of the exceptions allow time for the numbers to be produced after diagnosis or registration.

All of these codes provide a valuable service in preventing inappropriate care from being incentivised. There have been some calls for the abolition of exception codes, though, sometimes arguing that the fact that the points only score up to an achievement of 90% or less does the same job. The reality is that this latter mechanism is a blunt instrument, unresponsive to local circumstances. If anything it is these top thresholds that should be abolished, with scoring continuing up to 100%. It is a bizarre system that encourages clinicians to get to 90% and then stop.

It is, however, very clear that GPs are not stopping at the upper thresholds. Most of the achievement on this site is well over these thresholds. Exception reporting is essential in removing undue pressure on patients to conform to the medical model. It is probably the easiest target for cheap shots against QOF, but the alternatives are likely to contain more perverse incentives.

Having made the case for their existence, next time I will look at how they are implemented in practice by both practices and the business rules.

Exception reporting data for Scotland

ISD Scotland has published some of the exception reporting data for Scotland. Once again they are ahead of the rest of the UK. It is, as they point out as frequently as they can, difficult data to use in any sensible way.

Over the next week or so I am planning a series of short articles about exception reporting and how it may be interpreted.

Prevalence Thresholds

In a comment on my prevalence day post it was suggested that I could publish exactly what the cut off levels were for each area in each country. These are simply 5% of the maximum prevalence in each area. Always happy to oblige - and the table is below for 2005/6.

(if you have no idea what I am talking about it might be worth reading the original article first)

[Table: 2005/6 cut-off prevalences by disease area for England, N. Ireland, Scotland and Wales]

Also interesting is the number of practices that fell beneath the threshold in each area. What is particularly interesting is that this seems to be an almost exclusively English problem. England being so much larger than the other countries, an 'outlier' practice is more likely there. In the smallest country - Northern Ireland - there are virtually no practices below the threshold at all. The numbers in the table are the proportions of practices below the threshold who will be rounded up.

[Table: proportion of practices below the threshold by disease area for England, N. Ireland, Scotland and Wales]

New Data and Services

I have been doing a little bit of work around the site over the last couple of weeks and the changes are now ready to be formally announced.

The first is this blog, which allows more frequent updates than the email alerts. The email alerts will continue as before as an infrequent alert to a data update.

There is also a new custom Google search function available on the search page. This will search QOF related sites specifically. Any suggestions of additional sites would be appreciated.

There is some improvement to the data on the site for 2005/6 - partly in the Scottish data, which now includes their September update, and some corrections of obvious errors in the Welsh data.

And finally, over on the DH Consultation Feed site there has been a complete rewrite of the plumbing to make it more reliable. This site takes the DH consultation page and repackages it as an RSS feed. A limited audience perhaps, but useful to some!

Happy Prevalence Day!

For those of us whose day has not been entirely filled with the opening of cards, it is also national prevalence day. Although the computers will not do the actual calculation for another month, today is the day that is taken as the baseline for all of the prevalence calculations. My thoughts drift back a year to a nursing home in London.

Before I explain why I think this way I should probably explain the significance of prevalence. When the contract was first presented the value of points to a practice depended solely on the number of patients (or at least notional Carr-Hill patients). There was some fuss at the time, not least pointing out that this was a disincentive to diagnose, as the targets would be harder to hit for the same amount of money. Thus an adjustment was put in. The value of a point could not be made directly proportional to the prevalence, as this would basically be a return to item of service payments. In the end the value of the point was made proportional to the square root of the prevalence.

There was one other factor which was largely ignored at the time: any practice with a prevalence below 5% of the maximum prevalence was adjusted to have exactly that 5% prevalence. Those who want more detail can read the full guide.

This is where the Nightingale House practice comes in. It is a small practice attached to a nursing home and, as such, had a lot of patients with mental health problems - in fact a huge 35.4% of their patients had severe and enduring mental health problems. This was not entirely surprising given their particular population, although it is vastly higher than the national average of 0.6%.

Unfortunately, and I must emphasise again through no fault of the practice, this made chaos of the prevalence formula. 5% of 35.4 is 1.77, and 97% of practices had a prevalence of less than 1.77%. The upshot was that the vast majority of practices were standardised to the same prevalence and all differentiation was lost. This could amount to several thousands of pounds of difference. Practices with low prevalence gained, most practices with high prevalence lost, and the square root ensured that even Nightingale House got paid less per patient than anyone else.
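
The arithmetic is easy to sketch. The following is a simplified illustration only - it ignores the Carr-Hill weighting and the actual point values, and just shows the 5% floor and the square root using the mental health figures above.

```python
import math

# Simplified sketch of the 2005/6 prevalence adjustment. Figures are
# from the mental health example; real payments also involve Carr-Hill
# weighting and actual point values, which are ignored here.
max_prevalence = 35.4              # Nightingale House (%)
floor = 0.05 * max_prevalence      # 5% of the maximum = 1.77%

def adjusted_prevalence(p):
    """Practices below the floor are treated as if they were at it."""
    return max(p, floor)

def relative_point_value(p):
    """Point value proportional to the square root of adjusted prevalence."""
    return math.sqrt(adjusted_prevalence(p))

typical = adjusted_prevalence(0.6)   # national average, pushed up to the floor
ratio = relative_point_value(35.4) / relative_point_value(0.6)
print(f"typical practice adjusted to {typical:.2f}%")
print(f"Nightingale House earns about {ratio:.1f}x per point unit "
      "despite ~59x the prevalence")
```

The ratio comes out at around 4.5, which is the square-root effect in action: roughly fifty-nine times the prevalence earns only about four and a half times the point value, hence the practice being paid less per patient than anyone else.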

Mental health was the most prominent example but there is a similar, if smaller, effect in stroke, thyroid disease and LVD. The rules are different for mental health this year as well, so we shall see if the QOF payment of every GP in England still depends on a nursing home in London.

Setting boundaries

As you might gather from my map section I quite like health geographics and find it a good way to view information. Of course the Google Maps approach is all very well, but it is basically a lot of tables laid out over a map. What is much better is to see the outlines of PCT areas. You can see neighbouring areas and it is much easier to see patterns.

I was quite interested to see a range of interactive maps produced by the regional public health observatories. There is a still picture above of one of them, and even on the small version you can see trends quite clearly. You can find the original here (as a technical aside, you will need an SVG viewer such as the Adobe one; the SVG viewer in Firefox does not appear to work with this site for various reasons).

Now this is clearly the best way to do this. You could argue with the choice of SVG to present the maps, but it is a very capable and open technology that has yet to find its feet. So why do we only see the 2005 data on these maps? Well, it appears it all comes down to money. Ordnance Survey wants lots of it to provide the data on PCT boundaries. The economics suggest that the Department of Health should receive money from the Treasury, pay it to Ordnance Survey, who will then send it back to the Treasury.

Ironically the data on PCT boundaries comes from the Department of Health in the first place, although there is a degree of processing at the OS end.

There can be few better examples of how really quite reasonable projects are held back by the insistence of the government on selling information to everyone, including itself. There is data sitting in one government department that would benefit the work of another, but it can't be used. Even the OFT believes that freeing information would make economic sense, although there are also some attempts to justify the policy.

I have had several offers to geocode the data on this site's map to allow practice level data. To be honest I have been afraid to take them up. If you want to know the grid reference of practices in the UK the Post Office wants your cash.

What would you change?

I have discovered - perhaps a little later than I should have - that NHS Employers and the GPC are asking for comments on what should go into the QOF from April 2008. They last did this two years ago and got over five hundred responses - largely, it must be said, from single issue pressure groups. Interestingly, this time they are also asking for views on sections that are already in the framework.

It is probably useful for respondents to look at the results of the last review. There are many areas there that were considered and rejected. At that review there was also the luxury of being able to add things without taking anything out; this is unlikely to be repeated. There are already areas that are considered uneconomic to pursue (ethnicity recording, for example) and to dilute incentives further would be counterproductive. It is also probably worth noting that there is considerable grass roots annoyance at "ivory tower" academic and research instruments making their way into the QOF without a terribly relevant evidence base. GPs will await the result of the review with some anxiety.

Prevalence Models from Doncaster

The public health intelligence unit in Doncaster has produced a tool to analyse prevalence data at the practice level. Models have been developed for all of the disease areas based on deprivation, sex and age. Most impressively this includes all of the new areas in the current year, such as obesity and atrial fibrillation. The tool is delivered as an Excel spreadsheet and contains a deprivation measurement for every local authority in England, along with instructions for producing more accurate figures for individual practices.

This is certainly the first attempt that I have seen to model the QOF areas systematically. They don't claim too much for these models, as they have been developed from the literature and their correspondence with the QOF areas does not seem to have been fully tested. It is also not clear how much practice prevalence would be expected to be explained by these social factors. In statistical terms it would be interesting to know what the residual variance is at a national level. This is not to put down their achievement, which is considerable, but rather to say that there is quite a lot of opportunity for further work.

They report that updates are likely, although not assured. Which leads me to one of my pet rants. As a spreadsheet there is no terribly easy way to upgrade. If all of your data is in one place then cutting and pasting it into the new version might work, but the lack of logic/data separation in spreadsheets makes this by no means guaranteed. There is, of course, not a lot of other choice when it comes to simple application delivery. Delivering databases is no easier and there is, as yet, no way for external applications to directly access QMAS. Perhaps tools like this will help to drive some demand.
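
To make the residual variance point concrete, here is a minimal sketch: invented data, a single deprivation predictor, and ordinary least squares fitted by hand. The real models use age and sex as well, and none of these numbers come from the Doncaster tool.

```python
# Invented illustration: regress practice prevalence (%) on a single
# deprivation score with hand-rolled ordinary least squares, then ask
# how much variance the model leaves unexplained.

def fit_line(xs, ys):
    """Closed-form least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical practices: deprivation score vs CHD prevalence (%).
deprivation = [10, 20, 30, 40, 50]
prevalence = [2.8, 3.1, 3.6, 3.9, 4.5]

a, b = fit_line(deprivation, prevalence)
residuals = [y - (a + b * x) for x, y in zip(deprivation, prevalence)]
mean_p = sum(prevalence) / len(prevalence)

resid_var = sum(r * r for r in residuals) / len(residuals)
total_var = sum((y - mean_p) ** 2 for y in prevalence) / len(prevalence)
r_squared = 1 - resid_var / total_var

# The interesting national-level question is how big resid_var remains
# once age, sex and deprivation have all been allowed for.
print(f"residual variance {resid_var:.4f}, R^2 {r_squared:.2f}")
```

A high residual variance at national level would suggest that practice prevalence is driven by something the social factors do not capture - coding behaviour, for instance, or genuine local epidemiology.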

What would David Cameron do?

Not the first question I would ask myself in any difficult situation but certainly has some relevance to QOF. Last week the Conservatives announced their new health policies. The one that interests us is "Outcomes not Targets".

Now it is hard to argue with that slogan. Outcomes are what matters. When it comes to drugs we are most interested in whether lives are saved or improved rather than exactly how much the blood pressure is changed or whatever. There is also no denying that most health targets are based more on process than outcome. In the QOF we have lots of targets about how often people have their blood pressure taken or an inhaler check and only one or two about any outcomes such as cholesterol or blood pressure readings.

So what do the Conservatives plan to do about it? Well, all that is on their website is a slide show as a PDF - so there is not a lot of detail - but they do talk about putting the EQ-5D into the framework. If you are not familiar with the tool then follow that link for an example of the form. It is a pretty short five-question survey and at least appears to be easy to administer.

What is not clear is what happens next. In all the literature that I can find this is very much a research tool. There is not a lot of evidence for its use in general practice, and what there is measures the patient rather than the practice. It would also be very difficult to separate out the effects of primary care, secondary care and social services.

It is probably a bit much to expect the fine details of implementation from a mid term opposition policy review. The answers to some questions would be nice though.

How is this translated into a points score and then (the final outcome) into cash? The score varies quite dramatically with age, sex and socioeconomic group. Paying for high health status would simply favour practices with younger and more affluent patients. Paying for health status improvements would reward practices starting from a low baseline rather than those who have worked hard in the past. Practices could also be 'rewarded' for a new housing estate or 'punished' for a factory closing. I will write and ask.

The Crazy World of Mental Health

For the vast majority of practices the clinical data on this site has been automatically extracted from their computer systems. There are various ways that this is done but it is all under the control of the business rules. These determine which codes indicate success or failure in each area and are thus crucial to practices. The original set of rules had a few quirks but these were fairly quickly ironed out and gave two good years of service. With the new, more complex, items in 2006/7 some more rules were needed. Nowhere was the change and complexity greater than in the mental health area. The diagnostic criteria changed from merely having a "severe and enduring" mental health problem to being exclusively psychotic and bipolar disorders. Also the individual criteria became quite involved - the worst being
MH7: The percentage of patients with schizophrenia, bipolar affective disorder and other psychoses who do not attend the practice for their annual review who are identified and followed up by the practice team within 14 days of non-attendance.
There have been three sets of rules this year: version 8.0, in which the rules for MH7 were completely incomprehensible; version 8.5, which still had problems that I listed at some length; and version 9.0, which was released shortly before Christmas.

There is no doubt that version 9.0 is much better and solves many of the previous problems, albeit sometimes in a somewhat cumbersome way. It does, however, still have some problems and has introduced a new challenge to practices. Perhaps the most dramatic change is the abolition of the explicit mental health register. In the original QOF patients were given the option of 'opting out' of the mental health register. Under the new rules entry to the register will be coded on diagnosis rather than with an explicit code. This is arguably a better way of doing things - in fact it is the way that the rest of the QOF does things. But it is a very late rule change. System suppliers and central systems are not yet upgraded, and practices may have a very short time to make sure their data fits the new rules.

Finally, there is still no provision for recovery from mental illness. Whilst much illness is lifelong, a considerable amount is short lived. Read code 212T means "Psychosis, schizophrenia + bipolar affective disorder resolved", which seems to suggest that active follow up would not be needed. Unfortunately the rules do not look for this code, so some of my patients who only had a brief period of postnatal psychosis in the 1970s are included on the register. The solution has been to code them as 9h91 "Excepted from mental health quality indicators: Patient unsuitable". An ugly bodge maybe, and one that will possibly need repeating annually, but it does illustrate the use of exception reporting as a pressure valve for problems in the business rules. I await version 9.5 with interest.
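
The behaviour I am arguing for is easy to express. This sketch is nothing like the real business rules - "PSYCHOSIS_DX" is a placeholder for the actual diagnostic Read codes, which are not reproduced here - but it shows a register that honours the resolved code (212T), with the 9h91 exception as the interim workaround.

```python
# Sketch only, not the actual business rules. "PSYCHOSIS_DX" stands in
# for the real diagnostic Read codes. 212T is the resolved code the v9
# rules currently ignore; 9h91 is the 'patient unsuitable' exception.

def on_register(codes):
    """Register membership if the resolved code were honoured."""
    return "PSYCHOSIS_DX" in codes and "212T" not in codes

def needs_follow_up(codes):
    """On the register and not excepted as unsuitable (9h91)."""
    return on_register(codes) and "9h91" not in codes

assert needs_follow_up({"PSYCHOSIS_DX"})              # active illness
assert not needs_follow_up({"PSYCHOSIS_DX", "212T"})  # resolved decades ago
assert not needs_follow_up({"PSYCHOSIS_DX", "9h91"})  # the ugly bodge
```

Under the rules as they stand, only the last line works: the resolved patient stays on the register unless the exception code is added, which is exactly the bodge described above.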

The Billion Pound Database

There is a lot of discussion in the media about the increased income of GPs, with even Mrs Hewitt wading into the debate. Once you get past the headline figures, and the fact that a good part of the increase goes straight back to HMG as increased pension contributions, you are left with the QOF income.

If there was one condition which came with the investment in the NHS during Labour's second term it was that there should be verifiable results. These were always difficult to find for general practice, and so the QOF was formed. Payment for the QOF follows sending in the statistics. It is these numbers which receive payment, and most practices have put in quite a lot of work to get them right. I have sat in innumerable meetings discussing how things should be coded, spent many hours going back through records to make them visible to QMAS, performed over a dozen practice visits and have been on the receiving end of a couple. All of this takes time and I, in common with most of the UK population, don't work for nothing. Most of the work requires a reasonable amount of clinical knowledge, so it can be difficult to delegate. Like many GPs I am self employed - I don't have a salary for my work, I have profits. The government has decided to pay for statistics. Many GPs have spent hours polishing those statistics to a high shine for inspection and assessment. It is no surprise that profits have risen.

The bigger question of the effect on clinical care is more difficult to assess as, almost by definition, there was not much data before QOF, and what data there is is greatly affected by the lack of incentive to code things at the time. The only direct comparison that I have found is an audit of diabetes, which shows some improvement, but the effect is not terribly dramatic. In the end what the government really wanted was the statistics, and they got them. So can you, either by browsing this site or by downloading the billion pound database.

New! News!

There is quite a lot about the Quality and Outcomes Framework that needs a home. There are, for instance, changes in the rules or announcements of prevalence figures that don't feature in the main dataset but may well be of interest to people browsing the site. This will also be a home for other bits of QOF information that I have written from time to time, which have likewise lacked anywhere decent to live.