QOF Data 2017 online

I am delighted to say that the QOF data for 2016-17 is now online. There is no data for Scotland this year as QOF is no longer being used for payment there. The data is apparently still being extracted and I will consider putting in a freedom of information request. We already have some idea from England about what happens to data after payment is stopped: briefly, things that are considered useful continue and things that are thought to be useless stop.

QOF has been in quite a stable state for the last few years so there is not much in the way of changes. This year there has been more reporting of the heart failure sub-register for patients with a diagnosis of left ventricular systolic dysfunction. This has been a sub-register for some years and is used for calculating the prevalence adjustment for the indicators relating to the prescribing of ACE inhibitors (or A2IIs) and β-blockers. I have made this explicit this year.

The general trends continue with an increase in the prevalence of diabetes across the whole of the UK but a decrease in cardiovascular disease, which is generally encouraging.

There will definitely be data next year as the QOF is ongoing. What happens after that is not clear. However, we are now three and a half months before the start of the 2018-19 year and there does not seem to be any big plan around, so personally I would predict small changes only, with bigger promises for later.

Read to Snomed CT Map Explorer

Introducing the Read to Snomed Map Explorer

One thing that the QOF has brought into General Practice is that coding matters. From the beginning of QOF it mattered because of payments. These codes were then used by practice systems for QOF reminders and for more general clinical reminders. Now it is not only payments under QOF and Enhanced Services that use the codes but also other extractions for parts of the contract or central analysis.

Currently most practices use Read codes version 2, which use up to five characters to express a code. From next April these will disappear, to be replaced with Snomed CT. Snomed codes are much longer and unlikely to be remembered by clinicians from day to day in the same way that the Read codes were. Practices using version 3 of Read codes (also known as CTV3) have a bit longer before their transition.

A lot of the work has already been going on in the background. Snomed CT can, at least partially, trace its roots back to Read codes and so the translation is simpler than it might otherwise have been. The list of coded data and descriptions seen on practice systems will not look substantially different in April next year.

What will look quite different is the way that these codes ("concepts" in Snomed jargon) are linked together. Read codes are a bit of a mess in some areas and Snomed is generally rather better. This is largely due to the rigid structure of Read and the ability to revise and improve Snomed as time goes on; things can be clarified and improved over time.

This does lead to a very different hierarchy in Snomed. A search for asthma and all of its child concepts will produce quite different results in Snomed than in Read, and this is what I wanted to explore.
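
To make "a code and all of its child concepts" concrete, here is a minimal sketch of that kind of descendant search over a parent-to-children mapping. The concept names and the CHILDREN dictionary are invented for illustration; the real Snomed CT hierarchy is far larger and would normally be queried from the release files or a terminology server.

```python
from collections import deque

# Hypothetical is-a hierarchy: each concept maps to its direct children.
# These entries are invented; real relationships come from the Snomed CT release files.
CHILDREN = {
    "asthma": ["allergic asthma", "occupational asthma"],
    "allergic asthma": ["childhood asthma with allergy"],
    "occupational asthma": [],
    "childhood asthma with allergy": [],
}

def descendants(concept):
    """Collect a concept and everything below it in the hierarchy."""
    found, queue = set(), deque([concept])
    while queue:
        current = queue.popleft()
        if current in found:
            continue  # a concept can be reachable by more than one path
        found.add(current)
        queue.extend(CHILDREN.get(current, []))
    return found

print(sorted(descendants("asthma")))
```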

In my Map Explorer you can type in a code or its description and see which of its child codes no longer apply and which new concepts will appear under it. It is entirely based on Read codes so does not actually include any of the extra concepts supplied by Snomed CT. The simplest way to use the search is to type words from the code description into the search box. You can also type in the Read code, but the format is fairly specific: five characters (if there are fewer, the remaining positions are filled with dots), then a dash, then two numbers. The last two numbers identify the various different terms that can be applied to each Read code. These are referred to as synonyms but in practice can have different meanings.

For example "Cigarette Smoker" is 137P.-00 but 137P.-11 is "Smoker". They are regarded as synonyms for Read code but their different meanings are separated in Snomed CT. In Snomed a cigarette smoker "is a" smoker but not vice versa.

I have largely written this as something I am interested in exploring myself and I am aware that it is a little rough around the edges. The search in particular is based entirely on MySQL full-text search, as I don't have the knowledge or expertise to do any better. Feedback is very welcome. If you would like to know more about Snomed itself there is plenty of fairly technical material on the official Snomed site. I am trying to get some more information together for the non-specialist.
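
For anyone curious what that looks like, here is a sketch of a natural-language full-text query run from Python. The table and column names (read_terms, read_code, description) and the connection details are placeholders of my own; only the MATCH ... AGAINST syntax itself is standard MySQL, and the column would need a FULLTEXT index.

```python
import mysql.connector  # assumes the mysql-connector-python package is installed

def search_terms(words):
    """Run a MySQL natural-language full-text search over term descriptions."""
    conn = mysql.connector.connect(host="localhost", user="qof", password="secret",
                                   database="readmap")  # placeholder credentials
    try:
        cursor = conn.cursor()
        cursor.execute(
            "SELECT read_code, description "
            "FROM read_terms "
            "WHERE MATCH(description) AGAINST (%s IN NATURAL LANGUAGE MODE)",
            (words,),
        )
        return cursor.fetchall()
    finally:
        conn.close()

# e.g. search_terms("cigarette smoker")
```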

QOF indicators consultation 2016

The NICE indicators consultation is currently live and will be open until the end of the month. This is a combination of CCG indicators and QOF indicators as well as some other indicators for GPs which will not carry incentives.

You can read my response here, but there is nothing particularly inspiring. Perhaps of most concern is the suggestion of an atrial fibrillation screening scheme that is not supported by the National Screening Committee or even the NICE guidelines, although the consultation document does make it sound like the NICE guidelines say something else. Oops!

Retired QOF Indicators

QOF in 2014/15 was quite a bit smaller than it had been the year before. This was largely due to the removal of clinical indicators and the funding being moved into the global sum. Other points were moved to the new admission avoidance DES.

One thing that did not receive much publicity at the time was that NHS England planned to continue to monitor some of the indicators that had been removed. These results have now been published. Things did not go entirely to plan, and for technical and other reasons just over half of the practices actually have data available. HSCIC refer to these as "Indicators no longer in QOF" or INLIQ. This data is now on the site and can be identified by the grey colour of the data in the table.

The HSCIC does warn about comparing this data with previous years, as they say that the dates and rules may differ. In practice they don't actually vary very much at all. There is a more important reason to be a little cautious, which is that these indicators are no longer curated by the practices. Whilst exception reporting still applies, practices are far less likely to enter exception codes where there is little reason to do so. The biggest drops occur where, in the views of GPs, there has been little clinical benefit to patients.

Somerset CCG is a special case. Lots of indicators were effectively retired there in 2014/15 although prevalence was still counted and other indicators continued as quality measures. There is therefore a lot more grey in the Somerset statistics than in the rest of England.

The indicators themselves remain the same. Internally (and when I run the downloads) each indicator now has an "active" flag. If an indicator is not active then it is presented in a grey font. This gives maximum flexibility, as things may change rapidly and differently across the country in the next couple of years.
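
As a tiny illustration of how such a flag might drive the display, here is a sketch with invented indicator codes and field names; this is not the site's actual schema.

```python
# Hypothetical indicator records; "active" marks whether the indicator is still in QOF.
indicators = [
    {"code": "IND001", "active": True},
    {"code": "IND002", "active": False},  # retired (INLIQ), so shown in grey
]

def css_class(indicator):
    """Retired indicators are rendered in a grey font on the site."""
    return "indicator" if indicator["active"] else "indicator retired-grey"

for ind in indicators:
    print(ind["code"], css_class(ind))
```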

Calculating prevalence

For ease of comparison all of the prevalences on this site are based on the whole practice registered list, not just those in the relevant age group; this applies to areas such as diabetes or epilepsy. This is largely because countries other than England do not list the specific number of patients on the practice list over, say, seventeen years old. It is also the prevalence that is used for the adjustment of point value.
I was asked this week why the whole list is used for prevalence adjustment rather than the age-adjusted subgroup. Is this unfair on practices? The answer is no: it is actually fairer the way it is, but for some quite complicated reasons. We have to look at some maths.
\[
\begin{aligned}
\text{Point value} &= \pounds 160 \times \frac{\text{PracPrev}}{\text{AvgPrev}} \times \frac{\text{PracList}}{\text{AvgList}} \\
&= \pounds 160 \times \frac{\text{Register}/\text{PracList}}{\text{AvgReg}/\text{AvgList}} \times \frac{\text{PracList}}{\text{AvgList}} \\
&= \pounds 160 \times \frac{\text{Register}}{\text{PracList}} \times \frac{\text{AvgList}}{\text{AvgReg}} \times \frac{\text{PracList}}{\text{AvgList}} \\
&= \pounds 160 \times \frac{\text{Register}}{\text{AvgReg}}
\end{aligned}
\]

We start by saying that the point value is modified by the practice prevalence relative to the average practice prevalence, and then by the relative size of the practice list overall. The second line expands this a bit by expressing each prevalence as a register size over a list size. It is true that this is not exactly how the average prevalence is calculated, but it is pretty close.

After simplifying the formula there is a lot we can cancel from the top and bottom, until we get to the final formula, which basically says that the practice gets a set amount per person on the register, and that this amount drops as the national average register size rises. Nothing else matters.
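
A quick numeric check, with entirely invented figures, shows the cancellation in action: the full formula and the simplified one give the same answer regardless of the practice list size.

```python
# Invented figures purely to check the algebra above.
POINT_BASE = 160                       # approximate £ value of a point before adjustment
register, prac_list = 500, 10_000      # practice register and whole list
avg_reg, avg_list = 400, 8_000         # national averages

prac_prev = register / prac_list       # practice prevalence (whole-list denominator)
avg_prev = avg_reg / avg_list          # average prevalence

full = POINT_BASE * (prac_prev / avg_prev) * (prac_list / avg_list)
simplified = POINT_BASE * register / avg_reg

print(full, simplified)                # both come to 200.0
```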

We can try again using an 'Eligible' (age-restricted) denominator under the register.

\[
\begin{aligned}
\text{Point value} &= \pounds 160 \times \frac{\text{PracPrev}}{\text{AvgPrev}} \times \frac{\text{PracList}}{\text{AvgList}} \\
&= \pounds 160 \times \frac{\text{Register}/\text{Eligible}}{\text{AvgReg}/\text{AvgEligible}} \times \frac{\text{PracList}}{\text{AvgList}} \\
&= \pounds 160 \times \frac{\text{Register}}{\text{Eligible}} \times \frac{\text{AvgEligible}}{\text{AvgReg}} \times \frac{\text{PracList}}{\text{AvgList}} \\
&= \pounds 160 \times \frac{\text{AvgEligible}}{\text{Eligible}} \times \frac{\text{Register}}{\text{AvgReg}} \times \frac{\text{PracList}}{\text{AvgList}}
\end{aligned}
\]

Much the same process applies here, but there is a lot less to cancel out. That is not necessarily a bad thing, but we can see how this formula behaves. If we assume a practice of average list size then the last term will be one. If it has an average register size for diabetes then the middle term will be one as well. Interestingly, in this case the point value would vary with the proportion of over-17-year-olds on the practice list (i.e. Eligible would change without changing the overall list size). This is not what we want to see at all, as the practice list make-up would alter income without any change to the actual number of patients treated.
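
Again with invented figures: two practices with identical list sizes and identical diabetes registers, differing only in the proportion of over-17s, would earn different point values under this eligible-list version.

```python
# Invented figures: same list size, same register, different proportion of adults.
POINT_BASE = 160
avg_list, avg_reg = 8_000, 400
avg_eligible = 6_400                   # assume 80% of the average list is over 17

def point_value(register, eligible, prac_list):
    """Eligible-denominator version of the adjustment, as derived above."""
    prac_prev = register / eligible
    avg_prev = avg_reg / avg_eligible
    return POINT_BASE * (prac_prev / avg_prev) * (prac_list / avg_list)

print(point_value(400, 6_400, 8_000))  # 80% adults -> 160.0
print(point_value(400, 4_800, 8_000))  # 60% adults -> ~213.3, same register and list
```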

So that is why the overall list size is used to calculate prevalence.

Release notes 2015

All four countries now have data on the site for the year 2014/15. The four countries have continued to diverge in their requirements and common ground is becoming smaller. Where indicators are broadly equivalent I have tried to make them comparable.

There are a couple of things worth noting below.

Wales

Wales was first to publish this year, about two weeks ahead of Scotland and a month ahead of England and Northern Ireland, which was impressive. There are no significant issues with their statistics. They have continued to publish data about local practice groups, so I have used these on the site. I can't find any official codes for these, so they have a "QDB" code which is entirely made up by me.

Scotland

Most of the Scottish data is fairly straightforward. There seems to have been a bit of a muddle about the actual identifiers for indicators, and I have used those in the final publication. QS002 was used last year and then for a completely different indicator this year, so I have used QS002A for the new one.

England

English data imported fairly easily this year, much helped by a "raw data" publication. There is a new "sub-region" level used in the publication, which adds another level to the hierarchy on the site, with the exception of Wessex. On the spreadsheets Wessex CCG was listed as its own sub-region, so to avoid a horrible loop it simply skips the sub-region stage.

Northern Ireland

As things stand, there is no prevalence data for Northern Ireland where there is no other indicator in that disease area. This affects obesity, epilepsy and learning disability, and there is not a lot of smoking data either. I will update this if more information becomes available.

There are now Local Commissioning Groups in Northern Ireland. I have used these in much the same way as the old boards although they cover different areas, most notably by having a separate group for Belfast.

2015 Data publication dates

All of the data on this site comes from the various governments around the UK. Some of the publication dates have been announced. It takes me a little time to translate this onto the site, but for interest here are the dates currently available.