A look at GP at Hand

One of the things that I was interested to look at when the QOF data came out last year was how GP at Hand performed. A lot has been written over the past year or so about this service, which uses a chatbot app as the first point of contact. For the whole of the QOF year in question the service had restrictions, since lifted, on registering patients with long term conditions. This has led to concerns that GP at Hand has "cherry picked" patients who are younger and fitter, leaving other practices to deal with patients who have more pathology.

This has been denied by GP at Hand. Actually, as we will see, there is little doubt that they have younger patients, but they argue that this is resource neutral under the Carr-Hill formula, which adjusts the practice Global Sum according to the age and sex of patients. This was introduced in 2004 along with the rest of the GMS contract. At the time it caused significant swings in income, with particularly large reductions for practices with large numbers of younger patients. Practices which served university students were particularly badly affected. GP at Hand claims that it gets only 65% of the average GP funding per patient.

There is no significant adjustment in the Carr-Hill formula for how sick patients are. This has largely been done through the QOF although the effect has been quite variable over the years as the QOF has waxed and waned. I wanted to see if the QOF data let us answer the question of whether the patients at GP at Hand are healthier than we would expect.

We can start with a quick look at the QOF figures. In the year 2017/18 GP at Hand was based in a single practice at Lillie Road in Fulham. There are very low levels of disease prevalence there. In nine areas they are below the first centile - i.e. they are in the bottom 1% of practices for the prevalence of that condition. In only two areas - depression and mental health - are they above the bottom five percent of practices.

The data also shows the practice list size. If we look back to the previous year we can see that the list size increased from 2,500 in April 2017 to 24,000 in April 2018. This is such a huge rise that it is pretty much impossible to compare year on year. This is not the same practice that it was a year before. Even if none of the original patients left during the year they form a fairly insignificant proportion of patients at the end of the year.

As an aside, I wondered where all of these patients are coming from. GP at Hand will register patients from a wide area due to their chatbot technology. We can get a clue if we look at the total registered list size for Hammersmith and Fulham CCG. This has steadily risen over the years at a typical rate of 4,000-6,000 patients per year. In the year 2017/18 there were an extra 24,473 patients in the CCG. I don't know much about London but unless there has been a lot of house building it seems that most of GP at Hand's patients came from outside the CCG area.

CHD prevalence at Lillie Road

We can see from the QOF data that prevalence has plunged at Lillie Road over the year. Some of the registers have barely risen despite the huge rise in patient numbers. The cancer register has risen from 51 to just 74. The number of patients with dementia has actually fallen from fourteen to twelve. That, however, is not very useful as we have already seen that the patients are completely different to the previous year. We are not comparing like with like. Clearly the new patients at the surgery are pretty healthy, but are they unusually healthy? We need more data.

Helpfully NHS Digital publish practice list sizes monthly and these are broken down by age and sex (insert your own joke here). We can use this to create profiles of practices and other organisations. Here is a population pyramid for England (which is all that NHS Digital cover).
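As a sketch of the sort of thing involved, the snippet below draws a pyramid of this kind with Python. It assumes the monthly list size data has been flattened into a CSV with AGE_BAND, SEX and COUNT columns; the real NHS Digital files have their own layout, so the file name and column names here are placeholders.

```python
# Sketch: draw a population pyramid from (assumed) flattened NHS Digital
# practice list size data. Column names and file name are placeholders.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("practice_list_size.csv")  # hypothetical file

# One row per age band, with a male and a female count in each.
pyramid = df.pivot_table(index="AGE_BAND", columns="SEX",
                         values="COUNT", aggfunc="sum")

fig, ax = plt.subplots()
# Plot male counts as negative numbers so the two sexes extend in
# opposite directions from the central axis.
ax.barh(pyramid.index, -pyramid["MALE"], color="steelblue", label="Male")
ax.barh(pyramid.index, pyramid["FEMALE"], color="indianred", label="Female")
ax.set_xlabel("Registered patients")
ax.set_ylabel("Age band")
ax.legend()
plt.show()
```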

It may not be a pyramid that the pharaohs would be proud of but there are distinct trends in the population. We can use the data for Lillie Road to see whether their population is similar, or at least how different it is. We can produce a pyramid for the Lillie Road practice and it is remarkably different to the UK population as a whole. The vast majority of their patients are between the ages of 20 and 45, with the men on the list tending to be a little older than the women. With such a radically different population it would seem rather unfair to compare the surgery against English averages. They are certainly not average!

It is worth checking as well whether this is something about the population of Hammersmith and Fulham CCG, although we have already seen that most of the Lillie Road patients come from outside the area. The pyramid below includes all practices except Lillie Road. The Wikipedia page for Hammersmith and Fulham suggests this is a borough full of young and single people and this is borne out in the population figures. There are also quite a lot more women than men registered with a GP, although it is possible that this is due to fewer men registering with a GP. Contraception and cervical screening can be a reason for young women to join a practice more actively than men when moving around.

This is still quite different to Lillie Road, although it shares the emphasis on young adults with very small numbers of children. Lillie Road's demographics are not similar to its neighbours'. Again it is going to be difficult to make comparisons. Lillie Road seems to be unlike any other type of practice that we already have.

Or maybe not. I mentioned the global sum earlier, and the effect it has on Lillie Road may be similar to its effect on university practices. What about them? I typed the word "university" into the search box on my website and looked at the practices that appeared in the results. After taking out a couple of results that were either not actually practices or were outside England I came up with a list of 26 practices. I then put their populations together and produced a (final, I promise) population pyramid.
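For anyone who wants to reproduce this kind of pooling, something like the snippet below would do it. The practice codes are invented placeholders rather than the 26 practices I actually used, and the CSV layout is the same assumed one as above.

```python
# Sketch: pool the age/sex profiles of a hand-picked list of university
# practices into a single population. Codes and file are placeholders.
import pandas as pd

university_codes = ["A00001", "B00002", "C00003"]  # hypothetical codes

df = pd.read_csv("practice_list_size.csv")
pooled = (df[df["PRACTICE_CODE"].isin(university_codes)]
          .groupby(["AGE_BAND", "SEX"])["COUNT"]
          .sum()
          .reset_index())
print(pooled.head())
```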

Now we seem to be getting somewhere. The shape is familiar, although the lines are a bit sharper. In general people are even younger in university practices and the chart appears as an even more exaggerated version of Lillie Road. There is also likely to be a degree of selection in universities as young people with chronic health problems may find it more difficult to access university. The effects of both these factors are likely to push down the rate of disease in these populations and, by comparison, this is likely to make the pathology at Lillie Road appear higher. I am not too worried about that as we are trying to see if pathology is lower than we would expect at Lillie Road and most of the biases are in its favour: they will minimise the appearance of cherry picking.

Let's look at the prevalence for the university practices and for Lillie Road. All of these figures are the percentage of the practice population with each of the conditions.

Area                          Lillie Road (%)   University practices (%)   p value
Atrial Fibrillation           0.2               0.26                       0.13
Asthma                        3.4               3.1                        0.0044
Cancer                        0.31              0.47                       0.00049
Coronary Heart Disease        0.25              0.38                       0.0008
Chronic Kidney Disease        0.25              0.38                       0.0013
COPD                          0.3               0.21                       0.0039
Dementia                      0.05              0.087                      0.07
Depression                    3.6               5.8                        <0.0001
Diabetes                      1                 0.99                       0.81
Epilepsy                      0.24              0.22                       0.567
Heart Failure                 0.083             0.1                        0.567
Hypertension                  2.5               2                          <0.0001
Learning Disability           0.088             0.082                      0.86
Mental Health                 0.77              0.35                       <0.0001
Obesity                       2.6               2.7                        0.62
Osteoporosis                  0.017             0.029                      0.37
Peripheral Arterial Disease   0.054             0.067                      0.52
Rheumatoid Arthritis          0.13              0.1                        0.21
Stroke/TIA                    0.17              0.24                       0.021

Eyeballing the data does not suggest much of a difference. In some areas, such as depression, the university practices have a higher prevalence and in others, including severe mental health problems, Lillie Road is ahead. We can see the same information on a bar chart. The biggest differences are in depression. University practices are a little ahead in diseases related to ischaemic heart disease and in dementia. I will cut Lillie Road some slack in the latter as they are fast growing, and patients with dementia, or indeed cancer, can be less likely to change their surgery, although it is also likely that they are going to be less enthusiastic smartphone users. This is splitting hairs as university practices have about a tenth of the UK prevalence of dementia. These are small differences in small numbers. Using Pearson's Chi-Squared test only nine areas reach significance. Four are higher in Lillie Road and five in the university practices.
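For what it's worth, the test itself is simple to run. The sketch below compares one register between the two populations using a two-by-two contingency table; the counts are illustrative rather than the actual figures behind the table above.

```python
# Sketch: a Pearson chi-squared test on one condition, comparing the
# register count against the rest of the list for Lillie Road and the
# pooled university practices. The pooled counts below are invented.
from scipy.stats import chi2_contingency

lillie_register, lillie_list = 74, 24000    # e.g. the cancer register
uni_register, uni_list = 1200, 255000       # illustrative pooled figures

table = [
    [lillie_register, lillie_list - lillie_register],
    [uni_register, uni_list - uni_register],
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4g}")
```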

I am not a statistician and this is a dig around the data rather than a formal analysis. I was looking to see if there were obvious anomalies. We don't really know how the existing patients at the practice reacted to the change of management. It is possible that there are "old" and "new" populations being treated side by side, but there is no evidence for this. I have certainly not found evidence of "cherry picking". The practice is no more unusual than a university practice catering primarily for students.

But before we get too used to the idea it is worth remembering that university practices are quite unusual. Their population pyramid is dramatically different to the country as a whole. Lillie Road is still an outlier even if it is similar to some other outliers. It would be quite strange to believe that the success here would automatically translate into other populations. These are patients with very low levels of chronic disease and they attract relatively low levels of funding.

University practices are peculiarly unusual.

I have made no attempt to review the quality of care delivered at this practice. QOF is a pretty blunt instrument for this. Their point score is good, at a whisker under 550 out of 559 points. The rate of growth at Lillie Road seems to be slowing, but the service is now available at more sites across London so that is certainly not the whole story. I hope I have been able to cast a little light on this atypical, but perhaps not entirely unique, practice.

While you are here I would ask, nicely, that if you found this interesting you take a look at my book "Starting Snomed: A beginner's guide to the Snomed CT medical terminology". It is an easy introduction to this powerful new tool that will be coming to practices this year. It is available now on Amazon, in print and for Kindle with all of the various offers that come with that. Thank you.

EMIS and QOF Business Rules v39

Over the weekend EMIS released v39 of the QOF Business Rules live onto practice systems. There have been reports of changes to practice figures, in some cases quite substantial ones. I have done a bit of digging and there seem to be at least two separate things going on, either of which can have a significant effect on the figures.

First I would just like to say that mistakes happen. This is a huge project and it is almost unimaginable that it would work perfectly first time. Having said that, some fairly urgent work needs to be done to make sure that patients are identified correctly.

The root of the problems is that most of primary care is expected to move from Read codes to Snomed CT over the next few months. I am a big fan of Snomed CT and hope to release a book in a few weeks with an introduction for users. However it is quite different to Read codes. All clinical data will be translated ("mapped" in the jargon) from Read to Snomed but this process is sometimes not exact. Read has lots of problems and solving these in a modern coding system will mean some things are changed. This change is generally for the better but all change breaks stuff.

Version 39 of the business rules is the first to use Snomed CT. The objective is that all practices will be using Snomed by the end of March so it makes sense that it is used here. This has meant translating all of the Read code searches to Snomed CT. This is more complicated than simply translating the codes, as it is the structure and relationships in Snomed that are key. For example asthma is listed under COPD in Read whereas it is correctly separated in Snomed. This makes the searches different in each.

This is where the first problem arises. To take an example, "Post concussion syndrome" now puts a patient onto the dementia register. This is clearly rubbish but the problem is within Snomed. Postconcussion disorder is listed as a type of dementia, which will put the patient on the register. This can be dealt with by specifically excluding it in the business rules but was missed this time. As I said, there will be some errors in the first version and hopefully this will be rectified by NHS Digital soon, although a fairly comprehensive review of the thousands of included concepts is probably needed. Snomed also needs fixing, although this is likely to take a bit longer. Snomed has two releases of its international edition a year and the business rules will need reviewing with each new release.
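To make the mechanics concrete, here is a toy version of how a register query can exclude a rogue descendant. The little hierarchy is invented for illustration; the real rules walk the Snomed CT "is a" relationships across many thousands of concepts.

```python
# Sketch: a business-rules style register built from a concept and its
# descendants, with an explicit exclusion for a problem concept. The
# parent->children hierarchy here is invented for illustration.
CHILDREN = {
    "Dementia": ["Alzheimer's disease", "Vascular dementia",
                 "Postconcussion syndrome"],   # the rogue descendant
    "Alzheimer's disease": [],
    "Vascular dementia": [],
    "Postconcussion syndrome": [],
}

EXCLUDED = {"Postconcussion syndrome"}  # the fix the v39 rules missed

def register_concepts(root):
    """All concepts at or below `root`, minus explicit exclusions."""
    found = set()
    stack = [root]
    while stack:
        current = stack.pop()
        if current in EXCLUDED or current in found:
            continue
        found.add(current)
        stack.extend(CHILDREN.get(current, []))
    return found

print(register_concepts("Dementia"))
# Dementia, Alzheimer's disease and Vascular dementia - but not
# Postconcussion syndrome.
```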

The second problem is down to the fact that practices have not moved to Snomed yet. One of the features of Read is that each code could have several "synonyms". The quotes are there because these synonyms quite often carried different meanings. For instance H30 was supposed to mean "Bronchitis unspecified" but it could also mean "Recurrent wheezy bronchitis". These synonyms map to different concepts in Snomed CT, which seems reasonable. The former maps to Bronchitis and the latter to Chronic asthmatic bronchitis, which is included in the COPD register, presumably as a form of chronic bronchitis. However, as we are not using Snomed yet, EMIS has translated these back to Read codes. The EMIS business rules system does not seem to know about Read synonyms - they have never been part of QOF business rules. The effect has been to put everybody with an H30 code onto the register, including those coded with "Bronchitis unspecified".
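A toy illustration of why the term ID matters is below. The term IDs and map entries are my own guesses for illustration; real maps key on both the five-character code and the term code, and collapsing the two is exactly what goes wrong here.

```python
# Sketch: why the Read term ID matters when mapping to Snomed. The map
# entries and term IDs below are illustrative, not the real map.
READ_TO_SNOMED = {
    ("H30..", "00"): "Bronchitis",                    # "Bronchitis unspecified"
    ("H30..", "11"): "Chronic asthmatic bronchitis",  # "Recurrent wheezy bronchitis"
}

def map_code(code, term_id):
    # A system that ignores the term ID, as the flawed reverse
    # translation effectively did, cannot tell these two apart.
    return READ_TO_SNOMED.get((code, term_id))

print(map_code("H30..", "00"))  # Bronchitis -> not on the COPD register
print(map_code("H30..", "11"))  # Chronic asthmatic bronchitis -> on it
```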

For similar reasons patients that have a record of the code "Tired all of the time" are being put onto the depression register.

There is inevitably going to be some pain in transferring from Read to Snomed. There is more of this sort of thing to come. In the next few days I would like to see NHS Digital fixing the rules and EMIS adding synonym support to their business rule calculator. In the longer term there is some fixing to do in Snomed, although one of its great strengths is that fixing is possible, unlike the rigid structure of Read.

QOF Data 2017 online

I am delighted to say that the QOF data for 2016-17 is now online. There is no data for Scotland this year as QOF is no longer being used for payment there. The data is apparently still being extracted and I will consider putting in a freedom of information request. We already have some idea from England about what happens to data after payment is stopped: briefly, things that are considered useful continue and things that are thought to be useless stop.

QOF has been in quite a stable state for the last few years so there is not much in the way of changes. This year there has been more reporting of the sub-register of heart failure patients with a diagnosis of left ventricular systolic dysfunction. This has been a sub-register for some years and is used for calculating the prevalence adjustment for the indicators relating to the prescribing of ACE inhibitors (or A2IIs) and β blockers. I have made this explicit this year.

The general trends continue with an increase in the prevalence of diabetes across the whole of the UK but a decrease in cardiovascular disease, which is generally encouraging.

There will definitely be data next year as the QOF is ongoing. What happens after that is not clear. However, we are now three and a half months before the start of the 2018-19 year and there does not seem to be any big plan around, so personally I would predict only small changes, with bigger promises for later.

Read to Snomed CT Map Explorer

Introducing the Read to Snomed Map Explorer

One thing that the QOF has brought into General Practice is that coding matters. From the beginning of QOF it mattered because of payments. These codes were then used by practice systems for QOF reminders and for more general clinical reminders. Now it is not only payments under QOF and Enhanced Services that use the codes but also other extractions for parts of the contract or central analysis.

Currently most practices use Read codes version 2, which uses up to five characters to express a code. From next April these will disappear, to be replaced with Snomed CT. Snomed codes are much longer and unlikely to be remembered by clinicians from day to day in the same way that the Read codes were. Practices using version 3 of Read codes (also known as CTV3) have a bit longer before their transition.

A lot of the work has already been going on in the background. Snomed CT can, at least partially, trace its roots back to Read codes and so the translation is simpler than it might otherwise have been. The list of coded data and descriptions seen on practice systems will not look substantially different in April next year.

What will look quite different is the way that these codes ("concepts" in Snomed jargon) are linked together. Read codes are a bit of a mess in some areas and Snomed is generally rather better. This is largely down to the rigid structure of Read compared with the ability to revise and improve Snomed as time goes on: things can be clarified and corrected over time.

This does lead to a very different hierarchy in Snomed. A search for asthma and all of its child concepts will produce quite different results in Snomed than Read and this is what I wanted to explore.

In my Map Explorer you can type in a code or its description and see which of its child codes no longer apply and which new concepts will appear under it. It is entirely based on Read codes so does not actually include any of the extra concepts supplied by Snomed CT. The simplest way to use the search is to type words from the code description into the search box. You can also type in the Read code, but the format is fairly specific. It needs to be five characters (shorter codes are padded with dots), then a dash, then two numbers. The last two numbers identify the various different terms that can be applied to each Read code. These are referred to as synonyms but in practice can have different meanings.

For example "Cigarette Smoker" is 137P.-00 but 137P.-11 is "Smoker". They are regarded as synonyms for Read code but their different meanings are separated in Snomed CT. In Snomed a cigarette smoker "is a" smoker but not vice versa.

I have largely written this as something I am interested in exploring myself and I am aware that it is a little rough around the edges. The search in particular is based entirely on MySQL full text search as I don't have the knowledge or expertise to do any better. Feedback very welcome. If you would like to know more about Snomed itself there is loads of fairly technical stuff on the official Snomed site. I am trying to get some more information together for the non specialist.
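For the curious, the query underneath a search like this has roughly the following shape. The table and column names are guesses for illustration only, not the site's actual schema.

```python
# Sketch: the kind of MySQL full-text query a search like this sits on.
# Table, column and connection details are hypothetical.
import mysql.connector  # assumes the mysql-connector-python package

conn = mysql.connector.connect(user="reader", database="readmaps")
cursor = conn.cursor()
cursor.execute(
    "SELECT code, term FROM read_terms "
    "WHERE MATCH(term) AGAINST (%s IN NATURAL LANGUAGE MODE) LIMIT 20",
    ("cigarette smoker",),
)
for code, term in cursor.fetchall():
    print(code, term)
conn.close()
```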

QOF indicators consultation 2016

The NICE indicators consultation is currently live and will be open until the end of the month. This is a combination of CCG indicators and QOF indicators as well as some other indicators for GPs which will not carry incentives.

You can read my response here but there is nothing particularly inspiring. Perhaps of most concern is the suggestion of an atrial fibrillation screening scheme that is supported neither by the National Screening Committee nor by the NICE guidelines, although the consultation document does make it sound like the NICE guidelines say something else. Oops!

Retired QOF Indicators

QOF in 2014/15 was quite a bit smaller than it had been the year before. This was largely due to the removal of clinical indicators and the funding being moved into the global sum. Other points were moved to the new admission avoidance DES.

One thing that did not receive much publicity at the time was that NHS England planned to continue to monitor some of the indicators that had been removed. These results have now been published. Things did not go entirely to plan and for technical and other reasons just over half of the practices actually have data available. HSCIC refer to these as "Indicators no longer in QOF" or INLIQ. This is now on the site and can be identified by the grey colour of the data in the table.

The HSCIC does warn about comparing this data with previous years, as they say that the dates and rules may differ. In practice they don't actually vary very much at all. There is a more important reason to be a little cautious, which is that these indicators are no longer curated by the practices. Whilst exception reporting still applies, practices are far less likely to enter exception codes where there is little reason to do so. The biggest drops occur where, in the views of GPs, there has been little clinical benefit to patients.

Somerset CCG is a special case. Lots of indicators were effectively retired there in 2014/15 although prevalence was still counted and other indicators continued as quality measures. There is therefore a lot more grey in the Somerset statistics than in the rest of England.

The indicators themselves remain the same. Internally (and when I run the downloads) each indicator now has an "active" flag. If an indicator is not active then it is presented in a grey font. This gives maximum flexibility as things may change rapidly, and differently across the country, in the next couple of years.

Calculating prevalence

For ease of comparison all of the prevalences on this site are based on the whole practice registered list, not just those in the correct age group - this applies to areas such as diabetes or epilepsy. This is largely because countries other than England do not list the specific number of patients on the practice list over, say, seventeen years old. It is also the prevalence that is used for adjustment of point value.
I was asked this week why the whole list is used for prevalence adjustment rather than the age adjusted subgroup. Is this unfair on practices? Well, the answer is no, it is actually fairer the way it is, but for some quite complicated reasons. We have to look at some maths.
$$
\begin{aligned}
\text{Point value} &= £160 \times \frac{\text{PracPrev}}{\text{AvgPrev}} \times \frac{\text{PracList}}{\text{AvgList}} \\
&= £160 \times \frac{\text{Register}/\text{PracList}}{\text{AvgReg}/\text{AvgList}} \times \frac{\text{PracList}}{\text{AvgList}} \\
&= £160 \times \frac{\text{Register}}{\text{PracList}} \times \frac{\text{AvgList}}{\text{AvgReg}} \times \frac{\text{PracList}}{\text{AvgList}} \\
&= £160 \times \frac{\text{Register}}{\text{AvgReg}}
\end{aligned}
$$

We start by saying that the point value is modified by the practice prevalence relative to the average practice prevalence, and then by the relative size of the practice list overall. The second line expands this a bit by using the register size and the list size against the averages. It is true that this is not exactly how the average prevalence is calculated, but it is pretty close.

After simplifying the formula there is a lot we can cancel from the top and bottom until we get to the final formula which basically says that the practice gets a set amount per person on the register but that this drops as the national average register size rises. Nothing else matters.

We can try again using an 'Eligible' denominator for the register.

$$
\begin{aligned}
\text{Point value} &= £160 \times \frac{\text{PracPrev}}{\text{AvgPrev}} \times \frac{\text{PracList}}{\text{AvgList}} \\
&= £160 \times \frac{\text{Register}/\text{Eligible}}{\text{AvgReg}/\text{AvgEgble}} \times \frac{\text{PracList}}{\text{AvgList}} \\
&= £160 \times \frac{\text{Register}}{\text{Eligible}} \times \frac{\text{AvgEgble}}{\text{AvgReg}} \times \frac{\text{PracList}}{\text{AvgList}} \\
&= £160 \times \frac{\text{AvgEgble}}{\text{Eligible}} \times \frac{\text{Register}}{\text{AvgReg}} \times \frac{\text{PracList}}{\text{AvgList}}
\end{aligned}
$$

There is much the same process here but there is a lot less to cancel out. That is not necessarily a bad thing, but we can see how this formula behaves. If we assume a practice of average list size then the last term will be one. If it has an average register size for diabetes then the middle term will be one as well. Interestingly, in this case the point value would vary with the proportion of patients over seventeen years old on the practice list (i.e. Eligible would change without any change to the overall list size). This is not what we want to see at all, as the makeup of the practice list would alter income without any change to the actual number of patients treated.
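We can check this behaviour with a few numbers. The sketch below implements both formulas directly; the practice figures are invented, and the £160 point value is carried over from the worked formulas above.

```python
# Sketch: a numeric check of the algebra above. With whole-list
# prevalence the point value depends only on the register size; with
# an 'Eligible' denominator it also shifts with the age makeup of the
# list. All the practice figures below are invented.
POINT_VALUE = 160  # pounds, as in the formulas above

def whole_list(register, prac_list, avg_reg, avg_list):
    prac_prev = register / prac_list
    avg_prev = avg_reg / avg_list
    return POINT_VALUE * (prac_prev / avg_prev) * (prac_list / avg_list)

def eligible_list(register, eligible, prac_list,
                  avg_reg, avg_egble, avg_list):
    prac_prev = register / eligible
    avg_prev = avg_reg / avg_egble
    return POINT_VALUE * (prac_prev / avg_prev) * (prac_list / avg_list)

# An average-sized practice (8,000 patients, register of 400) whose
# over-17 share rises from 80% to 90% with no change to the register.
print(whole_list(400, 8000, 400, 8000))                 # 160.0 regardless
print(eligible_list(400, 6400, 8000, 400, 6400, 8000))  # 160.0
print(eligible_list(400, 7200, 8000, 400, 6400, 8000))  # ~142.2 - income falls
```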

So that is why the overall list size is used to calculate prevalence.