The data for the QOF in England has now been published on the site for the year 2020-21. You won't need me to tell you that it was quite an unusual year. In QOF terms most of the indicators were suspended through the year. Prescribing indicators were suspended a little later in the year than the others but, in the end, all that remained were the flu and cervical screening indicators. These all had their points doubled.
The other area that was still active was the prevalence adjustment. Effectively this meant that practices were still paid for the number of patients on their disease registers. It still paid to add patients to disease registers.
Of course there were a lot of other complications and this is probably one of the reasons that NHS Digital did not publish points totals this year. There were several new indicators in there as well and points would not really have made a lot of sense and certainly would not have tallied with payments. However the data was still collected and is presented on the site. As with many things 2020-21 is going to stand out a bit on the charts. Please do have an explore of the data.
Although most of the indicators are down, the figures are a testament to the huge amount of work done by practices for some of the most vulnerable members of our practice lists. Even with all of the restrictions that were necessary through the year, large numbers of patients continued to get appropriate care for chronic disease.
I hope to have data from Northern Ireland in a few weeks. QOF no longer takes place in Scotland and Wales has a very minimal system and I could not find published data last year.
QCovid is the latest predictive formula from QResearch. Currently based at the University of Oxford and headed by Professor Julia Hippisley-Cox, QResearch has been doing this sort of thing for a while now. QRisk is well known and respected but there are several other scores derived from the same large bank of patient records. I think that we have to assume that they know what they are doing.
They are also reasonably open about their methods. They published in the BMJ way back in October with all of the factors listed. There are a number of medical conditions along with deprivation scores and demographic information. There is no mention of gestational diabetes in that paper. In fact the only mentions of pregnancy at all are in reference to previous shielding criteria (pregnancy with significant heart disease) and that there were too few events to include pregnancy in the analysis. The latter is quite telling in itself given that they looked at over 4300 deaths in their initial analysis - a large effect is likely to have been spotted.
They have also published the algorithm itself. The maths are complicated and largely beyond me to follow but it is easy to see that the inputs do not include gestational diabetes, only types one and two.
When it came to implementation, NHS Digital said that the categories for diabetes were:
Type 1 diabetes
Type 2 diabetes (including other forms such as gestational diabetes)
Gestational diabetes is high blood sugar (glucose) that develops during pregnancy and can resolve after giving birth. Women who have had gestational diabetes are at increased risk of developing type 2 diabetes or having undiagnosed diabetes.
Some patients with past gestational diabetes have been identified in combination with other factors by the QCovid model as being potentially at high risk from COVID-19.
Somewhere along the line gestational diabetes has been classified as being the same as type 2 diabetes, even when it has resolved. It is not clear where this decision came from. There is no sign that it was intended by the QResearch team.
The Royal College of Obstetricians and Gynaecologists tweeted:
We're aware some postpartum women with a history of gestational diabetes (GD) have been asked to shield. We're investigating this as there's no previous suggestion that once women are no longer pregnant, a history of GD alone would be linked to severe illness from COVID-19.
The effect of this is likely to increase the risk assessment of some pregnant women and of those who have given birth in the past. Of course that does not mean that they should shield. For the most part these are likely to be relatively young women with a low Covid risk. However, the shielding criteria themselves are the next part of the story.
The detailed criteria are on a site that I cannot access when not at work, but it appears to be an absolute risk of death of over 0.5% in the first wave (that was the data used to create the formula) or a relative risk more than 10 times that of a person of the same age without risk factors.
I put a woman of 35 years into the calculator. She was white, had a BMI of 31 (150cm,70kg) and a postcode of SN1 2DQ (that is my surgery postcode in the centre of Swindon). I ticked the box for Type 2 diabetes.
The absolute risk of death was 1 in 9,709 but the relative risk was 17. The risk of hospital admission was 1 in 558, a relative risk of 7.7. That relative risk of death is enough to trigger shielding, yet nearly ten thousand women like her would have to shield perfectly to prevent one death, and her absolute risk was about ten times lower than for the population as a whole.
(Without the Type 2 diabetes the relative risks were 1.7 and 2 respectively). If you want to know more about absolute and relative risk see this article.
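For anyone who wants to check the sums, here is a minimal sketch of the arithmetic for this hypothetical patient. The thresholds are my reading of the shielding criteria described above, and the risk figures are the ones the calculator produced.

```python
# A rough sketch of the shielding arithmetic for the hypothetical 35 year old
# woman described above. The thresholds are my reading of the criteria.

height_m, weight_kg = 1.50, 70
bmi = weight_kg / height_m ** 2          # ~31.1

absolute_risk_death = 1 / 9709           # from the QCovid calculator
relative_risk_death = 17                 # versus no risk factors, same age and sex

ABSOLUTE_THRESHOLD = 0.005               # 0.5% absolute risk of death
RELATIVE_THRESHOLD = 10                  # 10x the baseline relative risk

triggers_shielding = (absolute_risk_death >= ABSOLUTE_THRESHOLD
                      or relative_risk_death >= RELATIVE_THRESHOLD)

print(f"BMI: {bmi:.1f}")
print(f"Absolute risk of death: {absolute_risk_death:.4%}")                     # ~0.01%
print(f"People shielding perfectly to prevent one death: {1 / absolute_risk_death:.0f}")
print(f"Triggers shielding: {triggers_shielding}")                              # True, via relative risk
```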
The combination of treating a history of gestational diabetes as equivalent to type 2 diabetes and the relative risk criterion for shielding has had a significant effect on these women.
My personal view is that, although these women would not normally be on the flu vaccine programme it would be reasonable to offer them a Covid vaccination soon, as part of cohort 6. This is the cohort to which most patients with diabetes would belong and is being vaccinated now. I feel that the argument for shielding is much weaker.
If you are reading this having received a notice to shield out of the blue then don’t panic. Your absolute risk may be quite low. Have a try on the calculator to see. We are in lockdown now so still be careful but not paranoid. If you are offered the vaccine though, I see no reason to put it off. Go for it!
NHS Digital have clearly been busy through the lockdown as they released the QOF data a full two months earlier than in the last few years. I am pleased to say that all of the data is now on the website.
Things are pretty much as in previous years although there are some new indicators, particularly around diabetes. There are also the ever-changing NHS organisations, as we have new CCGs and some variations in Primary Care Networks.
For most practices this is likely to have been the basis for payment - despite the assurance of income protection in the light of the Covid-19 situation. Another effect of Covid was a huge increase in the number of salbutamol inhalers issued in March, which has bumped up the asthma register considerably. This is likely to drop back next year as the increase was almost entirely limited to March, with a drop in April and a return to normal levels in May and June.
I hope to have data from Northern Ireland soon after it is published although this is less comparable with English practices now as indicators have diverged. I am also expecting the limited amount of data from Wales that we have seen in the past few years - essentially this is only disease prevalence data now.
There are a lot of statistics around Covid-19 and its likely impact over the next few weeks and months. There have been many charts online from experts as well as from people who are just playing with the numbers. Predicting the future is always difficult and epidemiology seems to be mostly the study of confounding factors. It can be easy to produce a simple model - and much more complicated to implement it.
I am certainly not an epidemiologist and so I have not published any numbers so far. I have played with a few simple models largely to see how they worked but nothing that had not been done to a much higher standard by other people.
Recently I needed to estimate some figures for my practice. I am making no predictions of my own about how the pandemic will play out; I have simply taken predictions from other people and worked out the effect on my practice. In fact I have done this for every practice and PCN in England, which really is not much more work. It does make the spreadsheet work harder though!
There are various estimates of the total number of deaths and, whilst they influence the result, that figure can be plugged in fairly late in the process. A quick way to get a ballpark figure is to simply divide the deaths by the number of practices. There are almost exactly 7,000 practices in England and, at the time of writing, 14,399 deaths. That is pretty close to 2 deaths per practice. Every death is a bad thing but we are clearly not seeing huge numbers in individual practices.
There are numerous other estimates. I have seen 40,000 deaths as an estimated UK total, which would work out at about 5.7 deaths per practice in total. I will use this total, but it is pretty easy to convert to other numbers if you see a figure which appears more reliable.
I have not made any allowance for practice size. There are a shade over 60 million patients registered with practices in England and so a quick bit of division suggests that we would expect 0.66 deaths per thousand patients. Thus a practice with 10,000 patients would expect around 6-7 deaths from Covid-19. By this stage we are getting to something that practices can use to estimate workload. It is unlikely that the figure of 40,000 is spot on but you can say, for instance, that you could plan for double that whilst hoping for a lower figure.
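The back-of-the-envelope sums are easy to reproduce. A minimal sketch, using only the round figures quoted above:

```python
# Back-of-the-envelope estimates using the round figures quoted above.
practices_in_england = 7_000
registered_patients = 60_000_000
estimated_total_deaths = 40_000

deaths_per_practice = estimated_total_deaths / practices_in_england          # ~5.7
deaths_per_thousand = estimated_total_deaths / registered_patients * 1000    # ~0.66

list_size = 10_000
expected_deaths = deaths_per_thousand * list_size / 1000                     # ~6-7

print(f"Deaths per practice: {deaths_per_practice:.1f}")
print(f"Deaths per 1,000 patients: {deaths_per_thousand:.2f}")
print(f"Expected deaths for a {list_size:,} patient practice: {expected_deaths:.1f}")
```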
Can we refine this any more? There are many risk factors for death from Covid-19. As the disease has not been around for very long there have not been many good studies. One of the best was a look at mortality in China by Imperial College, which treated age as a risk factor and published the results in ten-year bands. Helpfully the age and sex makeup of the registered population is also published. This goes down to single years of age, but five-year bands are quite enough and still run to more than a third of a million lines on a spreadsheet.
There is also some information about disease risk factors such as diabetes and heart disease. We do have some of that information at practice level from the QOF. Could that be used to refine the risk level? Unfortunately, probably not. The risks for age and for co-morbidities have been calculated separately rather than as independent factors. In reality increasing age is a risk factor for diabetes and heart disease, so if we corrected for both we would likely be correcting twice. The risks are not independent. In the future there may be studies which treat these as individual variables and that would allow us to use the QOF information on top of the age-related risk.
The process I used was to multiply the population in each age group by the mortality risk. So if a practice had 100 patients in a group and the risk was 1% I would count 100; if the risk was 15% I would count 1,500. I add all of these together and then scale back against the national population to produce a "Covid adjusted" list size. This is the list size of completely average people you would need to produce the same total mortality. It works a bit like the Carr-Hill formula.
The major assumption here is that all ages will have a similar rate of developing the disease. This has not been shown in the paper and hopefully shielding and social distancing will give a lower rate of disease in the elderly. On the other side the risk in care homes seems, at least from media reports, to be particularly high. I have also assumed that the infection rate is the same across England. That is certainly not the case at the moment but I think that it is probable that it will become more similar as we get towards the end of the pandemic.
With the adjusted list size you can then do what we did above and allocate the deaths in proportion. You can adjust the national death total and the practice figures will change with it; the relationship is linear, so increasing to 80,000 will simply double the deaths for each practice and you could probably do that in your head.
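For anyone who wants to reproduce the weighting and the proportional allocation, here is a rough sketch. The age-band mortality rates and the practice populations are placeholders I have made up, not the Imperial College figures, and in a real run the scaling denominator would be the weighted total for all of England rather than two practices.

```python
# Sketch of the "Covid adjusted" weighting and proportional allocation of deaths.
# The mortality rates and practice populations below are illustrative placeholders,
# NOT the published Imperial College estimates or real practice data.

national_deaths = 40_000

# age band -> assumed mortality rate among those infected (illustrative only)
mortality_by_band = {"0-9": 0.00002, "10-19": 0.00006, "20-29": 0.0003,
                     "30-39": 0.0008, "40-49": 0.0015, "50-59": 0.006,
                     "60-69": 0.022,  "70-79": 0.051,  "80+": 0.093}

def weighted_list(practice_counts):
    """Sum of patients in each band multiplied by that band's mortality rate."""
    return sum(count * mortality_by_band[band] for band, count in practice_counts.items())

# counts per band for each practice (made-up numbers for two practices)
practices = {
    "Practice A": {"0-9": 1200, "10-19": 1100, "20-29": 1500, "30-39": 1400,
                   "40-49": 1300, "50-59": 1200, "60-69": 1000, "70-79": 800, "80+": 500},
    "Practice B": {"0-9": 400, "10-19": 600, "20-29": 3500, "30-39": 2800,
                   "40-49": 1200, "50-59": 700, "60-69": 400, "70-79": 250, "80+": 150},
}

weights = {name: weighted_list(counts) for name, counts in practices.items()}
total_weight = sum(weights.values())   # in reality this sum runs over every practice in England

for name, weight in weights.items():
    expected = national_deaths * weight / total_weight
    print(f"{name}: expected deaths {expected:.1f}")
```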
I hope that you find this data to be useful. We are using it at our practice as a basis for planning services. Whilst the numbers will not be precise they give a rough estimate of what we should be providing. Other workload is likely to be proportional to mortality, so this gives some guide to the likely volume of work that we will be seeing. There is likely to be a lot of local variation. The final figures for a practice may be double or half of what is shown here, but equally it would be surprising if they were out by a factor of ten. We can at least approximate what our response should be.
It is that time of year again and the QOF data for England and Wales is now on the website. This is a relatively quiet year as there have been no changes to the indicators in England (unlike 2019/20, when there were quite a number of changes). Data from Wales is quite limited as they basically only have disease registers now plus a couple of indicators concerning the administration of the flu vaccine.
Primary Care Networks do not currently feature in the results. NHS England do not seem to acknowledge their existence in statistics just yet and so even nearly four months after they were officially formed (at time of writing) there is no record of who they are or the practices that make them up. It would be possible to crowdsource some data but even OpenPrescribing, who have things like staff and a budget, think that it would be too much to do. I will try to put the data on when there is eventually a list, although quite how they fit into the hierarchies is not clear.
Also at time of writing there is no data available for Northern Ireland.
There is some change to the Welsh data this year. I try not to change previous years but this year NHS Wales published codes for their GP clusters as well as the health boards. I had always just made up my own in the past. The health board codes that I used were taken from the id of their page on an old version of NHS Wales website. I have updated the codes for these organisations to the official ones which should make integration with other data sources easier. Old links to the pages should still work as well - there is some translation in the software.
There is a constant prediction that QOF is going away. In fact it has been renewed for what appears to be a five-year term. There is a lot more to be done.
Whilst you are here, if you are interested in how medical information is coded you are probably aware of the roll out of Snomed CT across the NHS. For a gentle introduction to Snomed I have written a book Starting Snomed - available on Amazon and on Kindle. If you have Prime and a Kindle it is included in your package!
One of the things that I was interested to look at when the QOF data came out last year was how GP at Hand performed. A lot has been written over the past year or so about this service, which uses a chatbot app as the first point of contact. For all of the QOF year in question the service had restrictions, since lifted, on registering patients with long term conditions. This has led to concerns that GP at Hand has "cherry picked" patients who are younger and fitter, leaving other practices to deal with patients who have more pathology.
This has been denied by GP at Hand. Actually, as we will see, there is little doubt that they have younger patients, but they argue that this is resource neutral under the Carr-Hill formula, which adjusts the practice Global Sum according to the age and sex of patients. This was introduced in 2004 along with the rest of the GMS contract. At the time it caused significant swings in income, with particularly large reductions for practices with large numbers of younger patients. Practices which served university students were particularly badly affected. GP at Hand claims that it gets only 65% of the average GP funding per patient.
There is no significant adjustment in the Carr-Hill formula for how sick patients are. This has largely been done through the QOF although the effect has been quite variable over the years as the QOF has waxed and waned. I wanted to see if the QOF data let us answer the question of whether the patients at GP at Hand are healthier than we would expect.
We can start with a quick look at the QOF figures. In the year 2017/18 GP at Hand was based in a single practice at Lillie Road in Fulham. There are very low levels of disease prevalence there. In nine areas they are below the first centile - i.e. they are in the bottom 1% of practices for the prevalence of that condition. In only two areas, depression and mental health, are they above the bottom five percent of practices.
The data also shows the practice list size. If we look back to the previous year we can see that the list size increased from 2,500 in April 2017 to 24,000 in April 2018. This is such a huge rise that it is pretty much impossible to compare year on year. This is not the same practice that it was a year before. Even if none of the original patients left during the year they form a fairly insignificant proportion of the list at the end of the year.
As an aside, I wondered where all of these patients were coming from. GP at Hand will register patients from a wide area due to their chatbot technology. We can get a clue if we look at the total registered list size for Hammersmith and Fulham CCG. This has steadily risen over the years at a typical rate of 4,000-6,000 patients a year. In the year 2017/18 there were an extra 24,473 patients in the CCG. I don't know much about London but unless there has been a lot of house building it seems that most of GP at Hand's patients came from outside the CCG area.
CHD prevalence at Lillie Road
We can see from the QOF data that prevalence has plunged at Lillie Road over the year. Some of the registers have barely risen despite the huge rise in patient numbers. The cancer register has risen from 51 to just 74. The number of patients with dementia has actually fallen from fourteen to twelve. That, however, is not very useful as we have already seen that the patients are completely different to the previous year. We are not comparing like with like. Clearly the new patients at the surgery are pretty healthy, but are they unusually healthy? We need more data.
Helpfully NHS Digital publish practice list sizes monthly and these are broken down by age and sex (insert your own joke here). We can use this to create profiles of practices and other organisations. Here is a population pyramid for England (which is all that NHS Digital cover).
It may not be a pyramid that the pharaohs would be proud of but there are distinct trends in the population. We can use the data for Lillie Road to see if this is similar to their population, or at least whether it is very different. The pyramid for Lillie Road practice is remarkably different to the English population as a whole. The vast majority of their patients are between the ages of 20 and 45, with men on the list tending to be a little older than women. With such a radically different population it would seem rather unfair to compare the surgery against English averages. They are certainly not average!
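If you want to draw these pyramids yourself, a minimal matplotlib sketch is below. The counts are made up for illustration; in practice they would come from the NHS Digital monthly list size files aggregated into five-year bands.

```python
# Minimal population pyramid sketch. The counts below are made up for
# illustration; real figures would come from the NHS Digital monthly
# practice list size release, aggregated into five-year bands.
import matplotlib.pyplot as plt

# age band lower bound -> (male, female) registered patients (illustrative)
bands = {0: (300, 280), 5: (310, 300), 10: (320, 310), 15: (350, 380),
         20: (900, 1100), 25: (1800, 2100), 30: (1700, 1900), 35: (1200, 1300),
         40: (800, 850), 45: (600, 620), 50: (450, 470), 55: (350, 360),
         60: (280, 300), 65: (220, 250), 70: (180, 210), 75: (140, 180),
         80: (100, 150), 85: (60, 120)}

ages = list(bands)
males = [-bands[a][0] for a in ages]      # negative so males plot to the left of zero
females = [bands[a][1] for a in ages]

fig, ax = plt.subplots()
ax.barh(ages, males, height=4, label="Male")
ax.barh(ages, females, height=4, label="Female")
ax.set_xlabel("Registered patients (males to the left)")
ax.set_ylabel("Age band (lower bound)")
ax.legend()
plt.show()
```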
It is worth checking as well whether this is something about the population of Hammersmith and Fulham CCG, although we have already seen that most of the Lillie Road patients come from outside the area. The pyramid below includes all practices in the CCG except Lillie Road. The Wikipedia page for Hammersmith and Fulham suggests this is a borough full of young and single people and this is borne out in the population figures. There are also quite a lot more women than men registered with a GP, although it is possible that this is simply due to fewer men registering. Contraception and cervical screening can be reasons for young women to join a practice more actively than men when moving around.
This is still quite different to Lillie Road, although it shares the emphasis on young adults and the very small numbers of children. Lillie Road's demographics are not similar to its neighbours'. Again it is going to be difficult to make comparisons. Lillie Road seems to be unlike any type of practice that we already have.
Or maybe not. I mentioned the Global Sum earlier, and that its effect on Lillie Road may be similar to that on university practices. What about them? I typed the word "university" into the search box on my website and looked at the practices that appeared in the results. After taking out a couple of results that were either not actually practices or were outside England I came up with a list of 26 practices. I then put their populations together and produced a (final, I promise) population pyramid.
Now we seem to be getting somewhere. The shape is familiar although the lines are a bit sharper. In general people are even younger in university practices and the chart appears as an even more exaggerated version of Lillie Road. There is also likely to be a degree of selection in universities, as young people with chronic health problems may find it more difficult to get to university. The effects of both these factors are likely to push down the rate of disease in these populations and, by comparison, make the pathology at Lillie Road appear higher. I am not too worried about that as we are trying to see if pathology is lower than we would expect at Lillie Road and most of the biases are in its favour: they will minimise the appearance of cherry picking.
Let's look at the prevalence for the university practices and for Lillie Road. All of these figures are in percentages of the practice population with each of the conditions.
| Area | Lillie Road (%) | University Practices (%) | p value |
|---|---|---|---|
| Atrial Fibrillation | 0.2 | 0.26 | 0.13 |
| Asthma | 3.4 | 3.1 | 0.0044 |
| Cancer | 0.31 | 0.47 | 0.00049 |
| Coronary Heart Disease | 0.25 | 0.38 | 0.0008 |
| Chronic Kidney Disease | 0.25 | 0.38 | 0.0013 |
| COPD | 0.3 | 0.21 | 0.0039 |
| Dementia | 0.05 | 0.087 | 0.07 |
| Depression | 3.6 | 5.8 | <0.0001 |
| Diabetes | 1 | 0.99 | 0.81 |
| Epilepsy | 0.24 | 0.22 | 0.567 |
| Heart Failure | 0.083 | 0.1 | 0.567 |
| Hypertension | 2.5 | 2 | <0.0001 |
| Learning Disability | 0.088 | 0.082 | 0.86 |
| Mental Health | 0.77 | 0.35 | <0.0001 |
| Obesity | 2.6 | 2.7 | 0.62 |
| Osteoporosis | 0.017 | 0.029 | 0.37 |
| Peripheral Arterial Disease | 0.054 | 0.067 | 0.52 |
| Rheumatoid Arthritis | 0.13 | 0.1 | 0.21 |
| Stroke/TIA | 0.17 | 0.24 | 0.021 |
Eyeballing the data does not suggest much of a difference. In some areas, such as depression, the university practices have a higher prevalence and in others, including severe mental health problems, Lillie Road is ahead. We can see the same information on a bar chart. The biggest differences are in depression. University practices are a little ahead in diseases related to ischaemic heart disease and in dementia. I will cut Lillie Road some slack on the latter: they are growing fast, patients with dementia (or indeed cancer) can be less likely to change their surgery, and such patients are also likely to be less enthusiastic smartphone users. This is splitting hairs anyway, as university practices have about a tenth of the UK prevalence of dementia. These are small differences in small numbers. Using Pearson's chi-squared test only nine areas reach significance: four are higher at Lillie Road and five in the university practices.
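For those who want to reproduce this sort of check, here is a minimal sketch using SciPy. The Lillie Road list size of roughly 24,000 is taken from earlier in the post; the combined university practice list size of 200,000 is a placeholder I have made up purely for illustration, so the p value will not exactly match the table above.

```python
# Sketch of a Pearson chi-squared comparison for one register. The Lillie Road
# list size is roughly 24,000 (see above); the combined university list size of
# 200,000 is a made-up placeholder, not the real figure.
from scipy.stats import chi2_contingency

def compare(prevalence_a, list_a, prevalence_b, list_b):
    """2x2 chi-squared test on register counts derived from prevalence (%)."""
    cases_a = round(prevalence_a / 100 * list_a)
    cases_b = round(prevalence_b / 100 * list_b)
    table = [[cases_a, list_a - cases_a],
             [cases_b, list_b - cases_b]]
    chi2, p, dof, expected = chi2_contingency(table)
    return p

# Dementia: 0.05% at Lillie Road vs 0.087% in the university practices
p = compare(0.05, 24_000, 0.087, 200_000)
print(f"p = {p:.3f}")
```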
I am not a statistician and this is a dig around the data rather than a formal analysis. I was looking to see if there were obvious anomalies. We don't really know how the existing patients at the practice reacted to the change of management. It is possible that there are "old" and "new" populations being treated side by side, but there is no evidence for this. I have certainly not found evidence of "cherry picking". The practice is no more unusual than a university practice catering primarily for students.
But before we get too used to the idea it is worth remembering that university practices are quite unusual. Their population pyramid is dramatically different to the country as a whole. Lillie Road is still an outlier, even if it is similar to some other outliers. It would be quite strange to believe that the success here would automatically translate to other populations. These are patients with very low levels of chronic disease who attract relatively low levels of funding.
University practices are peculiarly unusual.
I have made no attempt to review the quality of care delivered at this practice. QOF is a pretty blunt instrument for this. Their point score is good at a whisker under 550 out of 559 points. The rate of growth at Lillie Road seems to be slowing but they are also available at more sites across London so that is certainly not the whole story. I hope I have been able to cast a little light on this atypical, but perhaps not entirely unique, practice.
While you are here I would ask nicely that, if you found this interesting, you might take a look at my book "Starting Snomed: A beginner's guide to the Snomed CT medical terminology". It is an easy introduction to this powerful new tool that will be coming to practices this year. It is available now on Amazon and is also available for Kindle and all of the various offers that come with that. Thank you.
Over the weekend EMIS released v39 of the QOF Business Rules live onto practice systems. There have been reports of changes to practice figures, in some cases quite substantial. I have done a bit of digging and there seem to be at least two separate things going on, either of which can have a significant effect on the figures.
First I would just like to say that mistakes happen. This is a huge project and it is almost unimaginable that it would work first time perfectly. Having said that some fairly urgent work needs to be done to make sure that patients are identified correctly.
The root of the problems is that most of primary care is expected to move from Read codes to Snomed CT over the next few months. I am a big fan of Snomed CT and hope to release a book in a few weeks with an introduction for users. However it is quite different to Read codes. All clinical data will be translated ("mapped" in the jargon) from Read to Snomed but this process is sometimes not exact. Read has lots of problems and solving these in a modern coding system will mean some things are changed. This change is generally for the better but all change breaks stuff.
Version 39 of the business rules is the first to use Snomed CT. The objective is that all practices will be using this by the end of March so it makes sense that Snomed is used here. This has meant trying to translate all of the Read codes searches to Snomed CT. This is more complicated than simply translating the codes as it is the structure and relationships in Snomed that are key. For example asthma is listed under COPD in Read whereas it is correctly separated in Snomed. This makes the searches different in each.
This is where the first problem arises. To take an example, "Post concussion syndrome" now puts a patient onto the dementia register. This is clearly rubbish, but the problem lies within Snomed: postconcussion disorder is listed as a type of dementia, which puts the patient on the register. This can be dealt with by specifically excluding it in the business rules but that was missed this time. As I said, there will be some errors in the first version and hopefully this will be rectified by NHS Digital soon, although a fairly comprehensive review of the thousands of included concepts is probably needed. Snomed also needs fixing, although this is likely to take a bit longer. Snomed has two releases of its international edition a year and the business rules will need reviewing with each new release.
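A toy sketch of how a register rule built on the Snomed hierarchy can be over-inclusive, and how an explicit exclusion deals with it. The hierarchy fragment and terms below are simplified stand-ins, not the real content of the refsets or business rule clusters.

```python
# Toy illustration of a register built from Snomed descendants, and the explicit
# exclusion needed when the hierarchy pulls in an unwanted concept. The hierarchy
# here is a simplified stand-in, not the real business rule cluster content.

# parent concept -> child concepts (tiny, made-up fragment of a hierarchy)
children = {
    "Dementia": ["Alzheimer's disease", "Vascular dementia", "Postconcussion syndrome"],
    "Alzheimer's disease": [],
    "Vascular dementia": [],
    "Postconcussion syndrome": [],
}

def descendants_and_self(concept):
    """All concepts subsumed by the starting concept, including itself."""
    found = {concept}
    for child in children.get(concept, []):
        found |= descendants_and_self(child)
    return found

EXCLUSIONS = {"Postconcussion syndrome"}   # what the business rules needed to add

register_cluster = descendants_and_self("Dementia") - EXCLUSIONS

patient_codes = {"Postconcussion syndrome"}
on_register = bool(patient_codes & register_cluster)
print(on_register)   # False once the exclusion is in place
```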
The second problem is down to the fact that practices have not moved to Snomed yet. One of the features of Read is that each code could have several "synonyms". The quotes are there because these synonyms quite often carried different meanings. For instance H30 was supposed to mean "Bronchitis unspecified" but it could also mean "Recurrent wheezy bronchitis". These synonyms map to different concepts in Snomed CT, which seems reasonable: the former maps to Bronchitis and the latter to Chronic asthmatic bronchitis, which is included in the COPD register, presumably as a form of chronic bronchitis. However, as we are not using Snomed yet, EMIS has translated these back to Read codes. The EMIS business rules system does not seem to know about Read synonyms - they have never been part of QOF business rules. The effect has been to put everybody with an H30 code onto the register, including those coded with plain "Bronchitis unspecified".
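The synonym problem can be illustrated with another toy example. The mappings below are simplified stand-ins for the real cross maps, but they show how translating a Snomed-based rule back to bare Read codes sweeps up every synonym of H30.

```python
# Toy illustration of the synonym problem. The mappings are simplified
# stand-ins, not the real Read/Snomed cross maps.

# (Read code, term) -> Snomed concept: two synonyms of H30 map differently
read_to_snomed = {
    ("H30", "Bronchitis unspecified"): "Bronchitis",
    ("H30", "Recurrent wheezy bronchitis"): "Chronic asthmatic bronchitis",
}

copd_register_concepts = {"Chronic asthmatic bronchitis"}   # per the Snomed-based rules

# Correct behaviour: only the wheezy bronchitis synonym qualifies
qualifying_terms = {term for (code, term), concept in read_to_snomed.items()
                    if concept in copd_register_concepts}
print(qualifying_terms)            # {'Recurrent wheezy bronchitis'}

# What appears to have happened: the rule was translated back to bare Read codes,
# losing the synonym, so every H30 record qualifies
qualifying_codes = {code for (code, term), concept in read_to_snomed.items()
                    if concept in copd_register_concepts}
print("H30" in qualifying_codes)   # True - "Bronchitis unspecified" is swept up too
```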
For similar reasons patients that have a record of the code "Tired all of the time" are being put onto the depression register.
There is inevitably going to be some pain on transferring from Read to Snomed. There is more of this sort of thing to come. In the next few days I would like to see NHS Digital fixing the rules and EMIS adding synonym support to their business rule calculator. In the longer term there is some fixing to do in Snomed although one of its great strengths is that fixing is possible, unlike the rigid structure of Read.