QOF Consultation Response

The Department of Health has launched a consultation on the "Role of incentive schemes in general practice". You have until one minute to midnight on the 7th of March to submit any comments you have, and please do so. I have copied my response below. Most of this was written before submission, so there are a couple of passages of free text where there was not a box on the form to put them in.

Do you agree or disagree that incentives like QOF and IIF should form part of the income for general practice?


The QOF, and to a considerably smaller extent, the IIF, have contributed to a rise in data quality and some measures of quality of care. They should be considered together with Enhanced Services as part of the funding of general practice.

Whilst QOF and IIF are considered incentives and enhanced services (ES) are considered commissioned services, this is not a distinction that survives at the provider level. If a commissioned service payment or an incentive payment is less than the cost of providing the service, then there will not be a business case for the practice to provide it.

IIF has not proven successful and has been largely retired this year. If it survives, then it should be included in a single framework with ES and QOF. All of these must be considered within a larger commissioning framework. It makes little sense in the current contract that a single process can be paid through several different mechanisms.

Do you agree or disagree that QOF and IIF help ensure that sufficient resources are applied to preventative and proactive care?


Most of the QOF is based around care of chronic conditions and the secondary prevention of complications of chronic disease. Public health measures and primary prevention have not been successful when they have been tried previously.

It should be considered a chronic disease management framework. Proactive care can be tackled in annual checks, but this is management of disease rather than prevention. There is an element of secondary prevention here but this has not been proven in rigorous studies.

Public health measures can be, and are, commissioned through enhanced services.

Would relative improvement targets be more effective than absolute targets at delivering improvements in care quality while also addressing health inequalities?

No. I would disagree strongly here.

Differential targets would appear unfair to practices and introduce perverse incentives which could be damaging to patient care.

Relative improvement targets would mean that practices could be paid different amounts for the same work. This would be unfair on practices. For the majority of QOF indicators practices start from zero at the beginning of the QOF year and build their achievement throughout the year until the following April. It is not the case that practices start from a higher level if they have had a higher level of achievement in the past, although they may have better processes in place.

This would also act as a brake on innovation. Practices which target work at a disease area and increase achievement in areas that are subsequently introduced into the QOF would be penalised with higher thresholds. Where an indicator remains in QOF for a few years, practices may be incentivised to vary their achievement from year to year, perhaps on a two-year cycle, to maximise income.

The introduction to this question mentions an upper threshold of 85%, leaving 15% of patients without incentives attached to them. The simplest solution is to move the upper threshold to 100% whilst leaving the lower threshold and the gradient the same. The main reason for not doing this would be the increased funding that may be required. There is no compelling reason to leave that 15% of patients without incentive.

If the upper and lower thresholds are too close together then this may reduce the incentive for practices and generate perverse incentives. In the current QOF the range between lower and upper thresholds for childhood vaccinations is small, although it has increased slightly in the most recent year. For practices with low achievement there is little incentive, as they have little chance of reaching even the lower threshold. High-performing practices also have little financial reason to improve. For other practices the incentive is low for many patients, but the patients between the thresholds can be worth hundreds of pounds each.
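For readers unfamiliar with the mechanics: QOF points are, broadly, earned on a linear scale between the lower and upper thresholds. A minimal sketch, using hypothetical threshold and point values rather than any real indicator, shows why a narrow band concentrates the whole incentive on a handful of patients:

```python
# Illustrative sketch of QOF-style linear scoring between thresholds.
# The thresholds and point total here are made-up examples, not the
# values of any specific indicator.

def qof_points(achievement, lower, upper, max_points):
    """Points scale linearly from 0 at the lower threshold
    to max_points at the upper threshold."""
    if achievement <= lower:
        return 0.0
    if achievement >= upper:
        return float(max_points)
    return max_points * (achievement - lower) / (upper - lower)

# With a narrow band (lower 90%, upper 95%) each patient between the
# thresholds moves the score a long way; work outside the band earns
# nothing extra.
print(qof_points(0.85, 0.90, 0.95, 18))  # below the lower threshold: 0.0
print(qof_points(0.92, 0.90, 0.95, 18))  # partway up the gradient
print(qof_points(0.99, 0.90, 0.95, 18))  # capped at the upper threshold: 18.0
```

Moving the upper threshold to 100% simply extends that gradient across every patient.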

There are, of course, differences in populations and locations for practices which will influence how easy it is to deliver care. This is dealt with currently with a prevalence adjustment which will vary payment according to the specific disease burden. Adjustments for other social factors are properly contained in the global sum adjustment, although this has not been reviewed for nearly 20 years. Adjustments cannot reasonably be incorporated into an incentive scheme.

In what other ways could we use incentive schemes to address health inequalities?

Inequalities happen at a personal level and contracts operate at a practice level or above. Reconciling the two is likely to be very difficult. Solutions such as prioritising practices with harder-to-reach populations are better implemented at the global sum level. Practices are unlikely to be willing to offer a differentiated service to different groups of registered patients.

There is potential for targeted enhanced services with a specific focus, but this should not be part of IIF or QOF.

To what degree, if any, do you think that ICBs should influence the nature of any incentive scheme?

Integrated Care Boards should be consulted on a national framework. This should be the core of any scheme. There may be potential to use a menu, as was the case of National Enhanced Services, but this is likely to be small.

There has been some attempt at local commissioning with Enhanced Services which, as I discussed earlier, are broadly equivalent to QOF at the provider level. These varied in quality due to the inexperience of commissioners. They have suffered from poor infrastructure support. The use of a menu of nationally supported services may help in that specific area.

There will always be practices at the geographical edges of ICBs whose services will differ from those of their neighbours. There may be a perception of unfairness there, and there may be a negative impact on services which are not incentivised in a particular area. They could be considered uncommissioned.

Do you agree or disagree that a PCN-level incentive scheme like IIF encourages PCN-wide efforts to improve quality?

Agree, but only to a small extent.

Incentives will work best if they are closer to the person whose behaviour you are trying to influence. PCNs are effectively a management structure so there is some sense in applying incentives there if it is the management that you are trying to influence.

PCNs are not the most efficient way to incentivise individuals – especially as some PCNs can be extremely large and incentives somewhat distant from clinicians.

What type of indicators, if any, within incentive schemes do you think most help to improve care quality? (Select all that apply)

Clinical coding (for example, accurate recording of smoking status in a patient record)

This is an effective use of incentives, but the effect on care quality is not clear.

Clinical activity (for example, undertaking an annual asthma review)

This is probably all that you can do.

Clinical outcomes (for example, stroke rates)

Clinical outcomes are much too far removed from activity to be an effective incentive. We have a mild version of this problem in the current QOF around shingles vaccination, where there can be a decade between the incentivised action and the incentive being paid. The payment may go to someone else entirely, even in a different practice. It is also practically quite difficult to deal with the effects of patient death or emigration (in the latter case that would include moving to Scotland, Wales or Northern Ireland).

Quality improvement (QI) (for example, local project to improve patient experience or staff wellbeing)

This becomes so vague as to just be a commissioning effort which is better dealt with in an ES. These are pure process box ticks.

Do you think there is a role for incentives to reward practices for clinical outcomes measured at PCN or place level?


The incentives become so detached from the individual action that they cease to be incentives. Keep the incentive close to the person taking the action.

Do you agree or disagree that there is a role for incentive schemes to focus on helping to reduce pressures on other parts of the health system?

Neither agree nor disagree

The likely effect of any improvement in patient care is a reduced pressure on the health service but this should not be the primary motivation for this – that should be patient health. That is also likely to be a variable outcome which will only be apparent with large numbers of patients and difficult to attribute to any single actor.

Do you agree or disagree that incentives should be more tailored towards quality of care for patients with multiple long-term conditions?

Neither agree nor disagree

Whilst the aim is laudable, specific indicators are likely to be difficult to create. They can end up being so vague as to be useless. They tend to say "do a review", from which it is very difficult to establish any evidence of benefit. The evidence for specific interventions in multimorbidity is poor.

Do you agree or disagree that patient experience of access could be improved if included in an incentive scheme?


Creation of equitable standards in this area would be very difficult. Of note is that there has been an attempt in this area in the past with the Patient Experience indicators. From 2009 to 2011 the PE 7 indicator was based on the number of patients who responded to the GP practice survey. This indicator did not last, as it was not felt to be effective.

Any change in patient perception of access is likely to require substantial resources, and this may be more than would be appropriate to commit to an indicator. Any real incentive is likely to require substantial change to the contract, including payments for each type of patient interaction or appointment. Whilst this may be a direction that NHS England would wish to consider, it is not something that would be part of an incentive framework.

Do you agree or disagree that continuity of care could be improved if included in an incentive scheme?


There is no current indicator to measure continuity of care. Even survey results and patient perceptions have not been validated as measures. Any incentive payments could have the potential to produce unexpected incentives or results.

Any proposed indicators here would need to be piloted but the chances of producing a simple, clear and specific indicator are small.

Do you agree or disagree that patient choice could be improved if included in an incentive scheme?


Once again, the production of an effective indicator is likely to be very difficult. Choice is often provided at referral management centres which are managed by ICBs. Wherever it happens, it is difficult to measure. Choice is likely to vary significantly between different areas of the country; it is easier in areas with higher densities of providers. This is an area that is better dealt with at the ICB level.

Do you agree or disagree that the effectiveness of prescribing could be improved if included in an incentive scheme?


Prescribing is an area that has been the subject of indicators in both the QOF and IIF in the past. A stable multi-year approach is most successful – incentives for practices to change are higher if there is a snowball effect on treatment. We have seen this effect in QOF around the use of statins and the use of medication in left ventricular systolic heart failure.

An example of how this does not work was the transfer to edoxaban which was included in the IIF in 2022/23. This was a purely financially driven indicator which was dropped after a year as the financial situation changed. The changes from this indicator were quite small. 

Prescribing data from OpenPrescribing

If you think there are any other areas that should be considered for inclusion within an incentive scheme, please list them here.

There is some potential for safe prescribing measures to be used. This is mostly avoiding things so would effectively be an “upside down” indicator where a lower percentage is scored more highly.

What opportunities are there to simplify and streamline any schemes for clinicians, and reduce any unnecessary administrative burden, while preserving patient care?

The best way to avoid "tick box" indicators is simply not to introduce them in the first place. Whilst there are many aspects of practice that it might be considered desirable to incentivise, it is important to also consider the quality of the indicator for that area. An important area with only poor-quality or indirect indicators should not be included – other contractual mechanisms should be used to deliver these improvements.

The difference between item-of-service payments and incentive payments is small and they should be considered in the same way. Bringing all performance-related searches and payments, including immunisations, into a single framework would greatly reduce the boxes that need to be ticked. For example, there are currently four or five payments linked to influenza vaccinations which are claimed through three different systems. This is time consuming, costly to administer and entirely unnecessary.

QOF Data for England 2020/21

The data for the QOF in England has now been published on the site for the year 2020-21. You won't need me to tell you that it was quite an unusual year. In QOF terms most of the indicators were suspended through the year. Prescribing indicators were suspended a little later in the year than the others but, in the end, all that remained were the flu and cervical screening indicators. These all had their points doubled.

The other area that was still active was the prevalence adjustment. Effectively this meant that practices were still paid for the number of patients on their disease registers. It still paid to add patients to disease registers.

Of course there were a lot of other complications and this is probably one of the reasons that NHS Digital did not publish points totals this year. There were several new indicators in there as well and points would not really have made a lot of sense and certainly would not have tallied with payments. However the data was still collected and is presented on the site. As with many things 2020-21 is going to stand out a bit on the charts. Please do have an explore of the data.

Although most of the indicators are down, the figures are a testament to the huge amount of work that was done by practices for some of the most vulnerable members of our practice lists. Even with all of the restrictions that were necessary through the year, large numbers of patients continued to get appropriate care for chronic disease.

I hope to have data from Northern Ireland in a few weeks. QOF no longer takes place in Scotland and Wales has a very minimal system and I could not find published data last year.

What I know so far about QCovid, Shielding and Gestational Diabetes

QCovid is the latest predictive formula from QResearch. Currently based at the University of Oxford and headed by Professor Julia Hippisley-Cox, QResearch has been doing this sort of thing for a while now. QRisk is well known and respected, but there are several other scores derived from a big bank of patient records. I think that we have to assume that they know what they are doing.

They are also reasonably open about their methods. They published in the BMJ way back in October with all of the factors listed. There are a number of medical conditions along with deprivation scores and demographic information. There is no mention of gestational diabetes in that paper. In fact the only mentions of pregnancy at all are in reference to previous shielding criteria (pregnancy with significant heart disease) and that there were too few events to include pregnancy in the analysis. The latter is quite telling in itself given that they looked at over 4300 deaths in their initial analysis - a large effect is likely to have been spotted.

They have also published the algorithm itself. The maths are complicated and largely beyond me to follow but it is easy to see that the inputs do not include gestational diabetes, only types one and two.

When it came to implementation NHS Digital said that the categories for diabetes were:

Type 1 diabetes

Type 2 diabetes (including other forms such as gestational diabetes)

Gestational diabetes is high blood sugar (glucose) that develops during pregnancy and can resolve after giving birth. Women who have had gestational diabetes are at increased risk of developing type 2 diabetes or having undiagnosed diabetes.

Some patients with past gestational diabetes have been identified in combination with other factors by the QCovid model as being potentially at high risk from COVID-19.

Somewhere along the line Gestational Diabetes has been classified as being the same as type two diabetes – even when the GD has resolved. It is not clear where this decision came from. There is no sign that it was intended by the QResearch team.

The Royal College of Obstetricians and Gynaecologists tweeted

The effect of this is likely to increase the risk assessment of some pregnant women and those who have given birth in the past. Of course that does not mean that they should shield. For the most part these are likely to be relatively young women with a low Covid risk. However the next part is the shielding criteria.

The detailed criteria are on a site that I cannot access when not at work, but it seems to be an absolute risk of over 0.5% of death in the first wave (that was the data they used to create the formula) or a relative risk of 10 times that of a person of the same age without risk factors.

I put a woman of 35 years into the calculator. She was white, had a BMI of 31 (150cm,70kg) and a postcode of SN1 2DQ (that is my surgery postcode in the centre of Swindon). I ticked the box for Type 2 diabetes.

The absolute risk of death was 1 in 9709 but the relative risk was 17. Risk of hospital admission was 1 in 558 – a relative risk of 7.7. That relative risk of death is enough to trigger shielding. Nearly ten thousand people would have to shield perfectly to prevent one death. Her absolute risk, though, was about ten times less than for the population as a whole.

(Without the Type 2 diabetes the relative risks were 1.7 and 2 respectively). If you want to know more about absolute and relative risk see this article.
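Putting the worked example above against the shielding criteria as I understand them (an absolute death risk above 0.5%, or a relative risk of 10 or more) makes the point concrete. Treat the thresholds here as my reading rather than the official logic:

```python
# Rough check of the shielding criteria against the worked example above.
# The threshold values (0.5% absolute risk, 10x relative risk) are my
# reading of the criteria, not an official specification.

ABS_THRESHOLD = 0.005   # 0.5% absolute risk of death
REL_THRESHOLD = 10.0    # 10 times the risk of a comparable person

absolute_risk = 1 / 9709   # roughly 0.01% - far below the absolute threshold
relative_risk = 17.0       # well above the relative threshold

triggers_shielding = (absolute_risk >= ABS_THRESHOLD
                      or relative_risk >= REL_THRESHOLD)
print(triggers_shielding)        # True, but only via the relative-risk criterion
print(round(1 / absolute_risk))  # ~9709 people shielding to prevent one death
```

It is the relative-risk arm alone that catches her, despite a very low absolute risk.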

The effect of the inclusion of a history of gestational diabetes as equivalent to type II diabetes and the relative risk criteria for shielding has had a significant effect on these women.

My personal view is that, although these women would not normally be on the flu vaccine programme it would be reasonable to offer them a Covid vaccination soon, as part of cohort 6. This is the cohort to which most patients with diabetes would belong and is being vaccinated now. I feel that the argument for shielding is much weaker.

If you are reading this having received a notice to shield out of the blue then don’t panic. Your absolute risk may be quite low. Have a try on the calculator to see. We are in lockdown now so still be careful but not paranoid. If you are offered the vaccine though, I see no reason to put it off. Go for it!

QOF Data for England 2019/20

NHS Digital have clearly been busy through the lockdown as they released the QOF data a full two months earlier than in the last few years. I am pleased to say that all of the data is now on the website.

Things are pretty much as previous years although there are some new indicators, particularly around diabetes. There are also the ever changing NHS organisations as we have new CCGs and some variations in Primary Care Networks.

For most practices this is likely to have been the basis for payment – despite the assurance of the preservation of income in the light of the Covid-19 situation. Another effect of Covid was a huge increase in the number of salbutamol inhalers issued in March, which bumped up the asthma register considerably. This is likely to drop back next year, as the increase was almost entirely limited to March with a drop in April and a return to normal levels in May and June.

I hope to have data from Northern Ireland soon after it is published although this is less comparable with English practices now as indicators have diverged. I am also expecting the limited amount of data from Wales that we have seen in the past few years - essentially this is only disease prevalence data now.

Mortality Estimates for Practices

There are a lot of statistics around Covid-19 and its likely impact over the next few weeks and months. There have been many charts online from experts as well as from people who are just playing with the numbers. Predicting the future is always difficult, and epidemiology seems to be mostly the study of confounding factors. It can be easy to produce a simple model – and much more complicated to implement it.

I am certainly not an epidemiologist and so I have not published any numbers so far. I have played with a few simple models largely to see how they worked but nothing that had not been done to a much higher standard by other people.

Recently I had need to estimate some figures for my practice. I am making no predictions about how the pandemic will play out. There are no predictions in here. I have taken predictions from other people to work out the effect on my practice. In fact I have done this for every practice and PCN in England and it really is not much more work. It does make the spreadsheet work harder though!

There are various estimates of the total number of deaths and, whilst they influence the result, we can apply that figure fairly late in the modelling. A quick way to get a ballpark figure is to simply divide the deaths by the number of practices. There are almost exactly 7,000 practices in England and, at the time of writing, 14,399 deaths. That is pretty close to 2 deaths per practice. Every death is a bad thing, but we are clearly not seeing huge numbers in individual practices.

There are numerous other estimates. I have seen 40,000 deaths as an estimated UK total, which would work out at about 5.5 deaths per practice in total. I will use this total, but it is pretty easy to convert to other numbers if you see a figure which appears more reliable.

I have not made any allowance for practice size. There are a shade over 60 million patients registered with practices in England, and so a quick bit of division suggests that we would expect 0.66 deaths per thousand patients. Thus a practice with 10,000 patients would expect around 6-7 deaths from Covid-19. By this stage we are getting to something that practices can use to estimate workload. It is unlikely that the figure of 40,000 is spot on, but you can say, for instance, that you could plan for double that whilst hoping for a lower figure.
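The back-of-envelope arithmetic above can be written out explicitly; the death estimate and list sizes are just the round figures quoted in the text:

```python
# Back-of-envelope estimate described above: divide a national death
# estimate across the registered population.

national_deaths = 40_000          # illustrative UK estimate quoted above
registered_patients = 60_000_000  # roughly the English registered list

deaths_per_thousand = national_deaths / registered_patients * 1000

practice_list_size = 10_000
expected_deaths = deaths_per_thousand * practice_list_size / 1000

print(round(deaths_per_thousand, 2))  # 0.67 deaths per thousand patients
print(round(expected_deaths, 1))      # 6.7 for a 10,000-patient practice
```

Because the relationship is linear, doubling the national estimate simply doubles the practice figure.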

Can we refine this any more? There are many risk factors for death from Covid-19. As the disease has not been around for very long there have not been many good studies. One of the best was a look at mortality in China by Imperial College. This looked at age as a risk factor and published the results in ten-year bands. Helpfully, the age and sex makeup of the practice population is also published. This can come down to the year-by-year level, but five-year bands are quite enough and still run to more than a third of a million lines on a spreadsheet.

There is also some information about disease risk factors such as diabetes and heart disease. We do have some of that information at practice level from the QOF. Could that be used to refine the risk level? Unfortunately, probably not. The risk related to age and the risk from co-morbidities have been calculated separately and not as independent factors. In reality increasing age is a risk factor for diabetes and heart disease, so if we corrected for both we would likely be correcting twice. The risks are not independent. In the future there may be studies which look at these as individual variables, and this would allow us to use the QOF information on top of the age-related risk.

The process I used was to multiply the population in each age group by the mortality risk. So if a practice had 100 patients in a group and the risk was 1% I would count 100. If the risk was 15% I would count 1,500. I add all of these together and then scale back to the national population to produce a "Covid adjusted" list size. This is the list size of completely average people that you would need to produce the same total mortality. This works a bit like the Carr-Hill formula.

The major assumption here is that all ages will have a similar rate of developing the disease. This has not been shown in the paper and hopefully shielding and social distancing will give a lower rate of disease in the elderly. On the other side the risk in care homes seems, at least from media reports, to be particularly high. I have also assumed that the infection rate is the same across England. That is certainly not the case at the moment but I think that it is probable that it will become more similar as we get towards the end of the pandemic.

With the adjusted list size you can then do what we did above to allocate the deaths in proportion. You can adjust the national deaths and the others will change, although this is a linear relationship. Increasing to 80,000 will just double the deaths for each practice and you could probably do that in your head.
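The whole process, from risk-weighting the age bands to sharing out the national deaths, can be sketched in a few lines. The age bands, risk figures and national weighted total below are invented for illustration; the real calculation uses the Imperial College mortality estimates and the published age/sex list sizes:

```python
# Sketch of the "Covid adjusted" list size described above: weight each
# age band by its mortality risk, convert to an equivalent number of
# average patients, then allocate national deaths in proportion.
# All risk figures and the national weighted total are made up for
# illustration - they are NOT the Imperial College estimates.

national_population = 60_000_000
national_deaths = 40_000
national_weighted = 90_000.0  # hypothetical sum of patients x risk for England

# Risk of the completely "average" registered patient.
avg_risk = national_weighted / national_population

# age band -> (patients in band, assumed mortality risk)
practice_bands = {
    "0-49": (6_000, 0.0001),
    "50-69": (3_000, 0.001),
    "70+": (1_000, 0.012),
}
practice_weighted = sum(n * r for n, r in practice_bands.values())

# "Covid adjusted" list size: how many average patients would produce
# the same expected mortality as this practice's actual population.
adjusted_list = practice_weighted / avg_risk

# National deaths shared out in proportion to adjusted list size.
expected_deaths = national_deaths * adjusted_list / national_population

print(round(adjusted_list), round(expected_deaths, 1))  # roughly 10400 and 6.9
```

Changing `national_deaths` rescales every practice's figure linearly, exactly as described above.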

I hope that you find this data to be useful. We are using it at our practice as a basis for planning services. Whilst the numbers will not be precise, they give a rough estimate of what we should be providing. Other workload is likely to be proportional to mortality, so this can give some guide to the likely volume of work that we will be seeing. There is likely to be a lot of local variation. The final figures for a practice may be double or half of what is shown here, but equally it would be surprising if they were out by a factor of ten. We can at least approximate what our response should be.

You can either see the list on Google Docs or download the spreadsheet. You can also see the full workings out on a very large (24Mb) spreadsheet which runs very slowly on my computer.

2019 Data from England and Wales

It is that time of year again and the QOF data for England and Wales is now on the website. This is a relatively quiet year, as there have been no changes to the indicators in England (unlike 2019/20, when there were quite a number of changes). Data from Wales is quite limited, as they basically only have disease registers now and a couple of indicators concerning the administration of the flu vaccine.

Primary Care Networks do not currently feature in the results. NHS England do not seem to acknowledge their existence in statistics just yet, and so even nearly four months after they were officially formed (at the time of writing) there is no record of who they are or the practices that make them up. It would be possible to crowdsource some data, but even OpenPrescribing, who have things like staff and a budget, think that it would be too much to do. I will try to put the data on when there is eventually a list, although quite how they fit into the hierarchies is not clear.

Also at time of writing there is no data available for Northern Ireland.

There is some change to the Welsh data this year. I try not to change previous years but this year NHS Wales published codes for their GP clusters as well as the health boards. I had always just made up my own in the past. The health board codes that I used were taken from the id of their page on an old version of NHS Wales website. I have updated the codes for these organisations to the official ones which should make integration with other data sources easier. Old links to the pages should still work as well - there is some translation in the software.

There is a constant prediction that QOF is going away. In fact it has been renewed, apparently for a five-year term. There is a lot more to be done.

Whilst you are here, if you are interested in how medical information is coded you are probably aware of the roll out of Snomed CT across the NHS. For a gentle introduction to Snomed I have written a book Starting Snomed - available on Amazon and on Kindle. If you have Prime and a Kindle it is included in your package!

A look at GP at Hand

One of the things that I was interested to look at when the QOF data came out last year was how GP at Hand performed. A lot has been written over the past year or so about this service, which uses a chatbot app as the first point of contact. For all of the QOF year in question the service had restrictions, since lifted, on registering patients with long term conditions. This has led to concerns that GP at Hand has "cherry picked" patients who are younger and fitter, leaving other practices to deal with patients who have more pathology.

This has been denied by GP at Hand. Actually, as we will see, there is little doubt that they have younger patients, but they argue that this is resource neutral under the Carr-Hill formula, which adjusts the practice Global Sum according to the age and sex of patients. This was introduced in 2004 along with the rest of the GMS contract. At the time it caused significant swings in income, with particularly large reductions for practices with large numbers of younger patients. Practices which served university students were particularly badly affected. GP at Hand claims that it gets only 65% of the average GP funding per patient.

There is no significant adjustment in the Carr-Hill formula for how sick patients are. This has largely been done through the QOF although the effect has been quite variable over the years as the QOF has waxed and waned. I wanted to see if the QOF data let us answer the question of whether the patients at GP at Hand are healthier than we would expect.

We can start with a quick look at the QOF figures. In the year 2017/18 GP at Hand was based in a single practice at Lillie Road in Fulham. There are very low levels of disease prevalence there. In nine areas they are below the first centile – i.e. they are in the bottom 1% of practices for the prevalence of that condition. In only two areas – depression and mental health – are they above the bottom five percent of practices.

The data also shows the practice list size. If we look back to the previous year we can see that the list size increased from 2,500 in April 2017 to 24,000 in April 2018. This is such a huge rise that it is pretty much impossible to compare year on year. This is not the same practice that it was a year before. Even if none of the original patients left during the year, they form a fairly insignificant number of patients at the end of the year.

As an aside, I wondered where all of these patients were coming from. GP at Hand will register patients from a wide area due to their chatbot technology. We can get a clue if we look at the total registered list size for Hammersmith and Fulham CCG. This has steadily risen over the years at a typical rate of 4,000-6,000 patients a year. In the year 2017/18 there were an extra 24,473 patients in the CCG. I don't know much about London, but unless there has been a lot of house building it seems that most of GP at Hand's patients came from outside the CCG area.

CHD prevalence at Lillie Road

We can see from the QOF data that prevalence has plunged at Lillie Road over the year. Some of the registers have barely risen despite the huge rise in patient numbers. The cancer register has risen from 51 to just 74. The number of patients with dementia has actually fallen from fourteen to twelve. That, however, is not very useful, as we have already seen that the patients are completely different to the previous year. We are not comparing like with like. Clearly the new patients at the surgery are pretty healthy, but are they unusually healthy? We need more data.

Helpfully, NHS Digital publishes practice list sizes monthly and these are broken down by age and sex (insert your own joke here). We can use these to create profiles of practices and other organisations. Here is a population pyramid for England (which is all that NHS Digital covers).

It may not be a pyramid that the pharaohs would be proud of but there are distinct trends in the population. We can use the data for Lillie Road to see whether its population is similar, or at least how different it is. The pyramid for Lillie Road practice is remarkably different to the UK population as a whole. The vast majority of their patients are between the ages of 20 and 45, with the men on the list tending to be a little older than the women. With such a radically different population it would seem rather unfair to compare the surgery against English averages. They are certainly not average!
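For anyone wanting to reproduce these charts, here is a minimal sketch of turning list-size data into the shares behind a pyramid. The counts are entirely made up; the real NHS Digital file gives the breakdown by age and sex for every English practice:

```python
# Made-up list counts by five-year age band; NHS Digital's monthly file
# publishes the real breakdown by age and sex for every English practice.
male = {"0-4": 60, "5-9": 55, "20-24": 480, "25-29": 520}
female = {"0-4": 58, "5-9": 52, "20-24": 510, "25-29": 560}

total = sum(male.values()) + sum(female.values())

# A pyramid is just each band's share of the list, men and women mirrored
pyramid = {b: (100 * male[b] / total, 100 * female[b] / total) for b in male}

# Oldest band at the top, one '#' per half per cent of the list per side
for band in reversed(male):
    m, f = pyramid[band]
    print(f"{'#' * round(m * 2):>30}|{band:^7}|{'#' * round(f * 2)}")
```

The shares across all bands and both sexes sum to 100%, which is a handy sanity check when working with the real file.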

It is worth checking as well whether this is something about the population of Hammersmith and Fulham CCG, although we have already seen that most of the Lillie Road patients come from outside the area. The pyramid below includes all practices except Lillie Road. The Wikipedia page for Hammersmith and Fulham suggests this is a borough full of young and single people and this is borne out in the population figures. There are also quite a lot more women than men registered with a GP, although it is possible that this is due to fewer men registering with a GP. Contraception and cervical screening can be a reason for young women to join a practice more actively than men when moving around.

This is still quite different to Lillie Road, although it has the same emphasis on young adults with very small numbers of children. Lillie Road's demographics are not similar to its neighbours'. Again it is going to be difficult to make comparisons. Lillie Road seems to be unlike any other type of practice that we already have.

Or maybe not. I mentioned the global sum earlier and that the effect it is having on Lillie Road may be similar to university practices. What about them? I typed the word "university" into the search box on my website and looked at the practices that appeared in the results. After taking out a couple of results that were either not actually practices or were outside England I came up with a list of 26 practices. I then put their populations together and produced a (final, I promise) population pyramid.
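Putting the populations together means pooling the registers, not averaging the percentages. A sketch with hypothetical figures (the real values come from the QOF prevalence data for the 26 practices):

```python
# Hypothetical (list size, cancer register count) for three practices;
# the real figures come from the QOF prevalence data.
practices = [(12000, 55), (9000, 40), (15000, 70)]

pooled_list = sum(size for size, _ in practices)
pooled_cases = sum(cases for _, cases in practices)

# Pooled prevalence weights each practice by its list size - a simple
# mean of the per-practice percentages would treat a 2,500-patient list
# the same as a 24,000-patient one
pooled_prevalence = 100 * pooled_cases / pooled_list
print(f"{pooled_prevalence:.2f}%")
```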

Now we seem to be getting somewhere. The shape is familiar although the lines are a bit sharper. In general people are even younger in university practices and the chart appears as an even more exaggerated version of Lillie Road. There is also likely to be a degree of selection in universities as young people with chronic health problems may find it more difficult to access university. The effects of both these factors are likely to push down the rate of disease in these populations and, by comparison, this is likely to make the pathology at Lillie Road appear higher. I am not too worried about that as we are trying to see if pathology is lower than we would expect at Lillie Road and most of the biases are in its favour: they will minimise the appearance of cherry picking.

Let's look at the prevalence for the university practices and for Lillie Road. All of these figures are the percentage of the practice population with each condition.

Area                          Lillie Road   University Practices   p value
Atrial Fibrillation           0.2           0.26                   0.13
Asthma                        3.4           3.1                    0.0044
Cancer                        0.31          0.47                   0.00049
Coronary Heart Disease        0.25          0.38                   0.0008
Chronic Kidney Disease        0.25          0.38                   0.0013
COPD                          0.3           0.21                   0.0039
Dementia                      0.05          0.087                  0.07
Depression                    3.6           5.8                    <0.0001
Diabetes                      1             0.99                   0.81
Epilepsy                      0.24          0.22                   0.567
Heart Failure                 0.083         0.1                    0.567
Hypertension                  2.5           2                      <0.0001
Learning Disability           0.088         0.082                  0.86
Mental Health                 0.77          0.35                   <0.0001
Obesity                       2.6           2.7                    0.62
Osteoporosis                  0.017         0.029                  0.37
Peripheral Arterial Disease   0.054         0.067                  0.52
Rheumatoid Arthritis          0.13          0.1                    0.21
Stroke/TIA                    0.17          0.24                   0.021

Eyeballing the data does not suggest much of a difference. In some areas, such as depression, the university practices have a higher prevalence, and in others, including severe mental health problems, Lillie Road is ahead. We can see the same information on a bar chart. The biggest differences are in depression. University practices are a little ahead in diseases related to ischaemic heart disease, and in dementia. I will cut Lillie Road some slack in the latter as they are fast growing and patients with dementia, or indeed cancer, can be less likely to change their surgery, although it is also likely that they are going to be less enthusiastic smartphone users. This is splitting hairs, as university practices have about a tenth of the UK prevalence of dementia. These are small differences in small numbers. Using Pearson's chi-squared test only nine areas reach significance: four are higher at Lillie Road and five in the university practices.
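The test itself is straightforward on a 2x2 table of cases and non-cases. A sketch in pure Python using hypothetical counts (the real list sizes are needed to reproduce the p values above; the case counts below are just the table's percentages applied to assumed list sizes):

```python
import math

def pearson_chi2_2x2(cases_a, total_a, cases_b, total_b):
    """Pearson's chi-squared test (1 df, no continuity correction)
    comparing disease prevalence between two populations."""
    observed = [
        [cases_a, total_a - cases_a],
        [cases_b, total_b - cases_b],
    ]
    grand = total_a + total_b
    row_totals = [total_a, total_b]
    col_totals = [cases_a + cases_b, grand - cases_a - cases_b]
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / grand
            chi2 += (observed[i][j] - expected) ** 2 / expected
    # With one degree of freedom the survival function has a closed form
    p_value = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p_value

# Hypothetical counts: 0.77% of ~24,000 Lillie Road patients on the mental
# health register vs 0.35% of an assumed 150,000 university patients
chi2, p = pearson_chi2_2x2(185, 24000, 525, 150000)
print(chi2, p)
```

With differences this size on lists this big the mental health gap is highly significant, which matches the `<0.0001` in the table.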

I am not a statistician and this is a dig around the data rather than a formal analysis. I was looking to see if there were obvious anomalies. We don't really know how the existing patients at the practice reacted to the change of management. It is possible there are "old" and "new" populations being treated side by side, but there is no evidence for this. I have certainly not found evidence of "cherry picking". The practice is no more unusual than a university practice catering primarily for students.

But before we get too used to the idea, it is worth remembering that university practices are quite unusual. Their population pyramid is dramatically different to the country as a whole. Lillie Road is still an outlier, even if it is similar to some other outliers. It would be quite strange to believe that success here would automatically translate into other populations. These are patients with very low levels of chronic disease who attract relatively low levels of funding.

University practices are peculiarly unusual.

I have made no attempt to review the quality of care delivered at this practice; QOF is a pretty blunt instrument for that. Their points score is good, at a whisker under 550 out of 559 points. The rate of growth at Lillie Road seems to be slowing, but they are also available at more sites across London so that is certainly not the whole story. I hope I have been able to cast a little light on this atypical, but perhaps not entirely unique, practice.

While you are here I would ask nicely that, if you found this interesting, you might take a look at my book "Starting Snomed: A beginner's guide to the Snomed CT medical terminology". It is an easy introduction to this powerful new tool that will be coming to practices this year. It is available now on Amazon and is also available for Kindle and all of the various offers that come with that. Thank you.