
A look at GP at Hand

One of the things that I was interested to look at when the QOF data came out last year was how GP at Hand performed. A lot has been written over the past year or so about this service, which uses a chatbot app as the first point of contact. For all of the QOF year in question the service had restrictions, since lifted, on registering patients with long term conditions. This has led to concerns that GP at Hand has "cherry picked" patients who are younger and fitter, leaving other practices to deal with patients who have more pathology.

This has been denied by GP at Hand. Actually, as we will see, there is little doubt that they have younger patients, but they argue that this is resource neutral under the Carr-Hill formula, which adjusts the practice Global Sum according to the age and sex of patients. This was introduced in 2004 along with the rest of the GMS contract. At the time it caused significant swings in income, with particularly large reductions for practices with large numbers of younger patients. Practices which served university students were particularly badly affected. GP at Hand claims that it gets only 65% of the average GP funding per patient.

There is no significant adjustment in the Carr-Hill formula for how sick patients are. This has largely been done through the QOF although the effect has been quite variable over the years as the QOF has waxed and waned. I wanted to see if the QOF data let us answer the question of whether the patients at GP at Hand are healthier than we would expect.

We can start with a quick look at the QOF figures. In the year 2017/18 GP at Hand was based in a single practice at Lillie Road in Fulham. There are very low levels of disease prevalence there. In nine areas they are below the first centile - i.e. they are in the bottom 1% of practices for the prevalence of that condition. In only two areas - depression and mental health - are they above the bottom five percent of practices.

The data also shows the practice list size. If we look back to the previous year we can see that the list size increased from 2,500 in April 2017 to 24,000 in April 2018. This is such a huge rise that it is pretty much impossible to compare year on year. This is not the same practice that it was a year before. Even if none of the original patients left during the year, they form a fairly insignificant proportion of patients at the end of the year.

As an aside, I wondered where all of these patients are coming from. GP at Hand will register patients from a wide area due to their chatbot technology. We can get a clue if we look at the total registered list size for Hammersmith and Fulham CCG. This has steadily risen over the years at a typical rate of 4,000-6,000 patients a year. In the year 2017/18 there were an extra 24,473 patients in the CCG. I don't know much about London but unless there has been a lot of house building it seems that most of GP at Hand's patients came from outside the CCG area.

CHD prevalence at Lillie Road

We can see from the QOF data that prevalence has plunged at Lillie Road over the year. Some of the registers have barely risen despite the huge rise in patient numbers. The cancer register has risen from 51 to just 74. The number of patients with dementia has actually fallen from fourteen to twelve. That, however, is not very useful as we have already seen that the patients are completely different from the previous year. We are not comparing like with like. Clearly the new patients at the surgery are pretty healthy, but are they unusually healthy? We need more data.

Helpfully NHS Digital publish practice list sizes monthly and these are broken down by age and sex (insert your own joke here). We can use this to create profiles of practices and other organisations. Here is a population pyramid for England (which is all that NHS Digital cover).

It may not be a pyramid that the pharaohs would be proud of, but there are distinct trends in the population. We can use the data for Lillie Road to see if this is similar to their population, or at least if it is very different. We can produce a pyramid for Lillie Road practice and it is remarkably different from the English population as a whole. The vast majority of their patients are between the ages of 20 and 45, with the men on the list tending to be a little older than the women. With such a radically different population it would seem rather unfair to compare the surgery against English averages. They are certainly not average!
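For anyone wanting to reproduce these pyramids, the monthly age/sex counts can be turned into a crude chart in a few lines. This is just a sketch with invented numbers and coarse bands; the real NHS Digital files use five-year bands.

```python
def pyramid_rows(counts, scale=500):
    """Render a crude text population pyramid, oldest band at the top.

    counts maps an age band label to a (male, female) tuple;
    one '#' is printed per `scale` patients.
    """
    rows = []
    for band, (male, female) in reversed(list(counts.items())):
        left = ("#" * (male // scale)).rjust(25)  # men to the left
        rows.append(f"{left} {band:>6} " + "#" * (female // scale))
    return rows

# Entirely hypothetical counts for a young-adult-heavy practice
counts = {
    "0-14":  (1_200, 1_150),
    "15-44": (9_800, 10_400),
    "45-64": (2_100, 2_300),
    "65+":   (700, 900),
}
print("\n".join(pyramid_rows(counts)))
```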

It is worth checking as well whether this is something about the population of Hammersmith and Fulham CCG, although we have already seen that most of the Lillie Road patients come from outside the area. The pyramid below includes all practices except Lillie Road. The Wikipedia page for Hammersmith and Fulham suggests this is a borough full of young and single people, and this is borne out in the population figures. There are also quite a lot more women than men registered with a GP, although it is possible that this is simply because fewer men register with a GP at all. Contraception and cervical screening can be reasons for young women to join a practice more actively than men when moving around.

This is still quite different to Lillie Road, although it shares the emphasis on young adults with very small numbers of children. Lillie Road's demographics are not similar to its neighbours'. Again it is going to be difficult to make comparisons. Lillie Road seems to be unlike any other type of practice that we already have.

Or maybe not. I mentioned the global sum earlier and that the effect it is having on Lillie Road may be similar to university practices. What about them? I typed the word "university" into the search box on my website and looked at the practices that appeared in the results. After taking out a couple of results that were either not actually practices or were outside England I came up with a list of 26 practices. I then put their populations together and produced a (final, I promise) population pyramid.

Now we seem to be getting somewhere. The shape is familiar although the lines are a bit sharper. In general people are even younger in university practices and the chart appears as an even more exaggerated version of Lillie Road. There is also likely to be a degree of selection in universities, as young people with chronic health problems may find it more difficult to access university. The effects of both these factors are likely to push down the rate of disease in these populations and, by comparison, this is likely to make the pathology at Lillie Road appear higher. I am not too worried about that as we are trying to see if pathology is lower than we would expect at Lillie Road and most of the biases are in its favour: they will minimise the appearance of cherry picking.

Let's look at the prevalence for the university practices and for Lillie Road. All of these figures are the percentage of the practice population with each condition.

Area                          Lillie Road   University practices   p value
Atrial Fibrillation           0.2           0.26                   0.13
Asthma                        3.4           3.1                    0.0044
Cancer                        0.31          0.47                   0.00049
Coronary Heart Disease        0.25          0.38                   0.0008
Chronic Kidney Disease        0.25          0.38                   0.0013
COPD                          0.3           0.21                   0.0039
Dementia                      0.05          0.087                  0.07
Depression                    3.6           5.8                    <0.0001
Diabetes                      1             0.99                   0.81
Epilepsy                      0.24          0.22                   0.567
Heart Failure                 0.083         0.1                    0.567
Hypertension                  2.5           2                      <0.0001
Learning Disability           0.088         0.082                  0.86
Mental Health                 0.77          0.35                   <0.0001
Obesity                       2.6           2.7                    0.62
Osteoporosis                  0.017         0.029                  0.37
Peripheral Arterial Disease   0.054         0.067                  0.52
Rheumatoid Arthritis          0.13          0.1                    0.21
Stroke/TIA                    0.17          0.24                   0.021

Eyeballing the data does not suggest much of a difference. In some areas, such as depression, the university practices have a higher prevalence, and in others, including severe mental health problems, Lillie Road is ahead. We can see the same information on a bar chart. The biggest differences are in depression. University practices are a little ahead in diseases related to ischaemic heart disease, and in dementia. I will cut Lillie some slack in the latter as they are fast growing, and patients with dementia, or indeed cancer, can be less likely to change their surgery, although it is also likely that they are going to be less enthusiastic smartphone users. This is splitting hairs as university practices have about a tenth of the UK prevalence of dementia. These are small differences in small numbers. Using Pearson's chi-squared test only nine areas reach significance. Four are higher at Lillie and five in the university practices.
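For anyone curious how such a comparison can be run, a Pearson chi-squared test on a 2x2 table of register counts needs nothing beyond basic arithmetic. The university list size and register count below are made-up illustrations, not the figures behind the table above.

```python
def chi2_2x2(a, b, c, d):
    """Pearson's chi-squared statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Cancer register vs rest of list: Lillie Road (74 of a 24,000 list) against
# a hypothetical combined university list of 106,000 with 498 on the register
stat = chi2_2x2(74, 24_000 - 74, 498, 106_000 - 498)
print(round(stat, 1))  # ≈ 11.6, well above the 3.84 cutoff for p < 0.05
```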

I am not a statistician and this is a dig around the data rather than a formal analysis. I was looking to see if there were obvious anomalies. We don't really know how the existing patients at the practice reacted to the change of management. It is possible there are "old" and "new" populations being treated side by side, but there is no evidence for this. I have certainly not found evidence of "cherry picking". The practice is no more unusual than a university practice catering primarily for students.

But before we get too used to the idea it is worth remembering that university practices are quite unusual. Their population pyramid is dramatically different to the country as a whole. Lillie Road is still an outlier, even if it is similar to some other outliers. It would be quite strange to believe that success here would automatically translate into other populations. These are patients with very low levels of chronic disease who attract relatively low levels of funding.

University practices are peculiarly unusual.

I have made no attempt to review the quality of care delivered at this practice. QOF is a pretty blunt instrument for this. Their points score is good, at a whisker under 550 out of 559 points. The rate of growth at Lillie Road seems to be slowing, but they are now also available at more sites across London, so that is certainly not the whole story. I hope I have been able to cast a little light on this atypical, but perhaps not entirely unique, practice.

While you are here I would ask, nicely, that if you found this interesting you might take a look at my book "Starting Snomed: A beginner's guide to the Snomed CT medical terminology". It is an easy introduction to this powerful new tool that will be coming to practices this year. It is available now on Amazon, including for Kindle and all of the various offers that come with that. Thank you.

Blood pressure monitoring

Lots of stuff on the news today about the NICE guidance that all new patients should have an ambulatory blood pressure measurement. Savings of about ten million pounds in five years are promised. But what is the cost?

We can use the QOF data to work this out. As the PP1 indicator applies to all newly diagnosed hypertensives, the denominator is a good indicator of how many have been diagnosed in the previous year. (Actually it underestimates by up to 8% but I will let that pass for just now.) The total of the PP1 denominator over the UK in 2009/10 is 278,012.

We can buy an ambulatory blood pressure machine. If we pick a decent supplier - I promise I am not on commission here - the cheapest today is £1350 including VAT.

As they go on one day and come off the next these could be used four times a week in most practices - 208 times a year.

Let's do a little bit of maths - 278,012 patients per year divided by 208 slots (let's assume perfect usage) needs 1,337 machines, at a total cost of £1,804,950.
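The arithmetic is simple enough to sketch in a few lines of Python:

```python
import math

new_hypertensives = 278_012   # PP1 denominator across the UK, 2009/10
slots_per_machine = 208       # four one-day uses a week, 52 weeks a year
price_per_machine = 1_350     # cheapest machine found, including VAT (£)

machines = math.ceil(new_hypertensives / slots_per_machine)
print(machines)                              # 1337
print(f"£{machines * price_per_machine:,}")  # £1,804,950
```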

Of course if use is less than perfect - and to operate at all there will have to be some free slots - then the cost will be higher. Possibly two to three times as much. This is a big upfront capital cost. Recurring costs will need to be added, as well as replacement costs. I would imagine a machine is going to start to look pretty shabby after 208 uses!

Incentives work

The role of the press office at a major journal is to try to get the journal into the mainstream press. They can tend to be a little, well, excitable.

So it was in last week's BMJ that a paper was published on the early years of the QOF. Effect of financial incentives on incentivised and non-incentivised clinical activities: longitudinal analysis of data from the UK Quality and Outcomes Framework is actually quite an interesting paper on the effect of incentivised and non-incentivised indicators. The not terribly startling conclusion was that attaching a third of practice income to a set of indicators seems to have concentrated the minds of GPs and influenced practice, or at least the coding of that practice. Incentives work.

The graph above is taken from the paper. You can clearly see the "hump" where QOF starts. The setting up of systems and templates in a concentrated way pushed up achievement, and this has been maintained (or "plateaued" as they say in the paper).

However most of the press attention went onto the green line. Notice how the green line plummets off the bottom of the graph indicating inadequate care? Nope, neither do I. It is still going up. It is not going up quite as fast as before, and that is the point that the paper makes.

It is not a scandalous or surprising conclusion. Paying a third of income, and a greater share of profits, for certain indicators is bound to make them top priorities. It is to the credit of general practice that standards in the lower priority areas have not simply been maintained but continuously improved.

To be startled by the result that incentive payments incentivise some things over others is to question what you thought QOF was actually for.

Does QOF work?

There is quite a bit of publicity today for a paper in the BMJ asking whether hypertension targets have any effect on outcomes. Neither blood pressure nor cardiovascular morbidity seems to have been affected.

It was with a sense of dread that I found the paper. QOF has had more than its fair share of thinly disguised rants appearing as research. I was, however, very pleasantly surprised to find a well constructed piece of research with tightly defined methods and considerable clarity of thought. Maybe it takes researchers in the USA (Harvard, to be exact) to look at these things objectively.

There is considerable debate about performance related pay and very variable evidence about how effective it is. There has been some research in the USA, where schemes tend to make up a much smaller proportion of practice income than in the NHS.

It is, of course, disappointing although not particularly surprising to see a lack of observable effect. QOF is not a controlled intervention, and it is possible to argue that we will never know what would have happened without it, but this is a pretty weak defence.

Now for the political bit. The cash for the QOF came, to a large extent, from a transfer from the old capitation payments. So the money which previously went to practices and was used for treatment of hypertension was paid, in a different way, to practices for the same treatment of hypertension.

So little change but I am strangely cheerful that it has been demonstrated in such a high quality piece of research. More please.

The 5% rule

The square root formula for adjusting prevalences finished a year ago but we are still left with the 5% cut off for the year just gone. This was less well known than the square root formula but its effects could be rather larger. It seems to have become more of an issue for many practices recently - perhaps we are all looking at our budgets just that little bit more closely.

The basic rule is this. You find the practice with the highest prevalence for any given condition and then calculate 5% of that prevalence. Any practices below that 5% value have their prevalence moved up to that level. Simple? No, not really.
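As a sketch, the rule itself is just a floor applied to the list of prevalences (invented numbers here):

```python
def apply_five_percent_rule(prevalences):
    """Raise any prevalence below 5% of the maximum up to that floor."""
    floor = 0.05 * max(prevalences)
    return [max(p, floor) for p in prevalences]

# Invented prevalences (%): one extreme outlier sets a floor of 0.5,
# bunching three of the four ordinary practices at the same value
print(apply_five_percent_rule([0.1, 0.3, 0.4, 0.8, 10.0]))
# [0.5, 0.5, 0.5, 0.8, 10.0]
```

The complication, as we will see, is what happens when the maximum is set by a wildly atypical practice.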

The problem is that there are a small number of practices out there that are quite exceptional in their prevalence. You can see the spread of prevalences in the boxplot below (2008-9 data). For those unfamiliar with a boxplot, the middle 50% of practices are within the box. The whiskers spread out from this up to 1.5 times the size of the box. The really outlying practices are plotted separately.

As you can see there are high outliers in every area, some more than others. The 5% rule really starts to kick in when the highest outlier is more than 20 times the mean. When this happens more than half of the practices will be bunched together at the 5% level. Prevalence adjustment can simply stop. Last year, for instance, only three practices in England had a dementia prevalence of more than 5% of the maximum. This meant every other practice received the same prevalence factor.

To illustrate this effect, the results for one of these extreme practices are shown below. I am not giving their name as who they are is not really the point. They provide services to a group of patients with significant needs and there is no reason at all to doubt their figures.

Under the current rules this one practice can significantly change the QOF payments to thousands of others. But how many are affected? Well we can use the database to see.

As you can see it is not only in dementia that the 5% rule affects the vast majority of the 8,229 practices in England. Learning disabilities, stroke and mental health are all hugely affected, and over half of practices are affected in the area of CKD. As a side note you may notice that learning disabilities, heart failure and epilepsy all have the same maximum. This is all down to a single, and highly unusual, practice, although this time due to very small numbers of patients. The same practice is also responsible for the highest stroke prevalence. Another "special" practice has the highest rate of mental health problems, although with fewer than 100 patients overall.

There is no blame to attach to these practices. They are providing services to often very difficult populations and there is no doubt that they are recording accurately. The problem is with the operation of the rule, now thankfully in its final year. Expect big changes in these areas next year.

Removing Indicators

There have been a couple of new pieces of research in the last week or so relating to the QOF. I am trying to track down a copy of this month's BJGP but fortunately there is free access to research in the BMJ, including the paper The impact of removing financial incentives from clinical quality indicators by Lester et al.

The paper looks at the removal of incentive payments in California and, at the risk of spoiling the ending for you, finds that there is a decrease in achievement when the incentives are withdrawn. In fact this decline continues over the years, so things keep getting worse. Comparisons are drawn with the UK and QOF, although there are differences. In the US the payments rarely go to the clinicians directly but rather to their employer, and there were other programmes associated with the incentive payments that could have made a difference. Things not mentioned in the paper are that the incentives tend to be higher in the UK as a proportion of funding, and that most of the targets incentivised were fairly uncontroversial (cervical screening, diabetes control) whilst there is much more scepticism amongst clinicians about some of the QOF targets.

In general, though, the paper is a pretty easy read for everyone, except possibly the NICE QOF advisory committee. Of course we won't really know what happens in the UK until it actually happens and has had a chance to work through the system. As the earliest indicators will be removed in the 2011/12 year, we won't really know until after the Olympics. Until then this is our best clue.

Targets and Indicators

Setting indicators is difficult. It is especially difficult to set indicators attached to incentives, as they are in the QOF, and harder still to produce indicators that achieve the result you are after without creating perverse incentives.

Required reading for anyone setting a target at any level, from practice to nationwide, should be the very readable King's Fund report Getting the Measure of Quality. It clearly documents the successes and potential failures of information gathering and some of the possible remedies. It would certainly be interesting to see some of these standards applied to QOF indicators, although there are signs that the new NICE committee is thinking along broadly similar lines.

Patient survey results out, but not available

My practice received the results of the national patient survey today. As most practices are probably aware, these relate to last year, 2008/9, and apply to two new indicators in the Patient Experience domain. Just as a reminder -

PE7 Patient experience of access (1)
The percentage of patients who, in the appropriate national survey, indicate they were able to obtain a consultation with a GP (in England) or appropriate health professional (in Scotland, Wales and NI) within 2 working days (in Wales this will be 24 hours).
Range 70-90% 23.5 points

PE8 Patient experience of access (2)
The percentage of patients who, in the appropriate national survey, indicate they were able to book an appointment with a GP more than 2 days ahead.
Range 60-90% 35 points

As you can see these command quite a lot of points - more than Stroke, Heart Failure, Cancer and Palliative Care combined.

Now that we have seen our results we are absolutely ... Ah, I can't actually tell you how we feel about them. I can tell you other people aren't happy and are questioning the statistical validity due to very small sample sizes, but not whether I feel the same. You see, it is a secret. It is not, I hasten to add, our secret but the government's - at least until a time of their choosing. The statistics came with this warning:

The data is restricted until full national publication of all survey results by the Department of Health. The data must not be shared with any third parties except where expressly permitted by the Department of Health. This includes giving any indication over the content such as "favourable" or "unfavourable" comparisons of data.

Sharing of individual results with GP practices is permitted. Any communications with GP practices over their individual results should also enforce the confidentiality of the data and the duty not to share the data with any other third party ahead of official publication. A template letter will be available from the Department of Health to help in this regard.

PCTs should also refuse any Freedom of Information requests for this information, given national plans to publish the GP patient survey data. Any wrongful release of data to any other third party should be reported immediately to the Department of Health and may lead to an inquiry.

Now, I would hate to spoil a good press conference (30th June, apparently) but we are now in the bizarre situation of GPs trying to get a handle on whether there is a systemic problem here without proper information. I personally received three copies of the survey, all with different ID numbers, and returned them all! It is impossible to see if there is a problem or not without the data. Emails going around are a little cryptic as they can't disclose the data. All in the name of a stage managed press release. Expect to see a lot of argument about this over the next couple of weeks.

Square roots and cut offs

It has been announced that from next year the square root formula for calculating the cash due per point in the clinical areas is to be abandoned. In brief, this meant that the cash per point rose with the square root of prevalence rather than linearly with it. The theory was that practices would enjoy economies of scale. For more information on this use the search box above to search for square root. Over time thought has moved against this theory.
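As a toy illustration of the difference between the two schemes (the real calculation involves further adjustments for list size and national prevalence, so treat this purely as a sketch):

```python
import math

def cash_per_point_sqrt(base, relative_prevalence):
    # old scheme: payment scales with the square root of relative prevalence
    return base * math.sqrt(relative_prevalence)

def cash_per_point_linear(base, relative_prevalence):
    # new scheme: payment scales linearly with relative prevalence
    return base * relative_prevalence

# A practice at a quarter of the reference prevalence keeps half the cash
# per point under the square root rule but only a quarter under the new one
print(cash_per_point_sqrt(125.0, 0.25))    # 62.5
print(cash_per_point_linear(125.0, 0.25))  # 31.25
```

This is why low-prevalence practices, such as the university practices discussed below, were cushioned by the old formula and stand to lose under the new one.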

In the following year the 5% cut off will go. Previously practices with less than five percent of the maximum prevalence would be treated as if they had exactly five percent of the maximum. This could create some bizarre results.

I have been asked by many people over the past couple of weeks what the effect on practices would be. Well, after a short holiday, I have looked at modelling these changes based on last year's data. Before I discuss the results, a couple of warnings about all that is to follow. It is a model, not a prediction. It is based on applying next year's rules to last year's data. It assumes that all of the indicators will remain the same - which is simply not true. It assumes that practice behaviour is identical, which is unlikely. I have also had to make estimates of the prevalences of smoking and depression screening, which were not published for England this year. These are likely to be close but not exact. I am not using the Dep 2 indicator at all. This is a model and not a detailed estimate - but it should be close.

So on with some meat. The figures for each practice are available from the left hand menu of each practice's page. These are expressed in terms of the equivalent number of points gained and lost. To get the overall picture you can see the spread of practices in the graph above (I have taken eight outlier practices off the top to make the rest of the histogram clear - they tend to be unusual practices and so have unusual patterns of prevalence).

We can also look at practices in groups. Perhaps the most obvious group to look at are university practices. Dealing with younger people, they tend to have a lower prevalence of chronic disease - particularly the cardiovascular and pulmonary diseases which dominate QOF. A rather crude search shows 26 practices in the database with the string "Univ" in their address. On average these practices lose the equivalent of 234 points from their QOF payments. These were practices that started from a very low base, so losing this amount is very significant. In fact, after these changes, their take home points from the entire clinical domain average 93. Their clinical domain is less valuable to them than the patient experience domain. This is likely to have a very significant effect on these practices.

We can also look at the effect at PCT or Health Board level. You can see the PCT level changes online or download a (7k) csv file. The winners and losers are quite dramatic. London is hit hard, with both Lambeth and Westminster losing the equivalent of over 100 "full price" points per practice. The clear winners are in the North of England or attractive seaside resorts or, in a couple of cases, both. Two PCTs gain over 100 points per practice. County Durham PCT is going to have to find another one and a half million pounds per year to cover the cost of these changes. Meanwhile in Lambeth eight hundred thousand pounds will be taken from primary care. Of course both of these could be read, more optimistically, the other way around! The message here is that although this change may be cash neutral at the national level, the same is not true at PCT level.

As the graph above shows we have a normal distribution. These changes will be moderate for most, large for some and extreme for a few - a couple of practices gain over a thousand points, although these are not large practices but small and specialist ones.

QOF Research

I am always interested to see new and innovative research on the data from QOF and some of the best stuff at the moment is coming from the National Primary Care Research and Development Centre and in particular the team of Dr Tim Doran.

Two papers from this team have been published within a month of each other in two big hitting journals. The first was published in the New England Journal and dealt with the effects of exception reporting (sorry, you or your institution need a subscription to read the whole thing). In one of the more thorough analyses of exception reporting so far there is no association found between exception rates and the points offered in each indicator. Indeed the main association is with the type of indicator with low rates for offering treatment and higher rates for achieving outcomes. No evidence of systematic gaming was found in the QOF data.

In the second paper, this time in the Lancet, there is a look at the effect of socioeconomic factors on QOF performance (again, cash is required to read the whole paper). In the early years of QOF, practices located in more deprived wards tended to have more problems attaining higher levels of achievement than those in more affluent areas. There were, however, areas of high achievement in every type of area, but low achievement was concentrated in more deprived areas. Things tended to become a lot more even by year three.

There are a couple of interesting points about this second paper. Firstly, there appears to be some meaningful outcome despite the fairly poor results that you get with practice based social profiling rather than patient based profiling (no cash required to read - hooray!) - i.e. where patients live is more important than where the practice is located.

The second interesting factor is that points are not used for the analysis. Overall mean achievement by each practice is used. This tends to give undue prominence to lithium prescribing and patient referrals and it seems likely that most of the variation between practices is concentrated in a small number of highly variable indicators. It is still, however, much the best method of analysis so far seen in any QOF study. Clearly a team to watch!

Fat maps? Fat chance.

It says quite something when the best source that I can find for information about a QOF analysis is GMTV. The big story is the "Fat Map" of the UK, apparently produced by Dr Foster and sponsored by Roche. I say apparently because the actual map and report don't seem to feature on the websites of either.

The data they appear to be using is the QOF obesity register size at PCT level for April 2007 which has been available on this site for ten months now. When you come down to the business rules level this is a measure of the number of patients over sixteen years old who have had a BMI measured (or technically weight measured and BMI calculated) between January 2006 and April 2007 and that BMI was greater than or equal to 30.

A BMI of 30 is not that high these days. For those of you who don't deal with BMIs on a daily basis (basically everyone except front line clinicians) Flickr hosts a rather wonderful range of illustrated BMI categories.

The prevalence has then been calculated by dividing this number by the total registered patient population.

There are thus quite a number of confounding factors.

Firstly, and probably most significantly, is the enthusiasm of the GP practice for weighing lots of people. If people were not weighed they did not count: a huge patient would not be counted as obese if they did not have a BMI recorded. Getting a high prevalence involved weighing everyone who came through the door who looked like they might have a BMI over 30. There was no incentive to weigh patients with a BMI of less than 30, so it was just not done much - GPs have a pretty good eye for rough BMIs. For this reason, even if we could know how many BMIs were measured, it would be a bad measure of obesity prevalence due to the skewed population at the measurement level.

Secondly we have the dodgy denominator. Remember the definition above? It applied only to patients of 16 or over, which is fair enough - BMIs don't really work with children. However, to get the prevalence it was divided by the whole population. So if you have a lot of under 16s then your obesity prevalence will tend to be diluted. Similarly, if you have a generally ageing population then your obesity levels will appear artificially high.
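A quick sketch with invented numbers shows how much the whole-list denominator can dilute the figure:

```python
def reported_prevalence(obese_adults, adults, under_16s):
    """Obesity 'prevalence' as QOF computed it: a 16-and-over register
    divided by the whole registered list, children included."""
    return 100 * obese_adults / (adults + under_16s)

# Two hypothetical practices with identical adult obesity (20% of adults)
# but different age mixes report quite different prevalences
print(round(reported_prevalence(1_600, 8_000, 1_000), 1))  # 17.8
print(round(reported_prevalence(1_600, 8_000, 3_000), 1))  # 14.5
```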

Finally we have areas such as coding which are probably pretty minor.

Wales in general seems to stick out on the map, or at least the bits I could see on news.sky.co.uk. Now I don't know a lot about Wales other than what I see on Torchwood, but it seems rather odd that the whole of Wales is high (from North to South) and that obesity starts right on the border. Was there a LES or other country specific reason for Welsh practices to be incentivised to check BMIs a lot?

So this is a pretty dubious set of statistics on a map. Could it be better? Well perhaps a little. I mentioned the problem of the dodgy denominator above. Is there a better figure that we could use? Certainly there is. Records 22 (recording of smoking status) applies to all patients over 15 and uses that population as its denominator. We could at least correct that error although practice rates of measurement will still be a significant factor. I will try to put the figures together and if Roche or anyone else want to sponsor it they are very welcome!

Overextended?

The changes to the QOF detailed on this blog and the detailed calculations of losses under the proposed contract imposition are only a relatively small part of the current issues between GPs and the government. The central issue from Number 10's point of view appears to be extended hours. If the government's proposals are accepted then a Directed Enhanced Service will be commissioned for these extended hours. The politics are complex and I would direct the interested reader to Lawrence Buckman's letter to the profession.

The fundamental drive of the DES is that there should be 30 minutes of extra time per one thousand patients on the list, to be delivered in 90-minute blocks in the evening or at weekends, or 60-minute blocks in the mornings. We are, however, on shifting sands here. A new provision brought in at the end of January is that there should be no time when reception is closed during the core hours. Any reception closure would have to be replaced with clinical time. The extended hours would be agreed with the PCT and based on the results of the GP Patient Survey, a national survey of patients about primary care.

The results of the patient survey have been published and so the figures can be used to work out an estimate of the impact of the DES. What I have done on this site is to calculate the amount of time required from each practice and then allocate those hours according to the result of the survey. Thus if 51% wanted weekend access and 49% evenings, and there were two sessions to allocate, then there would be one to each. If there was only one it would go to the weekend. A fairly simple formula, but it does make it easy to automate. The ultimate detail is in the source code.
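The allocation rule described above can be sketched in a few lines. This is a simplified model, not the site's actual source code: the list size and survey percentages are invented, and I have assumed rounding to the nearest whole block, which may differ from the real calculation:

```python
# Sketch of the allocation described above: 30 minutes of extra time per
# 1,000 registered patients per week, delivered in 90-minute
# evening/weekend blocks, split according to the patient survey result.
# All figures here are hypothetical.

def extended_hours_blocks(list_size: int) -> int:
    """Number of 90-minute blocks required per week (nearest whole block)."""
    minutes = 30 * list_size / 1000
    return round(minutes / 90)

def allocate(blocks: int, pct_weekend: float) -> tuple[int, int]:
    """Split blocks between (weekend, evening) in proportion to the survey,
    giving any odd block to the more popular option."""
    weekend = round(blocks * pct_weekend / 100)
    return weekend, blocks - weekend

blocks = extended_hours_blocks(6000)   # 180 minutes/week -> 2 blocks
print(allocate(blocks, 51))            # (1, 1): one block each
print(allocate(1, 51))                 # (1, 0): a single block goes to the weekend
```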

The results can be seen on the practice pages. The summary is that it is not the couple of hours a week that many imagined. 55% of practices will be required to produce three hours or over on a Saturday. Around 160 practices would also be doing Sundays under this formula. Interestingly only eight practices would be required to provide early morning surgeries.

Some of the problems with the current proposals are also apparent. It is widely reported that simultaneous surgeries would not be permitted (i.e. you could not supply three hours of time by having two GPs work for 90 minutes simultaneously). One of the effects of this rule is that opening hours for smaller practices will be considerably shorter than those for larger practices. Taken literally, two (very large) practices would be open from 8am on Saturday until half past midnight on Sunday morning. Clearly this is absurd.

I will try to keep the model updated with changes, but there remains a lack of detail in these proposals, and much of the detail that does exist may not be that practical. Obviously if anyone from the government side of negotiations knows better then the email address is below!

Who loses what?

As many of you are probably aware, the site has had information about the potential loss of cash to practices under the government's proposed imposed changes to the QOF in England. If you have not seen this you can click on the link on the left of each of the practice pages. There is also a table of the effects of the changes at PCT level.

Of course now that we have these statistics we can look at the breakdown a little. As I have said before, the threshold changes will mostly affect those who have had most problems in meeting the targets. The practices with lower scores have tended to be those in more deprived areas. A reasonable hypothesis would be that more deprived practices tend to lose out more.

We can go on to test this. Helpfully the deprivation index for most practices was published as part of last year's GP patient survey. We can put all of this together in a spreadsheet and work out the loss per patient for the threshold changes and for the whole set of changes overall. Not difficult, as we have practice list size from the QOF data as well.

As it turns out there is a correlation between deprivation and the cash lost through threshold changes at practice level. For the mathematically minded the correlation is 0.13 - not particularly strong, but it is there. In practical terms the thousand least deprived practices are to lose 62 pence per patient whilst the thousand most deprived practices will lose 84 pence per patient - a difference of 22 pence. For a "typical" practice of 5891 patients this works out at £1,287 per year between the most and least deprived practices.
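For anyone wanting to reproduce this kind of analysis, the calculation is just a Pearson correlation between two columns of the spreadsheet. Here is a minimal sketch; the deprivation indices and losses below are invented illustrative values, not the real practice data:

```python
# Sketch of the correlation calculation described above: Pearson's r
# between a practice deprivation index and the pence lost per patient.
# The five "practices" below are hypothetical.
from math import sqrt

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

deprivation = [5.0, 12.0, 20.0, 31.0, 45.0]   # index, least to most deprived
loss_pence  = [60.0, 65.0, 70.0, 78.0, 85.0]  # loss per patient, pence

print(f"r = {pearson(deprivation, loss_pence):.2f}")
```

With the full practice-level data the same function gives the weak r = 0.13 quoted above; the toy data here is deliberately monotone so the correlation comes out much stronger.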

This all looks pretty bleak but there is another factor that works against this effect. The removed points take more from practices that have gained all of these points in the past. Statistically these tended to be practices in the least deprived areas. If we bring in the removed points then the effect almost disappears. The correlation drops to 0.03 which is small enough to be ignored.

So balance is restored - whether by luck or judgement! It does however give some idea of the less obvious effects of changes to QOF.

Resources for Primary Care Research

I have had a few emails over the last few months about using QOF data for research and trying to break down some of the data. Unfortunately QOF is quite limited in what can be divined about individual patient treatment. There is a little more potential for breaking down populations with some of the composite registers this year but things are still pretty limited.

For those looking at using primary care data there is an excellent report on all of the sources of primary care data available. A user's guide to data collected in primary care in England is a summary of all of the data sources, including QOF, with details of their uses and limitations. It is published by the Eastern Region Public Health Laboratory - one of the rather unsung chain of public health laboratories.

This has to be essential reading for anyone conducting or even contemplating doing research or analysis on primary care data. I can't actually see that a printed version is available or I would get a copy for my bookshelf - but get it on your computer now!

Gaming, and report writing

A few weeks ago the Centre for Health Economics at York University produced a report looking at some of the statistics in QOF. It looks in some detail at disease prevalence and, to some degree, at exception reporting. They are particularly interested in the difference in behaviour between high scoring practices and lower scoring ones, although they also look at social and societal differences between practices.

They only looked at Scottish practices due to the rather better data that was available for them, which has got to be a pat on the back for ISD Scotland.

I won't go into detail about the mechanics of the analysis - you can read it yourself, although I would warn you that some knowledge of statistics is needed. It is not a light read; health economics papers rarely are. Most of the really interesting findings relate to the differences between 2005 and 2006 in practices that did, and did not, get maximum points in a given area.

The results are interesting. In general terms those practices who hit the top indicator thresholds in the first year increased their prevalences in the second year relative to those practices which did not. Conversely those practices who did not reach the top thresholds tended to increase the amount of exception reporting they did.

Now there is probably nothing too surprising in that. It would be a rather worrying situation for an incentive scheme not to lead to changes in behaviour in the direction of the incentive. That is exactly what is happening here: practices tend to work most in the areas that attract the greatest incentive. There are certainly issues with the underdiagnosis of chronic diseases, and there are probably many people who could be exception reported and are not.

The report talks a lot about "gaming". It does not define this, however, and I struggle to find a good definition on the Internet. Perhaps the most benign definition in this context would be "undertaking actions to increase revenue that would not improve patient care". Actually this would encompass all exception reporting. This is not a bad definition, as they define altruism as precisely the converse (personally I think that is professionalism, but let's not get bogged down in semantics).

The authors of the report do not look so kindly on gaming. They define it thus:

However, exception reporting also gives GPs the opportunity to exclude patients who should in fact be treated in order to achieve higher financial rewards. This is inappropriate use of exception reporting or "gaming".

You can see where we are going here, can't you? By page 15 they are just calling it cheating.

That is not to say that I disagree with their mathematical analysis. I actually think it is rather brilliant and represents an attempt to model QOF mathematically in a way that has not been seen before - in public at least.

However they fall over in the conclusions. They cannot see any reason for these variations except cheating and dishonesty. Now that is one possible explanation for their findings, but it is not the only one by any means. They seem to have very little idea of how exceptions are actually used. They don't see practices as living organisations with priorities. If you incentivise them to look for more patients they will find them - there certainly seem to be plenty undiagnosed with diabetes and hypertension. If they are going to get extra cash for a more efficient exception reporting system then they are likely to build one. It could simply be an indication of priorities.

None of this needs dishonest exception reporting or fraudulent diagnosis, simply an understanding of where the statistics come from. So are GPs cheating, lying scoundrels? Well, some might be, but there is no solid evidence of this on a large scale. It is reassuring (as a GP) to read their first conclusion.

The fact that practices could have treated substantially fewer patients (12.5%) without falling below the upper thresholds for indicators and thereby reducing practice revenue is compatible with altruistic motivation.

Not so bad after all!

QOF reduces admissions - or does it?

I like to be positive here. It is nice to find positive things about the QOF. I was very interested to see reports that higher QOF scores in asthma were associated with a reduction in emergency asthma admissions. Good news - or was it?

The original report (1.7M pdf) was produced by Asthma UK. The report, to be fair, is a glossy affair putting across a political message rather than a scientific paper. There are virtually no figures, although some, partially processed, have been put in a couple of appendices. There are some graphs, but even these do not seem to support some of the conclusions given.

There is undoubtedly a great variation in the number of emergency admissions with asthma. The greatest factor appears to be latitude, with the number going up as you go north, and pages six and seven make this clear. So far, so good. There is then a brief pause for a full page photograph of a nurse clinging to a bag and mask, and a name and shame list for PCTs. The high admitters tend to be city PCTs and the lower admitters leafy southern PCTs, a fact not commented on. The next page is titled "Why the Divide?". It starts with the sentence "The difference in hospital admissions across England is unlikely to reflect differences in the number of people with asthma." Asthma UK appears to be saying, without offering any real justification, that the number of people admitted with asthma is unrelated to the number of people with asthma. Intuitively it seems incredible, and unfortunately no evidence is given to back up this bold statement. In fact it is printed above a graph showing pretty much the opposite.

Lastly we get to the correlation with QOF points. There certainly seems to be a weak correlation between QOF score and asthma admissions in 2004/5 - the first year of QOF. This may be something of an underestimate as they use QOF score rather than total QOF achievement. Why should that make a difference? Well, QOF points are capped at the upper achievement threshold of 70%. Any extra achievement above this is not counted. In 2004/5 over a third of practices got every single point in the asthma section of QOF. The extra achievement of these practices has been thrown away in the analysis.
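The capping effect is easier to see with the scoring rule written out. QOF points for an indicator are awarded linearly between a lower and an upper achievement threshold; the sketch below uses 25% and 70% thresholds and an illustrative six-point maximum, so treat the specific numbers as assumptions rather than the values for any particular asthma indicator:

```python
# Sketch of the QOF scoring rule: points rise linearly between the lower
# and upper thresholds, so achievement above the upper threshold earns
# nothing extra and is invisible in the points score.
# Thresholds and max points here are illustrative.

def indicator_points(achievement: float, max_points: float,
                     lower: float = 25.0, upper: float = 70.0) -> float:
    """Points earned for one indicator, given percentage achievement."""
    if achievement <= lower:
        return 0.0
    fraction = min(1.0, (achievement - lower) / (upper - lower))
    return max_points * fraction

print(indicator_points(70.0, 6))  # 6.0 points
print(indicator_points(95.0, 6))  # also 6.0: the extra 25% is thrown away
```

A practice achieving 95% gets exactly the same points as one achieving 70%, which is why an analysis based on points rather than raw achievement discards information about the highest performers.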

In any event all that we can say is there is a correlation. Cause and effect is impossible to suggest without at least some data from previous years.

I would love to see some data showing that QOF is making a difference. I was disappointed that this report shows little other than a large variation in asthma admissions around the country. It does not answer the question of why half as well as a proper peer reviewed study does (a study which, incidentally, makes no mention of QOF!).

Measuring Prevalence

One of the questions often asked by GPs about QOF data is what their prevalence data should be. There are three ways to measure the prevalence of a condition: you can ask doctors, you can ask patients, or you can get out there and thoroughly examine a random group of people.

The data from QOF on this site is definitely in the first camp. I have come across some interesting disease prevalence models which try to compare QOF data against data from the Health Survey for England, which is largely, as the name suggests, a survey of the asking-patients variety. It does, however, feature some objective measurement by nurses as well.

There is only analysis for heart disease and hypertension. In heart disease there is a small reduction in prevalence in QOF compared to the HSE estimation. This is probably down to a lack of coding. The differences in hypertension prevalence are much larger with the HSE prevalence over double the QOF prevalence.

Now I have to admit that I boggled at this for a while. Could it really be that a quarter of all my patients had hypertension? Well, the answer by strict interpretation seems to be "Yes". In fact the data, including the difference between the diagnosed and the actively treated, has been observed for some time.

Now I don't propose to go through the rights and wrongs of this, but the fact remains that prevalence varies widely, possibly predictably, depending on who you ask. We don't have the official figures for this year's prevalence, but kidney disease seems certain to come in well under expectations.

So before making comparisons make sure that data is all coming from the same sources. We await the official figures (traditionally Wales has been early with them but not this year).

Waiting for the Data

So far this month the data has been submitted to QMAS - which stumbled and recovered a couple of days later. PCTs should have received the final data and I have had a short holiday.

I came back from my holiday to an email asking about the status of the data currently showing on QMAS. This is not available to the public as yet as it has not been finalised. We can expect the final data, for England at least, around September. I can see from emails and server logs however that many users of this site work in the NHS itself and so may have access to the QMAS data. Can it be relied upon?

It is useful to know the stages the data passes through. Data from practices should have been signed off on QMAS by now. The signing off process signifies the practice's agreement with the submitted data, and the guidance describes it as a legal declaration that the submission is accurate. In the vast majority of cases this will lead directly to payment without further changes, often within a few weeks.

In some cases there is disagreement from the PCT, most often querying a claim. This normally relates to some disagreement over the interpretation of an indicator although occasionally it may be used to customise QOF to some local and special circumstance - particularly in the clinical areas where the data is automatically extracted. It is this process which must be completed by June, although it is rare that it takes this long.

In any case PCTs should be aware by now of any disputed figures. If you are working in the PCT it is hopefully a simple matter to find out who is dealing with these disputes, make them a coffee and ask if they are aware of any possible changes to the data. If there is no dispute the data can probably be considered as final.

The rest of us will just have to wait.

Setting boundaries

As you might gather from my map section I quite like health geographics and find it a good way to view information. Of course the google maps approach is all very well, but it is basically a lot of tables laid out over a map. What is much better is to see the outlines of PCT areas. You can see neighbouring areas and it is much easier to see patterns.

I was quite interested to see a range of interactive maps produced by the regional public health observatories. There is a still picture above of one of them, and even on the small version you can see trends quite clearly. You can find the original here (as a technical aside, you will need an SVG viewer such as the Adobe one; the SVG viewer in Firefox does not appear to work with this site for various reasons).

Now this is clearly the best way to do this. You could argue with the choice of SVG to present the maps, but it is a very capable and open technology that has yet to find its feet. So why do we only see the 2005 data on these maps? Well, it appears it all comes down to money. Ordnance Survey wants lots of it to provide the data on PCT boundaries. The economics suggests that the Department of Health should receive money from the Treasury, pay it to Ordnance Survey, who will then send it back to the Treasury.

Ironically the data on PCT boundaries comes from the Department of Health in the first place, although there is a degree of processing at the OS end.

There can be few better examples of how really quite reasonable projects are held back by the insistence of the government on selling information to everyone, including itself. There is data sitting in one government department that would benefit the work in another, but it can't be used. Even the OFT believes that freeing information would make economic sense, although there are also some attempts to justify the policy.

I have had several offers to geocode the data on this site's map to allow practice level data. To be honest I have been afraid to take them up. If you want to know the grid reference of practices in the UK the Post Office wants your cash.

Prevalence Models from Doncaster

The public health intelligence unit in Doncaster has produced a tool to analyse prevalence data at the practice level. Models have been developed for all of the disease areas based on deprivation, sex and age. Most impressively this includes all of the new areas in the current year, such as obesity and atrial fibrillation. The tool is delivered as an Excel spreadsheet and contains a deprivation measurement for every local authority in England, along with instructions for producing more accurate figures for individual practices.

This is certainly the first attempt that I have seen to model the QOF areas systematically. The authors don't claim too much for these models, as they have been developed from the literature and their correspondence with the QOF areas does not seem to have been fully tested. It is also not clear how much practice prevalence would be expected to be explained by these social factors. In statistical terms it would be interesting to know what the residual variance is at a national level. This is not to put down their achievement, which is considerable, but rather to say that there is quite a lot of opportunity for further work.

They report that updates are likely, although not assured. Which leads me to one of my pet rants. As a spreadsheet there is no terribly easy way to upgrade. If all of your data is in one place then cutting and pasting it into the new version might work, but the lack of logic/data separation in spreadsheets makes this by no means guaranteed. There is, of course, not a lot of other choice when it comes to simple application delivery. Delivering databases is no easier and there is, as yet, no way for external applications to directly access QMAS. Perhaps tools like this would help to drive some demand.