
Blood pressure monitoring

Lots of stuff on the news today about the NICE guidance that all new patients should have an ambulatory blood pressure measurement. Savings of about ten million pounds in five years are promised. But what is the cost?

We can use the QOF data to work this out. As the PP1 indicator applies to all newly diagnosed hypertensives, the denominator is a good indicator of how many have been diagnosed in the previous year. (Actually it underestimates by up to 8%, but I will let that pass for just now.) The total of the PP1 denominator over the UK in 2009/10 is 278,012.

We can buy an ambulatory blood pressure machine. If we pick a decent supplier - I promise I am not on commission here - the cheapest today is £1350 including VAT.

As they go on one day and come off the next these could be used four times a week in most practices - 208 times a year.

Let's do a little bit of maths: 278,012 patients per year divided by 208 slots (let's assume perfect usage) needs 1,337 machines, at a total cost of £1,804,950.
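For anyone who wants to check my working, the sums come out like this (a quick sketch in Python; the figures are the ones quoted in this post):

```python
# Back-of-envelope costing, using the figures quoted in the post.
patients_per_year = 278_012   # UK total of the PP1 denominator, 2009/10
uses_per_week = 4             # machine goes on one day, comes off the next
slots_per_year = uses_per_week * 52   # 208 uses per machine per year

# Round up - you cannot buy a fraction of a machine.
machines_needed = -(-patients_per_year // slots_per_year)
cost_per_machine = 1350       # pounds, including VAT

total_cost = machines_needed * cost_per_machine
print(f"{machines_needed} machines, total £{total_cost:,}")
# 1337 machines, total £1,804,950
```

And that still assumes perfect usage with no free slots, which is the point made below.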

Of course if use is less than perfect - and to operate at all there will have to be some free slots - then the cost will be more. Possibly two to three times as much. This is a big upfront capital cost. Recurring costs will need to be added on as well as replacement costs. I would imagine a machine is going to start to look pretty shabby after 208 uses!

Incentives work

The role of the press office at a major journal is to try to get the journal into the mainstream press. They can tend to be a little, well, excitable.

So it was in last week's BMJ that a paper was published on the early years of the QOF. Effect of financial incentives on incentivised and non-incentivised clinical activities: longitudinal analysis of data from the UK Quality and Outcomes Framework is actually quite an interesting paper on the effect of incentivised and non-incentivised indicators. The not terribly startling conclusion was that attaching a third of practice income to a set of indicators seems to have concentrated the minds of GPs and influenced practice, or at least the coding of that practice. Incentives work.

The graph above is taken from the paper. You can clearly see the "hump" where QOF starts. The setting up of systems and templates in a concentrated way has pushed up achievement, and this is maintained (or "plateaued", as they say in the paper).

However most of the press attention went onto the green line. Notice how the green line plummets off the bottom of the graph indicating inadequate care? Nope, neither do I. It is still going up. It is not going up quite as fast as before, and that is the point that the paper makes.

It is not a scandalous or surprising conclusion. Paying a third of income, and a greater share of profits, for certain indicators is bound to make these top priorities. It is to the credit of general practice that standards in the lower-priority areas have not simply been maintained but continuously improved.

To be startled by the result that incentive payments incentivise some things over others is to question what you thought QOF was actually for.

Fat maps? Fat chance.

It says something when the best source I can find for information about a QOF analysis is GMTV. The big story is the "Fat Map" of the UK, apparently produced by Dr Foster and sponsored by Roche. I say apparently because the actual map and report don't seem to feature on the web sites of either.

The data they appear to be using is the QOF obesity register size at PCT level for April 2007 which has been available on this site for ten months now. When you come down to the business rules level this is a measure of the number of patients over sixteen years old who have had a BMI measured (or technically weight measured and BMI calculated) between January 2006 and April 2007 and that BMI was greater than or equal to 30.

A BMI of 30 is not that high these days. For those of you who don't deal with BMIs on a daily basis (basically anyone other than front line clinicians), Flickr hosts a rather wonderful range of illustrated BMI categories.

The prevalence has then been calculated by dividing this number by the total registered patient population.

There are thus quite a number of confounding factors.

First, and probably most significant, is the enthusiasm of the GP practice for weighing lots of people. If people were not weighed they did not count: a huge patient would not be counted as obese if they did not have a BMI recorded. Getting a high prevalence meant weighing everyone who came through the door who looked like they might have a BMI over 30. There was no incentive to weigh patients with a BMI of less than 30, so this was just not done much - GPs have a pretty good eye for rough BMIs. For this reason, even if we knew how many BMIs had been measured, it would still be a bad measure of obesity prevalence because of the skewed population being measured.

Secondly we have the dodgy denominator. Remember the definition above? It applied only to patients of 16 or over, which is fair enough: BMIs don't really work with children. However, to get the prevalence it was divided by the whole population. So if you have a lot of under-16s then your obesity prevalence will tend to be diluted. Similarly, if you have a generally ageing population then your obesity levels will appear artificially high.
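To see how much difference the denominator makes, here is a quick illustration. The population and register sizes are made up, chosen purely to show the dilution effect:

```python
# Illustrative only: hypothetical practice numbers showing how dividing
# by the whole registered population dilutes the apparent prevalence.
adults_16_plus = 7_000
under_16s = 3_000               # not eligible for the obesity register
obese_adults_recorded = 1_400   # register counts only patients aged 16+

# Prevalence among those actually eligible for the register:
true_adult_prevalence = obese_adults_recorded / adults_16_plus

# Prevalence as calculated for the map - whole population denominator:
reported_prevalence = obese_adults_recorded / (adults_16_plus + under_16s)

print(f"adult prevalence:    {true_adult_prevalence:.0%}")   # 20%
print(f"reported prevalence: {reported_prevalence:.0%}")     # 14%
```

The practice with lots of children looks slimmer than it is, and vice versa.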

Finally we have areas such as coding which are probably pretty minor.

Wales in general seems to stick out on the map, or at least the bits I could see on news.sky.co.uk. Now I don't know a lot about Wales other than what I see on Torchwood, but it seems rather odd that the whole of Wales is high (from North to South) and that obesity starts right on the border. Was there a LES or other country-specific reason for practices to be incentivised to check BMIs a lot?

So this is a pretty dubious set of statistics on a map. Could it be better? Perhaps a little. I mentioned the problem of the dodgy denominator above: is there a better figure we could use? Certainly there is. Records 22 (recording of smoking status) applies to all patients over 15 and uses that population as its denominator. We could at least correct that error, although practice rates of measurement will still be a significant factor. I will try to put the figures together, and if Roche or anyone else wants to sponsor it they are very welcome!

Who has two?

This morning Ben Bradshaw announced in an interview with the BBC News website that he had found a practice with only two patients. It is, apparently, in Southern England.

Well, I don't know who it is either. This database only lists practices with QOF returns, and it contains only nine practices in England with fewer than 300 patients as at April 2007. All of these are specialist. Most are run by PCTs as access clinics, often catering for the homeless or others who may find it difficult to register with conventional practices. These practices run under PMS contracts, which don't attract the MPIG that Mr Bradshaw doesn't like. There are two other specialist practices, one attached to a very large nursing home and another to a school, but both of these have over 150 patients.

So the mystery of the practice with two patients remains.

Supporting Surgeries

If there is one thing that QOF has taught us it is that most GPs respond to a challenge. In the first year the government was surprised at the levels of achievement seen, although this was largely a repeat of the situation with Item of Service payments in the 1992 contract. GPs, it seems, will do what is required to meet the contract.

We may have met our match, however. When the requirement is essentially that you are not a GP but a large corporation, it is an impossible target to meet. With hundreds of individual and different contracts it also becomes impossible to collect consistent statistics and monitor the performance of the corporate clinics - just when we seemed to be getting started on that problem.

We have seen this already with independent treatment centres. For years there was a persistent rumour of poor outcomes from these centres but no good figures to back these rumours up. There is some data now which suggests that there is little difference in outcome from NHS centres but nobody benefited from a five year delay in collecting the statistics.

Central declaration of needs and solutions by government risks distracting GPs from the patient sitting in front of them and that patient's needs - and that is a risk borne by the patients in primary care. This is why I support the Support Your Surgery campaign.

Six million people can be wrong

There are a lot of statistics bouncing around about extended hours. One that keeps coming up is the demand of six million patients for them. Here we have no less a figure than the Secretary of State for Health answering a question in parliament.

About 6 million people in our patients survey said that they want improved access to their GP in the evenings and on Saturdays, which is why we are seeking to reach a negotiated settlement with the BMA.

The survey he seems to be talking about here is the 2007 GP patient survey. Looking at the results, things are not quite as clear as they might seem from the above answer. For a start, six million people did not say anything of the sort. There were not even six million people in the survey: it was sent to only 4.7 million, and fewer than half of them (2.3 million) sent it back. Those sent surveys were picked largely from people who had been to their GP in the previous six months.

So where does this figure of six million come from? Well, of those who replied, 16% said that they were, in some way, dissatisfied with opening hours. Take that figure together with the population of England over 18 (just shy of 40 million), multiply, and you get around about six million. Clearly what Mr Johnson intended to say was that if the whole adult population had been asked, and all of them had replied, he believed six million people would say that.
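The arithmetic behind the claim can be reconstructed like this (figures as quoted above; the 40 million is approximate, "just shy of" the adult population of England):

```python
# Reconstruction of the "six million" arithmetic from the post's figures.
surveys_sent = 4_700_000
responses = 2_300_000
dissatisfied_share = 0.16        # 16% of respondents unhappy with hours
adult_population = 40_000_000    # England over 18, approximate

# What the survey actually measured:
dissatisfied_respondents = responses * dissatisfied_share
print(f"actual dissatisfied responses: {dissatisfied_respondents:,.0f}")
# actual dissatisfied responses: 368,000

# The extrapolation that produces the headline figure:
extrapolated = adult_population * dissatisfied_share
print(f"extrapolated to all adults:    {extrapolated:,.0f}")
# extrapolated to all adults:    6,400,000
```

The gap between 368,000 real answers and a claimed six million is the whole problem.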

Now that is a pretty rotten bit of statistical conjecture. It assumes that all of those who did not reply would think the same way as those who did; it may well be they did not reply because they had no particular views. Even more ambitiously, it assumes that the group that was not polled - people who had not seen their GP recently - had identical views.

Worse still, it ignores the fact that only ten per cent were able to say in what way they were unhappy with the opening hours (lunchtimes, evenings etc). Only 208,000 asked for opening outside the usual 8-6.30, Monday to Friday - about 9% of the total responses. It is difficult to call this massive pressure. Even with the simplistic extrapolation this would only be 3.6 million. The pie chart shows the breakdown of responses.

It's not just me saying this. When you pay 11 million pounds for a survey, MORI gives you some quite detailed analysis - in this case 111 pages (2.4Mb) of it. So what do the experts have to say?

When interpreting the findings, it is important to remember that the results are based on a sample of patients registered with a GP in England who responded to the survey, and not the entire population of England.
The vast majority of patients (84%) say they are satisfied with the hours their GP practice was open during the last six months, while the remaining 16% say they are dissatisfied with the opening hours.

What do we know for sure, then? Simply that there is some demand for extended hours, but not a lot. You can read the MORI report for a detailed socioeconomic breakdown of the figures. What is quite clear, though, is that the figure of six million people is definitely wrong.

What's the point?

A little nihilistic, maybe, as questions go, but when applied to QOF it would be nice to think that all this effort is doing patients good. Paying GPs and keeping administrators gainfully employed is all very well, but it would be nice to think it was actually achieving some health outcome.

Well, there is as yet very little evidence of actual improvement in patient outcomes, and at least some evidence of very little improvement. It is simply too early to say for sure. An article in this week's BMJ (subscription required outside the NHS) goes rather further and suggests that harm may actually be coming about because of the targets.

The quality and outcomes framework diminishes the responsibility of doctors to think, to the potential detriment of patients, and encourages a focus on points scored, threshold met, and income generated.

Pretty severe stuff, but it is a feeling anecdotally shared by a reasonable number of GPs and indeed some patients (not suitable for those offended by swearing). There are quite a number of points made that I would broadly agree with. There are weaknesses in the approach of QOF, in particular the application of treatment to groups rather than to individual circumstances - a problem Evidence Based Medicine has been struggling with for years, although to describe the QOF as fully evidence based rather pushes the definition.

This debate has some time to run.

QOF reduces admissions - or does it?

I like to be positive here. It is nice to find positive things about the QOF, so I was very interested to see reports that higher QOF scores in asthma were associated with a reduction in emergency asthma admissions. Good news - or was it?

The original report (1.7M pdf) was produced by Asthma UK. The report, to be fair, is a glossy affair putting across a political message rather than a scientific paper. There are virtually no figures, although some, partially processed, have been put in a couple of appendices. There are some graphs, but even these do not seem to support some of the conclusions given.

There is undoubtedly great variation in the number of emergency admissions with asthma. The greatest factor appears to be latitude, with the number going up as you go north, and pages six and seven make this clear. So far, so good. There is then a brief pause for a full-page photograph of a nurse clinging to a bag and mask, and a name-and-shame list of PCTs. The high admitters tend to be city PCTs and the lower admitters leafy southern PCTs, a fact not commented on. The next page is titled "Why the Divide?". It starts with the sentence "The difference in hospital admissions across England is unlikely to reflect differences in the number of people with asthma." Asthma UK appears to be saying, without offering any real justification, that the number of people admitted with asthma is unrelated to the number of people with asthma. Intuitively this seems incredible, and unfortunately no evidence is given to back up this bold statement. In fact it is printed above a graph showing pretty much the opposite.

Lastly we get to the correlation with QOF points. There certainly seems to be a weak correlation between QOF score and asthma admissions in 2004/5 - the first year of QOF. This may be something of an underestimate, as they use QOF score rather than total QOF achievement. Why should that make a difference? QOF scores are capped at 70%: any extra achievement above this is not counted. In 2004/5 over a third of practices got every single point in the asthma section of QOF. The extra achievement of these practices has been thrown away in the analysis.
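A toy illustration of the capping point: this simplifies QOF scoring considerably (real indicators also have lower thresholds), but it shows how achievement above the cap simply disappears from the analysis.

```python
# Simplified model of the 70% cap on QOF achievement scoring.
def capped_score(achievement, cap=0.70):
    """Achievement above the cap earns no extra credit (simplified)."""
    return min(achievement, cap)

practice_a = 0.72   # just over the cap
practice_b = 1.00   # every eligible asthma patient covered

print(capped_score(practice_a))  # 0.7
print(capped_score(practice_b))  # 0.7 - indistinguishable from practice A
```

Correlate admissions against the capped score and over a third of practices collapse onto a single value, which can only weaken any relationship you find.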

In any event all that we can say is there is a correlation. Cause and effect is impossible to suggest without at least some data from previous years.

I would love to see some data showing that QOF is making a difference. I was disappointed that this report shows little other than a large variation in asthma admissions around the country. It does not answer the question of why half as well as a proper peer reviewed study does (one which, incidentally, makes no mention of QOF!).

What would you change?

I have discovered - perhaps a little later than I should have - that NHS Employers and the GPC are asking for comments on what should go into the QOF from April 2008. They last did this two years ago and got over five hundred responses - largely, it must be said, from single-issue pressure groups. Interestingly, this time they are also asking for views on sections that are already in the framework.

It is probably useful for respondents to look at the results of the last review. Many areas were considered and rejected then. At that review there was also the luxury of being able to add things without taking anything out; this is unlikely to be repeated. There are already areas considered uneconomic to pursue (ethnicity recording, for example) and to dilute incentives further would be counterproductive. It is also worth noting that there is considerable grass-roots annoyance at "ivory tower" academic and research instruments making their way into QOF without a strongly relevant evidence base. GPs will await the result of the review with some anxiety.

What would David Cameron do?

Not the first question I would ask myself in any difficult situation, but it certainly has some relevance to QOF. Last week the Conservatives announced their new health policies. The one that interests us is "Outcomes not Targets".

Now it is hard to argue with that slogan. Outcomes are what matters. When it comes to drugs we are most interested in whether lives are saved or improved rather than exactly how much the blood pressure is changed or whatever. There is also no denying that most health targets are based more on process than outcome. In the QOF we have lots of targets about how often people have their blood pressure taken or an inhaler check and only one or two about any outcomes such as cholesterol or blood pressure readings.

So what do the Conservatives plan to do about it? Well, all that is on their website is a slide show as a PDF - so there is not a lot of detail - but they do talk about putting the EQ-5D into the framework. If you are not familiar with the tool then follow that link for an example of the form. It is a pretty short five-question survey and at least appears to be easy to administer.

What is not clear is what happens next. From all the literature I can find, this is very much a research tool. There is not a lot of evidence for its use in general practice, and what there is measures the patient rather than the practice. It would also be very difficult to separate out the effects of primary care, secondary care and social services.

It is probably a bit much to expect the fine details of implementation from a mid term opposition policy review. The answers to some questions would be nice though.

How would this be translated into a points score and then (the final outcome) into cash? The score varies quite dramatically with age, sex and socioeconomic group. Paying for high health status would simply favour practices with younger and more affluent patients. Paying for improvements in health status would reward practices starting from a low baseline rather than those who have worked hard in the past. Practices could also be 'rewarded' for a new housing estate or 'punished' for a factory closing. I will write and ask.

The Billion Pound Database

There is a lot of discussion in the media about the increased income of GPs, with even Mrs Hewitt wading into the debate. Once you get past the headline figures, and the fact that a good part of the increase goes straight back to HMG as increased pension contributions, you are left with the QOF income.

If there was one condition that came with the investment in the NHS during Labour's second term, it was that there should be verifiable results. These were always difficult to find for general practice, and so the QOF was formed. Payment for the QOF follows sending in the statistics. It is these numbers which receive payment, and most practices have put in quite a lot of work to get them right. I have sat in innumerable meetings discussing how things should be coded, spent many hours going back through records to make them visible to QMAS, performed over a dozen practice visits and been on the receiving end of a couple. All of this takes time and I, in common with most of the UK population, don't work for nothing. Most of the work requires a reasonable amount of clinical knowledge, so it can be difficult to delegate.

Like many GPs I am self-employed: I don't have a salary for my work, I have profits. The government has decided to pay for statistics, and many GPs have spent hours polishing those statistics to a high shine for inspection and assessment. It is no surprise that profits have risen.

The bigger question of the effect on clinical care is more difficult to assess as, almost by definition, there was not much data before QOF, and what data there is is greatly affected by the lack of incentive to code things. The only direct comparison I have found is an audit of diabetes, which shows some improvement, but the effect is not terribly dramatic. In the end what the government really wanted was the statistics, and they got them. So can you, either by browsing this site or by downloading the billion pound database.