Research, more research, and real life?

Graeme Wilkes, Medical Director of Connect Health, discusses the clinical outcomes data accumulated by Connect Health with the aim of improving the services for the people with the problems.

25 June 2019

OK, so randomised controlled trials are the most robust way of assessing the effectiveness of a health intervention. This is beaten into us all through our training and careers. Randomisation prevents the skewing or deliberate manipulation of results, and a control group helps reduce the likelihood that any benefits or risks identified during the trial were due to factors outside the experimental treatment.

The BMJ stated that without such “academic rigour” we cannot rule out the possibility that any association between outcome and intervention was caused by a third factor linked to both. Given these constraints, it remains an ideal that all new healthcare interventions should be evaluated through randomised controlled trials (1).

The problem is that research capacity is low, due to ever-growing funding constraints, and so many questions remain unanswered. NICE policy on the consideration of evidence states in guidance: “Randomised trials, non-randomised trials, and observational studies (including diagnostic or prognostic studies) were included in the evidence reviews as appropriate” (2), which in practice meant that a great many research projects were judged to fall short of the required academic standard. So there is a whole lot of research effort and time that is considered pointless. There is a lot of poor research out there, and recurring concern about how apparently valid but actually flawed research affects medical decisions and care (3). How much did that cost the taxpayer?

Ultimately, in the Low Back Pain NICE guideline, 564 research papers were examined, and many were found to have potentially significant limitations. To a clinician and non-academic, the academic world seems only to scratch the surface, so relying on it alone may be misguided. We don’t know what we don’t know, but we need to make decisions.

So, what is the alternative?

Well, we could work in the normal real world, use evidence and common sense based on our learned first principles, and ask our patients:

  • how do they feel?
  • have they improved?
  • has it cleared up?
  • are they happier?
  • are they back at work?

…… and a myriad of other questions that the human brain can assimilate to decide whether that person is in a better place. I think this is what some of my regular critics call “n=1”, but no doubt I have that wrong. It is certainly not randomised, but neither is it random.

My own treatments are undoubtedly fashioned by research, but a lot has been instinctive, based on first principles and perhaps many flawed ideas. I am sure the academics are heating up… Research showed me one thing 20 years ago, but it has since been refuted by new studies; I am not sure whether it will change again, or which was/is correct!

So, what is the point of this drivelling tale?

I am Medical Director of Connect Health, a growing and responsible provider of MSK care to the NHS over the last 25 years. In the past 2-3 years we have developed a data warehouse that is accumulating clinical outcome data at an alarming rate, given our 20+ services in England. We want to get our data out and contribute to improving services for the people with the problems. My clinical colleagues around the country are diligently applying our 10 out of 10 Infographic, coding their practice and their patient outcomes, knowing that their personal outcomes are shared only with themselves and their mentor. These outcomes are not considered in their performance assessment, only in their personal development.

We now have, at last count, 226,357 datasets of patient self-rated EQ5D change (a health and quality of life measure) in transition through our community MSK services. Our data shows a mean change of +0.20, double the usual marker of effectiveness, the Minimal Clinically Important Difference (MCID), at +0.10. A 0.20 increase is pretty good. 70% of people attending our services exceed the MCID. A further selection of data that may be of interest (a sketch of the arithmetic follows the list):

  • I can report that the mean EQ5D change across individual services in these 226,357 patients varies from +0.171 (12,175 patients) to +0.235 (4,889 patients); all other services lie in between
  • The smallest service (~50 discharges per month) has serial quarterly changes of 0.232, 0.248, 0.139, 0.200, 0.236, 0.263, 0.153, 0.226 – significant variation
  • The largest service (~3,000 discharges per month) has serial quarterly changes of 0.214, 0.222, 0.228, 0.215, 0.208, 0.222, 0.214 – little variation
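
For anyone curious about the arithmetic, here is a minimal sketch in Python of how the two headline figures are derived: the mean EQ5D change and the proportion of patients exceeding the MCID. The per-patient numbers and names below are invented for illustration; this is not Connect Health’s actual reporting pipeline.

```python
# Minimal sketch (invented toy data, not Connect Health's pipeline) of the
# two headline statistics: mean EQ5D change and % of patients above the MCID.
import statistics

MCID = 0.10  # Minimal Clinically Important Difference for EQ5D, as quoted above

def summarise(changes: list[float]) -> tuple[float, float]:
    """Return (mean EQ5D change, fraction of patients exceeding the MCID)."""
    mean_change = statistics.fmean(changes)
    share_over_mcid = sum(c > MCID for c in changes) / len(changes)
    return mean_change, share_over_mcid

# Toy data: per-patient change = EQ5D at discharge minus EQ5D at referral
changes = [0.35, 0.12, -0.05, 0.28, 0.20, 0.08, 0.31]
mean_change, share = summarise(changes)
print(f"mean EQ5D change: {mean_change:+.2f}; exceeding MCID: {share:.0%}")
```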

One thing I did remember from those boring research lectures was that you need large numbers for robust data, so I think they may have been right; our data follows that pattern. I believe 226,357 is a large number, as are 350 clinical staff, who may have their individual issues but are not all deliberately corrupting the data.
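
That contrast between the smallest and largest services is exactly what the textbooks predict: the standard error of a mean shrinks with the square root of the sample size. The simulation below is a hedged illustration using invented numbers (a true mean change of +0.20 and a standard deviation of 0.30 are assumed, not taken from our warehouse): quarterly means wobble at roughly 150 discharges a quarter and settle down at roughly 9,000.

```python
# Hedged simulation (invented numbers, not Connect Health data) of why quarterly
# mean EQ5D changes fluctuate more in a small service than in a large one.
import random
import statistics

random.seed(1)
TRUE_MEAN, SD = 0.20, 0.30  # assumed population mean change and spread

def quarterly_means(discharges_per_quarter: int, quarters: int = 8) -> list[float]:
    """Simulate the mean EQ5D change observed in each of several quarters."""
    return [
        statistics.fmean(random.gauss(TRUE_MEAN, SD) for _ in range(discharges_per_quarter))
        for _ in range(quarters)
    ]

# ~150/quarter for the smallest service (50/month), ~9,000 for the largest (3,000/month)
for n in (150, 9000):
    print(f"n per quarter = {n:4d}:", ", ".join(f"{m:.3f}" for m in quarterly_means(n)))
```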

But the academics tell me all this may have happened by chance or through “natural history”, i.e. we may as well have taken everyone on a trip to the seaside or the cinema and they might have improved just as much. Maybe, but who will fund that RCT? Cinema or seaside?

A few other observations from our Data Warehouse (can’t call them facts…) in the last 12 months, based largely on exercise-based management, include:

  • Knee OA hits +0.22 (888 patients), but Hip OA a mere +0.16 (327 patients)
  • Low back pain hits +0.22 (3,326 patients)
  • Tendons, largely treated with loading, rank:
      – Plantar fascia +0.22 (499 patients)
      – Achilles tendinopathy +0.20 (327 patients)
      – Greater trochanteric +0.20 (494 patients)
      – Rotator cuff tendon pain +0.20 (1,844 patients)
      – Extensor tendon elbow +0.18 (217 patients)

There are flaws. There are missing patients and some diagnoses may need challenging.

However, we have great collection rates from our clinicians (70%), largely generated by consistent feedback of data and growling line managers (not really); the only thing we expect of our clinical staff is to collect the data. We all share and celebrate our combined data, NOT criticise any person individually; we all see different caseloads in differing services.

So, this is mass, prospective (my view) or retrospective (academic view), non-blinded, non-randomised data. It is facilitated by the efforts of our Operational and Business Intelligence colleagues. It is reported by the patients themselves, and it is collected in 70% of episodes of care involving more than one contact. I believe it tells a story. The journals are largely not interested; they are happier to await RCTs, large or small.

Perhaps the cynic in me wonders, if our data really means something, why would we need so many academics? If our data is easy to understand, but theirs requires years of study and a degree of complexity and mystery, who wants to use academics? In reality, of course, we need RCTs, but mass data of the sort we produce has to be relevant… doesn’t it?

Non-academics seem to understand and be impressed by our data… perhaps that’s the problem.

1. Understanding controlled trials: Why are randomised controlled trials important? BMJ.

2. Low back pain and sciatica in over 16s: assessment and management. NICE.

3. Why so much science research is flawed – and what to do about it. New Scientist.