Happy new year everyone! It’s lovely blue skies here in Boston and the sun is shining.
The issue is that some people who are into predictive modelling say there’s a storm coming.
They call them ‘nor’easters’, which basically means a good lashing off Greenland. OK, so it’s not the next Superstorm Sandy, but −17 °C, 45 mph gusts and 14″ of snow should bring out the rosy cheeks if nothing else. Last year there was a three-day power cut and leaking through the windows in this place, so it’s essential to get prepared:
- Snow gear
- 10 gallons water
- Candles and torches
- 3 days of tinned and dried food
- Phone numbers of hotels
- Spare clothes
- Financial stuff together
- Car ready
- Getaway plan
Walmart have been here before and don’t leave a business opportunity to chance. Once they get wind of a significant storm (no pun intended), the central nerve center triggers an automatic series of events ensuring that stocks of water, torches and, yes, beer are at optimum levels within hours.
Baden-Powell was into this kind of stuff. ‘Be prepared,’ he told the young lads on Brownsea Island, whose penknives never left their pockets. Discerning the weather from pattern recognition is nothing new. Jesus berated the religious leaders for being able to discern the weather patterns but not the signs of the times. Meteorologists of course take things a bit further, using thousands of data points across whole continents in complex computer modelling. With enough data you can confidently predict a number of variables.
What’s this got to do with anything in healthcare you may ask?
Well, in healthcare we generally use pattern recognition to predict, inform and ultimately intervene. This improves with experience, but the risk of inexperience, the fact that my experience differs from yours, and our subjective interpretations can lead to a wide variety of care and potential variance in quality. In addition, it can be difficult to weigh multiple factors within a short time frame whilst juggling multiple other issues. Being prepared is all about knowing what is coming your way – on a system, team and individual patient level. Equipping healthcare professionals with the penknife/tools should give them the ability to move cases from the apparently unexpected to the expected, and with that, timely intervention.
Insurers, meteorologists, finance markets, even travel companies are all using predictive modelling, so what’s the issue with using the mathematics in medicine? Some people find that attaching numbers to patients can depersonalize medicine. Patients are people, not a collection of data points. It’s challenging for meteorologists to predict down to the very local level too, which is why ‘local knowledge’ of the patient will always be necessary.
It’s taken a while for predictive modelling to get its foot through the door but believe me it’s going to force it wide open as big data meets innovative patient specific analytics in real time. Actuaries are decent proof that there’s money to be made in risk and prediction and that’s because getting it right (or wrong) has such huge consequences.
The Framingham Heart Study kicked us off in 1948. Epidemiologists at Boston University used conventional statistics to give us the Framingham score, and we’ve set relative levels of risk for interventions in the name of prevention, and essentially preparedness, ever since (2). Risk prediction doesn’t need to be complex though. On a simple level, a Modified Early Warning Score quite accurately predicts the deterioration of a patient and the need for critical care (3). The Wells score for DVT, CHA2DS2-VASc for AF, LACE for readmission, and so the list continues…
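To make the idea concrete, here’s a minimal sketch of a Modified Early Warning Score calculator. The scoring bands below approximate the commonly published MEWS thresholds (with consciousness on the AVPU scale); they are illustrative only, and any real implementation would follow the locally validated protocol.

```python
def mews(sbp, hr, rr, temp, avpu):
    """Modified Early Warning Score sketch.

    Bands approximate commonly published MEWS thresholds;
    illustrative only, not for clinical use.
    """
    score = 0

    # Systolic blood pressure (mmHg)
    if sbp <= 70:
        score += 3
    elif sbp <= 80:
        score += 2
    elif sbp <= 100:
        score += 1
    elif sbp >= 200:
        score += 2

    # Heart rate (beats per minute)
    if hr < 40:
        score += 2
    elif hr <= 50:
        score += 1
    elif hr <= 100:
        pass          # normal range
    elif hr <= 110:
        score += 1
    elif hr <= 129:
        score += 2
    else:
        score += 3

    # Respiratory rate (breaths per minute)
    if rr < 9:
        score += 2
    elif rr <= 14:
        pass          # normal range
    elif rr <= 20:
        score += 1
    elif rr <= 29:
        score += 2
    else:
        score += 3

    # Temperature (degrees C)
    if temp < 35.0 or temp >= 38.5:
        score += 2

    # Consciousness: Alert / responds to Voice / to Pain / Unresponsive
    score += {"A": 0, "V": 1, "P": 2, "U": 3}[avpu]
    return score

mews(120, 80, 12, 37.0, "A")   # 0 — a well patient
mews(85, 115, 25, 38.6, "V")   # 8 — would trigger escalation on most protocols
```

The point is how little machinery is needed: a handful of thresholds on routine observations yields a usable predictor of deterioration.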
Prevention really is better than cure. However, getting ahead of the curve can prove difficult. Key factors for successful predictive modelling include:
- Data (and ideally lots of it)
- The ability to interrogate that data in a confidential manner
- A mathematical model that produces the goods in a timely manner
- An actual intervention that makes a difference.
One missing link in the chain and it’s not really worthwhile.
Specialist models are understandably more specific than generalist models. In addition, the higher risk a patient is, the greater the positive predictive value (PPV) of the tool. For example, in the Welsh model for unplanned admissions, the PPV for a patient with a score of 80 is about 80%; that drops to 45% for a score of less than 50/100. Of course, such patients may be obvious to the clinician, but what if a predictive model identifies the 20% the clinician overlooks, or simply prompts in amongst the noise and nonsense? Preventing a deterioration is going to be pretty crucial for those individuals.
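The prevalence effect behind those numbers is just Bayes’ theorem: the same tool applied to a higher-risk group yields a higher PPV. A quick sketch — the sensitivity and specificity figures here are made up for illustration, not taken from the Welsh model:

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value via Bayes' theorem:
    P(event | positive result) = TP / (TP + FP)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# The identical tool (80% sensitive, 90% specific) in two risk groups:
ppv(0.8, 0.9, 0.50)   # ~0.89 in a high-risk group
ppv(0.8, 0.9, 0.05)   # ~0.30 in a low-risk group
```

Same tool, same thresholds — the only thing that changed is how common the event is in the group being scored, which is why high-scoring patients see much better PPVs.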
About 75% of stalls at the IHI conference were health IT related, a significant number advertising risk modelling. However, there’s a way to go before we get up to the level of the meteorologists in using maths to guide our practice – guide being the operative word. As clinicians, however, we should be prepared to engage with the data geeks developing models that buy us precious time to get ahead of the curve.
I’m pretty confident those weather geeks have got it right, it’s getting chilly out there and I can see a pattern forming.
References & Resources:
1. Choosing a predictive risk model: a guide for commissioners in England, Nuffield Trust – an easy read and a great summary of what makes a good tool from a generic perspective.
2. Wilson, P.W., D’Agostino, R.B., Levy, D., Belanger, A.M., Silbershatz, H., Kannel, W.B. (12 May 1998). “Prediction of coronary heart disease using risk factor categories.” Circulation 97(18): 1837–1847.
3. Subbe, C.P., Kruger, M., Rutherford, P., Gemmel, L. (2001). “Validation of a modified Early Warning Score in medical admissions.” QJM 94: 521–526.
4. Welsh Model