For some time I have been researching, or should I rather say trying to research, the concept of risk associated with infection transmission through inadequately decontaminated surgical instruments. So far I have found very limited resources on the subject, and after a short conversation with Craig Williams at the CSC Autumn Study Day 2014 it seems that research and available data on the subject are indeed scarce. For the time being we are operating in a fuzzy domain of "low" and "high" risk, which makes decision making based on that risk difficult.
Excluding the worst-case scenario of patients dying because of failures in decontamination, it is possible to estimate the cost of the risk by looking at the cost of the treatment used to counter the particular infection, in other words quantifying the cost of failure. It is certainly possible to put a cost on an extra day in hospital, the necessary medication, aftercare, or whatever else is needed to help the patient recover from such an infection.
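As a minimal sketch of what "quantifying the cost of failure" could look like, the expected cost per procedure is simply the probability of an infection multiplied by the cost of treating one case. All figures and the breakdown of cost components below are hypothetical placeholders, not published data:

```python
# Minimal sketch: expected cost of a decontamination failure per procedure.
# All numbers below are invented for illustration, not real clinical data.

def expected_failure_cost(p_infection, extra_bed_days, cost_per_bed_day,
                          medication_cost, aftercare_cost):
    """Expected cost per procedure from one infection pathway."""
    cost_per_case = (extra_bed_days * cost_per_bed_day
                     + medication_cost + aftercare_cost)
    return p_infection * cost_per_case

# e.g. a 1-in-100,000 infection risk, 5 extra bed days at 400 per day,
# 600 of medication and 1,000 of aftercare:
cost = expected_failure_cost(1e-5, 5, 400.0, 600.0, 1000.0)
print(f"expected cost per procedure: {cost:.4f}")
```

Even with rough inputs, a figure like this lets the cost of reducing the risk (new equipment, extra staff) be compared against the cost of carrying it.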
Such data could, first of all, help justify further investment in equipment or staff that could reduce the risk. Detailed analysis of particular risks could then point to areas where risk could be reduced at a given cost. Quantified risks would also help in choosing between technologies when budgets are limited.
In the absence of scientific evidence it is easy to ask questions like whether dentists should follow the same decontamination procedures as other healthcare units when it comes to reusable instruments. I personally think they should, but what happens if they do not, and how big is the risk?
From the decontamination process optimisation point of view we can already start analysing data, provided we use quantitative methods to evaluate process performance. In other words, we need to know how much contamination was removed and inactivated by the overall process. The first question is whether these data could be set against variables describing, for example, patients' recovery times in the hospital that a particular SSD services. Intuitively, there should be some form of correlation.
At first glance such a study seems a monumental task because of the number of variables. On the other hand, in the age of Big Data and number crunching there may be a way to design the experiment so that data gathering is greatly simplified. Information could be collected alongside other research activities, and already existing data could be processed to extract what is needed for this particular study.
I do not know the answer to this question today, but perhaps collectively we could come up with a solution that takes us one step closer to a more precise estimate of the cost of risk. After all, insurance companies and banks do this every day, so why couldn't we?