Red News Readers,
The Safety Study that was done in 1995 has not been repeated. Its results caused controversy at the time and have since. I remember evidence being given at the Campbelltown Camden Enquiry about this study, and efforts were made then to discredit the study and its authors. Interestingly, Crikey, the link to the study doesn’t open.
Certainly politicians are a group in our community who are not interested in seeing such a study repeated, but it should be if we are to know whether the system has improved or not since 1995. The safety processes that were put in place in NSW after 2003 should have made the system safer, and NSW Health should show confidence in its processes. But with health being the political football it is these days, I can’t see it happening any time soon!
Jenny Haines
Millions of dollars and 14 years later: is Australian health care any safer than before?
Australian Healthcare and Hospitals Association Vice-President Dr Patrick Bolton writes in Crikey 19.05.09:
Almost 14 years after the Quality in Australian Healthcare Study showed many patients are harmed as a result of their health care, we still do not know whether Australian hospitals are safer today than when the study was conducted.
Nor do we know if the millions invested in improving patient safety since its November 1995 publication have been money well spent.
The study was a landmark in Australian health service evaluation. It comprised a review of the medical records of over 14 000 admissions to 28 hospitals in New South Wales and South Australia and revealed that 16.6% of these admissions were associated with an "adverse event", which resulted in disability or a longer hospital stay for the patient and was caused by health care management; 51% of the adverse events were considered preventable.
In 77.1% of cases the disability had resolved within 12 months, but in 13.7% the disability was permanent and in 4.9% the patient died.
This study was the catalyst for investment in quality improvement in Australian healthcare.
Unfortunately, the benefit from this investment is unmeasured and questionable.
Nor do we know much about the safety of Australian primary care because no comparable study has been published in that setting.
The overall death rate in Australian public hospitals in 2006-7 was 1.3%. Assuming that the 1995 study’s figures remain applicable, nearly two-thirds (63%) of deaths in Australian hospitals are associated with an adverse event. Half of these -- one third of all hospital deaths -- are preventable. Australia is not alone: The results of the 1995 study are similar to those of other developed nations.
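The arithmetic behind these estimates can be checked directly. The sketch below combines the 1995 study's figures (16.6% of admissions with an adverse event, 4.9% of those ending in death, 51% judged preventable) with the 2006-07 overall death rate of 1.3%; it rests on the article's stated assumption that the 1995 rates still apply.

```python
# Back-of-envelope check of the article's figures, assuming the 1995
# study's rates still apply to current admissions (a strong assumption).

adverse_event_rate = 0.166   # admissions with an adverse event (1995 study)
death_given_ae     = 0.049   # share of adverse events in which the patient died
preventable_share  = 0.51    # share of adverse events judged preventable
overall_death_rate = 0.013   # overall public hospital death rate, 2006-07

# Deaths associated with an adverse event, per admission
ae_death_rate = adverse_event_rate * death_given_ae

# Share of all hospital deaths associated with an adverse event
share_of_deaths = ae_death_rate / overall_death_rate

# Roughly half of those were preventable
preventable_deaths_share = share_of_deaths * preventable_share

print(f"{share_of_deaths:.0%} of hospital deaths associated with an adverse event")
print(f"{preventable_deaths_share:.0%} of all hospital deaths preventable")
```

Multiplying and dividing these rates reproduces the article's figures: roughly 63% of deaths associated with an adverse event, and about a third of all hospital deaths preventable.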
All Australian jurisdictions have invested in quality improvement programs to improve patient safety. Most have introduced self-reporting of safety incidents by hospital staff. This may be useful in identifying opportunities for improvement, but is a poor performance measure. It is unclear whether an increase in the reported rate of error reflects an increase in errors or reporting.
More empirical measures, reflecting major causes of morbidity and mortality in areas known to be at high risk of error, include drug errors, healthcare-acquired infection, mortality after surgery and in-hospital mortality. Successful use of these measures has been demonstrated in limited settings in Australia and overseas. University hospitals in the US have used this approach to reduce in-hospital mortality.
Quality improvement focuses on improving processes, potentially at the cost of improved outcomes. The bureaucratised nature of the Australian healthcare system can combine with the focus on process to produce a ritualised response to adverse events in healthcare. This is illustrated by the root cause analysis process.
Root cause analysis is used internationally and recommended by all state health departments as a tool to investigate and learn from clinical incidents. Most Australian states have in place legislation to facilitate or mandate root cause analysis, at least in public hospitals.
Recent international reviews have questioned the appropriateness and effectiveness of root cause analysis as a tool for improving patient safety, and noted the costs that it imposes.
Root cause analysis involves retrospective analysis of a critical incident by an independent team clinically similar to, and familiar with the kind of work done by, the team involved in the incident. The team identify the causes of the incident and make recommendations about how recurrences might be prevented.
It is not uncommon for recommendations to include the development of tools such as a new policy, form, or further education. Tools such as these can only prevent an error if the risk of error is recognised in advance so that they can be applied. If the risk is not recognised in a timely manner, the tools are not useful; if it is recognised in advance, the tools are often of marginal benefit. The requirement to employ these tools increases the overhead on the health system, and because the recommendations of the root cause analysis team are not subject to any cost-benefit analysis, additional demands are made without additional resources or any analysis of benefit.
Effectiveness data are urgently required to assess the directions chosen in quality improvement in Australian healthcare.
Healthcare workers sometimes suggest that the 1995 study has not been repeated because the results would be unpalatable to politicians. If this is the case it needs to be recognised that here, as elsewhere in quality improvement, blame is inappropriate because it obscures the interests of patients.
Careful thought is required to ensure the efficient collection of useful data about the safety of the Australian healthcare system, but there is no question that ongoing collection of these data is required.
Australia must build on the foundation laid by the Quality in Australian Healthcare Study so that we have robust contemporary national data about the quality and safety of our healthcare system.