Engineering Health Information Systems

This site is for our upcoming book

Archive for the ‘Safety’ Category

The difficulty of regulating “Medical Apps” for mobile devices

without comments

There is a fast-emerging market for mobile apps (software) downloaded onto mobile devices such as smartphones, iPods and tablets. These apps range from simple information aids to complex radiology imaging software with decision support. They also target a variety of users, including GPs, specialists, patients and caregivers. Clearly, there is concern about the safety (and security) of these apps. The U.S. FDA has made it clear that even software can be considered a medical device if it is used in clinical practice and risks are associated with it. However, it is unclear how to properly control software in general – and mobile apps specifically. The FDA has now held a hearing on this subject. An interesting summary and protocol of that hearing can be found here. It provides a good account of the difficulties associated with any attempt to regulate the “health app market”.

One point that I found particularly interesting is the aspect of daisy chaining of apps. This problem occurs when multiple interoperable apps exchange data. The usual approach of regulators such as the US FDA and Health Canada is to put tight controls only on those software apps that play an important role in diagnosis or treatment. However, if such an app receives data from another, less controlled app, poor data quality (e.g., errors in that data caused by defects in the other app) may contribute to safety hazards. This “daisy chaining” problem may in fact indicate that we need a new way of asserting controls, shifting the focus from “software devices” to “data items”.
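A minimal sketch may make the daisy-chaining hazard concrete. The apps, names and values below are entirely hypothetical: an upstream (lightly controlled) app exports a weight reading without declaring its unit, and a downstream (tightly controlled) dosing app assumes kilograms. Each app is arguably correct in isolation; the hazard only emerges in the chain, at the level of the data item.

```python
def fitness_app_export(weight):
    # Upstream, lightly controlled app: records weight in pounds,
    # but the exported data item carries no unit declaration --
    # the defect that the downstream app cannot see.
    return {"patient_id": "demo-001", "weight": weight}

def dosing_app(record, mg_per_kg=15.0):
    # Downstream, tightly controlled app: assumes the weight is in
    # kilograms. Its own logic is sound; the hazard enters with the data.
    return record["weight"] * mg_per_kg

record = fitness_app_export(154)   # 154 lb, i.e. roughly 70 kg
dose = dosing_app(record)          # silently treated as 154 kg
print(dose)                        # 2310.0 mg -- more than twice the intended ~1050 mg
```

Regulating the dosing app alone would not catch this: the failure lives in the unlabelled data item passed between apps, which is exactly why a shift of controls from “software devices” to “data items” (e.g., mandatory units and provenance on exchanged values) is worth considering.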

Written by Jens

September 14th, 2011 at 4:27 pm

How to Define Safety?

with 2 comments

Following up on Jens’ post on safety mandates in Canada, I agree that this is an important aspect that needs to be considered.

But how should we consider safety?

At a high level, “do no harm” resonates with many providers. We would need to be more precise in defining the domains of safety so that we can measure what is safe and unsafe. The US Meaningful Use requirements do mention safety, but they appear to assume that safety is improved simply through the use of systems (e.g., CPOE).

But with health information systems, how do we evaluate safety and harm? There are many aspects, and we are only just beginning to turn a critical eye to some of the unintended consequences. Implementing systems does not, de facto, equate to improved safety. We are seeing new kinds of errors, and healthcare systems are being changed by the introduction of technology. Even as academia begins to explore this – realizing that each system, and even each installation, is likely unique in its context – decision makers are not aware of these important distinctions. For many, adoption equates to an improvement in safety. So how do we go about defining aspects of safety in a manner that is measurable and digestible?

We can define it from an outcome (or potential outcome) perspective and measure quantitatively how many errors occur. Areas such as adverse drug events, unnecessary surgeries, mortality and excess hospital stays can be used. These are important. How to attribute them to the information system is another question, as these are interventions into a complex space.

Looking upstream a bit, we can examine system function and design. Usability testing and analysis are helpful here. While such errors can be more readily attributed to the information system, it is harder to predict the actual impact of design errors on patients. It is also harder for decision makers to wrap their heads around some usability results, as they can be very detailed without being concrete in their outcomes.

Although I have moved a bit off topic, I think safety is something that needs to be considered, but how can we get safety design on the table?

Written by priceless

July 3rd, 2010 at 7:15 am

Posted in Quality, Safety

Safety – a missing mandate in Canada’s national EHR project?

with 2 comments

Canada Health Infoway (CHI) has been funded by the Canadian government with $2.1 billion since 2001 to foster the development of a pan-Canadian EHR infrastructure. CHI’s mandate has been to “collaborate[s] with Canada’s provinces and territories, health care providers and technology solution companies to implement private and secure health information systems.” (See CHI’s latest report to the public.) The latest report contains a risk management framework with an analysis of different types of risks, including financial risks (funding), adoption risks, and security and privacy risks. It is noteworthy that safety risks do not appear at all. In fact, the safety aspect is not addressed anywhere in the report, apart from general conjectures that eHealth technologies will improve patient safety. As mentioned in my previous blog post, there are significant indications that this is not necessarily so. The absence of safety as a priority in the pan-Canadian summary care record architecture standards may very well become a major problem down the road (see the recent commentary by Ross Anderson in the BMJ, who believes that summary care records will do more harm than good). The failure to address safety as a primary objective in pan-Canadian EHR standards also stands in stark contrast to the objectives of regulators (Health Canada, FDA), who are primarily concerned with the safety of EHR software. Is it time to redefine CHI’s mandate to include safety as another quality objective, next to privacy and security?

Written by Jens

July 2nd, 2010 at 10:36 am

Posted in Quality, Safety