
Thursday, August 30, 2012

A Tacit Admission That National Health IT is a Gargantuan Experiment

In my post yesterday, "The Scientific Justification for Meaningful Use, Stage 2," I wrote:

There's no truly robust evidence of generalizable benefit and no randomized trials; there's significant evidence to the contrary; there's a risk to safety from this disruptive technology in its present state (of unknown magnitude, see quotes from the 2012 IOM study here) that MU and "certification" do not address; and there's a plethora of hair-raising defect reports from the only seller that reports such things. Yet CMS justifies the program [starting at p. 18 in the Final Rule for Meaningful Use Stage 2 at this link - ed.] with the line:


"Evidence [on benefits] is limited ... Nonetheless, we believe there are substantial benefits that can be obtained by eligible hospitals and EPs ... There is evidence to support the cost-saving benefits anticipated from wider adoption of EHRs."

I am deeply impressed by the level of rigorous science here.  We are truly in a golden age of science.  [That is obviously satirical - ed.]

The Final Rule for MU Stage 2, through its own admissions of limited evidence, is in fact a tacit admission that the whole national health IT enterprise is a huge experiment (involving human subjects, obviously).  It is likely the most forthright admission we will get from this government on the issue.

With neither explicit patient informed consent nor a formal regulatory process to validate safety, but merely based on a "we believe" justification from the government, hospitals and practices are leaving themselves wide open to liability in the situation of patient injury or death caused by, or promoted by, this technology.

(Parenthetically, I note that I've already seen a claim in a legal brief that "certification" implies safety and legal indemnification, and that the federal HITECH act (which, as in this report, merely provides statutory authority for the incentive program) pre-empts common-law, i.e., state, litigation over health IT.  The judge dismissed the claims.)

-- SS

Aug. 30, 2012 addendum:

A commenter pointed out that experiments on minors without consent might constitute an even more egregious action, subject to even more stringent laws (and, I would add, perhaps penalties) than experiments on adults.  I cannot confirm that, but it is an interesting observation.  If you are an attorney, please comment, anonymously or otherwise.

-- SS

Tuesday, June 5, 2012

Cart Before the Horse, Part 3: AHRQ's "Health IT Hazard Manager"

In a July 2010 post, "Meaningful Use Final Rule: Have the Administration and ONC Put the Cart Before the Horse on Health IT?", and an Oct. 2010 post, "Cart before the horse, again: IOM to study HIT patient safety for ONC; should HITECH be repealed?", I wrote about the postmodern "ready, fire, aim" approach to health IT.

In the first post, I wrote:

... These "usability" problems require long term solutions. There are no quick fix, plug and play solutions. Years of research are needed, and years of system migrations as well for existing installations.

Yet we now have an HHS Final Rule on "meaningful use" regarding experimental, unregulated medical devices the industry itself admits have major usability problems, along with a growing body of literature on the risks entailed.
For crying out loud, talk about putting the cart before the horse...

Something's very wrong here...

However, this situation is anything but humorous.

How much more "cart before the horse" can government get?

In the second post, I wrote:

... So, in the midst of a National Program for Health IT in the United States (NPfIT in the U.S.), with tens of billions of dollars earmarked for health IT already (money we don't really have, but it can be printed quickly, or borrowed from China) the IOM is going to study health IT safety, prevention of health IT-related errors, etc. ... only now?

Here we go yet again.

The problem with the announcement below from AHRQ (the Agency for Healthcare Research and Quality, a division of HHS) of a webinar about a new tool for identifying, categorizing, and resolving health IT hazards is, as I have written before, that it puts the "cart before the horse" and throws medical ethics to the wind.

If we've only just developed a tool "for identifying, categorizing, and resolving health IT hazards", hazards whose magnitude others such as the IOM admit is unknown, to our detriment (e.g., Health IT and Patient Safety: Building Safer Systems for Better Care, pg. S-2), then it follows that health IT is an experimental technology.

If it is an experimental technology, AHRQ and others in HHS should probably be raising the issue of a slowdown or moratorium on widespread rollout under HITECH until risk management and remediation are better understood.  At the very least they should be calling for patients to be informed that a device that will largely regulate their care is experimental, and that a competency "gap" exists among healthcare practitioners within the "health IT environment" (meaning patients are at risk), with informed consent and opt-out provisions offered accordingly.  The principals should not just be announcing a webinar:

Sent: Tuesday, June 05, 2012 12:23 PM
To: OHITQUSERS@LIST.NIH.GOV
Subject: Register Now! AHRQ Health IT Webinar "Purpose and Demonstration of the Health IT Hazard Manager and Next Steps" June 11, 2:30 PM ET

Agency for Healthcare Research and Quality

Purpose and Demonstration of the Health IT Hazard Manager and Next Steps

June 11, 2012 — 2:30-4 p.m., EST

The Agency for Healthcare Research and Quality (AHRQ) has identified a gap in a health care/public health practitioner’s competency within the health IT environment. This webinar is designed to increase practitioners’ competencies in several areas: improving health care decision making; supporting patient-centered care; and enhancing the quality and safety of medication management by improving the ability to identify, categorize, and resolve health IT hazards.

The Webinar will explore the Health IT Hazard Manager—a tool for identifying, categorizing, and resolving health IT hazards. When implemented, the tool allows health care organizations and software vendors alike to learn about potential hazards and work to resolve them, including the use of data to communicate potential and actual adverse effects. The session will discuss how the Health IT Hazard Manager was tested and refined as well as strategies and implications for deploying it. The target audience includes AHRQ grantees/researchers; health care providers, including physicians and nurses; consumers/patients; and health care policymakers.

... Webinar learning objectives include:

1. Describe the rationale for developing the Health IT Hazard Manager and how it evolved through alpha and beta testing.
2. Explain the process for identifying and categorizing health IT-related hazards.
3. Demonstrate how the Health IT Hazard Manager would be used [i.e., it's not yet in use, despite mandates for HIT rollout with penalties for non-adopters - ed.] within and across care delivery organizations and health IT software vendors.
4. Discuss policy and process implications for deploying the Health IT Hazard Manager via different organizations (i.e., AHRQ; Office of the National Coordinator for Health IT; Patient Safety Organization(s); Accrediting bodies; IT entities).

In effect, HHS seems to be saying "we're working on the HIT risk problem, but roll it out anyway; if you get harmed or killed, tough luck."  This seems a form of negligence.

Have we thrown out all we know about medical research and human subjects protections in face of the magical powers and profits of computers in medicine?

-- SS

Thursday, February 9, 2012

A Critical Review of a Critical Review of e-Prescribing ... Or Is It CPOE?

In PLoS Medicine, the following article was recently published by researchers at the University of New South Wales in Australia:

Westbrook JI, Reckmann M, Li L, Runciman WB, Burke R, et al. (2012) Effects of Two Commercial Electronic Prescribing Systems on Prescribing Error Rates in Hospital In-Patients: A Before and After Study. PLoS Med 9(1): e1001164. doi:10.1371/journal.pmed.1001164


The section I find most interesting is this:

We conducted a before and after study involving medication chart audit of 3,291 admissions (1,923 at baseline and 1,368 post e-prescribing system) at two Australian teaching hospitals. In Hospital A, the Cerner Millennium e-prescribing system was implemented on one ward, and three wards, which did not receive the e-prescribing system, acted as controls. In Hospital B, the iSoft MedChart system was implemented on two wards and we compared before and after error rates. Procedural (e.g., unclear and incomplete prescribing orders) and clinical (e.g., wrong dose, wrong drug) errors were identified. Prescribing error rates per admission and per 100 patient days; rates of serious errors (5-point severity scale, those ≥3 were categorised as serious) by hospital and study period; and rates and categories of postintervention “system-related” errors (where system functionality or design contributed to the error) were calculated.

Here is my major issue:

Unless I am misreading, this research took place in hospitals (i.e., on hospital "wards") and does not seem to focus on (or even refer to) discharge prescriptions.

I think it would be reasonable to say that the systems referred to as "e-Prescribing" systems are those used at discharge, or in outpatient clinics/offices, to communicate with a retail pharmacy that sells commercially and is not involved in inpatient care.

From the U.S. Centers for Medicare and Medicaid Services (CMS), for example:

E-Prescribing - a prescriber's ability to electronically send an accurate, error-free and understandable prescription [theoretically, that is - ed.] directly to a pharmacy from the point-of-care

I therefore think the article's terminology for the type of system studied is not well chosen. I believe it could mislead readers not experienced with the various 'species' of health IT.

This study appears to be of an inpatient Computerized Practitioner Order Entry (CPOE) system, not e-Prescribing.

Terminology matters. For example, in the U.S. the HHS term "certification" misleads purchasers about the quality, safety and efficacy of health IT. HIT certification as it exists today (granted via ONC-Authorized Testing and Certification Bodies) is merely a features-and-functionality "certification of presence." It is not like an Underwriters Laboratories (UL) safety certification that an electrical appliance will not electrocute you.

(This is not to mention the irony that one major aspect of Medical Informatics research is to remove ambiguity from medical terminology, e.g., via the decades-old Unified Medical Language System project or UMLS. However, as I've often written, the HIT domain lacks the rigor of medical science itself.)

I note that if this were a grant proposal for studying e-Prescribing, I would return it with a low ranking and a reviewer comment that the study proposed is actually of CPOE.

That said, looking at the nature of this study:

The conclusion of this paper was as follows. I am omitting some of the actual numbers, such as confidence intervals, for clarity; see the full article, freely available at the link above, for that data:

Use of an e-prescribing system was associated with a statistically significant reduction in error rates in all three intervention wards. The use of the system resulted in a decline in errors at Hospital A from 6.25 per admission to 2.12 and at Hospital B from 3.62 to 1.46. This decrease was driven by a large reduction in unclear, illegal, and incomplete orders. The Hospital A control wards experienced no significant change. There was limited change in clinical error rates, but serious errors decreased by 44% across the intervention wards compared to the control wards.

Both hospitals experienced system-related errors (0.73 and 0.51 per admission), which accounted for 35% of postsystem errors in the intervention wards; each system was associated with different types of system-related errors.

I note that "system related errors" were defined as errors "where system functionality or design contributed to the error." In other words, these were unintended adverse events as a result of the technology itself.
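For perspective, here is a quick back-of-the-envelope calculation on the point estimates quoted above (a sketch only; the confidence intervals I omitted are in the full article):

```python
# Back-of-the-envelope arithmetic on the per-admission rates quoted above.
# These are the paper's reported point estimates; the confidence intervals
# omitted above are in the full text.

rates = {
    "Hospital A": (6.25, 2.12),  # prescribing errors per admission: before, after
    "Hospital B": (3.62, 1.46),
}

for name, (before, after) in rates.items():
    rel_reduction = (before - after) / before
    print(f"{name}: {before} -> {after} errors/admission "
          f"({rel_reduction:.0%} relative reduction)")
# Hospital A: 66% relative reduction; Hospital B: 60%.

# System-related error rates, per admission, in the intervention wards
# (the quote above does not say which figure belongs to which site);
# the paper reports these accounted for 35% of postsystem errors.
for rate in (0.73, 0.51):
    print(f"System-related errors: {rate} per admission")
```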

The authors conclude:

Implementation of these commercial e-prescribing systems resulted in statistically significant reductions in prescribing error rates. Reductions in clinical errors were limited in the absence of substantial decision support, but a statistically significant decline in serious errors was observed.

The authors do acknowledge some limitations of their (CPOE) study:

Limitations included a lack of control wards at Hospital B and an inability to randomize wards to the intervention.

Thus, this was mainly a pre-post observational study, certainly not a randomized controlled clinical trial.

Also not apparently accounted for were potential confounding variables related to the CPOE implementation process (as discussed in this comment thread).

In that thread I wrote the following to a commenter [a heckler, actually, apparently an employee of HIT vendor Meditech] who professed absolute faith in pre-post studies:

... A common scenario in HIT implementation is to first do a process improvement analysis to improve processes prior to IT implementation, on the simple calculus that "bad processes will only run faster under automation." There are many other changes that occur pre- and during implementation, such as training, raising the awareness of medical errors, hiring of new support staff, etc.

There can easily be scenarios (I've seen them) where poorly done HIT's distracting effects on clinicians are moderated to some extent by process and other improvements. Such factors need to be analyzed quite carefully, datasets and endpoints developed, and data carefully collected; the study design and preparation need to occur before the study even begins. Larger sample sizes will not eliminate the possible confounding effects of these factors, and of many more not listed here.

The belief that simple A/B pre-post tests that look at error rate comparisons are adequate is seductive, but it is wrong.

Stated simply, in pre-post trials the results may be affected by changes that occur other than the intervention. HIT implementation does not involve just putting computers on desks, as I point out above.

In other words, the study was essentially anecdotal.
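To make the confounding point concrete, here is a minimal, purely illustrative simulation (every number is hypothetical; none is drawn from the Westbrook et al. study). When a ward receives process improvements alongside the IT, a naive pre-post comparison credits the IT with the combined effect; a concurrent control ward receiving the same non-IT changes recovers the IT-only share:

```python
# Purely illustrative simulation of pre-post confounding. Every number here
# is hypothetical; none is drawn from the Westbrook et al. study.
import random

random.seed(42)

BASELINE = 5.0          # hypothetical errors per admission before anything changes
PROCESS_EFFECT = 0.30   # 30% reduction from training, process re-engineering, etc.
IT_EFFECT = 0.20        # 20% reduction attributable to the IT itself

def observed_rate(total_reduction, n_admissions=1000):
    """Mean observed error rate after a given relative reduction, with noise."""
    samples = (random.gauss(BASELINE * (1 - total_reduction), 0.5)
               for _ in range(n_admissions))
    return sum(samples) / n_admissions

pre = observed_rate(0.0)
post_intervention = observed_rate(PROCESS_EFFECT + IT_EFFECT)  # ward got both
post_control = observed_rate(PROCESS_EFFECT)   # same non-IT changes, no IT

naive = (pre - post_intervention) / pre              # credits IT with everything
controlled = (post_control - post_intervention) / pre  # isolates the IT's share

print(f"Naive pre-post 'IT effect':            {naive:.0%}")       # ~50%, inflated
print(f"Controlled estimate (vs. control ward): {controlled:.0%}")  # ~20%
```

Larger sample sizes only make the inflated naive estimate more precise; they do not remove the confounding, which is the point made above.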

The lack of RCTs in health IT is, in general, a violation of traditional medical research methodologies for studying medical devices. That issue is not limited to this article, of course.

Next, on ethics:

CPOE has already been demonstrated in situ to create all sorts of new potential complications, as in Koppel et al.'s "Role of Computerized Physician Order Entry Systems in Facilitating Medication Errors", JAMA 2005;293(10):1197-1203, doi:10.1001/jama.293.10.1197, which concluded:

In this study, we found that a leading CPOE system often facilitated medication error risks, with many reported to occur frequently. As CPOE systems are implemented, clinicians and hospitals must attend to errors that these systems cause in addition to errors that they prevent.

CPOE technology, at best, should be considered experimental in 2012.

In regard to e-Prescribing proper, consider this: Errors Occur in 12% of Electronic Drug Prescriptions, Matching Handwritten, and this: Upgrading e-prescribing system can bump up error risk. In other words, the literature is conflicting, confirming the technology remains experimental.

This current study confirmed that some (CPOE) errors that would not have occurred with paper did occur with cybernetics, amounting to "35% of postsystem errors in the intervention wards."

In other words, patient Jones was now subjected to a cybernetic error that would not have occurred with paper, in the hopes that patients Smith and Silverstein would be spared errors that might have occurred without cybernetic aid.

Even though the authors observe that "human research ethics approval was received from both hospitals and the University of Sydney", patient Jones did not provide informed consent to experimentation with what really are experimental medical devices, as I've often written on this blog [see note 1]. I'm therefore not certain the full set of ethical issues has been well addressed. Nor is this limited to this occasion; the phenomenon represents a pervasive, continual, worldwide oversight with regard to clinical IT.

Furthermore, and finally: of considerable concern is another limitation common to health IT studies, one I believe is often willful.

What really should be studied before justifications are given to spend tens of millions of dollars/Euros/whatever on CPOE or other clinical IT is this:

The impact of possible non-cybernetic interventions (e.g., additional humans and processes) to improve "medication ordering" (whether CPOE or e-Prescribing) that might be FAR LESS EXPENSIVE, and might carry far fewer IT-caused unintended adverse consequences, than cybernetic "solutions."
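As a sketch of the comparison such a study would enable (every figure below is invented purely for illustration; a real evaluation would have to measure all of them):

```python
# Hypothetical cost-effectiveness comparison of ordering-improvement options.
# EVERY figure below is invented for illustration only.

interventions = {
    # name: (annual cost USD, errors averted per 100 admissions,
    #        new errors introduced per 100 admissions)
    "CPOE system":              (2_000_000, 400, 60),  # incl. system-related errors
    "Added pharmacist review":  (150_000, 250, 5),
    "Standardized order forms": (40_000, 120, 2),
}

for name, (cost, averted, introduced) in interventions.items():
    net = averted - introduced
    print(f"{name}: ${cost:,}/yr, {net} net errors averted per 100 admissions, "
          f"${cost / net:,.0f} per net error averted")
```

Whether the real numbers would look anything like these is precisely what no one has bothered to find out.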

Instead, pre-post studies are used to justify expenditures of millions (locally) and tens or hundreds of billions (nationally), sometimes with results like this affecting an entire country.

There is something very wrong with this, both scientifically and ethically.

-- SS

Note:

[1] If these devices are not experimental, why are so many studying them to see if they actually work, to see if they pose unknown dangers, and to try to understand the conflicting results in the literature? More at this query link: http://hcrenewal.blogspot.com/search/label/Healthcare%20IT%20experiment


Addendum Feb. 10, 2012:

An anonymous commenter points out an interesting issue. They wrote:

The study was flawed due to its failure to consider delays in care and medication administration as an error caused by these experimental devices.

Delays are widespread with CPOE devices. One emergency room resorted to paper file cards and vacuum tubes to communicate urgency with the pharmacy. Delays were for hours.

I agree that lack of consideration of a temporal component, i.e., delays due to technology issues, is potentially significant.

I, for example, remember a more-than-five-minute delay, due to IT-related causes, in getting sublingual nitroglycerin to a relative with apparent chest pain.  The problem turned out to be gastrointestinal, not cardiac; with another patient, however, the hospital might not be so lucky.

Addendum Feb. 12, 2012:

A key issue in technology evaluation studies is to separate the effects of the technology intervention from other, potentially confounding variables which always exist in a complex sociotechnical system, especially in a domain such as medicine. This seems uncommonly done in HIT evaluation studies. Not doing so will likely inflate the apparent contribution of the technology.

A "control ward" where the same education and training, process re-engineering, procedural improvements, etc. were performed as compared to the "intervention ward" (but without actual IT use) would probably be better suited to pre-post studies such as this.

A "comparison ward" where human interventions were implemented, as opposed to cybernetic, would be a mechanism to determine how efficacious and cost-effective the IT was compared to less expensive non-cybernetic alternatives.

-- SS