
Wednesday, September 26, 2012

HIMSS Senior Vice President on Medical Ethics: Ignore Health IT Downsides for the Greater Good

The Healthcare Information and Management Systems Society (HIMSS) is the largest health IT vendor trade group in the U.S.  In a Sept. 21, 2012 HIMSS blog post, John Casillas, Senior Vice President of HIMSS Financial-Centered Systems and the HIMSS Medical Banking Project, dismisses concerns about health IT with the refrain:

... To argue that the existence of something good for healthcare in many other ways, such as having the right information at the point of care when it’s needed, is actually bad because outliers use it to misrepresent claims activity is deeply flawed.

Through the best use of health IT and management systems, we have the opportunity to improve the quality of care, reduce medical errors and increase patient safety. Don’t let the arguments of some cast a cloud over the critical importance and achievement of digitizing patient health records.

Surely, no one can argue paper records are the path forward. Name one other industry where this is the case. I can’t.

Let’s not let the errors of a few become the enemy of good.

The ethics of these statements from a non-clinician are particularly perverse.

The statement "Don’t let the arguments of some cast a cloud over the critical importance and achievement of digitizing patient health records" is particularly troubling.

When those "some" include organizations such as FDA (see the FDA internal 2010 memo on HIT risks, link) and the IOM's Committee on Patient Safety and Health Information Technology (see the 2012 report on health IT safety, link) - both stating that harms are definite but of unknown magnitude due to systematic impediments to collecting the data - and the ECRI Institute, which has had health IT on its "top ten healthcare technology risks" list for several years running (link), the dismissal of "clouds" is unethical on its face.

These reports indicate that nobody knows whether today's EHRs improve or worsen outcomes relative to good paper record systems.  The evidence is certainly conflicting (see here).

It also means that the current hyper-enthusiasm to roll out this software nationwide in its present state may well come at the expense of the unfortunate patients who find themselves as roadkill on the way to the unregulated health IT utopia.

That's not medicine, that's perverse human subjects experimentation without safeguards or consent.

As an HC Renewal reader noted:

Astounding hubris, although it does seem to be effective.  Such is PC hubris.  Who could ever call for reducing the budget of the NIH that is intended to improve health.  Has health improved?  No.

So why does a group with spotty successes if not outright failure never get cut?  It’s not the results, it’s the mission that deserves the funding.  So it’s not the reality of HIT, it’s the promise, the mission, that gets the support.  Never mind the outcome, it’s bound to improve with the continued support of the mission.

Is this HIMSS VP aware of these reports?  Does he even care?

Does he believe patients harmed or killed as a result of bad health IT (and I know of a number of cases personally through my advocacy work, including, horribly, infants and the elderly) are gladly sacrificing themselves for the greater good of IT progress?

It's difficult to draw any conclusion from health IT excuses such as those proffered other than that he and HIMSS simply don't care about the unintended consequences of health IT.

Regarding "Surely, no one can argue paper records are the path forward" - well, yes, I can.  (Not the path 'forward', but the path for now, at least, until health IT is debugged and its adoption and effects better understood).  And I did so argue, at my recent posts "Good Health IT v. Bad Health IT: Paper is Better Than The Latter" and "A Good Reason to Refuse Use of Today's EHR's in Your Health Care, and Demand Paper".  I wrote:

I opine that the elephant in the living room of health IT discussions is that bad health IT is infrequently, if ever, made a major issue in healthcare policy discussions.

I also opine that bad health IT is far worse, in terms of diluting and decreasing the quality and privacy of healthcare, than a very good or even average paper-based record-keeping and ordering system.  


This is a simple concept, but I believe it needs to be stated explicitly. 

A "path forward" that does not take into account these issues is the path forward of the hyper-enthusiastic technophile who either deliberately ignores or is blinded to technology's downsides, ethical issues, and repeated local and mass failures.

If today's health IT is not ready for national rollout - e.g., it causes harms of unknown magnitude (see this query link), results in massive breaches of security as in the "Good Reason" post above, and causes mayhem such as at this link - then:

The best - and most ethical - option is to slow down HIT implementation and allow paper-based organizations and clinicians to continue to resort to paper until these issues are resolved.  Resolution needs to occur in lab or experimental clinical settings without putting patients at risk - and with their informed consent.

Anything else is akin to the medical experimentation abuses of the past that led to current research subjects protections such as the "Ethical Guidelines & Regulations" used by NIH.

-- SS

Monday, August 13, 2012

Old Mystery Solved? Former FDA Reviewer Speaks Out About Intimidation, Retaliation and Marginalizing of Safety

At my Dec. 2005 post "Report: Life Science Manufacturers Adapt to Industry Transition" I wrote:

... The recognition of a gap in formally-trained medical informatics personnel in the pharmaceutical industry [by Gartner Group] is welcome. For example, from my own experience:

I recall an interview I had last year with the head of the Drug Surveillance & Adverse Events department at Merck Research Labs in a rehire situation [after a 2003 layoff]. I came highly recommended by an Executive Director in the department, to whom I had shown my prior work. This included well-accepted, novel human-computer interaction designs I'd developed for use by busy biomedical researchers for a large clinical study in the Middle East, as well as my work modeling invasive cardiology and leading the development and implementation of a comprehensive information system to detect new device and treatment modality risks in a regional center performing more than 6,000 procedures/year. In addition, I'd worked with the wife of the Executive Director in years prior, when she ran the E.R. of the hospital where I was director of occupational medicine.

Despite all this in my favor, the Executive Director's boss, himself a former FDA adverse events official [a former deputy director of CDER’s office of drug safety, who'd recently moved to the pharma industry he once regulated - ed.], dismissed me in five minutes as I was showing him the cardiology project, saying flatly "we don't need a medical informatics person here." I had driven 80 miles to Rahway for this interview to save the executive a trip to Pennsylvania, where I was originally scheduled to come for the interview, since the executive's father was ill in the hospital. In an instance of profound social ineptness, my effort was not even acknowledged. Perhaps he was in a bad frame of mind, but the dismissal under the circumstances was all the more disappointing.

I recall this was one of the most puzzling hiring debacles I'd ever experienced, as all the senior people in his dept. had recommended he hire me - I was really only there for his approval and signoff - and the work I'd shown him had improved care, saved lives, and saved money.

I may not need to be puzzled any longer.  This story just appeared:

Former FDA Reviewer Speaks Out About Intimidation, Retaliation and Marginalizing of Safety
By Martha Rosenberg, Truthout
July 29, 2012

The Food and Drug Administration (FDA) is often accused of serving industry at the expense of consumers. But even FDA defenders are shocked by reports this week of an institutionalized FDA spying program on its own scientists, lawmakers, reporters and academics that included an enemies list of "actors" and collaborators ...

... Ronald Kavanagh [FDA drug reviewer from 1998 to 2008]:  ... In the Center for Drugs [Center for Drug Evaluation and Research or CDER], as in the Center for Devices, the honest employee fears the dishonest employee. There is also irrefutable evidence that managers at CDER have placed the nation at risk by corrupting the evaluation of drugs and by interfering with our ability to ensure the safety and efficacy of drugs ... While I was at FDA, drug reviewers were clearly told not to question drug companies and that our job was to approve drugs.

Read the entire story at the link.  I won't cover it more here, except to say it's certainly possible to believe that certain FDA officials don't want serious people around - people who, in addition to being MD's, can write serious software to detect drug and device problems, and whose work can get in the way of drug approvals.

-- SS

Monday, July 23, 2012

Health IT FDA Recall: Philips Xcelera Connect - Incomplete Information Arriving From Other Systems

Another health IT FDA recall notice, this time on middleware, an interface engine that routes data:

Week of July 11

Product description:  

Philips Xcelera Connect, Software R2.1 L 1 SP2, an interface engine for data exchange [a specialized computer and accompanying software package - ed.]. Philips Xcelera Connect R2.x is a generic interface and data mapping engine between a Hospital Information System (HIS), Imaging Modalities, Xcelera PACS and Xcelera Cath Lab Manager (CLM). This interface engine simplifies the connection by serving as a central point for data exchange. The data consists only of demographic patient information, schedules, textual information and text reports.

Classification:  Class II

Reason for Recall:  Xcelera Connect R2.1 L 1 SP2, incomplete information arriving from unformatted reports interface

The data consists "only" of demographic patient information, schedules, textual information and text reports?

This is a dangerous fault mode, indeed.

"Incomplete information" moving between a hospital information system, imaging systems, a PACS system used to manage the images, and a cardiac cath lab can lead to very bad outcomes (and million dollar lawsuits), such as at "Babies' deaths spotlight safety risks linked to computerized systems", second example.

Note that the interface engine is in release 2.1, level 1, service pack 2.

In other words, a critical hardware/software product such as this undergoes constant tweaking (like Windows).

As a Class II device, the software is at least vetted to some degree by FDA:

Class II devices are those for which general controls alone are insufficient to assure safety and effectiveness, and existing methods are available to provide such assurances. In addition to complying with general controls, Class II devices are also subject to special controls. A few Class II devices are exempt from the premarket notification. Special controls may include special labeling requirements, mandatory performance standards and postmarket surveillance. Devices in Class II are held to a higher level of assurance than Class I devices, and are designed to perform as indicated without causing injury or harm to patient or user. Examples of Class II devices include powered wheelchairs, infusion pumps, and surgical drapes.

One wonders how testing of tweaks and updates to this product is done, if at all, other than on live and unsuspecting patients.

When you go into the hospital you are not just putting your life in the hands of the doctors and nurses, you're putting your life into the hands of computer geeks and software development experiments.

-- SS

July 25, 2012 Addendum:

The WSJ covered this here:  http://blogs.wsj.com/cio/2012/07/20/philips-recalls-flawed-patient-data-system/.  From their report:

... The problem that led to the recall: hitting the “enter” button, to start a new paragraph, in the summary field of heart test reports, sometimes caused the text entered below that point to be stripped from the report as it was transmitted into the patient’s electronic health record. And doctors later reviewing the patient’s electronic health record would not necessarily know they had received only part of the report, which could lead them to make “incorrect treatment decisions,” Philips said in a letter to hospitals.

...  Mike Davis, managing director at The Advisory Board Company, a healthcare research firm, says in the case of the Xcelera Connect, Philips should have caught the problem in testing. “How the hell does this get out? It shows there wasn’t good quality assurance processes in place.”

Indeed.
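For readers wondering how pressing "enter" could silently swallow the rest of a report, here is a minimal, hypothetical sketch in Python - not Philips' actual code, which is not public - of this class of defect, under the assumption that the serializer feeds an HL7 v2-style interface in which an unescaped line break acts as a delimiter (HL7 v2 conventionally escapes a line break within formatted text as \.br\):

    # Hypothetical illustration only -- not Philips' actual code.
    # Assumed failure mode: the serializer treats the first line break in a
    # free-text summary as the end of the field, so everything typed after
    # the Enter key is silently dropped downstream.

    def serialize_summary_naive(summary: str) -> str:
        # Buggy: keeps only the text before the first line break.
        return summary.splitlines()[0] if summary else ""

    def serialize_summary_escaped(summary: str) -> str:
        # Safer: escape line breaks (HL7 v2's "\.br\") instead of truncating.
        for br in ("\r\n", "\r", "\n"):
            summary = summary.replace(br, "\\.br\\")
        return summary

    report = "EF 25%, severely reduced.\nDo NOT proceed without cardiology review."
    print(serialize_summary_naive(report))    # the warning after Enter is gone
    print(serialize_summary_escaped(report))  # both lines survive, break escaped

A round-trip regression test - serialize a report containing a paragraph break, then assert that the received text still contains everything after the break - is exactly the cheap QA step whose absence Mr. Davis is decrying.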

-- SS

Monday, June 25, 2012

FDA: Software Failures Responsible for 24% Of All Medical Device Recalls

At "FDA: Software Failures Responsible for 24% Of All Medical Device Recalls" via Kaspersky Lab, a software security company, it is observed (emphases mine):

Software failures were behind 24 percent of all the medical device recalls in 2011, according to data from the U.S. Food and Drug Administration, which said it is gearing up its labs to spend more time analyzing the quality and security of software-based medical instruments and equipment.

The FDA's Office of Science and Engineering Laboratories (OSEL) released the data in its 2011 Annual Report on June 15, amid reports of a compromise of a Web site used to distribute software updates for hospital respirators. The absence of solid architecture and "principled engineering practices" in software development affects a wide range of medical devices, with potentially life-threatening consequences, the Agency said. In response, FDA told Threatpost that it is developing tools to disassemble and test medical device software and locate security problems and weak design.

... "Manufacturers are responsible for identifying risks and hazards associated with medical device software (or) firmware, including risks related to security, and are responsible for putting appropriate mitigations in place to address patient safety," the agency said in an e-mail statement.

Health IT medical devices are the exception, of course.  A health IT virtual medical device is always of rock-solid architecture, always uses "principled engineering practices" in software development, and never has life-threatening consequences.

Hence its special regulatory accommodations over non-virtual (tangible) medical devices.

-- SS

Sunday, June 3, 2012

WSJ "There's a Medical App for That—Or Not" - Misinformation on Health IT Safety Regulation?

There's a health IT meme that just won't die (patients may, but not the meme).

It's the meme that health IT "certification" is a certification of safety.

I expressed concern about the term "certification" being misunderstood even before the meme formally appeared, when the term was adopted by HHS with regard to evaluation of health IT for adherence to the "meaningful use" pre-flight features checklist.  See my mid-2009 post "CCHIT Has Company" where I observed:

HIT "certification." ... is a term I put in quotes since it really is "features qualification" at this point, not certification such as a physician receives after passing Specialty Boards.

The "features qualification" is an assurance that the EHR functions in a way that could enable an eligible provider or eligible hospital to meet the Center for Medicare & Medicaid Services' (CMS) requirements of "Meaningful Use."  No rigorous safety testing in any meaningful sense is done, and no testing under real-world conditions is done at all.

I've seen the meme in various publications and venues.  I've even seen it in legal documents in medical malpractice cases where EHR's were involved, as an attempted defense.

Now the WSJ has fallen for the health IT Certification meme.

An article "There's a Medical App for That—Or Not" was published on May 29, 2012.  Its theme is special regulatory accommodation for health IT in the form of opposition to FDA regulation of devices such as "portable health records and programs that let doctors and patients keep track of data on iPads."

In the article, this assertion about health IT "certification" is made:

... The FDA's approach to health-information technology risks snuffing out activity at a critical frontier of health care. Poor, slow regulation would encourage programmers to move on, leaving health care to roil away for yet another generation, fragmented, disconnected and choking on paperwork.

The process already exists for safeguarding the public for computers in health care. It's not FDA premarket review but the health information technology certification program, established under President George W. Bush and still working fine under the Obama Health and Human Services Department. The government sets the standards and an independent nonprofit [ATCB, i.e., ONC Authorized Testing and Certification Bodies - ed.] ensures that apps meet those standards. It's a regulatory process as nimble as the breakout industry it's meant to monitor. That is where and how these apps should be regulated.

It's a wonderful meme.  Unfortunately, it's wrong.  Dead wrong.

Certification by an ATCB does not "safeguard the public."   Two ONC Authorized Testing and Certification Bodies (ATCB's) admitted this in email, as in my Feb. 2012 post "Hospitals and Doctors Use Health IT at Their Own Risk - Even if Certified".  I had asked them, point-blank:

"Is EHR certification by an ATCB a certification of EHR safety, effectiveness, and a legal indemnification, i.e., certifying freedom from liability for EHR use of clinical users or organizations? Or does it signify less than that?"

I received two replies from major ONC ATCB's indicating that "certification" is merely assurance that HIT meets a minimal set of "meaningful use" guidelines, not that it's been vetted for safety.  For instance:

From: Joani Hughes (Drummond Group)
Sent: Monday, March 05, 2012 1:06 PM
To: Scot Silverstein
Subject: RE: EHR certification question

Per our testing team:

It is less than that. It does not address indemnification although a certification could be used as a conditional part of some other form of indemnification function, such as a waiver or TOA, but that is ultimately out of the scope of the certification itself. Certification in this sense is an assurance that the EHR functions in way that could enable an eligible provider or eligible hospital to meet the CMS requirements of Meaningful Use Stage 1. Or to restate it more directly, CMS is expecting eligible providers or eligible hospitals to use their EHR in “meaningful way” quantified by various quantitative measure metrics and eligible providers or eligible hospitals can only be assured they can do this if they obtain a certified EHR technology.

Please let me know if you have any questions.

Thank you,
Joani.

Joani Hughes
Client Services Coordinator
Drummond Group Inc.

The other ATCB, ICSA Labs, stated that:

... Certification by an ATCB signifies that the product or system tested has the capabilities to meet specific criteria published by NIST and approved by the Office of the National Coordinator. In this case the criteria are designed to support providers and hospitals achieve "Meaningful Use." A subset of the criteria deal with the security and patient privacy capabilities of the system.

Here is a list of the specific criteria involved in our testing:
http://healthcare.nist.gov/use_testing/effective_requirements.html

In a nutshell, ONC-ATCB Certification deals with testing the capabilities of a system, some of them relate to patient safety, privacy and security functions (audit logging, encryption, emergency access, etc.).

What was suggested in the email below (freedom from liability for users of the system, etc.) would be out of scope for ONC-ATCB testing based on the given criteria. [I.e., certification criteria - ed.] I hope that helps to answer your question.

I had noted that:

... My question was certainly answered [by the ATCB responses]. ONC certification is not a safety validation, such as in a document from NASA on aerospace software safety certification, "Certification Processes for Safety-Critical and Mission-Critical Aerospace Software" (PDF) which specifies at pg. 6-7:
In order to meet most regulatory guidelines, developers must build a safety case as a means of documenting the safety justification of a system. The safety case is a record of all safety activities associated with a system throughout its life. Items contained in a safety case include the following:

• Description of the system/software
• Evidence of competence of personnel involved in development of safety-critical software and any
safety activity
• Specification of safety requirements
• Results of hazard and risk analysis
• Details of risk reduction techniques employed
• Results of design analysis showing that the system design meets all required safety targets
• Verification and validation strategy
• Results of all verification and validation activities
• Records of safety reviews
• Records of any incidents which occur throughout the life of the system
• Records of all changes to the system and justification of its continued safety

A CCHIT ATCB juror, a physician informatics specialist, also wrote a Jan. 2012 guest post on HC Renewal about the certification process, reproducing his testimony to HHS on the issue.  That post is "Interesting HIT Testimony to HHS Standards Committee, Jan. 11, 2011, by Dr. Monteith."  Dr. Monteith testified (emphases mine):

... I’m “pro-HIT.” For all intents and purposes, I haven’t handwritten a prescription since 1999.

That said and with all due respect to the capable people who have worked hard to try to improve health care through HIT, here’s my frank message:

ONC’s strategy has put the cart before the horse. HIT is not ready for widespread implementation. 

... ONC has promoted HIT as if there are clear evidence-based products and processes supporting widespread HIT implementation.

But what’s clear is that we are experimenting…with lives, privacy and careers.

... I have documented scores of error types with our certified EHR, and literally hundreds of EHR-generated errors, including consistently incorrect diagnoses, ambiguous eRxs, etc.

As a CCHIT Juror, I’ve seen an inadequate process. Don’t get me wrong, the problem is not CCHIT. The problem stems from MU.

EHRs are being certified even though they take 20 minutes to do a simple task that should take about 20 seconds to do in the field.  [Which can contribute to mistakes and "use error" - ed.] Certification is an “open book” test. How can so many do so poorly?

For example, our EHR is certified, even though it cannot generate eRxs from within the EHR, as required by MU.

To CCHIT’s credit, our EHR vendor did not pass certification. Sadly, our vendor went to another certification body, and now they’re certified.

MU does not address many important issues. Usability has received little more than lip-service. What about safety problems and reporting safety problems? What about computer generated alerts, almost all of which are known to be ignored or overridden (usually for good reason)?
 
The concept of “unintended consequences” comes to mind.

All that said, the problem really isn’t MU and its gross shortcomings, it is ONC trying to do the impossible:

ONC is trying to artificially force a cure for cancer, basically trying to promote one into being, when in fact we need to let one evolve through an evidence-based, disciplined process of scientific discovery and the marketplace.

Needless to say, as was learned at great cost in past decades, a "disciplined process" in medicine includes meaningful safety regulation by objective outside experts.

Further, the certifiers have no authority to do important things such as forcibly remove dangerous software from the market.  An example is the forced Class 1 recall of a defective system as I wrote about in my Dec. 2011 post "FDA Recalls Draeger Health IT Device Because This Product May Cause Serious Adverse Health Consequences, Including Death".   Class 1 recalls are the most serious type of recall and involve situations in which there is a reasonable probability that use of these products will cause serious adverse health consequences or death.

In that situation, the producer had been simply advising users (in critical care environments, no less) to "work around the defects" that could indicate incorrect recommended dosage values of critical meds, including a drug dosage up to ten times the indicated dosage, as well as corrupt critical cardiovascular monitoring data.  As I observed:

... I find a software company advising clinicians to make sure to "work around" blatant IT defects in "acute care environments" the height of arrogance and contempt for patient safety.

Without formal regulatory authority to take actions such as this FDA recall, "safeguarding the public" is a meaningless platitude.

It's also likely the ATCB's, which are private businesses, would not want the responsibility of "safeguarding the public."  That responsibility would open them up to litigation when patient injuries or death were caused, or were contributed to, by "certified" health IT.

I have in the past also noted that the use of the term "certification" might have been deliberate, to mislead potential buyers exactly into thinking that "certification" is akin to a UL certification of an electrical appliance for safety, or an FAA approval of a new aircraft's flight-worthiness.

The WSJ needs to clarify and/or retract its statement, as the statement is misinformation.

At my Feb. 2012 post "Health IT Ddulites and Disregard for the Rights of Others" I observed:

Ddulites [HIT hyper-enthusiasts - ed.] ... ignore the downsides (patient harms) of health IT.

This is despite being already aware of, or informed of patient harms, even by reputable sources such as FDA (Internal FDA memo on H-IT risks), The Joint Commission (Sentinel Events Alert on health IT), the NHS (Examples of potential harm presented by health software - Annex A starting at p. 38), and the ECRI Institute (Top ten healthcare technology risks), to name just a few.

In fact, the hyper-enthusiastic health IT technophiles will go out of their way to incorrectly dismiss risk management-valuable case reports as "anecdotes" not worthy of consideration (see "Anecdotes and medicine" essay at this link).

They will also make unsubstantiated, often hysterical-sounding claims that health IT systems are necessary to, or simply will "transform" (into what, exactly, is usually left a mystery) or even "revolutionize" medicine (whatever that means).

Health IT is a potentially dangerous technology.   It requires meaningful regulation to "safeguard the public."  How many incidents like this and this will it take before that is understood by the hyper-enthusiasts?

I've emailed the ATCB's that had responded to my aforementioned query, asking for clarification on the WSJ assertion about their role, since that assertion contradicts their earlier replies to me.  I also advised them of the potential liability issues.

However, if it turns out to be true that the ONC-ATCB's do intend to be the ultimate watchdog and assurer of public safety related to EHR's, that needs to be known by the public and their representatives.

-- SS

Sunday, March 11, 2012

Doctors and EHRs: Reframing the "Modernists v. Luddites" Canard to The Accurate "Ardent Technophiles vs. Pragmatists" Reality

One manner by which healthcare's core values are usurped is through distortions and slander aimed at physicians and other clinicians.

At "Health IT: Ddulites and Irrational Exuberance" and related posts (query link) I've described the phenomenon of the:

'Hyper-enthusiastic technophile who either deliberately ignores or is blinded to technology's downsides, ethical issues, and repeated local and mass failures.'

I have called this personality type the "Ddulite", which is "Luddite" with the first four letters reversed. I have also pointed out that the two are not exact opposites, as the Luddites did not endanger anyone in trying to preserve their textile jobs, whereas the Ddulites in healthcare IT do endanger patients.

Yet, in the 20 years I've been professionally involved in health IT, I have frequently heard the refrain, usually from IT personnel and their management, that "Doctors resist EHRs because they are [backwards, technophobic, reactionary, dinosaurs, unable/unwilling to change, think they are Gods, ..... insert other slanderous/libelous comment]."

I've heard this at Informatics meetings, at medical meetings, at commercial health IT meetings (e.g., Microsoft's Health Users Group, and at HIMSS), at government meetings (e.g., GS1 healthcare), and others.

The summary catchphrase I've heard and seen (even in the comments on this blog) is that doctors are "Luddites" while IT personnel are forward-thinking, know better than doctors, and are "Modernists."

This slander and libel of physicians and other clinicians needs to stop, and the entire issue needs to be reframed.

Doctors are pragmatists. When a new technology is rigorously shown to be beneficial to patients, and (perhaps more importantly) rigorously shown not to be of little benefit or, worse, significantly harmful, doctors will embrace it. There are countless examples of this that I need not go into. They also have responsibilities, obligations, ethical considerations, liabilities, and other factors to weigh in their decisions:

Pragmatism (Merriam-Webster):

: a practical approach to problems and affairs

The reality is not:


Luddite doctors <---- are in tension with ----> Modernist IT personnel

but is:


Pragmatist doctors <---- are in tension with ----> Ardent technophiles (Ddulites)


The technophiles' views may be due, on the one hand, to ignorance of medicine's true complexities and "innocent" overconfidence in technology. Unfortunately, it is a gargantuan leap of logic to go from "well, computers work in tracking FedEx packages and allowing me to withdraw money from my U.S. bank when I'm abroad" to "therefore, with just a little work, they will transform medicine."

Anyone familiar with even the most fundamental issues in Medical Informatics is aware of this. (This is the problem with "generic management" of healthcare IT - healthcare amateurs are unfamiliar with these issues.) Due to the complex, messy social, scientific, informational, ethical, cultural, emotional and other issues relatively unique to medicine, the leap from banking/widget tracking/mercantile computing to medicine is probably more naive than the leap in logic that would have a person believe that, since a hot air balloon can go high into the sky, it can take a person to the moon, as I observed here.

On the other hand the technophile's expressed views can also be a territorial ploy with full awareness of, and reckless disregard for, the consequences of technology's downsides.

(The CIO where I was a CMIO was well-known to be an aficionado of Sun Tzu's "Art of War" in his corporate politics - the polar opposite of a 'team player.' I might add that the doctors were fully expected to be 'team players'.)

Part of the struggle between the health IT industry and medical professionals has also been control of information flow about HIT.

This has been brought to the fore by my observation of the almost uniformly negative comments on today's HIT at the physician-only site Sermo.com. Sermo is populated, I might add, not by computerphobes but by physicians in a wide variety of specialties using computers for social networking. These comments will hopefully soon be published.

(They are not dissimilar to the many comments I reported in my Jan. 2010 post "An Honest Physician Survey on EHR's", although some might call the sponsor of the latter survey, AAPS, biased. I do not think the same can be said of Sermo.com, an open site for all physicians.)

I have mentioned on this blog the numerous impediments to flow of information about health IT's downsides, and these impediments are well described, for example, in the Joint Commission Sentinel Events Alert on Health IT (link), the FDA Internal Memorandum on H-IT Safety (link) and elsewhere (such as at link, link).

The Institute of Medicine of the National Academies noted this in their late 2011 study on EHR safety:

... While some studies suggest improvements in patient safety can be made, others have found no effect. Instances of health IT–associated harm have been reported. However, little published evidence could be found quantifying the magnitude of the risk.

Several reasons health IT–related safety data are lacking include the absence of measures and a central repository (or linkages among decentralized repositories) to collect, analyze, and act on information related to safety of this technology. Another impediment to gathering safety data is contractual barriers (e.g., nondisclosure, confidentiality clauses) that can prevent users from sharing information about health IT–related adverse events. These barriers limit users’ abilities to share knowledge of risk-prone user interfaces, for instance through screenshots and descriptions of potentially unsafe processes. In addition, some vendors include language in their sales contracts and escape responsibility for errors or defects in their software (i.e., “hold harmless clauses”). The committee believes these types of contractual restrictions limit transparency, which significantly contributes to the gaps in knowledge of health IT–related patient safety risks. These barriers to generating evidence pose unacceptable risks to safety.[IOM (Institute of Medicine). 2012. Health IT and Patient Safety: Building Safer Systems for Better Care (PDF). Washington, DC: The National Academies Press, pg. S-2.]

Also in the IOM report:

… “For example, the number of patients who receive the correct medication in hospitals increases when these hospitals implement well-planned, robust computerized prescribing mechanisms and use barcoding systems. But even in these instances, the ability to generalize the results across the health care system may be limited. For other products— including electronic health records, which are being employed with more and more frequency— some studies find improvements in patient safety, while other studies find no effect.

More worrisome, some case reports suggest that poorly designed health IT can create new hazards in the already complex delivery of care. Although the magnitude of the risk associated with health IT is not known, some examples illustrate the concerns. Dosing errors, failure to detect life-threatening illnesses, and delaying treatment due to poor human–computer interactions or loss of data have led to serious injury and death.”


I note that the 'impediments to generating evidence' effectively rise to the level of legalized censorship, as observed by Koppel and Kreda regarding gag and hold-harmless clauses in their JAMA article "Health Care Information Technology Vendors' Hold Harmless Clause: Implications for Patients and Clinicians", JAMA 2009;301(12):1276-1278. doi: 10.1001/jama.2009.398.

Pragmatist physicians are quite rightly very wary of the technology as it now exists.

Ultimately, even when information on HIT risks or defects does surface, it is highly inappropriately labeled as "anecdotal" (see this post on anecdotes for why this behavior is inappropriate).

This "anecdotalist" phenomenon occurs right up to the HHS Office of the National Coordinator for Health IT (ONC), as I described in my post "Making a Stat Less Significant: Common Sense on 'Side Effects' Lacking in Healthcare IT Sector" and elsewhere.

Therefore, another part of reframing the pragmatism vs. technophilia issue is for clinicians to put an end to censorship of HIT adverse experiences.

I have the following practical suggestions, used myself, to start to accomplish the latter goal.

These suggestions are in the interest of protecting public health and safety:

When a physician or other clinician observes health IT problems, defects, malfunctions, mission hostility (e.g., poor user interfaces), significant downtimes, lost data, erroneous data, misidentified data, and so forth ... and most certainly, patient 'close calls' or actual injuries ... they should (anonymously if necessary if in a hostile management setting):

(DISCLAIMER:  I am not responsible for any adverse outcomes if any organizational policies or existing laws are broken in doing any of the following.)

  • Inform their facility's senior management, if deemed safe and not likely to result in retaliation such as being slandered as a "disruptive physician" and/or being subjected to sham peer review (link).
  • Inform their personal and organizational insurance carriers, in writing. Insurance carriers do not enjoy paying out for preventable IT-related medical mistakes, and they have begun to take notice of HIT risks. See, for example, the essay on Norcal Mutual Insurance Company's newsletter on HIT risks at this link. (Note - many medical malpractice insurance policies can be interpreted as requiring this reporting, observed occasional guest blogger Dr. Scott Monteith in a comment to me about this post.)
  • Inform the State Medical Society and local Medical Society of your locale.
  • Inform the appropriate Board of Health for your locale.
  • If applicable (and it often is), inform the Medicare Quality Improvement Organization (QIO) of your state or region. Example: in Pennsylvania, the QIO is "Quality Insights of PA."
  • Inform a personal attorney.
  • Inform local, state and national representatives such as congressional representatives. Sen. Grassley of Iowa is aware of these issues, for example.
  • As clinicians are often forced to use health IT, at their own risk even when "certified" (link), if a healthcare organization or HIT seller is sluggish or resistant in taking corrective actions, consider taking another risk (perhaps this is for the very daring or those near the end of their clinical career). Present your organization's management with a statement for them to sign to the effect of:
"We, the undersigned, do hereby acknowledge the concerns of [Dr. Jones] about care quality issues at [Mount St. Elsewhere Hospital] regarding EHR difficulties that were reported, namely [event A, event B, event C ... etc.]

We hereby indemnify [Dr. Jones] for malpractice liability regarding patient care errors that occur due to EHR issues beyond his/her control, but within the control of hospital management, including but not limited to: [system downtimes, lost orders, missing or erroneous data, etc.] that are known to pose risk to patients. We assume responsibility for any such malpractice.

With regard to health IT and its potential negative effects on care, Dr. Jones has provided us with the Joint Commission Sentinel Events Alert on Health IT at http://www.jointcommission.org/assets/1/18/SEA_42.PDF, the IOM report on HIT safety at http://www.modernhealthcare.com/Assets/pdf/CH76254118.PDF, and the FDA Internal Memorandum on H-IT Safety Issues at http://www.scribd.com/huffpostfund/d/33754943-Internal-FDA-Report-on-Adverse-Events-Involving-Health-Information-Technology.

CMO __________ (date, time)
CIO ___________ (date, time)
CMIO _________ (date, time)
General Counsel ___________ (date, time)
etc."
  • If the hospital or organizational management refuses to sign such a waiver (and they likely will!), note the refusal, with date and time of refusal, and file away with your attorney. It could come in handy if EHR-related med mal does occur.
  • As EHRs remain experimental, I note that indemnifications such as the above probably belong in medical staff contracts and bylaws when EHR use is coerced.

These measures can help "light a fire" under the decision makers, and "get the lead out" of efforts to improve this technology to the point where it is usable, efficacious and safe.

-- SS

Monday, February 13, 2012

Congressman Darrell Issa: FDA's email monitoring of "whistleblowers" communicating with Congress was illegal

In followup to my post of Jan. 30, 2012 "Can You Sue the Government? FDA Whistleblowers Sue Over Surveillance of Personal e-Mail" I provide a link to a probing letter from Darrell Issa, Chairman, US House of Representatives Committee on Oversight and Government Reform to Margaret Hamburg MD, Commissioner of the FDA.

The letter raises the issue that FDA's email monitoring of "whistleblowers" communicating with Congress was illegal ("unlawful, and will not be tolerated"), and the illegality was further compounded by harassment and retaliation against the "whistleblowers."

Many probing "who? why? when?" questions are asked of FDA.

I do not have free text of this letter, just a link to images of the letter. I cannot post the text (no access to OCR of a PDF at the moment), but the letter images are here:

http://www.whistleblowers.org/storage/whistleblowers/documents/FDAComplaint/issaletter.fdaspying.pdf

Worth reading in its entirety.

-- SS

Monday, January 30, 2012

Can You Sue the Government? FDA Whistleblowers Sue Over Surveillance of Personal e-Mail

From the Washington Post:

FDA staffers sue agency over surveillance of personal e-mail
Ellen Nakashima and Lisa Rein
January 29, 2012

The Food and Drug Administration secretly monitored the personal e-mail of a group of its own scientists and doctors after they [the scientists - ed.] warned Congress that the agency was approving medical devices that they believed posed unacceptable risks to patients, government documents show.

The surveillance — detailed in e-mails and memos unearthed by six of the scientists and doctors, who filed a lawsuit against the FDA in U.S. District Court in Washington last week — took place over two years as the plaintiffs accessed their personal Gmail accounts from government computers.

While accessing Gmail from government computers was not a wise idea, since all traffic over an institutional PC and network can be monitored, these Gmails were apparently to members of Congress.

Copies of the e-mails show that, starting in January 2009, the FDA intercepted communications with congressional staffers and draft versions of whistleblower complaints complete with editing notes in the margins. The agency also took electronic snapshots of the computer desktops of the FDA employees and reviewed documents they saved on the hard drives of their government computers.

See sample emails at link above.

Information garnered this way eventually contributed to the harassment or dismissal of all six of the FDA employees, the suit alleges. All had worked in an office responsible for reviewing devices for cancer screening and other purposes.

That's very unfortunate.

It will be far more unfortunate if the warnings of the six, as in this whistleblower case, went unheeded, and patients are injured or die as a result. In that case, FDA bureaucrats might have been accessories to those injuries or deaths.

“Who would have thought that they would have the nerve to be monitoring my communications to Congress?” said Robert C. Smith, one of the plaintiffs in the suit, a former radiology professor at Yale and Cornell universities who worked as a device reviewer at the FDA until his contract was not renewed in July 2010. “How dare they?”

I, on the other hand, would have expected it. It would have been far more prudent to send such emails from a private home computer and ISP.

The scientists and doctors denied sharing information improperly. The HHS inspector general’s office, which oversees FDA operations, declined to pursue an investigation, finding no evidence of criminal conduct. It also said that the doctors and scientists had a legal right to air their concerns to Congress or journalists.

FDA officials sought a second time that year to initiate action against the scientists and doctors. “We have obtained new information confirming the existence of information disclosures that undermine the integrity and mission of the FDA and, we believe, may be prohibited by law,” wrote Jeffrey Shuren, director of the FDA’s Center for Devices and Radiological Health, on June 28, 2010.

The inspector general, after consulting with federal prosecutors, declined the second request, as well.


The IG office seemed to find the behavior legal, but FDA bureaucrats apparently did not like non-team players.


The FDA scientists and doctors, all of whom worked for the agency’s Office of Device Evaluation, said they first made internal complaints beginning in 2007 that the agency had approved or was on the verge of approving at least a dozen radiological devices whose effectiveness was not proven and that posed risks to millions of patients. Frustrated, they also brought their concerns to Congress, the White House and the HHS inspector general.

Three of the devices risked missing signs of breast cancer, the scientists and doctors warned, according to documents and interviews. Another risked falsely diagnosing osteoporosis, leading to unnecessary treatments; one ultrasound device could malfunction while monitoring pregnant women in labor, risking harm to the fetus; and several devices for colon cancer screening used such heavy doses of radiation that they risked causing cancer in otherwise healthy people, the FDA scientists and doctors said.


Permit me to wonder if regulatory capture played a role in these decisions.

One might also wonder if complaints about electronic health records or other clinical IT, admitted by FDA to be a medical device "political hot potato" they elected to not regulate, were also involved.


... The first documented FDA interception was of an e-mail dated Jan. 29, 2009, shortly after the letter from Ferry. In it, device reviewer Paul T. Hardy asked a congressional aide, Joanne Royce, for assurances that “it is not a crime to provide information to the Congress about potential misconduct by another Agency employee.”

Royce replied: “[Y]ou and your colleagues have committed no crime. . . . you guys didn’t even provide confidential business information to Congress.”


The only 'crime' was apparently not being a 'team player', which on Healthcare Renewal has been defined as someone who is silent, or silenced, or a co-conspirator regarding managerial mediocrity, malfeasance, or madness.


Hardy, who is among the six employees who filed the suit, was fired in November after a negative performance review; an internal FDA letter obtained in separate litigation quoted managers saying they did not “trust” him. Of the other five scientists and doctors, the suit says two did not have their contracts renewed, two suffered harassment and were passed over for promotions, and one was fired.


Trust him to do - what, I ask?

Read the whole WaPo article.

Plaintiffs' lawyers need to be aware of this event, and I intend to make them aware.

-- SS

Feb. 13, 2012 addendum:

A link to Darrell Issa's letter to FDA Commissioner Hamburg is here.

-- SS

Wednesday, December 14, 2011

FDA Recalls Draeger Health IT Device Because "This Product May Cause Serious Adverse Health Consequences, Including Death"

More health IT madness, in the form of an actual FDA recall:


FDA Recall notice

Draeger Medical Inc., Infinity Acute Care System Monitoring Solution (M540)
Recall Class: Class I [the most serious type of recall, see below - ed.]
Date Recall Initiated: October 18, 2011
Product: Infinity Acute Care System Monitoring Solution (M540), Catalog number MS25510
All serial numbers are affected by this recall.

This product was manufactured from March 1, 2011 through September 30, 2011 and distributed only to the Rush University Medical Center (Chicago, Illinois) from July 1, 2011 through September 30, 2011.

Use: This product is a networked solution system used to monitor a patient’s vital signs and therapy, control alarms, review Web-based diagnostic images, and access patient records. The number of monitored vital signs can be increased or decreased based on the patient’s needs.

Recalling Firm: Draeger Medical, Inc. 3135 Quarry Rd., Telford,
Pennsylvania 18969-1042

Manufacturer: Draeger Medical GmbH, Moislinger Allee 53-55, 23558 Lübeck, Germany

Reason for Recall: The weight-based drug dosage calculation may indicate incorrect recommended values, including a drug dosage up to ten times the indicated dosage. Additionally, there may be a 5-10 second delay between the electrocardiogram and blood pressure curves (waveforms) at the Infinity Central Station.

This product may cause serious adverse health consequences, including death.


Public Contact: Draeger Medical, Inc., 3135 Quarry Road, Telford, Pennsylvania 18969-1042; 215-660-2349

FDA District: Philadelphia

FDA Comments: On October 17, 2011, the company sent the Rush University Medical Center a letter stating that users should enter the patient’s weight by way of the admin/demographics screen to ensure the drug dosage is calculated as intended.

Additionally, the company’s letter states that users should follow the instructions for Use of the Infinity Acute Care System Monitoring Solution. The Instructions for Use includes, "For primary monitoring and diagnosis of bedside patients, use the bedside monitor. Use the Infinity Central Station only for remote assessment of a patient's status." [That is, clinicians should work around the device's defects, which would seem to hold the computer's rights over the patients' rights -- rather than taking the device out of service immediately and having the vendor fix it - ed.]

Class I recalls are the most serious type of recall and involve situations in which there is a reasonable probability that use of these products will cause serious adverse health consequences or death.

Health care professionals and consumers may report adverse reactions or quality problems they experienced using these products to MedWatch: The FDA Safety Information and Adverse Event Reporting Program either online, by regular mail or by FAX.

I find a software company advising clinicians to make sure to "work around" blatant IT defects in "acute care environments" the height of arrogance and contempt for patient safety. Yes, acute care environments are not unpredictable, chaotic environments often moving a mile a minute. They are precisely the environment where everyone can sit around on their butts and leisurely hold, over pizzas and cokes, a committee meeting where each and every move can be discussed, just like in a software development shop ...
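As to how a weight-based dosage calculation could recommend ten times the intended dose in the first place: the notice says the calculation is correct when the weight is entered via the admin/demographics screen, which implies another entry pathway fed it bad data - classically a unit or scale mismatch. Here is a minimal, purely hypothetical sketch in Python (Draeger's actual defect has not been made public) of that failure class and the cheap defensive check that blunts it:

    # Hypothetical illustration only -- not Draeger's actual code.
    DOSE_MG_PER_KG = 0.1   # an example weight-based order: 0.1 mg/kg

    def dose_mg(weight: float, unit: str = "kg") -> float:
        # Normalize units first, then refuse implausible inputs rather
        # than silently computing a recommendation from bad data.
        weight_kg = weight / 2.2046 if unit == "lb" else weight
        if not 0.5 <= weight_kg <= 500:
            raise ValueError(f"implausible patient weight: {weight_kg:.1f} kg")
        return DOSE_MG_PER_KG * weight_kg

    print(dose_mg(70, "kg"))      # intended: a 70 kg patient -> 7.0 mg
    # Failure class: a second entry pathway stores the weight at a different
    # scale (a shifted decimal, an unexpected unit) and the multiplication
    # proceeds unchecked:
    print(DOSE_MG_PER_KG * 700)   # 70.0 mg -- ten times the intended dose

The point is not that Draeger's code looked like this; it is that mismatches between entry pathways are exactly what range checks and per-pathway integration tests catch cheaply - and what a letter telling clinicians to "work around" the defect does not.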

I also find the statement that this medical device was "distributed only to the Rush University Medical Center" remarkable. If true, it raises a number of issues that make me very uncomfortable:

  • How did the company's letter to Rush make its way to FDA? A whistleblower?
  • What testing was done by the manufacturer of this medical device before release to a live-patient environment?
  • Who approved this software "going live?" What due diligence was performed?
  • Was this a software beta test of experimental software on live subjects?
  • Did Rush University Medical Center have some sort of quid pro quo (e.g., financial arrangement) with the software company?
  • Did Rush seek IRB approval of this device?
  • Were patients presented with an informed consent process regarding its use in their care?
  • Were any patients actually injured or did any die as a result of this software?

The answers to these questions need to be sought by FDA.

-- SS

Thursday, December 8, 2011

A Logical Fallacy Affecting Selection of Panelists on an FDA Advisory Committee

An old argument used to defend against criticisms of conflicts of interest was just employed in a disturbing context. 

Expert Removed from FDA Advisory Committee for Having an Opinion

As first reported on the PharmaLot blog, and later by the Newark Star-Ledger, a panelist was just disqualified from voting on a US Food and Drug Administration (FDA) panel for having previously expressed an opinion about the safety of the drug up for re-evaluation.  Per the Star-Ledger,
Federal drug regulators have notified Sidney Wolfe, one of the nation's leading advocates for drug safety, that he would not be permitted to join a committee of experts asked to review new dangers associated with a group of birth control pills, including Bayer Healthcare's top-selling Yaz.

The Food and Drug Administration scheduled a meeting Thursday of two advisory committees — one on drug safety and risk management and the other on reproductive health drugs — after new information emerged on the safety of oral contraceptives containing the synthetic hormone ...

Why Was Dr Wolfe Removed from the Committee?
The agency recently learned that Public Citizen, a non-profit consumer advocacy organization, had placed one of the contraceptives, Bayer’s Yasmine — a predecessor to Yaz — on its list of 'Do Not Use Pills' in 2002.

'He did not volunteer this information,' said agency spokeswoman Erica Jefferson. 'It was brought to our attention.'

The FDA offered Wolfe two options: He could present information to the advisory committee like other members of the public or he could sit on the committee, participate in the discussion but refrain from voting.

Logical Fallacy: False Dilemma

We frequently post about conflicts of interest affecting health care decision-makers.  It is now clear (e.g., look here) that leading health care academics often have significant financial relationships with drug and device companies and other health care corporations which could potentially influence their clinical research, clinical teaching, health policy recommendations, or direct patient care.  These relationships are frequently defended, often with logical fallacies used by those who themselves have conflicts. 

One common argument is based on the assertion that conflicts of interest are ubiquitous and everyone is conflicted.  Therefore, if one were to ban people with conflicts from responsible positions, there would be no one left to fill these positions, so such a ban would be untenable.  This seems to be an example of the false dilemma.  It is often employed by people who themselves have conflicts of interest.

One way to make it appear that everyone has conflicts of interest is to broaden the concept of conflicts of interest to "intellectual conflicts of interest."  Doing this facilitates the assertion that everyone who has an opinion on a subject has a conflict of interest, so this argument implies that all sentient beings have important conflicts.  This argument would make equivalent a doctor who would not use a particular drug because his or her reading of the clinical research literature about this drug suggests its benefits do not outweigh its harms, and a doctor who advocates using the drug, and is paid $100,000 a year by the marketing division of the company that makes this drug as a marketing consultant. 

The decision to prevent Dr Wolfe from voting on this committee seems to be based on this logical fallacy. As Dr Wolfe said,
In his statement, Wolfe said if intellectual conflict of interest means being informed and subsequently having opinions on a drug, then 'many more members of advisory committees would have to be excluded.'

'For members of a scientific and technical advisory committee, possessing information and expert views on matters within the purview of the committee is not a conflict of interest,' Wolfe wrote. 'To the contrary, qualified experts are likely to have developed views on a variety of subjects based on their professional experience.'

As Larry Husten wrote on the CardioBrief blog,
do we really want to choose advisory committee panelists who have never expressed opinions about the topics they are reviewing? Are we reaching the point where potential FDA panelists will be required, like Supreme Court nominees, to have avoided any discussion of all important issues at every point in the past?

Thus they point out the absurdity of banning people with "intellectual conflicts of interests," that is, with relevant opinions, as if they had real conflicts of interest. (But wait for someone to argue that if Wolfe were allowed to serve, it would be unfair to ban anyone with financial conflicts of interest from serving.)

What is most distressing about this case is that the sort of fallacious arguments usually employed by the conflicted to defend conflicts of interest are now being employed by leaders of government agencies, who are supposed to not have their own conflicts, and to serve the people, and in this case, to be dedicated to improving the health and safety of the population and of individual patients.

Every fallacious argument made in support of financial conflicts of interest affecting health care decision makers suggests we need to do more to combat such conflicts.  At an absolute minimum, all such conflicts should be fully disclosed in detail in any context in which they could possibly influence medical research, medical education, clinical care, or health policy.  Furthermore, we need to work towards ending as many such conflicts as possible.  A good starting point would be the recommendations made by the Institute of Medicine committee reports on conflicts of interest and on clinical practice guidelines.

See also comments by Merrill Goozner on the GoozNews blog.

ADDENDUM (9 December, 2011) - In response to comments below, see two posts by Dr Howard Brody in the Hooked: Ethics, Medicine, and Pharma blog on the problems with the concept of "intellectual conflict of interest" - here and here.

ADDENDUM (9 December, 2011) -  See further discussion posted by Dr Brody today here.

Thursday, December 30, 2010

Former NIH Director Spins Through Revolving Door, Ends Up at Sanofi-Aventis

A bit of news that got little attention this month was a new job for the former head of the US National Institutes of Health (NIH).  Dr Elias Zerhouni had left the NIH in October, 2008.  Here is the Reuters version of the story of his new career:
French drugmaker Sanofi-Aventis (SASY.PA) replaced its head of research and development with a leading academic and former top U.S. health official on Tuesday to raise its game in medical innovations.

The company said Elias Zerhouni would lead R&D of drugs and bring R&D for vaccines under his control too as Sanofi reshapes its portfolio and looks to vaccines as one area for growth to offset sales losses from mounting generic competition.

The appointment of Zerhouni, a professor of radiology and biomedical engineering, comes as Sanofi battles to buy U.S. rare disease specialist Genzyme.

Chief executive Chris Viehbacher brought in Zerhouni in February 2009 as his scientific adviser, shortly after taking charge of the group which he has been transforming to include the development of drugs based on biotechnology.

Zerhouni's Embrace of Corporate Health Care

Although Zerhouni ostensibly left the NIH to return to academia at Johns Hopkins University, note that by February, 2009, four months after his resignation was announced, Zerhouni was already advising the Sanofi CEO. 

Soon after, he joined the corporate health care world in earnest.  In April, 2009, he was proposed for membership on the board of directors of Actelion Ltd, a Swiss biotechnology company.  On December 8, 2009, he was elected to the board of Danaher Corp, a diversified technology corporation which makes medical devices.  At some point he also became President of the Zerhouni Group, which advertised itself as a resource to "pharmaceutical and biotechnology companies, trade organizations, sovereign wealth funds, government agencies, and research entities around the globe."

Zerhouni at the NIH: His Response to the Conflict of Interest Scandal

There is more than a little irony inspired by Zerhouni's quick circuit through the revolving door.

Zerhouni became director of the NIH in 2002, and announced his departure in October, 2008. In December, 2003, David Willman published his landmark article in the Los Angeles Times on severe conflicts of interest affecting NIH scientists and leaders.  It revealed that formerly stringent conflict of interest policies at the Institutes had been rescinded by then director Dr Harold Varmus in 1995, during the Clinton administration, and that since 1998 disclosure of NIH personnel's conflicts of interest had been progressively reduced.  Thus, in 2002, Zerhouni took charge of an agency already deeply affected by conflicts of interest among many of its leaders, even though that was not yet public.  He initially did nothing about the situation. 

Willman published another series of articles revealing even more breathtaking conflicts of interest in December, 2004.  (See our post here.)   By then, a Los Angeles Times editorial said there was the "appearance of corruption" at the NIH, and called for Dr Zerhouni's resignation. 

Only after the second series of articles did Dr Zerhouni swing into action (see post here).  In February, 2005, he announced that he would now hold the NIH to a "higher standard."  Yet new conflict of interest stories kept surfacing and their handling kept provoking concern (e.g., see this post from 2007, and this post from 2008), and concerns about how NIH deals with conflicts of interest affecting the extramural researchers it funds persist to this day (e.g., see this post). 

By the late 1990s, the NIH, like many other government agencies, seemed to have become extremely cozy with the world of big corporations.  Dr Zerhouni did nothing obvious to reduce the local version of this coziness until it had become a public scandal.  His inaction has allowed questions about the relationships between the NIH, once a pristine example of a government-run biomedical research agency, and big health care business to persist to this day. 

So it should perhaps be no surprise that he so quickly transitioned from the government that is supposed to be "of the people, by the people, for the people" to top leadership positions in corporate health care.

Other US Government Health Care Agency Leaders Transit the Revolving Door

Meanwhile, the previous commissioner of the US Food and Drug Administration, Dr Andrew von Eschenbach, is Senior Director for Strategic Initiatives at the Center for Health Transformation, a group whose membership includes some of the biggest health care organizations, many of which have had their own moments in the sun on Health Care Renewal.  For example, see Charter Members, AstraZeneca, Sutter Health, and Wellpoint; and Platinum Members, GlaxoSmithKline and Merck.  Dr von Eschenbach is also on the board of directors of Histosonics Inc. 

Also, the previous director of the Centers for Disease Control, Dr Julie Gerberding, became President of Merck Vaccines in late 2009. 

Conclusions

So the revolving door just keeps spinning, its revolutions suggesting how closely tied together big government and big corporations have become in what is now the health care business.  Whatever the motivations of Doctors Zerhouni, von Eschenbach, and Gerberding were, the message to every person in a leadership position in health care in the US government must still be: you too can earn big corporate compensation soon after you leave here.  Who knows how much that siren song will lead current government leaders to avoid antagonizing the leaders of big health care corporations during their government "service."  That is, of course, not what we want them to be thinking about if government agencies are to serve the people, not the CEOs of big corporations. 

I am sure that the career transitions of Doctors Zerhouni, von Eschenbach, and Gerberding were perfectly legal.  If we want government health care agencies to put the people's interests ahead of those of the CEOs of big health care corporations, however, should not the law be changed to at least slow down the revolving door?

Thursday, December 2, 2010

"Unreasonably Dangerous" Heparin

It is time for an update on the case of the deadly contaminated heparin sold by Baxter International, which has received much less attention than seems warranted given its human cost (81 lives).  How the heparin was contaminated, and how the contaminated heparin ended up being sold as a US Food and Drug Administration approved American product, are still unknown.  Despite the fact that the outcomes of this case were so bad, it received disproportionately little attention when it was first made public, and now seems to have become nearly anechoic.

Case Summary 

Baxter International imported the "active pharmaceutical ingredient" (API) of heparin, that is, in plainer language, the drug itself, from China. That API was then sold, with some minor processing, as a Baxter International product with a Baxter International label. The drug came from a sketchy supply chain that Baxter did not directly supervise, apparently originating in small "workshops" operating under primitive and unsanitary conditions without any meaningful inspection or supervision by the company, the Chinese government, or the FDA. The heparin proved to have been adulterated with over-sulfated chondroitin sulfate (OSCS), and many patients who received it became seriously ill or died. While there have been investigations of how the adulteration adversely affected patients, to date there have been no publicly reported investigations of how the OSCS got into the heparin, or of who should have been responsible for overseeing the purity and safety of the product. Despite the fact that patients clearly died from receiving this adulterated drug, no individual has yet suffered any negative consequence for what amounted to the poisoning of patients with a brand-name but adulterated pharmaceutical product.  (For a more detailed summary of the case, look here, and for all our posts on this topic, look here.)

Civil Cases Plod Forward
If there is any ongoing official investigation of this case, it has not been made public.  Civil cases filed by patients allegedly injured by the heparin, or by relatives of patients who died allegedly from the heparin, seem to be proceeding at a glacial pace.  However, there is one development in one set of civil cases worthy of note.  As reported two weeks ago by Alicia Mundy in the Wall Street Journal:
A state court in Illinois has granted a partial summary judgment to two plaintiffs suing Baxter International Inc. over contaminated blood thinner, saying that some of the company's heparin was 'unreasonably dangerous.'

The suit involves tainted imported heparin ingredients from China that caused a public health crisis in 2008, and were linked to more than 80 deaths in the U.S. and many other serious allergic reactions.

Some 300 cases nationwide against Baxter and its main ingredient supplier, Wisconsin-based Scientific Protein Laboratories LLC, were consolidated in Chicago in Cook County Circuit Court.

Both companies have said that they weren't negligent and weren't responsible for the deadly reactions among patients.

The Illinois judge's ruling, dated Wednesday, involved a motion for partial summary judgment that named only Baxter. The motion was filed on behalf of two plaintiffs in the consolidated cases, one of whom died.

The ruling cites statements by Baxter's corporate quality vice president and the president of the company's medication delivery division that the heparin was defective.

Baxter argued in its defense that a jury should address the question of whether a product is 'unreasonably dangerous.' The company noted that the two Baxter executives who agreed in depositions that the heparin was defective aren't doctors or scientists. However, Judge Jennifer Duncan-Brice wrote that the issue before her wasn't whether heparin actually caused the death or injury to the plaintiffs, but just whether the product was, as a matter of fact, contaminated.
The most basic responsibility of a pharmaceutical company is to produce pure, unadulterated product.  As the current commissioner of the US Food and Drug Administration wrote in this week's New England Journal of Medicine, the agency's "modern regulatory functions began with the passage of the 1906 Pure Food and Drugs Act, a law, more than a quarter of a century in the making, that prohibited interstate commerce in adulterated and misbranded food and drugs."  However, in the 21st century, drug companies are increasingly failing to produce unadulterated products, and the FDA is having increasing difficulty assuring patients that the drugs they take meet even the most basic safety standards. 

I submit that corporate cultures increasingly influenced by the arrogant, greedy, amoral leadership of the financial services industry that led us to the brink of another depression are also leading us to the brink of a poisonous era in health care.  Corporate leaders intent on cutting costs, and on paying themselves as much of the resulting proceeds as possible, may see quality and safety as just another cost-cutting target.  Corporate leaders brought up in the culture of finance, but untrained and inexperienced in engineering, science, and medicine, find it all too easy to ignore quality and safety and focus on the bottom line.  (It is ironic that in the quote above, Baxter International's attorneys made light of the judgments of the company's own executives because they are not physicians or scientists.)

Meanwhile, society seems to have been so mesmerized by the mantra that laissez-faire capitalism will lead to miraculous "innovation" that we do not even attend to instances in which it led instead to death.

As we have said until we are blue in the face, as long as the leaders of health care organizations are not held accountable for the results of their decisions on health care quality, cost, and access (even in such extreme quality violations as those resulting in multiple patient deaths), we can expect continuing decisions that sacrifice quality, increase costs, and worsen access, but that are in the self-interest of the people making them.


To really reform health care, we must hold health care organizations and their leaders accountable (and not blame all the problems on doctors, other health care professionals, patients, and society at large).

Monday, November 22, 2010

EHRevent.org CEO Edward Fotsch MD: The Real Challenge with EHRs is -- User Error?

Additional detailed answers to the questions I raised here and here about a new site, EHRevent.org, for reporting healthcare IT-related medical errors can now be found at a HIStalk interview entitled "HIStalk Interviews Edward Fotsch MD, CEO, PDR Network (EHR Event)" at this link.

It is an interesting interview. I certainly find the recognition of the need for an EHR/clinical IT problem reporting service a major cultural advancement in healthcare.

It's still unclear to me how -- and why -- this organization originated with little to no public knowledge and involvement, especially considering the organization types mentioned below that participated, and how it will function in interactions with myriad healthcare IT stakeholders.

Here's an explanation by Dr. Fotsch:

... We work with a not-for-profit board called the iHealth Alliance. The Alliance is made up of medical society executives, professional liability carriers, and liaison representatives from the FDA. They govern some of the networks that we run, and in exchange for that, help us recruit physicians. Professional liability carriers, for example, promote our services that send drug alerts to doctors because that’s good and protective from a liability standpoint.

In the course of our conversations with them roughly a year ago, when we were talking about adding some drug safety information into electronic health records, we came across the fact that there were concerns from the liability carriers that there was no central place for reporting adverse EHR events or near misses or potential problems or issues with electronic health records.

[Translation: the carriers saw their losses potentially increasing as a result of EHR-related litigation, and decided to do something proactive - ed.]

They were interested in creating a single place where they could promote to their insured physicians that they could report adverse EHR events. Then it turned out that medical societies had similar concerns.

[That must have been one of the best-kept secrets on Earth, considering the promotion EHRs have received as a miracle-working technology, and the lack of expression of concerns from those societies - ed.]

Rather than have each of them create a system, the Alliance took on a role of orchestrating all of the interests, including some interest from the FDA and ONC in creating an electronic health record problem reporting system. That’s how it came into play.

Our role in it, in addition to having a seat on the iHealth Alliance board, was really in network operations — in running the servers, if you will, which didn’t seem like a very complicated task. Since business partners we rely on for our core business were interested in it, it was easy to say yes. It frankly turned out to be somewhat more complicated than we originally thought [I predict they haven't seen anything yet; wait until they get knee deep into real world EHR issues - ed.], but now it’s up and available.


While I find the recognition of the need for an EHR/clinical IT reporting service a major advancement, I am nonetheless troubled by certain statements made by Dr. Fotsch. They seem at odds with the theoretical and empirical findings of medical informatics, social informatics, human-computer interaction, and other fields relevant to health IT evaluation, and/or seem to demonstrate biases about HIT. My comments are in red italics:

Fotsch:

… Probably what we’re seeing more often than not, the real challenge with EHRs like any technology, turns out to be some form of user error.

[What about contributory or causative designer error? – ed.]

“I didn’t know it would do that"

[Why did the user not know? Lack of training, poor manuals, or overly complex information systems lacking informative messages and consistency of control-action relationships, as an example? -ed]

... or “I didn’t know that it pre-populated that"

[Why did it pre-populate? Was that inappropriate for the clinical context, such as in this example? - ed.]

... or “I didn’t know I shouldn’t cut and paste"

[Then why did the software designers enable cut and paste, without some informative message on overuse, such as length of text cut and pasted?– ed.]

... or “I wasn’t paying attention to this"

[Perhaps due to distractions from mission hostile user interfaces? -ed]

... or maybe the user interface was a little confusing

[What is "a little confusing"? (Is that like "a little pregnant"?) And why was it confusing? User intellectual inadequacy, or software design issues leading to cognitive overload? - ed.]

Actual software errors appear to be the exception rather than the rule as it relates to EHR events.

["Actual software errors" are defined as, what, exactly--? Loss of database relational integrity as a result of a programming error, as apparently recently happened at Trinity Health, a large Catholic hospital chain as reported in HIStalk? Memory leaks from poor code? Buffer overflows? What?]

That’s at least as I understand it.

[Understand it from whom? Hopefully not from me or my extensive website on the issues - ed.]


In summary, a "blame the user" attitude seems apparent. There appears to be little acknowledgment of the concept of IT "errorgenicity" - the capacity of a badly designed or poorly implemented information system to facilitate error, and of the systemic nature of errors in complex organizations to which ill-done IT can contribute.
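To make "errorgenicity" concrete, here is a minimal, hypothetical sketch in Python (my own illustration, not drawn from any real EHR product) of how a silently pre-populated field, one of the very "user errors" cited above, can be manufactured by design, and how a small design change shifts the system from inviting error to defending against it:

# A hypothetical, illustrative sketch only -- not code from any real EHR.
# It shows how a design choice (silently pre-populating a field) can
# produce what later gets logged as "user error."

def build_order_form(last_encounter):
    """Errorgenic design: stale values are carried forward with no cue
    that they are old data rather than a fresh entry."""
    form = {"drug": None, "dose_mg": None}
    if last_encounter:
        form["drug"] = last_encounter["drug"]        # silently copied
        form["dose_mg"] = last_encounter["dose_mg"]  # silently copied
    return form

def build_safer_order_form(last_encounter):
    """Safer design: pre-populated values are flagged so the user
    interface can highlight them and require explicit review before
    the order can be signed."""
    form = {"drug": None, "dose_mg": None, "needs_review": False}
    if last_encounter:
        form["drug"] = last_encounter["drug"]
        form["dose_mg"] = last_encounter["dose_mg"]
        form["needs_review"] = True  # UI must block signing until reviewed
    return form

# Example: a dose that was appropriate months ago may be wrong today,
# yet the first design presents it as if it were freshly entered.
prior = {"drug": "warfarin", "dose_mg": 10}
print(build_order_form(prior))        # {'drug': 'warfarin', 'dose_mg': 10}
print(build_safer_order_form(prior))  # same values, but flagged for review

The point is not the particular code; it is that the identical clinician behavior produces a harmful "user error" under the first design and a caught near-miss under the second.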

These are concepts understood long ago in mission critical settings, as in this mid-1980s piece from the Air Force cited in my previously linked eight-part series on mission hostile health IT:


From "GUIDELINES FOR DESIGNING USER INTERFACE SOFTWARE"
ESD-TR-86-278
August 1986
Sidney L. Smith and Jane N. Mosier
The MITRE Corporation
Prepared for Deputy Commander for Development Plans and Support Systems, Electronic Systems Division, AFSC, United States Air Force, Hanscom Air Force Base, Massachusetts.

... SIGNIFICANCE OF THE USER INTERFACE

The design of user interface software is not only expensive and time-consuming, but it is also critical for effective system performance. To be sure, users can sometimes compensate for poor design with extra effort. Probably no single user interface design flaw, in itself, will cause system failure. But there is a limit to how well users can adapt to a poorly designed interface. As one deficiency is added to another, the cumulative negative effects may eventually result in system failure, poor performance, and/or user complaints.

Outright system failure can be seen in systems that are underused, where use is optional, or are abandoned entirely. There may be retention of (or reversion to) manual data handling procedures, with little use of automated capabilities. When a system fails in this way, the result is disrupted operation, wasted time, effort and money, and failure to achieve the potential benefits of automated information handling.

In a constrained environment, such as that of many military and commercial information systems, users may have little choice but to make do with whatever interface design is provided. There the symptoms of poor user interface design may appear in degraded performance. Frequent and/or serious errors in data handling may result from confusing user interface design [in medicine, this often translates to reduced safety and reduced care quality - ed.] Tedious user procedures may slow data processing, resulting in longer queues at the checkout counter, the teller's window, the visa office, the truck dock, [the hospital floor or doctor's office - ed.] or any other workplace where the potential benefits of computer support are outweighed by an unintended increase in human effort.

In situations where degradation in system performance is not so easily measured, symptoms of poor user interface design may appear as user complaints. The system may be described as hard to learn, or clumsy, tiring and slow to use [often heard in medicine, but too often blamed on "physician resistance" - ed.] The users' view of a system is conditioned chiefly by experience with its interface. If the user interface is unsatisfactory, the users' view of the system will be negative regardless of any niceties of internal computer processing.


I am not entirely happy when the CEO of an organization taking on the responsibility of being a central focus for EHR error reporting makes statements that are consistent with unfamiliarity with important HIT-relevant domains, as well as with possible pro-IT, anti-user biases.

For that reason, as well as the other questions raised in my prior posts (such as the onerous legal contract and the apparent lack of ability of the public to easily view the actual report texts themselves), I cannot recommend use of their site for EHR problems reporting.

I recommend the continued use of the FDA facilities until such time as a compelling argument exists to do otherwise.

-- SS

Addendum 11/28/10:

This passage ends the main essay at my site "Contemporary Issues in Medical Informatics: Common Examples of Healthcare Information Technology Difficulties" and is quite relevant here:

... An article worth reviewing is "Human error: models and management", James Reason (a fitting name!), BMJ 2000;320:768-770 (18 March), http://www.bmj.com/cgi/content/full/320/7237/768:

Summary points:

  • Two approaches to the problem of human fallibility exist: the person and the system approaches
  • The person approach focuses on the errors of individuals, blaming them for forgetfulness, inattention, or moral weakness
  • The system approach concentrates on the conditions under which individuals work and tries to build defenses to avert errors or mitigate their effects
  • High reliability organizations---which have less than their fair share of accidents---recognize that human variability is a force to harness in averting errors, but they work hard to focus that variability and are constantly preoccupied with the possibility of failure.

-- SS