Showing posts with label Healthcare IT experiment.

Sunday, December 23, 2012

ONC's Christmas Confessional on Health IT Safety: "HIT Patient Safety Action & Surveillance Plan for Public Comment"

This time of year is certainly appropriate for a confessional on the health IT industry and hyperenthusiasts' sins.

In the first report I've seen that seems genuinely imbued with a basic recognition of the social responsibility incurred by conducting the grand human subjects experiment known as national health IT, ONC has issued a Dec. 21, 2012 report, "Health Information Technology Patient Safety Action & Surveillance Plan for Public Comment." It is available at this link in PDF.

The report makes statements that have appeared repeatedly since 2004 at this blog, and at my health IT difficulties site that went online years before this blog (1998, to be exact); it is possible that, through my early writing and that of like-minded colleagues, we were the origin of most of these memes.  We wrote them at the cost of bringing much scorn upon ourselves. After all, "how could health IT possibly not be a panacea?" was the "you are an apostate" attitude I certainly experienced (e.g., as in my Sept. 2012 post "The Dangers of Critical Thinking in A Politicized, Irrational Culture").

Observations echoed in the new ONC report:

  • "Just as health IT can create new opportunities to improve patient care and safety, it can also create new potentials for harm."
  • Health IT will only fulfill its enormous potential to improve patient safety if the risks associated with its use are identified, if there is a coordinated effort to mitigate those risks, and if it is used to make care safer.
  • Because health IT is so tightly integrated into care delivery today, it is difficult to interpret this initial research [such as the PA Patient Safety Authority study - ed.], which would seem to suggest that health IT is a modest cause of medical errors. However, it is difficult to say whether a medical error is health IT-related. [Not emphasized, as I wrote here, is the issue of risk when, say, tens of thousands of prescriptions are erroneous due to one software bug, a feat impossible with paper; see the sketch after this list - ed.]
  • The proper steps to improve the safety of health IT can only be taken if there is better information regarding health IT’s risks, harms, and impact on patient safety.
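
To make the scaling point in the bracketed note above concrete, here is a minimal, purely illustrative Python sketch (the function, field names, and numbers are all invented, not drawn from any real EHR): a single wrong constant in shared prescribing code silently corrupts every order routed through it, whereas handwritten errors occur one prescription at a time.

    # Illustrative only: how one defect in shared code scales to every order.
    # All names and values here are hypothetical.

    def mg_to_mcg(dose_mg):
        # BUG: should multiply by 1000; one wrong constant silently
        # corrupts every prescription routed through this function.
        return dose_mg * 100

    prescriptions = [{"patient": i, "dose_mg": 0.125} for i in range(30_000)]

    erroneous = sum(
        1 for rx in prescriptions
        if mg_to_mcg(rx["dose_mg"]) != rx["dose_mg"] * 1000
    )
    print(f"{erroneous} of {len(prescriptions)} orders carry the same error")
    # -> 30000 of 30000: one bug, tens of thousands of wrong doses at once.
    # With paper, each prescriber's error is independent; no single failure
    # mode touches every order simultaneously.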

Suggested steps to be taken include:

  • Make it easier for clinicians to report patient safety events and risks using EHR technology.
  • Engage health IT developers to embrace their shared responsibility for patient safety and promote reporting of patient safety events and risks. [I am frankly amazed to see this admission.  In the past, that sector excused itself entirely on the basis of the "learned intermediary" doctrine and "hold harmless" clauses, under which the clinician is treated as an all-knowing deity standing between computer and patient.  I've been writing for years, however, that the computer is now the intermediary between clinician and patient, since all care 'transactions' must traverse what is now an enterprise clinical resource and clinician control system - ed.]
  • Provide support to Patient Safety Organizations (PSOs) to identify, aggregate, and analyze health IT safety event and hazard reports.
  • Incorporate health IT safety in post-market surveillance of certified EHR technology.
  • Align CMS health and safety standards with the safety of health IT, and train surveyors.
  • Collect data on health IT safety events through the Quality & Safety Review System (QSRS).
  • Monitor health IT adverse event reports to the Manufacturer and User Facility Device Experience (MAUDE) database. [I've been promoting the use of MAUDE for just that purpose, and much more regarding documenting and reporting on mission-hostile health IT; see this post - ed.]

These steps are to be taken in order to "Inspire Confidence and Trust in Health IT and Health Information Exchange."

The title of my keynote address to the Health Informatics Society of Australia this summer was, in fact, "Critical Thinking on Building Trusted, Transformative Medical Information:  Improving Health IT as the First Step".

My thoughts on this report:

  • It is at least two decades overdue.
  • It was produced largely, if not solely, due to pressure from the "HIT apostates," who through great perseverance finally overcame industry memes and industry control of information flows.
  • It is indeed a confessional of the sins committed by the health IT industry over those decades.  Creating, implementing and maintaining mission-critical software in a safety-cognizant way is not, and was not, a mystery.  It's been done in numerous industries for decades.
  • It is still a bit weak in acknowledging the likely magnitude of under-reporting of medical errors, including those that are HIT-related, in the available data, and the issue of risk vs. 'confirmed body counts,' as I wrote in my recent post "A Significant Additional Observation on the PA Patient Safety Authority Report -- Risk".
  • It is unfortunate that this report did not come from the informatics academic community in the United States, i.e., the American Medical Informatics Association (AMIA).  AMIA's academics have done well in advancing the theoretical aspects of the technologies, and how to create "good health IT" and not "bad health IT."  However, they have largely abrogated their social responsibilities and obligations, including but not limited to those of physicians, in ensuring the theories were followed in practice by an industry all too eager to ignore academic research (following which costs money and resources and reduces margins).
(On the latter point, just last week the American College of Medical Informatics [ACMI] refused to permit me to speak at its early-2013 annual retreat, despite support from some of its members.)

And this:

  • If the industry and the academics had been doing their jobs responsibly, I might be spending this Christmas and New Year's holiday with my mother, rather than visiting her in the cemetery.

All that said, the report is welcome.

Finally, it is hoped - and expected - that public comments will indeed be "public", and that any irregularities in them (such as appeared during the public comment period for MU2 due to industry ghostwriting, as described in my Aug. 2012 post "Health IT Vendor EPIC Caught Red-Handed: Ghostwriting And Using Customers as Stealth Lobbyists - Did ONC Ignore This?" and Sept. 2012 post "Was EPIC successful in watering down the Meaningful Use Stage 2 Final Rule?") will be reported and acted upon aggressively.

And finally, from the Healthcare Renewal blog, Merry Christmas.

-- SS

Thursday, October 18, 2012

HITECH and Experimental Airplanes

This from a commenter, who has been deeply involved in major governmental health IT initiatives in another land, who wishes to remain anonymous:

The whole HITECH initiative really is becoming the equivalent of loading up a brand new airplane with paying travelers before debugging the software or even putting a model in the wind tunnel, and doing so without FAA approval.

If anyone attempted that in aviation, no one - and I mean NO ONE - would board the plane, including the crew and captain. So why is it OK in healthcare?  Is it just because the avoidable disasters come one body at a time in health vs. 200-400 at once in air travel?

The answer to the last question?

Yes.

-- SS

Friday, October 12, 2012

A Response to the NY Times Article "Ups and Downs of EMRs" So Full Of The Usual Refrains, I Am Using It To Throw A Spotlight On Those Endlessly-Repeated Memes

My Google search alert turned up a response to the Oct. 8, 2012 NY Times article The Ups and Downs of Electronic Medical Records by Milt Freudenheim.

It was posted on the blog of a company, Medical-Billing.com, and is filled with the usual rhetoric and perverse excuse-making.

It is, in fact, so laden with typical industry refrains and excuse-making that I am using it to throw a spotlight on the misconceptions and canards proffered by that industry in defense of its uncontrolled practices:

A Response to the NY Times on Electronic Medical Records
Posted on October 10, 2012 by Kathy McCoy

A recent article by the New York Times entitled “The Ups and Downs of Electronic Medical Records” has generated a lot of discussion among the HIT community and among healthcare professionals.

It’s an excellent article, looking at concerns that a number of healthcare professionals have about the efficiency, accuracy and reliability of EMRs. One source quoted, Mark V. Pauly, professor of health care management at the Wharton School, said the health I.T. industry was moving in the right direction but that it had a long way to go before it would save real money.

“Like so many other things in health care,” Dr. Pauly said, “the amount of accomplishment is well short of the amount of cheerleading.”

Seriously? I can’t believe we’re still having this conversation.  [Emphasis in the original - ed.] 

I can believe it -- and quite seriously -- as it's a "conversation" long suppressed by the health IT industry and its pundits.

Seriously, I can't believe the comment that "it's an excellent article"; that comment appears merely to be a setup for the interjection of attacks upon the substance of the selfsame "excellent" article.

In a world where I can go to Lowe’s and they can tell me what color paint I bought a year ago, or I can call Papa John’s and they know what my usual pizza order is, how can we expect less from our healthcare systems?

Because healthcare is not at all like buying paint or ordering a pizza, being several orders of magnitude more demanding and complex, and on many different planes (e.g., educational, organizational, social and ethical, to name a few).  Only the most avid IT hyper-enthusiast (or those prone to ignoratio elenchi) would make such a risible comparison.

I recently joined a new healthcare system, and I have been impressed and pleased by their use of EMR and technology. I no longer have to worry about whether I told the new specialist everything he or she needed to know about my health history; it’s in my record. I no longer have to remember when I had my last tetanus shot; it’s in my record.

My care is coordinated between doctors, labs, etc., better than it ever has been before. In the past, I felt as though my healthcare was a giant patchwork quilt—and some of the stitches were coming loose, frankly. This new system with a widely used EMR, to me, is a huge improvement.

The problem with this argument is that n=1, and the going has not yet gotten tough, as it has for people injured or killed as a result of the experimental state of current health IT.

Granted, the problems cited in the article are real and need to be addressed. 

Another dubious statement to be followed with excuses ... here it is:

However, the article itself mentions some redundancies that are in place to insure that a system going down doesn’t throw the entire Mayo Clinic into freefall. And certainly, additional redundancies may be needed to insure that prescriptions aren’t incorrectly sent to a pharmacy for the wrong patient, etc.

Those "redundancies" are not complete, do not cover all aspects of enterprise health IT when it is down, and necessarily compromise patient care when they have to be called upon.  I, for one, as a physician, would not enjoy being a patient, nor taking care of patients, when the "IT lights" go out.

Do doctors and medical staff need to learn how to code correctly so that they aren’t accused of cloning? Yes—but that’s a relatively easy problem to fix. The problem has already been identified, and training has already begun to address the issue.

Cloning of notes and "coding correctly" are two entirely different issues.  Easy to fix?  The health IT industry has been saying all its problems are easy to fix, i.e., in version 2.0 ... for the past several decades, yet few if any of them have been.

I have been through this type of problem before, as have many of you, with new systems. It’s called a learning curve, and it’s relatively easy to work through with patience and determination. I have encountered situations before where the team I was working with threw up their hands when they ran into problems learning a new database system and said “It doesn’t work.” Yet in time, they learned to love the system—and some of the biggest doubters became the experts on it.

I surmise that since they were forced into using it, the Stockholm Syndrome was likely at work.  However, speculation aside, the seemingly banal statement that "it’s called a learning curve" is an ethical abomination.  The subjects of these systems are human beings, not lab rats.

Further, health IT is not a "database system."  It is an enterprise clinical resource and clinician workflow control and regulation device.  This statement illustrates the dangers of having personnel of a technical focus in any kind of authority role in health IT.  Their education and worldview are far too narrow.


Healthcare professionals overcome more difficult challenges than this every day; they bring people back from the dead, for Pete’s sake! I have no doubt that they will adapt and learn to utilize EMRs so that they improve healthcare and take patient care to levels currently unimaginable.

Wrong solution, completely ignoring (or perhaps I should say willfully ignorant of) the fact that there's good health IT and bad health IT (GHIT/BHIT).  The IT industry needs to adapt to healthcare professionals, not the other way around, by producing GHIT and banishing BHIT.  This point needs to be frequently repeated, I surmise, due to tremendous disrespect for healthcare professionals by the industry.

And to say, as was quoted in the article: “The technology is being pushed, with no good scientific basis”? Ridiculous, with all due deference to Dr. Scot M. Silverstein, a health I.T. expert at Drexel University who reports on medical records problems on the blog Health Care Renewal and made the statement.

The only thing "ridiculous" is that Ms. McCoy was clearly too lazy to check the very blog she cites, conspicuously linked in the NY Times article itself.  (That assumes she has the education and depth to understand its arguments and copious citations.)

Lack of RCT's, supportive studies weak at best with literature conflicting on value, National Research Council indicating current health IT does not support clinician cognitive processes, known harms but IOM/FDA both admitting the magnitude of EHR-related harms is unknown, usability poor and in need of significant remediation, cost savings in doubt - these are just a few examples of where the science (as medicine knows it) does not in 2012 support hundreds of billions of dollars for a national rollout of experimental health IT.

I wish it were not so, but alas, that is the current reality.

Database management of information has been proven to be an improvement on paper records in just about every industry there is; healthcare will not be an exception.

Ignoring the repeated "database" descriptor, I agree that electronic information systems will eventually improve upon paper.  That's why I began a postdoctoral fellowship in Medical Informatics two decades ago.  However, the technology in its present form interferes with care and is an impediment to the collection and accuracy of that data, and to the well-being of its subjects, e.g.:

  • Next-generation phenotyping of electronic health records, George Hripcsak, David J Albers, J Am Med Inform Assoc, doi:10.1136/amiajnl-2012-001145.  The national adoption of electronic health records (EHR) promises to make an unprecedented amount of data available for clinical research, but the data are complex, inaccurate, and frequently missing, and the record reflects complex processes [economic, social, political etc. that bias the data - ed.] aside from the patient's physiological state.

As I've written before, a good or even average paper system is better for patients than bad health IT, and the latter prevails over good health IT in 2012.

These issues seem chronically to be of little interest to the hyper-enthusiasts as I've written here and here (perhaps the author of the Medical Billing blog post could use her wrist and eyes and navigate there and read).

Is it hard? Yes, it’s hard. To quote the movie A League of Their Own, “If it were easy, everyone would do it.”

It's even harder to do when apologists make excuses shielding a very dysfunctional industry.

Everyone can’t do it. But I have no doubt that healthcare professionals will do it. Remember that part about bringing people back from the dead? This is a lesser miracle.

If qualified healthcare professionals were in charge of the computerization efforts, there would be a smoother path.

However, that is sadly not the case.  It will not happen until enough pressure is brought to bear on the IT industry and its apologists, which I believe will most likely only happen through coercion, not debate.

Finally, the endless stream of excuses and rhetoric that confuses non-healthcare professionals - such as typical patients, who are the subjects of today's premature grand health IT experiment, and our decision-makers in Washington - needs to be relentlessly challenged.  The stakes are the well-being of anyone needing medical care.

-- SS

Note:  my formal reply to the Medical Billing blog post above awaits moderation.  I am reproducing it here:


    Dear Ms. McCoy,

    With all due deference, your own experience with EHR’s is obviously limited.

    Your comments demonstrate an apparent lay level of understanding of medicine and healthcare informatics.

    “Ridiculous?” “Learning curve?” I.e., experimentation on non-consenting human subjects putting them at risk with an unregulated, unvetted medical technology? That is, as kindly as I can put it, a perverse statement.

    Perhaps I am too harsh. You clearly didn’t check the link to the Healthcare Renewal blog conspicuously placed in the NYT article by Milt Freudenheim.

    I suggest you should educate yourself on the science and ethics of medicine and healthcare informatics.

    I am posting the gist of your comments, and my reply, at that blog.

    I do not think most truly informed patients would agree to being guinea pigs, as your comments suggest is simply part of the “learning curve.”

    Scot Silverstein, M.D.

I'll bet the author of the Medical-Billing.com post has never heard a critique like this from today's typical abused-into-submission, learned-helplessness-afflicted physicians.

A bit harsh?  Lives are at stake.

-- SS

Thursday, August 30, 2012

A Tacit Admission That National Health IT is a Gargantuan Experiment

In my post yesterday "The Scientific Justification for Meaningful Use, Stage 2" I wrote:

There's no truly robust evidence of generalizable benefit and no randomized trials; there's significant evidence to the contrary; there's risk to safety that this disruptive technology causes in its present state (the magnitude is unknown; see quotes from the 2012 IOM study here) that MU and "certification" do not address; and there's a plethora of hair-raising defect reports from the only seller that reports such things. Yet CMS justifies the program [starting at p. 18 in the Final Rule for Meaningful Use Stage 2 at this link - ed.] with the line:


"Evidence [on benefits] is limited ... Nonetheless, we believe there are substantial benefits that can be obtained by eligible hospitals and EPs ... There is evidence to support the cost-saving benefits anticipated from wider adoption of EHRs."

I am deeply impressed by the level of rigorous science here.  We are truly in a golden age of science.  [That is obviously satirical - ed.]

The Final Rule for MU Stage 2, via its admissions of limited evidence, is in fact a tacit admission that the whole national health IT enterprise is a huge experiment (involving human subjects, obviously).  It is likely the most forthright admission we will get from this government on the issue.

With neither explicit patient informed consent nor a formal regulatory process to validate safety, but merely based on a "we believe" justification from the government, hospitals and practices are leaving themselves wide open to liability in the event of patient injury or death caused by, or promoted by, this technology.

(Parenthetically, I note that I've already seen a claim in a legal brief that "certification" implies safety and a legal indemnification, and that the federal HITECH act - which, as in this report, merely provides statutory authority for the incentive program - pre-empts common-law, i.e., state, litigation over health IT.  The judge dismissed the claims.)

-- SS

Aug. 30, 2012 addendum:

A commenter pointed out that experiments on minors without consent might constitute an even more egregious action, subject to even more stringent laws (and, I would add, perhaps penalties) than experiments on adults.  I cannot confirm that, but it is an interesting observation.  If you are an attorney, please comment, anonymously or otherwise.

-- SS

Saturday, August 18, 2012

Health IT difficulties and controversial excuses from health IT hyperenthusiasts and extremists

As I observed at my Aug. 15, 2012 post "Contra Costa's $45 million computer health care system endangering lives, nurses say" and other posts, common in case reports of health IT difficulties is the refrain (usually from a seller, healthcare executive or health IT pundit) that:

  • It's a rare event, it's just a 'glitch',  it's teething problems, it's a learning experience, we have to work the 'kinks' out, it's growing pains, etc.

Or perhaps worse:
  • Patient safety was not compromised (stated long before the speaker or writer could possibly know that).

What these statements translate to:  any patient harm that may have resulted is for the "greater good" in perfecting the technology.

Here is the problem with that:

These statements, while seemingly banal, are actually highly controversial and amoral, and reflect what can be called "faith-based informatics beliefs" (i.e., enthusiasm not driven by evidence).

They are amoral because they significantly deviate from accepted medical ethics and patient rights regarding experimentation and research, as set out in the plain language of the Nuremberg Code, the Belmont Report, the World Medical Association Declaration of Helsinki, the Guidelines for Conduct of Research Involving Human Subjects at NIH, and other documents that originated out of medical abuses of the past.

Semantic or legal arguments on the terms "research," "experimentation," etc. are, at best, misdirection away from the substantive issues.  Indeed, for all practical purposes the use of unfinished software (or software with newly-minted modifications) that has not been extensively tested and validated and that is suspected or known to cause harm, without explicit informed consent, is contrary to the spirit of the aforementioned patients' rights documents.

They are excuses from health IT hyper-enthusiasts ("Ddulites"), who in fact have become so hyper-enthusiastic as to ignore the ethical issues and downsides.  The attitude gives more rights to the cybernetic device and its creators than to the patients who are subject to the device's effects.

These excuses are, in effect, from people who it would not be unreasonable to refer to as technophile extremists.

-- SS

Addendum:

The Belmont Report of the mid to late 1970's, long before health IT became at all common, actually starts out with a section discussing "BOUNDARIES BETWEEN PRACTICE AND RESEARCH."  I have updated one of the observations in that section to modern times:

 ... It is important to distinguish between biomedical and behavioral research, on the one hand, and the practice of accepted therapy on the other, in order to know what activities ought to undergo review for the protection of human subjects of research.

... When a clinician [or entire healthcare delivery system - ed.] departs in a significant way from standard or accepted practice, the innovation does not, in and of itself, constitute research. The fact that a procedure is "experimental," in the sense of new, untested or different, does not automatically place it in the category of research.

Radically new procedures of this description [such as use of cybernetic intermediaries to regulate and govern care - ed.] should, however, be made the object of formal research at an early stage in order to determine whether they are safe and effective. Thus, it is the responsibility of medical practice committees, for example, to insist that a major innovation [such as health IT - ed.] be incorporated into a formal research project.

Health IT appears to have been "graduated" from experimental to tried-and-true without the formal safety research called for in the Belmont report.

The Belmont report continues:

Research and practice may be carried on together when research is designed to evaluate the safety and efficacy of a therapy. This need not cause any confusion regarding whether or not the activity requires review; the general rule is that if there is any element of research in an activity, that activity should undergo review for the protection of human subjects.

Instead, what we have for the most part are excuses and special accommodations for health IT, on which the literature is conflicting regarding safety and efficacy, all the way up to the Institute of Medicine.

-- SS

Friday, June 1, 2012

Know-Nothing, or Industry Shill? You Be The Judge.

I have not been writing much the past few weeks due to other concerns, and will probably not write much this summer.

However, I have been commenting on various posts on other blogs.  One resultant thread stands out as yet another example of a likely industry shill or sockpuppet defending the state of health IT, oddly at a blog on pharma (the same blog that was the topic of my post "More 'You're Too Negative, And You Don't Provide The Solution To The Problems You Critique', This Time re: Pharma").

Industry-sponsored sockpuppetry is a perverse form of stealth marketing or lobbying that works by discrediting detractors.

The following exchanges meet the sockpuppetry criteria once pointed out by business professional and HC Renewal reader Steve Lucas in 2010 in a post about an industry sockpuppet caught red-handed through IP forensics here:

... In reading this thread of comments I have to believe [anonymous commenter moniker] "IT Guy" is a salesperson. My only question is: Were you assigned this blog or did you choose it? We had this problem a number of years ago where a salesperson was assigned a number of blogs with the intent of using up valuable time in trying to discredit the postings.

In my very first sales class we learned to focus on irrelevant points, constantly shift the discussion, and generally try to distract criticism. I would say that HCR is creating heat for IT Guy’s employer and the industry in general.

I find it sad that a company would allow an employee to attack anyone in an open forum. IT Guy needs to check with his superiors to find out if they approve of this use of his time, and I hope he is not using a company computer, unless once again this attack is company sanctioned.

In the hopes that continued exposure of this nonsense can educate and thus help immunize against its effects, I present this:

At "In the Pipeline", a blog on medicinal chemistry (the science of drug making) and other pharma topics, a rebuttal entitled "500,000 Excess Deaths From Vioxx? Where?" was posted to a claim that over 500,000 people (not 50K) might have died due to VIOXX.

That 500K possibility appeared on a UK site 'THE WEEK With the FirstPost' at "When half a million Americans died and nobody noticed."  The author of the FirstPost piece started out by raising the point made by publisher Ron Unz that life in China might be more valued than that in the U.S., where major pharma problems and scandals generally meet what this blog calls "the anechoic effect."  (In China, Unz noted, perpetrators of scandalous drug practices actually get arrested and suffer career repercussions.)

FirstPost notes:

ARE American lives cheaper than those of the Chinese? It's a question raised by Ron Unz, publisher of The American Conservative, who has produced a compelling comparison between the way the Chinese dealt with one of their drug scandals – melamine in baby formula - and how the US handled the Vioxx aspirin-substitute disaster ... (Unz) "The inescapable conclusion is that in today's world and in the opinion of our own media, American lives are quite cheap, unlike those in China." 

Not to argue the merits of the order-of-magnitude-expanded VIOXX claim, which I disagree with, but having concern for the general state of ethics in biomedicine in the U.S., I posted the following comment in the comment thread of the rebuttal post at "In the Pipeline" at this link:

6. MIMD on May 30, 2012 11:38 AM writes...

While I agree the VIOXX numbers here are likely erroneous, the point of the cheapening of the value of American life is depressingly accurate.

For instance, look how readily companies lay people off, ruining them, and perhaps forcing them out of the workforce forever.

Also, currently being pushed by HHS is a medical device for rapid national implementation known to cause injury and death. The government is partially financing it to the tune of tens of billions of dollars, probably with Chinese money no less.  [Either that, or with freshly-printed money adding to the trillions of $ in our deficit - ed.]

There are financial penalties for medical refuseniks (non-adopters).

However, FDA, the Institute of Medicine and others readily admit in publication they have no idea of the magnitude of the harm because of lack of data collection, impediments to information diffusion and even legal censorship of the harms data. In effect, we don't even know if the benefits exceed the harms, and FDA and IOM admit it. FDA in fact refers to the known injuries and deaths from this device as "likely the tip of the iceberg."

Perhaps to some it's no longer a big deal if people are injured and/or die in data gathering for this medical enterprise.

E.g., see "FDA Internal Memo on H-IT Risks", and the Inst. of Medicine report on the same issues here.
It's all for the greater social good, they might say.
 
The following anonymous reply ensued:
10. Watson on May 30, 2012 1:47 PM writes...

@6 You keep using that word - "device" - I do not think it means what you think it means
 
I replied:

11. MIMD on May 30, 2012 3:48 PM writes...
 #10

'medical device' is the term chosen by FDA and SMPA (EU).
But that's a distraction from the points I raise in the linked post about the experiment.

To which this confused misdirection came forth from the ether:

12. Watson on May 30, 2012 4:46 PM writes...

The linked article is discussing the poor state of "medical device records" because of a lack of uniform specifications with respect to Health Information Technology, i.e. how these technologies code data and the challenges of making the data obtained uniform across a wide variety of implementations and vendors. [Erroneous, incomplete misdirection - ed.]

It seems that the concern, far from being that Health Information Technology is "killing" people, is that the Medical Device Records may contain duplicate reports for adverse health events because of health care providers encoding the data more than once for each event.  [What in the world? - ed.] This problem with replication exists because there are different health record systems where this data needs to be input, and perhaps the same patient uses different physicians who have different systems, but all of which are required to report adverse events. [I have little idea what this even means - ed.]
In other words, "Health Information Technology" is not some monolithic "device", and your conflation of "HIT" which is more properly an abstract term with the "devices" which are used to generate some forms of patient data is in my view the real distraction. [The "real" distraction from the ethical issues of the HIT experiment is terminology about medical devices?  Misdirection again from the ethical issue, and of a perverse nature - ed.]

Yes, some of the "devices" (a blood pressure monitor for example) may have underlying issues, which the FDA regulations for "medical device records" are designed to identify. The FDA, as a governmental entity has no constitutional power to mandate certain devices or implementations are to be used.  [Now we're in la-la land of misinformation and distraction- ed.] The power that the FDA does have is to inspect that the manufacturer of a device keeps appropriate medical device records (e.g. a lot of syringes, or a batch of formulated drug) and addresses any complaints about the device to the satisfaction of the FDA.

My replies:

17. MIMD on May 31, 2012 8:51 PM writes...

#12

It seems that the concern, far from being that Health Information Technology is "killing" people, is that the Medical Device Records may contain duplicate reports for adverse health events because of health care providers encoding the data more than once for each event

Yes, fix just that little problem and then the problems with clinical IT are solved! (Actually, I'm not even sure what you're referring to, but the evidence is that fixing it as you suggest is the cure.)  [Sarcasm - ed.]

The FDA, as a governmental entity has no constitutional power to mandate certain devices or implementations are to be used.

You are also right about FDA. They were completely toothless even in this situation. [Sarcasm again - ed.]

18. MIMD on May 31, 2012 9:10 PM writes...

#12

In other words, "Health Information Technology" is not some monolithic "device", and your conflation of "HIT" which is more properly an abstract term with the "devices" which are used to generate some forms of patient data is in my view the real distraction.

Those who conducted the Tuskegee experiments probably felt the same way.

It's all about definitions, not ethics, and not data - data which FDA, as well as the IOM of the National Academies, our highest scientific body, among others, admits (as in the linked posts in #6) is quantitatively and structurally lacking on risks and harms.

I don't really mean to laugh at you, not knowing how little you really know about the Medical Informatics domain, but you bring to mind this Scott Adams adage on logical fallacy:

FAILURE TO RECOGNIZE WHAT’S IMPORTANT
Example: My house is on fire! Quick, call the post office and tell them to hold my mail!

And with that, I move on, letting others enjoy the risible comments surely to follow! :-)

I could not have been more correct.

In typical industry shill/sockpuppet fashion comes this, with clear evidence of a not-so-clever liar which I've bolded:

19. Watson on June 1, 2012 12:35 AM writes...
@18

I read the articles you originally linked to, and my comments were based upon trying to interpret your meaning from those selections. I worked in the industry and had to deal with GMP, and had to make sure to follow all of the guidelines with respect to medical device manufacturing and electronic records. I understand the terminology very well. Luckily, I never had to deal with "health IT", but I did have to pore over enough pages of Federal Register legalese to know that what is sufficient is not necessarily what is best.  [Right.  See below - ed.]

Is that risible enough for you?

The link that you provided in @17 was a much more concrete example, and if you had referenced it in your original post, would have cleared up much of the confusion that I (and I assume @9) faced in understanding what it was you were trying to convey. It would have been useful if you had explained which device or devices you were talking about. If you had more than conjecture to back up the Chinese money trail, and if you had provided an example of a company that has been damaged by being a refusenik, those would have supported your argument as well. [Continuing haphazardly with the irrelevant as a distraction in an attempt to shift the focus from the ethical issue of nationally implementing HIT in a relative risk-information vacuum, having weakly conceded the main argument's been lost - ed.]

Straw man and ad hominem fallacies are pretty transparent around here, and I wish you the best with both. [Another attempt at diversion - ed.]

I assure you that I recognize what's important, that I have ethics, and that I care about people having reliable healthcare. [This seems a form of post-argument-lost attempt to seize the moral high ground - ed.]

I then point out the nature of what is likely a bold-faced lie.  Someone who's read the Federal Register in depth would likely know FDA's authority is not a "Constitutional power", as bolded below:

20. MIMD on June 1, 2012 6:44 AM writes...

@19

The FDA taxonomy of HIT safety issues in the leaked Feb. 2010 "for internal use only" document "H-IT Safety Issues" available at the link in my post #6 is quite clear:
- errors of commission
- errors of omission or transmission
- errors in data analysis
- incompatibility between multi-vendor software applications or systems

This is further broken down in Appendices B and C, with actual examples.

Both this FDA internal report and the public IOM report of 2011 (as well as Joint Commission Sentinel Events Alert on health IT of a few years ago, and others) make it abundantly clear there is a dearth of data on the harms, due to multiple cultural, structural and legal impediments to information diffusion.

Yes, it's in the linked IOM report at #6 entitled "Health IT and Patient Safety: Building Safer Systems for Better Care". See for instance the summary pg. S-2, where IOM states, regarding limited transparency on H-IT risk, that "these barriers to generating evidence pose unacceptable risks to safety."

Argue with them, not me.

Back to my original point: national rollout of this medical device (whatever you call it is irrelevant to my point, but see Jeff Shuren's statement to that effect here) under admitted conditions of informational scarcity regarding risks and harms represents a cheapening of the value of patients' lives. Cybernetics Over All.

As to your other misdirection, spare me the lecture. It's not ad hominem to call statements like "The FDA, as a governmental entity has no constitutional power to mandate certain devices or implementations are to be used" for what they are - laughable (and I am being generous).

FDA's authority is statutory, not written in the Constitution. Same with their parent, HHS. To get quite specific, on human subjects experimentation, which the H-IT national experiment is, the statutory authority for HHS research subject protections regulations derives from 5 U.S.C. 301; 42 U.S.C. 300v-1(b); and 42 U.S.C. 28. [The USC or United States Code is the codification by subject matter of the general and permanent laws of the United States.  HHS revised and expanded its regulations for the protection of human subjects in the late 1970s and early 1980s. The HHS regulations on human research subjects protections themselves are codified at 45 CFR (Code of Federal Regulations) part 46, subparts A through D. See http://www.hhs.gov/ohrp/humansubjects/guidance/. - ed.]

A real scientist would have known things like this before posting, or have made it their business to know.
Tell me: are you in sales? Not to point fingers, but with your dubious evasion of the ethical issue that was the sole purpose of my post, and your other postings using misdirection and logical fallacy to distract, you fit that mindset.

You certainly don't sound like a scientist. Any med chemist worth their salt (pun intended) would have absorbed the linked reports and ethical issues accurately, the first time.

I then pointed out I've moved this 'discussion' to the HC Renewal Blog, as it is not relevant per se to pharma, the major concern of In the Pipeline.

Industry shill/sockpuppet (as in the perverse example at this link)?  Or just a dull, ill-informed but opinionated person who happens to read blogs for medicinal chemists (where layoffs have been rampant in recent years), takes issue with my attacks on those practices, and defends health IT like a shill?

You be the judge.

Whether a shill or know-nothing contributed the cited comments, it is my hope this post contributes to an understanding of pro-industry sockpuppetry.

-- SS

Tuesday, March 27, 2012

Experiments on Top of Experiments: Threats to Patient Safety of Mobile e-Health Devices - No Surprise to Me

As noted by columnist Neil Versel at MobiHealthNews.com in a Mar. 14, 2012 post "Beware virtual keyboards in mobile clinical apps":

... Remember the problems Seattle Children’s Hospital had with trying to run its Cerner EMR, built for full-size PC monitors, on iPads? The hospital tried to use the iPad as a Citrix terminal emulator, so the handful of physicians and nurses involved in the small trial had to do far too much scrolling to make the tablet practical for regular use in this manner.

[From that post: As CIO magazine reported last week, iPads failed miserably in a test at Seattle Children’s Hospital. “Every one of the clinicians returned the iPad, saying that it wasn’t going to work for day-to-day clinical work,” CTO Wes Wright was quoted as saying. “The EMR apps are unwieldy on the iPad.” - ed.]


Thank heaven it was a small trial, instead of a typical forced rollout to an entire clinical community. Someone seems to have grasped the experimental nature of the effort.

Well, there may be a greater risk than just inconvenience when tablets and smartphones stand in for desktop computers. According to a report from the Advisory Board Co., “[A] significant threat to patient safety is introduced when desktop virtualization is implemented to support interaction with an EMR using a device with materially less display space and significantly different support for user input than the EMR’s user interface was designed to accommodate.”

The report actually is a couple months old, but it hasn’t gotten the publicity it probably deserves. We are talking about more than user inconvenience here. There are serious ramifications for patient safety, and that should command people’s attention.


Unfortunately, far too little about health IT safety commands people's attention. Either 1) health IT is merely assumed to be inherently beneficent, or 2) the risks are deliberately ignored for - I'm sorry to note - profit and career advancement.

How many CIOs or even end users have considered another one of the unintended consequences of running non-native software on a touch-screen device, that the virtual, on-screen keyboard can easily take up half the display? “Pop-up virtual keyboards obscure a large portion of the device’s display, blocking information the application’s designer intended to be visible during data entry,” wrote author Jim Klein, a senior research director at the Washington-based research and consulting firm.


How many CIO's have considered unintended consequences of such experiments-on-top-of-experiments (i.e., handheld or other lilliputian computing devices on top of the HIT experiment itself)? Probably few to none.

The typical hospital CIO, usually of an MIS background and generally lacking meaningful backgrounds in research, computer science, medicine, medical informatics, social informatics, human-computer interaction, and other research domains, is usually a "turnkey, shrinkwrapped-software implementer." In fact, most have backgrounds woefully inadequate for any type of clinical device leadership role. They may even lack a degree of any kind, as major HIT recruiters over the past decade expressed the following philosophy ca. 2000:


"I don't think a degree gets you anything," says healthcare recruiter Lion Goodman, president of the Goodman Group in San Rafael, California about CIO's and other healthcare MIS staffers. Healthcare MIS recruiter Betsy Hersher of Hersher Associates, Northbrook, Illinois, agreed, stating "There's nothing like the school of hard knocks." In seeking out CIO talent, recruiter Lion Goodman "doesn't think clinical experience yields [hospital] IT people who have broad enough perspective. Physicians in particular make poor choices for CIOs. They don't think of the business issues at hand because they're consumed with patient care issues," according to Goodman. (Healthcare Informatics, "Who's Growing CIO's".)


I wonder just how many CIO's "from the school of hard knocks" were put into action by those groups.

Back to the MobiHealthNews.com article:


... Klein said that users have two choices to deal with a display that’s much smaller than the software was designed for. The first is to zoom out to view the whole window or desktop at once, but then, obviously, users have to squint to see everything, and it becomes easy to make the wrong selection from drop-down menus and radio buttons.

Or, users can zoom in on a small part of the screen. “This option largely, if not completely, eliminates the context of interaction from the user’s view, including possible computer decision-support guidance and warnings, a dangerous trade-off to be sure,” Klein wrote.

In either case, the virtual keyboard makes it even more difficult to read important data that clinicians need to make informed decisions about people’s health and to execute EMR functions as designed.

Good observations. Two points:

1. As far back as the mid-1990's, in my teaching of postdoctoral fellows in my role as Yale faculty in Medical Informatics, I uniformly presented the following 'diagram' regarding my beliefs about handheld devices (then commonly known as PDA's) as tools for significant EHR interaction, due to their limited screen real estate:



Mid 1990's wisdom: small handhelds as desktop replacements at the bedside - just say "no"


This was before today's hi-res screens on small devices, but the limited real estate and its ramifications were obvious to critical thinkers who knew both medicine and medical informatics, even in the mid 1990's.

Similarly, experiments with HP95 handheld PC's running DOS failed miserably in a similar time frame at the hospital where I later became CMIO. (One benefit: I did get to salvage two of the devices from the trash bin for my obsolete computer collection!):


HP95LX - full DOS computer equivalent to an IBM PC (except, of course, for screen size).


Therefore, IMO the 2012 Advisory Board report findings merely verify what was obvious almost two decades ago.

Small devices are adjuncts only, suitable for limited uses (and even then, in my view, only after extensive RCT's with informed consent).

2. Another issue that arises is more fundamental. The article notes:

“It seems clear that running even a well-designed user interface on a device significantly different than the class of devices it was intended to be run on will lead to additional medical errors,” Advisory’s Klein commented.

The critical thinking person's question is: who knows if the "class of device" the app was "intended to run on" is itself appropriate or optimal?

Commercial clinical IT (with the exception of devices that require special resolutions, pixel densities, contrast ratios etc., such as PACS imaging systems) is usually designed for commercially available hardware.

That means the same size/type of computer monitor you obtain at Best Buy or Wal Mart.

Is that sufficient? Is that optimal?

As in my Feb. 2012 post "EHR Workstation Designed by Amateurs", who really knows?


A workstation in an actual tertiary-care hospital ICU, 2011. How many things are wrong here besides the limited display size? See aforementioned post "EHR Workstation Designed by Amateurs".


These systems are not robustly cross-tested in multiple configurations, such as multiple-large screen environments vs. single screen, for example.

In summary, performing an experiment with small devices on top of another experiment - the use of cybernetic intermediaries (HIT) in healthcare that is already known to pose patient risk - is exceptionally unwise.

It would be best to decide what the optimal workstation configuration is, as applicable to different clinical environments, in limited RCT's with experts in HCI strongly involved, before putting patients at additional risk with lilliputian information devices that only a health IT Ddulite could love.

Ddulites: Hyper-enthusiastic technophiles who either deliberately ignore or are blinded to technology's downsides, ethical issues, and repeated local and mass failures.

-- SS

Thursday, February 9, 2012

A Critical Review of a Critical Review of e-Prescribing ... Or Is It CPOE?

In PLoS Medicine, the following article was recently published by researchers at the University of New South Wales in Australia:

Westbrook JI, Reckmann M, Li L, Runciman WB, Burke R, et al. (2012) Effects of Two Commercial Electronic Prescribing Systems on Prescribing Error Rates in Hospital In-Patients: A Before and After Study. PLoS Med 9(1): e1001164. doi:10.1371/journal.pmed.1001164


The section I find most interesting is this:

We conducted a before and after study involving medication chart audit of 3,291 admissions (1,923 at baseline and 1,368 post e-prescribing system) at two Australian teaching hospitals. In Hospital A, the Cerner Millennium e-prescribing system was implemented on one ward, and three wards, which did not receive the e-prescribing system, acted as controls. In Hospital B, the iSoft MedChart system was implemented on two wards and we compared before and after error rates. Procedural (e.g., unclear and incomplete prescribing orders) and clinical (e.g., wrong dose, wrong drug) errors were identified. Prescribing error rates per admission and per 100 patient days; rates of serious errors (5-point severity scale, those ≥3 were categorised as serious) by hospital and study period; and rates and categories of postintervention “system-related” errors (where system functionality or design contributed to the error) were calculated.

Here is my major issue:

Unless I am misreading, this research took place in hospitals (i.e., on "wards" in hospitals) and does not seem to focus on (or even refer to) discharge prescriptions.

I think it would be reasonable to say that what are referred to as "e-Prescribing" systems are systems used at discharge, or in outpatient clinics/offices, to communicate with a commercial retail pharmacy not involved in inpatient care.

From the U.S. Centers for Medicare and Medicaid Services (CMS), for example:

E-Prescribing - a prescriber's ability to electronically send an accurate, error-free and understandable prescription [theoretically, that is - ed.] directly to a pharmacy from the point-of-care

I therefore think the terminology used in the article as to the type of system studied is not well chosen. I believe it could mislead readers not experienced with the various 'species' of health IT.

This study appears to be of an inpatient Computerized Practitioner Order Entry (CPOE) system, not e-Prescribing.

Terminology matters. For example, in the U.S. the HHS term "certification" is misleading purchasers about the quality, safety and efficacy of health IT. HIT certification as it exists today (granted via ONC-Authorized Testing and Certification Bodies) is merely a features-and-functionality "certification of presence." It is not like an Underwriters Laboratories (UL) safety certification that an electrical appliance will not electrocute you.

(This is not to mention the irony that one major aspect of Medical Informatics research is to remove ambiguity from medical terminology, e.g., via the decades-old Unified Medical Language System project or UMLS. However, as I've often written, the HIT domain lacks the rigor of medical science itself.)

I note that if this were a grant proposal for studying e-Prescribing, I would return it with a low ranking and a reviewer comment that the study proposed is actually of CPOE.

That said, looking at the nature of this study:

The conclusion of this paper was as follows. I am omitting some of the actual numbers such as confidence intervals for clarity; see the full article available freely at above link for that data:

Use of an e-prescribing system was associated with a statistically significant reduction in error rates in all three intervention wards. The use of the system resulted in a decline in errors at Hospital A from 6.25 per admission to 2.12 and at Hospital B from 3.62 to 1.46. This decrease was driven by a large reduction in unclear, illegal, and incomplete orders. The Hospital A control wards experienced no significant change. There was limited change in clinical error rates, but serious errors decreased by 44% across the intervention wards compared to the control wards.

Both hospitals experienced system-related errors (0.73 and 0.51 per admission), which accounted for 35% of postsystem errors in the intervention wards; each system was associated with different types of system-related errors.

I note that "system related errors" were defined as errors "where system functionality or design contributed to the error." In other words, these were unintended adverse events as a result of the technology itself.
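
As an aside, for readers unfamiliar with these denominators, here is a minimal sketch in Python of how such rates are derived. The counts below are entirely hypothetical, for illustration only; they are not the study's data.

    # Hypothetical counts for illustration only; not the study's data.
    errors_before, admissions_before, patient_days_before = 450, 120, 900
    errors_after,  admissions_after,  patient_days_after  = 180, 110, 850

    def rates(errors, admissions, patient_days):
        """Return (errors per admission, errors per 100 patient-days)."""
        return errors / admissions, errors / patient_days * 100

    print("before:", rates(errors_before, admissions_before, patient_days_before))
    print("after: ", rates(errors_after, admissions_after, patient_days_after))

    # "System-related" share: the fraction of post-implementation errors
    # where system functionality or design contributed to the error.
    system_related_after = 63  # hypothetical
    print("system-related share:", system_related_after / errors_after)  # 0.35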

The authors conclude:

Implementation of these commercial e-prescribing systems resulted in statistically significant reductions in prescribing error rates. Reductions in clinical errors were limited in the absence of substantial decision support, but a statistically significant decline in serious errors was observed.

The authors do acknowledge some limitations of their (CPOE) study:

Limitations included a lack of control wards at Hospital B and an inability to randomize wards to the intervention.

Thus, this was mainly a pre-post observational study, certainly not a randomized controlled clinical trial.

Not apparently accounted for, either, were potential confounding variables related to the CPOE implementation process (as in this comment thread).

In that thread I wrote to a commenter [a heckler, actually, apparently an employee of HIT company Meditech] with a stated absolute faith in pre-post studies that:

... A common scenario in HIT implementation is to first do a process improvement analysis to improve processes prior to IT implementation, on the simple calculus that "bad processes will only run faster under automation." There are many other changes that occur pre- and during implementation, such as training, raising the awareness of medical errors, hiring of new support staff, etc.

There can easily be scenarios (I've seen them) where poorly done HIT's distracting effects on clinicians is moderated to some extent by process and other improvements. Such factors need to be analyzed quite carefully, datasets and endpoints developed, and data carefully collected; the study design and preparation needs to occur before the study even begins. Larger sample sizes will not eliminate the possible confounding effects of these factors and many more not listed here.

The belief that simple A/B pre-post tests that look at error rate comparisons are adequate is seductive, but it is wrong.

Stated simply, in pre-post trials the results may be affected by changes other than the intervention. HIT implementation does not involve just putting computers on desks, as I point out above.
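
To illustrate, here is a minimal simulation sketch in Python; the baseline rate and effect sizes are invented for illustration, not estimates from any study. A naive pre-post comparison credits the IT system with the combined effect of the IT and the concurrent non-IT changes (training, process re-engineering, etc.); a concurrent control ward receiving the same non-IT changes isolates the IT effect.

    import random

    random.seed(0)

    BASELINE_ERR = 6.0      # hypothetical errors per admission at baseline
    PROCESS_EFFECT = 0.70   # concurrent process re-engineering, training, etc.
    IT_EFFECT = 0.90        # effect of the IT intervention itself

    def observed_rate(process, it, n=500):
        """Mean observed error rate per admission under the given conditions."""
        rate = BASELINE_ERR
        if process:
            rate *= PROCESS_EFFECT
        if it:
            rate *= IT_EFFECT
        # add per-admission sampling noise
        return sum(random.gauss(rate, 1.0) for _ in range(n)) / n

    pre = observed_rate(process=False, it=False)
    post = observed_rate(process=True, it=True)       # intervention ward
    control = observed_rate(process=True, it=False)   # control ward, same non-IT changes

    print(f"naive pre-post reduction:         {1 - post / pre:.0%}")      # ~37%: both effects mixed
    print(f"reduction vs. concurrent control: {1 - post / control:.0%}")  # ~10%: the IT effect alone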

In other words, the study was essentially anecdotal.

The lack of RCT's in health IT is, in general, one violation of traditional medical research methodologies for studying medical devices. That issue is not limited to this article, of course.

Next, on ethics:

CPOE has already been demonstrated in situ to create all sorts of new potential complications, such as in Koppel et al.'s "Role of Computerized Physician Order Entry Systems in Facilitating Medication Errors", JAMA. 2005;293(10):1197-1203. doi:10.1001/jama.293.10.1197, which concluded:

In this study, we found that a leading CPOE system often facilitated medication error risks, with many reported to occur frequently. As CPOE systems are implemented, clinicians and hospitals must attend to errors that these systems cause in addition to errors that they prevent.

CPOE technology, at best, should be considered experimental in 2012.

In regard to e-Prescribing proper, there's this: Errors Occur in 12% of Electronic Drug Prescriptions, Matching Handwritten; and this: Upgrading e-prescribing system can bump up error risk to consider. In other words, the literature is conflicting, confirming the technology remains experimental.

This current study confirmed that some (CPOE) errors that would not have occurred with paper did occur with cybernetics, amounting to "35% of postsystem errors in the intervention wards."

In other words, patient Jones was now subjected to a cybernetic error that would not have occurred with paper, in the hopes that patients Smith and Silverstein would be spared errors that might have occurred without cybernetic aid.

Even though the authors observe that "human research ethics approval was received from both hospitals and the University of Sydney," patient Jones did not provide informed consent to experimentation with what really are experimental medical devices, as I've written often on this blog [see note 1]. I'm therefore not certain the full set of ethical issues has been well addressed. This is not limited to this occasion, however; the phenomenon represents a pervasive, continual worldwide oversight with regard to clinical IT.

Furthermore, and finally: of considerable concern is another common limitation of health IT studies in general, one that I believe is often willful.

What really should be studied before justifications are given to spend tens of millions of dollars/Euros/whatever on CPOE or other clinical IT is this:

The impact of possible non-cybernetic interventions (e.g., additional humans and processes) to improve "medication ordering" (either CPOE or ePrescribing) that might be FAR LESS EXPENSIVE, and that might have far fewer IT-caused unintended adverse consequences, than cybernetic "solutions."

Instead, pre-post studies are used to justify expenditures of millions (locally) and tens or hundreds of billions (nationally), with results sometimes like this affecting an entire country.

There is something very wrong with this, both scientifically and ethically.

-- SS

Note:

[1] If these devices are not experimental, why are so many studying them to see if they actually work, to see if they pose unknown dangers, and to try to understand the conflicting results in the literature? More at this query link: http://hcrenewal.blogspot.com/search/label/Healthcare%20IT%20experiment


Addendum Feb. 10, 2012:

An anonymous commenter points out an interesting issue. They wrote:

The study was flawed due to its failure to consider delays in care and medication administration as an error caused by these experimental devices.

Delays are widespread with CPOE devices. One emergency room resorted to paper file cards and vacuum tubes to communicate urgency with the pharmacy. Delays were for hours.

I agree that lack of consideration of a temporal component, i.e., delays due to technology issues, is potentially significant.

I, for example, remember an IT-related delay of more than five minutes in getting sublingual nitroglycerin to a relative with apparent chest pain. The problem turned out to be gastrointestinal, not cardiac; with another patient, however, the hospital might not be so lucky.

Addendum Feb. 12, 2012:

A key issue in technology evaluation studies is to separate the effects of the technology intervention from other, potentially confounding variables, which always exist in a complex sociotechnical system, especially in a domain such as medicine. This seems to be done uncommonly in HIT evaluation studies. Not doing so will likely inflate the apparent contribution of the technology.

A "control ward" where the same education and training, process re-engineering, procedural improvements, etc. were performed as compared to the "intervention ward" (but without actual IT use) would probably be better suited to pre-post studies such as this.

A "comparison ward" where human interventions were implemented, as opposed to cybernetic, would be a mechanism to determine how efficacious and cost-effective the IT was compared to less expensive non-cybernetic alternatives.

-- SS

Wednesday, January 25, 2012

London Ambulance Service: Would You Like Some Death And Mayhem With Your American Healthcare IT?

It seems American companies are good at producing really noisome commercial healthcare IT and foisting it on other countries, such as outlined at "Is clinical IT mayhem good for [the IT] business? UK CfH leader Richard Granger speaks out" and at "Cerner's Blitzkrieg on London: Where's the RAF?".

Yet another example: Software for the London Ambulance Service (LAS). From Wikipedia:

The London Ambulance Service NHS Trust (LAS) is the largest "free at the point of contact" emergency ambulance service in the world. It responds to medical emergencies in Greater London, England, with its ambulances and other response vehicles and over 5,000 staff at its disposal.

Thanks to the U.S., the inhabitants of London are now the unconsenting subjects of an American IT beta-testing experiment that could cost them their lives.

From E-Health Insider.com:

LAS plans for IT go-live and failure
E-Health Insider.com
25 January 2012
Shanna Crispin

London Ambulance Service NHS Trust may terminate its contract with American supplier Northrop Grumman if a second attempt to go-live with a new dispatch system fails.

The trust initially attempted to launch the CommandPoint computer aided dispatch system in early June last year.

However, the technical switch-over to the new system had disastrous effects; with the system failing, staff having to use pen and paper, and then finally aborting the go-live by reverting to the old CTAK dispatch system.


Health IT can kill you even before you ever reach the hospital...

An investigation into the incident has found the response to calls was delayed by more than three hours in some cases. One patient has lodged a legal claim for the delay he experienced, and the service has received four additional complaints.

A patient died in one of the calls affected. However, a separate investigation concluded that it could not be determined whether they would have survived if the response had been faster.


In other words, the patient very well might have survived had the ambulance not been so long delayed.

Board papers drawn up for a board meeting next week say an investigation into the 8 June go-live attempt concluded that critical configuration issues were not identified during the testing phase.

It also found there were no operational procedures in place in the event of a critical system failure and that the product failed to deliver the system, technical and operational functionality expected.


At least in this case, the software itself was not granted absolution. One wonders whether the vendor was contractually "held harmless" for this somber outcome.


The trust has since been working to further test the system, and is planning to go-live again on 28 March.

However, the trust’s director of information management and technology, Peter Suter, said if that go-live failed then “the contract with Northrop Grumman would need to be reconsidered.”


That will make two chances to get it right. In life-critical IT, I would only have given one.

A defective first-responder system is, on first principles, a public health menace. There is nothing to argue here, nothing to discuss on that point.

I note that disrupting the first-responder system in London would be the envy of terrorists, especially at the time of the London 2012 Olympic Games. However, who needs them when you have U.S. IT personnel who create a system as described?


The trust completed testing the software prior to Christmas, when it began training staff. Leading up to the March go-live, the software will be subjected to four separate live runs, with the system staying live for progressively longer periods of time.

If the system fails to go live in March, the trust will abandon any further attempts to go-live before the Olympics in July.


That's still very tight timing to discover all the bugs in an IT system, in preparation for expected increased need during the Olympics...


Instead, it will keep operating the current CTAK system. However, the trust decided to procure a new system in 2007 because CTAK was deemed ‘unstable’ and in need of replacement.

An analysis of the CTAK system has now determined it is stable enough to handle the increased pressure during the Olympics, which is estimated to be an increase of 5.6% to 8.9% on top of the usual volume for this time of year.


One wonders if the "new system" was not needed at all, but was instead sold as vaporware by impressively attired, good-haired, shiny-toothed, fast-talking salespeople to hapless decision-makers with all sorts of promises of cybernetic and financial miracles.

(I've been in that game before from both sides - as a potential customer, and as part of a health IT sales team.)

One also wonders if, should the system be dismantled after a second failure, the British taxpayers who paid for it will get their taxes refunded.

-- SS

Tuesday, March 8, 2011

The Future Pathways for e-Health in NSW

Prof. Patrick has now added a new section to his report on health IT in NSW Australia, entitled "The Future Pathways for e-Health in NSW." It is available at this link (PDF).

It inoculates against most of the 'Ten Plagues' that bedevil health IT projects (such as the IT-clinical leadership inversion, lack of transparency, suppression of defects reporting, magical thinking about the technology, and lack of accountability of the bureaucrats).

Emphases mine:

In the Short Term (0-3 months)

1. Halt further rollouts of Firstnet or other CIS systems. The current roll-out programs use significant efforts in training staff for a system that is counterproductive to patient well being.
2. Complete a full and thorough risk assessment analysis and usability of the software. The CIS report indicates there are a number of risks in the current software that are not likely to have been assessed in the past.
3. Address the current problems before doing anything else. There are a number of problems that appear solvable in the short term that would improve the situation for current users, such as providing needed reports.
4. Create the NSW IT Improvement Panel composed of ED Directors, IT-savvy clinical and quality improvement staff responsible for advising on the preparedness and process of the rollout.
5. Create an effective error and bug reporting mechanism that is viewable by all ED directors and with the display of the priority of each entry and expected completion time.
6. Initiate a high profile campaign to encourage staff to lodge fault records on anything they discover wrong, problematic or inefficient in using the system.

In the longer term (3-12 months)

1. Review the Health Support Services and make it clinically accountable by appointing a clinical head with an IT education.
2. Create a culture change in the HSS. The current operation of the HSS seems to be devoid of influence from the clinical community.
3. All NSW CIS system procurement should be guided by an IT Advisory Board of IT experienced clinical, academic and medical software industry experts.
4. Create pathways for hospitals that wish to be early adopters and take a lead role in the development of new methods for using and deploying IT systems.
5. Support innovation within the Australian medical software communities that contribute to a culture of innovation and continuous quality improvement.
6. Adopt transparency rules in all new healthcare information acquisitions. Secrecy has bedevilled the efforts of staff and management to get improvements in the CIS systems and hold service agents accountable for their failure to comply to service level agreements. All agreements about a signed contract should be available to the ED Directors.
7. Replace the State Based Build policy with a policy of providing a technology to match the technology experience of the individual departments, so that leaders are not dragged backwards with inappropriate technology installation.

The de facto "National Program for IT in the HHS" here in the United States needs a similar inoculation.

I can only add that our own ONC office (Office of the National Coordinator for Health IT of the Dept. of HHS) had more granular recommendations about expertise levels required for leadership roles in such undertakings. I wrote about them at my Dec. 2009 post "ONC Defines a Taxonomy of Robust Healthcare IT Leadership."

--SS

Monday, March 7, 2011

Dr. Scott Monteith on "The Best Compromise" on Physicians and Use of Troublesome Health IT

I have posted two guest posts by Dr. Scott Monteith, a psychiatrist/informaticist, at the Jan. 2011 post "Interesting HIT Testimony to HHS Standards Committee, Jan. 11, 2011, by Dr. Monteith" and the Dec. 2010 post "Meaningful Use and the Devil in the Details: A Reader's View".

Here is another, with his permission. He is responding to a talking point from a health IT commentary website that was distributed among the AMIA evaluations special interest group readership.

Dr. Monteith asks some very probing questions. He writes:

I would like to respond to what I see as being one of the most important and challenging “real-life” issues confronting clinicians, which is captured in this excerpt [below, from the multi-vendor-sponsored site HISTalk - ed.]:

HISTalk: ... Somewhere between “we vendors are doing the best we can given a fiercely competitive market, economic realities, and slow and often illogical provider procurement processes that don’t reflect what those providers claim they really want” and “we armchair quarterback critics think vendors are evil and the answer is free, open source applications written by non-experts willing to work for free under the direct supervision of the FDA” is the best compromise.

That is, this excerpt performs the helpful task of framing “the best compromise” somewhere between two extreme viewpoints.

It would be helpful (at least for me) if this group could discuss what “the best compromise” actually ‘looks like’ in practice. How does one actually understand and live within “the best compromise”?

Let’s start with a relatively simple scenario:

What should clinicians do when they are working with EHRs that have known “problems” that are putting patients at risk, and the problems are not being immediately addressed, either directly or indirectly through, for example, an acceptable “work around,” or other adjustments to the EHR or local business processes?

Should the clinician continue to use the EHR and…

  • assume that others (e.g., vendor, IT department, administration, etc.) will fix the problem(s)?
  • report the problem(s)? Once? Twice? Three or more times? (How many?) To whom?
  • inform the patient of the known problem(s) (if the problem(s) apply to the patient)?
  • inform the patient that we do not have a good understanding of how to balance or even understand the risks posed by the EHR, given the dearth of peer-reviewed literature and algorithms? (What is “acceptable risk” for a given EHR problem? Does the EHR-related problem’s risk/benefit analysis change if the patient is in the hospital for a simple, non-life-threatening problem vs. a complex, life-threatening problem?)
  • give the patient the option to NOT use the EHR? (Note that we almost always give patients the choice to refuse other “risky ventures” such as diagnostic procedures and treatments.)
  • inform their medical malpractice insurance company of the EHR-related problems?
  • submit the problem(s) to the organization’s ethics committee, if there is one?
  • report the problem(s) to the organization’s risk management staff?
  • report the problem(s) in writing or verbally?
  • stop using the EHR?

Etc. (including some combination of things).

Can providers (especially physicians) legitimately rationalize, given our ethical (to patients and our colleagues) and legal obligations (to patients and the state where we are licensed), the use of tools that are posing risks to patients and providers, when those risks are not spelled-out, not well understood, not peer-reviewed, etc.?

(Obviously everything we do has risks, but we are obligated to reveal and discuss those risks as noted above. Further, the risks/benefits of a given diagnostic or treatment intervention are the product of peer-reviewed algorithms. Are patients aware of the risks associated with their doctor's or hospital's EHR?)

Again, the above excerpt suggests that there is a “best compromise.” But what is/are “the best compromise(s)”?

I joke with friends that I am a “radical moderate” – that is, I usually find myself committed to the “middle ground” in most complex and thoughtful discussions. But when it comes to EHRs, I am finding it difficult to define or understand what a “moderate” or an acceptable “best compromise” looks like.

Given the current EHR exuberance driven by ONC’s incentive dollars and vendor profits (or hoped-for profits), we all know that the “politically correct” approach is to “go along” and be an “early adopter” (without too many protests). But is the politically correct approach really the “best compromise,” especially in light of our ethical and legal obligations?

I am anxious to hear what other people think about this matter. I am sincerely seeking help in better understanding a sensible, real-life “best compromise” for those of us in the trenches.

Note that if we cannot define “the best compromise,” then what does that say about us? How can we justify “getting on board” with patient care tools (e.g., EHRs, eRx’ing, etc.) that are posing risks (known and unknown), with no clear processes for informing patients, not giving patients their choice to use these e-tools, no clear evidence-based risk/benefit analyses, etc.?

My own pithy, initial responses are as follows:

Re: Given the current EHR exuberance driven by ONC’s incentive dollars and vendor profits (or hoped-for profits), we all know that the “politically correct” approach is to “go along” and be an “early adopter” (without too many protests). But is the politically correct approach [i.e, "go along to get along" - ed.] really the “best compromise,” especially in light of our ethical and legal obligations?

No, for the reasons after your comma. [i.e., in light of our ethical & legal obligation - ed.]

One should look to the past for lessons in what the "compromises" might be.

How about the Flexner Report of 1910 as a start?

The treatises on human research protections, penned largely after the Tuskegee experiments and the horrors of WW2, such as those listed on the NIH web page "Office of Human Subjects Research, Regulations and Ethical Guidelines," might also shed some light:

  • The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research

I further opine:

Re: How can we justify “getting on board” with patient care tools (e.g., EHRs, eRx’ing, etc.) that are posing risks (known and unknown), with no clear processes for informing patients, not giving patients their choice to use these e-tools, no clear evidence-based risk/benefit analyses, etc.?

Perhaps with the line that "I never make mistakes ... everything I do is an experiment."

The technology is experimental. Perhaps the best way forward is to treat it as such.

In medicine I think there's a rich history of how to conduct proper (and improper) experimental research.

Again, Dr. Monteith raises some critical questions that need to be answered.

Or, more correctly, needed to be answered a long time ago, and long before planned national rollouts of healthcare information technology.

-- SS