Showing posts with label Patient care has not been compromised. Show all posts

Wednesday, October 3, 2012

Allegheny Health System Computer Crash (Again) and Paper Backups

I reported on a health IT crash in my May 2011 post "Twelve Hour Health IT Glitch at Allegheny General Hospital - But Patients Unaffected, Of Course..."

Now, there's this at the same healthcare system:

Computer system at West Penn Allegheny restored after crash 
Liz Navratil
Pittsburgh Post-Gazette
October 2, 2012


The computer system at West Penn Allegheny Health System crashed about noon today, temporarily leaving doctors and nurses to work off of paper records instead.

Kelly Sorice, vice president of public relations for the health system, said all systems have since been restored. She said the servers crashed about noon today when the system experienced a power surge.

Doctors in the health system keep paper copies of almost all of their records so they can reference them during power outages or scheduled maintenance times, Ms. Sorice said.

Some systems were up eight hours later and others were expected to come online overnight, according to a report at HIStalk.

Assuming the statement about "doctors keep paper copies of almost all their records" was not spin control regarding skeletal paper records, a question arises.

Why, exactly, spend hundreds of millions of dollars on computing if paper records are kept, and are perfectly sufficient to accomplish the following, the usual refrain in health IT crash scenarios?

Ms. Sorice said she did not know of any procedures that had been rescheduled and added that, "Patient care has not been compromised."

As a physician/ham radio enthusiast who did an elective in Biomedical Engineering in medical school, I also want to know:

1)  What caused the “power surge?”
2)  Why were the systems not protected against a “power surge?”
3)  Exactly how did the “power surge” affect the IT?

Note: I've created a new, searchable indexing term for HIT outage stories with the usual refrain along the lines that "patient care has not been compromised." 

See this query link using the new indexing term.

-- SS

Addendum Oct. 3:

Australian EHR researcher and professor Dr. Jon Patrick opines:

Even if [the paper records are] skeletal they suggest an endemic lack of confidence. I think the hospital spokesperson hasn't seen the implication of their statement.

-- SS

Sunday, September 30, 2012

UK: Another Example of IT Malpractice With Bad Health IT (BHIT) Affecting Thousands of Patients, But, As Always, Patient Care Was "Not Compromised"

At my Dec. 2011 post "IT Malpractice? Yet Another "Glitch" Affecting Thousands of Patients. Of Course, As Always, Patient Care Was "Not Compromised" and others, I noted:

... claims [in stories regarding health IT failure] that "no patients were harmed" ... are both misleading and irrelevant:

Such claims of 'massive EHR outage benevolence' are misleading, in that medical errors due to electronic outages might not appear for days or weeks after the outage ... Claims of 'massive EHR outage benevolence' are also irrelevant in that, even if there was no catastrophe directly coincident with the outage, there was greatly elevated risk. Sooner or later, such outages will maim and kill.

Here is a prime example of why I've opined at my Sept. 2012 post "Good Health IT (GHIT) v. Bad Health IT (BHIT): Paper is Better Than The Latter" that a good or even average paper-based medical record keeping system can facilitate safer and better provision of care than a system based on bad health IT (BHIT).

Try this with paper:

NHS 'cover-up' over lost cancer patient records

Thousands awaiting treatment were kept in the dark for five months when data disappeared

Sanchez Manning
The Independent
Sunday 30 September 2012

Britain's largest NHS trust took five months to tell patients it had mislaid medical records for thousands of people waiting for cancer tests and other urgent treatments. Imperial College Healthcare NHS Trust discovered in January that a serious computer problem and staff mistakes had played havoc with patient waiting lists.

It's quite likely the "serious computer problem" far outweighed the impact of "staff mistakes," because computer data disappears "silently." One does not realize it is missing, as there is generally no trail of evidence that it is gone.

About 2,500 patients were forced to wait longer on the waiting lists than the NHS's targets, and the trust had no idea whether another 3,000 suspected cancer patients on the waiting list had been given potentially life-saving tests. Despite the fact that the trust discovered discrepancies in January and was forced to launch an internal review into the mess, including 74 cases where patients died, it did not tell GPs about the lost records until May.

That is, quite frankly, outrageous if true and (at least in the U.S.) might be considered criminally negligent (failure to use reasonable care to avoid consequences that threaten or harm the safety of the public and that are the foreseeable outcome of acting in a particular manner).

Revelations about the delay prompted a furious response yesterday from GPs, local authorities and patients' groups. Dr Tony Grewal, one of the GPs who had made referrals to Imperial, said doctors should have been told sooner to allow them to trace patients whose records were missing. "The trust should have contacted us as soon as it was recognised that patients with potentially serious illnesses had been failed by a system," he said. "GPs hold the ultimate responsibility for their patient care."

That is axiomatic.

The chief executive of the Patients Association, Katherine Murphy, added: "This is unacceptable for any patient who has had any investigation, but especially patients awaiting cancer results, where every day counts. The trust has a duty to contact GPs who referred the patients. It's unfair on the patients to have this stress and worry, and the trust should not have tried to hide the fact that they had lost these records. They should have let the GPs know at the outset."

"Unfair to the patients" is an understatement. However, if one's attitude is that computers have more rights than patients, as many in the health IT sector seem to believe given their disregard for patient rights such as informed consent, the lack of safety regulation, and the lack of accountability, then it's quite acceptable.

The trust defended the delay in alerting GPs, arguing that it needed to check accurately how much data it had lost before making the matter public. It said a clinical review had now concluded that no one died as a result of patients waiting longer for tests or care.

That would perhaps be acceptable if the subjects whose "data had been lost" through IT malpractice were lab rats.

Despite this, three London councils – Westminster, Kensington and Chelsea, and Hammersmith and Fulham – are deeply critical of the way the trust handled the data loss. Sarah Richardson, a Westminster councillor who heads the council's health scrutiny committee, said that trust bosses had attempted to "cover up" the extent of the debacle. "Yes, they've done what they can but, in doing so, [they] put the reputation of the trust first," she said. "Rather than share it with the GPs, patients and us, they thought how can we manage this information internally. They chose to consider their reputation over patient care."

As at my Oct. 2011 post "Cybernetik Über Alles: Computers Have More Rights Than Patients?", to be more specific, they may have put the reputation of the Trust's computers first. 

Last week, it was revealed that Imperial has been fined £1m by NHS North West London for the failures that led to patient data going missing. On Wednesday, an external review into the lost records said a "serious management failure" was to blame for the blunder.

Management of what, one might ask?

Imperial's chief financial officer, Bill Shields, admitted at a meeting with the councils that the letter could have been produced more quickly. He said that, at the time, the trust had operated with "antiquated computer systems" and had a "light-touch regime" on elective waiting times.

Version 2.0A will, as is again the typical refrain, fix all the problems.

Terry Hanafin, the leading management consultant who wrote the report, said the data problems went back to 2008 and had built up over almost four years until mid-2011. Mr Hanafin said the priorities of senior managers at that time were the casualty department and finance.

Clinical computing is not business computing, I state for the thousandth time.  When medical data is discovered "lost", the only response should be ... find it, or inform patients and clinicians - immediately.

He further concluded that while the delays in care turned out to be non-life threatening, they had the potential to cause pain, distress and, in the case of cancer patients, "more serious consequences" ... The trust said it had found no evidence of clinical harm and stressed that new systems have now been implemented to record patient data. It denied trying to cover up its mistakes or put its reputation before concerns for patients. "Patient safety is always our top priority," said a spokesman.

"More serious consequences" is a euphemism for horrible metastatic cancer and death, I might add.  The leaders simply cannot claim they "found no evidence of clinical harm" regarding delays in cancer diagnosis and treatment until time has passed, and followup studies performed on this group of patients.

This refrain is evidence these folks are either lying, CYA-style, or have no understanding of clinical medicine whatsoever - in which case their responsibilities over the clinic need to be ended in my opinion.

I, for one, would like to know the exact nature of the "computer problem", who was responsible, and if it was a software bug, how such software was validated and how it got into production.

-- SS

Oct. 1, 2012 Addendum:

What was behind the problems, according to another source?   

Bad Health IT (BHIT):

Poor IT behind Imperial cancer problems
e-Health Insider
28 September 2012
Rebecca Todd

An independent review of data quality issues affecting cancer patient referrals to Imperial College Healthcare NHS Trust has identified “poor computer systems” as a key cause of the problem.

The review’s report highlights the trust’s use of up to 17 different IT systems as causing problems for patient tracking.

However, it says the trust should be aware of the risks of [replacing the BHIT and] moving to a single system, Cerner Millennium, because of reported problems in providing performance data after similar moves at other London trusts.

In January 2012, the report says the NHS Intensive Support Team was reviewing the way reports on cancer waiting times were created from Imperial’s cancer IT system, Excelicare.

The team discovered that almost 3,000 patients were still on open pathways who should have been seen within two weeks. In May, letters were sent to GPs to try and ascertain the clinical status of around 1,000 patients.

BHIT must be barred from real-world deployment, and either fixed rapidly or dismantled (as Imperial College Healthcare NHS Trust appears to be doing), although the "solution" might be just as bad as, or worse than, the disease.

-- SS

Thursday, August 23, 2012

From the University of Chicago EHR Helpdesk Call Line

I was alerted this morning (Aug. 23rd) to this message currently in the telephone message of the CBIS [Chicago Biomedicine Information Services] Service Desk at University of Chicago Medical Center:

"Thanks for calling the CBIS Service Desk.  Your call is very important to us. We are currently experiencing troubles with our Citrix logon.  It may log you on under a different profile.  Please check before you go any further when you're logging in to Citrix."

Citrix is a computer program that allows remote access to information systems.

I imagine "log you on under a different profile" means "logging you on as a different user."

The chances of a security breach (an unauthorized user peering into patients' charts they have no business seeing), unauthorized history or order manipulation, or even a misidentification error (e.g., a clinician inadvertently acting upon another clinician's patient whose name resembles that of their own patient), along with the other distracting work disruptions this "trouble" creates, are worrisome.
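
A minimal sketch of the kind of defensive check this failure calls for, purely illustrative and built on hypothetical function names (get_authenticated_username, get_session_profile_owner) rather than any real Citrix or EHR API:

    # Illustrative only: verify that the profile a remote-access session hands us
    # actually belongs to the clinician who just authenticated, and refuse to
    # proceed (fail closed) on any mismatch.

    def verify_session_identity(get_authenticated_username, get_session_profile_owner):
        """Both arguments are hypothetical callables supplied by the login layer."""
        authenticated_user = get_authenticated_username()  # who presented credentials
        profile_owner = get_session_profile_owner()        # whose profile the session loaded

        if authenticated_user != profile_owner:
            # Fail closed: never let a clinician work under someone else's identity.
            raise PermissionError(
                f"Session profile '{profile_owner}' does not match "
                f"authenticated user '{authenticated_user}'; aborting login."
            )
        return authenticated_user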

One wonders how every user is being informed of this problem, as not everyone makes it a habit to call the service desk before logging in to clinical systems...

But, alas, this is just a "glitch" (the euphemism used by technophiles for malignant software defects), and, of course, patient safety is never compromised by "glitches."


Patient Safety Will Not Be Compromised, We Predict ... So Say Us All.


-- SS

8/29/12 Addendum:

Apparently the problem was finally solved between 5:30 PM and 9 PM CST on August 27. I first became aware of it at around 8 AM EST on August 23. This brings to life the line "either you are in control of your information systems, or they are in control of you."

Also, see the comment thread to this post here, specifically the comments starting at August 28, 2012 12:16:00 PM EDT, for yet another demonstration of the illogic, unserious attitudes and sense of entitlement regarding patient risk and transparency that characterize the health IT industry. The anonymous commenter also claims firsthand knowledge of the problem, suggesting they are from U. Chicago, but this cannot be confirmed.

-- SS

Wednesday, August 8, 2012

ONC and Misdirection Regarding Mass Healthcare IT Failure

In my keynote address to the Health Informatics Society of Australia in Sydney recently, I cautioned attendees including those in government to be wary of healthcare IT hyper-enthusiast misdirection and logical fallacy (a.k.a. public relations).

In the LA Times story "Patient data outage exposes risks of electronic medical records" on the Cerner EHR outage I wrote of in my post "Massive Health IT Outage: But, Of Course, Patient Safety Was Not Compromised" (the title, of course, being satirical), Jacob Reider, acting chief medical officer at the federal Office of the National Coordinator for Health Information Technology is quoted.  He said:

"These types of outages are quite rare and there's no way to completely eliminate human error."

This is precisely the type of political spin and hyper-enthusiast misdirection I cautioned the Australian health authorities to evaluate critically.

As cartoonist Scott Adams humorously noted regarding irrelevancy, a hundred dollars is a good price for a toaster, compared to buying a Ferrari.

Further, when you're the patient harmed or killed, or the victim is a family member, you really don't care how "rare" the outages are.

Airline crashes are "rare", too.   So, shall they just be tolerated as a "cost of doing business" and spun away?

(As I once wrote, the asteroid colliding with Earth that caused the extinction of the dinosaurs was a truly "rare" event.)

It seems absurd for me to have to point out that paper records (barring a mass outbreak of disappearing ink) and locally hosted clinical IT do not go blank en masse across multiple states and countries for any length of time, raising risk across multiple hospitals greatly, acutely and simultaneously. Yet I have to point out this obvious fact in the face of misdirection.

Locally hosted health IT, of course, can only cause "local" chart disappearances.  "Local" is a relative term, however, depending on HC organization size, as in the example of a Dec. 2011 regional University of Pittsburgh Medical Center (UPMC) 14-hour outage affecting thousands here.

Further, EHR's and other clinical IT, whether hosted locally or afar, had better offer truly major advantages, without major risks and disadvantages, over older medical records technologies before exposing large numbers of patients to an invasive IT industry and the largest unconsented human subjects experiment in history.

Unfortunately, those basic criteria are not yet apparent with today's systems (see for instance this reading list).

EHR's and other clinical IT, forming in reality an enterprise clinical resource management and clinician workflow control apparatus, have introduced new risk modes, including mass chart theft (sometimes tens of thousands of records in the blink of an eye) and mass chart disappearances, as in this case - neither of which is possible with paper.

At the very least, if hospitals want enterprise clinical resource management and clinician workflow control systems, these should not be entrusted to a distant third party. Patients are not guinea pigs upon whom to test the ASP ("software as a service") hosting model that, upon failure for any reason, threatens their lives.

Finally, these complications are a further example why this industry cannot go on without meaningful oversight.  The unprecedented special medical device regulatory accommodations must end.

-- SS

Tuesday, August 7, 2012

Massive Health IT Outage: But, Of Course, Patient Safety Was Not Compromised

Having been 'Down Under' in Sydney addressing the Health Informatics Society of Australia on the need to slow down their national health IT program - and on the need to think critically about HIT seller public relations exaggerations and hubris - and being very busy, I missed this quite stunning story of a major health IT outage.

Just a typical "glitch":

Some lessons from a major outage
Posted on July 31, 2012
By Tony Collins

Last week Cerner had a major outage across the US. Its international customers might also have been affected.

InformationWeek Healthcare reported that Cerner’s remote hosting service went down for about six hours on Monday, 23 July. It hit “hospital and physician practice clients all over the country”. Information Week said the unusual outage “reportedly took down the vendor’s entire network” and raised “new questions about the reliability of cloud-based hosting services”.

A Cerner spokesperson Kelli Christman told Information Week,

“Cerner’s remote-hosted clients experienced unscheduled downtime this week. Our clients all have downtime procedures in place to ensure patient safety.  [Meaning, for the most part, blank paper - ed.] The issue has been resolved and clients are back up and running. A human error caused the outage.  [I don't think they mean human error as in poor disaster recovery and business continuity engineering - ed.]  As a result, we are reviewing our training protocol and documented work instructions for any improvements that can be made.”

Christman did not respond to a question about how many Cerner clients were affected. HIStalk, a popular health IT blog, reported that hospital staff resorted to paper [if that was true, that paper was OK in an unplanned workflow disruption of major proportions, then why do we need to spend billions on health IT, one might ask? - ed.] but it is unclear whether they would have had access to the most recent information on patients.

One Tweet by @UhVeeNesh said “Thank you Cerner for being down all day. Just how I like to start my week…with the computer system crashing for all of NorCal [Northern California].”

Tony Collins is a commentator for ComputerWorldUK.com. He has quoted me before, as I noted in my May 2011 post "Key lesson from the NPfIT - The Tony Collins Blog."

This incident brings to life longstanding concerns about hospitals outsourcing their crucial functions to IT companies.  

Quite simply, I think it's insane, at least in the foreseeable future, as this example shows.

It also brings to mind the concern that health IT, as an unregulated technology, creates dangers in hospitals that have inadequate internal disaster-recovery and business-continuity capabilities beyond fresh sheets of paper. Such capabilities would likely be mandatory if health IT were meaningfully regulated.

The Joint Commission, for example, likely issued its stamp of approval to the affected hospitals, hospitals that had outsourced their crucial medical records functions to an outside party that sometimes goes mute. If someone was injured or died due to this outage, the victims would not care very much about the supposed advantages.

There's this in the article:

... “Issue appears to have something to do with DNS entries being deleted across RHO network and possible Active Directory corruption. Outage was across all North America clients as well as some international clients.”

Of course, patient safety was not compromised.
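
For readers wondering how deleted DNS entries can silently sever every client from a remotely hosted system, here is a minimal monitoring sketch of the kind of check a hospital could run on its own side to detect such a mass disappearance before clinicians do. It is purely illustrative; the host names are hypothetical placeholders, not Cerner's actual infrastructure:

    import socket

    # Hypothetical names for remote-hosted services a hospital depends on; a real
    # list would come from the site's own configuration.
    CRITICAL_HOSTS = ["ehr.example-rho.net", "cpoe.example-rho.net", "lab.example-rho.net"]

    def unresolvable_hosts(hosts):
        """Return the subset of hosts whose DNS names no longer resolve."""
        failed = []
        for name in hosts:
            try:
                socket.gethostbyname(name)
            except socket.gaierror:  # name does not resolve (e.g., entry deleted)
                failed.append(name)
        return failed

    if __name__ == "__main__":
        missing = unresolvable_hosts(CRITICAL_HOSTS)
        if missing:
            # In practice this would page the on-call team and trigger downtime procedures.
            print(f"ALERT: {len(missing)} critical DNS entries no longer resolve: {missing}")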

Finally:

Imagine being a patient, perhaps with a complex history, in extremis at the time of this outage.  

I, for one, do not want my own medical care nor that of my relatives and friends subject to cybernetic recordkeeping unreliability and incompetence like this, and the risk it creates.

-- SS

Aug. 8, 2012 addendum:

The Los Angeles Times covered this outage in a story aptly entitled "Patient data outage exposes risks of electronic medical records."

They write:

Dozens of hospitals across the country lost access to crucial electronic medical records for about five hours during a major computer outage last week, raising fresh concerns about whether poorly designed technology can compromise patient care.

My only comment is that the answer to this question is rather axiomatic.

They also quote Jacob Reider, acting chief medical officer at the federal Office of the National Coordinator for Health Information Technology, who said:

"These types of outages are quite rare and there's no way to completely eliminate human error"

This is precisely the type of political spin and misdirection I cautioned the Australian health authorities to evaluate critically.

Paper records (barring a mass outbreak of disappearing ink) and locally hosted clinical IT do not go blank en masse across multiple states and countries for any length of time, raising risk across multiple hospitals greatly, acutely and simultaneously. (Locally hosted IT outages only cause "local" mayhem; see my further thoughts on this issue here).

-- SS

Saturday, January 14, 2012

North Bristol Hits Appointment Problems: Another "Our Lousy IT Systems Screwed Up, But Patient Safety Was Never Compromised" Story

At my Dec. 2011 post "IT Malpractice? Yet Another "Glitch" Affecting Thousands of Patients. Of Course, As Always, Patient Care Was "Not Compromised" referencing prior posts, I wrote:

... At my Nov. 2011 post "Lifespan (Rhode Island): Yet another health IT glitch affecting thousands - that, of course, caused no patient harm that they know of - yet" I wrote:

There's been yet another health IT "glitch" that, of course, caused no patients to be harmed. See other "glitches" here, here, here and at other posts which can be found by searching this blog on the banal term 'glitch'.

Another "our clinical IT crapped out , BUT ... patient care/safety was never compromised" story just arose:


North Bristol hits appointment problems
E-Health Insider
11 January 2012
Rebecca Todd

Clinicians working at North Bristol NHS Trust have expressed concern about disruption to patient care, which they say is caused by appointment problems following the go-live of a new Cerner Millennium electronic patient record system.

I would have entitled the article "North Bristol hit by IT-created appointment problems."

Reported problems include patients being booked into non-existent clinic appointments or not being told about scheduled operations, resulting in some operations being cancelled.

No patient care compromise possible there. Who, after all, needs a timely operation? It frees up a lot of money for IT golf tournaments to let those of no value to society (i.e., the old, and those who will not admit that computers in healthcare will deterministically revolutionize medicine because, well, they're magic) simply die due to delayed or cancelled surgery...

Ehealth Insider understands that some of the problems relate to the way the trust configured the EPR system; including setting up dummy clinics for which appointment letters were subsequently sent out.

It's never the software or computer's fault.

As a matter of fact, I have not seen any official response to the work of Dr. Jon Patrick at U. Sydney on the many software engineering flaws of another product of the same company. His work is entitled "A study of an Enterprise Health information System" and is at this link: http://sydney.edu.au/engineering/it/~hitru/index.php?option=com_content&task=view&id=91&Itemid=146. Do they have Class Action lawsuits in Australia?

In a regional BBC news report, aired on Monday evening, anonymous hospital clinicians called the implementation a “complete shambles” and said it represented a “potential danger” to patients.

According to the BBC report, the problems meant patients were being booked for impossible appointment times, such as 12.05 am, and quoted correspondence saying staff and the system were both on the “verge of meltdown."

The clinician comments are anonymous since non-anonymous reporting would get the clinicians declared health IT apostates, and then excommunicated. Non-anonymous 'whistleblowers' could also fear being sued due to possible gag clauses - the kind of clause hospital executives sign in violation of their fiduciary responsibilities to their staff and to patients. (See my 2009 JAMA letter to the editor "Health Care Information Technology, Hospital Responsibilities, and Joint Commission Standards" at this link and the much-expanded essay on the same themes "Health Care Information Technology Vendors' Hold Harmless and Keep Defects Secret Clauses" at this link.)

Martin Bell, director of IM&T at the trust, confirmed to EHI that North Bristol had experienced some “unexpected problems” in the past few weeks with some of the outpatient appointments and theatre lists.

Bell stressed, however, that patient safety had not been compromised and that this continued to be the top priority.

There's that line again. Perhaps it's part of some hospital administrator JournoList-recommended catchphrase for describing how safety was not compromised during a major workflow disruption?

He said the problems were not down to the software itself, but due to “implementation and data migration difficulties in some clinics."

Right. Quite credible.

“Our information management and technology team, supported by our suppliers BT and Cerner, have been working very hard to sort out any initial issues as quickly as possible and we are already seeing improvements,” he said.

Congratulations are due. They are seeing "improvements" in dangerous clinical IT malfunctions that should never have been seen in the first place, if the statement is true.

“Many wards, our two minor injuries units and the Emergency Department, are successfully using the new system." The trust is one of the largest in the South of England, with more than 1,000 beds.

Just give them time.

EHI understands that as part of the Millennium implementation, dummy clinics were set up. Patients were then sent appointment letters for these clinics in error.

EHI also understands that some patients had also not turned up for scheduled operations because they had not been informed about the booking.

Bell apologised to patients who had been “inconvenienced during this transition period” and said staff had shown real dedication to continue to deliver patient care.

What if someone had been inconvenienced into their grave, or ends up there later as a result of the delays? On what wavelength will the apology be transmitted?

“We firmly believe that the new system, once fully implemented, will improve services for our patients and provide real value,” he said.

That seems to be the mantra, but delivery on such promises is rare. See "Pessimism, Computer Failure, and Information Systems Development in the Public Sector," available here, Public Administration Review 67(5):917-929, Sept./Oct. 2007, Shaun Goldfinch, University of Otago, New Zealand. It is a cautionary article on IT that should be read by every healthcare executive; it documents the widespread nature of IT difficulties and failures and the lack of attention to the issues responsible, and recommends much more critical attitudes towards IT.

A £69m contract for BT to deliver Cerner Millennium to three new, or ‘greenfield’ sites in the South of England was agreed in April 2010, under the auspices of the National Programme for IT in the NHS.

That would not be the failed National Programme for IT in the NHS, the NPfIT what went PfffT, would it?

North Bristol was the last of these three sites to go-live with the system in December last year.

It followed Oxford University Hospitals NHS Trust, which went live a week earlier, and Royal United Hospital Bath NHS Trust, which was the first to go-live in July.

Cerner said it was working closely with North Bristol and BT on the recent implementation of Millennium.

“In complex and large deployments, especially when migrating from two different systems, it is always anticipated that it would take time for the new system to bed-in,” it said in a statement.

The patients are given full informed consent on this issue, right? Right?

“Across much of the trust, the deployment has worked well. However, this is a major change management project and there have been some difficulties with outpatient appointments.

Although this is not a problem with the software, Cerner is working in partnership with BT and trust staff to resolve any issue as quickly as possible.”

Link: BBC News

Right. Perhaps this software and claim needs testing - in a court of law.

The only thing missing is the word "glitch", though I am including that term in this posting's index, since I consider it another story in the ever-growing health IT "glitch" series.

-- SS

Addendum:

A reader sent me this comment:

How can anyone claim the problems at N. Bristol are unexpected? They are EXACTLY the same problems encountered in Taunton five years ago.

The Somerset Trust had sixteen cancelled go-live dates, and when Cerner "Millennium" (note: they never defined which Millennium...) was switched on, the whole hospital went into slow motion.

Appointments could not be made at out-patient reception desks while patients waited, and therefore had to be posted on. Twenty-four whole-time-equivalent clerks had to be employed to manage the backlog of appointment requests. So much for enhanced efficiency and cost savings.

The only possible response to this news is again to remind people of Einstein's famous definition of insanity: "repeating the same thing again and again and expecting a different result."

As for other Trusts, why is there no news of transformed performance by Cerner's systems at other Cerner-implemented sites (Berkshire, Newcastle, Kingston, Oxford, etc.)? The only 'good' news we get is that the system has been switched on.

If any of this expensive activity had really produced data, efficiency or cost gains, we would be drowning in Cerner press releases. The silence can only mean one thing: that their system is performing as poorly at other sites as it has in the South West.

Contrast this with the output and data produced openly by Birmingham University Hospital from its in-house created IT system.

Unfortunately one can only draw one serious conclusion about the whole Cerner/NHS debacle - to paraphrase Mr. Clinton - "It's the (imho substandard) software, stupid!"

This story needs serious investigation ... Recently US news items have started to discount the supposed efficiency gains for e-Health implementations and started to emphasize data capture and patient safety as the imperative for switch on. Unfortunately for Cerner supporters (and other vendors) the US Institute of Medicine's recent report stated unequivocally that there was (to everyone's apparent surprise) no quality evidence that e-Health improved patient safety.

I would contend no drug, therapeutic equipment or operation would or could be implemented in secondary care in the absence of critical and peer-reviewed evidence of benefit [emphasis mine - ed.] - an absence that has characterized the rush to switch on substandard IT solutions in English NHS hospitals.

I note that critical, peer-reviewed evidence, especially evidence based on prospective randomized clinical trials as opposed to anecdotal, weak retrospective observational studies, has been deemed unnecessary in health IT.

Yet serious case reports of risk and injury from credible sources are deemed the true "anecdotes" and discounted. As I've written before, the science of medicine is nearly entirely lacking in the domain of health IT.

To put it in the words of James Le Fanu (channeling Sherlock Holmes) in his very apropos essay entitled "The Case of the Missing Data: The Dog That Didn't Bark", details on contrary strands of evidence that could reasonably have been expected to appear in evidential text are absent.

-- SS

Wednesday, December 28, 2011

IT Malpractice? Yet Another "Glitch" Affecting Thousands of Patients. Of Course, As Always, Patient Care Was "Not Compromised."

At my Nov. 2011 post "Lifespan (Rhode Island): Yet another health IT glitch affecting thousands - that, of course, caused no patient harm that they know of - yet" I wrote:

There's been yet another health IT "glitch" that, of course, caused no patients to be harmed. See other "glitches" here, here, here and at other posts which can be found by searching this blog on the banal term 'glitch'.

Add another case to the health IT glitch file, under the "do we feel lucky today?" patient risk category.

From the Pittsburgh Post-Gazette (I am quoted):


Computer outage at UPMC called 'rare'
Systemwide disruption potentially dangerous, expert warns
Saturday, December 24, 2011
By Jonathan D. Silver, Pittsburgh Post-Gazette

UPMC's electronic medical records system for inpatients went offline for more than 14 hours at nearly all its hospitals in the region, marking what the health system called a "rare" outage, but one that it claims did not harm patients.

First, as my aforementioned Nov. 2011 post and its contained links point out, these events are not as "rare" as they should be. (The asteroid colliding with Earth that caused the extinction of the dinosaurs - now that's a "rare" event.)

Second, as multiple posts on this blog have pointed out, the claims that "no patients were harmed" are both misleading and irrelevant:

Such claims of 'massive EHR outage benevolence' are misleading, in that medical errors due to electronic outages might not appear for days or weeks after the outage, depending on what information was corrupted, lost, misidentified or otherwise mishandled after it is 'backloaded' once the system is up. All it takes is one lost med to cause misery and death. (I can speak about that from unfortunate personal experience.)

Claims of 'massive EHR outage benevolence' are also irrelevant in that, even if there was no catastrophe directly coincident with the outage, there was greatly elevated risk. Sooner or later, such outages will maim and kill.

The outage affected a system designed by Cerner Corp., a global electronic records company, and customized by UPMC that doctors and nurses rely on for communication about patient records, medical orders and prescriptions.

It was unavailable from about 8:45 p.m. Thursday to 11 a.m. Friday at almost all of UPMC's hospitals except for Children's and UPMC Hamot in Erie, spokeswoman Wendy Zellner said.

"This is rare. This kind of widespread, extensive downtime would be rare," Ms. Zellner said.

Doctors and nurses continued to have access to patients' electronic records through backup systems, she said. They also had to resort to using old-fashioned paper records for documentation and orders.

"These things happen. They have really well spelled-out procedures for what to do when something goes down," Ms. Zellner said.

She acknowledged that doctors and nurses faced some challenges.

Faced 'some challenges?' In other words, care was compromised by the outage and the 'challenges' were to avoid medical error (and, of course, to make sure billing was unaffected):

Compromised -
a. To expose or make liable to danger, suspicion, or disrepute
b. To reduce in quality, value, or degree; weaken or lower.


Thousands of patients were affected, again reinforcing my point about how IT can and does greatly amplify the risks of paper -- as in my Rhode Island post -- such as errors and confidentiality breaches.

I cannot, for example, think of a single instance where thousands of paper records became unavailable simultaneously (unless, that is, someone lost the key to the Medical Records department), were made available to identity thieves en masse, or where thousands of medical orders were scrambled or truncated in a relatively short period of time, as in Rhode Island.

These amplified risks could wipe out any advantages of EHR's over paper in a microsecond.


A partial list of facilities apparently affected in this latest episode of EHR mayhem, from this list:

That accounts for several thousand active patients, I am sure.

(12/28 Addendum: Bed counts of PA hospitals are here. Searching on "University of Pittsburgh Medical Center", it can be seen that thousands of beds were indeed involved.)

"Whenever people aren't working in their native system and workflow I have to believe that is more cumbersome for the clinicians, but these folks are well-trained in what to do when these things happen."

This seems at best an insensitive and perhaps even inhumane bit of P.R. More "cumbersome" for the clinicians? What about the poor patients? How would Ms. Zellner feel, I wonder, if it were her mother, child or significant other on the Operating Room table or having an acute MI when the EHR/CPOE systems went down?

Ms. Zellner said UPMC's public relations staff was unaware of the outage until contacted by a reporter.

It appears P.R. is not very high on the list for receiving information when a crisis arises. I may have known of the outage before they did.

The outage was caused by a "bug" or glitch in software designed by a vendor affiliated with Cerner, Ms. Zellner said. She refused to identify the company.

"We're not trying to point fingers at different vendors. It's a database bug, that's all I can tell you."

(That is, it's not our fault, it's the fault of the database vendor. Hospitals, I regret to inform you - you are responsible for unapproved medical devices used in your facilities, no matter what the source.)

And there's that word "glitch" again, accompanied by the equally banal "bug."

It's just a "bug." Cute little critter!

Me again in the Post-Gazette:

Scot M. Silverstein, a doctor and assistant professor of Healthcare Informatics at Drexel University in Philadelphia, disagreed with the use of the terms "bug" and "glitch."

"What occurred here was a disruptive, potentially dangerous major malfunction of a life-critical enterprise medical device," he said.

Somehow, when a clinician makes a mistake, the terms "bug" and "glitch" are never used. In fact, when clinicians fail to meet accepted professional standards of healthcare practice, it is called "malpractice."

I think we can all agree that a major, near-full-day outage of an enterprise EHR affecting multiple hospitals and thousands of patients does not meet accepted professional standards of life-critical computing practice. Yet all this merits is the word "glitch." It seems to me that if patients are harmed during such events by what is, on its face, IT malpractice, then not only the clinicians involved should be held liable.

Ms. Zellner said the problem was not a "crash" of the system because there were alternate methods used to cope that prevented patient care from being compromised.

The usual refrain. Let me repeat my definition of "compromised:"

Compromised -
a. To expose or make liable to danger, suspicion, or disrepute
b. To reduce in quality, value, or degree; weaken or lower.

A simple question - if extended EHR outages like this never seem to "compromise" care, then why not eliminate health IT entirely and spend the hundreds of millions saved on patient care?

"This is not a crash of Cerner either," Ms. Zellner said. "I think a crash is, 'Oh my God, the sky is falling,' nobody can get anything."

I leave it to the readers to ascertain the computer expertise levels and reasonableness of what Ms. Zellner thinks a "crash" is.

Technicians from UPMC, Cerner and the third company [the 'mystery' database company? - ed.] worked together on-site to identify and fix the problem. Ms. Zellner said she did not know why it took 14 hours to fix and the underlying cause was still unclear.

"They know what the problem is and I believe it's been fixed, but we really don't know what triggered it," Ms. Zellner said. "I think the next step would be some actual software upgrades."

They "don't know what triggered the 'problem'" - is a proper translation that they have no idea what went wrong?

In fact, regarding another Cerner EHR system which was extensively studied (see "A Study of an Enterprise Information System" at this link), Dr. Jon Patrick came to the conclusion that one of the sources of catastrophic failures is poor software engineering that has made the behavior of the studied system "non-deterministic." Further, software upgrades are not protected from incremental changes made by maintenance and customization staff, and may introduce even more instability.

A software upgrade without clearly understanding "what triggered the problem" is simply asking for more trouble. (My bet, however, is that they attempt it anyway.)

A Cerner representative could not be reached for comment.

What's to say?

How about this:

Dr. Silverstein said based on what he was told about the computer outage, it means that hospital medical staff would have been unable to update patient charts and probably would not have been able to issue any orders through the system during the time it was off line.

He also questioned how up-to-date the hospital's redundant records were.

Repeating UPMC's statement from the article that appeared after I gave my quotes to the reporters: "Doctors and nurses continued to have access to patients' electronic records through backup systems, [the UPMC spokesperson] said. They also had to resort to using old-fashioned paper records for documentation and orders."

My stated fears of disruption and increased risk due to compromised care seem well-grounded.

In May, Allegheny General Hospital had to shut its electronic medical records computer system down because of problems with the vendor's hardware.

The hospital used backup procedures to continue care for patients, including using paper orders and record-keeping.

Wait ... I thought I'd heard these events were "rare." Two in the same city within six months?

---------------------------

Truth be told:

The primary rule in computing is:


Either you are in control of your information systems, or they are in control of you.

Clearly the latter was the case here.

The following questions arise:

  • Was the software containing the "bug" properly vetted before being used on live patients? This is not just the vendor's obligation.
  • If it was not vetted properly, why not?
  • Was it an "upgrade" or patch? (If so, the same vetting rules apply.)

Further, the soft-selling of these incidents must end. The use of terms such as "bug" and "glitch" must also end. What occurred here, echoing my newspaper quote, was a disruptive, potentially catastrophic major malfunction of a life-critical enterprise medical device.

System-wide EHR crashes are not merely ‘glitches’ or ‘bugs.’ They need to be considered, as in medicine itself, as 'never events.' From AHRQ:

The term "Never Event" was first introduced in 2001 by Ken Kizer, MD, former CEO of the National Quality Forum (NQF), in reference to particularly shocking medical errors (such as wrong-site surgery) that should never occur. Over time, the list has been expanded to signify adverse events that are unambiguous (clearly identifiable and measurable), serious (resulting in death or significant disability), and usually preventable.

Further, re: "patient care was never compromised." How do they know that? In fact, this is 'spin' and word games on its face. By definition, if CPOE and chart updating was unavailable, patient care was compromised, where "compromised" means "increased levels of risk for error were created, requiring workarounds."

Further, as mentioned earlier, harms might not show up for some time. Lost orders, corrupted data, errors of omission or commission in transcribing backup paper records into the computer ("backloading"), etc., can take their toll later. Post-outage vigilance is essential, putting even more stress on clinicians and increasing the likelihood of further error - stress they certainly do not need. Clinicians are stressed enough already.
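
Because backloading errors are exactly the kind of harm that surfaces later, a post-outage reconciliation of paper orders against what was actually entered into the EHR is one obvious piece of that vigilance. A minimal sketch, assuming hypothetical order records keyed as (patient_id, medication, dose) tuples and not modeled on any real EHR schema:

    # Illustrative only: compare orders captured on paper during the outage against
    # what was backloaded into the EHR, and flag anything missing or unexplained.

    def reconcile_orders(paper_orders, backloaded_orders):
        """Each argument is an iterable of (patient_id, medication, dose) tuples."""
        paper = set(paper_orders)
        backloaded = set(backloaded_orders)
        return {
            "missing_from_ehr": sorted(paper - backloaded),  # written on paper, never entered
            "not_on_paper": sorted(backloaded - paper),      # entered, but no paper source
        }

    # Hypothetical usage:
    #   report = reconcile_orders(paper_orders, ehr_orders_since_outage)
    # Every entry in report["missing_from_ehr"] is a potential lost order needing review.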

Finally:

IT personnel have not only deliberately inserted themselves into clinical affairs (e.g., via the HITECH Act of 2009), they have also done so with a stunning arrogance and unproven braggadocio about their systems "revolutionizing" medicine (whatever that means).

Indeed, they need to accept the medical responsibility and obligations this territorial intrusion entails.

On its face, this massive outage was the result of issues that did not meet accepted professional standards of IT practice for life-critical environments. Res ipsa loquitur.

Something was not vetted properly, there was a lack of redundancy, and the IT personnel were NOT in control of their systems.

Just as when physicians don't provide care that meets accepted professional standards of healthcare, this incident and others like it are, by definition, a result of IT malpractice.

If patients are harmed, IT personnel and their management (often non-IT C-level officers) involved in this system need to be held accountable.

If they can't take the clinical heat (as clinicians do daily since the time they enter medical or nursing school), then they need to get out of the clinical kitchen.

-- SS

Note: see this take on these matters at the HIStalk blog:

UPMC’s Cerner systems go down for 14 hours at most campuses last Thursday and Friday, forcing them to go back to paper. The PR person blamed “a database bug,” which makes the above Oracle press release from this past summer a particularly fun read. Cerner and UPMC have an atypical vendor-customer relationship since they’ve invested big money together in innovation projects and UPMC runs a Cerner implementation business overseas.

Now we know who the unnamed "mystery database vendor" is...

-- SS

Dec. 29, 2011 Addendum:

Was UPMC acting as a "proving ground" for some Oracle-Cerner-UPMC experimental health IT technology that resulted in the crash? The claim of being an IT "proving ground" has been made in the past:

Pittsburgh Tribune
May 2, 2006
UPMC partners with technology provider

The University of Pittsburgh Medical Center is taking another step in a quest to commercialize new medical technology.

UPMC on Monday signed a three-year deal with health care information technology provider Cerner Corp. to develop and market medicine-related technological advances. Both parties will contribute $10 million in cash, services and intellectual property to the effort.

The deal is a smaller version of an April 2005 deal between UPMC and information technology behemoth IBM.

As is the case in the IBM deal, UPMC will serve as a built-in proving ground for jointly developed technologies and products, with Cerner marketing the products and UPMC awarded a share of profits.

As I wrote at "Proving Ground for IT Tests On Children: Pioneers in Health IT, or Pioneers in Ignoring the Past?":

"A hospital and patients are not a learning lab for HIT vendors. The appropriate "proving ground" for new medical technology is the controlled clinical trial where participants (in this case, patients and healthcare professionals alike) have freedom of choice whether or not to participate, and a chance to give (or deny) consent after being fully informed of potential risk."This is a fundamental human rights issue.
-- SS

Wednesday, January 26, 2011

Orderless in Seattle: Software "glitch" shuts down Swedish Medical Center's medical-records system

A commenter yesterday noted that after 25 years in practice, they had lost one [paper] chart (as opposed to an IT systems crash, where every chart is lost temporarily).

As coincidence would have it, there's this story in the news:

Software glitch shuts down Swedish medical-records system
Tuesday, January 25, 2011
By Carol M. Ostrom
Seattle Times health reporter

A four-hour shutdown of Swedish Medical Center's centralized electronic medical-records system Monday morning was caused by a glitch in another company's software, said Swedish chief information officer Janice Newell.

There's that word "glitch" again that I see so frequently in the health IT sector when a system suffers a major crash that could harm patients. Why do we not call it a "glitch" when a doctor amputates the wrong body part, or kills someone?

The system, made by Epic Systems, a Wisconsin-based electronic medical-records vendor, turned itself off because it noticed an error in the add-on software, Newell said, and Swedish was forced to go to its highest level of backup operation.

Turned itself off? Back we go to the old Unix adage that "either you're in control of your information system, or it's in control of you."

To prove that point, note that "the highest level of backup operation" had a bit of a problem:

That allowed medical providers to see patient records but not to add or change information, such as medication orders.

I'm sure sick and unstable patients, such as those in the ICUs, as well as their physicians and nurses, appreciated this minor "glitch." Look, Ma, no orders!

(Do events like this ever happen in the middle of a Joint Commission inspection?)

The "glitch" didn't just affect a few charts:

The outage affected all of Swedish's campuses, including First Hill, Cherry Hill, Ballard and its Issaquah emergency facility, as well as Swedish's clinics and affiliated groups such as the Polyclinic.

I cannot imagine a paper-based "glitch" that could affect so many, so suddenly, other than a wide-scale catastrophe.

During the outage, new information was put on paper records [that 5,000 year old, obsolete papyrus-based technology that's simply ruining healthcare, according to the IT pundits - ed.] and transferred into patient records in the Epic system after the system went back up in the afternoon. [By whom? Busy doctors? - ed.] Epic, Newell said, is "really good at fail-safe activity," and if it detects something awry that could corrupt data, it shuts itself off, which it did Monday at about 10 a.m.

Which means that interfaced systems need to undergo the highest levels of scrutiny in real-world use if they can, in effect, shut down an entire enterprise clinical system. (A minimal sketch of one way to isolate such an add-on appears below.)

I note that the "other company's software" that brought the whole system to a grinding halt was not identified, nor was the nature of the "other vendor's" software "glitch" itself. Was the problem truly caused by "another vendor" via a bug in their product, via a faulty upgrade, or by an internal staff error related to the "other vendor's" software?

It seems we now have yet another defense for HIT "glitches" other than "blame the users": it's not OUR fault; blame the other vendors.
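
On the isolation point above: here is a minimal circuit-breaker sketch, purely illustrative and not drawn from Epic's or any vendor's actual architecture, showing how a core system can degrade gracefully when an interfaced add-on misbehaves instead of shutting itself down entirely:

    # Illustrative only: wrap calls to an interfaced add-on so that repeated
    # failures trip a breaker and the core system falls back to a degraded mode
    # (e.g., read-only charts) rather than halting outright.

    class CircuitBreaker:
        def __init__(self, failure_threshold=3):
            self.failure_threshold = failure_threshold
            self.failures = 0

        @property
        def open(self):
            return self.failures >= self.failure_threshold

        def call(self, func, *args, fallback=None, **kwargs):
            if self.open:
                return fallback    # breaker tripped: skip the add-on entirely
            try:
                result = func(*args, **kwargs)
                self.failures = 0  # a healthy call resets the count
                return result
            except Exception:
                self.failures += 1
                return fallback    # degrade for this call; the core system keeps running

    # Hypothetical usage:
    #   breaker = CircuitBreaker()
    #   data = breaker.call(addon_interface.fetch, patient_id, fallback=None)
    #   if data is None: render_read_only_view(patient_id)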

Newell said the shutdown likely affected about 600 providers, 2,500 staffers and perhaps up to 2,000 patients, but no safety problems were reported.

As I've noted at this blog before, it is peculiar how such "glitches" never seem to produce safety problems, or even acknowledgments of increased risk.

Staff members were notified of the shutdown via error messages, e-mails, intranet, a hospital overhead paging system and personal pagers.

"Warning! Warning! EHR and CPOE down! Grab your pencils!" Just what busy doctors and nurses want to hear when they arrive for a harrowing day of patient care.
I wonder if the alert was expressed in a manner not understandable to patients, i.e., "Code 1100011" (99 in binary!) or something similar as in a medical emergency.

Newell said she was "99.9 percent sure" other hospitals have had similar shutdowns [that's certainly reassuring about health IT - ed.], because software, hardware and even power systems are not perfect. [That's why we have resilience engineering, redundancy, etc. - ed.]
"Anybody who hasn't had this happen has not been up on an electronic medical record very long," Newell said. "I would bet a year's pay on that."

A logical fallacy to justify some action or situation can take the form of an appeal to common practice. Is what I am seeing here what might be called an appeal to common malpractice?

Or is the fallacy simply a manifestation of the adage "misery loves company?"

Newell said this is not the first shutdown of Epic, which was fully installed in Swedish facilities in 2009 after a nearly two-year process. But it was the longest-running one, she acknowledged.


Swedish is exploring creating "more sophisticated levels of backup" with other hospitals, Newell said, locating a giant server in a different geographic area to protect against various disasters such as earthquakes or floods.

Maybe they should have done that after the aforementioned other "glitches."

I repeat the adage:

"Either you're in control of your information system, or it's in control of you."

Indeed, if the information system is mission-critical, and you cannot control it, you literally have no business disrupting clinicians en masse and putting patients at risk by letting it control you.

Finally, on the topic of 'cybernetic extremophiles', I note that we have several Mars Rovers and very distant space probes such as Voyager 1 whose onboard computers (in the case of Voyager, built long ago with much less advanced technology than today's IT) have been working flawlessly in environments far more hostile than a hospital data center, and long beyond their stated life expectancies.

The Voyager 1 spacecraft is a 722-kilogram (1,592 lb) robotic space probe launched by NASA on September 5, 1977 to study the outer Solar System and eventually interstellar space. Operating for 33 years, 4 months, and 22 days, the spacecraft receives routine commands and transmits data back to the Deep Space Network. Currently in extended mission, the spacecraft is tasked with locating and studying the boundaries of the Solar System, including the Kuiper belt, the heliosphere and interstellar space. The primary mission ended November 20, 1980, after encountering the Jovian system in 1979 and the Saturnian system in 1980.[2] It was the first probe to provide detailed images of the two largest planets and their moons.

As of January 23, 2011, Voyager 1 was about 115.963 AU (17.242 billion km, or 10.8 billion miles) or about 0.00183 of a light-year from the Sun. Radio signals traveling at the speed of light between Voyager 1 and Earth take more than 16 hours to cross the distance between the two.


While these are exceptional case examples of resilience in IT systems far less complex than hospital IT, I believe healthcare can do better in terms of computer "glitches" affecting mission critical systems that are a bit closer than 10 billion miles away.

-- SS