Planning Keeps Joplin Hospital Data Online
When so little was going Joplin’s way during the May 22 tornado that destroyed parts of the city, planning and a bit of good timing spared St. John’s hospital from what might have been additional tragedy. In April, the Mercy hospital system, which includes St. John’s, opened a state-of-the-art data center housing mission-critical applications and clinical data for its 28 acute care hospitals across a four-state region.
The new data center, in Washington, Mo. (and backed up at another location), is about 250 miles from Joplin and was unaffected by the violent weather. Three weeks before the tornado hit, St. John’s completed its scheduled switch to Mercy’s new Epic electronic health records system, the last major hospital in the network to do so.
Even as the tornado ravaged the hospital and its on-site data center, tossing some X-rays as far as 70 miles from Joplin, caregivers treating residents in dire need of attention could still pull fast, accurate medical data from Mercy’s systems.
“We’ve got the connectivity, so for us it doesn’t really matter where it’s at physically,” says Mike McCreary, chief of services for Mercy Technology Services.
Joplin resident Paul Johnson, 78, was hospitalized with pneumonia at St. John’s when the tornado struck. Guided safely by his family from the facility to a triage center, Johnson was then taken to another Mercy hospital in Springfield, Mo., where all of his electronic records were available.
“I knew that they would want to know my medications, dosages and what tests had been done, and I knew that I couldn’t remember all of it,” Johnson said, according to the release from Mercy. “The doctors in Springfield were able to pull up my records and ask me questions. It worked out beautifully.”
Analysts say many small steps add up to disaster planning that looks prescient when an unpredictable event strikes, as it did in Joplin.
Jason Schafer, research manager of data center technologies for Tier1 Research, says risks like weather and geography are obvious to data center and IT managers. Less obvious is whether an organization can maintain data operations and quickly recover multiple systems from the top down; failures there are often symptomatic of recovery plans that omit day-to-day users or rush the training process, Schafer says.
Commissioning, integration and final testing by a data center’s operational staff are critical to effective emergency procedures.
“You may not be able to prepare for everything, but if you run through 10 or 15 different emergency scenarios and you know how the system reacts in every case, your ability to respond to an incident outside of your typical [emergency] list is enhanced,” says Schafer.
While the cost of protecting mission-critical services can be daunting, the majority of data and applications can be covered by lower-cost options such as replicated disk-based backups or cloud services, says Rachel Dines, a Forrester Research analyst who follows infrastructure and operations. For data center operations, Dines says the most resilient sites have N+1 or 2N (fully redundant) infrastructure for power, network feeds and cooling, and she recommends the Uptime Institute’s tier rating system as a yardstick for resiliency and availability.
When planning for disruptive events, the biggest risks to data centers are usually less outwardly catastrophic, Dines says.
“For many companies, this won’t necessarily arrive as a natural disaster, but in more mundane events like a power failure or human error that can cause a major data center ‘disaster,’” she says.