In recent months, an onslaught of ransomware attacks has been directed against healthcare institutions.

Kansas Heart Hospital in Wichita paid hackers a $17,000 ransom, only to receive a demand for a second payment before the attackers would release all of its data; Hollywood Presbyterian Medical Center in Los Angeles also paid a $17,000 ransom. Other recent high-profile victims include San Diego-based Alvarado Hospital Medical Center; Chino Valley Medical Center and Desert Valley Hospital, both in Southern California; King’s Daughters’ Health in southeast Indiana; Methodist Hospital in Kentucky; and Ottawa Hospital.

These incidents likely represent just the tip of the iceberg. According to a recent HIMSS study, more than half of all U.S. hospitals have been the victims of some form of ransomware. The U.S. Department of Homeland Security and the Canadian Cyber Incident Response Centre, along with private companies like Intel and Symantec, are all warning that the incidence of these data hijackings is skyrocketing and that the attacks are becoming much more sophisticated.

In June, Information Management's sister publication, Health Data Management, hosted an executive roundtable to discuss how hospitals and other healthcare providers are coping with this threat.

The roundtable, moderated by HDM Editor-at-Large Elliot Kass, brought together 10 industry executives: Robert Dalrymple, enterprise information security officer at Thomas Jefferson University and Jefferson Health; John Donohue, associate CIO of technology and infrastructure at Penn Medicine; George Dunn, manager of IT security at South Nassau Communities Hospital; Arthur Harvey, vice president and CIO for Boston Medical Center; Kathy Hughes, vice president and chief information security officer for Northwell Health (formerly the North Shore–LIJ Health System); John Mertz, vice president and CIO at South Nassau Communities Hospital; Jamie Nelson, senior vice president and CIO at Hospital for Special Surgery; Scott Ruthe, vice president of network and security for CIOX Health; Daniel Sergile, director of security operations at CIOX Health; and Maria Suarez, chief information security officer for Hackensack University Medical Center.

The event was sponsored by CIOX Health, a company that offers products to manage health information. What follows is an edited version of the lively and informative discussion that took place.


Elliot Kass: Just how widespread a threat is ransomware? Do you believe the results of the HIMSS survey that one out of two hospitals in this country has already received this kind of threat?

George Dunn: I believe the numbers. I think it might be higher than that. I think the associated questions would be: Did they pay the ransom? How did they act on it? Was it a single incident—a ransom note on a random workstation—or something that affected the operation of the hospital? I think that’s the real question—were they impacted by it? Ransomware is like any other malware threat—you see it all the time.

Jamie Nelson: When I speak to colleagues, it’s not if; it’s when you’ll get hit. So if you go in with that mind-set, then you prepare and you build your insurance policy. We talk about healthcare, but this affects all industries. It affects your home PC. It’s everywhere. So, yeah, it’s going to happen.

John Donohue: We’ve taken the same type of approach: It’s not if; it’s when. And we spend as much time putting protections in place as we do preparing for what to do when it happens. Do we pay the ransom? What are the ethics around that? Is it in our best interest to do that? So we’ve been balancing protection with what I’ll call our breach response protocol. Making sure it happens properly; making sure we reach out to law enforcement; making sure our public relations people have the right message for the media. So there are a lot of things that we’ve done on that front that we think are important.

Kathy Hughes: I don’t think it’s a threat; I think it’s a reality. And I don’t know of any healthcare system that has not been a victim of ransomware. Everybody I’ve reached out to has had some kind of event, and it’s been very painful. So I agree—it really is a matter of learning from other people’s experiences, so you can be better prepared and respond in an effective way.

Daniel Sergile: But it also highlights another issue: Companies don’t do a very good job with their backup and recovery. If I were doing monthly backups and daily intermittent backups, then I wouldn’t have to pay a $17,000 ransom. I’d literally take a snapshot, lose a day’s worth of data, and it would probably cost less than $17,000. It goes back to the basics of information security: Do employees have administrative rights across the entire environment? Are those rights a little too elevated, allowing them to modify their systems? And at the system level, are we investing in all the latest and greatest flavors of antivirus and employee analytic tools? If we go back to basics and do what needs to be done—not to the point where it cripples the business, but secures it—then I think you’d see a lot fewer people paying that ransom.
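
That "back to basics" point lends itself to simple automation. Below is a minimal sketch of a backup freshness check, assuming nightly snapshots land as files in a single directory; the path, age threshold and alerting behavior are hypothetical placeholders, not any vendor's tooling.

```python
"""Backup freshness check: a minimal sketch of verifying that recent backups exist."""
import time
from pathlib import Path

BACKUP_DIR = Path("/mnt/backups/ehr")   # hypothetical snapshot location
MAX_AGE_HOURS = 26                      # nightly backup plus a grace period

def newest_backup_age_hours(backup_dir: Path) -> float:
    """Return the age, in hours, of the most recently modified backup file."""
    if not backup_dir.is_dir():
        return float("inf")
    snapshots = [p for p in backup_dir.iterdir() if p.is_file()]
    if not snapshots:
        return float("inf")
    newest = max(p.stat().st_mtime for p in snapshots)
    return (time.time() - newest) / 3600.0

if __name__ == "__main__":
    age = newest_backup_age_hours(BACKUP_DIR)
    if age > MAX_AGE_HOURS:
        # In practice this would page the on-call admin or open a ticket.
        print(f"ALERT: newest backup is {age:.1f} hours old (limit {MAX_AGE_HOURS}h)")
    else:
        print(f"OK: newest backup is {age:.1f} hours old")
```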

John Mertz: The FBI estimates that over $200 million in ransom was paid during the first quarter of 2016, and that $1 billion in ransom will be paid for the full year. That’s serious money, and this problem is going to be immense, because this billion dollars is going to be reinvested. The attackers are getting more sophisticated. And great backups are not going to solve the problem. They get encrypted or they’re off-site … Try to restore seven terabytes of database from a tape. It’s just not going to happen.

Elliot Kass: In light of the whole wave of ransomware attacks, have you put special precautions in place or changed your security strategy?

Jamie Nelson: The Hollywood Presbyterian attack brought attention to the problem and gave us more access to dollars to build up our security function, to get certain staffing in place, to add tools. The boards are now interested. Our CEO wants a plan in place. Getting it out there has been very helpful—but it’s still just another type of security that we had to prepare for.

Maria Suarez: What’s different is that your user population needs to know what to do if a ransom message appears on their screen. Do they power off, disconnect from the network or do both? Your user community has to know exactly what to do. By the way, the right answer is to disconnect from the network and not power off—rely instead on whatever mechanism you have to trigger an incident response. Do not power off. So the users have to know that. Assuming that you have the basic hygiene—the incident response plans, the remediation, the patching, the hardening, the configurations—in place, then the only additional consideration is that if you don’t have a fast, automatic way of detecting and responding to zero-day malware—either at the network level or at the endpoint level—you need to get one. Because what’s happening with ransomware is that the hijackers take old ransomware, change it a little bit, morph it, and the signature-based antivirus solutions don’t work. So we need to make sure we have those automated systems in place to detect and respond to zero-day malware.
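
To make the "behavior, not signatures" point concrete, here is a toy sketch of one behavior-based tripwire: canary (decoy) files that, if modified or deleted, signal likely encryption in progress. The file paths and the response shown are hypothetical; commercial endpoint tools implement this idea far more robustly.

```python
"""Canary-file monitor: a toy, behavior-based ransomware tripwire."""
import hashlib
import time
from pathlib import Path
from typing import Optional

# Decoy documents seeded ahead of time in folders ransomware typically encrypts first.
CANARIES = [
    Path.home() / "Documents" / "_canary_0001.docx",
    Path.home() / "Documents" / "_canary_0002.xlsx",
]

def fingerprint(path: Path) -> Optional[bytes]:
    """Hash a canary file; a missing file (deleted or renamed) is also suspicious."""
    try:
        return hashlib.sha256(path.read_bytes()).digest()
    except FileNotFoundError:
        return None

def monitor() -> None:
    baseline = {p: fingerprint(p) for p in CANARIES}
    while True:
        for p in CANARIES:
            if fingerprint(p) != baseline[p]:
                # The right response, per the discussion: isolate the host from the
                # network (do NOT power it off) and trigger the incident response plan.
                print(f"ALERT: canary {p} changed; possible ransomware activity")
        time.sleep(10)

if __name__ == "__main__":
    monitor()
```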

George Dunn: One other big difference is that you have a different decision tree, because now you have the option of paying the ransom. You have still had a major incident. You have a major cleanup, and you have to follow through. But there is still that decision to make—do we pay and get back online? So, you have to have a decision tree in place, along with the correct criteria to determine what the impact is if you don’t pay, and then make the decision accordingly. And if you want the option of paying, setting up a bitcoin account isn’t necessarily a five-minute operation.

Jamie Nelson: Our compliance department has put together a document about what it would take to fund a bitcoin account, which ATMs around the hospital we could go to—we’ve actually started thinking that through. You just have to.

Arthur Harvey: How many people in this room have a bitcoin account just for that purpose? I’ll admit it; I got one. I’m not saying we should necessarily pay the ransom, but if push comes to shove, take the administrative burden out of it.

Maria Suarez: Even if you pay the ransom, there’s never any guarantee they haven’t looked at your data and kept it stored someplace. So even if they give you the decryption key so you can have your data back, there’s no guarantee they won’t return and extort you again.

John Mertz: That’s the real danger. They put this payload in your system and let it sit there for two months before they encrypt everything. So your backups, they’re as bad as your live data.

Kathy Hughes: I think 146 days is the average time to detection.

Robert Dalrymple: What we have seen is that most of our vulnerability is coming through e-mail. So we are trying to educate our users on the proper way to handle e-mails. If it’s an unsolicited e-mail, just ignore it. Send it to the spam folder; we’ll investigate. And—of course—don’t click on any attachments within the e-mail.

Maria Suarez: How many people have a whitelist-only type of e-mail system? Because most e-mail systems allow everybody through and put the onus on the user or the e-mail tech team to blacklist those you don’t want. Why not flip it? Why not disallow everybody and then establish only those you want to get e-mails from?
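
As a rough illustration of that default-deny idea, the sketch below checks an inbound message's sender domain against an explicit allowlist and holds everything else for review. The domains and the quarantine action are hypothetical; a real deployment would enforce this on the mail gateway itself.

```python
"""Default-deny ("whitelist-only") sender check: the policy decision only, not a mail gateway."""
from email import message_from_string
from email.utils import parseaddr

ALLOWED_DOMAINS = {"hospital.example.org", "trusted-payer.example.com"}  # hypothetical

def is_allowed(raw_message: str) -> bool:
    """Allow only senders whose domain appears on the allowlist."""
    msg = message_from_string(raw_message)
    _, address = parseaddr(msg.get("From", ""))
    domain = address.rpartition("@")[2].lower()
    return domain in ALLOWED_DOMAINS

# Anything not explicitly allowed is held for review rather than delivered.
sample = "From: Payroll <updates@unknown-sender.example>\nSubject: Salary update\n\nClick here."
print("deliver" if is_allowed(sample) else "quarantine")   # -> quarantine
```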

Arthur Harvey: The challenge is that our users—and in many cases, our lords and masters—are physicians. And they have genuine concerns that if we make the security too intrusive, we will interfere with their ability to care for patients. Look, I’m an engineer. I get it. I could have a whitelist; I could eliminate web-based e-mail; I could prohibit all mobile devices… and I’d be looking for another job in about four minutes.

Maria Suarez: But alternatives are needed. For example, shutting down web-based e-mail and allowing remote access to e-mail only through Citrix portals—there are alternatives that make things more secure.

Jamie Nelson: We’ve done a few things: For OWA [Outlook Web Access], we have two-factor authentication. We tag every external e-mail with a big, ugly “EXTERNAL” in caps so people know that this is not coming from a hospital employee. We also sandbox. We actually look at attachments, and if we don’t like them, we explode them. So we’re trying to help our users, because we’re with you 100 percent. You can’t shut down their personal e-mails, but you need to make them safe.
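
The "EXTERNAL" tag is usually configured as a transport rule on the mail gateway; purely for illustration, the sketch below shows the rewrite step in Python, with a hypothetical internal domain.

```python
"""Subject tagging for external mail: a minimal sketch of the rewrite step only."""
from email.message import EmailMessage
from email.utils import parseaddr

INTERNAL_DOMAINS = {"hospital.example.org"}   # hypothetical internal domain

def tag_if_external(msg: EmailMessage) -> EmailMessage:
    """Prefix the subject of any message whose sender is not an internal address."""
    _, sender = parseaddr(msg.get("From", ""))
    domain = sender.rpartition("@")[2].lower()
    subject = msg.get("Subject", "")
    if domain not in INTERNAL_DOMAINS and not subject.startswith("[EXTERNAL]"):
        del msg["Subject"]                    # headers must be removed before re-adding
        msg["Subject"] = f"[EXTERNAL] {subject}"
    return msg

msg = EmailMessage()
msg["From"] = "vendor@partner.example.com"
msg["Subject"] = "Invoice attached"
print(tag_if_external(msg)["Subject"])        # -> [EXTERNAL] Invoice attached
```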

John Donohue: It’s a balancing act. We have the same issue with physicians who are dissatisfied if they can’t get access as quickly as they need to. It’s striking that right balance, and constantly changing that balance as the threats change.

Elliot Kass: Why isn’t more frequent, off-line backup the perfect answer to ransomware? What are the limitations there?

Arthur Harvey: It’s complicated and hard to do from an operational perspective. It worked great when this nonsense started. They were encrypting right away, so you could go to your backup from yesterday and just lose a day’s work. No big deal. But now, in some cases, they’re hanging around, so you’re backing up files that are already corrupted or already encrypted.

Kathy Hughes: We’ve been having discussions with a number of vendors, because you really need to make sure that you not only have a backup, but that the backup is clean and not on the network. Traditionally, that’s been done through tape, but it’s no longer feasible to rely on tape in the event of an attack. That’s where air-gap technology comes into play.
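
For a sense of what that can look like in practice, here is a rough, heavily simplified sketch: the newest snapshot is copied to a secondary volume that is attached only for the duration of the copy, then detached. The device, paths and mount commands are hypothetical and Linux-specific; a true air gap also requires separate credentials and physical or logical isolation from the production network.

```python
"""Offline ("air-gap-style") backup copy: attach, copy the newest snapshot, detach."""
import shutil
import subprocess
from pathlib import Path

SNAPSHOT_DIR = Path("/mnt/backups/ehr")      # hypothetical on-network snapshot folder
OFFLINE_MOUNT = Path("/mnt/offline-vault")   # hypothetical mount point for the offline volume
OFFLINE_DEVICE = "/dev/sdz1"                 # hypothetical removable/secondary device

def copy_newest_snapshot_offline() -> None:
    """Mount the offline volume, copy the newest snapshot onto it, then unmount."""
    newest = max(SNAPSHOT_DIR.iterdir(), key=lambda p: p.stat().st_mtime)
    subprocess.run(["mount", OFFLINE_DEVICE, str(OFFLINE_MOUNT)], check=True)
    try:
        shutil.copy2(newest, OFFLINE_MOUNT / newest.name)
    finally:
        # Detach so an infected host can no longer reach this copy over the network.
        subprocess.run(["umount", str(OFFLINE_MOUNT)], check=True)

if __name__ == "__main__":
    copy_newest_snapshot_offline()
```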

Daniel Sergile: Only administrators, not your user population, should have access to your backup disks. So if a disk backup gets encrypted, one of your administrators had to be compromised. In our environment, administrators surf the web on one account; when they administer, they do it on another management account and never the two shall meet. So, for a backup disk to get encrypted, it seems like the administrator was out surfing the web and maybe didn’t have a separate account with which to do it.

Maria Suarez: Ransomware is evolving and has become more sophisticated. Now it actually spreads the encryption to mapped drives.

Arthur Harvey: If you set up your network with the appropriate security protocols, this type of encryption shouldn’t get to the backup. If you do it right, you theoretically should have a backup device on your network that is completely separate from the normal traffic on the operating network. Now, does that always happen? I don’t know. We certainly test and look for that.

Scott Ruthe: That scenario should be part of your risk assessment. It should also be considered when you plan your architecture. How are you isolating your network? How are you controlling the users? We’re bringing together four companies into one. One of those companies didn’t have quite the same security, and their users talked directly to the production disks. So we’re rearchitecting to make sure that doesn’t happen. They shouldn’t be able to go directly into the production disks and kill off everything else.

Elliot Kass: What kind of business continuity planning are you currently doing? And what kind of drills and tests go along with that?

Jamie Nelson: When you start doing business continuity planning, you have to pretend that your network got cut and you have nothing. I think that came out when we learned about Hollywood [Presbyterian]—the fact that everything was gone.

Robert Dalrymple: We want to establish a formal BC program, so we are working with the various business units to set the criticality for our systems and applications. That way we can determine how best to provide high availability and set recovery point objectives and recovery time objectives. It’s new for us at Jefferson, but it’s been elevated to the level of the board. They really want us to have a comprehensive plan for how we would tackle an event such as ransomware, where we are brought off-line. Because we have so many systems in place, we’ve found that tabletop exercises are really handy. You bring your clinicians to the table; you bring in your senior leadership and just walk through scenarios. It’s eye-opening what you don’t think about and what you haven’t considered. That’s been really effective for us.

Arthur Harvey: We have a director of emergency management whose job it is—when the excrement hits the rotary cooling device—to make sure that we’re good to go as a hospital. This includes hostage situations, helicopter crashes, bombs going off during the marathon, that sort of thing. So we do those exercises, and I said to her, “Maureen, if somebody whacks one of the data centers, what do we do? Why don’t we do an exercise around that?” She thought that was spiffy keen, and we got a number of the Boston hospitals to participate, because part of our planning is for when we have to go code black and move patients somewhere else. And it was eye-opening for all of us—all the stuff that the IT folks, particularly the newer IT folks, don’t think about. So we thought that was really helpful.

Elliot Kass: What kind of drills and testing are you running? You talked about tabletop exercises before. Is it limited to that, or are you taking systems off-line?

Robert Dalrymple: We’ve had mock drills where we will go into, say, the emergency room and select some individuals within that department and indicate that you are now off-line. You have no system access. What do you do? And normally we have a playbook for the users to follow. So we want to see if they know where that playbook is and how to actually follow it if a system goes down.

Jamie Nelson: We made a decision to remote host, so our EMR is sitting in Verona, Wisconsin. It’s actually fantastic and it helps safeguard us, because now it’s just an image of the data that we’re looking at in New York. And, of course, we’re trusting that Epic has their security. And, actually, I’ve looked very closely and they have a fantastic program.

Arthur Harvey: We made the decision to spend the dough to have dual data centers, and they’re both active for the EHR. My chief medical officer’s a really good guy and gets the fact that testing’s important to us. So he worked things out with the medical staff, and once a year, I actually get a real downtime to do a fail-over practice. We all know it’s hard to get downtime, and most people don’t have that luxury. But I get one for two hours a year, and that’s invaluable for testing.

John Donohue: We did a sort of hybrid between the two. We set up a remotely hosted DR site in Verona, and we host our own primary. So instead of standing it up ourselves with an extra set of hardware, we’ve done it in Verona. And to keep those two systems in sync, a lot of stuff has to happen. So we’re going to do our first test this summer, and we know we’re going to have issues.

Scott Ruthe: From a DR standpoint, the focus is entirely on getting access to the backup. But when you look at something like a ransomware scenario, now you’ve also got to consider how to cut off access from the infected area. How do you isolate it, and how do you fail back to it once you rebuild things? Those issues are a bit of a shift from the standard scenario. We’ve talked about them, but we’ve never actually done anything that tries to simulate that scenario.

Kathy Hughes: If the database is down or there’s a hardware problem, those incidents are pretty straightforward and easy to identify. Ransomware, though, sometimes you don’t know where it’s coming from. You just know that it’s out there on a hundred computers, but you have no idea where it is. And a lot of companies don’t have the tools to detect it. They usually have to pick up a phone and bring in a cyber security firm to help figure out where it is.

We’ve outfitted ourselves with those tools so we can do it ourselves, but it’s not easy to identify where it came from, how to stop it or which machine to disconnect. So it takes a lot of time to figure out how to contain it, and then you have to figure out how to recover and do a post-event analysis so you can update your plans.

We also have a retainer with Mandiant—they’re excellent, by the way. And we have not only used them for forensics and helping to identify the root cause of ransomware, but also to help us build out our incident response plan and program. We also have a retainer with a forensics firm, because one of the things we always have to be mindful of is a HIPAA breach. Did they take our data? Did they copy our data? What’s the patient population that was affected by this? If it’s determined, after all the analysis is done, that it was a breach, then you have to go through the notification process, and you only have a certain number of days to make that determination and provide notice.

Elliot Kass: Given that these attacks are increasing and the stakes keep rising, does that influence how your boards and your senior management view this? Are they setting aside more resources to help you address this or is that still a big obstacle?

Jamie Nelson: Hollywood was the new “meaningful use” for us. Whose CEO doesn’t send at least one e-mail a week about the latest breach or what’s out there? I know in our case, it certainly helped us to be able to justify additional expenses, because they’re significant.

John Mertz: The board is a thousand percent for us doing whatever we need to do for security. But then the finance people go, “Okay, where are you getting this money? Because we’re just barely getting by as it is now.” So that’s the struggle.

John Donohue: Even when you get the money, we’ve had trouble getting resources. We just can’t ramp up fast enough. To get the people with the right skills is really difficult.

Kathy Hughes: Even when you have budget, getting those technologies and initiatives integrated with the ones that are needed just to run the hospital is a constant competition between priorities and resources. But I also find there’s so much that can be done that doesn’t cost money—such as process governance and end user training and awareness—that we can benefit from significantly.

Robert Dalrymple: I have board-level support and support throughout the organization. The funding is there. But the constant question is: If I give you X amount of dollars, can you go faster? The problem is you still have to deal with change management, and you can give me $5 million, but I may not be able to spend $5 million within a year, because I need to follow the appropriate process to implement the change.

Daniel Sergile: For the whole past decade, the attitude was “I’m going to do the bare minimum to meet regulations.” Now people are starting to recognize that is no longer acceptable. People are losing their jobs because they didn’t do the due diligence they needed to secure the environment. And the last thing anybody wants is to be written up in the paper. So selling security is extremely easy these days. When we point out that ransomware attacks have spiked almost 600 percent over the past six months, the response is “Okay, we need to do something.”

Elliot Kass: Are there circumstances under which you would make a decision to pay a ransom?

Jamie Nelson: Yep.

John Donohue: It’s very situational.

George Dunn: It’s a complicated decision. But once you pay it, you’re considered an easy mark, and you also help perpetuate these types of attacks more generally.

John Mertz: Hollywood Presbyterian had to be at the point where they said, “We pay the ransom or we’re out of business.” It’s what you do.

Daniel Sergile: So let’s put it this way: I get a call or I get an e-mail saying, “Oh, by the way, we’ve updated the firmware on your dialysis machines. Oh, by the way, we’ve updated the firmware for all your heart defibrillator patients.” I don’t think there’s a person at this table who would say, “No, we’re not going to pay the ransom.”

Kathy Hughes: If the ransom is under $20,000, it’s anonymous. That’s why a lot of these settlements don’t require disclosure. And that’s not a lot of money in the scheme of things, but for the cyber criminals, that’s a big paycheck.

Elliot Kass: Any final thoughts? What’s your biggest fear or expectation for what happens next?

John Donohue: For us it’s just a matter of when. We’ve had some very small incidents, but it’s just a matter of time before it’s a big incident, and we’re doing the most we can to prepare. Who makes the decision to pay the ransom? How do you reach them? If they’re not available when you need them, how do you track them down? Who’s second in command? So, for us, it’s really preparing to make sure the whole organization is ready when it comes.

Jamie Nelson: I’m comfortable with the tools we have in place, with the access controls. My larger concern is where we are in terms of business continuity, and really making sure our end users are prepared. Are our people prepared not to have their systems for a few days? That’s where we have to focus a lot of our efforts right now.

Maria Suarez: I’d like to see more information sharing about the details of what goes on. There needs to be much more information regarding the specifics of the malicious software provided through some automated means, so it can be shared globally and we can all take advantage of it.

John Mertz: You’ve got to focus on education. You’ve got to focus on zero-day protection, which is so important. You’ve got to have your backups. You’ve got to keep them off-line, make sure they’re not touchable. And you’ve got to have your business continuity in place in case all else fails.

George Dunn: There’s no silver bullet. It comes down to basic, very broad security practices across the board. The only thing that makes ransomware unique is that it’s immediate and very visible and has a dollar amount connected to it. It’s like any other malware incident in terms of disaster recovery and business resumption. I agree that training and having the tools in place that allow sandboxing and sharing of that sandbox information are important. But they’re not a silver bullet, either. There are delays in sandboxing—half an hour to an entire day, sometimes. So that’s not going to always work. Somehow, we have to get additional funding for security; we need to bring more resources to bear.

Arthur Harvey: There’s no easy answer. This whole topic—18 months ago, we wouldn’t have been talking about it. So I don’t know what’s going to be 18 months from now, but it’s probably something different. The hard part is maintaining a never-ending vigilance on this. People get bored with it. You’ve got to refresh people from time to time, and you must find ways for your users to keep all this stuff in mind.

Robert Dalrymple: With all the activity around the acquisitions at Jefferson, I’m just trying to draw a clear data map of where my sensitive information resides, so I can put in the appropriate controls for access and recovery and backups and so on. I’m also trying to leverage as much technology and socialization as possible to bring my users into the fight against cyber attacks.

Kathy Hughes: There are so many different facets of the government that it’s quite a challenge to know who to call in the event of an incident. It would be helpful if there were just one point of contact—one number for us to call—along with the understanding that we would be protected, that it would not be held against us. And involving them shouldn’t delay things from getting remediated. Because that was another lesson learned from Hollywood: When the attack happened, the government got in their way.

It’s also about balancing clinical care with security. The third dimension to that, I learned the hard way, is employees. We use a simulated phishing tool. During our last campaign, we hit on a very sensitive nerve with the employees, which is exactly what the real cyber criminals do. In fact, I had taken a real phishing e-mail that we had received and used it for our campaign. It had to do with salary increases, so you can imagine how that landed with staff. Lesson learned: We didn’t engage our HR department and let them know we were going to do this. So it did touch a nerve.

But you’re trying to educate people, and that means training them to look for red flags. If there’s money being offered—whether it’s a salary increase, a car, a bonus or whatever it might be—that’s a red flag. Creating a sense of urgency, threatening people with consequences if they don’t do something by a certain date, or requesting sensitive or personal information are all red flags that should raise the question in the staff’s mind that it could be a phishing e-mail. So these are things you have to be mindful of when you use simulated phishing campaigns to help keep ransomware out of the healthcare system.
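
Those red flags can even be expressed as a crude automated screen. The keyword lists and threshold in the sketch below are purely illustrative; real mail filters and awareness tools are far more nuanced.

```python
"""Phishing red-flag scorer: a toy check for the red flags described above."""
import re

RED_FLAGS = {
    "money offered":          r"\b(salary increase|bonus|gift card|prize|refund)\b",
    "sense of urgency":       r"\b(urgent|immediately|within 24 hours|act now)\b",
    "threat of consequences": r"\b(account (will be )?(suspended|closed)|final notice)\b",
    "asks for credentials":   r"\b(password|social security|ssn|verify your account)\b",
}

def score_message(text: str) -> list[str]:
    """Return the list of red flags found in an e-mail body."""
    lowered = text.lower()
    return [name for name, pattern in RED_FLAGS.items() if re.search(pattern, lowered)]

body = "URGENT: confirm your password within 24 hours to receive your salary increase."
flags = score_message(body)
print(flags)   # ['money offered', 'sense of urgency', 'asks for credentials']
print("treat as suspicious" if len(flags) >= 2 else "no obvious flags")
```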

Scott Ruthe: This is something we can put in front of senior management and get their attention—and get more money to help shore things up where we need to. And we need to address more scenarios; not just the front-end risks, but also the DR piece in the aftermath.

(This article appears courtesy of our sister publication, Health Data Management)
