Leveraging big data to improve population health requires analyzing a tsunami of lifestyle, genomic, medical, and financial data, but current privacy safeguards may not be enough to keep rapidly advancing analytics in check, as the same tools can also enable harmful or discriminatory healthcare practices. As analytics and big health data gain momentum, the federal government is weighing the need to carve out special data use authorities for the industry.
While big data holds tremendous potential for advancing medical science, reducing the costs of healthcare delivery, and providing better health outcomes, the analysis of this data also poses significant risks to individual privacy—a reality the Obama administration is grappling with.
To ensure that individual privacy is protected while capitalizing on big data, the White House asked the Department of Health and Human Services—as part of a September 2014 announcement by President Obama on open government initiatives—to “consult with stakeholders to assess how federal laws and regulations can best accommodate big data analyses” and to “develop recommendations for ways to promote and facilitate research through access to data while safeguarding patient privacy and autonomy.”
Stan Crosley, co-chair of the Health IT Policy Committee’s Privacy and Security Workgroup, presented preliminary recommendations on Tuesday at a virtual HITPC meeting. “Patients should not be surprised or harmed by collections, uses, or disclosures of their information. Nowhere is this more difficult than with big data,” said Crosley, who added that the White House’s charge was to consider what would constitute “appropriate” use of big data while protecting patient privacy.
Specifically, a May 2014 report from the Executive Office of the President called for a government-led “consultative process to assess how the Health Insurance Portability and Accountability Act and other relevant federal laws and regulations can best accommodate the advances in medical science and cost reduction in healthcare delivery enabled by big data.”
Toward that end, the Privacy and Security Workgroup held three separate public hearings—two in December 2014 and one in February 2015—which included presentations from 21 stakeholders, including patient and privacy advocacy groups as well as researchers. Among the workgroup’s findings: leveraging big data in healthcare has the potential for harmful or discriminatory practices. For example, the workgroup cited health insurers’ discriminatory uses of health data for insurance decisions.
“We found this to be a very significant challenge—ensuring both the responsible use of big data analytics and understanding the increasing risk of harms that might result, of which discrimination is one of the harms that could arise,” said Crosley.
Some U.S. laws prohibit discriminatory uses of health data, he told the committee. However, other discriminatory uses of health data are not expressly prohibited by law or are, in fact, expressly permitted. And, when it comes to HIPAA, Crosley said that there are covered entities that are highly regulated “and then the exact same information held by a non-covered entity is completely unregulated with respect to health information protection specifically.”
Crosley also pointed out that there are no overarching standards for de-identification of data outside of HIPAA and that even HIPAA standards for de-identification are often voluntary. “The workgroup believes there’s an over-reliance on de-identification while no accountability for re-identification,” he cautioned.
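The over-reliance on de-identification that Crosley describes can be illustrated with a deliberately naive sketch (not from the article; all field names are hypothetical). Simply stripping obvious direct identifiers—roughly the intuition behind HIPAA's Safe Harbor method, which actually requires removing 18 categories of identifiers—still leaves quasi-identifiers such as ZIP code and birth year in the record, and linking those against outside datasets is precisely the re-identification risk the workgroup flagged:

```python
# A naive de-identification pass: drop fields that are obvious direct
# identifiers. This is weaker than HIPAA Safe Harbor (which also restricts
# geographic units smaller than a state and most date elements), which is
# exactly the point of the illustration.
DIRECT_IDENTIFIERS = {"name", "ssn", "email", "phone", "address"}

def deidentify(record):
    """Return a copy of the record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

patient = {
    "name": "Jane Doe",
    "ssn": "123-45-6789",
    "email": "jane@example.com",
    "zip": "46204",        # quasi-identifier: survives this pass
    "birth_year": 1970,    # quasi-identifier: survives this pass
    "diagnosis": "E11.9",
}

cleaned = deidentify(patient)
# The remaining (zip, birth_year, diagnosis) tuple can often be matched
# against voter rolls or other public datasets, re-identifying the patient
# even though no direct identifier remains.
print(cleaned)
```

This is why the workgroup argues that de-identification without accountability for re-identification leaves a gap: the residual record can look anonymous while still being linkable.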
The workgroup called on the HHS Office for Civil Rights to be a better “steward” of HIPAA de-identification standards and conduct. Nonetheless, Crosley said that while the workgroup “desires accountability for re-identification or negligent de-identification,” it recommended against specifically asking Congress to pass legislation at this time to criminalize unauthorized re-identification of data.
However, part of the difficulty in making policy to address these issues, according to Crosley, is that there is no consensus on which uses are “harmful” and predicting which future uses could be harmful is also a challenge.
“It’s going to be very difficult to make meaningful progress here until we try to get to some type of a national consensus on what constitutes harm,” he said. “The ends of the spectrum are fairly easy to address—those things that are clearly bad and those things that are clearly good that everyone can agree on. But, we have really no consensus about the 80 percent that’s in the middle and makes up most of the use cases.”
As a result, the workgroup recommended that the Office of the National Coordinator for Health IT and other federal stakeholders promote more public inquiry to fully understand the scope of the problem—both harm to individuals and to communities. But, Crosley warned that “it can’t be a point in time.” Policymakers have to continuously monitor the use of health big data—both health data and data used for health purposes—to identify gaps in law and regulation and areas for further inquiry, he concluded.
Crosley said the workgroup will present its final recommendations to the committee later this summer.
This story was first posted to Health Data Management's web site.