Ethics standards and protocols are urgently needed in the data world
In the last few years, we’ve seen the proliferation—some would say explosion—of access to data and, as an extension, of data analytics. Every industry—from energy to education, from manufacturing to media—is harnessing the power of information to optimize outcomes.
At the same time, this unprecedented amount of data is creating incredible opportunities for unauthorized access or release of private information.
They range from one of the worst, the 2013 breach that affected three billion Yahoo accounts, to the steady stream of recent incidents: already in 2019, there have been an estimated 3,800 disclosed data breaches, totaling 4.1 billion records. And these breaches occur in every industry, in companies as varied as Panera Bread, Wells Fargo and Facebook.
In many of these cases, there was a significant lag before people were informed that their data had been compromised. (With Yahoo, LinkedIn and others, the lag was years.) Companies often defend these delays by pointing to their own internal investigations. But let’s be honest: the negative impact on the bottom line is what keeps most of these breaches quiet.
The general public also plays a role in these data breaches. They occur with such frequency that we’ve developed data breach fatigue. As a consequence, with every new breach, the average consumer becomes more desensitized and feels more powerless, thinking, “So much of my data is already out there; what can I do about it now?”
We too often fail to emphasize what data breaches can really mean for the individual, such as financial ruin or loss of a security clearance, consequences that ripple through the economy. In fact, a 2018 IBM study found that the average cost of a data breach globally is $3.86 million; at that rate, the total cost of the breaches disclosed in the first half of 2019 alone could rival the GDP of some countries.
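That comparison is easy to sanity-check. Here is a back-of-envelope sketch, assuming each of the estimated 3,800 disclosed breaches carried the $3.86 million global average cost (a rough assumption, since individual breach costs vary enormously):

```python
# Back-of-envelope estimate: total cost of the breaches disclosed in 2019,
# assuming each matched IBM's 2018 global average of $3.86 million per breach.
breaches_2019 = 3_800      # estimated disclosed breaches, first half of 2019
avg_cost_usd = 3.86e6      # IBM 2018 global average cost per breach, in dollars

total_cost = breaches_2019 * avg_cost_usd
print(f"Estimated total: ${total_cost / 1e9:.2f} billion")  # ~ $14.67 billion
```

At roughly $14.7 billion, the estimate does indeed exceed the annual GDP of a number of smaller economies.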
Yet neither the collection and use of data, nor the breaches and the complacency around them, look likely to go away any time soon.
It is time to be proactive before the breaches become unsustainable. One urgent solution is the formation of a global governance body to establish protocols for the ethical use and protection of data.
We don’t have to look far for a credible model. The Stanford Prison Experiment, the Tuskegee Syphilis Study and the Milgram experiments all inflicted such swift and long-lasting harm on participants that academia instituted review boards and training protocols focused on ethics and participant protection, practices that eventually spread to private companies.
If something similar existed for data, ethical decisions about how and when to use it would not be arbitrary, resting solely on the value systems of the individuals in power.
For example, Chicago Mayor Lori Lightfoot announced recently that the Chicago Police Department will not provide ICE access to its databases for federal immigration enforcement activities.
With a different mayor, a different city or a different political affiliation, the choice might have gone the other way. An oversight board would prevent this type of subjective ethics by making decisions based on the data itself.
If an oversight board held corporations to a disclosure standard, and corporations understood there were ramifications for failing to disclose, more might be done to prevent breaches from happening in the first place.
Beyond the bottom-line risk to individual corporations, if a solution is not found soon, the continued misuse of data and the breaches that follow will erode corporate trust and public confidence in reporting agencies in general.
For data analytics to remain viable well into the Fourth Industrial Revolution, we must be conscientious, proactive and empathetic—with an eye toward building shared processes and protocols that will lead to stronger security—even if it means more regulation.
When we build ethics into the data management equation, everyone benefits.