The role of graph technology and the data supply chain for responsible AI


In 2019, we saw artificial intelligence and machine learning continue to permeate just about every industry and facet of our lives. Everywhere we turn, it seems like everything humans interact with is being powered by some sort of AI. It’s becoming so commonplace that people are starting to manipulate it, which prompted Facebook to take an official stand against deepfake videos (videos heavily manipulated by artificial intelligence), banning them from its site.

Now more than ever, as creators and users of AI, we have a duty to guide the development and application of this technology in ways that align with our social values and increase accountability, fairness, and public trust.

AI today is effective for specific, well-defined tasks, but it struggles with ambiguity, which can lead to subpar or even disastrous results.

For example, if a company is using AI to sift through resumes for new candidates, the AI may learn inherent biases around race or gender from the data it is trained on. Humans deal with ambiguity by using context to figure out what’s important in a situation, and then extend that learning to new situations. AI systems likewise need context and connections to produce more situationally appropriate outcomes and make decisions the way humans do.

While there’s no one-size-fits-all solution to tie these efforts together, there are fundamental elements we need to put in place to be responsible and ethical. In 2020, I believe the data supply chain will start to play a larger role, companies and governments will create more standards for measuring ethical AI, and graph technology will become a standard addition to AI and machine learning.

Why context and connections are the key

In 2020, graph technology will be considered the low-hanging fruit to easily increase AI accuracy. Graphs are designed to treat the relationships between entities as equally important to the data on the entities themselves. The data is stored as we might first draw it out – showing how each individual entity connects with or is related to others.
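To make that concrete, here is a minimal sketch of what relationships-as-data can look like, using Python and the networkx library; the entities and relationship names are purely illustrative.

```python
import networkx as nx

G = nx.Graph()

# Nodes carry the data on each entity...
G.add_node("alice", kind="person", title="Data Scientist")
G.add_node("acme", kind="company", industry="Healthcare")
G.add_node("diabetes_study", kind="project")

# ...and edges make the relationships between entities just as explicit.
G.add_edge("alice", "acme", rel="WORKS_FOR", since=2018)
G.add_edge("alice", "diabetes_study", rel="CONTRIBUTES_TO")
G.add_edge("acme", "diabetes_study", rel="FUNDS")

# Traversing relationships answers "how is this connected?" directly.
print(list(G.neighbors("alice")))  # ['acme', 'diabetes_study']
```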

Since we know that relationships are highly predictive of behavior, it makes sense that organizations are using graphs to get more predictive power out of the data they already have. If your team is not boosting your machine learning (ML) with graph features, you will be playing catch up.
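As a hedged illustration of what boosting ML with graph features might look like, the sketch below derives simple network measures (degree and PageRank) with networkx and joins them onto an existing feature table before model training; the data and column names are hypothetical.

```python
import networkx as nx
import pandas as pd

# Hypothetical interaction data: who transacted with whom.
edges = pd.DataFrame({
    "source": ["a", "a", "b", "c", "c"],
    "target": ["b", "c", "c", "d", "e"],
})
G = nx.from_pandas_edgelist(edges, "source", "target")

# Graph-derived features capture each entity's position in the network.
features = pd.DataFrame({
    "degree": dict(G.degree()),
    "pagerank": nx.pagerank(G),
})

# Join onto an existing (hypothetical) feature table before training a model.
base = pd.DataFrame({"entity": list(G.nodes()), "amount": [10, 25, 5, 40, 7]})
training_table = base.join(features, on="entity")
print(training_table)
```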

Take the German Center of Diabetes Research (DZD), for example. Graph technology gives its team of scientists and medical doctors a holistic view of highly heterogeneous types of data. The context the technology provides enables them to connect data from various species, locations, and disciplines, yielding insights in the fight against diabetes that were previously unattainable.

Adding various types of context to AI will be a practical breakthrough for many uses today. These include: adding contextual and adjacent data so we’re not over-focused in our learning, which leads to very narrowly applicable AI; leveraging relationships and network structures in machine learning to improve model accuracy and reduce false positives; using context to make heuristic AI smarter with a framework for probabilistic decisions; and using AI supply-chain tracking to help ameliorate human failings (such as unintentional bias). If we want AI systems to be inherently more transparent from the onset, they need to be built in conjunction with relevant context.

The role of the data supply chain

The data supply chain, or the lifecycle of data, will become increasingly important to ethical and responsible AI in two key areas. First, it will help us better understand our data, its sources, and its details. Second, data lineage and protection against data manipulation are foundational for trustworthy AI.
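As a rough sketch of what minimal data lineage could look like in practice, the snippet below records a dataset’s source, a content checksum, and the transformations applied to it; the file name and fields are hypothetical, not a prescribed standard.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical dataset contents; in practice this would be read from the source file.
raw_bytes = b"name,years_experience\nalice,7\nbob,3\n"

# A content checksum lets downstream consumers detect later manipulation.
digest = hashlib.sha256(raw_bytes).hexdigest()

# Hypothetical lineage record for one dataset entering a training pipeline.
lineage = {
    "dataset": "candidates_2019.csv",
    "source": "internal HR export",
    "collected_at": "2019-11-02",
    "sha256": digest,
    "transformations": [
        {"step": "drop_pii", "run_at": datetime.now(timezone.utc).isoformat()},
    ],
}

# Persisting the record alongside the model keeps the data's history auditable.
print(json.dumps(lineage, indent=2))
```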

This past year, we saw a tipping point of public, private, and government interest in creating guidelines for AI systems that better align with cultural values. In August 2019, the U.S. federal government released a plan for prioritizing federal agency engagement in the development of standards for artificial intelligence (AI). Vendors, developers, and organizations have also collaborated to create their own tools and methods for addressing AI bias. Just recently, Sundar Pichai, the head of Google and its parent company Alphabet, called for AI to be regulated, but with a sensible and cooperative approach.

In 2020, I expect that momentum to increase and to see frameworks published such as the EU Ethics Guidelines (with updated checklists expected in early 2020). The role of the data supply chain will vary by industry, and it may still be a few years before its full impact is recognized.

Looking past 2020

Until we feel it has proven itself trustworthy, AI will continue to struggle to be widely adopted and to reach its full potential. At the same time, we can’t rely on AI alone to solve all of our issues, since humans are inherently hard to predict. Vivienne Ming summarized it best: “If you’re not thinking about the human problem, then AI isn’t going to solve it for you.”

However, graphs enable us to address a number of shortcomings in our technically based AI systems, such as the difficulty of adding behavioral data to machine learning models. Graphs help address some of our human flaws, too, whether that means considering the context of our data sources or guarding against data manipulation. There’s a long road ahead in guiding AI, but we don’t get to sit this obligation out, and there are practical ways to get started today.
