The Internet of Things – all those connected objects and devices, and the vast data they produce – is giving way to the “Internet of People.” Big Data is enabling a digitized humanity: a growing network of predictive models describing what people have done, are doing, and are likely to do. Each of us is becoming a mathematical construct that others study to make better decisions about us. No one can yet model the way you think, but we are getting better and better at predicting how you will behave.

How can this be? Scientists and philosophers alike agree that humanity is not easy to define. But according to Oxford University researchers, two uniquely human characteristics are advanced planning and decision-making. As a data scientist, I see a great deal of overlap between predictive analytics and these two areas. Is it therefore possible to create a mathematical model of “humanity”? Can we infuse that model with enough data about decision-making to create an algorithm that can reliably predict our decisions?

At the 30,000-foot level, we’re already doing that. For example, we’re able to model consumer credit risk with a high degree of accuracy. There’s a distinction, however. These models don’t predict whether you individually will default on a loan. The algorithm delivers the odds that you will default, derived from the behavior of a much larger population. Banks then match the level of default risk they are willing to take to the credit score assigned to that borrower within that population. Certainly, the models can’t analyze your thoughts; they compare your credit history with those of thousands, even millions, of other people. Marketers use similar models to predict what you will buy.
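
To make that distinction concrete, here is a minimal sketch of a population-level default model. Everything in it (the features, the synthetic data, the approval cutoff) is hypothetical and purely illustrative; it is not FICO’s scoring methodology. The point is that the model outputs odds for an applicant relative to a population, and the lender supplies its own risk threshold.

    # Illustrative only: a toy population-level default model.
    # Features, data, and thresholds are hypothetical, not any real scoring system.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic borrower histories: [credit utilization, years of history, past delinquencies]
    X = rng.uniform([0.0, 0.0, 0.0], [1.0, 30.0, 5.0], size=(10_000, 3))
    # Synthetic outcomes: default is more likely with high utilization and delinquencies
    logits = 3.0 * X[:, 0] - 0.1 * X[:, 1] + 0.8 * X[:, 2] - 2.0
    y = rng.random(10_000) < 1.0 / (1.0 + np.exp(-logits))

    model = LogisticRegression().fit(X, y)

    # The model returns odds for a new applicant, not a verdict about that person's thoughts.
    applicant = np.array([[0.65, 4.0, 1.0]])
    p_default = model.predict_proba(applicant)[0, 1]
    print(f"Estimated probability of default: {p_default:.1%}")

    # The lender then applies its own risk appetite as a cutoff against that probability.
    APPROVAL_CUTOFF = 0.15  # hypothetical risk threshold
    print("Approve" if p_default < APPROVAL_CUTOFF else "Decline or price for risk")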

Similarly, data scientists have become very good at predicting which candidates will win elections. This is not done by predicting the votes of each individual voter, but rather by predicting overall vote totals based on numerous bits of data (e.g., opinion polls, economic conditions).
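
A drastically simplified version of that aggregation might look like the sketch below. The poll figures and the weighting rule are invented for illustration; real forecasts also account for pollster quality, economic indicators and historical error.

    # Illustrative only: aggregate polls into an overall vote-share estimate.
    polls = [
        {"candidate_a": 0.48, "sample_size": 1200, "days_old": 2},
        {"candidate_a": 0.51, "sample_size": 800,  "days_old": 5},
        {"candidate_a": 0.47, "sample_size": 1500, "days_old": 9},
    ]

    def weight(poll: dict) -> float:
        """Larger and more recent polls count for more (hypothetical weighting)."""
        return poll["sample_size"] / (1 + poll["days_old"])

    total_weight = sum(weight(p) for p in polls)
    estimate = sum(weight(p) * p["candidate_a"] for p in polls) / total_weight
    print(f"Estimated vote share for candidate A: {estimate:.1%}")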

We’re all swimming in the vast ocean of predictive analytics. Will you take your medicine as prescribed? Will you steal from your employer if you are hired? Will you come out to vote, and which issues resonate with you? When will you buy your next flat screen? Do you prefer organic milk? What books, movies and music are you likely to want? How good a credit, health or insurance risk are you? Is that really you using your credit card to buy something you’ve never bought before in a city you’ve never bought in? Analytics are already offering answers to all these questions.

Clearly, predictive models like these are becoming much more widely used. Now, the rise of Big Data presents us with the tantalizing proposition that we will soon be able to analyze enough information about you as an individual to accurately predict what you will do next.

From Things to People

Virtual models of objects already exist. The so-called Internet of Things refers to the vast number of objects and devices that exude data about their usage, location and other factors. You also exude data. This data exhaust spills from you every time you go online, use your phone, make a purchase or pay a bill, even when you watch television (and especially if you do so online). All this data is being fed into models to create a digital you that exists in what I call the “Internet of People,” the growing network of models describing what people are doing, and are likely to do.

This digital you is becoming more and more important as companies and even governments study your behavior to make better decisions about you. No one can yet model the way you think, but we are getting better and better at predicting how you will behave.

In short, while we can’t yet model the way you make decisions – one of the unique elements of humanity – we’re increasingly able to model a close second: the decisions you are likely to make.

This phenomenon is behind the frequent debates about whether a given application of data or analytics is acceptable for individuals and even society. People are becoming much more aware of this tracking and analysis, and of the decisions increasingly being made about them as a result. This is raising ethical and even legal questions.

Who’s going to make sure these models are accurate and beneficial? Will there be legal consequences and remedies for flawed models that adversely affect individuals? How can someone fix the data — or even the model — to make it more accurate? And even if it is accurate, who wins and who loses when our digital selves are driving so many decisions about us? And do we need to redefine our concept of privacy as a result?

These questions need to be asked. We need to debate the way our increasingly digitized society develops to ensure it provides the maximum benefits for people, businesses and society.

The Internet of People is relatively new, and as it grows we will need to continually debate and revise our shared agreements for how analytics about people work and can be used. We can’t stop the data from flowing into predictive models. We can’t stop the creation and study of our digital selves. But we will want to stop them from being manipulated and exploited.

Big Data analytics is raising questions about the very essence of our humanity. The big question is: How can we digitize our identity without losing it?

Dr. Andrew Jennings is chief analytics officer at FICO, an analytics software company focused on Big Data and mathematical algorithms to predict consumer behavior.
