(Editor’s note: This post is one in a series by Kobielus on the role of the data scientist. Click the following titles to read his previous posts: “Data Scientist: Important New Role or Trendy Job-Title Inflation?”, “Data Scientist: What Skills Does It Require?”, “Data Scientist: Is This Really Science or Just Pretension?” and “Data Scientist: Which Adjacent Roles are Central?”)
Data science has historically had to content itself with mere samples. Few data scientists have had the luxury of being able to amass petabytes of data on every relevant variable of every entity in the population under study.
The Big Data revolution is making that constraint a thing of the past. Think of this new paradigm as “whole-population analytics,” rather than simply the ability to pivot, drill, and crunch into larger data sets. Over time, as the world evolves toward massively parallel approaches such as Hadoop, we will be able to do true 360-degree analysis. For example, as more of the world’s population takes to social networking and conducts more of its life in public online forums, we will all have comprehensive, current, and detailed market intelligence on every demographic, available as if it were a public resource. As the prices of storage, processing, and bandwidth continue their inexorable decline, data scientists will be able to keep entire populations of all relevant polystructured information under their algorithmic microscopes, rather than having to rely on minimal samples, subsets, or other slivers.
Clearly, the Big Data revolution is fostering a powerful new type of data science. Having more comprehensive data sets at our disposal will enable more fine-grained “Long Tail” analysis, micro-segmentation, next best action, customer experience optimization, and digital marketing applications. It is speeding answers to any business question that requires detailed, interactive, multidimensional statistical analysis; aggregation, correlation, and analysis of historical and current data; modeling and simulation, what-if analysis, and forecasting of alternative future states; and semantic exploration of unstructured data, streaming information, and multimedia.
But let’s not get carried away. Don’t succumb to the temptation to throw more data at every analytic challenge. Quite often, data scientists only need tiny, albeit representative, samples to find the most relevant patterns. Sometimes, a single crucial observation or data point is sufficient to deliver the key insight. And—more often than you may be willing to admit—all you may need is gut feel, instinct, or intuition to crack the code of some intractable problem. New data may be redundant at best, or a distraction at worst, when you’re trying to collect your thoughts.
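The point about representative samples is easy to demonstrate. Below is a minimal sketch (not from the original post) using only Python's standard library: the population data and sample size are illustrative assumptions, but they show how a 0.1% simple random sample can recover a population-level statistic to within a fraction of a unit.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical "whole population": one million synthetic purchase amounts
# drawn from a normal distribution (mean 50, std dev 12) -- illustrative data only.
population = [random.gauss(50.0, 12.0) for _ in range(1_000_000)]

# A tiny simple random sample: 1,000 observations, i.e. 0.1% of the population.
sample = random.sample(population, k=1_000)

pop_mean = sum(population) / len(population)
sample_mean = sum(sample) / len(sample)

print(f"population mean: {pop_mean:.2f}")
print(f"sample mean:     {sample_mean:.2f}")
print(f"absolute error:  {abs(pop_mean - sample_mean):.2f}")
```

With a standard deviation of 12, the standard error of a 1,000-observation sample mean is roughly 12 / √1000 ≈ 0.38, so the sample estimate typically lands well under one unit from the true mean, despite ignoring 99.9% of the data.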
Science is, after all, a creative process where practical imagination can make all the difference. As data scientists push deeper into Big Data territory, they need to keep from drowning in too much useless intelligence. As this dude said recently, keep your Big Data pile compact and consumable, to facilitate more agile exploration of this never-ending, ever-growing gusher.
This post originally appeared at Forrester Research.