If you follow the drumbeat of innovation in analytics, you probably already know that graphics processing units (GPUs) are great for high performance computing and AI, especially for machine learning and deep learning.
This is because a GPU contains far more processing cores than a CPU does. It can break a processing task into many small pieces and compute them in parallel. A December 2017 Forbes article entitled "For Machine Learning, It's All About GPUs" described how GPUs are more specialized for data computation and better suited than CPUs for machine learning applications.
Previously the domain of computer gaming and supercomputers, GPUs are now becoming a major piece of computing infrastructure in conventional data centers. This is because GPUs process huge amounts of data in parallel and offer incredible performance at a fraction of the overall cost of CPU compute.
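The divide-and-conquer pattern described above can be sketched on ordinary CPU cores. The following is a minimal Python analogy, not GPU code: the function name, chunk sizes and worker count are all illustrative. A GPU applies the same idea across thousands of cores instead of a handful of processes.

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    # Each worker handles one slice of the data independently.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # Split the data into one chunk per worker, compute the pieces in
    # parallel, then combine the partial results -- the same partitioning
    # pattern a GPU applies across thousands of cores at once.
    size = len(data) // workers
    chunks = [data[i * size:(i + 1) * size] for i in range(workers)]
    chunks[-1].extend(data[workers * size:])  # remainder goes to the last chunk
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(1000))))
```

The point of the sketch is the shape of the computation: each piece is independent, so adding cores scales throughput, which is exactly why data-heavy workloads map so well onto GPU hardware.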
The rapid adoption of GPU servers reflects the demand for their processing efficiency. Nvidia is the leading provider of GPUs, and the company’s data center revenue more than doubled year over year, from $820 million in FY2017 to $1.93 billion in FY2018 (which closed on January 28, 2018).
While AI and machine learning projects have been a key driver of those GPU investments, more recently organizations are discovering the benefits of GPU compute for a more ubiquitous challenge: how to do conventional analytics on the torrent of data organizations face today.
Every day, millions of analysts go to work and attempt to fulfill their mission to find insights in exponentially growing amounts of data. It’s a battle, and one they aren’t winning.
Conventional analytics tools built for the CPU-only era are simply unable to cope with these data volumes. As a result, the work of analytics and data science has become laborious and slow. Worse, the central goal of analytics and machine learning, finding actionable insights in data, is simply not being met.
So what is the connection between machine learning, general purpose analytics and GPU adoption?
It’s this: businesses and government agencies don’t need to pick and choose which type of data workload to run on GPUs. They can run both, and doing so speeds time to value on their GPU investments.
Even when machine learning and AI drive GPU purchases, general purpose analytics accelerate their adoption.
This is true for two reasons. The first is curiosity.
It’s about rekindling our passion for discovery.
Millions of us have chosen jobs in research, analytics and data science because we love asking questions, learning and exercising our curiosity. We love the thrill of discovery and solving problems that cannot be solved without studying data. Yet most of us spend too much time waiting on query results, pouring another cup of coffee while the answers trickle in.
The cost of curiosity has become so high that analysts actually stop asking questions. One analyst told me recently, “I couldn’t stand waiting minutes for the wheel to stop spinning, so I’ve actually stopped drilling down into the data.” This is soul-destroying for a person who is paid to be curious. Quite simply, his employer isn’t giving him the technology to do his job properly.
In fact, I founded MapD after my own frustrating experience with slow, tedious and inaccurate analytics while preparing my thesis at Harvard on the Arab Spring.
Today, every business and government agency that we engage with is looking for two things: new ways to exploit the currently hidden insights in all the data they collect, and new ways to attract and retain analysts and data scientists. Extremely fast, visually interactive analytics help with both of those goals.
On the talent-retention front, fast analytics taps into the joy of discovery and makes analysts fall in love with their work again. GPU-accelerated analytics and machine learning reopen paths to curiosity by making analytics fast, visual, interactive and fun, both for business analysts and for data scientists who need a faster platform for feature engineering and a visual way to explain their models to stakeholders.
Capturing Immediate Value from GPU Investments
The second reason to move general purpose analytic workloads onto GPUs is inertia.
It takes time and talent to build a mature ML practice. Even after you buy GPUs and hire a few scarce data scientists to build ML models, your organization may still take months or years to make those models a fully integrated part of your operations.
In fact, a November 2017 article in Information Management titled “Majority of firms now adopting AI; more investments planned” cited a survey where “91 percent expect to see barriers to AI realization, with lack of IT infrastructure (40 percent) and lack of access to talent (cited by 34 percent) leading the challenges.”
On the other hand, general purpose analytics is already mainstream. Most Fortune 500 companies and federal agencies have been doing SQL queries for decades, and they already have armies of business analysts, each with libraries of queries that will just run faster on GPUs.
While GPU buyers work to overcome barriers to AI adoption, they can capture immediate value by using their GPU resources to accelerate that familiar SQL analysis.
GPU performance is growing, on average, about 50 percent year-over-year. This easily outpaces the growth rate of data, which is itself growing at a very rapid 40 percent annually. There is huge potential value in the difference between those two numbers.
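To make that compounding difference concrete, here is a quick back-of-the-envelope calculation. The growth rates come from the paragraph above; the time horizons and the normalization to 1.0x today are illustrative assumptions.

```python
# Back-of-the-envelope: GPU throughput growing ~50% per year
# versus data volume growing ~40% per year.
GPU_GROWTH = 1.50
DATA_GROWTH = 1.40

def headroom_after(years):
    # Ratio of compute capacity to data volume after the given number
    # of years, with both normalized to 1.0 today. Values above 1.0
    # mean compute is pulling ahead of the data it must process.
    return (GPU_GROWTH ** years) / (DATA_GROWTH ** years)

for years in (1, 5, 10):
    print(f"after {years:2d} years: {headroom_after(years):.2f}x headroom")
```

The gap looks small after one year (about 1.07x) but compounds to nearly 2x after a decade, which is the "huge potential value" in the difference between those two growth rates.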
Rather than leaving you to fall farther and farther behind the growth in big data, GPUs can put you on the fast train, speeding ahead of both the avalanche of big data and the competitors buried by it.