Day one of the GPU Technology Conference in San Jose, and I'm still glowing from watching Steve Wozniak "travel to Mars" through NVIDIA's photo-real virtual reality. Or holding my stomach as Jen-Hsun Huang, CEO of NVIDIA, took us soaring over Everest. Or cringing as I watched early attempts at a car teaching itself to drive, reminded of how my 16-year-old daughter is learning to drive (there were a few similarities...).

Each emotion illustrates what everyone will soon experience on NVIDIA's next-generation compute platform, with announcements spanning AI, VR, self-driving cars, SDKs and a new deep learning appliance.

This is not your traditional, or even big data, analytics platform. It's a complete overhaul of the computing architecture and a complete rethink of data management. It will also change how you think about analytics.

Stepping back from what may seem like hype, and from examples steeped in robotics, VR and infrastructure, the truth is that today's announcements show deep learning in action is at most a year away, and possibly already here. In addition, the innovation coming out of robotics, VR and infrastructure will introduce new form factors and channels to engage customers and shape our workforce. In the end, it is a data challenge, for the simple reason that every channel we use and add always ends up being a data challenge.

The implications for how you manage data are radical. Here is what you need to think about:

◾Deep learning systems are voracious eaters of data. If you think you have volume issues now, it will only get worse. Traditional integration won't cut it. You need bigger compute on GPUs, not CPUs, for speed, performance and efficiency. Wouldn't you rather train a model in two hours than two weeks?

◾The deep learning algorithm (the convolutional neural network, or CNN) is the data management tool. Traditional analytic models sit outside the infrastructure. In the case of AI, a generic CNN sits with the processor, and classification and inferencing happen on data ingest.

◾The deep learning system is the expert - experts need not apply. Traditional analytics and expert systems relied on coded instructions, including coding for transformations and mapping, and SMEs were roped in to "teach" what data decisions should be made. Today, you say what the data should do, and the system aligns it to that purpose.

◾Deep learning systems learn data governance. Data governance policies will need to be taught. Stewards won't create business rules. DL systems need to be told what you want them to do and provided with training data they can test and learn on.

◾Deep learning systems have deep memories. CNNs will maintain up to 180 layers (maybe more) that have to be immediately referenceable in real time. You need to consider volume not only in what you take in, but in how you store it. Graph isn't a nice-to-have anymore; it is a necessity for holding contextual memory. Cloud and big data lakes allow data to scale at lower cost. Storage and GPU compute are converging.
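The classification-on-ingest point above can be sketched in a few lines. This is a hypothetical pipeline, not NVIDIA's API: the `classify` function here is a toy stand-in for a trained CNN, and the record format is invented for illustration.

```python
# Hypothetical sketch: classification happens at data ingest, not in a
# separate downstream analytics step. classify() is a toy stand-in for
# a trained CNN applied to each record as it arrives.

def classify(record):
    """Toy stand-in for a trained model: label a numeric record."""
    score = sum(record) / len(record)
    return "high" if score > 0.5 else "low"

def ingest(stream):
    """Tag each record with its inferred class as it is ingested."""
    for record in stream:
        yield {"data": record, "label": classify(record)}

labeled = list(ingest([[0.9, 0.8], [0.1, 0.2]]))
# labeled[0]["label"] == "high"; labeled[1]["label"] == "low"
```

The design point is the placement of the model, not its sophistication: inference sits inline with ingestion, so data arrives already classified rather than waiting for a batch analytics pass.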
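The "graph as contextual memory" idea can also be sketched simply. This is a minimal, hypothetical illustration (the `ContextGraph` class and its record names are invented): related context is stored as edges, so everything known about an entity is one lookup away rather than a join across tables.

```python
# Hypothetical sketch: a graph as contextual memory. Each ingested
# fact becomes an edge (subject -> relation -> object), making an
# entity's full context retrievable in a single hop.

from collections import defaultdict

class ContextGraph:
    def __init__(self):
        self.edges = defaultdict(set)

    def remember(self, subject, relation, obj):
        """Store one fact as an edge out of the subject node."""
        self.edges[subject].add((relation, obj))

    def context(self, subject):
        """Return everything known about a subject in one hop."""
        return sorted(self.edges[subject])

g = ContextGraph()
g.remember("customer:42", "viewed", "product:7")
g.remember("customer:42", "bought", "product:9")
g.context("customer:42")
# [("bought", "product:9"), ("viewed", "product:7")]
```

A production system would use a graph database rather than an in-memory dict, but the contrast with relational storage is the same: context is addressed by relationship, not reassembled by joins.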

If there is one thing to keep in mind when embarking on this new journey of deep learning in pursuit of artificial intelligence, it is that it begins with data. Keep these five strategy shifts in mind as you introduce AI compute platforms into your organization.

(About the author: Michelle Goetz is a principal analyst at Forrester Research serving enterprise architecture professionals. This post originally appeared on her Forrester blog.)
