Researchers will be able to dream up groundbreaking new applications driven by Hewlett Packard Enterprise’s new data-intensive computer, called The Machine, industry experts say. HPE showed a prototype of the computer last week.

From advanced machine learning and real-time fraud detection to the large-graph inference used in data security, and from seismic research to genomics, the new memory-driven architecture holds promise for new avenues of extreme big-data research.

“There’s an infinite backlog of problems that need high-power analysis,” said Richard Fischera, vice president and principal analyst at Forrester Research. Fischera was originally skeptical of HP’s ambitious plans but now feels the prototype, while still running simulations, has real promise.

The computer runs on an optimized Linux-based operating system and a customized System on a Chip design (using ARM chips). HPE is part of the Gen-Z consortium of companies working on next-generation computing.

At the center of the breakthrough is a simple fact: electrons can only go so fast, and moving them around wastes energy. The new architecture taps much speedier photons to carry information within the machine. Fiber optics is normally used to carry data over much longer distances, from 30 feet to around the globe; HPE instead uses more than 1,500 tiny lasers to carry data inside the computer’s enclosures.

A standard computer consists of a plethora of processors that must work together to access and hand off data from storage; HPE estimates that 90 percent of processing time is spent on this task. The new computer uses what the company calls a “memory fabric” that holds far more data and, most important, is available at all times to all processors.
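As a loose software analogy, and not HPE’s actual programming model or API, the sketch below shows several processes attaching to one shared memory region and reading the same byte in place, rather than each copying its own version of the data out of storage; the region size and offset are made up for illustration.

    # Illustrative analogy only: many workers attach to one shared memory region
    # and read the same bytes in place, instead of copying data out of storage.
    # This is ordinary Python shared memory, not HPE's memory-fabric technology.
    from multiprocessing import Process, shared_memory

    def worker(region_name, offset):
        region = shared_memory.SharedMemory(name=region_name)  # attach, no copy
        print("worker read byte", region.buf[offset], "at offset", offset)
        region.close()

    if __name__ == "__main__":
        pool = shared_memory.SharedMemory(create=True, size=1024)  # one shared pool
        pool.buf[42] = 7                                           # written once...
        workers = [Process(target=worker, args=(pool.name, 42)) for _ in range(4)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()                                               # ...read by many
        pool.close()
        pool.unlink()

The point of the analogy is only that the data stays put while many compute elements address it directly; in The Machine, that shared pool spans the whole system rather than a single server’s RAM.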

The prototype holds 160 terabytes of information – about five times the amount contained in the collections at the Library of Congress.

“This is not just 160 terabytes in accessible memory,” said Kirk Bresniker, chief architect at HP Labs. “Every individual byte can be accessed by as many computing devices as you want.” The current prototype is handling 200 devices, but the Gen-Z consortium is working on a specification that would allow 160,000 devices to work simultaneously.

“They’ve turned the whole computer ecosystem inside out,” said Chirag Dekate, a research director for Gartner. “It’s now memory-driven. That will enable users to look at problems in a very different way.”

The consortium has big plans for the system and says it could be scaled up in the future to hold more than 4,000 yottabytes. How much is that? About 250,000 times the amount of data in the current digital universe.
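For readers who want to check those comparisons, here is the rough arithmetic the figures above imply, purely as an illustration:

    # Rough arithmetic on the figures cited in this article (illustrative only).
    TB = 10**12   # terabyte, in bytes
    ZB = 10**21   # zettabyte
    YB = 10**24   # yottabyte

    prototype = 160 * TB
    # "About five times the Library of Congress" implies a collection of roughly:
    print(prototype / 5 / TB, "TB")               # -> 32.0 TB

    future = 4_000 * YB
    # "250,000 times the current digital universe" implies a universe of roughly:
    print(future / 250_000 / ZB, "ZB")            # -> 16.0 ZB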

Bresniker predicts the new technology will reach the market by 2019 or 2020.

Randy Barrett

Randy Barrett is a veteran business and technology editor.