November 22, 2010 – IBM formally unveiled a storage architecture that will convert quadrillions of bytes of information into "actionable insights" twice as fast as previously possible, the company said.
The technology, developed by IBM scientists, was described and demonstrated at the Supercomputing 2010 conference in New Orleans, where it won the event's Storage Challenge. The competition measures performance, expandability and system utilization.
The architecture, designed to speed up analytics, is intended for use in distributed cloud computing applications such as financial analytics, data mining and management of digital media.
"Businesses are literally running into walls, unable to keep up with the vast amounts of data generated on a daily basis," said Prasenjit Sarkar, master inventor, storage analytics and resiliency, IBM Research – Almaden, in a news release prior to the demonstration. "This new way of storage partitioning is another step forward on this path as it gives businesses faster time-to-insight without concern for traditional storage limitations."
The scientists call the architecture General Parallel File System-Shared Nothing Cluster. The approach uses clusters of servers with real-time management and transfer of files, along with advanced data replication techniques. By "sharing nothing," availability, performance and expansion of services become more achievable and reliable, IBM said. Each node in the network of devices is self-sufficient: tasks are divided up, and no node waits on another.
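The shared-nothing idea described above can be sketched in a few lines: files are deterministically assigned to nodes, and each node works only on its own shard, so no coordination or shared state is needed. This is a toy illustration with invented names, not IBM's GPFS-SNC implementation.

```python
# Toy sketch of shared-nothing partitioning: hash each file to exactly
# one owning node, so nodes can process their shards in parallel
# without waiting on one another. All names here are hypothetical.
from hashlib import md5

NUM_NODES = 4

def owner_node(filename: str) -> int:
    """Deterministically map a file to the node that stores and processes it."""
    return int(md5(filename.encode()).hexdigest(), 16) % NUM_NODES

def partition(files):
    """Divide the workload so each node gets a self-contained shard."""
    shards = {n: [] for n in range(NUM_NODES)}
    for f in files:
        shards[owner_node(f)].append(f)
    return shards

shards = partition([f"trade-{i}.dat" for i in range(10)])
# Every file lands in exactly one shard, so nodes never contend
# for the same data.
```

Because the mapping is deterministic, any node can locate a file's owner without consulting a central coordinator, which is the property that lets such clusters scale out.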
IBM's current GPFS technology is used in IBM's High Performance Computing Systems, Information Archive, Scale Out Network Attached Storage, and Smart Business Compute Cloud. The "shared nothing" approach will help tackle "big data problems," IBM said.
For instance, large financial institutions run complex algorithms to analyze risk based on quadrillions of characters of data, drawn from billions of files spread across multiple computing systems around the globe, all at the same time.
This architecture is designed to make that process more efficient by establishing a common file system and naming conventions across all the platforms. This, in turn, reduces the amount of disk space needed to handle analytical problems, IBM said.
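The benefit of a common namespace can be sketched simply: copies of the same file on different systems resolve to one logical name, so an analytics job references a single entry instead of per-system duplicates. The catalog, function names, and paths below are hypothetical illustrations, not IBM's GPFS API.

```python
# Hedged sketch of a common file namespace across platforms: physical
# copies on different systems register under one global logical name,
# collapsing duplicates into a single catalog entry. All names are
# invented for illustration.
catalog = {}  # global name -> list of physical locations

def global_name(local_path: str) -> str:
    """Normalize a system-local path into one cluster-wide namespace."""
    return "/global/" + local_path.strip("/")

def register(system: str, local_path: str) -> str:
    """Record a physical copy under its global name; identical files on
    different systems collapse into one logical entry."""
    name = global_name(local_path)
    catalog.setdefault(name, []).append(f"{system}:{local_path}")
    return name

# Two systems holding the same file resolve to the same logical name.
a = register("nyc-cluster", "/risk/q3/model.dat")
b = register("ldn-cluster", "/risk/q3/model.dat")
# a == b; catalog[a] lists both physical locations under one entry.
```

With a single logical entry per file, replication can be managed centrally rather than by keeping full independent copies on every platform, which is one way a common namespace can reduce total disk usage.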
This originally appeared on Securities Technology Monitor.