Does your organization still think in terms of “windows of opportunity”? Given the rapid pace of change driven by customers and technology, opportunity today tends to appear in cracks rather than windows. This is why accelerating insights from data is the most critical function of your data supply chain.

Recently, I wrote about what makes a modern data supply chain. In short, a data supply chain enables data to flow easily through an organization to unlock the real value of its data – and a competitive advantage. Acceleration is the next battleground in data. An enterprise that generates actionable insights from data faster than its rivals can outperform them.

Accelerating data helps organizations surmount three challenges: how to move data swiftly from its source to the places in the organization where it is needed; how to process data and generate actionable insights as quickly as possible; and how to return results faster to the users and applications that submit queries.

The Data Architecture Needed for Data Acceleration

You can’t really drive a Ferrari through bumpy alleyways. Businesses will only be able to pursue data acceleration and take action in fleeting “cracks of opportunity” if they start building a high-speed data architecture.

Businesses can work with a variety of data technology components – none of which in isolation will fully deliver the data acceleration that businesses now require. Rather, the idea is to combine several components, capitalizing on their complementary strengths to build an architecture that improves the enterprise’s data movement, processing and interactivity.

The major components to be considered are as follows:

Big data platforms: A big data platform addresses the challenges of data movement and processing. It can also address interactivity if it is able to host query engines. Inside a big data platform sits a big data core – a cluster of computers that provides distributed data storage and processing power. The core holds both semi-structured and unstructured data and, in some cases, query engine software that turns that data into structured tables and supports common query standards.
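
To make the query-engine idea concrete, here is a minimal PySpark sketch. It assumes a Spark cluster is available and uses a hypothetical events.json file in distributed storage; the point is simply how a query engine registers semi-structured data as a structured table that answers standard SQL.

```python
from pyspark.sql import SparkSession

# Connect to the big data core; cluster details come from the Spark config
spark = SparkSession.builder.appName("acceleration-demo").getOrCreate()

# Load semi-structured data from distributed storage (hypothetical path)
events = spark.read.json("hdfs:///data/events.json")

# Register it as a structured table so a standard SQL engine can reach it
events.createOrReplaceTempView("events")

# Query with common SQL; the work is distributed across the cluster
spark.sql("""
    SELECT source, COUNT(*) AS hits
    FROM events
    GROUP BY source
    ORDER BY hits DESC
    LIMIT 10
""").show()
```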

Ingestion: Ingestion solutions support data acceleration by enabling the capture, storage and movement of large amounts of information very quickly. Data is moved from the source to a holding area, where a queuing system buffers it until the end user picks it up when required. Ingestion was traditionally handled with an extract-transform-load (ETL) process aimed at delivering organized, complete data; today the priority is to capture all of the data first and transform it later.
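
The holding-area pattern is easy to sketch. The toy version below uses Python’s standard-library queue on a single machine; a production pipeline would use a distributed queue such as Apache Kafka, but the buffering idea is the same: capture everything immediately, and let consumers pick it up when ready.

```python
import queue
import threading

# The holding area: a bounded buffer between the data source and its consumers
buffer = queue.Queue(maxsize=10_000)

def process(record):
    print("picked up:", record)       # stand-in for real downstream work

def capture(source):
    # Capture everything as-is; transformation is deferred to later stages
    for record in source:
        buffer.put(record)            # blocks only if the buffer is full

def consume():
    # Consumers pick records up from the holding area when they are ready
    while True:
        process(buffer.get())
        buffer.task_done()

threading.Thread(target=consume, daemon=True).start()
capture({"id": i} for i in range(5))  # toy source
buffer.join()                         # wait until the buffer is drained
```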

Complex event processing: CEP tracks and analyzes streams of event data – anything from site clicks to video feeds – and draws conclusions from them. For example, it might assess the threat level of an attempt to access a secure Internet site by comparing it with previous attempts classified as security breaches. CEP can also assess multiple sources of data simultaneously to identify patterns that suggest particular circumstances. As such, it is particularly useful for real-time analytics, with the added capability of triggering events and actions based on what the processing infers.
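
As a stripped-down illustration, the sketch below (plain Python, with invented event fields and thresholds) watches a stream of login events and triggers an action when too many failures arrive from one address within a sliding time window – the kind of inference a CEP engine performs at far greater scale.

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 60   # sliding window length (invented threshold)
THRESHOLD = 5         # failures per window that trigger an action

recent = defaultdict(deque)  # source IP -> timestamps of recent failures

def raise_alert(ip):
    print(f"possible breach attempt from {ip} -- taking action")

def on_event(event):
    """Inspect one event from the stream and act when a pattern emerges."""
    if event["type"] != "login_failed":
        return
    window = recent[event["ip"]]
    window.append(event["ts"])
    # Expire attempts that have slid out of the time window
    while window and event["ts"] - window[0] > WINDOW_SECONDS:
        window.popleft()
    # Many failures in a short span suggests a breach attempt
    if len(window) >= THRESHOLD:
        raise_alert(event["ip"])
        window.clear()

# Toy stream: six rapid failures from the same address
for t in range(6):
    on_event({"type": "login_failed", "ip": "203.0.113.7", "ts": 100 + t})
```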

In-memory databases: While traditional databases reside on disk storage, an in-memory database sits in the computer’s main memory. It is therefore inherently much faster, supporting acceleration, because the data is immediately available and requires fewer instructions to access. Complexity is also lower, with the whole database and its associated applications sitting in a single location. In-memory databases have also become much more cost-effective thanks to a dramatic fall in random access memory prices and increases in RAM capacity.
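
Python’s standard library makes the idea easy to demonstrate: SQLite can host an entire database in RAM, as in the small sketch below. Production in-memory databases (SAP HANA or Redis, for example) add durability and scale, but the speed argument is the same: the data lives where the processor can reach it immediately.

```python
import sqlite3

# ":memory:" keeps the entire database in RAM: no disk I/O on the query path
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
db.executemany("INSERT INTO orders (amount) VALUES (?)",
               [(19.99,), (5.00,), (42.50,)])

# Reads hit main memory directly, so each access needs far fewer instructions
total, = db.execute("SELECT SUM(amount) FROM orders").fetchone()
print(f"total: {total}")
```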

Cache clusters: Cache clusters offer high-speed access to frequently requested data. The cluster stores the results of queries as they are entered into the system, so when a query is repeated it can be answered immediately, with no need to return to the original data source.
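
The sketch below shows the principle on a single node, with an invented expensive_source_query standing in for the original data source; a real cache cluster (Redis or Memcached, for instance) distributes the same idea across many machines.

```python
import time

TTL_SECONDS = 300   # invented freshness limit for cached results
query_cache = {}    # query text -> (result, time it was cached)

def expensive_source_query(sql):
    time.sleep(1)                     # stand-in for a slow source system
    return f"rows for: {sql}"

def run_query(sql):
    """Answer repeated queries from the cache instead of the source."""
    hit = query_cache.get(sql)
    if hit and time.time() - hit[1] < TTL_SECONDS:
        return hit[0]                 # cache hit: no trip to the source
    result = expensive_source_query(sql)
    query_cache[sql] = (result, time.time())
    return result

print(run_query("SELECT * FROM sales"))  # slow: goes to the source
print(run_query("SELECT * FROM sales"))  # fast: served from the cache
```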

Appliances: An appliance is an all-in-one data and analytics solution – a prepackaged set of hardware, software and support services that often relies on a common database for online transaction and analytical processing. It can accelerate data movement, processing and interactivity all at once.

This “plug-and-play” concept can be an excellent route to data acceleration for businesses that lack the IT expertise to maintain their own high-performing database solutions. Appliances support the processing of huge amounts of data very quickly and are relatively easy to use without constant support from IT specialists.


Accelerate to Value

As stated, these architecture components cannot function in isolation but should be combined. My next column will outline four fundamental technology stacks that enable data movement, processing and interactivity at high speed. The goal is to build data supply chains that will let businesses seize the fleeting opportunities that exist in today’s digital world.
