Financial firms need quick access to accurate data to make sound trading and risk management decisions. They also want to lower costs and improve operational efficiency.
But when it comes to accessing that data, too often it is stored in “silos.” Each department of the firm has its own store of information, its own applications and, in effect, its own customized world.
This means that if a firm’s trading desk, risk management, research and back-office employees are looking for data that isn’t already stored in their own database, they may not be able to find it. Or, at best, it won’t be in the same format or follow the same naming conventions. They may not be able to respond quickly to a request.
Such discrepancies might not seem important to a middle- or back-office executive who files regulatory or investor reports on an end-of-day basis. But they will certainly prove a headache to a high-frequency trader or an order management system making split-second decisions.
So what’s a firm to do? For starters, it could integrate the applications and the data through cumbersome hand-coding, replication or the process known as “extract, transform and load” (ETL), which moves information from one database into another.
But that’s a pretty costly and time-consuming approach.
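That hand-coded approach can be sketched in a few lines of Python. The departmental stores and field names below are hypothetical, but the pattern is the one described above: every pair of systems needs its own extract, transform and load code, and every new source or renamed field means writing more of it.

```python
# Hand-coded ETL sketch (illustrative only, with made-up field names).
# One department's records are copied into another's schema.

SOURCE_DB = [{"px_last": 171.25, "ccy": "USD", "sym": "IBM"}]  # trading desk's conventions
TARGET_DB = []                                                  # back office's store

# The mapping itself must be written and maintained by hand for each pair of systems.
FIELD_MAP = {"px_last": "closing_price", "ccy": "currency", "sym": "symbol"}

def run_etl():
    for row in SOURCE_DB:                                        # extract
        transformed = {FIELD_MAP[k]: v for k, v in row.items()}  # transform (rename fields)
        TARGET_DB.append(transformed)                            # load

run_etl()
print(TARGET_DB[0]["symbol"])  # IBM
```

Multiply this by every silo pair in a large firm and the cost and lag become clear.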
Enter data virtualization, a cloud-based approach to data management.
But it’s not an outsourced or external cloud. It’s an internal data network, managed as a service across the enterprise. The data is organized in a sort of “middleware” layer, and its contents are defined by metadata: data about the data.
Such a scenario can massively reduce the time needed to move data from one application to another. Instead, data integration is handled in the virtual data layer, where information about the data, its sources and its locations makes it easier to access and deliver whatever data is needed anywhere in the organization.
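By contrast with hand-coded integration, a metadata-driven virtual layer leaves the data where it is and uses a catalog to translate each silo's local schema into a shared vocabulary. The sketch below is a toy illustration under assumed names, not any vendor's product; real data-virtualization platforms add query federation, caching and security on top of this idea.

```python
# Minimal sketch of a metadata-driven virtual data layer (illustrative only).

# Two departmental "silos" holding the same instrument under different
# field names and conventions, the classic integration problem.
TRADING_DESK_DB = {"IBM": {"px_last": 171.25, "ccy": "USD"}}
RISK_DB = {"IBM": {"closing_price": 171.25, "currency": "USD"}}

# Metadata layer: describes where each source lives and how its local
# field names map onto a shared, canonical vocabulary.
CATALOG = {
    "trading": {"store": TRADING_DESK_DB,
                "fields": {"px_last": "price", "ccy": "currency"}},
    "risk":    {"store": RISK_DB,
                "fields": {"closing_price": "price", "currency": "currency"}},
}

def fetch(source: str, symbol: str) -> dict:
    """Return a record in canonical field names, whatever the source."""
    meta = CATALOG[source]
    raw = meta["store"][symbol]
    return {meta["fields"][k]: v for k, v in raw.items()}

# Both silos now answer the same question in the same shape.
print(fetch("trading", "IBM"))  # {'price': 171.25, 'currency': 'USD'}
print(fetch("risk", "IBM"))     # {'price': 171.25, 'currency': 'USD'}
```

Adding a new source means registering its metadata in the catalog, not writing another point-to-point integration.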
A report by Tabb Group last week called data virtualization an integral part of data fabric. But that report isn’t the first time technology analysts have pointed to the importance of data fabric and it likely won’t be the last.
Back in 2005, Forrester Research published a ground-breaking research report titled “Information Fabric: Enterprise Data Virtualization.” The concept was based on the principle that applications need a single version of the truth for enterprise-wide information. Information fabric focuses on a virtual view of cleansed data that is consistent and reliable enough to support all types of applications and end-user requirements. The Forrester report, led by Mike Gilpin, identified several key components required to build this data architecture.
Here are just some of them:
Business intelligence and dashboards: As information flows through the data fabric, it is cleansed and can be aggregated, summarized and transformed;
Compliance and audit reporting: Information fabric allows firms to get information quickly and accurately to meet compliance and auditing requirements; and
Searching and browsing of enterprise data: This could well be the biggest benefit of information fabric. Developers and even end users can access any type of information more quickly without having to know the data’s structure, its location or other metadata.
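That search-and-browse benefit can be sketched the same way: the caller names what it wants, and a metadata catalog (all names here hypothetical) resolves which silos hold it and what each silo's local key field is called.

```python
# Sketch of metadata-driven search (illustrative, made-up names throughout).
# The caller asks for "IBM" without knowing which silo holds it, what the
# local key field is called, or where the store lives.

SILOS = {
    "trading":  {"key_field": "sym",    "store": [{"sym": "IBM", "px_last": 171.25}]},
    "research": {"key_field": "ticker", "store": [{"ticker": "IBM", "rating": "Buy"}]},
}

def find(symbol: str) -> dict:
    """Search every registered source; collect whatever each one knows."""
    result = {}
    for name, meta in SILOS.items():
        for row in meta["store"]:
            if row.get(meta["key_field"]) == symbol:
                result[name] = row
    return result

print(find("IBM"))  # one query, answers from every silo that knows the symbol
```

The caller's code never changes when a new silo is registered; only the catalog grows.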
Yet six years later, in 2011, the financial services industry is not as far along as it should be, considering the growth of global trading, the power of high-frequency trading and even potential regulatory requirements.
The United States will also have a new Office of Financial Research and systemic risk regulator evaluating an array of daily data from financial firms within the next few years. So will some overseas regulators who are trying to coordinate their efforts with the Securities and Exchange Commission and other U.S. agencies. Firms will definitely need to ensure that whatever data they provide regulators is accurate and consistent.
What’s the holdup to embracing data virtualization? As Gilpin noted, even with the best technology, adoption of data virtualization requires the support of top business executives willing to consider new ways of thinking about data management and systems architecture. So this is what they have to do:
Look at vendors that offer comprehensive information fabric solutions: Forrester’s report cited IBM Corp., Oracle Corp. and Red Hat as the top players in the sector, with Composite Software, Endeca Technologies, Ipedo and Microsoft Corp. as strong options. Volante Technologies, a New York-based data integration and metadata management company, says that its technology also meets almost all of Forrester’s criteria for architecting the virtual information fabric; and
Expect customization and integration efforts: Since no single vendor offers a complete package, data architects and developers will need to expend additional effort to fill gaps and integrate best-of-breed add-ons.
This article can also be found at SecuritiesTechnologyMonitor.com.