© 2019 SourceMedia. All rights reserved.

Overheard: On Broad-Based Virtualization of Data Sources


VirtualWorks is led by an experienced management team, including founding chairman and CEO Edward Iacobucci, formerly of Citrix.

As an industry veteran, how have you seen information in the enterprise evolve?

I’ve been in the industry long enough to see some cycles. When I started in the business, everything was IBM mainframes. As far as infrastructure goes, we’ve been around the loop. We went from a very closed system to a very open system that was out of control. Now we have the problem where data is not centralized. The urgent need is to deal with data across the board in a way that allows access to a number of different repositories, in and out of the cloud.

What compelled you to launch VirtualWorks?

I’ve been living the problem like everyone else. We went through the infrastructure era and had to give these issues a lot of thought. We were setting the table for what data workflow was going to be like. The biggest issue is the scattering of data in different repositories. I’ve been fortunate (or unfortunate) to see this coming. Users in the midmarket are just starting to see it. High-end companies have known for quite a while that they’ve got problems consolidating data and making sense of it. Enterprise search, data warehousing, and document management approaches were all nibbling at the problem of ‘how do I get a consolidated view?’ The problem is they were all aimed at custom software; packaged, predefined-function software could never reach in there to access the data. I view that access as essential. The same concept applies in the core middle-tier markets, where there are thousands of commercial applications. Our strategy is to provide the broadest cross-application indexing across those applications.

Can you describe what you call “data sprawl” and “content virtualization”?

Data sprawl is a handle we settled on to explain the amorphous problem. People know they’ve got data in a lot of places, they know it’s hard to find, and they know they’re rebuilding data and duplicating effort. They’ve also got big issues with the consumerization of data, where users are handling IT-sensitive data on consumer devices. I believe no one is really addressing this problem. A cross-platform, agnostic view is not to be found. Having spent most of my career working in one platform company or another, I know that companies’ data views are skewed toward their own platforms. If you want to integrate data from another platform, you’re on your own.

The problem with the term data virtualization is that it operates at a lower level. The content is beyond the data. Our differentiation with content virtualization is that we embrace the application’s data schema, whereas data virtualization deals with the mechanical bits and bytes of extracting data.

How do you think virtualization will be used in the future of information architecture?

The common explanation for data sprawl is the drop in the cost of disk storage. There has been deflation in the cost of storage, but that’s not the whole story. The problem is that the expansion of storage and services has built up all these silos, and cloud adds to it, as does mobility.

It has been, and will continue to be, an explosion in complexity. Virtual applications in the cloud add complexity. Administration, data management, desktop virtualization … all these trends of the last 10 years have done a wonderful job of simplifying the data management job. But the 2011 client is not too different from the client of 1998 - you still have to know where to look to find something. The more we deploy, the more complex it becomes. We may not have all the virtualization tools we need yet, but the majors are on the way to filling that in with their ecosystems. I can easily see a future where we migrate workloads and raw storage out of data centers. It can be powerful, but it makes things more complicated.
