The origins of the single database concept hark back to the roots of the computer industry itself. In the beginning were punched cards. Next came paper tapes, then magnetic tapes, and then disk storage, where data could be accessed directly. Along with disk storage came a piece of software known as a database management system (DBMS). The DBMS provided easy user access to and storage of data. Direct access to data, in turn, made online transaction processing (OLTP) systems possible. From a conceptual standpoint, the idea of a single database was born: bring the data from the master files together and consolidate it in a single database. Once the single database was built, there would be one source to turn to for data. There would be no more inconsistency of data, and there would never be a problem accessing data. Entire careers, books and products were built on the single database concept. Nearly the entire industry embraced it without questioning whether it was really the right thing to do; such was the power and the appeal of the single database concept.

Fast forward a quarter century and take a look at the industry. ACR of Phoenix, Arizona, conducted a survey of organizations engaged in computer processing. A quick look at the results shows that not one of the 10,000 organizations surveyed has a single database management system. Every company surveyed has multiple DBMSs, and most have many. The single database concept was appealing in its day, but time and reality have shown that it was just a theory and nothing more. In fact, it was an incorrect theory, despite the vigorous support it received.

Indeed, under a few circumstances, the single database concept proved to be workable. Where an organization had a relatively small amount of data and processing and was willing to surround the database with huge amounts of processing power, it was possible to build and operate a single database; but there were some nasty flies in this ointment. The first was that as soon as the volume of data and the volume of processing grew to any significance, separate databases became necessary. The differences between transaction processing and other database processing, and the need for high-performance response time, mandated a separation of databases for different kinds of processing. The second fly in the ointment was that the boundaries of the volumes of data the single database environment could handle could only be extended by adding hardware. Those organizations that stubbornly attempted to make the single database concept work soon found that they were buying more and more hardware; the only choice they had was to throw hardware and money at the problem. The attempt to implement the single database concept led to the creation of a huge hardware infrastructure.

The forces that destroyed the single database concept were as follows:

OLTP existing on the same machine and in the same environment as other database processing. OLTP entails a great deal of processing that is unique to it. Integrity of data, integrity of transactions, backup, recovery and the standard work unit – all these facilities and more are peculiar to the OLTP environment. Trying to mix the OLTP environment with the other needs of a database is like trying to mix oil and water. When OLTP came along, the single database concept began to dissolve.

Volumes of data. Not noticed as quickly as the differences between OLTP and other database processing was the fact that as the volumes of data in a database application grew, the data separated into two distinct classes – data that is actively accessed and data that is not. Given the terabyte range of data that some organizations acquired, this separation soon became pronounced. It made no sense to try to keep all data in one database; dormant data needed to be handled quite differently. Thus, a second powerful force pulled against the single database concept once the volume of data exceeded a terabyte.

Data marts. Data marts are the departmental manifestations of data warehouses. Finance, accounting, sales and marketing are notorious for having their own data marts. However, there is no such thing as a common data mart or even a common data set that serves as a basis for a data mart because each data mart is physically different. Creating different "views" of the same data for different data marts does not solve the problem. Each department requires its own unique physical store of data that is related to – but fundamentally different from – any other department's data mart. It is naive to think that all departments can have their views of data served by a single source of data.

Exploration processing. Exploration processing of data occurs when the statistician gets a large quantity of detailed data and searches for business patterns in the mix. Data mining and exploration processing require their own database design, their own development techniques, their own transaction design and so forth. Trying to wrap exploration processing into a single database wreaks havoc on all other users of the environment.

Organization ownership. Different organizations want to own and manage their data. They don't want to share data – at least not all of the data they need to run their environment. They don't want anyone poking his or her nose in their business and they don't want anyone using their machines when it is time to do important processing. There is powerful organizational pressure for separate organizational machines. The single database concept runs contrary to this ownership.

Economics. A centralized environment has always been the most expensive environment in which to do computing. It has always been less expensive to break processing into smaller pieces. The cost of processing, the cost of storage and the cost of development have all shrunk mightily as the environment has been decentralized.

Technology. Centralized database technology is just fine, but database technology exists on a wide variety of products such as Palm Pilots, PCs, client/server processors and mainframe processors. The push of technology is away from centralized database systems.

Specialization. Some of the most interesting applications and technology are built for distributed environments. Analytical processing, wireless processing and online analytical processing (OLAP) are not written for large centralized database processors. Instead, these most valuable technologies are written for highly distributed database processors.

There are a host of reasons why the single database concept has become a dinosaur. Consider that the automobile industry began with the Model T, which was all things to all people at the time. What do we see on the road today? We don't see a bunch of Model T's (or their progeny). We see a wide variety of vehicles – sports cars, trucks, motorcycles, mopeds, pickups, SUVs, etc. Similarly, specialization is the wave of the future for database technology and databases – one technology cannot fit everyone's needs.
