© 2019 SourceMedia. All rights reserved.

The Value Proposition for Columnar Databases

Columnar databases are surging with the NoSQL movement, but they actually play in both the SQL and NoSQL camps. From a SQL perspective, some columnar databases have been around for two decades (e.g., Sybase IQ gained product momentum in the 1990s), and the SQL language works just fine against columnar databases. However, if you define NoSQL as having a data layer different from the relational, row-based "all columns stored successively" layout, columnar databases are NoSQL. Furthermore, if you define NoSQL as being not ACID (atomicity, consistency, isolation and durability) compliant, some columnar databases qualify and some do not.

Whether or not columnar databases are part of the NoSQL movement by your definition, they are becoming an essential component of an enterprise infrastructure for storing data designed to run specific workloads. When an organization embraces the value of performance, it must do everything it can to remove barriers to the delivery of the right information at the right time to the right people and systems. There is no ERP for post-operational data; no one-size-fits-all system. Some gave that role to the relational, row-based data warehouse, but that ship has sailed. Today, in addition to columnar databases, the environment requires very large data stores like Hadoop, real-time stream processing and data virtualization to bring together result sets across all of these systems.

Columnar storage reduces the primary bottleneck in analytic queries: input/output. I/O is such a bottleneck that the number of I/Os a query requires is a fair relative measure of the time it will take to execute. It wasn't always this way. CPUs were the initial bottleneck, and symmetric multiprocessing, clusters and massively parallel processing came to the rescue, providing the ability to keep thousands of CPUs as busy as possible. However, those CPUs continue to sit idle more often than desired. This is due to the inability of the pre-CPU memory layers, the L2 (especially) and L1 caches, to move data through rapidly. One of the primary reasons is that row-based systems, designed to process rows instead of columns, send complete records through those layers.
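The I/O savings can be seen with simple arithmetic. The sketch below uses illustrative figures that are not from the article (a 1,000,000-row table of twenty 8-byte columns, an 8 KB page size, and a query that selects two columns) to compare the pages a row store and a column store must scan:

```python
# Back-of-the-envelope I/O comparison; all figures are illustrative assumptions.
ROWS = 1_000_000
TOTAL_COLUMNS = 20
SELECTED_COLUMNS = 2
BYTES_PER_VALUE = 8          # assume fixed-width 8-byte columns
PAGE_SIZE = 8_192            # a typical database page size

# A row store scans every page of the complete rows, even for a two-column query.
row_store_bytes = ROWS * TOTAL_COLUMNS * BYTES_PER_VALUE
# A column store scans only the pages of the selected columns.
column_store_bytes = ROWS * SELECTED_COLUMNS * BYTES_PER_VALUE

row_store_pages = -(-row_store_bytes // PAGE_SIZE)        # ceiling division
column_store_pages = -(-column_store_bytes // PAGE_SIZE)

print(row_store_pages, column_store_pages)  # the row store reads 10x the pages here
```

With these assumed figures, the column-selective query touches one tenth of the data, which is exactly the kind of I/O reduction the paragraph above describes.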

Think about it from an efficiency standpoint. When I want just a few songs from an album, it's cheaper to purchase from iTunes only the songs I want. When I want most of the songs, I will save a couple of bucks by purchasing the whole album. Over time, I may find that I also like one of the other songs. However, when it comes to a query, the query either wants a column or it doesn't. It will not come to like a column later that it was forced to select. This is foundational to the value proposition for columnar databases.

The most significant difference between columnar and row-based stores is that the columns of a table are not stored successively in storage - in the data pages. This eliminates much of the page-level metadata that a row-based DBMS stores to help its data management component navigate the many columns of a row as quickly as possible. In a relational, row-based page, there is a map of offsets near the end of the page pointing to where the records start on the page. This map is updated as records come and go on the page. The offset number is also an important part of how an index entry finds the rest of the record in the data page in a row-based system. The need for indexes is greatly minimized in column-based systems, to the point that many do not offer them.

Columnar databases know where all column values begin on the page with a single offset calculation from the beginning of the file. There is no value-level metadata. All columnar data stores keep the data in the same row order so that when the records are pieced together, the correct concatenation of columns is done to make up the row. This way "William" (from the first name column file) is matched with "McKnight" (from the last name column file) correctly - instead of matching William with Smith, for example. Columnar systems used to store the record number alongside the value, but few do anymore. Most match values to rows according to the position of the value (i.e., the third value in each column belongs to the third row, and so on).
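This positional matching can be sketched in a few lines. The column contents below are hypothetical (only "William McKnight" comes from the article), and `fetch_row` is an illustrative helper, not a real engine's API:

```python
# Each list stands in for one column's file; both keep the same row order.
first_name = ["William", "Ada", "Grace"]          # first-name column file
last_name = ["McKnight", "Lovelace", "Hopper"]    # last-name column file

def fetch_row(n, *columns):
    """Reconstruct row n by taking the nth value from each column file.

    Because every column keeps the same row order, position alone matches
    values to rows; no stored record number is needed.
    """
    return tuple(col[n] for col in columns)

print(fetch_row(0, first_name, last_name))  # ('William', 'McKnight')
```

The position is the implicit join key across column files, which is why dropping the stored record number costs nothing.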

In some columnar databases, fields above 8 bytes (character and variable-length character fields) are handled with a separate dictionary structure that stores the actual values along with tokens. These 8-byte tokens are used in place of the values in the data page and allow the data page to continue to operate without value-level metadata. The higher the repeat level of the values, the more space is saved.

For example, 1=State Grid Corporation of China, 2=Nippon Telegraph and Telephone and 3=Federal Home Loan Mortgage Corporation could be in the dictionary and when those are the column values, the 1, 2 and 3 are used in lieu of the actual values. If there are 1,000,000 customers with only 50 possible values, the entire column could be stored with 8 megabytes (8 bytes per value). The separate dictionary structure, containing each unique value and its associated token, would have more page-level metadata. Since each value can have a different length, a map to where the values start on the page would be stored, managed and utilized in page navigation.
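The dictionary mechanics above can be sketched as follows. The company names and the 1,000,000-row, 50-value space estimate come from the article; the dictionary-building code itself is an illustrative simplification of what a real engine does:

```python
# Sketch of dictionary (token) encoding for a wide character column.
values = [
    "State Grid Corporation of China",
    "Nippon Telegraph and Telephone",
    "Federal Home Loan Mortgage Corporation",
    "State Grid Corporation of China",   # repeats reuse an existing token
]

# Build the dictionary: each unique value gets the next integer token.
dictionary = {}
for v in values:
    dictionary.setdefault(v, len(dictionary) + 1)

# The column file stores only the tokens; each value appears once in the dictionary.
encoded = [dictionary[v] for v in values]
print(encoded)  # [1, 2, 3, 1]

# The article's space estimate: 1,000,000 rows at 8 bytes per token is 8 MB,
# regardless of how long the 50 underlying company names are.
tokens_bytes = 1_000_000 * 8
print(tokens_bytes)  # 8000000
```

Note that the long strings are paid for only once, in the dictionary; the column file itself stays fixed-width, which is what lets the page operate without value-level metadata.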

Columnar databases provide a range of benefits to an environment that needs to push the envelope to improve performance of the overall analytic workload. It is usually not difficult to find important workloads that are column selective and would therefore benefit tremendously from a columnar orientation. Columnar database benefits are magnified with larger amounts of data, large scans and I/O-bound queries. In addition to these performance benefits, columnar databases have unique abilities to compress their data.

This is excerpted from William's white paper: "Best Practices in the Use of Columnar Databases."
