In recent years, we've seen a lot of progress but little consolidation in operating environments. The mainframe has experienced a partial resurgence, UNIX has gained acceptance in many data center applications and Windows NT has come on strong in the enterprise. Each of these platforms has clear attributes that explain its presence in the IT infrastructure. However, for the data center manager, investing in the appropriate technology has become fraught with risk because the positions of these technologies keep shifting. The question becomes, "Which technologies merit the most investment to keep the business running now and in the future?"

A survey of 500 IT professionals, commissioned by Sequent Computer Systems, showed that data growth was a major concern given the state of existing IT infrastructures.1 Ninety-five percent of those surveyed expect data volumes to at least double over the next three years, while 25 percent expect data growth between four and five times the current level in the same period. In addition, Web-based IT architectures result in many more users demanding connectivity to that data through more complex operational and decision support applications.

The mounting pressure of exploding data, users and application requirements is forcing the rationalization of platform technologies that IT managers have been seeking for years. This rationalization will probably lead to an IT infrastructure in which mainframes continue to play a significant role; NT expands aggressively in the enterprise, driven by enterprise resource planning (ERP) vendors such as SAP; and fewer varieties of UNIX offer major gains in scalability, availability, manageability and interoperability.

Toward the end of 1998, the first mixed-mode platforms will begin to emerge from the UNIX world, embracing NT and, in some cases, MVS. They will offer tightly coupled tiers in a single managed platform designed to leverage the relative scalability of the operating systems and relational databases. These mixed-mode environments will ultimately combine the strengths of distributed and centralized models on hardware platforms that are increasingly OS neutral.

The success of the Intel Architecture and widespread desire for resource consolidation and lower total cost of ownership (TCO) are driving the trend to mixed-mode computing. While Intel's 64-bit Merced processor is the true inflection point, we will see an evolution beginning this year toward mixed-mode platforms accommodating multiple operating system environments and deployment architectures. This evolution will prove too much of a burden for existing Symmetric Multi-Processing (SMP) and Massively Parallel Processing (MPP) architectures, which lack either the flexibility or scalability to make the transition.

These mixed-mode environments will require a single system architecture that supports:

  • Hundreds of processors;
  • The ability to leverage price/performance of new processors and the useful life of existing processors;
  • High availability;
  • Increasingly flexible and integrated systems management, including partitioning;
  • Distributed shared memory (DSM) and distributed message passing (DMP), or vertical and clustered approaches to system scaling; and
  • Multiple operating environments.

Scalability Limits

For more than 10 years, IT managers have depended on the flexibility and low cost of SMP systems. SMP has been an excellent solution for enterprise computing due to its ease of programming and application porting. An SMP system looks to the programmer like one microprocessor, one memory and one instance of the operating system (OS). Additionally, because of the maturity of the SMP environment, a broad range of applications and development and tuning tools is available.
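
To make the "one memory, one OS" programming model concrete, here is a minimal sketch in modern Python using only the standard library. The worker name, the shared_totals dictionary and the tally function are hypothetical illustrations, not part of any vendor's SMP toolkit; the point is simply that every worker sees the same address space and only has to coordinate updates, not move data.

    # Minimal sketch of the SMP programming model: several workers share one
    # address space and one operating system instance, so no data has to be
    # explicitly moved between them.
    import threading

    shared_totals = {}                 # lives in the single shared memory
    lock = threading.Lock()            # coordination is the only extra work

    def tally(name, values):
        subtotal = sum(values)
        with lock:                     # serialize updates to shared state
            shared_totals[name] = shared_totals.get(name, 0) + subtotal

    threads = [threading.Thread(target=tally, args=(f"worker-{i}", range(i, 1000, 4)))
               for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(shared_totals)               # every worker saw the same memory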

Although SMP offers proven mission-critical reliability, it does not scale well beyond 16 to 24 processors due to the physical limitations of the system bus. Beyond that point, each processor added to the system yields a less-than-proportional increase in performance. Therefore, SMP rarely runs large enterprise-level decision support systems (DSSs) or large, complex on-line analytical processing (OLAP) applications. The option of clustering for scalability is a short-term solution, as it fails to address the trend toward consolidation and simplicity of management for lower TCO.
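
A back-of-envelope model shows why the bus becomes the ceiling. The bandwidth and per-processor traffic figures below are assumed for illustration only (chosen so the knee falls in the 16-to-24-processor range the article cites), not measured values for any particular system: once aggregate memory traffic exceeds what the shared bus can deliver, additional CPUs simply wait.

    # Illustrative shared-bus saturation model (assumed numbers, not vendor figures).
    BUS_BANDWIDTH_MBPS = 800           # assumed shared-bus capacity
    PER_CPU_TRAFFIC_MBPS = 40          # assumed memory traffic per processor

    def effective_speedup(cpus):
        demanded = cpus * PER_CPU_TRAFFIC_MBPS
        delivered = min(demanded, BUS_BANDWIDTH_MBPS)
        return delivered / PER_CPU_TRAFFIC_MBPS

    for n in (4, 8, 16, 24, 32, 64):
        print(f"{n:3d} CPUs -> speedup ~{effective_speedup(n):.1f}x")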

MPP Highly Specialized

MPP systems are very popular in technical computing environments where a certain class of very large problems can best be attacked as many smaller problems in parallel. In commercial environments, this narrow specialization means MPP has fallen short where flexibility and availability of general purpose applications are critical.

Instead of a single-system image, MPP systems have multiple operating system images, creating difficulties in synchronizing tasks on MPP nodes. With an MPP distributed-memory system, the programmer must be concerned about the location of the original data, its integrity in a transaction and how it gets updated and moved from place to place. Because of these issues, applications for the MPP environment are relatively few.
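
The bookkeeping burden becomes clearer with a small sketch of the distributed-memory style, again in plain Python. The node function and the two-chunk split are hypothetical; the model they illustrate is that each "node" is a separate process with its own memory, so the programmer must explicitly decide where data lives, ship it there and collect the partial results.

    # Minimal sketch of the MPP/distributed-memory model: data and results
    # move only as explicit messages between separate address spaces.
    from multiprocessing import Process, Pipe

    def node(conn):
        rows = conn.recv()             # data arrives as an explicit message
        conn.send(sum(rows))           # partial result is shipped back
        conn.close()

    if __name__ == "__main__":
        data = list(range(1000))
        chunks = [data[:500], data[500:]]
        links, procs = [], []
        for chunk in chunks:
            parent, child = Pipe()
            p = Process(target=node, args=(child,))
            p.start()
            parent.send(chunk)         # the programmer decides where data lives
            links.append(parent)
            procs.append(p)
        total = sum(link.recv() for link in links)
        for p in procs:
            p.join()
        print(total)                   # 499500, assembled from per-node results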

Used most frequently to host extremely large data warehouses, MPP systems become harder to justify as next generation distributed shared-memory systems achieve comparable scalability and performance.

NUMA's Scalable Framework

Recognizing the scalability limitations of SMP and the complexity and application availability issues of MPP, most major system vendors are developing some form of NUMA server architecture to give their customers the performance, scalability and manageability they will require over the next five years. The best of these systems will evolve into mixed-mode environments, offering the price/performance and flexibility of client/server systems with the scalability, manageability and availability of mainframe systems.

First designed to meet the growing challenges of very large DSS and OLTP applications, NUMA represents the next stage in the evolution of SMP systems, leveraging the same applications base.

Because NUMA is a general purpose computing platform, it is finding commercial success across a variety of industries, including telecommunications, finance, manufacturing, retail and health care. Companies in these market segments require their data infrastructures to support the processing of huge amounts of on-line and historical data for customer relationship management, database marketing, fraud detection, analysis of competitive information, supply-chain optimization and other existing and emerging DSS applications.

NUMA is a distributed shared-memory machine that uses a series of shared-memory segments instead of a single centralized physical memory on a "big bus." With NUMA, the access time to a memory block varies with the location of the segment containing that block relative to the processor that needs it. However, clever techniques in the operating system can minimize "off-quad" memory accesses, and fast, scalable interconnects can further reduce the impact of the few off-quad accesses that must occur. Together, these techniques and technologies blunt the impact of the "non-uniform" part of the architecture and make its scalability practical. With the addition of fibre channel connections for optimal, multiple paths to storage, cascading switches for virtually unlimited I/O scalability, and robust management and backup-restore, NUMA-based servers can provide the levels of performance and service once the exclusive domain of the mainframe.
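
The arithmetic behind "minimizing off-quad accesses" is simple. The latencies below are assumed for illustration, not measurements from any NUMA product: the average memory access time is just a weighted blend of local and remote latency, so the closer the operating system keeps the off-quad fraction to zero, the closer the machine behaves to a uniform-memory SMP.

    # Illustrative NUMA latency model (assumed latencies, not measured figures).
    LOCAL_NS = 100                     # assumed on-quad memory latency
    REMOTE_NS = 400                    # assumed off-quad latency over the interconnect

    def average_latency(off_quad_fraction):
        return (1 - off_quad_fraction) * LOCAL_NS + off_quad_fraction * REMOTE_NS

    for frac in (0.50, 0.20, 0.05, 0.01):
        print(f"{frac:4.0%} off-quad accesses -> ~{average_latency(frac):.0f} ns average")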

But the real success of the architecture is the flexibility it gives the user.

Based on a NUMA architecture, a mixed-mode environment would house both a database server, running UNIX and in some cases NT, and perhaps multiple application servers running UNIX or NT. Each of these servers would reside on one or multiple four-processor boards known as "quads." These quads would exist within a hardware and/or software partition of the system.
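
One way to picture such a partitioned system is as a simple layout description. The sketch below is purely illustrative; the partition names, quad counts, operating systems and roles are hypothetical, chosen only to show how quads could be grouped into partitions that each run their own operating environment within one managed machine.

    # Purely illustrative layout of a mixed-mode system: quads (four-processor
    # boards) grouped into hardware/software partitions. Names and counts are
    # hypothetical.
    mixed_mode_system = {
        "partition-db": {
            "os": "UNIX",
            "quads": ["quad0", "quad1", "quad2", "quad3"],   # 16 CPUs
            "role": "database server",
        },
        "partition-app1": {
            "os": "NT",
            "quads": ["quad4"],                              # 4 CPUs
            "role": "ERP application server",
        },
        "partition-app2": {
            "os": "UNIX",
            "quads": ["quad5", "quad6"],                     # 8 CPUs
            "role": "Web/decision-support application server",
        },
    }

    for name, part in mixed_mode_system.items():
        print(f"{name}: {len(part['quads']) * 4} CPUs, {part['os']}, {part['role']}")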

Such a system indeed would look like and deliver many of the benefits of centralized computing, but would do so by incorporating application and database servers in a single system with the flexibility to deliver the whole range of services users require, including Web-based applications, e-mail, database marketing and other formerly departmental or peripheral functions.

A fundamental goal of the mixed-mode, managed environment is to extend the life of mainframe-based data through a high-speed channel, providing better scalability, faster performance and a more flexible environment for improving and updating applications. Issues to consider:

  • Where application re-hosting is possible, NUMA vendors must demonstrate a core competency in understanding the world of the mainframe before migrating these high-end systems.
  • The system must support multiple client devices, including the dumb terminal, PC and NetPC, workstation, and network computer or thin client.
  • The platform must be able to support thousands of client hardware devices, including Java- and Citrix WinFrame-based systems, due to requirements imposed by the Internet, intranet and extranet.

This new server concept aligns IT with business goals by consolidating strategic applications and data on a flexible, adaptable infrastructure that supports rapid change and growth. Housing massive amounts of enterprise data for storage and analysis, a mixed-mode system simultaneously handles large numbers of users for sophisticated next-generation applications. It also creates a smooth, low-risk migration path for NT into the data center that helps integrate NT with UNIX.

For companies that are focused on future growth and maximum system performance, NUMA provides an optimal platform to handle the complexities of today's data center. As the foundation for mixed-mode, managed-computing environments, NUMA also puts customers on the right path to centralizing the functions and management of ERP applications, data mining and data warehousing, building custom operational systems and Web-enabling the front end of the enterprise.

With so many disparate interests to serve, the data center manager should not be forced to choose among technologies, but should be prepared to embrace them all. An open architecture, implemented thoughtfully, will provide respite from the chaotic situation at the high end so that companies can get back to the business of the enterprise.

1 "Revolution and Risk: IT into the Millennium," commissioned by Sequent Computer Systems and conducted by the RONIN Corporation.
