Traditionally, mainframes and distributed systems have been viewed as separate entities. Today, they are merging into a single distributed IT environment. As a result, people from two previously separate cultures - mainframe and distributed - are coming together to architect, develop and manage the resulting unified infrastructure.
The groups have arrived at this point through very different evolutionary paths. Along the way, both have had considerable, although dissimilar, experiences and have learned valuable lessons. Each group can learn much from the other's history.
During the evolution of the mainframe, people in that community gained valuable insight into management practices for large-scale environments. People who have grown up in the distributed environment can learn from their experience and avoid making mistakes similar to those that mainframe groups made over the years.
The experience of the mainframe environment has already helped by bringing the IT Infrastructure Library (ITIL) to the distributed world. In the early 1980s, original systems management concepts, many of which were developed for the mainframe, were documented in a series of publications called the Yellow Books. These books were key inputs to the original set of ITIL books, and have been adapted and enhanced in ITIL for the distributed environment.
Over the years, mainframe specialists have developed remarkably strong processes for change management, virtualization, clustering and resource standardization. Let's examine how what they learned can help bring equally strong processes to distributed systems.
Reliability and stability are fundamental tenets of the mainframe mindset. Change is a major threat to reliability and stability. Consequently, mainframes required painstakingly rigorous change management processes to ensure that changes were approved and tested before they hit production. You can improve stability in the distributed environment by continuing to apply the rigorous change management processes that were developed for mainframes. These include processes for patches, software updates and new software deployments.
Effective change management achieves the optimum balance between agility and stability. To determine the level of stability required, and hence the rigidity of change control necessary, you must know the business services and processes supported by the various infrastructure components and the business criticality of those services. That's because the more critical the supported services or processes are, the higher the level of stability required, and the tighter the change control must be for the components that support them.
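The idea of tying change-control rigor to business criticality can be sketched in a few lines. This is a minimal illustration, not a real tool: the tier names, services, components and approval rules below are all hypothetical.

```python
# Hypothetical criticality tiers mapped to change-control rigor.
CHANGE_CONTROL = {
    "critical": {"approvals": 2, "test_in_staging": True,  "change_window": "weekend"},
    "standard": {"approvals": 1, "test_in_staging": True,  "change_window": "nightly"},
    "low":      {"approvals": 0, "test_in_staging": False, "change_window": "any"},
}

# Which business services each infrastructure component supports (made-up examples).
COMPONENT_SERVICES = {
    "db-server-01":  ["payments"],         # supports a business-critical service
    "web-server-07": ["marketing-site"],   # supports a low-criticality service
}

SERVICE_CRITICALITY = {"payments": "critical", "marketing-site": "low"}

def change_policy(component):
    """Apply the strictest policy among the services a component supports."""
    order = ["low", "standard", "critical"]
    tiers = [SERVICE_CRITICALITY[s] for s in COMPONENT_SERVICES[component]]
    strictest = max(tiers, key=order.index)
    return CHANGE_CONTROL[strictest]

policy = change_policy("db-server-01")   # strictest supported service wins
```

The key design point is that a component inherits the change-control tier of the most critical service it touches, which is exactly the mapping the paragraph above describes.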
The distributed mindset holds that hardware is cheap. Cheap hardware has translated into server sprawl, with hundreds or even thousands of servers consuming valuable data center space, running up power bills and creating management headaches. High costs and low server utilization rates are translating into low ROI.
Mainframe CPU cycles and other computing resources have always been expensive. So mainframe developers had to focus on conservation. Virtualization to run multiple applications on a single physical host server is one of the techniques they've used. Moreover, through experience, they learned that spending extra development effort upfront to achieve resource efficiency would pay off later because there would be fewer computing resources to operate and manage. This approach keeps costs low and resource utilization high.
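The consolidation payoff is easy to see with a little arithmetic. The figures below are purely illustrative assumptions, not data from the article.

```python
# Illustrative consolidation arithmetic (all numbers are hypothetical).
import math

servers = 200              # dedicated distributed servers
avg_utilization = 0.10     # dedicated boxes often sit around 10% busy
target_utilization = 0.60  # a headroom-conscious target for virtualized hosts

# Total real work is only 20 "server equivalents" of capacity.
needed_capacity = servers * avg_utilization

# Packing that work onto hosts run at 60% needs far fewer machines.
hosts = math.ceil(needed_capacity / target_utilization)

print(hosts)  # 34 physical hosts instead of 200
```

Even with generous headroom, the hypothetical estate shrinks by roughly six to one, which is the cost and utilization argument the mainframe community learned decades ago.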
Virtualization is now making its way into distributed environments, where it presents both application and management challenges. Distributed applications have traditionally been designed to run on dedicated servers. When these resource-hungry applications are packed together into virtual environments, they can misbehave. Developers of distributed applications can learn from the approach mainframe developers use to build applications that operate effectively in virtualized environments.
With respect to management, distributed system management tools were not designed for the dynamic, virtualized world. These tools operate under the assumption that each physical server runs specific applications and jobs. In dynamic environments, a physical host plays many roles and roles change rapidly. Operating systems and applications bounce from one server to another as workloads change. Rules based on a static environment no longer apply. People in the distributed environment can leverage mainframe experience to develop and manage applications more effectively in dynamic, virtualized environments.
When mainframe people used virtualization to reduce the number of physical mainframes, they ran into a problem. When a physical machine goes down, all virtualized applications running on that machine go with it. They solved the problem with clustering, which lets them take advantage of virtualization without jeopardizing availability. By combining clustering and virtualization, they created a horizontally sliced architecture that disperses applications across multiple physical mainframes. Clustering serves two major purposes: scaling and redundancy. It reduces the number of machines without sacrificing performance, and it provides redundancy if a machine fails.
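The "horizontally sliced" idea amounts to an anti-affinity placement rule: spread each application's instances across different physical machines so that losing one machine never takes down every instance. The sketch below illustrates that rule; the host and application names are hypothetical.

```python
# Anti-affinity placement sketch (host and app names are hypothetical):
# each app's replicas land on distinct physical machines, so no single
# machine failure takes out all instances of any application.
from itertools import cycle

HOSTS = ["mainframe-a", "mainframe-b", "mainframe-c"]

def place(apps, replicas=2):
    """Round-robin each app's replicas onto distinct hosts."""
    placement = {}
    host_ring = cycle(HOSTS)
    for app in apps:
        chosen = []
        while len(chosen) < replicas:
            host = next(host_ring)
            if host not in chosen:       # never co-locate replicas of one app
                chosen.append(host)
        placement[app] = chosen
    return placement

layout = place(["billing", "inventory", "reporting"])
# Every app now runs on two different physical machines.
```

Fewer machines carry the whole workload (virtualization), yet each application survives any single machine failure (clustering), which is the combination the paragraph describes.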
Next, the mainframe staff added load balancing to dynamically allocate computing resources to the heaviest workloads. The result is fast performance, even under peak workloads. The lesson is to reduce the number of servers you need with virtualization, while providing scaling and redundancy with clustering. Then you can throw load balancing into the mix for great performance.
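Dynamic allocation of capacity to the heaviest workloads can be sketched as a least-loaded scheduler: each new unit of work goes to whichever cluster member currently carries the least load. The jobs, weights and host names below are hypothetical.

```python
# Least-loaded dispatch sketch (jobs, weights and hosts are hypothetical).
import heapq

def balance(jobs, hosts):
    """Assign each (name, weight) job to the currently least-loaded host."""
    heap = [(0, h) for h in hosts]          # (current load, host name)
    heapq.heapify(heap)
    assignment = {h: [] for h in hosts}
    for name, weight in jobs:
        load, host = heapq.heappop(heap)    # host with the least load so far
        assignment[host].append(name)
        heapq.heappush(heap, (load + weight, host))
    return assignment

work = [("batch-1", 5), ("query-2", 1), ("report-3", 3), ("batch-4", 5)]
out = balance(work, ["host-a", "host-b"])
```

A greedy least-loaded policy like this is the simplest form of the idea; real load balancers also factor in affinity, health and response times, but the principle of steering work toward spare capacity is the same.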
In the distributed environment, freedom of choice has been a way of life. The result is a diversity of hardware, operating platforms, application programming languages, databases and packaged applications - and an IT infrastructure that is complex to manage and maintain. It takes multiple skill sets to manage and support the diverse application stacks that result. In addition, multiple application development groups and multiple operations groups use disparate system management tools.
Mainframe application developers have always operated in an environment of limited choice. When choices widened as new programming languages and databases emerged, the mainframe operations staff discovered that freedom of choice meant greater complexity. So mainframe professionals opted to standardize, limiting the number of programming languages and databases for application development. Standardization can help you get a handle on the growing complexity of distributed systems.
Standardization also delivers substantial buying power. The mainframe staff in many large organizations uses this power to encourage application software vendors to provide applications on the organization's preferred platforms. In distributed environments, that standardization can give you power as well, enabling you to negotiate better pricing and encourage vendors to supply better tools for supporting the systems you have in place.
Looking Ahead to IT Service Maturity
As you progress toward IT service maturity, you can learn from people who struggled with challenges similar to the ones you face today. Their experience can guide you in overcoming the obstacles that block your path. You can leverage the wisdom they acquired to improve decision-making, avoid pitfalls and accelerate progress toward IT service management maturity in an increasingly distributed world.