The mainframe gets little attention in the media these days—in part because it’s perceived as a moribund technology and in part because relatively few organizations depend on one.

But the mainframe is hardly moribund. And while the number of organizations that run applications on these systems may be relatively small, those organizations include the largest and most transactionally active on earth.

The overwhelming majority of those companies are seeing increases in mainframe database and transaction volume. An estimated 60 percent are even seeing an increase in the number of databases hosted on the mainframe.

So if you work for a major global financial institution or other large organization of a certain vintage, your mainframe systems of record probably continue to generate massive volumes of high-value data day after day after day.

Unfortunately, despite the incalculable value of mainframe data and applications—as well as the mainframe’s unmatched performance, reliability, scalability, security, and operational economics—many IT leaders continue the twin toxic trends of mainframe disinvestment and denial.

Neither makes sense in 2017. Here are three reasons why:

1) You’re not going to re-platform. The track record of major mainframe re-platforming projects is abysmal. So if you have complex mainframe applications with rich business logic that has been refined over many years, they’re here to stay.

After all, even a successful re-platforming effort requires massive spending—yet leaves you with nothing more than similar logic in another codebase. That’s of exactly zero value to your customers.

There’s thus no point in pretending you’re going to re-platform someday. You won’t.

2) You’re bleeding mainframe expertise. The last generation of mainframe specialists is leaving the workforce. Soon you’ll be left with no one who understands your most critical systems.

Outsourcing won’t solve this problem. Outsourcing is only appropriate for commodified disciplines that don’t differentiate your business, like desktop support or network troubleshooting. Core systems of record don’t fall into this category.

3) You must leverage your mainframe assets more effectively and more nimbly. Given that your mainframe databases and applications provide essential back-end support for your customer-facing mobile/digital engagement and your highest-impact analytics, twice-a-year COBOL code-drops or 12-week turnarounds for modifications to your DB2 calls have become wholly unacceptable.

You absolutely must bring the same agility to your mainframe as you’re achieving with distributed systems and the cloud—even as you lose your mainframe-related “tribal knowledge.”

If the mainframe status quo is no longer tolerable, what form should mainframe re-investment take? What concrete measures will help you get maximum value from your mainframe information assets over the coming years?

Here are five suggestions:

1) Acquire Millennial-friendly tools. To do the work you need done on mainframe data and code, your next-gen database admins and DevOps staff need less arcane tools. Fortunately, forward-thinking mainframe vendors are finally rolling out such tools. Uptake hasn’t been tremendous, because veteran mainframers have little reason to upgrade. But if you’re going to make your mainframe an integral element of your Agile analytic enterprise, you must give your younger staff more intuitive tools.

2) Investigate visual mainframe mapping. Mainframe applications and database calls have been very poorly documented over the years, even as they’ve become more complex. This lack of documentation poses a real impediment to the mainframe newcomers who must inevitably connect your back-end systems of record to your front-end systems of engagement. The good news: New visualization tools empower these newcomers to quickly understand existing database calls and inter-program dependencies.

3) Automate COBOL unit testing. Unit testing is de rigueur in the Java world—but has been difficult to consistently perform on the mainframe. Innovative test automation and stubbing tools, however, now make it easy to execute unit tests on COBOL code. This simple improvement in mainframe DevOps practice dramatically impacts mainframe agility, enabling faster and more iterative modifications to mainframe code while safeguarding the precious integrity of mainframe systems.

4) Integrate mainframe databases and applications into the enterprise DataOps/DevOps environment. Many IT leaders remain unfamiliar with new solutions that integrate mainframe development, test, and deployment tools with popular non-mainframe technologies such as Atlassian tools, Git, and Jenkins. These integrations can help your data and application management teams better coordinate changes to front-end apps with corresponding changes in back-end mainframe code.

5) Commit to cultural transformation. There is nothing inherently less Agile about the mainframe. Data is data, and code is code. Impediments to mainframe agility are instead caused by processes, tools, and people. Change those, and you can achieve anything. But it won’t happen magically. Someone must champion the requisite investment and cultural change. If that person isn’t you, find out who it is—and encourage them to become the agent of mainframe change your organization needs.


Lenny Liebmann

Information Management contributing editor Lenny Liebmann has been covering cybersecurity for more than 25 years.