
Year 2000 Compliance Test

Published
  • April 1, 1998, 1:00am EST

Show of hands--how many of you have successfully petitioned for an extension to your Year 2000 project?

Although most organizations have completed the first two phases of Year 2000 compliance--recognizing the problem and assessing current applications for all dates that affect calculations--fewer than 20 percent have begun their actual conversion and testing projects.
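
To make the assessment phase concrete, here is a minimal sketch (Python, purely illustrative; the field-name patterns are assumptions rather than a complete inventory method) of the kind of scan used to flag candidate two-digit date fields in source code:

```python
import re
import sys

# Illustrative patterns only: two-digit year fields often hide behind
# names like YY or YR, or behind a DATE name with a PIC 99 picture
# clause in COBOL source. A real inventory would use far more patterns.
SUSPECT_PATTERNS = [
    re.compile(r"\b\w*(YY|YR|DATE)\w*\b.*PIC\s+9\(?2\)?", re.IGNORECASE),
    re.compile(r"\b\w*(YY|YR)\w*\b", re.IGNORECASE),
]

def scan_file(path):
    """Report lines that may declare or touch a two-digit year field."""
    with open(path, errors="replace") as src:
        for lineno, line in enumerate(src, start=1):
            if any(p.search(line) for p in SUSPECT_PATTERNS):
                print(f"{path}:{lineno}: {line.rstrip()}")

if __name__ == "__main__":
    for path in sys.argv[1:]:
        scan_file(path)
```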

The few companies that have begun testing are discovering a great deal of complexity. The tasks of converting applications and data, testing those changes and redeploying the applications are causing daunting disruptions to operations. The "down time" required for testing and the lack of tools to isolate Year 2000 programs from the day-to-day production environment make the conversion process extremely slow and costly.

Companies ahead of the game were quick to recognize the strategic role enterprise storage will play in the testing process.

To illustrate the point, this article will answer six critical questions from a fictional test given by Professor Willie B. Dunn, University of Retired COBOL Programmers, Testinfast, PA.

What role(s) does enterprise storage have in Year 2000 compliance projects?

Simple Storage: At the most basic level, enterprise storage is the resource where Year 2000 test data is kept and manipulated; the physical location where sign-out logs, change histories and code versions can be kept; the program management tool and resource that organizes information for testing; the repository where Year 2000 compliant code can be stored until it is deployed throughout the organization; and the I/O resource for CPUs dedicated to Year 2000 testing activity.

Data Movement Enabler: Enterprise storage is the underlying technology that allows movement of data, including programs, from the production environment into the Year 2000 test environment. Advanced enterprise level storage will allow this movement at the storage subsystem level so that host resources (MIPS, I/O loads, etc.) are not spent on this activity.

Data Manipulation Enabler: Enterprise storage allows data to be manipulated, monitored and changed. Techniques include setting up multiple mirrors of data that are uniquely and separately addressable by the Year 2000 LPAR (logical partition) or by a dedicated Year 2000 test system. Such mirrors can then be used for iterative and often destructive testing operations, with no cross-contamination of data between the production environment and the Y2K test environment. Advanced enterprise storage will allow this manipulation at the storage subsystem level so that host resources (MIPS, I/O loads, etc.) are not spent on any of these activities. Capabilities in this area also include continuous monitoring of (logical) volumes by facilities supplied by the storage subsystem.
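
The isolation that separately addressable mirrors provide can be illustrated with a toy model (a minimal sketch in Python; real subsystems establish mirrors in microcode, not application code, and the volume and data names here are invented):

```python
import copy

class StorageSubsystem:
    """Toy model of a subsystem holding a production volume plus
    separately addressable mirrors for a Y2K test LPAR."""

    def __init__(self, production_blocks):
        self.production = production_blocks
        self.mirrors = {}          # mirror address -> independent copy

    def establish_mirror(self, address):
        # In a real subsystem the copy happens internally, consuming
        # no host MIPS or I/O; here we simply deep-copy the blocks.
        self.mirrors[address] = copy.deepcopy(self.production)
        return self.mirrors[address]

# A destructive Y2K test run against the mirror...
subsystem = StorageSubsystem({"PAYROLL.DAT": "31/12/99"})
test_volume = subsystem.establish_mirror("Y2K-TEST-01")
test_volume["PAYROLL.DAT"] = "01/01/00"   # rolled-over test data

# ...leaves production untouched: no cross-contamination.
assert subsystem.production["PAYROLL.DAT"] == "31/12/99"
```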

Code Redeployment Enabler: Once programs and data have been made Year 2000 compliant, enterprise storage can be used in the dissemination of that code. Storage based point-in-time backups can be taken. Those images can then be spun to tape for eventual deployment throughout the organization. This transfer to tape takes place as a background task so the host need not go into a "batch window" activity that forces the on-line community to be interrupted. Advanced enterprise storage facilities, once again, will allow these actions to take place without the intervention of host resources.

Why is enterprise storage most appropriate for Year 2000 testing? What characteristics tell me whether I have enterprise storage or not?

Information critical to operating systems, utilities, database management systems, application programs and data warehouses can all reside on enterprise storage. This places data at the center of an IS organization rather than distributing it to dispersed processing units.

True enterprise storage allows the simultaneous co-residence of data between mainframes and open systems. As such, one storage subsystem can serve the Y2K testing effort as differing processors, their applications and their facilities are cycled through for compliance checking. Point storage products--products that connect to only one platform (such as mainframes only or open systems only)--are at a distinct disadvantage in this regard.

Enterprise storage includes storage-based software to facilitate simultaneous operation between the production environment and the Y2K test environment. Such software enables a separate mirror which is uniquely addressable by the Y2K test system or Y2K LPAR (logical partition).

True enterprise storage must display the following characteristics:

  • Connectivity across a wide range of different brands of mainframes, open systems, midrange systems and NT.
  • Connectivity across a wide range of technologies such as ESCON, Fast and Wide SCSI, fibre channel and others throughout the enterprise.
  • Simultaneous connection to--and the ability to share data between and among--multiple mainframes and open systems.
  • Storage-level establishment and manipulation of local and remote mirrors.
  • Storage-level establishment and manipulation of multiple mirrors with separate addressability for protection of both Y2K and production data from each other.
  • Simultaneous Year 2000 testing as production work continues.
  • Scalability of storage in terms of capacity and speed within a single frame so that as the Year 2000 project (inevitably) grows, the storage asset can grow with it.
  • Storage management facilities for both open systems and mainframes.
  • Reusability of the storage asset in other future applications.

What is a storage-oriented definition of Year 2000 compliance and why is it so important that products involved in Year 2000 compliance projects be themselves Year 2000 compliant?

"Year 2000 Compliance" means that the product is capable of creating, storing and processing records for the Year 2000 and beyond; the product is capable of managing data involving dates including single-century formulas and multi-century formulas; the product will recognize the rollover to Year 2000 and the fact that the Year 2000 is a leap year; the product will continue to function and operate in accordance with the product specifications including performance specifications and will provide the required output; and the product will experience no interruptions and no abnormal endings and/or incorrect results caused by the use of said product in the year 2000 and thereafter.

It is important to remember that storage must be faithful to its prime directive: whatever is stored will be retrieved unchanged. That is why a storage-oriented definition of Year 2000 compliance may differ from other such definitions.

Products involved in Year 2000 compliance projects must themselves be Year 2000 compliant for the following reasons: A) Users cannot risk the introduction of errors by the hardware and software being used in the project; all the money, time and effort being consumed in making IT systems compliant cannot be compromised or negated. B) Vendors who are selling noncompliant products put the buying organization needlessly at risk; alternative suppliers must be found, and found quickly. C) Most Year 2000 hardware and software will be "cascaded," or reused, once the compliance project is completed; in their subsequent roles, they too must be Year 2000 compliant.

What are some good ways to move data from the production environment to the Y2K test environment? Explain the advantages of each method.

Local mirroring, including multiple mirroring (mirrors of data that are in the same storage subsystem). Advantages include establishing a protected Year 2000 domain in available storage on a single storage subsystem. This domain is fully separate and uniquely addressable vis-à-vis the production environment and can be accomplished without using expensive host resources. Multiple mirrors can be either local or remote and enable multiple project teams to work on common code without interruption.

Remote mirroring (a mirror of production data that is remote by distance of a meter, thousands of meters or even several miles). Advantages include: A) a copy of the data at another location that can be accessed in the event of a planned or unplanned interruption, and B) copies of actual live data that can be used as a baseline or anchor point from which to embark on Year 2000 testing.

Storage-based file transfers (file transfers between two storage subsystems without involving host resources). Advantages include high-speed movement of data, including application programs, to a location where they can be made Year 2000 compliant. Because the transfer runs from one storage subsystem to another without consuming host resources, those host resources remain available for on-line transaction processing.

Sneaker Net (also known as CTAM--the Chevy Truck Access Method--involving a delivery person wearing tennis shoes carrying tapes of data from one location to another, often employing a Chevrolet truck). Advantages include lowest cost bandwidth available so long as there are no tape issues or tape handling problems. Cheap, but it moves the data!!
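
For perspective, the effective bandwidth of CTAM is easy to estimate; the figures below are illustrative assumptions, not measurements:

```python
# Illustrative assumptions: 500 cartridges at 10 GB each,
# driven across town in 4 hours.
tapes = 500
gb_per_tape = 10
hours_in_transit = 4

total_bits = tapes * gb_per_tape * 8e9
seconds = hours_in_transit * 3600
print(f"Effective bandwidth: {total_bits / seconds / 1e6:.0f} Mbit/s")
# ~2,778 Mbit/s -- far beyond any 1998 network link, at trucking prices.
```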

Do you need to test code that has been through an offshore (or onshore) "Year 2000 factory?" Why or why not?

Absolutely, yes. The value of Y2K software factories is that they leverage existing resources with focused expertise. They find and repair noncompliant code, usually using standard, repetitive procedures, either manual or automated. However, it remains incumbent upon the actual owners of the systems to verify that the conversions made on their behalf by the factory actually do fix the Y2K problem.

Since it is the IT organization that is entrusted with verifying compliance, it is to its benefit to ensure that the Y2K tasks really were accomplished per specification.

Although unlikely, it is possible that the remediation process itself introduced new errors. Such errors need to be found and shaken out. It is too big a risk to the organization simply to assume that compliance is automatically achieved upon arrival of the returned code. Time must be set aside in the Year 2000 project plan to verify the Y2K factory's efforts.
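
In concept, that verification can be a baseline comparison: run the remediated programs against mirrored test data for dates that straddle the rollover and diff the output against pre-remediation baselines. A minimal sketch (Python; the program interface and its --as-of flag are hypothetical):

```python
import subprocess

# Dates that straddle the rollover, including the leap day.
TEST_DATES = ["1999-12-31", "2000-01-01", "2000-02-29", "2001-01-01"]

def run_report(program, run_date):
    """Run one report for one simulated system date; capture its output.
    The --as-of flag is a hypothetical way to inject the test date."""
    result = subprocess.run([program, "--as-of", run_date],
                            capture_output=True, text=True, check=True)
    return result.stdout

def verify(remediated, baseline_outputs):
    """Compare remediated output against the pre-agreed baselines."""
    for run_date in TEST_DATES:
        actual = run_report(remediated, run_date)
        if actual != baseline_outputs[run_date]:
            print(f"MISMATCH on {run_date}: remediation must be reviewed")
        else:
            print(f"OK on {run_date}")
```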

Once Year 2000 compliance is achieved, what should be done with the equipment in the Year 2000 test bed? If you recommend disposing of the storage upon project completion, explain how you would justify that decision to your auditors and financial personnel. If you recommend keeping the storage, for what use?

It is inappropriate simply to throw away an investment in technology once the project is over unless there is no other use for it. Auditors and financial personnel should, and do, look unfavorably on such disposal.

Rather, a more prudent user will look at this investment in terms of three dimensions: growth demands, new applications and the long-term viability of the test-bed concept. In all cases, the storage must have features that enable its longevity: programmability, substitution of newer, denser disk drive technologies, growth of on-board cache memory, interchangeability among differing connection schemes (such as Fast and Wide SCSI, fibre channel and ESCON) and scalability within a single frame. Assuming the presence of such features, storage demand can grow from simple business expansion, merger and acquisition activity, and an increased propensity to save data given the falling cost per megabyte of magnetic storage.

New applications also drive storage growth: data-intensive Internet/intranet projects such as electronic commerce, data warehousing and data mining, enterprise-wide applications, client/server evolutions, and even data that used to be stored on microfiche and other near-line storage mechanisms.

Perhaps most importantly, if the Year 2000 experience with a special test environment is successful, it will validate the use of an enterprise storage-based test bed. Leaving the test bed "up" will enable it to be used for the testing of new software, new releases of database management software and even everyday maintenance changes. In short, testing in a specialized area with dedicated processing and storage facilities could be an "industry best practice" worthy of continuing long into the new millennium.

EMC Corporation asked Find/SVP to survey 700 IT executives and managers working for major North American, European, Australian and South African companies representing key industries. The objectives of the study were to determine quantitatively the level of concern of IT executives about several critical information management issues; the degree to which information is being shared between platforms and the issues executives face when doing this; the pace at which organizations are consolidating information; the financial and business impact of planned and unplanned downtime on organizations; and the trends driving the demand for storage. Some of the key findings from this survey are:

  • Much of the information being transferred is between mainframe-based production systems and newer, open systems-based relational databases for analysis and decision support. That transfer has a negative impact on the production system, with 43% of the respondents reporting that data transfer consumes more than 20% of their mainframe MIPS. One in ten report that half of their mainframe MIPS are spent on data transfer.
  • With many brands of "open" servers using their own version of a particular operating system, sharing data between these disparate systems presents a technical challenge. The respondents cited the proprietary attitude of server vendors, inadequate network-based file and data transfer and inadequate bandwidth as the greatest impediments to successful data sharing between systems from different vendors.
  • As part of the consolidation trend, IT organizations are well on their way toward building enterprise storage frameworks--those which store data for all platforms simultaneously--in order to manage their consolidated data pools. 62% prefer consolidated enterprise storage over traditional CPU-based storage.
  • As network computers begin to ship, a plurality (47%) feel that central control and centrally managed storage are the top two benefits of this architecture, surpassing the low initial price often cited as the primary benefit of NCs.
  • The demand for storage capacity continues to grow, triggered by accelerating data consolidation and data sharing. More than one-third (37%) of the respondents report that their storage capacity will increase by more than 50% over the next two years.
  • For the past three years, more than 70% of the executives surveyed reported that their storage needs were increasing rapidly or very rapidly.
  • Those new applications are also the greatest source of unexpected storage purchases for close to half (47%) of the respondents.
  • Electronic commerce over the Internet will also drive storage demand, with 67% in favor of replicating corporate databases accessible by the Internet in order to heighten data security. More than half report they are using their Internet sites to transact business or interact with customers in a way that is more than just providing information via Web pages.
  • 32% of respondents report having an enterprise storage plan and a chief storage manager.
  • 35% of respondents indicate they have a storage plan but no one person who is responsible for it.
  • 15% of respondents are evaluating enterprise storage but have no plan.
  • 18% of respondents purchase storage but not enterprise storage.
  • Accelerating data consolidation and data sharing are triggering rapid growth of data storage: 33% of respondents indicate storage is increasing very rapidly, 42% rapidly, 24% slowly and 1% not increasing.
  • IT managers see total storage capacity increasing rapidly over the next two years. 30% of respondents expect their storage needs to increase 20-30%, 30% expect an increase of 31-50% and 24% expect a growth of 51-75%.
  • New enterprise-wide applications (62%), such as SAP, Baan, PeopleSoft, Oracle Financials and others, are the leading reason for the rapid growth of storage. Other reasons are data warehousing/data mining (45%), increasing size of customer records (44%), Internet/intranet applications (39%), on-line archival information (30%), and mergers and acquisitions (26%).
