Systems Development Methods for Enterprise Integration, Part 1

  • February 01 2004, 1:00am EST

Author's note: Earlier columns have separately addressed concepts of enterprise architecture and also of XML (Extensible Markup Language) and Web services for enterprise integration. In this and the following months, I will discuss the changes in systems development methods necessary for the 21st century enterprise.

Enterprise architecture and enterprise engineering achieve business integration in the enterprise, which in turn enables more effective technology integration. Before we examine these further, we first need to review the evolution of systems development methodologies in the information age.

Methodologies evolved from the start of the information age to help us examine current manual processes so we could automate them. By the 1970s, these had evolved into software engineering methods. Michael Jackson, Ken Orr, Ed Yourdon, Tom DeMarco and others were key originators of software engineering methodologies, also called structured methods. These methods analyzed manual processes, documenting them with data flow diagrams (DFDs) and functional decomposition diagrams (FDDs). The structure of modular programs to automate these processes was documented using structure charts (SCs). Programs were then written in various programming languages to execute the automated processes.

As discussed in earlier months, by automating manual processes as-is, we moved from manual chaos to automated chaos. This was because common manual processes used in various parts of the enterprise had often evolved in quite different ways. For example, a manual order entry process might be executed differently according to how the order was received: by mail, by phone or from a salesperson. The order entry process might also be executed differently depending on the specific products or services ordered. The result was the evolution of different manual processes, all intended to achieve the same objective: accept an order for processing. When these manual processes were automated, we found we also had many automated order entry processes. We had lost sight of the principle of reusability, first identified by Adam Smith in The Wealth of Nations.

This added to the automated chaos. When a process change was required, the same change had to be made to every version of that process throughout the enterprise. Every program that automated the different versions of the process had to be changed, often in slightly different ways. The result was program maintenance chaos!

With software engineering, each DFD identified the data that it needed as data stores. Each different version of the same process resulted in a different data-store version of required data. Redundant data versions were implemented for each automated process. In fact, we saw this problem start to emerge with the 19th century industrial enterprise. As we discussed in earlier months, insurance and bank workers each kept separate hand-written applicant or customer ledgers to answer queries. With these redundant data versions, we moved to data maintenance chaos!

Whenever a change to data values was made for maintenance purposes, such as changing a customer's address, every version of that address had to be changed. This was redundant data maintenance processing, and it required redundant staffing to do this redundant work.
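To make this cost concrete, here is a minimal sketch (hypothetical data structures, not from the column) of redundant data maintenance: when each stovepipe system keeps its own copy of a customer's address, one real-world change requires one update per copy, while a single shared store requires exactly one.

```python
# Three stovepipe order systems, each holding its own copy of the same customer.
mail_orders = {"C001": {"name": "A. Smith", "address": "1 Old St"}}
phone_orders = {"C001": {"name": "A. Smith", "address": "1 Old St"}}
sales_orders = {"C001": {"name": "A. Smith", "address": "1 Old St"}}

# A single address change must be applied redundantly, once per system.
for system in (mail_orders, phone_orders, sales_orders):
    system["C001"]["address"] = "2 New Ave"

# With one shared customer store, the same change is made exactly once.
customers = {"C001": {"name": "A. Smith", "address": "1 Old St"}}
customers["C001"]["address"] = "2 New Ave"
```

Miss one of the redundant updates and the systems silently disagree about where the customer lives, which is precisely the maintenance chaos described above.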

Because redundant data maintenance programs were developed independently, data maintenance workers had to be trained in the different operating procedures that were used for data entry by each redundant data maintenance program. This resulted in redundant training.

This approach to systems development has led to the major problems that we have today of redundant data and processes with stovepipe systems. All of these redundant costs are regularly incurred by every enterprise in redundant data maintenance costs for redundant data value changes and in redundant staffing and training to carry out this redundant work.

These are all redundant costs that negatively affect the bottom line: reduced profits for commercial enterprises and reduced cost-effectiveness for government or defense enterprises.

In this same period, from the late '60s through the early '70s, Edgar Codd developed the relational model from set theory. This was the foundation of the relational database technology that we use today. The first relational database management systems were released by IBM Corporation and by Oracle Corporation in the late '70s and early '80s.

The mid-'70s saw the emergence of three approaches to apply concepts of the relational model to the methods that were used for database design. Each addressed development of data modeling methods using normalization to eliminate redundant data versions.
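The idea behind normalization can be sketched as follows (hypothetical tables, not from the column): customer details repeated in every order row are factored out into a single customer table, and orders refer to the customer by key, so each fact is stored once.

```python
# Denormalized order rows: customer details repeated in every row.
orders_flat = [
    {"order_id": 1, "customer": "A. Smith", "address": "1 Old St", "item": "Widget"},
    {"order_id": 2, "customer": "A. Smith", "address": "1 Old St", "item": "Gadget"},
]

# Normalized form: customer facts stored once; orders hold a reference key.
customers = {}
orders = []
for row in orders_flat:
    key = row["customer"]  # a stand-in for a proper customer ID
    customers[key] = {"address": row["address"]}
    orders.append({"order_id": row["order_id"], "customer_id": key, "item": row["item"]})

# An address change is now a single update, not one per order row.
customers["A. Smith"]["address"] = "2 New Ave"
```

Normalization eliminates the redundant data versions, and with them the redundant maintenance, staffing and training described earlier.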

One approach, developed in Australia, evolved into integrated methods for information using a rigorous engineering discipline called information engineering (IE). I originally developed IE in 1976. This IE variant evolved into what is today called enterprise engineering. We will discuss these methods further next month.
