Not long ago, I was in a meeting listening to some bright consultants expound on an area of IT I am rarely involved in. They were using jargon that was new to me, and I tried to guess the concepts behind it. Experience told me that there might be no concepts, or the concepts might be trivial or so unclear that they seemed to represent quite different things from one sentence to the next. I also thought that some of the terms might just be convenient labels for great swathes of the unknown - to pretend that there was knowledge of something simply by giving it a name.


At the same time, I was enchanted by the confidence and virtuosity of the consultants. Despite the voice of experience, I wanted to believe that here was something solid, coherent and meaningful. As the minutes ticked by, my mind began to drift back in time, recalling similar emotions from untold numbers of meetings whose details were forgotten long ago. I began to recall a few memorable data management terms from years past that have turned to dust. These jewels of hype once dazzled the entire IT industry but can now only be found in yellowing documents and the dim recollections of a few aging practitioners.


The canonical synthesis. I think I understood what the canonical synthesis was for possibly a fortnight, and that was after years of effort. By “understood” I really mean I could remember how it was explained to me, rather than what it meant, and it was certainly not anything I could put into practice. I didn’t know how. I assumed other people could, but as chance would have it, I did not know any of them personally. Yet there was a time at the end of the ‘70s and in the early ‘80s when the canonical synthesis was held up as the answer to all the difficulties of managing data at the enterprise level. You did your modeling, took all the data models so produced, pushed them through canonical synthesis and somehow they all melded together. Obviously, if you did not get it to work, you were not doing it properly. Apparently, most people were not but were just too embarrassed to admit it. I have not heard the term used seriously since “The A-Team” was first broadcast on TV.


The information center. The information center (IC) was a strategic IT response to the advent of departmental computing in the second half of the 1980s. The PC revolution meant users had alternatives to the mainframe monopoly controlled by IT. Before long, IT departments came under considerable threat and felt forced to offer users new ideas. The IC was one of these. It was conceived of as a unit that would draw data from multiple sources, integrate it and deliver it in a variety of reporting formats. I personally saw one of these units built by an organization I was working for. Several full-time employees were dedicated to it, a good deal of infrastructure was put in place and the unit was rolled out with much fanfare. In other words, it was based on the time-honored principle of “if you build it, they will come.”


At the time, when I questioned what requirements the IC would fulfill, I was told to go and look at the industry literature. This proclaimed what a wonderful concept the IC was and had all kinds of detailed advice from newly minted IC industry experts. There were also many articles from individuals who were engaged in building an IC and who were very sure it was going to fill a huge void in their enterprise.


By about 1990, there were none left.


Fourth-generation languages (4GLs). Part of the appeal of hype can be stated in the form of a fallacious syllogism of traditional logic, as follows:

This hype can meet some requirements.

You have requirements.

Therefore, this hype can meet your requirements.


Luckily for hype, traditional logic has not been taught in schools for more than a century. That brings us to 4GLs. 4GLs did in fact do some things. But they did not, and could not, do everything. Typically, they solved rather simple use cases that in my opinion could have been handled just as easily by cutting and pasting code from a third-generation language (3GL). More complex use cases were incredibly difficult to implement in 4GLs; they were built only to let programmers deal quickly with simple requirements.


Whenever I asked what the difference between a 4GL and a 3GL was, I was told that a 4GL was “nonprocedural.” Yet I could never see how the code I wrote in a 4GL was really so different from what I wrote in a 3GL, so I could never understand what “nonprocedural” meant either. Maybe it was just me.
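For readers who never met a 4GL: what “nonprocedural” was usually supposed to mean is that you declared the result you wanted rather than the steps to compute it. A minimal sketch of that distinction in modern terms (hypothetical data, and not any actual 4GL syntax):

```python
# Hypothetical employee records, used only to illustrate the contrast.
employees = [
    {"name": "Ada", "dept": "IT", "salary": 52000},
    {"name": "Bob", "dept": "HR", "salary": 48000},
    {"name": "Cy",  "dept": "IT", "salary": 61000},
]

# "Procedural" (3GL-style): spell out the steps - loop, test, accumulate.
procedural_result = []
for emp in employees:
    if emp["dept"] == "IT":
        procedural_result.append(emp["name"])

# "Nonprocedural" (4GL-style): state what you want, not how to get it,
# much as a query would: SELECT name FROM employees WHERE dept = 'IT'.
declarative_result = [emp["name"] for emp in employees if emp["dept"] == "IT"]

print(procedural_result)   # ['Ada', 'Cy']
print(declarative_result)  # ['Ada', 'Cy']
```

Both forms produce the same answer, which may explain why the distinction never felt very sharp in practice.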


4GLs were intimately connected with James Martin’s famous 1982 book entitled Applications Development without Programmers, which had enormous influence at the time.


There is only space in this column to blow a little dust off a few index cards in the archives of hype. Marketing buzzwords like “plus,” “turbo” and “next generation” merit detailed attention. “Standards” are a rich area too, and the whole dot-com debacle would fill volumes.


Despite all this, there is no let-up in the manufacture of hype. By the time professionals find out they are being served Kool-Aid rather than Dom Perignon, things have moved on. Projects have ended - whether in success, failure or abandonment matters little because the promise of new ones always beckons on the horizon. Or perhaps technology has changed and we are again on the cusp of a paradigm shift. Sometimes there really is a paradigm shift, and we can cheerfully bid adieu to the hype of yesteryear with a good conscience.
