I have recently been assigned to help bring under control a BI environment where chaos and anarchy prevail.
This is not the first time – in fact, it is the fifth – and every time it has been an environment based on SAS Institute software. This has nothing at all to do with the software itself; the chaos usually stems from a lack of version control and a missing distinction between development and production. I think one of the primary reasons SAS Institute software seems overrepresented in chaotic BI environments is that SAS Institute has, over the years, successfully sold to the business side, where most competitors have been more successful selling to IT.
BI environments anchored within the IT organization draw on a strong legacy of version control, change management and a well-defined path from development to production that is missing or at least very rudimentary in BI environments anchored within lines of business.
As chaos evolves, it becomes increasingly difficult to maintain and administer the environment. The cost of updating BI functionality gradually increases, and software upgrades become both time-consuming and costly. Consequently, many environments have become both functionally and technically outdated. On top of that, the new European data privacy regulation (GDPR) makes such an environment a huge liability for the organization.
You need to run two sets of activities to mitigate these consequences:
- A governance program to prevent future development from adding to the mess.
- A clean-up program to fix existing issues.
I am quite fond of Gartner’s three-tier BI model, which distinguishes between an environment for self-service BI reporting, an environment for ad-hoc analysis, and an environment – a lab – for data science activities. This model indicates the necessary governance structures and processes. You need:
- A controlled path to production for reports in the self-service BI environment.
- Quality-assured data for the self-service BI environment as well as the ad-hoc analysis environment.
- A controlled process to add data discovered in the data science lab to the quality-assured data in the other environments.
- A process to implement algorithms from the data science lab into programs in the self-service BI environment.
Related to this, you also need to define roles and responsibilities, as well as measures to monitor and control the governance system and keep it effective and efficient.
As one of the first steps in the clean-up process, I would recommend implementing a data virtualization layer – one of the tools in the reference architecture for a business data lake – as these tools typically let you log which users access which datasets and make it possible to isolate technical changes from the end users.
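The access-logging idea can be illustrated in grossly simplified form. This is a hypothetical sketch, not what any real virtualization product does internally: a wrapper that records which user read which dataset before delegating to the actual reader.

```python
import getpass
import logging

# Hypothetical illustration of the auditing a data virtualization layer
# provides: every dataset read is logged with the requesting user's name.
log = logging.getLogger("dataset_access")
log.setLevel(logging.INFO)

def audited_read(path, reader):
    """Read a dataset through `reader`, logging the user and dataset name."""
    log.info("user=%s dataset=%s", getpass.getuser(), path)
    return reader(path)
```

In a real deployment this bookkeeping lives in the virtualization tool itself, which also lets you repoint `path` to a new physical location without end users noticing.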
You will also need to map and analyze all datasets and data manipulation programs in the environment, document data lineage, document the content of each dataset, and compare it with the legal and business restrictions on data access and use.
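A first rough pass at lineage mapping can be automated. The sketch below assumes a directory tree of `.sas` program files and looks only for simple `DATA`/`SET`/`MERGE` statements; macro logic, `PROC SQL` and includes are ignored, so treat the output as a starting point for manual review, not as authoritative lineage.

```python
import os
import re

# Heuristic lineage scan for SAS-style programs: which datasets each
# program writes (DATA step targets) and reads (SET/MERGE sources).
DATA_RE = re.compile(r"\bdata\s+([\w.]+)", re.IGNORECASE)
READ_RE = re.compile(r"\b(?:set|merge)\s+([\w.]+)", re.IGNORECASE)

def scan_program(source):
    """Return (writes, reads): dataset names found in one program's source."""
    return DATA_RE.findall(source), READ_RE.findall(source)

def scan_tree(root):
    """Walk a directory of .sas files and build {program: (writes, reads)}."""
    lineage = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            if name.lower().endswith(".sas"):
                path = os.path.join(dirpath, name)
                with open(path, errors="replace") as fh:
                    lineage[path] = scan_program(fh.read())
    return lineage
```

Chaining the writes of one program to the reads of another exposes exactly the long dependency chains mentioned below.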
Among the things you will be looking for are old data manipulation programs that are no longer used, long chains of data manipulation programs operating on each other’s output, and redundant copies of datasets with no or limited access control. One mechanism that can come in handy is the operating system’s “last access” time stamp – though in some environments these time stamps are useless because backup routines update them…
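The last-access check can be sketched as a small script. It assumes datasets are stored as files (here with a `.sas7bdat` suffix) and, per the caveat above, that `st_atime` has not been clobbered by backup jobs or `noatime` mounts:

```python
import os
import time

def stale_files(root, days=365, suffix=".sas7bdat"):
    """Yield (path, days_since_access) for dataset files not read recently.

    Relies on the filesystem's last-access time (st_atime), which is only
    meaningful if backups and mount options have not disturbed it.
    """
    cutoff = time.time() - days * 86400
    for dirpath, _, files in os.walk(root):
        for name in files:
            if name.lower().endswith(suffix):
                path = os.path.join(dirpath, name)
                atime = os.stat(path).st_atime
                if atime < cutoff:
                    yield path, int((time.time() - atime) // 86400)
```

Candidates it flags still need human confirmation before deletion – a dataset read once a year for regulatory reporting will look just as stale as a truly dead one.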
Bad Examples, or War Stories
I have seen several environments where datasets containing sensitive customer data were sitting on laptop hard drives with no encryption and no backup mechanism.
I have seen people create a data manipulation program in a development environment that reads data from the production environment and writes modified or categorized data back into production – because it is easier to update the program in the development environment.
I have seen a case where a user contacted the service desk because the test server was down – he had not been notified of the shutdown because he had never logged on to that server under his own account: he was using another person’s login!
I see these examples as the result of business-driven “agile” processes with no rules – often the consequence of an IT organization being too rigid and therefore bypassed.
Don’t exclude IT if they are rigid – help IT become agile.
There are often very good reasons for IT to be rigid, and those reasons often come from the business itself, in the form of harsh security requirements, low budgets or …
(About the author: Erik Haahr is an IT governance and enterprise architecture consultant with Sogeti Labs. This post originally appeared on his Sogeti blog, which can be viewed here)