Software is like a house: it must be properly maintained, or a bit of peeling wallpaper may reveal that the foundation underneath isn’t as stable as you thought.

Nowhere is this more true than in big data environments, which are inherently distributed systems. The probability of issues such as partial failures and unpredictable latencies only increases with scale, and constructing a massively scalable big data system is an immense software architecture challenge for software engineers and program managers.

So while teams may be focused on accelerating the journey from data to decision, another piece of the analysis puzzle needs attention: a quality analysis program that looks at the underlying source code. Without it, big data management can end up as a big data disaster.

As a preemptive measure, IT managers would do well to put a quarterly cleanup on their checklist, especially if they don’t want to spend countless hours answering help desk tickets or handling customer complaints. The two main “appliances” to check are reliability and technical debt.

Chore #1: Check Your Reliability Status

U.S. companies lose up to $26 billion in revenue to downtime every year. That’s a big number, even for big business, and implementing routine application benchmarking tests can eliminate much of it.

Consider the recent missteps of Washington State’s Department of Corrections (DOC), which accidentally released more than 3,200 prisoners over a span of 12 years because of an unaddressed software glitch.

Considering the US keeps detailed records on its 2.2 million incarcerated individuals, that’s a lot of data to manage, and this was just one state. In this case, some officials knew the early releases were happening but failed to take practical steps to eliminate the core problem. The result was a complete breakdown of the system and the release of potentially dangerous criminals back into society.

While it does take some elbow grease, IT managers must take appropriate steps to measure not just the data output of their systems, but also the structural quality of those systems. Uncovering vulnerabilities in critical systems exposes, before they take hold, issues that could disrupt services, hurt customer satisfaction and damage the company’s brand.

By establishing an application reliability benchmark, managers can see whether systems have stability and data integrity issues before they’re deployed into production, which means secure data, 24/7 uptime and happy customers.
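
What might such a benchmark look like in practice? Purely as an illustration, the sketch below assumes a team tracks measured uptime plus counts of critical structural violations and data integrity issues for each release candidate, and gates deployment on agreed thresholds. The metric names and threshold values are hypothetical, not drawn from any particular tool.

    # Hypothetical pre-release reliability gate: compare a release candidate's
    # measured figures against the benchmark the team has agreed on.
    RELIABILITY_BENCHMARK = {
        "min_uptime_pct": 99.9,          # availability target over the last quarter
        "max_critical_violations": 0,    # structural defects rated critical
        "max_data_integrity_issues": 0,  # e.g., unhandled transaction failures
    }

    def passes_reliability_benchmark(measured, benchmark=RELIABILITY_BENCHMARK):
        """Return True only if the candidate meets every benchmark value."""
        return (
            measured["uptime_pct"] >= benchmark["min_uptime_pct"]
            and measured["critical_violations"] <= benchmark["max_critical_violations"]
            and measured["data_integrity_issues"] <= benchmark["max_data_integrity_issues"]
        )

    if __name__ == "__main__":
        # Figures for one release candidate (made up for the example).
        candidate = {"uptime_pct": 99.95, "critical_violations": 2, "data_integrity_issues": 0}
        verdict = "ship" if passes_reliability_benchmark(candidate) else "hold for remediation"
        print("Release decision:", verdict)

In this example the candidate fails because two critical violations remain, so the release is held for remediation before it can reach production.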

Chore #2: Measure Your Technical Debt

Technical debt refers to the accumulated costs and effort required to fix problems that remain in code after an application has been released. CAST Research Labs estimates the technical debt of an average-sized application is $1 million, and according to Deloitte, more CIOs are turning their attention to technical debt to build business cases for core renewal projects, prevent business disruption and prioritize maintenance work.

Despite the heightened awareness of technical debt, many businesses still struggle to estimate it correctly and make the business case to decision makers. When it comes to big data, technical debt can be exacerbated by the enthusiasm and urgency that often come with trying to make sense of disparate information sources.

To overcome this, IT managers should task development teams with configuring structural quality tools to measure the cost of remediating and improving core systems. This includes both the code quality and the structural quality of company software.

Structural quality measures can identify code and architecture defects that pose risk to the business when not fixed prior to a system release. These represent carry-forward defects whose remediation will require effort charged to future releases. Code quality addresses coding practices that can sometimes make software more complex, harder to understand and more difficult to modify – a major source of technical debt.
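
As a rough illustration of how such measurements can be turned into a dollar figure, technical debt is commonly estimated by multiplying the number of open structural defects at each severity level by an average remediation effort and a blended hourly rate. The defect counts, effort figures and rate below are hypothetical, and the calculation is a simplification rather than CAST’s actual model.

    # Hypothetical technical debt estimate: open defects by severity, an assumed
    # average effort to remediate and validate each one, and a blended hourly rate.
    HOURLY_RATE = 75  # blended cost per developer hour (example figure)

    # severity: (open defect count, average hours to fix and validate one defect)
    OPEN_DEFECTS = {
        "critical": (40, 8.0),
        "high": (150, 4.0),
        "medium": (600, 1.5),
    }

    def technical_debt_estimate(defects, hourly_rate):
        """Sum remediation cost across severities: count * hours per fix * rate."""
        return sum(count * hours * hourly_rate for count, hours in defects.values())

    if __name__ == "__main__":
        debt = technical_debt_estimate(OPEN_DEFECTS, HOURLY_RATE)
        print(f"Estimated technical debt: ${debt:,.0f}")  # $136,500 with the figures above

Even this crude arithmetic makes the trade-off visible: every defect carried forward into a release adds hours, and therefore dollars, to the bill that comes due later.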

Equipped with real-world estimates of technical debt — including the remediation and validation of code defects — businesses can accurately plan, allocate resources and create realistic budgets for future projects.

As business success becomes more dependent on the software house that IT builds, giving systems a regular cleanup will only grow in importance. Today’s systems live in a volume-driven world, and the code itself is just one more piece of big data that needs to be scrubbed, in addition to being monitored and analyzed.

Even for those of us resigned to living in a pigsty, now is always a good time to get out our mops, brooms and hammers.

(About the author: Lev Lesokhin is executive vice president of strategy and analytics at CAST Research Labs)
