Regardless of your industry or business model, the next couple of years will continue to be financially tight. The global economy has had and will have an impact on every decision, and the pressure is on to cut costs. Information systems security costs are difficult enough to justify in the boom times, and now the task may seem impossible. Given the environment we are faced with, we must become much smarter about the way we go about our business.

As information systems managers, we need to think about how to reduce information security costs while ensuring that we maintain sufficient controls against risk. We are all doing the obvious things like getting better at haggling with vendors and looking for less expensive product and support alternatives. And senior management applies pressure on any business unit that does not show a return on investment. 

As a chief information security officer (CISO), I have experienced first-hand the need to see the world through senior management's eyes and offer solutions with which everyone can feel comfortable. Motivated by the business bottom line, senior management focuses on productivity first, compliance second and security much later. Calculating ROI for information security is difficult because security is about cost avoidance, not return. Yet the damage caused by the absence of effective controls is far greater than the cost of implementing them.

Medidata Solutions embarked on an ambitious trek in January 2011 to address this issue head-on. Our goal was to bring senior management and security engineers together to improve the effectiveness of our information security activities, while maintaining costs at our current levels. Sounds impossible, doesn’t it?

What we discovered was a technique to improve our results and lower our costs. Consequently, we were able to significantly improve the ROI of our information security activities. We increased the security awareness of our internal staff by 1,150 percent, increased the number of vulnerabilities identified prior to software release by 25 percent and, perhaps most amazingly, lowered our information security costs by 50 percent.

Step 1: Solidify an Information Security Policy

The bedrock of any security architecture is always its policies. We selected the vetted, open ISO 27000 framework, which offered several key advantages. It covered the essential aspects of policy required by a global company, and it was already familiar to our clients, which saved critical time during regulatory audits. While the ISO standard does not provide the substance of the policy, it does provide sufficient guidance for us to flesh out the details.

Step 2: Dedicate Appropriate Resources

Once the policy was in place, the next step was to dedicate resources and instantiate the policies in hardware, software, procedures and training. We wanted to avoid falling victim to what Bruce Schneier calls "security theater": a situation where a company develops policy to achieve compliance but never makes it a reality; it is all for show.

One of our key policy statements included the requirement to perform application penetration tests. The results of these tests were provided to our customers to give them a level of assurance that our products were free from vulnerabilities such as cross-site scripting and SQL injection. Because we had no internal testing lab, third-party security testing firms were our only resource.
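To make concrete what these tests look for, here is a minimal sketch of the kind of probe a penetration test automates. The endpoint, parameter name and payloads are illustrative stand-ins, not our actual test suite; a real engagement covers far more payloads and inspects responses much more carefully.

```python
# Minimal sketch of an automated probe for SQL injection and reflected XSS.
# The target URL and parameter name below are hypothetical examples.
import requests

TARGET = "https://staging.example.com/search"  # hypothetical test endpoint

# Classic probe payloads.
SQLI_PAYLOAD = "' OR '1'='1"
XSS_PAYLOAD = "<script>alert(1)</script>"

# Database error strings that often leak through when input is not sanitized.
SQL_ERRORS = ("syntax error", "sql syntax", "odbc", "ora-", "sqlstate")

def probe(param_value: str) -> str:
    """Send one payload in the query string and return the response body."""
    resp = requests.get(TARGET, params={"q": param_value}, timeout=10)
    return resp.text

if __name__ == "__main__":
    body = probe(SQLI_PAYLOAD)
    if any(err in body.lower() for err in SQL_ERRORS):
        print("Possible SQL injection: database error leaked into response")

    body = probe(XSS_PAYLOAD)
    if XSS_PAYLOAD in body:
        print("Possible reflected XSS: payload echoed back unescaped")
```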

Our third-party application penetration tests were very effective in terms of discovering known vulnerabilities. They were so effective that our software engineers began to test and repair the issues, and then retest to verify the fix. The policy required software to be tested prior to each major release, but I soon discovered that tests were being performed prior to minor releases as well. That was music to this CISO’s ears. However, with an average cost of $12,000 per test, the accumulated costs started to exceed the corporate testing budget. The demand for testing had exceeded our capacity to purchase the needed resources.

Step 3: Consider Organic Resources

The cost of third-party testing was high and growing higher. Although the data was valuable from an information security standpoint, it did not justify the increased costs. In addition, we noticed that the third-party testing was missing a few important vulnerabilities unique to our applications.

During a meeting on how we could get our software engineers to "think security" as they developed their projects, the senior vice president of engineering and the vice president of information security decided to hold a four-hour exercise in which each software engineer would attempt to hack into another engineer's project. The winners, those who were able to break into someone else's project, would receive a prize. We held the first exercise on a Friday and named it "Black Hat Friday." The results were astounding. Many of the findings were identical to those discovered by our third-party testing, but other findings went much deeper: the engineers documented previously undiscovered, idiosyncratic vulnerabilities.

When we analyzed the results, we confirmed that we had achieved our objective: simply by actively involving more engineers, we increased security awareness by more than 1,000 percent. Beyond that, the actual number of vulnerabilities discovered increased by 25 percent, and the variety of vulnerability types increased by 35 percent. The final analysis compared the vulnerabilities unique to the third-party tests with those unique to our staff, excluding the vulnerabilities both discovered. By that measure, our internal staff scored 600 percent better than our testing vendors.
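That final comparison is simple set arithmetic. The sketch below uses made-up finding identifiers to show the calculation; the real data sets were, of course, much larger.

```python
# Sketch of the unique-findings comparison described above, using
# illustrative finding identifiers (not our actual results).
internal = {"XSS-01", "SQLI-03", "AUTHZ-07", "IDIO-12", "IDIO-15", "CSRF-02"}
vendor   = {"XSS-01", "SQLI-03", "AUTHZ-07", "INFO-04"}

shared          = internal & vendor   # found by both, excluded from the comparison
unique_internal = internal - vendor   # found only by our engineers
unique_vendor   = vendor - internal   # found only by the third party

print(f"shared: {len(shared)}, internal-only: {len(unique_internal)}, "
      f"vendor-only: {len(unique_vendor)}")
# A 600 percent advantage means internal-only findings outnumbered
# vendor-only findings six to one in the real data.
```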

Step 4: Expand Internal Capabilities

The next step in our experiment involved the procurement of our own automated testing tools and the establishment of a testing lab that was accessible to whoever needed it. Over the course of the first three months, we researched the top application penetration testing software tools and concluded that the commercial tools were prohibitively expensive. However, after additional research and testing, we discovered several promising open-source tools. We set up a side-by-side test and compared ease of use, ability to operate from a server and ability to detect known vulnerabilities. Our initial goal was to pick the best single tool, but we soon realized that by using several together we could create a testing environment that caught as many vulnerabilities as the commercial products.

Our selection included w3af, WebScarab and skipfish. Each had strengths and weaknesses, but in combination they provided the capabilities we required. The total cost amounted to finding some spare servers, and we needed about one day to get everything operational.
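As a rough illustration of how lightweight the setup is, here is a sketch of a server-side script that chains two of the scanners. The target URL and script file are hypothetical, the exact command-line flags vary by tool version, and WebScarab is an interactive proxy that is driven by hand rather than scripted, so it is omitted here.

```python
# Sketch: run the open-source scanners back to back from one lab server.
# Paths, flags and the target are illustrative, not a canonical recipe.
import subprocess
from datetime import date

TARGET = "https://staging.example.com/"        # hypothetical test target
OUT = f"scan-{date.today().isoformat()}"

# skipfish: fast crawler/scanner; writes its report under the -o directory.
subprocess.run(["skipfish", "-o", f"{OUT}-skipfish", TARGET], check=True)

# w3af: broader plugin-based scanner, driven by a pre-written console
# script (a hypothetical file naming the audit plugins and start URL).
subprocess.run(["w3af_console", "-s", "audit_profile.w3af"], check=True)

print(f"skipfish report written under {OUT}-skipfish/")
```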

Several months later we decided to expand again, and we are now building a complete testing environment where we can bring up all of our products, individually or together, to further stress the systems at their core as well as at their seams. This testing environment serves two purposes:
1. It is an improvement in the way we verify the security of our software.
2. It provides a safe environment to execute what-if scenarios. 

In this environment we are exploring what happens when unexpected events occur, such as a hard drive malfunction or the sudden failure of a single component. The results from this latest addition are still coming in, but we are confident that it will far exceed our initial estimates.
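One of these what-if scenarios, sketched below under assumed names (the service, health endpoint and systemd-managed deployment are hypothetical stand-ins for our own products), is simply to kill a component and watch whether the rest of the stack degrades safely.

```python
# Sketch of one what-if scenario: stop a component, observe the system,
# then restore it. Service name and health URL are hypothetical.
import subprocess
import time
import requests

HEALTH_URL = "https://lab.example.com/health"  # hypothetical health endpoint
VICTIM_SERVICE = "report-worker"               # hypothetical component

def healthy() -> bool:
    """Return True if the overall system still reports healthy."""
    try:
        return requests.get(HEALTH_URL, timeout=5).status_code == 200
    except requests.RequestException:
        return False

# Simulate an unexpected component failure.
subprocess.run(["systemctl", "stop", VICTIM_SERVICE], check=True)

# Poll for five minutes and record how the system behaves without it.
for _ in range(30):
    print("healthy" if healthy() else "degraded")
    time.sleep(10)

# Restore the component when the scenario ends.
subprocess.run(["systemctl", "start", VICTIM_SERVICE], check=True)
```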

Step 5: Count the Cost

At this point the collected data seemed compelling enough to warrant a total switch from vendors to an internal program. But the key factor that initiated the entire exercise was the analysis of cost.

We calculated the cost of 25 engineers spending four hours each on each of 10 penetration tests, at an average yearly cost per person of $135,000. Compared with the average third-party cost of $12,000 per test, our cost was roughly 50 percent less.
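The arithmetic works out as follows, assuming a standard 2,080-hour work year (an assumption; the salary, headcount, hours and test count are the figures given above):

```python
# Back-of-the-envelope cost comparison: internal vs. third-party testing.
SALARY = 135_000          # average yearly cost per engineer, dollars
HOURS_PER_YEAR = 2_080    # assumed full-time hours per year
ENGINEERS = 25
HOURS_EACH = 4            # hours per engineer per test
TESTS = 10
VENDOR_COST_PER_TEST = 12_000

hourly_rate = SALARY / HOURS_PER_YEAR                      # ~$65/hour
internal_per_test = ENGINEERS * HOURS_EACH * hourly_rate   # ~$6,500
internal_total = internal_per_test * TESTS                 # ~$65,000
vendor_total = VENDOR_COST_PER_TEST * TESTS                # $120,000

savings = 1 - internal_total / vendor_total
print(f"internal ~ ${internal_total:,.0f}, vendor = ${vendor_total:,.0f}, "
      f"savings ~ {savings:.0%}")   # about 46 percent, i.e. roughly half
```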

In terms of ROI, we had a win-win scenario. Our internal efforts produced significantly better discovery of vulnerabilities at a lower cost. We clearly had a better testing environment and a superior set of results. But the fact remained that our staff missed some vulnerabilities, and in reality we could not dedicate engineers to routine penetration testing, because the lost development time was an unknown that could not easily be factored into the cost.

Step 6: Hybrids are a Good Thing

In the end we decided that the best approach was a hybrid: we use our own organic testing environment to test and retest as needed, and then have a third-party firm perform one final test as a certification of SaaS-worthiness. The costs were still considerably lower. Although not the 50 percent reduction we saw from the purely organic testing environment, they were a respectable 35 to 40 percent lower.
