Since my early IT days, I've been a part of developing custom business intelligence and customer relationship management applications. I love the process of bringing to life an idea or strategy hatched by a marketer or strategist to help the company acquire new customers, retain those planning to leave, or produce reports based on a single version of the truth. Those ideas need to be translated into actionable requirements, the design and development have to meet those requirements, and the deployment needs to be well planned for a smooth transition and user adoption. Every piece of the project lifecycle is important, but perhaps the most crucial is the testing methodology. Testing covers many areas (unit, integration, system, security, regression, performance/stress, acceptance and others as the project dictates), and all are paramount. In my experience, unit, system and acceptance testing are the bare minimum required for any application development effort.
The goal of custom application system testing should be to ensure that the work units or modules identified during unit testing interact with one another as designed. While unit testing verifies that each component works independently, system testing ensures that the interactions between those units have no unintended consequences. Specifically, end-to-end business functionality, including both front-end and back-end components, is tested. Though sometimes broken out separately, the system test plan can include security testing and performance testing as well; these are simply additional test cases in additional sections. The goal for each component should be to define quantifiable test cases that cover all interrelated functionality of the application.
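To make the unit-versus-system distinction concrete, here is a minimal sketch in Python. The two modules and all names are invented for illustration; each module would have passed its own unit tests in isolation, while the system test exercises them together, end to end:

```python
# Hypothetical data-entry component: appends a sale to a shared ledger.
def record_sale(ledger, customer, amount):
    ledger.append({"customer": customer, "amount": amount})

# Hypothetical reporting component: totals revenue per customer.
def revenue_report(ledger):
    totals = {}
    for sale in ledger:
        totals[sale["customer"]] = totals.get(sale["customer"], 0) + sale["amount"]
    return totals

# System test: the interaction between entry and reporting is under test,
# not either unit on its own.
def test_entry_feeds_report():
    ledger = []
    record_sale(ledger, "Acme", 100)
    record_sale(ledger, "Acme", 50)
    record_sale(ledger, "Globex", 75)
    assert revenue_report(ledger) == {"Acme": 150, "Globex": 75}

test_entry_feeds_report()
```

Either function could pass its unit tests and still break the pair, for example if the report expected a field the entry module stopped writing; that gap is exactly what the system test is there to catch.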
The system test approach starts with a plan and test cases. The plan should include sections covering all application components, and those sections should include test cases that trace back to each and every functional requirement. This assumes that all functional requirements have been written to be testable, which should be the case. Your system test plan should not be a redo of the unit test; in many cases that adds too much time to system testing and clouds the focus on the larger system. The system test plan should be written around an overview of the technical approach, not coding specifics. For reports, that might mean watching a trend over simulated time or comparing different reports to verify consistency. The key is that individual pieces of code logic are not the focus; the function of the overall system as a whole is.
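The tracing of test cases back to requirements can itself be automated. The sketch below is a hypothetical traceability check (all requirement and test-case IDs are invented) that flags any functional requirement with no covering system test case:

```python
# Hypothetical functional requirements, keyed by ID.
requirements = {
    "FR-01": "Load daily sales feed",
    "FR-02": "Render customer revenue report",
    "FR-03": "Flag at-risk customers",
}

# Hypothetical system test cases, each listing the requirements it covers.
test_cases = [
    {"id": "ST-01", "covers": ["FR-01"]},
    {"id": "ST-02", "covers": ["FR-01", "FR-02"]},
]

# Any requirement not covered by at least one case is a gap in the plan.
covered = {fr for case in test_cases for fr in case["covers"]}
uncovered = sorted(set(requirements) - covered)
print("Uncovered requirements:", uncovered)  # -> ['FR-03']
```

Running a check like this before each test cycle keeps the plan honest: new requirements cannot slip in without a corresponding test case.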
System test execution takes many forms, from packaged applications designed specifically for testing, recording and notifying, to the old standby of Excel and email. The goal and approach are the keys, but execution pulls it all together and can make the difference between success and acceptance-test disappointment. From the start, develop your execution plan so that you won't need to reinvent the wheel each time you run a system test for your application. Create scripts, or use a tool, to verify that you have all the data and scenarios needed to perform every test. If you don't have that data, develop a reusable process to mock it up at the point of data entry or in the source system, rather than directly in your application's data. Create a process that will execute the scripts in batch if possible, and keep a record of historic results. Write a process document that explains how to run the system test automation, and include it in the system test plan. These steps will help ensure a robust and thorough regression test as well as validation of new functionality.
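A batch runner with a history log need not be elaborate. The following is one possible sketch, with all names and the CSV history format invented for the example: it runs each test function, records pass/fail, and appends a timestamped row per result so past runs can be compared during regression testing:

```python
import csv
import datetime

# Hypothetical batch runner: execute each (name, function) pair and
# append a timestamped outcome to a CSV history file.
def run_batch(tests, history_path="system_test_history.csv"):
    results = []
    for name, test_fn in tests:
        try:
            test_fn()
            outcome = "PASS"
        except AssertionError as exc:
            outcome = f"FAIL: {exc}"
        results.append((name, outcome))
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    with open(history_path, "a", newline="") as fh:
        writer = csv.writer(fh)
        for name, outcome in results:
            writer.writerow([stamp, name, outcome])
    return results

# Two toy tests to demonstrate both outcomes.
def totals_match():
    assert 100 + 50 == 150

def report_nonempty():
    assert False, "report was empty"

outcomes = run_batch([("totals_match", totals_match),
                      ("report_nonempty", report_nonempty)])
print(outcomes)
```

Because every run appends to the same history file, a quick scan shows when a previously passing case started failing, which is exactly the regression signal the process document should tell testers to look for.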
System testing is the development team's conscience. The focus is on the entire application meeting the overall technical requirements. Independently verify that all the pieces fit together, and find the holes before users get their first look. First impressions are crucial to early user adoption. Users seem to forgive and forget delays, and agree to scope cuts, much more quickly than they forgive (and maybe never forget) a bug-ridden application that misses service level agreements and causes problems. A good system test can make this difference, so don't cut time here when the project timeline is squeezed. I've never seen such a "shortcut" lead to a more successful, earlier deployment. In fact, it usually takes longer once the issues unfold in acceptance testing. Follow the process, be complete, and focus on reusability and automation for a system test process that will help make a successful application.