DCS Consultancy Services Limited
Quality & Efficiency Driven
Performance and Non-Functional Testing Documentation

We provide a comprehensive five-stage template testing solution for non-functional testing, with a supporting resource management workflow for the test preparation and test execution stages.

Delivery Documents: High Level Content
Risk Assessment
 
This document provides high-level detail of the changes to be made to the system and defines the associated impact on the business. Concerns from each of the departments (database, OS, security, networking, capacity planning etc.) are collated and an evaluation of the risk involved is made. Expected load is defined either from the current production load or from a capacity planning investigation. A complete catalogue of test scenarios is devised and mapped to cover the risks and concerns identified. Details are supplied of the intended test environment and a comparison is made with the production environment. Up to three test strategies are then proposed, each with a very high-level test plan based on estimated budget, resource requirements and number of test days, offering different levels of risk reduction.
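
As a sketch of how expected load might be derived from current production usage, the following Python fragment computes peak transactions per hour and distinct active users from a transaction log. The CSV layout ('timestamp' and 'user_id' columns) is assumed purely for illustration; real production logs will differ.

# Minimal sketch: deriving expected load from production usage data.
# Assumes a hypothetical CSV export of production transactions with
# 'timestamp' (ISO 8601) and 'user_id' columns; adapt to your own logs.
import csv
from collections import Counter
from datetime import datetime

def load_profile(path: str) -> None:
    per_hour = Counter()     # transactions per clock hour
    users_per_hour = {}      # distinct users seen in each clock hour
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            ts = datetime.fromisoformat(row["timestamp"])
            hour = ts.replace(minute=0, second=0, microsecond=0)
            per_hour[hour] += 1
            users_per_hour.setdefault(hour, set()).add(row["user_id"])
    peak_hour, peak_tph = per_hour.most_common(1)[0]
    peak_users = len(users_per_hour[peak_hour])
    print(f"Peak hour: {peak_hour}  transactions/hour: {peak_tph}  "
          f"distinct users in peak hour: {peak_users}")

Figures such as these feed directly into the load volumes defined in the test plan.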
 
Test Plan
 
Details found within the risk assessment define the test strategy to be implemented, whilst the risks and concerns are translated into measurable key process indicators to be captured or monitored during test scenario execution. From the test strategy a test schedule is designed, detailing expected execution dates and low-level test scenarios; the method of monitoring metrics is documented along with the resource requirements for the roles and responsibilities necessary to achieve test preparation completion and test execution. Resource requirements from each of the departments are made known to the test manager and project manager so that these resources can be agreed with each of the business departments.

Business transaction names and system processes are defined and then mapped to the key process indicators to ensure complete test coverage. The volumes for load are determined, such as the number of concurrent users, the number of transactions to perform, connections to the system, batch process file sizes and time limits, and queued messages. Initial and end conditions are considered for the complete set of processes. Several levels of success criteria are defined, covering overall scenario test execution, individual test script behaviour, key process indicator tolerance levels (see the sketch below) and test execution suspension criteria.

Data requirements are defined both for the system under test (such as initial database bulk size and populated test data) and for test script execution (such as determining whether test data is reusable and how it will be generated), and storage requirements are submitted. Finer details of the whole test environment, covering both the application's test environment build and the load testing architecture, are documented, including hardware and software requirements. This leads on to the test environment build requirements and the test execution requirements from each of the departments (network requirements, application connectivity, capacity planning engagement, database monitoring and restore etc.). A diary of events, with control gates, is defined for each project deliverable with a delivery target date. Risks and issues are continually updated throughout test plan development and are submitted to the appropriate project logs.
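
As an illustration of key process indicator tolerance levels used as success criteria, the sketch below defines per-transaction thresholds and checks measured values against them. The KPI names, targets and tolerances are hypothetical, not taken from any real test plan.

# Minimal sketch of key process indicator tolerance levels, assuming
# illustrative KPI names and thresholds (not taken from a real plan).
from dataclasses import dataclass

@dataclass
class Kpi:
    name: str
    target: float      # expected value, e.g. 95th percentile response time (s)
    tolerance: float   # allowed deviation before the KPI is marked failed

    def passes(self, measured: float) -> bool:
        return measured <= self.target + self.tolerance

# Hypothetical success criteria for two business transactions.
criteria = [
    Kpi("Login p95 response (s)", target=2.0, tolerance=0.5),
    Kpi("Order submit p95 response (s)", target=3.0, tolerance=1.0),
]

measured = {"Login p95 response (s)": 2.3, "Order submit p95 response (s)": 4.5}
for kpi in criteria:
    status = "PASS" if kpi.passes(measured[kpi.name]) else "FAIL"
    print(f"{kpi.name}: {measured[kpi.name]} -> {status}")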
 
Test Model
 
Some of the details located in the test plan are duplicated in the test model and extended to a level that enables a test team to repeatedly perform the same set of test scenarios. What will be monitored, who will be responsible and how it will be achieved is detailed for the test environment, covering both the application's test environment build and the load testing architecture. The load test scenarios document each business process with the number of virtual users and the number of transactions per hour to be performed, where these virtual users will be executed from, how they will be ramped up to full volume, when they will start to be executed, what the duration will be, how they will be ramped down, and what the runtime settings are, including any iteration wait time (see the sketch below). For each of the business processes and system processes there is a detailed list of the transaction names of the key process indicators that will be captured. Initial and end test execution conditions are defined to ensure consistency between test runs and to simplify test results analysis.

A pre-execution system checklist lists all the mandatory steps required as part of test preparation before each run of a test scenario, from the preparation of databases (refresh or restore) and the restarting of servers and services to the clearing down of logs and messages stored in folders. Alongside this, a pre-execution personnel checklist reiterates the responsibilities of resources before, during and after test execution and confirms test readiness. Post-execution test run checklist considerations are listed, such as the storage of results, analysis, metrics gathering, database statistics and error logs, and a storage location is defined for each test scenario's set of results.

A detailed test schedule states execution dates and the start and end times of each test scenario execution, and the pre-agreed resource requirements from each of the business departments are detailed with names and contact details. Risks and issues are continually updated throughout test model development and are submitted to the appropriate project logs.
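
The load profile described above (ramp-up to full volume, steady state, ramp-down, iteration wait times) can be sketched as a simple schedule function. All virtual user counts, durations and pacing figures here are illustrative assumptions.

# Minimal sketch of a load profile: ramp-up, steady state, ramp-down.
# Virtual user counts, durations and pacing are illustrative only.
def active_virtual_users(t_min: float, ramp_up=10, steady=60,
                         ramp_down=10, peak_vus=100) -> int:
    """Return how many virtual users should be running t_min minutes in."""
    if t_min < ramp_up:                          # linear ramp-up
        return round(peak_vus * t_min / ramp_up)
    if t_min < ramp_up + steady:                 # hold at full volume
        return peak_vus
    if t_min < ramp_up + steady + ramp_down:     # linear ramp-down
        remaining = ramp_up + steady + ramp_down - t_min
        return round(peak_vus * remaining / ramp_down)
    return 0

# Iteration pacing example: with 100 users each pausing 30 s between
# iterations, a 6 s business process takes 36 s per iteration, giving
# roughly 100 * 3600 / 36 = 10,000 transactions per hour at steady state.
for t in (0, 5, 10, 40, 75, 85):
    print(f"t={t:>3} min -> {active_virtual_users(t):>3} virtual users")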
 
Test Execution Report
 
This document is produced for every test run of a test scenario; its format varies depending on the success or failure of the run. For a successful test run the document outlines the high-level test execution statistics, a summary of the test execution events, and the results data in graphical or table format. The high-level test execution statistics detail the time and date of execution, the duration, the overall pass or fail status of the test execution, and verification of that status against the high-impact key process indicator statistics (such as business transactions completed, batch job completion and duration). The summary of test run events is the analysis of the test results data: an interpretation of how the system behaved, the conclusions drawn from that interpretation, and any recommendations identified, such as resolutions to irregularities in the results data. The results data includes all the key process indicator data captured during the whole of the test execution, displayed either in time-orientated graphs showing the values of business transactions from end user scripts and of system metrics (such as the CPU utilisation of the application servers), or in tables providing the status and successes of the batch jobs executed.
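
As a sketch of how captured key process indicator data might be reduced to the high-level statistics and overall pass or fail status, the fragment below computes mean and 95th percentile response times per business transaction and compares them with pass limits. The transaction names, samples and thresholds are invented for illustration.

# Minimal sketch: turning captured KPI samples into the high-level
# statistics reported for a test run. Sample data is illustrative.
from statistics import mean, quantiles

samples = {  # response times in seconds per business transaction
    "Login": [1.1, 1.4, 1.2, 2.9, 1.3, 1.5, 1.2, 1.6, 1.4, 1.3],
    "Order submit": [2.2, 2.5, 2.4, 2.8, 3.9, 2.6, 2.3, 2.7, 2.5, 2.4],
}
thresholds = {"Login": 2.5, "Order submit": 3.5}  # p95 pass limits (s)

overall_pass = True
for name, values in samples.items():
    p95 = quantiles(values, n=20)[18]   # 95th percentile cut point
    ok = p95 <= thresholds[name]
    overall_pass = overall_pass and ok
    print(f"{name}: mean={mean(values):.2f}s p95={p95:.2f}s "
          f"-> {'PASS' if ok else 'FAIL'}")
print("Overall test run status:", "PASS" if overall_pass else "FAIL")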
 
Test Completion Report
 
This report is delivered at the end of the test schedule and demonstrates the value of testing by comparing the original test objectives with the collated summary results of the test scenarios performed, and by reviewing the actions taken by the test team to resolve defects identified during each stage of testing. An executive summary covers the detailed analysis of performance; observations made during test scenario execution; conclusions; system issues addressed; details of the changes made to the system (such as architecture design, environment configuration, database design or SQL); how these modifications affected the overall behaviour, shown through the changes in key process indicator response values; and recommendations for change, with suggested further non-functional testing if necessary. This report provides confidence in the final delivery of the application and the architecture on which it is built.
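
A minimal sketch of the before-and-after comparison of key process indicator response values that the executive summary presents; the KPIs and figures here are purely illustrative.

# Minimal sketch: showing how system changes affected key process
# indicator response values, with illustrative before/after figures.
before = {"Login p95 (s)": 3.1, "Order submit p95 (s)": 4.8, "Batch run (min)": 55}
after = {"Login p95 (s)": 1.9, "Order submit p95 (s)": 2.6, "Batch run (min)": 38}

print(f"{'KPI':<22}{'Before':>8}{'After':>8}{'Change':>9}")
for name in before:
    delta = (after[name] - before[name]) / before[name] * 100
    print(f"{name:<22}{before[name]:>8}{after[name]:>8}{delta:>8.1f}%")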