Thursday, August 18, 2011

Performance testing - brief introduction

What is the difference between baselining and benchmarking?

Creating a baseline is the process of running a set of tests to capture performance metric data for the purpose of evaluating the effectiveness of subsequent performance-improving changes to the system or application. A critical aspect of a baseline is that all characteristics and configuration options, except those specifically being varied for comparison, must remain invariant. Once a part of the system that is not intentionally being varied for comparison to the baseline is changed, the baseline measurement is no longer a valid basis for comparison.

Benchmarking is the process of comparing your system's performance against a baseline that you have created internally or against an industry standard endorsed by another organization.
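
To make the distinction concrete, here is a minimal sketch in Python; the endpoint URL and file name are hypothetical, and a real project would normally use a dedicated load-testing tool. It captures a simple single-user response-time baseline to disk, and a later run can then be benchmarked against that stored baseline.

    import json
    import statistics
    import time
    import urllib.request

    URL = "http://localhost:8080/login"   # hypothetical endpoint under test
    BASELINE_FILE = "baseline.json"

    def measure(iterations=50):
        """Time a series of identical requests and summarize the samples."""
        samples = []
        for _ in range(iterations):
            start = time.perf_counter()
            urllib.request.urlopen(URL).read()
            samples.append(time.perf_counter() - start)
        return {"mean": statistics.mean(samples),
                "p95": statistics.quantiles(samples, n=100)[94]}

    def save_baseline():
        """Capture the baseline once, with everything else held constant."""
        with open(BASELINE_FILE, "w") as f:
            json.dump(measure(), f)

    def compare_to_baseline():
        """Benchmark a later run against the stored baseline."""
        with open(BASELINE_FILE) as f:
            baseline = json.load(f)
        current = measure()
        for metric in ("mean", "p95"):
            delta = (current[metric] - baseline[metric]) / baseline[metric] * 100
            print(f"{metric}: baseline={baseline[metric]:.3f}s "
                  f"current={current[metric]:.3f}s ({delta:+.1f}%)")

Calling save_baseline() before a change and compare_to_baseline() after it mirrors the definitions above: the comparison is only meaningful if nothing else in the system or environment has changed between the two runs.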


A Mental Model for Performance Testing

Aspect: Evaluate System

The Evaluate System aspect of the performance testing project is the initial, detail-oriented phase. This aspect can be thought of as the evaluation of the project and system context. The intent is to collect information about the project as a whole, the functions of the system, the expected user activities, the system architecture and any other details that are helpful in creating a Performance Testing Strategy specific to the needs of the particular project. Starting with this information, the performance requirements are collected and/or determined, then validated with project stakeholders. Once the performance requirements (the system's performance acceptance criteria) are validated, a detailed usage model is developed and validated. The usage model depicts how many users perform which system activities, and how often. The final activity in this aspect is to determine the potential risks of the performance testing project based on the overall evaluation.



Determine System Functions

During this activity, the performance tester meets with stakeholders to determine what the overall purpose of the system or application is. Before one can determine how best to test or build a system, one must first completely understand what the system is intended to do. These functions may be either user initiated or scheduled (batch) processes.

Determine User Activities

During this activity, the performance tester meets with (typically non-technical) stakeholders to determine what activities users will perform on the system. At this point, all types of users and activities need to be identified. It is also important to identify the frequency with which users will perform each activity, and which activities are the most performance-critical.

Determine System Architecture

During this activity, the performance tester meets with (typically technical) stakeholders to determine both the physical and programmatic architecture of the system or application for both the production and test environments. In designing an effective test and engineering strategy, the performance tester needs to be aware of which components or tiers of the system communicate with one another and how. It is also valuable to understand the basic structure of the code. Knowing these items early in the project allows the performance tester to identify high-risk areas early.

Determine Acceptance Criteria

During this activity, the performance tester will generally hold separate workshops with each group of stakeholders (managers, users, programmers, etc.) to determine what they believe acceptable performance for the completed system should be. Based on the feedback from each group, the performance tester creates a draft document of the tentative acceptance criteria for the decision makers to validate.

Validate Acceptance Criteria

The Validate Acceptance Criteria activity begins with the performance tester presenting the tentative acceptance criteria, as collected from the stakeholders, to the project decision makers. Often the decision makers will have questions or opinions that differ from those of other stakeholders. When this occurs, the performance tester should serve as a facilitator in a group workshop of all stakeholders to reach a consensus on the acceptance criteria.

Determine Usage Model

During this activity, the performance tester graphically depicts the usage model to be used to performance test the system. This model contains both user and system activities, and shows how often these activities occur, both over time and as a relative percentage of total volume. A good target is to model the 20% of user activity that accounts for 80% of the total volume, all expected system activity, all performance-critical activities, and all areas identified as high risk for causing poor performance.
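
One way to make the usage model concrete is to encode it as data the test scripts can read. The sketch below is Python with invented activity names and percentages, purely for illustration; it validates that the shares add up to 100% and selects activities in proportion to their share of total volume.

    import random

    # Hypothetical usage model: activity -> percentage of total volume.
    USAGE_MODEL = {
        "browse_catalog": 50,
        "search": 25,
        "view_item": 15,
        "checkout": 7,
        "nightly_report": 3,   # scheduled/system activity modeled explicitly
    }

    def validate(model):
        """The relative percentages should account for all modeled volume."""
        total = sum(model.values())
        assert total == 100, f"usage model sums to {total}%, expected 100%"

    def pick_activity(model):
        """Select the next activity, weighted by its share of total volume."""
        activities = list(model)
        weights = [model[a] for a in activities]
        return random.choices(activities, weights=weights, k=1)[0]

    validate(USAGE_MODEL)
    print(pick_activity(USAGE_MODEL))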

Validate Usage Model

The Validate Usage Model activity begins with presenting the tentative usage model to the project stakeholders for review. Often the stakeholders will offer new or revised information after reviewing the usage model, typically because this is the first time most of them will have seen a graphical depiction of all of the system functions. It is important not to move past this activity until all stakeholders are comfortable with the usage model. Multiple usage models may be created if the system is used significantly differently at different times.

Determine Potential Risks

During this activity, the performance tester identifies all of the known risk areas of the project. Typical areas of risk include unrealistic acceptance criteria, an unsuitable test environment, the test schedule and test resources. Identifying these risks early significantly increases the odds of success for the project by allowing time for risk mitigation activities.


Aspect: Develop Test Assets

The Develop Test Assets aspect of the performance testing lifecycle is where three of the four primary project deliverables are created. The intent of this aspect is to prepare completely for test execution and analysis. These activities are initially conducted at the completion of the Evaluate System aspect. Each time the Identify Exploratory Tests aspect is completed, these activities are revisited to ensure they either remain accurate or are updated to reflect the need for exploratory testing. The Develop Risk Mitigation Plan activity is reviewed continually throughout the project to ensure initially identified risks are being properly managed, as well as to document any risks that present themselves during the remainder of the project.



Develop Risk Mitigation Plan

During this activity, the performance tester works with the appropriate stakeholders to develop risk mitigation strategies for each risk identified in the Determine Potential Risks activity. This activity is revisited regularly throughout the project to ensure that risks continue to be managed properly, and to add additional risks as they arise. This activity should be shared with the appropriate stakeholders of the system during each periodic project status meeting to increase the odds of project success.

Develop Engineering Strategy

During this activity, the performance tester develops a detailed performance test strategy based on the information gathered in all previous activities. This becomes the roadmap to be followed throughout the remainder of the project. The strategy often takes the form of a formal document that includes descriptions of the test and production architectures, the system usage model, the performance acceptance criteria, the project schedule and descriptions of the automated scripts. This document may be modified to incorporate unplanned exploratory tests, or updated to reflect changes in schedule, requirements, etc.

Develop Test Data

During this activity, the performance tester works closely with application subject matter experts and database administrators to determine, collect and/or create test data that represents real users. Some applications require little more than a random number generator, while others require significant data analysis because the tasks being modeled are highly data-dependent. It is safer to assume this will be a complex task until proven otherwise. Depending on the context of the project, the performance tester may be significantly involved in creating test data, or only in identifying the test data to be created.
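
For the simple end of that spectrum, a short generator may be all that is needed. The following Python sketch writes a CSV of synthetic users for the load scripts to consume; the field names and values are hypothetical, and data-dependent applications will need far more careful analysis than this.

    import csv
    import random
    import string

    def random_username(length=8):
        return "".join(random.choices(string.ascii_lowercase, k=length))

    def generate_test_users(path, count=1000):
        """Write unique, synthetic user records for the test scripts to use."""
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["username", "password", "account_id"])
            for i in range(count):
                # Appending the index keeps usernames unique across the file.
                writer.writerow([f"{random_username()}{i}", "Perf!Test123", 100000 + i])

    generate_test_users("test_users.csv")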

Develop Test Scripts

During this activity, the performance tester develops the automated test scripts to be used with the load generation tool. The scripts are created to strictly represent the usage model and all other details included in the performance engineering strategy document. The scripting process varies significantly from tool to tool, but it is important to match the scripts as closely as possible to the defined usage model. Script development should be treated like any other software development process, applying appropriate practices.
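
Scripting details vary by tool, but as one illustration, this is roughly what a script weighted to match the earlier usage model might look like in Locust, an open-source Python load-generation tool; the paths, payloads and weights here are hypothetical.

    from locust import HttpUser, task, between

    class CatalogUser(HttpUser):
        # Think time between actions; tune this to match the usage model.
        wait_time = between(1, 5)

        # Task weights mirror the relative percentages in the usage model.
        # The scheduled nightly report would be triggered outside the user scripts.
        @task(50)
        def browse_catalog(self):
            self.client.get("/catalog")

        @task(25)
        def search(self):
            self.client.get("/search", params={"q": "widget"})

        @task(15)
        def view_item(self):
            self.client.get("/item/1234")

        @task(7)
        def checkout(self):
            self.client.post("/checkout", json={"item": 1234, "qty": 1})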

Aspect: Execute Baseline and Benchmark Tests

The Execute Baseline and Benchmark Tests aspect of the performance testing lifecycle is where test execution actually begins. The intent of this phase is twofold. First, all scripts need to be executed, validated and debugged (if necessary). Second, benchmarks are conducted to provide a basis of comparison for all future testing. Initial baselines and benchmarks are taken as soon as the test environment is available after the completion of the Develop Test Assets aspect. Re-benchmarking occurs at the completion of every successful execution of the Tune aspect. Designed exploratory scripts are baselined and, if necessary, executed at volume in this step.

Baseline Scripts

During this activity, the performance tester executes each developed script as a single user over multiple iterations to ensure that the script is developed properly and that it accurately represents the usage model. Results may be collected during these test executions; if performance issues are detected, they should be reported and tuning should begin immediately. If single-user baselines are to be included in the final report, they will need to be re-executed after the last successful completion of the Tune aspect.
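
A simple way to collect such single-user baselines is to time each named step of the script over multiple iterations and summarize the samples, as in this Python sketch; the step names and sleeps are placeholders for real application calls.

    import time
    from collections import defaultdict
    from contextlib import contextmanager

    timings = defaultdict(list)

    @contextmanager
    def transaction(name):
        """Record the wall-clock time of one named step in the script."""
        start = time.perf_counter()
        yield
        timings[name].append(time.perf_counter() - start)

    def run_script_once():
        with transaction("login"):
            time.sleep(0.05)    # placeholder for the real login request
        with transaction("search"):
            time.sleep(0.10)    # placeholder for the real search request

    for _ in range(20):          # one user, multiple iterations
        run_script_once()

    for name, samples in timings.items():
        print(f"{name}: min={min(samples):.3f}s max={max(samples):.3f}s "
              f"avg={sum(samples) / len(samples):.3f}s")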

Initial Benchmark

During this activity, the performance tester executes the first multi-user tests against the system. These tests generally represent approximately 15% of the expected user load and accurately represent the usage model. Results should be collected during these test executions and if performance issues are detected, they should be reported and tuning should begin immediately. The results from these tests are used as a basis of comparison for all subsequent tests to assist in the Analyze Results aspect.
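
As a rough sketch of what such a benchmark run involves, the Python below drives a small pool of virtual users at roughly 15% of a hypothetical expected load against a hypothetical endpoint; a real benchmark would come from the load-generation tool, with the full usage model and think times behind it.

    import statistics
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    EXPECTED_USERS = 200                                   # hypothetical target load
    BENCHMARK_USERS = max(1, int(EXPECTED_USERS * 0.15))   # ~15% for the benchmark
    URL = "http://localhost:8080/catalog"                  # hypothetical endpoint
    ITERATIONS_PER_USER = 25

    def virtual_user(_):
        samples = []
        for _ in range(ITERATIONS_PER_USER):
            start = time.perf_counter()
            urllib.request.urlopen(URL).read()
            samples.append(time.perf_counter() - start)
            time.sleep(1)                                  # crude think time
        return samples

    with ThreadPoolExecutor(max_workers=BENCHMARK_USERS) as pool:
        results = pool.map(virtual_user, range(BENCHMARK_USERS))
        all_samples = [s for user_samples in results for s in user_samples]

    print(f"users={BENCHMARK_USERS} requests={len(all_samples)}")
    print(f"mean={statistics.mean(all_samples):.3f}s "
          f"p95={statistics.quantiles(all_samples, n=100)[94]:.3f}s")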

Baseline Exploratory Scripts

During this activity, the performance tester baselines the exploratory tests that have been identified. This activity includes validation and single-user baselines as well as multi-user baseline executions. These multi-user executions are still considered baselines because exploratory tests are always executed to the exclusion of any other scripts, in order to pinpoint a specific performance issue.

Re-Benchmark

During this activity, the performance tester executes the same benchmark test(s) as were executed during the initial benchmark activity. These tests are always executed immediately following a successful application of the Tune aspect to provide a new basis of comparison for future test executions. Results should be collected during these test executions and will often appear in the final report document.




Aspect: Analyze Results

The Analyze Results aspect of the performance testing lifecycle is sometimes also called "the decision phase". The intent of this aspect is to evaluate the results of the previously executed test(s) and decide what steps to take next. While the performance test strategy document details what tests need to be conducted to consider the project complete, it cannot outline what tests may be needed in what order to complete any required tuning. This aspect of the performance testing lifecycle occurs at the completion of both the Execute Baseline and Benchmark Tests aspect and the Validate Requirements aspect. Each activity following the initial evaluation includes a decision point to determine the next aspect to be conducted.


Analyze Results

During this activity, the performance tester evaluates the results of the previous test execution(s) collaboratively with technical stakeholders. The intent of this evaluation is to determine what areas of the application are and are not performing within the documented performance acceptance criteria. The conclusions drawn during the Analyze Results activity are the basis for the decisions made in the subsequent activities in this aspect.
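
In practice this evaluation often amounts to lining measured values up against the documented criteria. The Python sketch below does that for hypothetical 95th-percentile response-time ceilings; the activities, limits and measurements are invented for illustration.

    # Hypothetical acceptance criteria: 95th-percentile ceilings, in seconds.
    ACCEPTANCE_CRITERIA = {"login": 2.0, "search": 3.0, "checkout": 5.0}

    # Hypothetical measured results from the latest test execution(s).
    MEASURED_P95 = {"login": 1.4, "search": 4.1, "checkout": 4.8}

    def analyze(criteria, measured):
        """Report pass/fail per activity and return the list of failures."""
        failures = []
        for activity, limit in criteria.items():
            value = measured.get(activity)
            passed = value is not None and value <= limit
            if not passed:
                failures.append(activity)
            print(f"{activity}: p95={value}s limit={limit}s -> "
                  f"{'PASS' if passed else 'FAIL'}")
        return failures

    if analyze(ACCEPTANCE_CRITERIA, MEASURED_P95):
        print("Some criteria not met: decide whether the results are conclusive, "
              "then tune or design exploratory tests.")
    else:
        print("All criteria met so far: advance toward completing the project.")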

Acceptance Criteria Met

During this activity, the performance tester uses the conclusions drawn during the Analyze Results activity to determine whether all performance acceptance criteria have been met. If so, the project advances to the Complete Engagement aspect. If not, the project advances to the Conclusive Results activity.

Conclusive Results

During this activity, the performance tester uses the conclusions drawn during the Analyze Results activity to determine whether the results from the previous test execution(s) were conclusive. Generally, if the results do not meet the acceptance criteria, you cannot determine what is causing the poor performance, and the test results are reproducible, the test is probably inconclusive. If the tests are determined to be inconclusive, the project advances to the Identify Exploratory Tests aspect. Otherwise, the project advances to the Tune decision activity.

Tune

During this activity, the performance tester, collaborating with other technical stakeholders, uses the conclusions drawn during the Analyze Results activity to determine whether the system/application should be tuned at this point in time. If all of the tests conducted so far have met their associated acceptance criteria but there are more criteria to be tested, the project moves on to the Validate Requirements aspect. If this is not the case, the project advances to the Tune aspect.

Aspect: Tune

The Tune aspect of the performance testing lifecycle is the heart of the performance testing project. While everyone hopes that no tuning will be required, this is very rarely the case. This phase of the project is a collaborative effort between the performance engineer and the developers/administrators of the system. These groups, working together, are able to detect and resolve performance issues more effectively than either group could alone. The effectiveness of this activity directly correlates with the project's ability to stay on schedule and meet the documented acceptance criteria.

Aspect: Identify Exploratory Tests

The Identify Exploratory Tests aspect of the performance testing lifecycle is the phase where unplanned tests are researched to detect and exploit performance issues to aid in the tuning effort. To be effective, these tests must be researched collaboratively with the technical stakeholders who have intimate knowledge of the area of the system exhibiting performance issues. The results of this research lead the project back into the Develop Test Assets aspect, where exploratory tests are created and documented.

Aspect: Validate Requirements

The Validate Requirements aspect of this performance testing model is the phase where the majority of tests are executed. The intent of this aspect is to execute the tests called for in the performance test strategy document that are intended to validate that the stated requirements are met. These tests should all be conducted in controlled, known environments to ensure the measurements collected from them are accurate when compared with one another.


Aspect: Complete Engagement

The Complete Engagement aspect of the performance testing lifecycle is the final phase of the project. The intent of this aspect is to document the results of the project. While this aspect contains only one activity, that activity is important enough to be treated separately from the others. The performance test final results document is the fourth significant deliverable of the project.



