Performance Testing Tutorial: What is, Types, Metrics & Example

Performance Testing

Performance Testing is a software testing process used to evaluate the speed, response time, stability, reliability, scalability and resource usage of a software application under a particular workload. The main purpose of performance testing is to identify and eliminate performance bottlenecks in the software application. It is a subset of performance engineering and is also known as “Perf Testing”.

The focus of Performance Testing is checking a software program's speed (how quickly the application responds), scalability (the maximum user load it can handle) and stability (whether it remains stable under varying loads).


Why do Performance Testing?


The features and functionality supported by a software system are not the only concern. A software application's performance, such as its response time, reliability, resource usage and scalability, also matters. The goal of Performance Testing is not to find bugs but to eliminate performance bottlenecks.

Performance Testing is done to provide stakeholders with information about their application regarding speed, stability, and scalability. More importantly, Performance Testing uncovers what needs to be improved before the product goes to market. Without Performance Testing, software is likely to suffer from issues such as running slowly while several users use it simultaneously, inconsistent behavior across different operating systems, and poor usability.

Performance testing determines whether the software meets speed, scalability and stability requirements under expected workloads. Applications sent to market with poor performance metrics due to nonexistent or poor performance testing are likely to gain a bad reputation and fail to meet expected sales goals.

Also, mission-critical applications like space launch programs or life-saving medical equipment should be performance tested to ensure that they run for a long period without deviations.

According to Dun & Bradstreet, 59% of Fortune 500 companies experience an estimated 1.6 hours of downtime every week. Considering that the average Fortune 500 company has a minimum of 10,000 employees paid an average of $56 per hour, the labor portion of downtime costs for such an organization would be $896,000 weekly, translating into more than $46 million per year.
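To see how those figures follow from one another: 10,000 employees × 1.6 hours × $56 per hour ≈ $896,000 per week, and $896,000 × 52 weeks ≈ $46.6 million per year.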

A mere five-minute downtime of Google.com (19 August 2013) was estimated to have cost the search giant as much as $545,000.

It is estimated that companies lost sales worth $1,100 per second due to a recent Amazon Web Services outage.

Hence, performance testing is important.

Types of Performance Testing

  - Load testing - checks the application's ability to perform under anticipated user loads. The objective is to identify performance bottlenecks before the software application goes live.
  - Stress testing - involves testing an application under extreme workloads to see how it handles high traffic or data processing. The objective is to identify the breaking point of an application.
  - Endurance testing - is done to make sure the software can handle the expected load over a long period of time.
  - Spike testing - tests the software's reaction to sudden large spikes in the load generated by users.
  - Volume testing - a large amount of data is populated in a database and the overall software system's behavior is monitored. The objective is to check the application's performance under varying database volumes.
  - Scalability testing - determines the software application's effectiveness in "scaling up" to support an increase in user load. It helps plan capacity additions to your software system.

Common Performance Problems

Most performance problems revolve around speed, response time, load time and poor scalability. Speed is often one of the most important attributes of an application. A slow-running application will lose potential users. Performance testing is done to make sure an app runs fast enough to keep a user's attention and interest. Take a look at the following list of common performance problems and notice how speed is a common factor in many of them:

  - Long load time - Load time is normally the initial time it takes an application to start. This should generally be kept to a minimum.
  - Poor response time - Response time is the time it takes from when a user inputs data into the application until the application outputs a response to that input. Generally, this should be very quick.
  - Poor scalability - A software product suffers from poor scalability when it cannot handle the expected number of users or when it does not accommodate a wide enough range of users.
  - Bottlenecking - Bottlenecks are obstructions in a system which degrade overall performance. They are typically caused by coding errors or hardware issues such as CPU, memory, network or disk limitations.

Performance Testing Process

The methodology adopted for performance testing can vary widely, but the objective of performance tests remains the same. It can help demonstrate that your software system meets certain pre-defined performance criteria. Or it can help compare the performance of two software systems. It can also help identify parts of your software system which degrade its performance.

Below is a generic process for performing performance testing:


  1. Identify your testing environment - Know your physical test environment, production environment and what testing tools are available. Understand details of the hardware, software and network configurations used during testing before you begin the testing process. It will help testers create more efficient tests.  It will also help identify possible challenges that testers may encounter during the performance testing procedures.
  2. Identify the performance acceptance criteria - This includes goals and constraints for throughput, response times and resource allocation. It is also necessary to identify project success criteria outside of these goals and constraints. Testers should be empowered to set performance criteria and goals because project specifications often will not include a wide enough variety of performance benchmarks; sometimes there may be none at all. When possible, finding a similar application to compare against is a good way to set performance goals.
  3. Plan & design performance tests - Determine how usage is likely to vary amongst end users and identify key scenarios to test for all possible use cases. It is necessary to simulate a variety of end users, plan performance test data and outline what metrics will be gathered.
  4. Configure the test environment - Prepare the testing environment before execution. Also, arrange the tools and other resources needed.
  5. Implement test design - Create the performance tests according to your test design.
  6. Run the tests - Execute and monitor the tests (a minimal scripted sketch of this step follows the list).
  7. Analyze, tune and retest - Consolidate, analyze and share test results. Then fine-tune and test again to see whether performance has improved or degraded. Since improvements generally grow smaller with each retest, stop when bottlenecking is caused by the CPU; at that point you may consider the option of increasing CPU power.
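To make steps 5 and 6 more concrete, here is a minimal sketch of a scripted load test written in Python using only the standard library. The target URL, number of virtual users and requests per user are assumptions chosen purely for illustration; they are not part of any particular tool or of the process above.

```python
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Illustrative assumptions only: these values would come from your
# test design (steps 2 and 3 above), not from this tutorial.
TARGET_URL = "http://localhost:8080/health"
VIRTUAL_USERS = 10
REQUESTS_PER_USER = 20


def virtual_user(user_id: int) -> list[float]:
    """Simulate one user issuing sequential requests; return response times in seconds."""
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
                resp.read()
        except OSError:
            continue  # a real test would record this as an error rather than skip it
        timings.append(time.perf_counter() - start)
    return timings


if __name__ == "__main__":
    started = time.perf_counter()
    with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
        per_user = list(pool.map(virtual_user, range(VIRTUAL_USERS)))
    elapsed = time.perf_counter() - started

    timings = [t for user in per_user for t in user]
    print(f"requests completed : {len(timings)}")
    print(f"throughput         : {len(timings) / elapsed:.1f} requests/second")
    if timings:
        print(f"mean response time : {statistics.mean(timings):.3f} s")
```

Dedicated tools such as JMeter or LoadRunner automate the same pattern at far larger scale, but the basic structure (define the workload, execute it, collect timings) is the same.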

Performance Testing Metrics: Parameters Monitored

The basic parameters monitored during performance testing include:

  - Processor usage - the amount of time the processor spends executing non-idle threads.
  - Memory use - the amount of physical memory available to processes on the machine.
  - Disk time - the amount of time the disk is busy executing read or write requests.
  - Bandwidth - the bits per second used by a network interface.
  - Response time - the time from when a user enters a request until the first character of the response is received.
  - Throughput - the rate at which a computer or network receives requests per second.
  - Hits per second - the number of hits on a web server during each second of a load test.
  - Thread counts - the number of threads that are running and currently active.
  - Garbage collection - how often unused memory is returned to the system, which has a direct bearing on performance.
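Several of these parameters can be sampled while a test is running. As a minimal illustration, the sketch below polls CPU and memory utilization once per second using the third-party psutil library (assumed to be installed; the 30-second sampling window is an arbitrary choice for this example).

```python
import psutil  # third-party library: pip install psutil

SAMPLE_SECONDS = 30  # arbitrary monitoring window for this illustration

samples = []
for _ in range(SAMPLE_SECONDS):
    cpu_pct = psutil.cpu_percent(interval=1)   # CPU utilization over the last second
    mem_pct = psutil.virtual_memory().percent  # share of physical memory in use
    samples.append((cpu_pct, mem_pct))
    print(f"cpu={cpu_pct:5.1f}%  mem={mem_pct:5.1f}%")

print(f"peak cpu={max(s[0] for s in samples):.1f}%  "
      f"peak mem={max(s[1] for s in samples):.1f}%")
```

In practice, a dedicated monitoring or APM tool would collect these counters on every server under test, but the principle of sampling resource usage throughout the run is the same.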

Example Performance Test Cases

During the actual performance test execution, vague terms like "acceptable range", "heavy load", etc. are replaced by concrete numbers. Performance engineers set these numbers as per business requirements and the technical landscape of the application. For example:

  - Verify that the response time is not more than 4 seconds when 1,000 users access the website simultaneously.
  - Verify that the response time of the application under load is within an acceptable range when network connectivity is slow.
  - Verify the maximum number of users the application can handle before it crashes.
  - Verify database query execution time when 500 records are read or written simultaneously.
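In an automated suite, such concrete criteria can be expressed directly as pass/fail checks. Below is one possible sketch, assuming response-time samples (in seconds) have already been collected by a load-test script such as the one shown earlier; the 4-second 95th-percentile threshold simply mirrors the first example case above.

```python
import statistics


def check_response_time_sla(timings_s: list[float],
                            p95_threshold_s: float = 4.0) -> bool:
    """Return True if the 95th-percentile response time meets the threshold."""
    # statistics.quantiles with n=20 returns 19 cut points; index 18 is ~p95.
    p95 = statistics.quantiles(timings_s, n=20)[18]
    print(f"p95 response time: {p95:.3f} s (threshold {p95_threshold_s} s)")
    return p95 <= p95_threshold_s


# Example usage with made-up sample timings:
sample_timings = [0.8, 1.2, 0.9, 2.5, 3.1, 1.7, 0.6, 2.2, 1.1, 3.9,
                  0.7, 1.4, 2.8, 1.0, 0.9, 1.6, 2.0, 1.3, 0.8, 1.5]
assert check_response_time_sla(sample_timings), "SLA violated: p95 above threshold"
```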

Performance Test Tools

There is a wide variety of performance testing tools available in the market. The tool you choose will depend on many factors such as the types of protocols supported, license cost, hardware requirements, platform support, etc. Two of the most widely used tools are:

  - HP LoadRunner - a commercial load testing tool that can simulate hundreds of thousands of concurrent users and place an application under real-life load conditions.
  - Apache JMeter - an open-source, Java-based tool for load and performance testing of web applications and other services.

FAQ

Which Applications should we Performance Test?

Performance Testing is done only for client-server based systems. This means that any application which does not follow a client-server architecture does not require Performance Testing.

For example, Microsoft Calculator is neither client-server based nor does it serve multiple users; hence, it is not a candidate for Performance Testing.


What is the difference between Performance Testing & Performance Engineering?

It is important to understand the difference between Performance Testing and Performance Engineering:

Performance Testing is a discipline concerned with testing and reporting the current performance of a software application under various parameters.

Performance engineering is the process by which software is tested and tuned with the intent of realizing the required performance. This process aims to optimize the most important application performance trait, i.e., the user experience.

Historically, testing and tuning have been distinctly separate and often competing realms. In the last few years, however, several pockets of testers and developers have collaborated independently to create tuning teams. Because these teams have met with significant success, the concept of coupling performance testing with performance tuning has caught on, and now we call it performance engineering.

Conclusion

In software engineering, performance testing is necessary before marketing any software product. It ensures customer satisfaction and protects an investor's investment against product failure. The cost of performance testing is usually more than made up for by improved customer satisfaction, loyalty, and retention.

 
