

Project lifecycle: A deep dive into performance testing

by István Nagy

Learn how performance testing impacts a data-driven project lifecycle: from scope to metrics tracked to the implementation itself.


It is always advisable to start a new activity by creating a plan or strategy first.

The same is true of performance testing.


Before starting to work on the actual tests, one of the first steps should be the creation of a test strategy.

Depending on the project’s needs, the documentation can be kept informal. The emphasis should be on the strategy itself, not the paperwork.

In some situations, it is not even important to have an official document.

Instead, we have multiple options:

  • Including the strategy in an email to relevant stakeholders.
  • Creating a mind map that reflects the strategy.
  • Outlining the strategy in the form of notes or a checklist.

Why have a test strategy?

  • It sets expectations for stakeholders about the scope, timeline, costs, and risks involved.
  • It forces you to clarify outstanding questions such as non-functional requirements. It is also a good opportunity to identify dependencies and risks that might appear along the way.
  • It serves as a guideline during the performance testing process.

And these are just some of the benefits...

What information to include in a test strategy

There is a lot of information that can be included in a performance test strategy.

Based on my experience, the most important are:

In-scope and outside-of-scope for performance testing

We can start creating the strategy by specifying the type of performance testing chosen for the project and listing the pages and user journeys of the application that will be tested.

We can also mention anything that is not covered by the performance tests.


Timeline

Performance testing can be a recurrent activity as part of the sprint effort or a one-off activity at a certain milestone of the project. When describing the timeline of the activities, we should mention the timeframes for:

  • environment set-up
  • test implementation
  • test run
  • result analysis
  • result sharing


Metrics tracked

It is enough to list the metrics that we monitor during performance testing. Some examples of such metrics: response time, throughput, and CPU and memory usage.
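As an illustration only (the metric names and numbers below are made up, not from the article), the first two metrics can be derived from a batch of raw response times collected over a measurement window:

```python
import statistics

def summarize(response_times_ms, window_s):
    """Summarize a batch of per-request response times (in milliseconds)
    collected over a measurement window of window_s seconds."""
    ordered = sorted(response_times_ms)
    # 95th-percentile index (nearest-rank style, clamped to a valid index)
    p95_index = max(0, int(len(ordered) * 0.95) - 1)
    return {
        "count": len(ordered),
        "avg_ms": statistics.mean(ordered),
        "p95_ms": ordered[p95_index],
        # throughput = completed requests divided by the window length
        "throughput_rps": len(ordered) / window_s,
    }

# Example: 10 requests observed over a 2-second window
summary = summarize([120, 95, 110, 300, 105, 98, 130, 115, 102, 99], 2)
print(summary["throughput_rps"])  # 10 requests / 2 s = 5.0 requests per second
```

In practice these numbers come straight from the load-testing tool’s report; the sketch just shows what the metrics mean.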

Tools used

We should specify the tools used for performance testing. If there are any costs associated with these tools, we should mention them as well.

Test environment

Ideally, there should be a dedicated test environment for performance testing. Regardless of whether there is one, the environment details should be described.


Reporting

Reports shouldn’t be very formal, but results still have to be shared with the team and other stakeholders.

The key is to have the audience in mind and find a way to share information that everybody can understand.

In this section we can outline the report details and the periodicity: after each test run, after multiple runs, or at the end of each sprint.
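A report everybody can understand can be as simple as a plain-text summary. The sketch below is purely illustrative; the run name and metric values are invented for the example:

```python
def format_report(run_name, metrics):
    """Render a run's metrics as a plain-text summary anyone can read.

    metrics: dict of metric name -> (value, unit); names are illustrative.
    """
    lines = [f"Performance test report: {run_name}", "-" * 40]
    for name, (value, unit) in metrics.items():
        # Left-align the metric name, right-align the value for readability
        lines.append(f"{name:<20} {value:>8} {unit}")
    return "\n".join(lines)

report = format_report("checkout-flow, run #3", {
    "avg response time": (240, "ms"),
    "95th percentile":   (410, "ms"),
    "throughput":        (55, "req/s"),
    "error rate":        (0.2, "%"),
})
print(report)
```

The point is the habit, not the format: a short, consistent summary shared on a fixed cadence beats an elaborate report nobody reads.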

Risks and contingencies

One purpose of the test strategy is to identify any risks that may impede us from completing the performance testing activities.

We should try to identify these risks as early as possible.

Closing the performance testing cycle

Depending on the context, we can also include an Appendix section in our test strategy.

Here we can explain different terminologies used and add extra information such as screenshots or test report examples.

The aspects listed above are suggestions.

The primary goal is to assess the context and create a plan to attack the problem.

Most of the time there will be unexpected situations along the way, but having a strategy brings us one step closer to success.

Performance testing is part of the Datavid methodology of running projects for clients, and it is instrumental to our delivery of the assigned work.


Frequently asked questions

What is performance testing?

Performance testing involves assessing the speed, stability, and scalability of a software application or system under varying workload conditions to ensure it meets performance requirements.


What is an example of performance testing?

An example of performance testing is simulating a large number of concurrent users accessing a website to measure its response time, throughput, and resource utilization, in order to ensure the website can handle the expected load without significant performance issues.
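The concurrent-user scenario above can be sketched with Python's standard library. To keep the example self-contained, the real HTTP call is replaced by a stand-in function that sleeps; in a real test, that function would issue a request to the system under test:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request(user_id):
    """Stand-in for a real HTTP call; sleeps to simulate ~50 ms of server work."""
    start = time.perf_counter()
    time.sleep(0.05)
    return time.perf_counter() - start

# Simulate 20 concurrent "users", each issuing one request.
with ThreadPoolExecutor(max_workers=20) as pool:
    durations = list(pool.map(fake_request, range(20)))

print(f"max response time: {max(durations) * 1000:.0f} ms")
print(f"avg response time: {sum(durations) / len(durations) * 1000:.0f} ms")
```

Dedicated load-testing tools do the same thing at much larger scale, with ramp-up schedules, distributed load generators, and built-in metric collection.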

What are the types of performance tests?

The types of performance tests include load testing, stress testing, spike testing, endurance testing, scalability testing, volume testing, and soak testing. They assess system performance under different conditions to identify bottlenecks and ensure performance objectives are met.