Performance Testing Terminology for Every QA/Test Automation Engineer
If you are going to work on testing a product, then performance testing is probably one of the most important aspects of its stability. As you might be aware, whenever a live cricket or football match is hosted on an OTT platform or YouTube, how do so many users watch it simultaneously? Another interesting example is festival sales on Amazon.com. The footfall of users increases massively during these times.
Yes, that is where we need to performance test our application, to make sure the load on the application is managed effectively and none of the users face any downtime. Keeping this in context, I have listed below the most useful terms that we should understand as a QA/Test Automation/SDET to effectively measure and enhance the performance of the applications we test.
Latency : is the delay in network communication. It is the time that data takes to travel across the network. Networks with a longer delay or lag have high latency, while those with fast response times have low latency.
Latency determines the delay that a user experiences when they send or receive data over the network. You can measure network latency by measuring ping time: transmit a small data packet and record how long it takes to receive confirmation that it arrived.
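For a quick check outside dedicated tooling, you can approximate ping time in a few lines of Python by timing a TCP handshake; the host below is only a placeholder:

```python
import socket
import time

def ping_time(host: str, port: int = 443, timeout: float = 5.0) -> float:
    """Approximate round-trip latency (in ms) by timing a TCP handshake."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established, then immediately closed
    return (time.perf_counter() - start) * 1000

# Hypothetical target host, used only for illustration
print(f"Latency: {ping_time('example.com'):.1f} ms")
```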
Throughput : refers to the average volume of data that can actually pass through the network over a specific time. It indicates the number of data packets that arrive at their destinations successfully, as well as the data packet loss.
Throughput determines the number of users that can access the network at the same time. You can measure throughput either manually or with testing tools such as JMeter, Postman, LoadRunner, Locust, or Gatling; these are quite simple to set up and easy to use for measurement.
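Since Locust is itself written in Python, a minimal load test script shows how little setup is needed; the host and endpoint below are placeholders for whatever application you test:

```python
# locustfile.py -- a minimal Locust script (placeholder host/endpoint)
# Run with: locust -f locustfile.py --host https://your-app.example.com
from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    # simulated pause between actions (see "Think Time" below)
    wait_time = between(1, 3)

    @task
    def load_home_page(self):
        # Locust's UI reports response times and requests/sec (throughput)
        self.client.get("/")
```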
Important : A network with low throughput and high latency struggles to send and process high data volume, which results in congestion and poor application performance. In contrast, a network with high throughput and low latency is responsive and efficient. Users experience improved performance and increased satisfaction.
Response time : is the total time taken between the moment a user or client sends a request to the system or application and the moment they receive a response. Response time can be influenced by various factors, such as latency, server capacity, database queries, code efficiency, caching, and concurrency (explained below).
Caching : is a technique that improves the speed and performance of your website by storing frequently used data or resources in a fast, easily accessible location such as the browser or a caching server.
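A tiny Python sketch of the idea, using an in-process cache (functools.lru_cache) as a stand-in for a browser or caching server; the slow lookup is simulated:

```python
import functools
import time

@functools.lru_cache(maxsize=128)
def fetch_product_details(product_id: int) -> dict:
    time.sleep(1)  # simulate a slow database query or remote API call
    return {"id": product_id, "name": f"Product {product_id}"}

start = time.perf_counter()
fetch_product_details(42)  # cache miss: pays the full ~1 s cost
print(f"first call:  {time.perf_counter() - start:.2f} s")

start = time.perf_counter()
fetch_product_details(42)  # cache hit: served from memory, near-instant
print(f"second call: {time.perf_counter() - start:.4f} s")
```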
A response time of 2 seconds may be acceptable for a web page, but unacceptable for a real-time application. Similarly, a response time of 10 milliseconds may be impressive for a database query, but insignificant for a video streaming service.
Concurrent Users : Multiple users using an application at the same time. To simulate realistic performance test scenarios, we need concurrency checks, making sure the application can manage the load and provide correct responses.
Virtual Users : If we use performance testing tools such as JMeter, Locust, or Postman, we can load virtual users. A virtual user is not a real person but replicates real user actions on an application for performance checks. Virtual users are often implemented as threads, and multiple threads are used for concurrency checks.
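A rough sketch of virtual users as threads in plain Python; the URL and user count are illustrative, and real tools such as JMeter or Locust add reporting, ramp-up, and coordination on top of this idea:

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://example.com/"  # placeholder endpoint
VIRTUAL_USERS = 10            # each thread plays one virtual user

def virtual_user(user_id: int) -> float:
    """One simulated user: request the page and time the response."""
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
    durations = list(pool.map(virtual_user, range(VIRTUAL_USERS)))

print(f"avg response time across {VIRTUAL_USERS} concurrent users: "
      f"{sum(durations) / len(durations):.3f} s")
```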
Transactions Per Second (TPS) : a measurement used to calculate the performance of systems that handle routine transactions and record-keeping.
TPS can be calculated with the formula:
TPS = T ÷ S
Where:
T = Number of transactions
S = Number of seconds
TPS = Transactions per second
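As a worked example (numbers invented for illustration):

```python
def transactions_per_second(transactions: int, seconds: float) -> float:
    """TPS = T / S"""
    return transactions / seconds

# e.g. 4,500 completed transactions over a 300-second test window
print(transactions_per_second(4500, 300))  # 15.0 TPS
```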
Ramp up period : The Ramp Up Time property represents the delay between the start of the test and the moment all virtual users are running. It is independent of the Duration setting and tells the tool how long to take to “ramp up” to the full number of virtual users chosen.
Ramp Up Time needs to be long enough to avoid too large a workload at the start of a test, and short enough that the last virtual users start running before the first ones finish (unless you want that to happen). Ideally it should simulate a real-world condition, but that becomes too difficult to manage during tests, so try different combinations.
Ramp down period : While ramp-up tests are perfect for the build-up before peak time as the app launches, ramp-down looks at data when the peak hour ends. During that time, you often see drops in the concurrent user numbers. Ramp-down assesses the same type of speed delay, only this time during that drop.
This can be useful in a variety of scenarios: for example, we increase the number of users for a certain time period and then reduce them; after some time we increase the users again. This makes sure the application responds correctly under a variety of loads. This is sometimes also called an endurance test, which we will discuss shortly.
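One way to script such a ramp-up/hold/ramp-down pattern in Locust is a custom LoadTestShape, paired with a user class like the one shown earlier; the user counts and durations below are illustrative, not recommendations:

```python
from locust import LoadTestShape

class RampUpDownShape(LoadTestShape):
    """Ramp to 120 users over 2 min, hold for 5 min, then ramp back down."""

    def tick(self):
        t = self.get_run_time()
        if t < 120:
            return (int(t), 5)                # ramp up: one user per second
        if t < 420:
            return (120, 5)                   # hold at peak load
        if t < 540:
            return (int(120 - (t - 420)), 5)  # ramp down as "peak hour" ends
        return None                           # stop the test
```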
Advantages of Ramp up and down periods:
- Utilizing Variable Load Simulations
- Monitoring Resource Utilization
- Assessing System Recovery
Think Time : a real-world user takes time to read the content of a web page or to fill in the details on a web form. Such activities create a gap between two actions of a user. Think time simulates the same gap by adding a delay between two transactions.
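A minimal sketch of inserting think time between two scripted actions (Locust's wait_time = between(1, 3) in the earlier script serves the same purpose):

```python
import random
import time

def think_time(min_s: float = 1.0, max_s: float = 5.0) -> None:
    """Pause like a person reading a page or filling in a form."""
    time.sleep(random.uniform(min_s, max_s))

# ... action 1: open the page ...
think_time()
# ... action 2: submit the form ...
```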
Protocol : the method of communication between a client and the server, such as HTTP, TCP, etc.
Stress tests : measure the software's robustness and error-handling capabilities under extremely heavy load conditions, ensuring that the software doesn't crash in crunch situations. They can also help us determine the breakpoint of our application, beyond which it might start throwing errors.
Why do we conduct them? : to check whether the system demonstrates effective error management under extreme conditions.
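A sketch of hunting for that breakpoint by stepping the load up until the error rate crosses a chosen threshold; the endpoint, load steps, and the 5% threshold are all assumptions for illustration:

```python
import urllib.error
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://example.com/"  # placeholder endpoint

def hit(_):
    """One request; report whether it succeeded."""
    try:
        with urllib.request.urlopen(URL, timeout=5) as resp:
            return resp.status == 200
    except (urllib.error.URLError, TimeoutError):
        return False

for users in (10, 50, 100, 200, 400):  # step the load up
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(hit, range(users)))
    error_rate = 1 - sum(results) / len(results)
    print(f"{users} users -> error rate {error_rate:.0%}")
    if error_rate > 0.05:  # >5% errors: treat this step as the breakpoint
        print(f"breakpoint is around {users} concurrent users")
        break
```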
Endurance tests : the load we usually get during peak-time usage of our production application is applied to the application in a different environment, but for an extended duration of time, just to make sure that if the application gets some unusual usage statistics, it is able to manage them effectively.
Volume tests : applying a huge amount of data during tests, such as artificially increasing the database size. Using volume tests we can detect the impact on response time, and system behavior can be studied when the system is exposed to a high volume of data (see the seeding sketch after the list below).
Why conduct Volume tests :
- To check system performance with increasing volumes of data in the database
- To identify the problems that occur with a large amount of data
- To figure out the point at which the stability of the system degrades
- To help identify the capacity of the system or application, at normal and heavy volume
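As an illustration of seeding a large data volume, here is a sketch using SQLite as a stand-in for whatever datastore the application actually uses; the table and row count are invented:

```python
import sqlite3

conn = sqlite3.connect("volume_test.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, item TEXT, qty INTEGER)"
)

# Insert a couple of million synthetic rows, then re-run the usual
# response-time checks against this inflated database.
rows = ((f"item-{i % 1000}", i % 10) for i in range(2_000_000))
conn.executemany("INSERT INTO orders (item, qty) VALUES (?, ?)", rows)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(f"seeded {count:,} rows")
conn.close()
```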
Baseline : while establishing performance tests, we need a metric beforehand from which we can start the testing. This metric can contain data on the current performance of the application, or the minimum threshold our application should meet. This metric is described as the baseline.
Benchmarking : once we establish our baseline, to improve the performance of the system we can start benchmarking by performing the different load tests described above, or by network throttling (slowly making network conditions worse). Benchmarking helps us measure by how much our system improved or degraded during a particular performance test.
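A small sketch of comparing a benchmark run against the baseline; the metric names and numbers are invented for illustration (lower is better for each):

```python
baseline  = {"avg_response_ms": 250, "p95_response_ms": 600, "error_rate": 0.010}
benchmark = {"avg_response_ms": 310, "p95_response_ms": 720, "error_rate": 0.012}

for metric, base in baseline.items():
    measured = benchmark[metric]
    change = (measured - base) / base * 100
    verdict = "OK" if measured <= base else "REGRESSION"
    print(f"{metric}: {base} -> {measured} ({change:+.1f}%) {verdict}")
```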