Reprinted from https://www.testwo.com/article/725
There are five main types of performance test:
1. Stress testing: behavior under extreme load, beyond normal operating limits
2. Stability (soak) testing: long-term operation under a sustained load
3. Benchmarking: performance measurement under specific, controlled conditions
4. Load testing: performance under different load levels
5. Capacity testing: determining the system's optimal (maximum sustainable) capacity
Performance Test External Indicators:
1. Throughput: The number of requests and tasks that the system can handle per second.
2. Response time: The time taken by the service to process a request or a task.
3. Error rate: The proportion of requests in a batch that result in errors.
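As a quick illustration, the three external indicators can be computed from a batch of request records. The status codes, latencies, and wall-clock time below are all hypothetical:

```python
# Minimal sketch: computing throughput, average response time, and
# error rate from hypothetical (status_code, latency_seconds) records.
requests = [(200, 0.12), (200, 0.30), (500, 0.05), (200, 0.18)]
wall_clock = 0.5  # seconds the whole batch took end-to-end

throughput = len(requests) / wall_clock                     # requests/second
avg_response = sum(t for _, t in requests) / len(requests)  # seconds
error_rate = sum(1 for s, _ in requests if s >= 500) / len(requests)

print(throughput, round(avg_response, 4), error_rate)  # 8.0 0.1625 0.25
```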
Performance Test Internal Indicators:
1. CPU
2. Memory
3. Server load
4. Network
5. Disk I/O, etc.
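A few of these internal indicators can be sampled with the standard library alone; this is a minimal sketch for Unix-like systems (production monitoring would normally use psutil, sar, vmstat, or similar tools), and it also reads the open-file limit mentioned in the bottleneck checklist below:

```python
import os
import resource
import shutil

# Stdlib-only sampling of a few internal indicators (Unix-like only).
def sample_internal_indicators(path="/"):
    load1, load5, load15 = os.getloadavg()          # server load averages
    disk = shutil.disk_usage(path)                  # disk capacity/usage
    nofile_soft, _ = resource.getrlimit(resource.RLIMIT_NOFILE)
    return {
        "load_1min": load1,
        "load_5min": load5,
        "load_15min": load15,
        "disk_used_pct": 100.0 * disk.used / disk.total,
        "max_open_files": nofile_soft,              # the "ulimit -n" value
    }

print(sample_internal_indicators())
```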
Common performance bottlenecks:
1. Throughput reaches its maximum while system load stays below the threshold: this is generally caused by the service under test being allocated too few system resources. If this is observed during testing, investigate the ulimit settings, the number of threads the system allows, the memory allocated to the process, and so on.
2. CPU us and sy are not high, but wa (I/O wait) is high: if the service under test is disk-I/O intensive, a high wa is normal. Otherwise, there are two likely causes. First, the service's business logic reads and writes the disk badly: the read/write frequency is too high or the volume of data written is too large, for example because of an unreasonable data-loading strategy or excessive logging. Second, the server has run out of memory and the service keeps swapping pages in and out of the swap partition.
3. Response time for the same request fluctuates wildly: when this happens at normal throughput, there are two likely causes. First, the service's locking logic around resources is flawed, so some requests spend a long time waiting for resources to be unlocked. Second, Linux itself has allocated limited resources to the service, and some requests must wait for other requests to release resources before they can proceed.
4. Memory grows continuously: at a fixed throughput, if memory usage keeps rising, the service under test very likely has a significant memory leak; locate it with a memory-checking tool such as valgrind.
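For a Python service, the standard-library tracemalloc module plays a role similar to valgrind for this last case, by comparing heap snapshots over time. The "leak" below is deliberately fabricated for illustration:

```python
import tracemalloc

# Sketch: spotting steadily growing allocations with tracemalloc.
tracemalloc.start()

leak = []  # deliberately "leaked" objects for demonstration
before = tracemalloc.take_snapshot()
for i in range(10_000):
    leak.append(str(i) * 50)   # memory grows while throughput is fixed
after = tracemalloc.take_snapshot()

# The leaking line shows up at the top of the diff.
for stat in after.compare_to(before, "lineno")[:3]:
    print(stat)
```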
Test Report Output:
1. Test conclusions: whether the maximum QPS, response time, and other indicators of the service under test meet expectations; deployment recommendations; etc.
2. Test environment description: including performance requirements, test server configuration, test data sources, test methods, etc.
3. Monitoring indicator statistics: response-time statistics, QPS, server indicator statistics, and process indicator statistics. Charts are the recommended way to present these statistics.
Software performance concerns:
1. User angle:
Response time of user operations, and timely, friendly error prompts when something goes wrong
2. Operations and maintenance angle:
Response time; whether server resource usage is reasonable; whether application-server and database resources are used rationally; whether the system is scalable; how many users the system can support and the maximum volume of business it can handle; where the possible performance bottlenecks are; which hardware upgrades would improve performance; whether the system can sustain 7×24-hour business access
3. Development Designer's Angle:
Whether the architecture design is reasonable; whether the database design is reasonable; whether the code has performance problems; whether memory is used unreasonably anywhere in the system; whether any thread-synchronization approach is unreasonable; whether there is unreasonable contention for resources in the system
4. Performance Test Engineer's Angle: All of the above performance points
Key terms for software performance:
1. Response time: the time required to respond to a request.
Network transmission time: N1+N2+N3+N4; application-server processing time: A1+A3; database-server processing time: A2. Response time = N1+N2+N3+N4+A1+A2+A3.
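With hypothetical timings for each segment, the breakdown adds up as follows:

```python
# Hypothetical segment timings in milliseconds for the breakdown above.
N1, N2, N3, N4 = 10, 5, 5, 10   # network transmission legs
A1, A3 = 40, 30                 # application-server processing
A2 = 60                         # database-server processing

response_time = N1 + N2 + N3 + N4 + A1 + A2 + A3
print(response_time)  # 160 ms as perceived by the user
```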
2. Formula for calculating the number of concurrent users
Number of system users: the rated number of users of the system. For example, if an OA system could be used by 5,000 people in total, then 5,000 is its number of system users.
Number of simultaneous online users: the maximum number of users online at the same time within a given time frame. It is related to the requests per second (RPS, i.e. throughput), the number of concurrent connections, and the average user think time.
Average number of concurrent users: C = nL / T, where C is the average number of concurrent users, n is the average number of visits (login sessions) per day, L is the average length of a session from login to logout, and T is the period under consideration (how long per day users use the system).
Peak number of concurrent users: C^ ≈ C + 3*sqrt(C), where C^ is the peak and C is the average number of concurrent users. This formula follows from the Poisson distribution.
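The two formulas can be checked with hypothetical numbers: an OA system with n = 400 login sessions per day, each lasting L = 4 hours, over a T = 8 hour workday:

```python
import math

# Average and peak concurrent users, per the formulas above.
def avg_concurrent_users(n, L, T):
    return n * L / T                  # C = nL / T

def peak_concurrent_users(C):
    return C + 3 * math.sqrt(C)       # C^ ≈ C + 3*sqrt(C) (Poisson)

C = avg_concurrent_users(400, 4, 8)
print(C)                                    # 200.0 average concurrent users
print(round(peak_concurrent_users(C), 1))   # 242.4 at the peak
```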
3. Formula for throughput calculation:
Throughput is the number of requests the system processes per unit time. From a business perspective it can be measured in requests/second, pages/second, people/day, or transactions/hour; from a network perspective it can be measured in bytes/second. For interactive applications, throughput reflects the pressure on the server and indicates the load capacity of the system. Different units expose problems at different levels: bytes/second can reveal bottlenecks in the network infrastructure, server architecture, or application-server constraints, while requests/second mainly reveals bottlenecks caused by the application server and application code. When no performance bottleneck is hit, throughput and the number of virtual users are related by the formula F = VU * R / T, where F is the throughput, VU is the number of virtual users, R is the number of requests per virtual user, and T is the duration of the performance test.
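For example, with hypothetical numbers of 100 virtual users each issuing 20 requests over a 50-second run:

```python
# Throughput formula F = VU * R / T from the text above.
def throughput(vu, r, t):
    return vu * r / t

print(throughput(100, 20, 50))  # 40.0 requests/second
```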
4. Performance counters:
These are data indicators that describe the performance of a server or operating system, such as memory usage and processor time. They play a monitoring and analysis role in performance testing, especially when analyzing system scalability and locating performance bottlenecks. Resource utilization: the degree to which each system resource is used, e.g. CPU occupancy of 68% or memory occupancy of 55%. It is generally computed as "resources actually used" / "total resources available".
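The utilization formula is simple arithmetic; here it is with hypothetical numbers (8 GiB of RAM with 4.4 GiB in use):

```python
# Resource utilization = "actual use" / "total available", as a percent.
def utilization_pct(used, total):
    return 100.0 * used / total

print(round(utilization_pct(4.4, 8.0), 1))  # 55.0 (% memory occupancy)
```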
5. Formula for calculating thinking time:
Think time: from a business perspective, this is the interval between a user's successive operations. To simulate such intervals realistically in performance tests, the concept of think time is introduced.
In the throughput formula F = VU * R / T, the per-user request count R can be derived from the test duration T and the think time TS: R = T / TS.
A general procedure for calculating think time:
A. Compute the number of concurrent users: C = nL / T, noting that F = R × C.
B. Measure the system's average throughput: F = VU * R / T, hence R × C = VU * R / T.
C. Compute the average number of requests per user: R = F × T / VU.
D. Compute the think time: TS = T / R.
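The last two steps can be worked through with hypothetical numbers: a throughput of 50 requests/second sustained for a 600-second test by 100 virtual users:

```python
# Steps C and D of the think-time procedure above.
def think_time(F, T, VU):
    R = F * T / VU        # step C: average requests per user
    return T / R          # step D: TS = T / R

R = 50 * 600 / 100                   # 300.0 requests per user
print(R, think_time(50, 600, 100))   # 300.0 2.0 (seconds of think time)
```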
WeChat official account: Play Test Development
Welcome to follow, and let's make progress together. Thank you!