Performance Testing Metrics
- A solid understanding of performance testing metrics not only helps you evaluate system performance comprehensively but also provides reliable data for optimization decisions.
- This guide systematically introduces the core performance metrics of AngusTester, helping you quickly grasp the meaning and usage of each metric.
Basic Counting Metrics
Quantify the core operations and data flow during testing, providing foundational data for performance analysis.
Metric Name | Parameter Name | Description | Key Purpose | Calculation Logic |
---|---|---|---|---|
Iteration Count | iterations | Number of test task executions | Measure test coverage | Accumulated execution count |
Sample Count | n | Number of valid data collection points | Evaluate data validity | Count valid sample points |
Operation Count | operations | Total number of request operations executed | Assess system processing capacity | Accumulated operation count |
Transaction Count | transactions | Number of completed business transactions | Evaluate business processing capacity | Accumulated successful transactions |
Bytes Read | readBytes | Total data read | Assess network/storage read load | Accumulated bytes read |
Bytes Written | writeBytes | Total data written | Assess network/storage write load | Accumulated bytes written |
💡 Application Tip: Basic counting metrics are cumulative totals; analyze them against the time dimension (for example, as per-second rates) to reveal how performance evolves over a run.
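As an illustration, the basic counters above can be accumulated from a simple recording hook. This is a minimal Python sketch; the `record_operation` helper is hypothetical, and only the dictionary keys mirror the parameter names in the table:

```python
# Minimal sketch of accumulating basic counters during a test loop.
# The record_operation helper is hypothetical; keys follow the table's parameter names.
metrics = {"iterations": 0, "n": 0, "operations": 0,
           "transactions": 0, "readBytes": 0, "writeBytes": 0}

def record_operation(read_bytes, write_bytes, completed_transaction):
    """Update the counters for one executed request operation."""
    metrics["operations"] += 1
    metrics["n"] += 1                    # one valid sample point collected
    metrics["readBytes"] += read_bytes
    metrics["writeBytes"] += write_bytes
    if completed_transaction:
        metrics["transactions"] += 1

# Simulate three operations, one of which fails before completing its transaction.
record_operation(2048, 512, True)
record_operation(1024, 256, True)
record_operation(0, 256, False)
print(metrics["operations"], metrics["transactions"], metrics["readBytes"])
# → 3 2 3072
```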
Throughput Metrics
Key indicators reflecting the system's processing capacity per unit of time.
Request Throughput
Metric Name | Parameter Name | Description | Key Purpose | Calculation Formula |
---|---|---|---|---|
Operations Per Second | ops | Number of requests processed per second | Evaluate system processing capacity | Operation count / Test duration |
Transactions Per Second | tps | Number of transactions completed per second | Evaluate business processing capacity | Transaction count / Test duration |
💡 Important Note: In single-interface test scenarios, where each request corresponds to exactly one transaction, QPS (ops) and TPS values are equal.
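Both request-throughput formulas reduce to a division by the test duration; a minimal sketch with hypothetical counts:

```python
# Hypothetical totals from a 60-second test run.
operations = 12000     # accumulated operation count
transactions = 11800   # accumulated successful transactions
duration_s = 60.0      # test duration in seconds

ops = operations / duration_s    # operations per second
tps = transactions / duration_s  # transactions per second
print(f"ops={ops:.2f}, tps={tps:.2f}")
# → ops=200.00, tps=196.67
```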
Network Throughput
Metric Name | Parameter Name | Description | Key Purpose | Calculation Formula |
---|---|---|---|---|
Bytes Read Per Second | brps | Data read per second | Evaluate network receiving capacity | Bytes read / Test duration |
Bytes Written Per Second | bwps | Data written per second | Evaluate network sending capacity | Bytes written / Test duration |
⚠️ Critical Threshold: When throughput reaches 80% of the system's theoretical peak, it may indicate an approaching performance bottleneck.
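The same division applies to network throughput, and the 80% guideline above can be checked directly. The peak value below is an assumed example (a 1 Gbit/s link), not a value reported by AngusTester:

```python
read_bytes = 1_500_000_000      # hypothetical total bytes read in the run
duration_s = 120.0              # test duration in seconds
brps = read_bytes / duration_s  # bytes read per second (12.5 MB/s here)

peak_bps = 125_000_000          # assumed theoretical peak: 1 Gbit/s ≈ 125 MB/s
utilization = brps / peak_bps
if utilization >= 0.8:
    print("warning: throughput at or above 80% of theoretical peak")
print(f"utilization={utilization:.0%}")
# → utilization=10%
```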
Latency Metrics
Core indicators measuring system response speed, directly impacting user experience.
Metric Name | Parameter Name | Description | Key Purpose | Calculation Logic |
---|---|---|---|---|
Average Response Time | tranMean | Average transaction response time | Reflect overall system performance | Total response time / Transaction count |
Minimum Response Time | tranMin | Best response time | Show optimal performance | Take minimum value |
Maximum Response Time | tranMax | Worst response time | Reveal performance bottlenecks | Take maximum value |
P50 Response Time | tranP50 | Response time for 50% of requests | Reflect typical response performance | 50th percentile after sorting |
P75 Response Time | tranP75 | Response time for 75% of requests | Reflect good performance | 75th percentile after sorting |
P90 Response Time | tranP90 | Response time for 90% of requests | Evaluate high-quality performance | 90th percentile after sorting |
P95 Response Time | tranP95 | Response time for 95% of requests | Evaluate high service level | 95th percentile after sorting |
P99 Response Time | tranP99 | Response time for 99% of requests | Evaluate extreme-case performance | 99th percentile after sorting |
P999 Response Time | tranP999 | Response time for 99.9% of requests | Evaluate ultimate performance | 99.9th percentile after sorting |
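Every percentile row in the table follows the same "sort, then index" logic. A nearest-rank sketch in Python (the sample response times are hypothetical, and AngusTester's exact interpolation method may differ):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the smallest sorted value at or below
    which at least p percent of the samples fall."""
    s = sorted(samples)
    k = max(0, math.ceil(p / 100 * len(s)) - 1)  # rank -> 0-based index
    return s[k]

# Hypothetical response times in milliseconds.
times = [120, 85, 95, 210, 130, 90, 100, 450, 110, 105]
print(percentile(times, 50))  # → 105  (typical response, tranP50)
print(percentile(times, 90))  # → 210  (tranP90)
print(percentile(times, 99))  # → 450  (tranP99)
```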
Error Metrics
Core indicators for evaluating system stability and reliability.
Metric Name | Parameter Name | Description | Key Purpose | Calculation Formula |
---|---|---|---|---|
Error Count | errors | Total number of errors that occurred | Evaluate system stability | Accumulated error count
Error Rate | errorRate | Proportion of operations that resulted in errors | Evaluate system reliability | (Error count / Operation count) × 100%
Error Cause Distribution | errorCauseCounter | Distribution of error types | Diagnose root causes | Group statistics by error type |
🔍 Diagnostic Tip: When analyzing the error distribution, prioritize connection timeouts and HTTP 5xx errors; they typically point to server-side or capacity problems.
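The error-rate formula and the cause distribution can be sketched with a `Counter` over per-request outcomes (the outcome strings below are hypothetical):

```python
from collections import Counter

# Hypothetical outcomes of 10 operations ("OK" means success).
outcomes = ["OK", "OK", "Connection timeout", "OK", "HTTP 500",
            "OK", "Connection timeout", "OK", "OK", "OK"]

errors = [o for o in outcomes if o != "OK"]
error_rate = len(errors) / len(outcomes) * 100  # (Error count / Operation count) x 100%
cause_counter = Counter(errors)                 # grouped by error type, like errorCauseCounter

print(f"errorRate={error_rate:.1f}%")  # → errorRate=30.0%
print(cause_counter.most_common(1))    # → [('Connection timeout', 2)]
```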
Thread (Concurrency) Metrics
Reflect system concurrency processing capability and resource utilization efficiency.
Metric Name | Parameter Name | Description | Key Purpose | Status Description |
---|---|---|---|---|
Thread Pool Size | threadPoolSize | Current thread pool capacity | Evaluate system concurrency capacity | - |
Active Thread Count | threadPoolActiveSize | Number of working threads | Evaluate resource utilization | - |
Maximum Thread Pool Capacity | threadMaxPoolSize | Maximum supported threads | Evaluate system expansion limit | - |
Thread Running Status | threadRunning | Whether the thread is running | Monitor thread status | true = running |
Thread Termination Status | threadTerminated | Whether the thread is terminated | Monitor thread status | true = terminated |
Timestamp Fields
Field Name | Parameter Name | Description |
---|---|---|
Server Timestamp | timestamp | Server-recorded time |
Runner Timestamp | timestamp0 | Sampling task-recorded time |
Sampling Task Name | name | Sampling task identifier |
Test Duration | duration | Total execution time |
Sampling Duration | duration0 | Single sampling time |
Start Time | startTime | Test start time point |
End Time | endTime | Test end time point |
Key Considerations
- Metric Correlation: When latency increases, synchronously check error rate and throughput changes.
- Environmental Factors:
- Network latency directly affects response time.
- Test data scale impacts throughput performance.
- Scenario Differences:
- E-commerce systems should focus on P99 latency.
- Financial systems must ensure a 0% error rate.
- Monitoring Strategy:
- Establish performance baselines for comparison.
- Set threshold-based alerts (e.g., error rate > 0.5%).
- Trend Analysis: Compare with historical data to identify performance degradation.