The current behaviour of the charting is that the maximum value of the y axis is updated dynamically based on the items charted within the time period defined by the current x axis (which the user can modify). For the “response time distribution” chart, the minimum and maximum values of both axes also appear to depend on the test results.
Whilst this is great when viewing a single test run’s chart in isolation, it makes comparing charts from multiple test runs, e.g. to compare performance across releases, very difficult.
It would be great if we could constrain these axes, either by fixing their values outright or (even better) by allowing minimum/maximum bounds to be set for them. This should happen at the report generation stage, rather than during the test itself.
Do you know if this is possible?
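To illustrate what we have in mind, here is a minimal sketch of the desired behaviour. The names (`AxisConstraint`, `resolve_axis_range`) are hypothetical and not part of any existing tool’s API; the idea is simply that a configured bound, when present, overrides the dynamically computed one:

```python
from dataclasses import dataclass
from typing import Optional, Sequence, Tuple


@dataclass
class AxisConstraint:
    """Optional fixed bounds for an axis; None means 'compute dynamically'."""
    min: Optional[float] = None
    max: Optional[float] = None


def resolve_axis_range(values: Sequence[float],
                       constraint: AxisConstraint) -> Tuple[float, float]:
    """Return the axis range: dynamic from the data, unless a fixed
    bound has been configured for that end of the axis."""
    lo = constraint.min if constraint.min is not None else min(values)
    hi = constraint.max if constraint.max is not None else max(values)
    return lo, hi


# Two runs charted with the same fixed y axis become directly comparable:
run_a = [120, 340, 980]
run_b = [150, 400, 2100]
fixed_y = AxisConstraint(min=0, max=2500)
# resolve_axis_range(run_a, fixed_y) and resolve_axis_range(run_b, fixed_y)
# both yield (0, 2500), so the charts share a common scale.
```

With no constraint configured, `resolve_axis_range` falls back to today’s dynamic behaviour, so the change would be backwards-compatible.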