Let’s say I have a simple scenario like this one:
ScenarioBuilder simpleScenario = scenario("Create show and reserve seats")
  .feed(showIdsFeeder)
  .exec(http("create-show")
    .post("/shows")
    .body(createShowPayload))
  .foreach(randomSeatNums.asJava(), "seatNum").on(
    exec(http("reserve-seat")
      .patch("/shows/#{showId}/seats/#{seatNum}")
      .body(reserveSeatPayload)));
setUp(simpleScenario.injectOpen(constantUsersPerSec(usersPerSec).during(duringSec))
  .protocols(httpProtocol));
A virtual user creates a Show with 50 seats and then reserves the seats one by one, sequentially. With usersPerSec = 30 and a randomSeatNums list of length 50, I’m getting 1500 req/s and pretty good results: 99th percentile = 5 ms.
Of course, this is not a very realistic scenario, mostly because of the foreach loop and the sequential reservations. If I change the flow to something like this:
ScenarioBuilder reserveSeats = scenario("Reserve seats")
  .feed(reservationsFeeder)
  .exec(http("reserve-seat")
    .patch("/shows/#{showId}/seats/#{seatNum}")
    .body(reserveSeatPayload));

setUp(reserveSeats.injectOpen(constantUsersPerSec(requestsPerSec).during(duringSec)))
  .protocols(httpProtocol);
and set requestsPerSec = 1500, the results are really bad: the 99th percentile is 100 ms and more. I thought this was a problem with the number of connections, because in the first scenario I’m reusing the same connection for 50 + 1 requests. But even after I change the second scenario and enable the shareConnections option, the 99th percentile is still no smaller than 75 ms.

Correct me if I’m wrong, but in both cases the per-second outcome is pretty much the same: 1500 requests have been launched. I have a feeling that in the second scenario, with constantUsersPerSec(1500), all 1500 requests start at (more or less) the same time, and that’s why I’m getting such bad results. In the first scenario there is a sort of queue per virtual user (the foreach loop) that drastically improves the overall performance. If that’s the case, then the second scenario is also not very realistic, because in real life the requests would be distributed more randomly across a single-second time window.

Am I correct, or am I missing something from the big picture? I wonder how constantUsersPerSec works within a single-second window and whether there is a way to tune it.
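To sanity-check the burst hypothesis myself, here is a toy single-server queue (plain Java, nothing Gatling-specific; the 0.5 ms service time is an arbitrary assumption) comparing the 99th-percentile sojourn time when 1500 requests all arrive at t = 0 versus when they are spread uniformly over one second:

```java
import java.util.Arrays;
import java.util.Random;

public class ArrivalBurstDemo {
    // Single server, deterministic service time. Returns the 99th-percentile
    // sojourn time (queueing wait + service) in milliseconds.
    static double p99Sojourn(double[] arrivalsMs, double serviceMs) {
        double[] arrivals = arrivalsMs.clone();
        Arrays.sort(arrivals);
        double[] sojourn = new double[arrivals.length];
        double free = 0.0;                       // time the server becomes free
        for (int i = 0; i < arrivals.length; i++) {
            double start = Math.max(arrivals[i], free);
            free = start + serviceMs;
            sojourn[i] = free - arrivals[i];     // wait + service
        }
        Arrays.sort(sojourn);
        return sojourn[(int) (sojourn.length * 0.99)];
    }

    public static void main(String[] args) {
        int n = 1500;
        double serviceMs = 0.5;                  // hypothetical per-request cost
        double[] burst = new double[n];          // everyone arrives at t = 0
        double[] spread = new double[n];         // uniform over a 1000 ms window
        Random rnd = new Random(42);
        for (int i = 0; i < n; i++) spread[i] = rnd.nextDouble() * 1000.0;
        System.out.printf("burst  p99 = %.1f ms%n", p99Sojourn(burst, serviceMs));
        System.out.printf("spread p99 = %.1f ms%n", p99Sojourn(spread, serviceMs));
    }
}
```

Even though both cases push the same 1500 requests through the same server in roughly one second, the burst variant's p99 is dominated by queueing (hundreds of milliseconds) while the spread variant stays in the low milliseconds, which matches the intuition above.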
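On the tuning question: as far as I can tell from the Gatling docs, open injection steps accept a randomized() modifier that spreads user arrivals randomly within each interval instead of starting them at regular ticks, so something like this sketch (untested, reusing my scenario names) might smooth out the per-second bursts:

```java
// Sketch, not verified: same open-model injection, but with .randomized()
// so arrivals are distributed randomly within each second rather than
// launched together. shareConnections() on the protocol additionally lets
// virtual users draw from a shared connection pool.
setUp(
  reserveSeats.injectOpen(
    constantUsersPerSec(requestsPerSec).during(duringSec).randomized()
  )
).protocols(httpProtocol.shareConnections());
```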