Out of memory issue - heap memory

Gatling version: 3.9.0.4
Gatling flavor: java
Gatling build tool: gradle

When we run a test that includes large payloads (a large payload is 1.06 MB), we get an OOM heap memory error. After running profiling, I noticed that objects stay in memory, and that causes the OOM.

    public static final class StartedData implements ThrottlerActorData, Product, Serializable {
        private final Throttles throttles;
        private final ArrayBuffer<ThrottledRequest> buffer;
        private final long tickNanos;
        private int count;
        private final double requestStep;

This led me to this code, and I noticed that all requests are stored in the ArrayBuffer buffer. Is there a reason for that, and is there anything we can do about it?
I also investigated GC, and I can see that the Old Generation space fills very quickly; it looks like GC has no chance to clear it.
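For reference, the Old Generation behavior can be confirmed with the JDK's unified GC logging (JDK 9+; these are standard JVM flags, not Gatling-specific, and the jar name here is only a placeholder):

```shell
# Log GC events with timestamps to gc.log for offline analysis
java -Xlog:gc*:file=gc.log:time,uptime -Xmx12G -jar my-gatling-runner.jar
```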
Please let me know if you need any more information. Thanks

Have you read throttle’s documentation?

Thanks for the link. I read it, and I think this is the most important part for us:

  • Beware that all excess traffic gets pushed into an unbounded queue, possibly resulting in an OutOfMemoryError if your natural throughput is way higher than the throttled one.
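The failure mode in that warning can be illustrated with a toy model in plain Java (this is not Gatling's actual throttler, just an illustration of the arithmetic): when requests arrive faster than the throttle drains them, an unbounded queue grows linearly and is never reclaimed.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class ThrottleBacklog {

    /** Simulates an unbounded throttle queue and returns the backlog size after the given duration. */
    static int backlogAfter(int arrivalsPerSec, int drainedPerSec, int seconds) {
        Deque<String> queue = new ArrayDeque<>(); // unbounded, like the throttler's buffer
        for (int s = 0; s < seconds; s++) {
            for (int i = 0; i < arrivalsPerSec; i++) queue.addLast("req");                    // injected requests
            for (int i = 0; i < drainedPerSec && !queue.isEmpty(); i++) queue.removeFirst();  // throttle lets these through
        }
        return queue.size();
    }

    public static void main(String[] args) {
        // Injecting 150 req/s against a 100 rps throttle for 60 s leaves 3000 queued requests
        System.out.println(backlogAfter(150, 100, 60)); // prints 3000
    }
}
```

The backlog grows by (arrivals − drained) every second, which is why the heap pressure scales with both the rate mismatch and the test duration.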

We try to mitigate that issue by introducing a large payload probability, where we can specify the percentage of large payloads. We managed to run successfully with LP at 1%, but with anything higher we see OOM. We need to prove that we can support 5% LP.
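For context, this is roughly how such a probability gate can be sketched in plain Java (class and method names here are hypothetical, not the actual project code): each request independently draws a random number and picks the large payload when it falls below the configured probability.

```java
import java.util.Random;

public class PayloadPicker {
    private final Random random;
    private final double largePayloadProbability; // e.g. 0.05 for 5% LP

    public PayloadPicker(double largePayloadProbability, long seed) {
        this.largePayloadProbability = largePayloadProbability;
        this.random = new Random(seed);
    }

    /** Returns true when this request should use the ~1 MB payload. */
    public boolean useLargePayload() {
        return random.nextDouble() < largePayloadProbability;
    }

    public static void main(String[] args) {
        PayloadPicker picker = new PayloadPicker(0.05, 42L);
        int large = 0, total = 100_000;
        for (int i = 0; i < total; i++) {
            if (picker.useLargePayload()) large++;
        }
        // Over many draws the observed rate converges toward 5%
        System.out.println(large + " large payloads out of " + total);
    }
}
```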
So what I'm getting from the docs you kindly provided is that the only way to achieve that is to lower the throughput, the duration, or both. The other option, I suppose, is just to give it loads of memory, which we already do; our setup is: -Xmx12G -Xms1G -Xmn6G
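For the record, with the Gatling Gradle plugin those JVM options can be kept in the build script rather than a global environment (a sketch assuming the io.gatling.gradle plugin; check the plugin docs for your version):

```groovy
// build.gradle — pass heap settings to the forked Gatling JVM
gatling {
    jvmArgs = ['-Xms1G', '-Xmx12G', '-Xmn6G']
}
```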
We are using:

    ScenarioBuilder sendJsonPayloads = new SendJsonPayload(username, password, url, testId).get(parameters.getRecordsPerRequest(), parameters.getLargePayloadProbability());
    ScenarioBuilder sendNdjsonPayloads = new SendNdjsonPayload(username, password, url, testId).get(parameters.getRecordsPerRequest(), parameters.getLargePayloadProbability());
    HttpProtocolBuilder httpProtocol = new Protocol().getBuilder(url).shareConnections();

    {
        setUp(
                sendJsonPayloads.injectOpen(constantUsersPerSec(baseRequestsPerSec).during(totalDuration)),
                sendNdjsonPayloads.injectOpen(constantUsersPerSec(baseRequestsPerSec).during(totalDuration))

        ).throttle(buildSteps()).protocols(httpProtocol);
    }

Is there anything else we can do?

Don’t use throttle. Properly shape your injection profile to generate the expected rps.


I do not fully understand why these objects are kept for so long. I would like to run a scenario and, once it is finished, stop holding on to those objects. Is this a case of using injectOpen with so many concurrent requests, and is that what fills the memory so quickly? I guess using a closed injection model would resolve this issue, but it would not be a correct test in our case. Please let me know if my thinking is correct.
This is an example of our scenario:

public class SendNdjsonPayload extends BaseScenario {

    public static final String SCENARIO_NAME_BASE = "send-ndjson-payloads";
    private final List<Integer> EXPECTED_HTTP_STATUSES = Arrays.asList(200, 204);

    public SendNdjsonPayload(String username, String password, String url, UUID testId) {
        super(username, password, url, testId);
    }

    public ScenarioBuilder get(int objectsPerRequest, double largePayloadProbability) throws JsonProcessingException {

        return scenario(SCENARIO_NAME_BASE)
                .group(SCENARIO_NAME_BASE + "-" + objectsPerRequest + "-objects-per-request").on(
                        exec(session -> {
                            try {
                                return session.set("payload", requests.generateNdjson(objectsPerRequest, largePayloadProbability));
                            } catch (JsonProcessingException e) {
                                throw new RuntimeException(e);
                            }
                        })
                                .exec(
                                        requests.postNdjsonPayload()
                                                .check(status().in(EXPECTED_HTTP_STATUSES)).transformResponse((response, session) -> {

                                                    if (EXPECTED_HTTP_STATUSES.contains(response.status().code())) {
                                                        LoadTest.addRecordCount(objectsPerRequest);
                                                    }
                                                    return response;
                                                })
                                )
                                .exec(session -> {
                                    // Explicitly remove the payload from the session after it's used
                                    return session.remove("payload");
                                })
                );
    }
}

IMO, throttle is a super bad idea. It was introduced in Gatling because some users kept requesting a port of the Ultimate Thread Group from the JMeter plugins project.
As explained in the documentation, it pushes all the extra traffic into a queue. As a result, if you're injecting users faster than your throttle allows, you're going to quickly fill the heap.

I’m more and more considering deprecating throttle for removal. The expected result can be achieved by properly designing the injection profile.

Thanks for help. I will try it out.

Thanks, I have changed the code so it is no longer using throttle:

 {
        setUp(
                sendJsonPayloads.injectOpen(
                        rampUsersPerSec(1).to(baseRequestsPerSec).during(rampTime),
                        constantUsersPerSec(baseRequestsPerSec).during(baseHoldTime),
                        rampUsersPerSec(baseRequestsPerSec).to(spikeRequestsPerSec).during(rampTime),
                        constantUsersPerSec(spikeRequestsPerSec).during(spikeHoldTime),
                        rampUsersPerSec(spikeRequestsPerSec).to(baseRequestsPerSec).during(rampTime),
                        constantUsersPerSec(baseRequestsPerSec).during(baseHoldTime / 2)
                ),
                sendNdjsonPayloads.injectOpen(
                        rampUsersPerSec(1).to(baseRequestsPerSec).during(rampTime),
                        constantUsersPerSec(baseRequestsPerSec).during(baseHoldTime),
                        rampUsersPerSec(baseRequestsPerSec).to(spikeRequestsPerSec).during(rampTime),
                        constantUsersPerSec(spikeRequestsPerSec).during(spikeHoldTime),
                        rampUsersPerSec(spikeRequestsPerSec).to(baseRequestsPerSec).during(rampTime),
                        constantUsersPerSec(baseRequestsPerSec).during(baseHoldTime / 2)
                )

        ).protocols(httpProtocol);
    }

However, I still get OOM, but this time it is the Request class (io.gatling.http.client.Request), which I believe is then used by HttpClient. I can see these objects are again being held in memory. Any idea why? Thanks

I guess your machine is not powerful enough to process the IO you’re generating.

Possibly. I will try on a bigger one. Cheers.

Hi there,

Another option would be to give Gatling Enterprise a try. With the Enterprise version you can easily scale up and add additional machines working together to achieve the load you’re trying to create. You can give it a try for free at cloud.gatling.io or contact us to set up a demo.

All the best,
Pete

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.