This is intended: you're generating too many metrics and risk saturating the metrics engine.
Could you please elaborate on your use case and how you end up load testing more than 1,000 servers in the same test?
We are doing multi-region testing of GET and PUT operations. Our use case is to get and put objects in S3 across different AWS regions. The issue is that PUT and GET requests going to S3 through signed URLs periodically resolve to new DNS endpoint addresses under load, and Gatling has a limitation that it can't support more than 1,000 endpoints in the same test. That's why the run crashes after some time, once this limit of 1,000 is reached.
@slandelle - Is there any way to disable remote monitoring for a specific URL?
Actually, yes.
You can set the gatling.enterprise.groupedDomains System property (not an env var) on your Simulation and pass a comma-separated list of domain suffixes.
The DNS resolution and connection stats for all the remotes whose domain ends with one of the defined suffixes will be grouped together.
This way, you won't blow past the 1,000 limit.
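In a Java-based Simulation, setting this property could look like the sketch below. The class name is illustrative, and the suffix values are assumptions based on the endpoints discussed in this thread; the property can equally be passed on the JVM command line as -Dgatling.enterprise.groupedDomains=...:

```java
// Sketch: set the property before Gatling reads it, e.g. in a static
// initializer of your Simulation class. The suffixes below are examples,
// not values from any real configuration.
public class GroupedDomainsExample {
    static {
        System.setProperty(
                "gatling.enterprise.groupedDomains",
                "s3.us-west-2.amazonaws.com,execute-api.us-west-2.amazonaws.com");
    }

    public static void main(String[] args) {
        // Confirm the property is visible to the running JVM.
        System.out.println(System.getProperty("gatling.enterprise.groupedDomains"));
    }
}
```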
Is this the right way to add the properties, as mentioned in the screenshot? I have tried adding the property, but there are still multiple DNS entries showing up in the Gatling run.
No. These are DOMAIN suffixes. Ports are not part of a domain.
Here, you could use execute-api.us-west-2.amazonaws.com and ic-storageservices-files-po-perf-data-bucket.s3.us-west-2.amazonaws.com. Or just us-west-2.amazonaws.com.
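For what it's worth, the grouping described above is a plain domain-suffix match, so every per-bucket S3 hostname in the region falls under the regional suffix. A minimal sketch of that idea (isGrouped is an illustrative helper, not Gatling's actual API):

```java
import java.util.List;

public class SuffixGrouping {
    // Illustrative helper, not part of Gatling: a remote host is grouped
    // when its domain name ends with one of the configured suffixes.
    // Note the match is on the domain only; ports are not part of a domain.
    static boolean isGrouped(String host, List<String> suffixes) {
        return suffixes.stream().anyMatch(host::endsWith);
    }

    public static void main(String[] args) {
        List<String> suffixes = List.of("us-west-2.amazonaws.com");
        // A per-bucket hostname in us-west-2 matches the regional suffix:
        System.out.println(isGrouped(
                "ic-storageservices-files-po-perf-data-bucket.s3.us-west-2.amazonaws.com",
                suffixes)); // true
        // A hostname in another region does not:
        System.out.println(isGrouped("example.eu-west-1.amazonaws.com", suffixes)); // false
    }
}
```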
I tried all your suggestions, but it's still not working.
What’s your version of Gatling Enterprise?
If you're using an old Gatling Enterprise version, the System property used to be gatling.frontline.groupedDomains.