Hi Gatling community!
I’d like to share a practical resource I’ve been working on around Gatling best practices:
Gatling Best Practices — Claude Code Skill | Rodrigo Campos
This guide focuses on helping teams move from “just running tests” to building a scalable performance testing strategy using Gatling.
You can install the skill directly with:
```
npx skills add rcampos09/performance-testing-skills --skill gatling-best-practices
```
Some of the key ideas I cover:
- Treat performance tests as code (versioned, reviewed, reusable)
- Shift-left performance testing into CI/CD pipelines
- Design realistic user scenarios instead of synthetic-only flows
- Define clear SLAs (e.g. p95/p99 response-time percentiles, not averages)
- Continuously track performance trends across releases
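To make a few of these ideas concrete, here is a minimal sketch of what they can look like in Gatling's Scala DSL. The base URL, endpoints, and thresholds are illustrative placeholders, not part of the guide itself; the point is the shape: a scenario with realistic think time, and assertions on percentiles rather than averages so the build fails when the SLA is breached.

```scala
import scala.concurrent.duration._
import io.gatling.core.Predef._
import io.gatling.http.Predef._

class BrowseSimulation extends Simulation {

  // Hypothetical target system for illustration
  val httpProtocol = http
    .baseUrl("https://example.com")
    .acceptHeader("application/json")

  // Realistic user journey: browse, think, then view a product
  val scn = scenario("Browse then view product")
    .exec(http("home").get("/"))
    .pause(1.second, 3.seconds) // human think time, not back-to-back requests
    .exec(http("product").get("/products/42"))

  setUp(
    scn.inject(rampUsers(100).during(2.minutes))
  ).protocols(httpProtocol)
    .assertions(
      // SLA expressed as a percentile, not an average
      global.responseTime.percentile(95).lt(800),
      global.failedRequests.percent.lt(1.0)
    )
}
```

Because assertions fail the run with a non-zero exit code, checking a simulation like this into version control and wiring it into a CI/CD pipeline gives you the shift-left and trend-tracking practices almost for free.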
The goal is simple:
make performance testing a continuous engineering practice, not a last-minute activity.
I’d really appreciate your feedback, thoughts, or even disagreements:
- What practices have worked best for you with Gatling?
- What are the biggest challenges you’ve faced when scaling performance testing?
Happy to discuss and learn from your experiences!