I have been helping build a performance-testing framework with Gatling, and we are wondering whether there is a scalable way to maintain CSV data files for a large number of simulations.
What we want to achieve:
- A single data file might be used by multiple simulations
- A single CSV file can contain multiple columns, some of which are used by one simulation and others by different simulations
- Be able to have a default data file for each simulation
- Be able to override the default data file when required and pass a new data file, e.g. from the command line
- In some cases, if I am running 10 simulations, I want to be able to override the input data file for just one of them
Given the above requirements, and drawing on my JMeter experience, I am thinking of the following approach:
- Hardcode a default input data file for each simulation through its feeder
- Read environment variables in the framework and use them to override the default data file for the respective simulation. But this would mean having a lot of environment variables in the framework, i.e. one env variable per simulation, so that we can override the input file for just the simulation we want:

String inputDataFileForSimulation = System.getenv(envVarKeyForSimulation);
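To make approach point 2 concrete, here is a minimal sketch of what I have in mind for one simulation (the class, the data file path, and the `CHECKOUT_DATA_FILE` variable are made-up examples):

```java
import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

import io.gatling.javaapi.core.*;
import io.gatling.javaapi.http.*;

public class CheckoutSimulation extends Simulation {

    // Approach point 1: hardcoded default data file for this simulation
    private static final String DEFAULT_DATA_FILE = "data/checkout.csv";

    // Approach point 2: one env var dedicated to this simulation overrides
    // the default; when the variable is unset, the hardcoded file is used
    private static String resolveDataFile() {
        String override = System.getenv("CHECKOUT_DATA_FILE");
        return override != null ? override : DEFAULT_DATA_FILE;
    }

    // Assumes the CSV has a header row with a userId column
    FeederBuilder<String> feeder = csv(resolveDataFile()).circular();

    HttpProtocolBuilder httpProtocol = http.baseUrl("https://example.org");

    ScenarioBuilder scn = scenario("Checkout")
            .feed(feeder)
            .exec(http("checkout").get("/checkout?user=#{userId}"));

    {
        setUp(scn.injectOpen(atOnceUsers(1))).protocols(httpProtocol);
    }
}
```

Overriding just this one simulation out of 10 would then be something like `CHECKOUT_DATA_FILE=data/checkout_big.csv mvn gatling:test`.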
Limitations and questions:
- Is hardcoding CSV files for the respective simulations a good idea?
- As the number of simulations grows, do the enterprise frameworks you build for your organisations really maintain this many CSV files, or do you use some other approach?
- If I want to override a default file as stated in approach point 2, is it a good idea to use one environment variable per simulation, or is there a better way? With this approach, won't we end up creating too many environment variables as the number of simulations grows? (One variation I have been toying with is deriving the key from the simulation class name; see the sketch after this list.)
- Any other input from folks who have built large Gatling frameworks and managed large numbers of input files? Can you suggest a better, more scalable way of doing this?
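Regarding the last two questions, this is the variation I have been toying with: a single naming convention instead of one hand-maintained variable per simulation. The helper name and key format are made up for illustration:

```java
import io.gatling.javaapi.core.Simulation;

// Hypothetical helper: derives the override key from the simulation class
// name, so no per-simulation env var has to be declared in the framework.
public final class DataFiles {

    private DataFiles() {}

    public static String resolve(Class<? extends Simulation> sim, String defaultPath) {
        // e.g. CheckoutSimulation -> CHECKOUTSIMULATION_DATA_FILE
        String key = sim.getSimpleName().toUpperCase() + "_DATA_FILE";
        String override = System.getenv(key);
        return override != null ? override : defaultPath;
    }
}
```

Each simulation would then build its feeder with `csv(DataFiles.resolve(CheckoutSimulation.class, "data/checkout.csv"))`, so overriding 1 of 10 simulations is still a single env var at launch time, without any per-simulation lookup code. Is this the kind of pattern people actually use at scale?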