How can I maximize throughput and minimize CPU usage on my Gatling instance?

So the biggest question I have is how to maximize throughput on a Gatling instance while requiring feeders for a large quantity of rows of simple data (100-200 chars).
I realize that's a big question though, so let me narrow it down.

Since CPU is usually the bottleneck with HTTP load generators, that leads to the smaller question: how can I minimize the amount of CPU my Gatling load generator uses?

I found one thread similar to this that went straight to JVM tuning, but I want to exhaust the most efficient Gatling APIs first before going down to that level.

The first place I'm looking is feeders, since they can involve either file I/O or on-the-fly generation of random values.
I'm guessing that pre-generating my data into a CSV, loading it into memory, and possibly using batch mode is the least CPU-intensive approach.
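For context, here is a minimal sketch of what I mean, using Gatling's Scala DSL with a pre-generated CSV feeder in batch mode. The file name `data.csv`, the column name `payload`, the endpoint, and the injection rate are all placeholders, not my real setup:

```scala
import scala.concurrent.duration._

import io.gatling.core.Predef._
import io.gatling.http.Predef._

class FeederThroughputSimulation extends Simulation {

  // Assumes a pre-generated "data.csv" with a header row, e.g. a single "payload" column.
  // .batch streams the file in chunks instead of loading every record up front;
  // .circular reuses records so the feeder never runs dry during a long run.
  val feeder = csv("data.csv").batch.circular

  val httpProtocol = http.baseUrl("http://localhost:8080") // hypothetical target

  val scn = scenario("feeder-throughput")
    .feed(feeder)
    .exec(
      http("post-payload")
        .post("/ingest") // hypothetical endpoint
        .body(StringBody("#{payload}"))
    )

  setUp(scn.inject(constantUsersPerSec(100).during(1.minute)))
    .protocols(httpProtocol)
}
```

(My assumption is that `.batch` trades a bit of memory headroom for less per-request parsing work than re-reading or randomly generating values per virtual user, but I haven't measured it.)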

To sum up the chain of questions:

  1. How to maximize throughput on a Gatling instance while requiring feeders for a large quantity of rows of simple data (100-200 chars).

  2. How can I minimize the amount of CPU my Gatling load-gen uses?

    1. Is the difference in CPU overhead significant between the different feeder types (JSON, CSV, etc.)? If so, which is fastest?

Any info on any of these questions would be much appreciated.

Thanks,