I'm curious about the viability of running Gatling effectively in virtualized environments like EC2. Off the top of my head, the lack of guarantees typically associated with virtualization introduces too many extraneous variables into the equation (e.g., memory pressure, I/O congestion, CPU starvation), which in turn yields uncertainty about the measurements. In addition, there's the impact (again nondeterministic) of the network between the node where Gatling runs and the node the simulation connects to. In light of these considerations, my hypothesis is that virtualization has a negative impact on the accuracy and consistency of performance measurements. If anybody has tried it, perhaps they could share their experience and conclusions?
Obviously, it depends on the virtualization platform. But generally speaking, the Gatling system should not be the bottleneck in your test. As long as it is not, any variation in the environment caused by virtualizing your Gatling system should be negligible compared to the time the system under test takes to service your requests.
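One way to sanity-check that claim on a virtualized injector is to watch for hypervisor CPU steal during a run, since steal time is a direct signal of the CPU starvation mentioned above. As a minimal sketch (my own illustration, not part of Gatling; assumes a Linux guest where `/proc/stat` is available):

```python
# Sketch: estimate hypervisor CPU "steal" on the load-injector box.
# High %steal during a run means the VM was starved of CPU and the
# Gatling side may itself have been a bottleneck.
import time

def steal_percent(before, after):
    """%steal between two /proc/stat aggregate-cpu snapshots.

    Each snapshot is the first 8 counters of the 'cpu' line:
    user nice system idle iowait irq softirq steal.
    """
    deltas = [b - a for a, b in zip(before, after)]
    total = sum(deltas)
    return 100.0 * deltas[7] / total if total else 0.0  # 8th field is steal

def read_cpu_counters(path="/proc/stat"):
    """Read the aggregate 'cpu' line as a tuple of ints."""
    with open(path) as f:
        for line in f:
            if line.startswith("cpu "):
                return tuple(int(x) for x in line.split()[1:9])
    raise RuntimeError("no aggregate cpu line found")

if __name__ == "__main__":
    a = read_cpu_counters()
    time.sleep(1)
    b = read_cpu_counters()
    print(f"steal over last second: {steal_percent(a, b):.1f}%")
```

Sampling this alongside a run (or just using `vmstat`/`sar`, which report the same column as `st`) tells you whether any odd numbers came from the injector being starved rather than from the system under test.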
If you are virtualizing the system under test, that is another matter. If the test system is virtualized but the production system is not, then all your test can tell you is that the results represent a minimum performance baseline, and that you expect production performance to be better than that, by some unknown margin.
If both the test system and the production system are virtual, then you have evidently accepted some minor performance penalty already, in which case you can treat the test results as representative. The exception is if the hosting company does not give your test environment an SLA equivalent to the production environment's; that is something to verify up front.