Internal API changed in 2.0 for csv() - Need to know the PROPER way to do this...

Previously, I had constructed code that called csv(filename).random with a constructed filename, saved the returned object in a hash, and at run-time would look up the object, call .next() on it to get the next record, and merge that into the session. This was my solution to not being able to say:

.feed( csv( "${id}.csv" ).random )

(you probably remember that conversation on StackOverflow)

In 2.0, I get a run-time error saying that the object cannot be cast to Iterator. I dug into the sources and saw that csv() isn't complete until you call .build on it. So I modified my code to call .build before storing the object in the hash. That seemed to solve the problem, but now I'm coding against knowledge of the object's internals, which is generally a Bad Thing. So,

Is there a “right” way I should be solving this problem?

Which version are you using?
In the current snapshot, this definitely works properly.

I just tried it in a SNAPSHOT downloaded on Friday.

```scala
.feed( csv( "oracle/${cac}.csv" ).random )
```

```
Exception in thread "main" java.lang.ExceptionInInitializerError
	at com.cigna.icollaborate.LoadTest.<init>(LoadTest.scala:42)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
	at java.lang.Class.newInstance(Class.java:374)
	at io.gatling.core.runner.Runner.run(Runner.scala:37)
	at io.gatling.app.Gatling.start(Gatling.scala:235)
	at io.gatling.app.Gatling$.fromMap(Gatling.scala:54)
	at io.gatling.app.Gatling$.runGatling(Gatling.scala:79)
	at io.gatling.app.Gatling$.runGatling(Gatling.scala:58)
	at io.gatling.app.Gatling$.main(Gatling.scala:50)
	at io.gatling.app.Gatling.main(Gatling.scala)
Caused by: java.lang.IllegalArgumentException: Could not locate feeder file; file oracle/${cac}.csv doesn't exist
	at io.gatling.core.feeder.FeederSupport$class.feederBuilder(FeederSupport.scala:55)
	at io.gatling.core.Predef$.feederBuilder(Predef.scala:33)
	at io.gatling.core.feeder.FeederSupport$class.separatedValues(FeederSupport.scala:45)
	at io.gatling.core.Predef$.separatedValues(Predef.scala:33)
	at io.gatling.core.feeder.FeederSupport$class.separatedValues(FeederSupport.scala:42)
	at io.gatling.core.Predef$.separatedValues(Predef.scala:33)
	at io.gatling.core.feeder.FeederSupport$class.csv(FeederSupport.scala:33)
	at io.gatling.core.Predef$.csv(Predef.scala:33)
	at com.cigna.icollaborate.PatientSearch$.<init>(PatientSearch.scala:16)
	at com.cigna.icollaborate.PatientSearch$.<clinit>(PatientSearch.scala)
```

It is complaining at build time; it hasn't reached the point of executing the scenario yet, so it doesn't have a value for ${cac}.

As the doc states, a feeder is a SHARED datasource. How do you expect its definition to be resolved against something that’s contained inside a given user session (a session attribute resolved with Gatling EL)?

If you want per virtual user datasources, look at this: https://github.com/excilys/gatling/blob/master/src/sphinx/session/feeder.rst#non-shared-data

I misunderstood, I thought you meant that you made the one-liner work.

I did a test of the following code, with and without the call to .build. It works if I call .build, it fails if I do not. Is that what you would expect?

```scala
object Data {
  var feederCache: Map[String, Any] = Map()

  def feedByCAC(path: String): (Session => Validation[Session]) = {
    session => {
      val cac: String = session("cac").as[String]
      val which = path + "/" + cac
      if (!feederCache.contains(which))
        feederCache += (which -> csv(which + ".csv").random.build)
      val feeder = feederCache(which).asInstanceOf[Feeder[Any]]
      val data = feeder.next()
      session.setAll(data - "cac")
    }
  }

  def searchCriteria = feedByCAC("oracle")
}
```

I use the above like so:

```scala
.exec( Data.searchCriteria )
```

As long as I call it with .build, everything works. When I take that out, it doesn’t. I just want to know what the expected way of doing it is.

How many different feeder files do you have?
If you don't have tons of them, you'd be better off using a switch: https://github.com/excilys/gatling/blob/master/src/sphinx/general/scenario.rst#doswitch
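For reference, a doSwitch version might look roughly like this. This is only a sketch against the Gatling 2.0 DSL; the "ABC"/"DEF" keys are made-up placeholders for whatever cac values your files use:

```scala
// Branch on the "cac" session attribute; each branch feeds from its own file.
// Every possible cac value has to be enumerated here by hand.
doSwitch("${cac}")(
  "ABC" -> feed(csv("oracle/ABC.csv").random),
  "DEF" -> feed(csv("oracle/DEF.csv").random)
)
```

The obvious downside, as noted below, is that the branches are hard-coded, which doesn't fit a set of files generated from a database extract.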

Not only do I have a ton of them (100+), but they are generated by a database extract. If I point to a different endpoint, I could have a completely different set of files. The last thing I need is environment-specific code within my source. Which is why I went with this solution in the first place.

So that's basically it, except that you'd better use a thread-safe Map like ConcurrentHashMap.
You do indeed have to call build, because random doesn't produce a Feeder (which is just an alias for Iterator): other builder methods, such as convert, are still available at that point.
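To make the cache thread-safe without hand-rolled locking, Scala's own `scala.collection.concurrent.TrieMap` is a convenient alternative to ConcurrentHashMap. Below is a minimal, self-contained sketch of the caching pattern; `buildFeeder` is a hypothetical stand-in for `csv(...).random.build` (it is not Gatling API), since a built feeder is just an Iterator of records:

```scala
import scala.collection.concurrent.TrieMap

// Hypothetical stand-in for csv(which + ".csv").random.build:
// a built feeder is simply an endless Iterator of record maps.
def buildFeeder(key: String): Iterator[Map[String, Any]] =
  Iterator.continually(Map("cac" -> key, "value" -> scala.util.Random.nextInt(100)))

// Thread-safe lazy cache. Under contention getOrElseUpdate may evaluate the
// builder more than once, but only one result ends up in the map.
val feederCache = TrieMap.empty[String, Iterator[Map[String, Any]]]

def feederFor(key: String): Iterator[Map[String, Any]] =
  feederCache.getOrElseUpdate(key, buildFeeder(key))
```

Repeated lookups for the same key then return the same cached iterator, so each virtual user draws the next record from a shared per-key feeder, which is the behavior the original feedByCAC relies on.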