I am trying to understand the requests/sec and responses/sec graphs. When I ran at a lower load, I could see that requests were evenly distributed, at just under 16/sec.
Consider providing a minimal reproducer so the community can see what happened.
Blind guess: you did not use any pause, pace, or other mechanism that adds a delay between requests.
If that is true, keep in mind that as soon as a virtual user is spawned, it immediately sends a request, and as soon as that request completes, it immediately sends the next one. This loop runs so fast that requests per second climb quickly.
Moreover, you are using a ramp-up / steady / ramp-down model, which means that at steady state there are around 4000+ concurrent users (as your image shows). With no delay between requests, these users all send requests within the same one-second window, which leads to the requests-per-second increase you are seeing.
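As a back-of-the-envelope check (a sketch with assumed numbers, not taken from your test): with no pauses, steady-state throughput is roughly concurrent users divided by the time each user spends per request, so thousands of users easily produce thousands of requests/sec.

```python
def expected_rps(concurrent_users, response_time_s, pause_s=0.0):
    """Approximate steady-state requests/sec: each user completes one
    request every (response_time + pause) seconds, so the whole
    population completes users / (response_time + pause) per second."""
    return concurrent_users / (response_time_s + pause_s)

# Assumed: 4000 users, 250 ms mean response time, no pause.
# Each user loops ~4 times per second -> ~16000 req/s overall.
print(expected_rps(4000, 0.25))  # 16000.0
```

The same formula shows why a pause helps: making the per-user cycle longer directly divides the overall rate.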
This is not a performance issue. However, if 16 requests/sec is what you need while keeping the same user load, consider adding a pause between requests (see here), or using a throttle (see here).
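To illustrate the difference between the two options (a generic Python sketch, not Gatling's actual API): a pause adds think time inside each user's loop, while a throttle caps the measured rate globally, regardless of how many users are running.

```python
def paced_rps(users, response_time_s, pause_s):
    """Rate when each virtual user loops:
    send request -> wait response_time -> sleep pause -> repeat."""
    return users / (response_time_s + pause_s)

def throttled_rps(offered_rps, cap_rps):
    """A throttle holds back excess load so the measured rate never
    exceeds the cap, independent of the number of users."""
    return min(offered_rps, cap_rps)

# Assumed numbers: 4000 users, 250 ms responses.
offered = paced_rps(4000, 0.25, pause_s=0.0)
print(offered)                     # 16000.0 without any pause
print(throttled_rps(offered, 16))  # 16 once a 16 req/s throttle applies
```

In practice a throttle is the simpler way to pin the rate at an exact target like 16/sec, while pauses model realistic user think time.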