Understanding requests/sec and responses/sec graphs

I am trying to understand the requests/sec and responses/sec graphs. When I run at a lower load, the requests are evenly distributed, at just under 16/sec.

When the load is increased, the requests/sec become uneven and climb above 160/sec.

Is this a performance issue? Why am I seeing such numbers? The load I want to generate is 16 requests/sec.

Consider providing a minimal reproducer so the community can see what happened.

Blind guess: you did not use a pause, a pace, or anything else that adds a delay between requests.
If that is true, keep in mind that as soon as each virtual user is spawned, it sends a request immediately, and as soon as that request completes, it sends the next one. This loop is so fast that requests/sec shoots up.
Moreover, you are using a ramp-up / steady / ramp-down model, which means that at steady state there are around 4000+ concurrent users (as your image shows). All of those users sending requests within the same one-second window drives requests/sec up to the levels you are seeing.
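A back-of-the-envelope sketch of the effect described above, using Little's Law (throughput ≈ users / time per iteration). The response-time and user-count numbers are illustrative assumptions, not taken from your graphs:

```python
def throughput_rps(concurrent_users, response_time_s, pause_s=0.0):
    """Steady-state requests/sec when each user loops: request -> pause -> next request."""
    return concurrent_users / (response_time_s + pause_s)

# 16 users, no pause, ~1 s responses: roughly the 16/sec seen at low load.
print(throughput_rps(16, 1.0))    # 16.0

# 4000 users, no pause, ~1 s responses: throughput explodes far past the target.
print(throughput_rps(4000, 1.0))  # 4000.0
```

With no pause, throughput is limited only by response time, so it scales linearly with the number of concurrent users.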

This is not a performance issue. However, if 16 requests/sec is what you need but you still want to keep the same user load, consider adding a pause between requests (see here) or using throttle (see here).
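If you go the pause route, the same arithmetic tells you how long each user must wait between iterations to hold a target rate. A small sketch, again with an assumed 1 s response time and the ~4000-user steady state:

```python
def pause_for_target(target_rps, concurrent_users, response_time_s):
    """Per-user pause (seconds) between iterations so total load equals target_rps."""
    return concurrent_users / target_rps - response_time_s

# 4000 users targeting 16 req/s overall, assuming ~1 s responses:
print(pause_for_target(16, 4000, 1.0))  # 249.0 seconds per user between requests
```

Note how large the pause becomes: with thousands of users, each one must be nearly idle to keep the aggregate rate at 16/sec, which is why throttling (which caps the rate globally) is often the simpler option.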

I have applied pauses in my simulation. The uneven load distribution seems to be due to high latencies, which cause a cascading effect.

Can high response times be the reason for such an uneven distribution?