I agree in the sense that traditional load balancing expects multiple backends.
What I am trying to ascertain is whether I can utilise load balancing in Nginx as a way to load balance requests to a single backend. It's a hack depending on how you look at it, but it's the same principle.
Consider PostgreSQL: whilst Nginx will handle 10,000 requests in front of it, it's optimal to process only 5, 10 or 20 queries at a time on PostgreSQL [cite:https://wiki.postgresql.org/wiki/Number_Of_Database_Connections], as opposed to 500 at a time.
The same principle applies to my legacy backend. It's essentially a database, and there is no point running more threads than there are CPU resources. I'm not sure what the exact optimum is, but the number of cores plus a couple more tends to be the sweet spot.
So say we had an 8-core CPU. If I fired up 10 to 12 processes on the backend, each with its own port, could I use Nginx to load balance HTTP requests to those ports, as if they were multiple backends?
Bottom line, as per the original question: can Nginx be configured to reverse proxy to many different ports, but be forced to use each of those ports with a concurrency of one?
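For what it's worth, a minimal sketch of what I have in mind (port numbers and upstream name are hypothetical). The `max_conns` parameter on each `server` line, available in open-source Nginx since 1.11.5, caps concurrent connections per upstream server, so setting it to 1 would give the single-concurrency behaviour per port:

```nginx
# Hypothetical setup: backend processes listening on ports 9001-9004,
# each limited to one in-flight request at a time via max_conns=1.
upstream legacy_backend {
    server 127.0.0.1:9001 max_conns=1;
    server 127.0.0.1:9002 max_conns=1;
    server 127.0.0.1:9003 max_conns=1;
    server 127.0.0.1:9004 max_conns=1;
    # least_conn picks the server with the fewest active connections,
    # which fits this "one request per process" model.
    least_conn;
}

server {
    listen 80;

    location / {
        proxy_pass http://legacy_backend;
    }
}
```

One caveat I'm aware of: when every upstream server is at its `max_conns` limit, open-source Nginx returns an error for the excess request rather than queueing it (the `queue` directive that holds requests until a server frees up is NGINX Plus only), so something in front would still need to absorb bursts.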