I'm using NGINX as an L7 SSL passthrough in front of a couple of groups of upstream servers (also running NGINX).
At the bottom of this message is the nginx configuration for this global LB; it uses the stream module and the ssl_preread directive to route connections to the different instances.
This works (somewhat), but I believe I'm running into a browser caching problem. If I visit webpage.mydomain.com and then rancher.mydomain.com, I get a "default backend - 404" message when hitting rancher.mydomain.com in my browser (Chrome). If I clear the browser cache I can then reach rancher.mydomain.com, but when I then try to go to webpage.mydomain.com I get the same "default backend - 404" message.
It seems like the browser may be caching/reusing connections to these domain names, since they both resolve to the same IP for this global LB. If I add proxy_timeout 3s, both sites work one after the other, but that times out connections and causes some other unwanted behavior.
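For reference, this is roughly what the stream server block looks like with that workaround in place (only this block changes; the full config is at the bottom):

    server {
        listen 443;
        proxy_pass $upstream;
        ssl_preread on;
        # Workaround mentioned above: closes connections after 3s of
        # inactivity, which forces the browser to open a new connection
        # (and send SNI again), but it also drops any connection that
        # sits idle for more than 3 seconds.
        proxy_timeout 3s;
    }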
So is there something I can do to fix this? Is this a known limitation with browsers caching connections/SNI, or something along those lines?
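For what it's worth, one way to take the browser out of the picture is to hit each hostname on a fresh TLS connection from the command line, for example (LB_IP is a placeholder for the load balancer's public IP; each invocation is a brand-new connection, so the SNI passed via -servername is exactly what the map sees):

    openssl s_client -connect LB_IP:443 -servername webpage.mydomain.com </dev/null 2>/dev/null | openssl x509 -noout -subject
    openssl s_client -connect LB_IP:443 -servername rancher.mydomain.com </dev/null 2>/dev/null | openssl x509 -noout -subject

If both of those come back with the certificate from the expected upstream while the browser still flip-flops, that would point at connection reuse in the browser rather than at the LB config itself.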
NGINX Configuration for my Global Load Balancer:
load_module /usr/lib64/nginx/modules/ngx_stream_module.so;
worker_processes 4;
worker_rlimit_nofile 40000;
events {
    worker_connections 8192;
}

http {
    server {
        listen 80;
        return 301 https://$host$request_uri;
    }
}

stream {
    log_format combined '$remote_addr - [$time_local] $protocol Status:$status Bytes[send:$bytes_sent receive:$bytes_received] $session_time ForwardToUpstream:[$ssl_preread_server_name - $upstream_addr]';

    access_log /var/log/nginx/stream-access.log combined;
    error_log /var/log/nginx/stream-error.log;

    upstream rancher {
        least_conn;
        server 10.0.0.37:443 max_fails=3 fail_timeout=5s;
    }

    upstream webpage {
        server 10.0.0.11:443;
    }

    map $ssl_preread_server_name $upstream {
        webpage.mydomain.com webpage;
        rancher.mydomain.com rancher;
        default rancher;
    }

    server {
        listen 443;
        proxy_pass $upstream;
        ssl_preread on;
    }
}