Channel: Nginx Forum - How to...

Configuring Nginx with Microsoft Azure

Hi guys,

We’re currently having an issue with Nginx 1.6.1. The problem is the following:
we deploy our application on Windows Azure virtual machines and use Nginx’s upstream module.

We need to use the ip_hash directive so that each client is redirected to “its own” virtual machine (and application).

When a VM is up but its application is not, everything is fine with the “max_fails”, “fail_timeout”, and “proxy_read_timeout” directives: when the upstream contains at least two servers, connections for the dead one are forwarded to the live one.

But when a VM itself is down (and therefore its application too), connections for the dead VM seem to be forwarded to the live one only if the “max_fails” and “fail_timeout” directives are set.
We would like this situation to behave as if the server had been marked “down”, i.e. the dead server should be completely “forgotten” by the ip_hash upstream.
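For reference, here is a sketch of the behavior we want to emulate: the `down` parameter of the upstream `server` directive, which ip_hash honors by excluding that server from the hash distribution (this is a hand-written illustration, not our running config):

```nginx
upstream ATest {
    ip_hash;
    # "down" statically excludes this server from the ip_hash distribution;
    # we would like the same effect to happen automatically when the VM dies.
    server myapp.cloudapp.net:80 down;
    server myapp.cloudapp.net:81;
}
```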

Could anyone help us please?
Cheers,

CODE: in the http block:

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

upstream ATest {
    ip_hash;
    # The same application is installed on both servers.
    # Even though they share the same DNS name, these are two different machines:
    server myapp.cloudapp.net:80 max_fails=1 fail_timeout=5s;
    server myapp.cloudapp.net:81 max_fails=1 fail_timeout=5s;
}

# Inside a server block:
location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-NginX-Proxy true;

    # WebSocket support
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

    proxy_read_timeout 15s;

    if ($cookie_vm) {
        proxy_pass http://$cookie_vm;
        break;
    }

    # The login app runs on localhost:
    proxy_pass http://127.0.0.1:80;
}
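One detail that may matter here (a sketch, untested on Azure): when only the application is down, connection attempts are refused immediately and failures are counted quickly, but when the whole VM is unreachable, each attempt may hang until `proxy_connect_timeout` expires (60s by default). Tightening the connect timeout and making the retry conditions explicit in the location block might get the dead VM marked as failed much sooner:

```nginx
# Fail fast when the VM itself is unreachable:
proxy_connect_timeout 3s;          # give up on the TCP connect quickly
proxy_next_upstream error timeout; # retry the other peer on connect errors/timeouts
proxy_read_timeout 15s;
```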
