Re: How to Test ngx_http_limit_req_module
↧
Re: Log VLC access requests in nginx
itpp2012 Wrote:
-------------------------------------------------------
> Interesting, we're doing the same thing with enigma2 boxes but we use
> xbmc (called kodi now), anyway are you sure vlc is not accessing the
> box directly bypassing nginx?
I was wondering how you are streaming to nginx. Can you do this directly from VLC?
↧
possible to restream http stream
Is it possible to restream an HTTP stream with nginx and turn it into an RTMP stream?
Also, will it affect the source stream quality, and does it use much CPU?
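For reference: stock nginx cannot convert HTTP to RTMP on its own; the usual approach is the third-party nginx-rtmp-module plus an external tool such as ffmpeg to repackage the stream. A rough sketch under those assumptions (the source URL and stream names are made up for illustration):

```nginx
# Requires nginx built with the third-party nginx-rtmp-module.
rtmp {
    server {
        listen 1935;

        application live {
            live on;
            # Pull the HTTP source and repackage it as RTMP via ffmpeg.
            # "-c copy" remuxes without re-encoding, so quality is untouched
            # and CPU use stays low; it only works if the source is already
            # in RTMP-compatible codecs (e.g. H.264/AAC).
            exec_static ffmpeg -i http://source.example.com/stream.ts
                        -c copy -f flv rtmp://127.0.0.1/live/mystream;
        }
    }
}
```

If the source codecs are not FLV-compatible, ffmpeg would have to transcode instead of copy, and that is where quality loss and significant CPU use come in.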
↧
SOAP error with PHP
I have set up NGINX as a load balancer for our four IIS web servers. All four servers use the same code. However, one of the web servers will return an error when it is in the load-balancer array.
SoapFault exception: [WSDL] SOAP-ERROR: Parsing Schema: element '<ellement>' already defined in <test php file>
To test this we created a small PHP script (see below), and we get this error at random.
What has been tested:
When in array with other 3 servers: sometimes an error
When in array with just 2 other servers: Always an error
Direct connection to the server (hosts file): no error
Load balancer array only this one server: no errors
Load balancer array with all but this server: no errors
This is our nginx.conf:
user nginx;
worker_processes 4;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    upstream backend {
        server 192.168.100.87  max_fails=3 fail_timeout=30s;
        server 192.168.100.187 max_fails=3 fail_timeout=30s;
        server 192.168.100.197 max_fails=3 fail_timeout=30s;
        # server 192.168.100.198 max_fails=3 fail_timeout=30s;
    }
    server {
        listen 80;
        listen 443 ssl;
        server_name <domain>.com;
        ssl_certificate     /etc/ssl/private/<domain>_com.pem;
        ssl_certificate_key /etc/ssl/private/<domain>_com.key;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers 'ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4';
        ssl_prefer_server_ciphers on;
        ssl_dhparam /etc/ssl/certs/dhparam.pem;
        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://backend;
        }
    }
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for" $request_time';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 75;
    client_header_timeout 3000;
    client_body_timeout 3000;
    proxy_read_timeout 6000;
    #gzip on;
    include /etc/nginx/conf.d/*.conf;
}
Test.php
<?php
for ($x = 0; $x <= 10; $x++) {
    echo "The number is: $x <br>";
    try {
        $wsdl = 'http://domain.com/Corporate/Product.svc?wsdl';
        $client = new SoapClient($wsdl, array("trace" => 1));
    } catch (Exception $e) {
        echo $e;
        echo "<br>";
    }
}
?>
Can anybody help me out here? I am at a loss.
Thanks for reading.
↧
server section applying where it is not expected to
Hi.
I have set up a server section for a mailman list using
server {
    listen xx.xx.xx.xx:80;
    server_name lists.example.org;
    location = / { rewrite ^ /mailman/listinfo permanent; }
    location /   { rewrite ^ /mailman$uri; }
}
[...]
}
as I found examples on the web, e.g. https://mywushublog.com/2012/05/mailman-with-nginx-on-freebsd/.
Now, using some other subdomain like http://test.example.org/, the URL gets rewritten to /mailman/listinfo too, and so I suspect that this lists.example.org server section also applies to test.example.org -- which I didn't expect. Of course, the IP address is the same for all these domains and subdomains.
Has one of you guys and gals seen such a behaviour before? Does that ring some kind of bell?
Thanks for your time,
w6g
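For anyone hitting the same symptom: when no server block's server_name matches the Host header, nginx routes the request to the default server for that listen address, which is simply the first block defined unless one is marked explicitly. A sketch of an explicit catch-all, assuming no other default is already configured on that address:

```nginx
# Catch-all for Host headers that match no other server_name.
# Without something like this, the first server block on the
# address (e.g. the mailman one) silently becomes the default.
server {
    listen xx.xx.xx.xx:80 default_server;
    server_name _;
    return 444;   # close the connection without sending a response
}
```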
↧
Re: server section applying where it is not expected to
Tried to capture the HTTP requests while using Opera. The main parts are -- slightly anonymized --
GET / HTTP/1.15..
Host: test.example.org
followed by
GET /mailman/listinfo HTTP/1.1
Host: test.example.org
which in turn serves the mailman listinfo page. :-/
Full output from
# tcpdump -n -S -s 0 -A 'tcp dst port 80' | grep -B3 -A10 "GET /"
~R.P.A-....GET / HTTP/1.15..
Host: test.example.org
Connection: keep-alive
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.111 Safari/537.36 OPR/34.0.2036.50
Accept-Encoding: gzip, deflate, lzma, sdch
Accept-Language: de-DE,de;q=0.8,en-US;q=0.6,en;q=0.4
17:47:47.813655 IP 109.91.37.211.43175 > xx.xx.xx.xx.80: Flags [P.], seq 2939537777:2939538219, ack 226382906, win 16575, length 442
~T:P.@..c..GET /mailman/listinfo HTTP/1.1
Host: test.example.org
Connection: keep-alive
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.111 Safari/537.36 OPR/34.0.2036.50
Accept-Encoding: gzip, deflate, lzma, sdch
Accept-Language: de-DE,de;q=0.8,en-US;q=0.6,en;q=0.4
↧
Proxy protocol wrapped inside of SSL packet
Hopefully someone can shed some light on this for me. I have been trying to get it working all morning, and am finally throwing in the towel for now.
So the situation is, we are using AWS ELB with SSL. There is SSL termination on the load balancer, but we also forward the traffic downstream via SSL. We have proxy protocol enabled on the ELB, so after the ELB terminates the SSL it attaches the proxy protocol header to the packet and then re-encrypts the entire packet. Once the packet arrives at NGINX, if I have the following config line
listen 443 ssl proxy_protocol;
NGINX attempts to read the proxy protocol header and fails. This seems reasonable to me; I understand it. However, what I want to do is terminate the SSL here, then handle the proxy protocol header and continue forwarding the data with the proxy protocol info appended as an X-Forwarded-For header. Unfortunately, when I remove proxy_protocol from the listen line, NGINX then throws the following error:
client sent invalid request while reading client request line, client: ZZ.ZZ.ZZ.ZZ, server: , request: "PROXY TCP4 XX.XX.XX.XX YY.YY.YY.YY 49225 443"
Again, this does make sense. I understand why it is happening, but I cannot figure out a workaround, if there is one.
Any suggestions? Thanks in advance!
EDIT: I was going to try and compile with the stream module, then set 'proxy_protocol on' for the upstream but my fear is that it will still fail or try to add a second proxy protocol header.
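For what it's worth, the standard arrangement assumes the PROXY line is sent in plaintext *before* the TLS handshake; in that case listening with both ssl and proxy_protocol works, and the real_ip module recovers the client address. A sketch under that assumption (the trusted subnet is a placeholder); if the ELB really does wrap the PROXY line inside the TLS stream, this does not apply:

```nginx
server {
    # Expects the plaintext "PROXY ..." line first, then the TLS handshake.
    listen 443 ssl proxy_protocol;

    # Trust the load balancer's address range (placeholder) and take the
    # client address from the PROXY protocol header.
    set_real_ip_from 10.0.0.0/8;
    real_ip_header   proxy_protocol;

    location / {
        # Pass the recovered client address downstream as a normal header.
        proxy_set_header X-Forwarded-For $proxy_protocol_addr;
        proxy_pass http://backend;
    }
}
```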
↧
[SOLVED] server section applying where it is not expected to
It seems it was a matter of caching, both on the server and client sides. After several tests with querying the server via telnet (and getting the expected results) and deleting browser caches I think it's working now.
w6g
↧
1.8.1 rewrite changes
I don't know if this is the new "right" behaviour mentioned in the changelog, but our rewrites stopped working after the upgrade to 1.8.1.
I have an URL:
"http://mysite.com/css/main-responsive.v203.css"
Which should point to the file:
"/home/mysite/www/css/main-responsive.css"
The config, that works on 1.8.0 and not on 1.8.1 (returns 404) is this:
server {
    listen 80;
    server_name mysite.cz;
    root /home/mysite/www/;
    expires 100d;
    location ^~ / {
        access_log off;
        location /css/ {
            alias /home/mysite/www/css/;
            location ~ /(.*)\.v[0-9]+\.(css) {
                add_header Cache-Control public;
                try_files $uri $uri/ /$1.$2;
            }
            location /css/fonts/ {
                add_header Access-Control-Allow-Origin *;
            }
        }
        location /js/ {
            alias /home/mysite/www/js/;
            location ~ /(.*)\.v[0-9]+\.(js) {
                add_header Cache-Control public;
                try_files $uri $uri/ /$1.$2;
            }
        }
        location ^~ /img/ {
            ..........
        }
        .......
}
Could you please tell me how exactly the rewrite handling changed in the 1.8.1 release? Thank you.
↧
Config issue for rewrite / redirect
I just want to set up a simple redirect or rewrite between a subdomain and the domain, but I am quite new to this and can't figure it out. I have three things in mind (not sure if all three are possible at the same time):
1-) Redirect or rewrite an http://api.domain.com/api?t=... request as http://domain.com/api?t=...
2-) It shouldn't change the URL in the address bar.
3-) It would be great if it returned a 200 status instead of a 301 or 302.
I tried rewrite, but because they are not on the same domain, nginx changes the URL.
Any help is appreciated!
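One common way to meet all three requirements is to skip rewrite entirely and have the api subdomain's server block proxy to the main domain internally: the browser's URL never changes, and the status is whatever the backend returns (200 on success). A sketch with placeholder names:

```nginx
server {
    listen 80;
    server_name api.domain.com;

    location / {
        # Serve api.domain.com/... by fetching domain.com/... behind the
        # scenes; the client sees no redirect, so the address bar and
        # status code are untouched.
        proxy_set_header Host domain.com;
        proxy_pass http://domain.com;
    }
}
```

In production one would usually point proxy_pass at an upstream block or an IP rather than a name resolved at startup, but the idea is the same.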
↧
configure document root directory for gitlab mattermost
Hi,
I'm trying to set up Let's Encrypt for Gitlab Mattermost. The configuration script for Let's Encrypt needs to put a temporary file in mattermost.example.com/.well-known/some_random_string. The configuration currently allows me to access
/opt/gitlab/embedded/service/mattermost/web/static/
but not things inside the actual document root directory, which is just
/opt/gitlab/embedded/service/mattermost/web/
Any ideas as to how I can enable read access for files in the root directory?
Here is some information about Gitlab Mattermost and how it configures nginx
https://github.com/gitlabhq/omnibus-gitlab/blob/master/doc/settings/nginx.md
Thanks
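A common pattern for Let's Encrypt webroot challenges is a dedicated location that maps only /.well-known/ to a directory nginx can read, leaving the rest of the app's routing alone. A hedged sketch (the challenge webroot path is a placeholder; with omnibus GitLab this snippet would have to be injected through the custom nginx configuration hook described in the linked doc):

```nginx
# Serve ACME challenge files from a readable directory without
# exposing the rest of the Mattermost document root.
location ^~ /.well-known/ {
    root /var/www/letsencrypt;   # placeholder: writable by the LE client
    default_type text/plain;
}
```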
↧
ssl_session_tickets not working
Hi all
I have a project I'm working on that I want to use ssl_session_tickets on, but I can't get it to work. My project is a caching proxy, so it's not serving local content. The relevant part of the config is:
listen 443;
ssl on;
ssl_certificate /etc/nginx/current/tls/certs/xxx.crt;
ssl_certificate_key /etc/nginx/current/tls/private/xxx.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers !NULL:!SSLv2:!EXP:!MD5:!aNULL:!PSK:!kEDH:!KRB5:!ADH:!DES:!RC4:!CAMELLIA:AES128:HIGH:3DES;
ssl_ecdh_curve prime256v1;
ssl_buffer_size 4k;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:global_ssl_cache:128m;
ssl_stapling off;
ssl_stapling_verify off;
ssl_session_timeout 20m;
ssl_session_tickets on;
ssl_session_ticket_key /etc/nginx/current/tls/session/tkt.key;
ssl_dhparam /etc/nginx/current/tls/private/dh.param;
keepalive_timeout 300;
I log the $ssl_session_reused variable in my access logs, and with the above I always see a "." (session not reused).
I'm on nginx 1.9.10, compiled from source with OpenSSL 1.0.2e, on CentOS 7 on AWS.
Does anyone know why session reuse isn't working? My main thoughts are that it could be due to:
* the requests being proxied, not locally served files
* perhaps my choice of ciphers being an issue
Does anyone have any suggestions? I have a test instance, so I can try literally anything.
Thanks in advance!
Neil
↧
Re: ssl_session_tickets not working
Forgot to post:
nginx -V
nginx version: nginx/1.9.9
built with OpenSSL 1.0.2e 3 Dec 2015
TLS SNI support enabled
configure arguments: --prefix=/usr/local/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/current/nginx.conf --pid-path=/var/run/nginx.pid --error-log-path=/var/log/nginx/default-error.log --http-log-path=/var/log/nginx/default-access.log --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=gtmdaemon --group=gtmdaemon --with-http_realip_module --with-http_v2_module --with-http_ssl_module --with-http_geoip_module --with-http_image_filter_module --with-pcre-jit --with-ipv6 --with-file-aio --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' --add-module=/tmp/tmpSFdHHg/BUILD/nginx-1.9.9/headers-more-nginx-module --add-module=/tmp/tmpSFdHHg/BUILD/nginx-1.9.9/naxsi/naxsi_src --add-module=/tmp/tmpSFdHHg/BUILD/nginx-1.9.9/nginx-module-vts --add-module=/tmp/tmpSFdHHg/BUILD/nginx-1.9.9/nginx_upstream_check_module --with-openssl=/tmp/tmpSFdHHg/BUILD/nginx-1.9.9/openssl-1.0.2e
And it's nginx 1.9.9, not 1.9.10; I haven't deployed a new build yet.
↧
Re: ssl_session_tickets not working
OK, after more experimentation, I figured it out: SSL/TLS session tickets do not work when a listener is HTTP/2 enabled, or at least the logging of session reuse is broken.
Can anyone else confirm please?
↧
Properly setup of limit_req
Hello. First of all, I would like to mention that I have read many threads on this forum and elsewhere, but I still can't completely understand the way this module works, which is why I'm asking here. Thank you in advance for any help.
1. I want to limit requests to vhosts per IP, so that if one IP floods a vhost, the other visitors are OK and not banned. From my tests so far this does not happen: when another IP floods the vhost, I can't open the vhost's site either. This happens with the following configuration:
limit_req_zone $binary_remote_addr zone=perip:10m rate=100r/s;
limit_req zone=perip burst=100 nodelay;
How can I limit the vhost per IP so that the other visitors don't have a problem? I would also like another restriction that limits requests to a vhost not per IP but in total: for example, 200 requests to one vhost from all visitors combined, another 200-request limit for a second vhost, and so on.
2. I can't understand whether the burst limit and the zone rate must be the same value. I want to set a limit of, say, 200 requests from an IP, and on the 201st request the IP should be denied with a 503. Can the module work this way, and how can I achieve it? One value for the zone and another for burst, or what?
3. Why is the number of failed requests different every time I run ApacheBench, even though the limit configured in nginx has not changed?
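As an illustration of the two kinds of limits asked about above (an untested sketch; the rates are examples): a zone keyed on $binary_remote_addr throttles each client address independently, while a second zone keyed on $server_name caps the combined traffic to each vhost. When burst is omitted it defaults to 0, so the first request over the rate is rejected immediately, with a 503 by default.

```nginx
http {
    # Per-client limit: one flooding IP exhausts only its own budget,
    # so other visitors are unaffected.
    limit_req_zone $binary_remote_addr zone=perip:10m    rate=100r/s;

    # Per-vhost limit: caps all visitors of one site combined;
    # each server_name gets its own counter in the shared zone.
    limit_req_zone $server_name        zone=pervhost:10m rate=200r/s;

    server {
        server_name example.com;
        location / {
            # Both limits apply; the more restrictive one wins.
            limit_req zone=perip burst=20 nodelay;  # small per-IP burst allowance
            limit_req zone=pervhost;                # no burst: hard cap per vhost
        }
    }
}
```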
↧
Re: 1.8.1 rewrite changes
Removing the aliases makes it work on both 1.8.0 and 1.8.1. It doesn't make sense to use them when the root is already defined.
It's just strange to introduce such a change in a "stable" release, this was the first time when nginx update stopped our website.
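For anyone else hitting this, here is a sketch of the same versioned-asset handling without alias, relying only on the server-level root (assuming the files live under the root as in the original post; untested):

```nginx
server {
    listen 80;
    server_name mysite.cz;
    root /home/mysite/www;

    # Map e.g. /css/main-responsive.v203.css to /css/main-responsive.css.
    # The captured fallback path is resolved against root, so no alias
    # is needed and the 1.8.0/1.8.1 behaviour difference is avoided.
    location ~ ^/(css|js)/(.*)\.v[0-9]+\.(css|js)$ {
        add_header Cache-Control public;
        try_files $uri /$1/$2.$3;
    }
}
```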
↧
Re: ssl_session_tickets not working
Update (in case anyone is interested): TLS session tickets work for simple configurations over HTTP/2, but in my use case (which has several scopes and includes/inheritance) they don't. I am working on a reproducible test case I can pass on in a bug report. I'll post the bug link here when I raise it.
↧
How to run wordpress on Gentoo/Linux?
Hi,
I have just installed nginx, MySQL, PHP and WordPress on my Gentoo/Linux machine.
I want to build a web server at home for the first time.
It looks like nginx, MySQL and PHP run OK, but I don't know how to get WordPress running.
I did not find any tutorial for Gentoo with this setup;
please advise a link on how to configure nginx in order to run WordPress.
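The nginx side is not Gentoo-specific; a minimal WordPress server block with PHP-FPM looks roughly like the sketch below. The root and socket paths are placeholders: on Gentoo the FPM socket location depends on the PHP slot you installed, so check your php-fpm configuration for the actual path.

```nginx
server {
    listen 80;
    server_name example.com;
    root /var/www/wordpress;   # placeholder: where WordPress was unpacked
    index index.php;

    # WordPress pretty permalinks: anything not found on disk
    # falls through to index.php.
    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # Placeholder socket; Gentoo's path varies with the PHP slot.
        fastcgi_pass unix:/run/php-fpm.sock;
    }
}
```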
↧
Re: Properly setup of limit_req
Can anybody answer these questions, please? If I didn't explain something right please tell me.
↧
How to configure nginx to process subdomains
First, I will apologize if this has already been posted, but I have scoured Google and this site for hours and can't figure it out.
Here is my problem.
I have nginx 1.9.10 on Ubuntu 15.10 server.
I have a domain set up and have the wildcard * option set on the DNS.
In my root server directory, I am running wordpress network.
I am running multiple sites on the wordpress network, and accessing them by subdomain is working flawlessly.
What I am TRYING to do is add additional programs (e.g. ownCloud, Moodle, a photo gallery, mail, etc.) by installing them in subdirectories of my www root.
I want to be able to access them and load the appropriate index.php or index.htm either by typing:
http(s)://domain.com/subdirectory
or
subdomain.domain.com
The intent is that for example, if I type: moodle.domain.com OR domain.com/moodle, the program will launch.
I've tried
--"enabling" multiple virtual hosts using symbolic links for each in the sites-enabled directory
----this results in errors loading nginx saying that it is listening on port XXX multiple times.
--having only a www file "enabled" as above, and adding "location" tags in that file.
----the result is:
-----nginx starts but I get either a 404 page not found or 403 restricted error if I try accessing any of the programs that are installed in the subdirectories.
--creating a symbolic link for the index.php file for the program in the subdirectory
----(I know that this is very unlikely to be of any use, but I tried it anyway)
-----result: nothing loads
I have been afraid to play around with the php.ini in php5-fpm and the .htaccess files, because the last time I did that it was a disaster.
Could someone please post an example of how to make this work?
If it is going to involve editing config files, please be specific as to which one(s) and the syntax. I am a novice with this process--as is clearly evident.
Here is a copy of my current site-enabled file that is working: (I have edited out the actual domain and directories, but if you need this, I can give it to you).
------------------------------------------------------------------------
server {
    server_name domain.com *.domain.com;
    index index.php index.html index.htm;
    ssl_certificate /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!Anull:!md5;
    listen 80 default_server;
    listen [::]:80 default_server;
    # SSL configuration
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;
    root /media/xxxx/xxxxx;
    #=========================Locations=============================
    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }
    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
    #===================PHP=================
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
------------------------------------------------------------------------
Thanks for any help you can give.
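The "listening on port XXX multiple times" error when enabling several virtual hosts usually means default_server (or other listen options) was repeated: only one server block per address:port may carry those options. With them left off, one extra server block per application can coexist with the existing wildcard block. A hedged sketch for one app, reusing the root and PHP socket from the config above (the ssl_certificate lines from the existing block would also be needed here, or moved up to the http level):

```nginx
# One server block per application; note: no "default_server" here,
# since the existing wildcard block already claims it.
server {
    listen 80;
    listen 443 ssl;
    server_name moodle.domain.com;
    root /media/xxxx/xxxxx/moodle;   # the app's subdirectory under the www root
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        try_files $uri =404;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }
}
```

Because this block's server_name is more specific than the wildcard *.domain.com, nginx prefers it for moodle.domain.com, while domain.com/moodle keeps working through the original block.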
↧