pay the $2.5k annual premium they want for this feature; there is no other way
↧
Re: Purge using curl command
↧
Nginx - Only handles exactly 500 requests per second - How to increase the limit?
worker_processes auto;
pid /run/nginx.pid;
worker_rlimit_nofile 100000;
error_log /var/log/nginx/error.log crit;

events {
    worker_connections 4000;
    multi_accept on;
    use epoll;
}

http {
    include /etc/nginx/mime.types;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    directio 4m;
    types_hash_max_size 2048;

    client_body_buffer_size 15K;
    client_max_body_size 8m;

    keepalive_timeout 20;
    client_body_timeout 15;
    client_header_timeout 15;
    send_timeout 10;

    open_file_cache max=5000 inactive=20s;
    open_file_cache_valid 60s;
    open_file_cache_min_uses 5;
    open_file_cache_errors off;

    gzip on;
    gzip_comp_level 2;
    gzip_min_length 1000;
    gzip_proxied any;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    access_log off;
    log_not_found off;

    include /etc/nginx/conf.d/*.conf;
}
The server has 8 cores and 32 GB of RAM, and the load average is only 0.05, but nginx cannot handle more than 500 requests per second.
Please tell me how to increase the limit.
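As a general note (not from the original post): nginx itself has no built-in 500 requests/second cap, and nothing in the config above obviously imposes one, so a hard ceiling like this usually comes from a limit_req/limit_conn zone in one of the included conf.d vhosts, from the load-testing tool's keep-alive behaviour, or from OS limits. A few things worth checking (the values are illustrative assumptions, not tuning advice):

grep -r "limit_req\|limit_conn" /etc/nginx/conf.d/   # any rate limits hiding in the vhosts?
keepalive_requests 1000;   # http{} block: allow more requests per keep-alive connection
ulimit -n                  # compare the OS file-descriptor limit with worker_rlimit_nofile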
↧
↧
Client certificate validation error handling
We are using nginx as a reverse proxy to enable client certificate authentication for our REST API endpoints. The config is as follows:
server {
    listen 443 ssl;
    ssl_certificate /Users/asedov/Documents/work/ssl/openssl-scripts/ca/certs/test-backend_crt.pem;
    ssl_certificate_key /Users/asedov/Documents/work/ssl/openssl-scripts/ca/private/test-backend_key.pem;
    ssl_client_certificate /Users/asedov/Documents/work/ssl/openssl-scripts/ca/certs/ca_crt.pem;
    ssl_verify_client optional;
    ssl_verify_depth 2;
    server_name localhost;

    proxy_set_header SSL_CLIENT_CERT $ssl_client_cert;

    location / {
        proxy_pass http://127.0.0.1:8088;
    }
}
The idea is to pass the certificate body to the backend in the SSL_CLIENT_CERT header when a client provides a certificate. This works fine as long as the provided certificate is valid. Otherwise, for example when the certificate is expired, nginx responds with a 400 error and does not proxy_pass to our backend.
I'm looking for a way to change this behaviour: handle the certificate verification error and still proxy_pass to our API, but with an empty SSL_CLIENT_CERT header. So, basically, we need nginx to verify a certificate if one is provided, and to set the header only when the certificate is both provided and valid.
Is it possible?
Thank you in advance!
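One approach that may work (a sketch based on the ssl module docs, not from this thread): switch to ssl_verify_client optional_no_ca so a failed verification no longer aborts the request with a 400, and use a map on $ssl_client_verify so the header is only populated when verification actually succeeded:

map $ssl_client_verify $verified_client_cert {
    SUCCESS $ssl_client_cert;
    default "";
}

server {
    listen 443 ssl;
    server_name localhost;
    # ... certificates as above ...
    ssl_verify_client optional_no_ca;   # request the cert but don't reject on verification failure
    ssl_verify_depth 2;

    location / {
        proxy_set_header SSL_CLIENT_CERT $verified_client_cert;
        proxy_pass http://127.0.0.1:8088;
    }
}

The map block belongs at the http{} level. The trade-off is that nginx no longer rejects bad certificates itself; the backend simply sees an empty header.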
↧
Reverse Proxy as a WAF?
1. Can someone give me some guidelines about configuring a WAF? I want to filter the HTTP traffic for a few sites, but I would like to have a separate server (proxy) for the WAF.
I think I just need an Nginx reverse proxy with Naxsi or ModSecurity. As far as I know, Cloudflare does something similar too. Why not use my own WAF instead of Cloudflare?
2. How many sites is it okay to put behind a single WAF proxy server?
↧
Re: Reverse Proxy as a WAF?
I use NGINX and ModSecurity 3. At a basic level you install NGINX and add the ModSecurity module, then use the proxy_pass directive to forward the traffic on to your real hosts. You configure ModSec to filter bad traffic before it reaches your servers via the OWASP core rule set and custom regexes.
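A minimal sketch of that layout (hostname, backend address, and rules path are placeholders, not from the post): the modsecurity directives come from the ModSecurity-nginx connector, and proxy_pass forwards whatever the rules allow.

server {
    listen 80;
    server_name www.example.com;

    modsecurity on;
    modsecurity_rules_file /etc/nginx/modsec/main.conf;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://10.0.0.10:8080;   # the real host behind the WAF
    }
}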
↧
↧
Re: Reverse Proxy as a WAF?
Togger75 Wrote:
-------------------------------------------------------
> I use NGINX and ModSecurity 3. At a basic level you install NGINX and
> add the modsecurity module then use the proxy_pass directive to
> forward on the traffic to your real hosts. You configure ModSec to
> filter the bad traffic from reaching your servers via the OWASP core
> rule set and custom regex.
@Togger75, thank you for your answer! I have two more questions. If I want to use the WAF with a load balancer, do I need to put the WAF in front of the load balancer? I'm also wondering how many sites I can proxy through the WAF without performance issues.
Any help is appreciated.
↧
redirect to another port
Hello,
I have set up a Shiny server and an nginx server, and I would like nginx to forward connections to the Shiny server.
To forward to the Shiny server I use the proxy_pass directive, but I get the error page: nginx error! The page you are looking for is temporarily unavailable. Please try again later.
The Shiny and nginx servers are on the same machine.
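For reference, a minimal sketch of such a proxy block (3838 is Shiny Server's default port; the server name and port are assumptions). That nginx error page usually means the upstream could not be reached, so it is worth confirming the Shiny port and that it is listening on 127.0.0.1:

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:3838;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;    # Shiny apps rely on websockets
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}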
↧
Re: Reverse Proxy as a WAF?
I'm no expert, but you can proxy_pass multiple sites; think of NGINX as the load balancer and ModSec as the traffic filter. I will post up the notes I made, but I won't be able to get them until tomorrow.
↧
Can NGINX do content based redirection?
I'd like to be able to use NGINX as the single point of entry and send the traffic off to different servers depending on the content of a SOAP/XML element. Is that possible?
So the SOAP request might POST <colour>BLUE</colour> and it would reverse proxy to server 1, but if it was <colour>RED</colour> it would send it to server 2. I know you can do this with GET requests ($arg_*), but it has to be POST.
Is that possible? Any links for me?
Thanks!
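Core nginx directives cannot route on the request body, but an embedded scripting module such as lua-nginx-module (OpenResty) can. A rough sketch under that assumption (upstream names and addresses are made up):

upstream blue_backend { server 192.0.2.10:8080; }
upstream red_backend  { server 192.0.2.20:8080; }

server {
    listen 80;

    location /soap {
        set $target "blue_backend";    # default route
        rewrite_by_lua_block {
            ngx.req.read_body()
            local body = ngx.req.get_body_data() or ""
            -- note: very large bodies may be buffered to a temp file (see ngx.req.get_body_file)
            if body:find("<colour>%s*RED%s*</colour>") then
                ngx.var.target = "red_backend"
            end
        }
        proxy_pass http://$target;
    }
}

The njs module offers a similar option if you would rather avoid OpenResty.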
↧
↧
Taking much time to load
I don't know why, but nginx is taking a very long time to load pages. I am running a single WordPress website and it is taking 2 minutes to load a single page.
I have used Apache before and it never took more than 3 seconds to load a single page.
Any solution to this?
I read on blogs and posts that nginx is much faster than Apache, so why is it taking so long to load a single page? :(
Some details:
worker_connections 768;
server_names_hash_bucket_size 64;
I have an average of 100 visitors per day.
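With that little traffic the bottleneck is almost certainly PHP/WordPress, the database, or DNS rather than nginx itself. One way to confirm (a suggestion, not from the thread) is to log per-request timings and compare nginx's total time with the upstream's time:

log_format timing '$remote_addr "$request" $status '
                  'req_time=$request_time upstream=$upstream_response_time';
access_log /var/log/nginx/timing.log timing;

Both lines go in the http{} block; if the upstream time dominates, the slowness is in PHP-FPM/WordPress, not nginx.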
↧
Re: Reverse Proxy as a WAF?
Hey dominykas
I made this step-by-step for Ubuntu Server 16.04.2 as if from a fresh install. You can try it and let me know if it works. These are only my notes, so I can't 100% guarantee them, but if all of the steps work then at the end you should have a working Ubuntu NGINX WAF with ModSecurity 3. I make no claims that this is the correct way to do it and welcome any feedback.
sudo apt-get update
sudo apt-get upgrade
put the key from here https://nginx.org/keys/nginx_signing.key into the nginx_signing.key file like this
sudo nano nginx_signing.key
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v2.0.22 (GNU/Linux)
mQENBE5OMmIBCAD+FPYKGriGGf7NqwKfWC83cBV01gabgVWQmZbMcFzeW+hMsgxH
W6iimD0RsfZ9oEbfJCPG0CRSZ7ppq5pKamYs2+EJ8Q2ysOFHHwpGrA2C8zyNAs4I
QxnZZIbETgcSwFtDun0XiqPwPZgyuXVm9PAbLZRbfBzm8wR/3SWygqZBBLdQk5TE
fDR+Eny/M1RVR4xClECONF9UBB2ejFdI1LD45APbP2hsN/piFByU1t7yK2gpFyRt
97WzGHn9MV5/TL7AmRPM4pcr3JacmtCnxXeCZ8nLqedoSuHFuhwyDnlAbu8I16O5
XRrfzhrHRJFM1JnIiGmzZi6zBvH0ItfyX6ttABEBAAG0KW5naW54IHNpZ25pbmcg
a2V5IDxzaWduaW5nLWtleUBuZ2lueC5jb20+iQE+BBMBAgAoAhsDBgsJCAcDAgYV
CAIJCgsEFgIDAQIeAQIXgAUCV2K1+AUJGB4fQQAKCRCr9b2Ce9m/YloaB/9XGrol
kocm7l/tsVjaBQCteXKuwsm4XhCuAQ6YAwA1L1UheGOG/aa2xJvrXE8X32tgcTjr
KoYoXWcdxaFjlXGTt6jV85qRguUzvMOxxSEM2Dn115etN9piPl0Zz+4rkx8+2vJG
F+eMlruPXg/zd88NvyLq5gGHEsFRBMVufYmHtNfcp4okC1klWiRIRSdp4QY1wdrN
1O+/oCTl8Bzy6hcHjLIq3aoumcLxMjtBoclc/5OTioLDwSDfVx7rWyfRhcBzVbwD
oe/PD08AoAA6fxXvWjSxy+dGhEaXoTHjkCbz/l6NxrK3JFyauDgU4K4MytsZ1HDi
MgMW8hZXxszoICTTiQEcBBABAgAGBQJOTkelAAoJEKZP1bF62zmo79oH/1XDb29S
YtWp+MTJTPFEwlWRiyRuDXy3wBd/BpwBRIWfWzMs1gnCjNjk0EVBVGa2grvy9Jtx
JKMd6l/PWXVucSt+U/+GO8rBkw14SdhqxaS2l14v6gyMeUrSbY3XfToGfwHC4sa/
Thn8X4jFaQ2XN5dAIzJGU1s5JA0tjEzUwCnmrKmyMlXZaoQVrmORGjCuH0I0aAFk
RS0UtnB9HPpxhGVbs24xXZQnZDNbUQeulFxS4uP3OLDBAeCHl+v4t/uotIad8v6J
SO93vc1evIje6lguE81HHmJn9noxPItvOvSMb2yPsE8mH4cJHRTFNSEhPW6ghmlf
Wa9ZwiVX5igxcvaIRgQQEQIABgUCTk5b0gAKCRDs8OkLLBcgg1G+AKCnacLb/+W6
cflirUIExgZdUJqoogCeNPVwXiHEIVqithAM1pdY/gcaQZmIRgQQEQIABgUCTk5f
YQAKCRCpN2E5pSTFPnNWAJ9gUozyiS+9jf2rJvqmJSeWuCgVRwCcCUFhXRCpQO2Y
Va3l3WuB+rgKjsQ=
=EWWI
-----END PGP PUBLIC KEY BLOCK-----
(ctrl+x enter)
sudo apt-key add nginx_signing.key
sudo nano /etc/apt/sources.list
deb http://nginx.org/packages/mainline/ubuntu/ xenial nginx
deb-src http://nginx.org/packages/mainline/ubuntu/ xenial nginx
sudo apt-get update
sudo apt-get install nginx
sudo apt-get install -y apt-utils autoconf automake build-essential git libcurl4-openssl-dev libgeoip-dev liblmdb-dev libpcre++-dev libtool libxml2-dev libyajl-dev pkgconf wget zlib1g-dev
git clone --depth 1 -b v3/master --single-branch https://github.com/SpiderLabs/ModSecurity
cd ModSecurity
git submodule init
git submodule update
./build.sh (some errors appear here, ignore them)
./configure
make
sudo make install
git clone --depth 1 http://github.com/SpiderLabs/ModSecurity-nginx.git
nginx -v
(answer was:nginx version: nginx/1.13.8)
wget http://nginx.org/download/nginx-1.13.8.tar.gz
tar zxvf nginx-1.13.8.tar.gz
cd nginx-1.13.8
./configure --with-compat --add-dynamic-module=../ModSecurity-nginx
make modules
sudo cp objs/ngx_http_modsecurity_module.so /etc/nginx/modules
CONFIGURE the installation
sudo nano /etc/nginx/nginx.conf
load_module "modules/ngx_http_modsecurity_module.so";
sudo mkdir /etc/nginx/modsec
sudo wget -P /etc/nginx/modsec/ https://raw.githubusercontent.com/SpiderLabs/ModSecurity/master/modsecurity.conf-recommended
sudo mv /etc/nginx/modsec/modsecurity.conf-recommended /etc/nginx/modsec/modsecurity.conf
sudo sed -i 's/SecRuleEngine DetectionOnly/SecRuleEngine On/' /etc/nginx/modsec/modsecurity.conf
create a conf directory for the custom config
sudo mkdir /etc/nginx/conf
create the 3 conf files proxy.conf, ipfilter.conf, hard.conf
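(The notes don't show what goes into those files; a guess at a minimal proxy.conf - ipfilter.conf and hard.conf would hold site-specific filtering:

proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;)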
Web server Config:
sudo mkdir /var/www
sudo mkdir /var/www/www.example.com
sudo nano /var/www/www.example.com/index.html (create some test content)
Create a sites-enabled and sites-available folder in /etc/nginx/
sudo mkdir sites-enabled
sudo mkdir sites-available
Put the actual site into sites-available then symlink it into the sites-enabled directory. To disable a site you can now just delete the symlink rather than the content
sudo ln -s /etc/nginx/sites-available/www.example.com /etc/nginx/sites-enabled/
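(The notes don't show the site file's contents; presumably, my assumption, it is a server block that enables the WAF and proxies to the protected host, along the lines of the sketch in the earlier reply, e.g.:

server {
    listen 80;
    server_name www.example.com;
    modsecurity on;
    modsecurity_rules_file /etc/nginx/modsec/main.conf;
    location / {
        proxy_pass http://192.168.1.10;   # protected backend, placeholder address
    }
})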
add this in to the nginx.conf above the geo code
include /etc/nginx/sites-enabled/*;
include /etc/nginx/conf/proxy.conf;
create a main.conf in /etc/nginx/modsec/main.conf
include /etc/nginx/modsec/modsecurity.conf
# Basic test rule
SecRule ARGS:testparam "@contains test" "id:1234,deny,status:403"
in modsecurity.conf comment out the line
#SecRequestBodyInMemoryLimit 131072
OWASP rules
Download the following into /etc/nginx/modsec/
sudo git clone https://github.com/SpiderLabs/owasp-modsecurity-crs.git
sudo gunzip owasp-modsecurity-crs.git.gz
cp crs-setup.conf.example crs-setup.conf
sudo nano /etc/nginx/modsec/main.conf
Include /etc/nginx/modsec/.../crs-setup.conf
Include /etc/nginx/modsec/.../rules/*.conf
sudo systemctl restart nginx.service
To test ModSecurity from another device
http://nginxIP/index.html?testparam=test
↧
Multi wildcard certificates for multi wildcard domains
Hi all,
This is my environment :
CentOS release 6.4 (Final) , nginx-1.8.1-1.el6.ngx.x86_64
[quote]
nginx -V
nginx version: nginx/1.8.1
built by gcc 4.4.7 20120313 (Red Hat 4.4.7-16) (GCC)
built with OpenSSL 1.0.1e-fips 11 Feb 2013
TLS SNI support enabled
[/quote]
I have 2 websites: website1 (multiple subdomains: abc.website1.com, xyz.website1.com) and website2 (single domain website2.com). This is the nginx configuration:
[quote]
server {
# website1 redirect http to https
listen ip:80;
server_name *.website1.com;
return 301 https://$host$request_uri;
}
server {
# website2 redirect http to https
listen ip:80;
server_name website2.com;
return 301 https://$host$request_uri;
}
server {
listen ip:443 ssl;
ssl_certificate path-to-website1-wildcard-certificate-file;
ssl_certificate_key path-to-website1-privatekey-file;
ssl_session_cache shared:SSL:10m;
server_name *.website1.com;
...
}
server {
listen ip:443 ssl;
ssl_certificate path-to-website2-single-domain-certificate-file;
ssl_certificate_key path-to-website2-privatekey-file;
ssl_session_cache shared:SSL:10m;
server_name website2.com;
...
}
[/quote]
Everything works fine. Now I have purchased a wildcard certificate for website2, so I changed the configuration:
[quote]
server {
# website1 redirect http to https
listen ip:80;
server_name *.website1.com;
return 301 https://$host$request_uri;
}
server {
# website2 redirect http to https
listen ip:80;
server_name *.website2.com;
return 301 https://$host$request_uri;
}
server {
listen ip:443 ssl;
ssl_certificate path-to-website1-wildcard-certificate-file;
ssl_certificate_key path-to-website1-privatekey-file;
ssl_session_cache shared:SSL:10m;
server_name *.website1.com;
...
}
server {
listen ip:443 ssl;
ssl_certificate path-to-website2-wildcard-certificate-file;
ssl_certificate_key path-to-website2-privatekey-file;
ssl_session_cache shared:SSL:10m;
server_name *.website2.com;
...
}
[/quote]
After reloading, I can access https://website1.com successfully, but when I access https://website2.com I get an error saying the certificate is for the wrong domain. I added an exception and found out that nginx uses the website1 wildcard certificate for website2 requests/responses.
I don't understand why nginx doesn't handle 2 different wildcard certificates for 2 different wildcard domains. Is this normal, or did I do something wrong?
For now I have had to change the website2 configuration to:
[quote]
server {
# website2 redirect http to https
listen ip:80;
server_name website2.com abc.website2.com xyz.website2.com;
return 301 https://$host$request_uri;
}
server {
listen ip:443 ssl;
ssl_certificate path-to-website2-wildcard-certificate-file;
ssl_certificate_key path-to-website2-privatekey-file;
ssl_session_cache shared:SSL:10m;
server_name website2.com abc.website2.com xyz.website2.com;
...
}
[/quote]
to work around the problem temporarily.
Can anyone give me some advice? Thank you very much.
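A likely explanation (my note, not from the thread): a wildcard server_name such as *.website2.com matches only subdomains, not the bare website2.com, so a request for https://website2.com matches no server block and falls back to the default server for that listen socket, which is the first one defined (website1) with its certificate. Listing the bare name next to the wildcard keeps the wildcard certificate without enumerating every subdomain:

server {
    listen ip:443 ssl;
    ssl_certificate path-to-website2-wildcard-certificate-file;
    ssl_certificate_key path-to-website2-privatekey-file;
    ssl_session_cache shared:SSL:10m;
    server_name website2.com *.website2.com;
    ...
}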
↧
Rewrite
I have a WordPress site where my client wants a particular segment to always appear in the URL.
For example
http://rethink.test
will always contain:
http://rethink.test/community
or
http://rethink.test/about
http://rethink.test/community/about
etc.etc.
Could this be achieved in a server block?
↧
↧
how to redirect to Apache2 properly
I'm running Nginx, Tomcat, Apache2, Kibana, Grok and Graphite on a single server.
Tomcat serves Grok, Apache2 serves Graphite, and nginx listens on ports 80/443 and proxies the requests.
My configuration is:
server {
    listen 443 ssl default_server;
    listen 80;

    if ($scheme = http) {
        return 301 https://$server_name$request_uri;
    }

    server_name myserver.org;

    if ($http_x_forwarded_proto = '') {
        set $http_x_forwarded_proto $scheme;
    }

    root /var/www/html/;

    location / {
        alias /var/lib/tomcat8/webapps/;
        proxy_pass http://127.0.0.1:4180;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_connect_timeout 5;
        proxy_send_timeout 30;
        proxy_read_timeout 30;
    }

    location /kibana/ {
        proxy_ignore_client_abort on;
        proxy_pass http://localhost:5601/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
    }

    location /graphite/ {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://localhost:81/;
    }
}
Requests to localhost/kibana/ work fine.
Requests to localhost/graphite/ are served by Tomcat instead of Apache2.
If I go to localhost:81, my Graphite loads.
What is wrong here?
↧
Re: how to redirect to Apache2 properly
localhost/graphite/ returns
HTTP Status 404 - /browser/header/
message /browser/header/
HTTP Status 404 - /composer/
type Status report
message /composer/
description The requested resource is not available.
Apache Tomcat/8.0.32 (Ubuntu)
This is from the apache2 log when I call it directly on localhost:81:
"GET /content/js/completer.js HTTP/1.1" 200 1010 "http://:81/composer/?"
"GET /content/js/ext/ext-all.js HTTP/1.1" 200 198092 "http://:81/composer/?"
"GET /render HTTP/1.1" 200 1178 "http://:81/composer/?"
"GET /render/?width=586&height=308&_salt=1517449.194 HTTP/1.1" 200 1418 "http://:81/composer/?"
And this is when I go through nginx at localhost/graphite/:
"GET / HTTP/1.0" 200 764 "-" "Mozilla/5.0
↧
Added Nginx to a Ubuntu 16.04 with Virtualmin now I'm fucked-up...
I apologize, but the Dunning-Kruger effect (https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect) says I'm too dumb to know how to ask for, _or get,_ help correctly.
So here's where I'm at. I'm doing a thing on my own hardware at home behind DHCP. I use the Ubuntu 16.04 LTS Server operating system and manage it with the Virtualmin interface. I've only known Apache for web serving, so it pains me to learn something new. I have a few existing virtual servers running on my DMZ built with Virtualmin and a bunch of subdomains also built with Virtualmin. I decided to make my own subdomain with Webmin from the server's FQDN to run the BigBlueButton software, but they *ONLY* let you do it via Nginx.
After some false starts installing that program, I gave ports 80 and 443 to Nginx's control and my BigBlueButton works pretty well so far. (Red herring: OAuth trouble with Google, and HTML5 is fucked up on Ubuntu.) So now I'm using port 591 for Apache's HTTP traffic and port 4433 to serve Apache's HTTPS sites. I've read a lot of blogs and posts about how to do this division of traffic and I almost had it working, but the SSL sites known to Apache wouldn't serve correctly. The solution I read about that was supposed to fix the SSL problem made nothing on Apache work. So here's my hope...
I can undo the fiddling I did to my Nginx files. Is there an elegant way to have Nginx push all traffic it doesn't have an /etc/nginx/sites-enabled/* file for over to Apache? I just don't speak Nginx worth a shit, and it really looks like anti-structured nonsense to me most of the time. It seems like I should put some blocks of code in the default /etc/nginx/nginx.conf file that redirect anything caught by the default server to Apache.
Attached is my last rendition of something in my /etc/nginx/sites-available/ folder, with the symbolic link to it in /etc/nginx/sites-enabled/, that didn't work for an existing Apache virtual host bearing that file's name.
Oh geez, and I almost forgot that the Webmin/Virtualmin management system uses port 10000, but Virtualmin adds Apache records for virtual servers to redirect something like admin.wnymathguy.com to https://wnymathguy.com:10000, so that might be separate from whatever solves my problem stated above.
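A sketch of the kind of catch-all you may be after (the ports are taken from the post, everything else is an assumption): a default_server block that hands anything nginx has no site file for to Apache on port 591.

server {
    listen 80 default_server;
    server_name _;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:591;
    }
}

The HTTPS side is messier: to do the same on 443, nginx would either need the sites' certificates so it can terminate TLS and proxy to Apache on 4433, or a stream{}-level SNI passthrough.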
↧
Multiple server blocks using same port
Hello. I have a problem with Nginx. I have three websites running, all using the same port 443 for SSL. When I try to restart Nginx, I get an error like the following:
root@vps:~# nginx
nginx: [emerg] bind() to 0.0.0.0:443 failed (98: Address already in use)
nginx: [emerg] bind() to [::]:443 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:443 failed (98: Address already in use)
nginx: [emerg] bind() to [::]:443 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:443 failed (98: Address already in use)
nginx: [emerg] bind() to [::]:443 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:443 failed (98: Address already in use)
nginx: [emerg] bind() to [::]:443 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:443 failed (98: Address already in use)
nginx: [emerg] bind() to [::]:443 failed (98: Address already in use)
nginx: [emerg] still could not bind()
Here is my nginx "default" file: https://hastebin.com/ejuwapafup.nginx
I had gotten it to work before on Ubuntu 16.04.3, but when I switched to Debian 9, this issue appeared.
If you can help, please do!
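"Address already in use" here usually just means something is already listening on port 443 when the nginx binary is started by hand; several server blocks sharing 443 is fine as long as they carry different server_name values. A quick check (my suggestion, not from the thread):

ss -tlnp | grep ':443'     # which process already owns the port?
nginx -t                   # test the configuration
systemctl reload nginx     # reload the running instance instead of starting a second one

If another copy of nginx (or Apache) holds the port, stop it first.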
↧
↧
file upload
I have installed nginx version nginx/1.10.3 (Ubuntu).
A website requires file uploads. I use PHP 7 to handle the upload.
The form:
========
<div class="collapse" id="upload_avatar">
<div class="card card-body">
<form enctype="multipart/form-data" action="" method="post">
<p class="text-left">Upload Avatar:</p>
<input type="hidden" name="MAX_FILE_SIZE" value="300000" />
<input name="image" type="file" /><br>
<button class="form-control mr-sm-2 btn btn-outline-success my-2 my-sm-0" type="submit" name="avatar_upload" aria-controls="collapse_upload_avatar">
Upload
</button>
</form>
</div>
</div>
The php part:
===========
if (isset($_POST["avatar_upload"])) {
    $verifyimg = getimagesize($_FILES['image']['tmp_name']);
    if ($verifyimg['mime'] != 'image/png') {
        echo "Only PNG images are allowed!";
        exit;
    }

    $uploaddir  = '/members/3/';
    $uploadfile = $uploaddir . basename($_FILES['image']['name']);

    if (move_uploaded_file($_FILES['image']['tmp_name'], $uploadfile)) {
        echo "File is valid, and was successfully uploaded.<br>";
    } else {
        echo "Possible file upload attack!<br>";
    }

    echo '<pre>';
    echo 'info:';
    print_r($_FILES);
    print "</pre>";
}
It prints out:
=========
Possible file upload attack!
info:Array
(
[image] => Array
(
[name] => Selection_001.png
[type] => image/png
[tmp_name] => /tmp/phpGpp3rB
[error] => 0
[size] => 299338
)
)
There is no /tmp/php* file
There is no file in the /members/3/ directory
The permissions are 777 for /members and /members/3
nginx/error.log shows:
=================
PHP message: PHP Warning: move_uploaded_file(/members/3/Selection_001.png): failed to open stream: No such file or directory ... on line 197 PHP message:
PHP Warning: move_uploaded_file(): Unable to move '/tmp/phpGpp3rB' to '/members/3/Selection_001.png'
/etc/nginx/nginx.conf:
=================
user www-data;
worker_processes auto;
pid /run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
The sites-available/example.com
==========================
server {
listen 80;
listen [::]:80;
root /home/ronald/docker-websites/example.com;
# Add index.php to the list if you are using PHP
index index.php index.html index.htm index.nginx-debian.html;
server_name example.com;
location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ =404;
}
location ~ \.php$ {
include snippets/fastcgi-php.conf;
fastcgi_pass unix:/run/php/php7.0-fpm.sock;
}
location ~ /\.ht {
deny all;
}
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
if ($scheme != "https") {
return 301 https://$host$request_uri;
} # managed by Certbot
}
What am I missing?
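A guess at the cause (not confirmed in the thread): '/members/3/' is an absolute filesystem path, so PHP looks for it at the root of the filesystem rather than under the site's document root, which matches the "No such file or directory" warning. A sketch of the usual fix:

// build the target from the document root instead of the filesystem root
$uploaddir  = $_SERVER['DOCUMENT_ROOT'] . '/members/3/';
$uploadfile = $uploaddir . basename($_FILES['image']['name']);

The directory still has to exist and be writable by the www-data user that PHP-FPM runs as.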
↧
Re: Reverse Proxy as a WAF?
Did you get anywhere with it?
↧
Server_Name redirecting to IP
Hello,
I'm trying to figure out why, even though I use a DNS name for the server_name entry in the nginx/sites-available/ configuration files, the site is still served when the server is addressed by IP in a web browser instead of returning a 404.
This doesn't happen in any of my other environments, which are set up the same way. Security audits say our production web page should never be reachable by IP address, only by DNS name.
Any thoughts on where I should even start looking would be greatly appreciated.
~S
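In case it helps, the usual fix (a sketch, not from the thread) is a catch-all default_server so requests that arrive with a bare IP, or any unknown Host header, get a 404 instead of being answered by the first site defined on that port:

server {
    listen 80 default_server;
    listen 443 ssl default_server;
    server_name _;
    ssl_certificate     /etc/nginx/ssl/dummy.crt;   # placeholder self-signed cert, required on 443
    ssl_certificate_key /etc/nginx/ssl/dummy.key;
    return 404;
}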
↧