Channel: Nginx Forum - How to...

Re: Reverse proxy with SSL

Yep, if anyone is wondering, this is the proxy_pass configuration I ended up with, and it works perfectly! Thanks for the help, @itpp2012

server_names_hash_bucket_size 128;

server {
listen 80 default_server;
server_name prefix1.domain.com;
set $upstream2 127.0.0.1:8080;

location / {

proxy_pass_header Authorization;
proxy_pass http://$upstream2;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_buffering off;
client_max_body_size 0;
proxy_read_timeout 36000s;
proxy_redirect off;

}
}

server {
listen 80;
server_name prefix2.domain.com;
set $upstream1 127.0.0.1:7080;

location / {

proxy_pass_header Authorization;
proxy_pass http://$upstream1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_buffering off;
client_max_body_size 0;
proxy_read_timeout 36000s;
proxy_redirect off;

}
}

Validate Accept-Encoding

Hi there. Our origin server's config includes "gzip_vary on", which adds a "Vary: Accept-Encoding" header to responses so that proxy caches vary on the request's Accept-Encoding. When an nginx cache later stores the response, it takes the Vary header into account:

"If the header includes the “Vary” field with the special value “*”, such a response will not be cached (1.7.7). If the header includes the “Vary” field with another value, such a response will be cached taking into account the corresponding request header fields (1.7.7). "
(taken from http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_valid )

Now the question is: a client can send any Accept-Encoding, really. Any bogus string like "foo" there would make nginx fetch normal, un-encoded content from the upstream, as if no Accept-Encoding were specified, and cache it on disk under a different key that includes "foo", per nginx's Vary handling. Which is no good. Is there any way to restrict the allowed Accept-Encoding values to gzip, br (Brotli), and none at all?
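
For reference, a minimal sketch of one common workaround: normalizing the client's Accept-Encoding at the proxy so only known values ever reach the cache key (the variable name and map values are assumptions, not from the original post):

map $http_accept_encoding $normalized_ae {
default "";
~*br br;
~*gzip gzip;
}

# inside the caching location, before proxy_pass:
proxy_set_header Accept-Encoding $normalized_ae;

Because map regexes are checked in the order they appear, clients advertising both br and gzip are normalized to br; anything else, including bogus values like "foo", collapses to the un-encoded variant.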

ASP.NET Angular app running on an Nginx proxy not locating static files

I have just created a basic application in Visual Studio and tried to get it running on my Ubuntu server behind an Nginx proxy. Once I start it, the application runs, but the front end cannot locate the static files and returns a 404/net::ERR_ABORTED for six static files (please see the attached screen grab).

My Nginx proxy looks like this:

location / {
# Proxy for dotnet app
proxy_pass http://localhost:5000; # My app runs on port 5000
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection keep-alive;
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}

And my proxy.conf looks like this (I do include it in my nginx.conf file):

proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffers 32 4k;

I loosely followed this Microsoft tutorial on the setup (https://docs.microsoft.com/en-us/aspnet/core/publishing/linuxproduction?tabs=aspnetcore2x).

How can I resolve the net::ERR_ABORTED on the static files?

P.S. I have posted this question on stackoverflow if anyone wants the points for it:
https://stackoverflow.com/questions/47753715/asp-net-angular-app-running-on-an-nginx-proxy-not-locating-static-files
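
For reference, a minimal sketch of one common approach: letting nginx serve the published static assets directly rather than proxying those requests to the app (the publish path is a placeholder assumption, not from the original post):

location ~* \.(js|css|map|ico|png|jpg|jpeg|gif|svg|woff|woff2|ttf)$ {
root /var/www/myapp/wwwroot; # hypothetical publish output directory
try_files $uri =404;
}

A regex location takes precedence over the plain "location /" prefix, so these requests never reach the proxy.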

Redirect issues

Currently, I am running JIRA and Confluence on ports 8080 and 7080. This is what I would like to happen with my nginx config:

port 80 is set to read let's encrypt challenge, otherwise forward to 443
443 listener reads hostname as either test.domain.com or test1.domain.com
test.domain.com forwards to proxy at 8080
test1.domain.com forwards to proxy at 7080

What I would like to do is also route anything coming from the outside on port 8080 or 7080 to 443. Is this possible, given that there is already a proxy forward to 8080 and 7080 locally?
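
For reference, a minimal sketch of extra server blocks that redirect external traffic on those ports to HTTPS (an assumption based on the description; it only works if nginx, not the applications themselves, binds ports 8080 and 7080):

server {
listen 8080;
return 301 https://test.domain.com$request_uri;
}

server {
listen 7080;
return 301 https://test1.domain.com$request_uri;
}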

This is my current setup:

user nginx;
worker_processes 2;

error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

events {
worker_connections 1024;
}

http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';

access_log /var/log/nginx/access.log main;

sendfile on;

keepalive_timeout 65;

include /etc/nginx/mime.types;
default_type application/octet-stream;

include /etc/nginx/conf.d/*.conf;

index index.html index.htm;

gzip on;
gzip_types
text/plain
text/css
text/js
text/xml
text/javascript
application/javascript
application/x-javascript
application/json
application/xml
application/xml+rss;

server {
listen 80 default_server;
server_name test.domain.com;

location /.well-known/acme-challenge {
root /var/www/letsencrypt;
}
}

server {
listen 80;
return 301 https://$host$request_uri;
}

server {
listen 443 ssl http2;
server_name test.domain.com;

location / {
proxy_pass http://localhost:8080;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_redirect off;
client_max_body_size 10M;
proxy_connect_timeout 30s;
proxy_read_timeout 60s;
satisfy any;
allow all;
}

error_page 500 502 503 504 /50x.html;
location ~ /50x.(html|png) {
root /usr/share/nginx/html;

}

ssl_certificate /etc/letsencrypt/live/test.domain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/test.domain.com/privkey.pem;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_dhparam /etc/nginx/dhparam.pem;
ssl_protocols TLSv1.1 TLSv1.2;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK';
ssl_prefer_server_ciphers on;
add_header Strict-Transport-Security max-age=15768000;

resolver 8.8.8.8;
}

server {
listen 443 ssl http2;
server_name test1.domain.com;

location / {
proxy_pass http://localhost:7080;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_redirect off;
client_max_body_size 10M;
proxy_connect_timeout 30s;
proxy_read_timeout 60s;
satisfy any;
allow all;
}

error_page 500 502 503 504 /50x.html;
location ~ /50x.(html|png) {
root /usr/share/nginx/html;

}

ssl_certificate /etc/letsencrypt/live/test.domain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/test.domain.com/privkey.pem;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_dhparam /etc/nginx/dhparam.pem;
ssl_protocols TLSv1.1 TLSv1.2;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK';
ssl_prefer_server_ciphers on;
add_header Strict-Transport-Security max-age=15768000;


resolver 8.8.8.8;
}


}

Cookies problem

Hi

Running the latest nginx on CentOS 7, we are dealing with a cookie problem. It looks like the server doesn't delete cookies after they expire. After that, the user can't log in again and we need to manually delete the cookie from the system.
It must be some kind of setting in the nginx config, but I can't find it. Can anyone please tell me what we can do to solve this?

Thank you in advance.
Miha

Reverse proxy to enable Grafana

Hi. I have a server that I do some work on but which is not entirely under my control. So, I'm trying to piece together how it's set up and how to extend its hosting setup. It is currently hosting a Dashing installation (dashing.io). I would like to host an instance of Grafana on the same server as a subdirectory; that is, I would like to go to http://myserver.com/grafana and reach my grafana server, but would like subdirectories, such as http://myserver.com/my_dashing_board01, http://myserver.com/another_dashing_board, etc, to be handled as they currently are. Normally, Grafana is accessed over port 3000, but we are effectively unable to open that port because of an extensive process requirement-- basically, because reasons :-).

Now, it appears that nginx is the handler for port 80 requests:

$ telnet 10.11.12.13 80
Trying 10.11.12.13...
Connected to 10.11.12.13.
Escape character is '^]'.
GET /index.htm HTTP/1.1
host: 10.11.12.13

HTTP/1.1 301 Moved Permanently
Server: nginx/1.4.6 (Ubuntu)
Date: Thu, 14 Dec 2017 17:39:10 GMT
Content-Type: text/html
Content-Length: 193
Connection: keep-alive
Location: https://10.11.12.13/index.htm

<html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/1.4.6 (Ubuntu)</center>
</body>
</html>
Connection closed by foreign host.

So, it seems that nginx would have to handle proxying. In order to get it working, I've added the following to the nginx config file:

upstream grafana {
server grafana:3000;
}

location /grafana/ {
proxy_pass http://grafana:3000;
}

Does it seem like that should work? This is my first time working with nginx, so I have to ask that you use small words. :-) I may have made some very basic errors. Thanks in advance.
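
For reference, a minimal sketch of a sub-path proxy for Grafana (the loopback address is an assumption; the upstream block from the question is not needed for this form):

location /grafana/ {
proxy_pass http://127.0.0.1:3000/; # trailing slash strips the /grafana/ prefix before forwarding
proxy_set_header Host $host;
}

Grafana itself usually also needs its root_url option (in the [server] section of grafana.ini) set to include the /grafana/ sub-path so that the links and assets it generates point to the right place.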

Proxy Pass to Upstream HTTPS

I am using the proxy_pass directive to an upstream HTTPS server. The proxy server is meant for LAN clients. The upstream HTTPS server uses Let's Encrypt. How do I configure SSL verification?

proxy_pass https://upstream.backend;
proxy_ssl_verify on;
proxy_ssl_trusted_certificate <which_file_is_supposed_to_be_here>;
proxy_ssl_verify_depth <what_number_here>;


Also, is it possible to rewrite the http_referer header to HTTPS?
Example: http://192.168.1.5/application/page -> http://upstream.backend/application/page
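
For reference, a minimal sketch of verifying the upstream's Let's Encrypt certificate against the system CA bundle (the bundle path is the common Debian/Ubuntu location and is an assumption, not from the original post):

proxy_pass https://upstream.backend;
proxy_ssl_verify on;
proxy_ssl_verify_depth 2; # leaf plus intermediate is enough for Let's Encrypt chains
proxy_ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt; # system CA bundle (assumed path)
proxy_ssl_name upstream.backend; # name checked against the upstream certificate
proxy_ssl_server_name on; # send SNI to the upstream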

Authentication fails depending on FQDN entered by end user

See attached picture for topology with nginx reverse proxy (fqdn proxy.com) for server with fqdn endpoint.com.
Between internet and LAN is a router forwarding all traffic on port 443 to proxy.com

Internet DNS records for proxy.com and endpoint.com point to Firewall external IP.
LAN DNS records for proxy.com and endpoint.com point to local IP addresses of these hosts.

When a user enters proxy.com, he is proxied to endpoint.com; he gets the login screen of endpoint.com, but authentication fails.
When a user enters endpoint.com, he is proxied to endpoint.com; he gets the login screen of endpoint.com, and authentication succeeds.

Why is authentication failing when proxy.com is used in the end user's browser?

Here is the nginx config for the proxy:

proxy_pass https://endpoint.com;

more_set_input_headers 'Authorization: $http_authorization';
proxy_set_header Accept-Encoding "";

proxy_set_header X_FORWARDED_PROTO https;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Url-Scheme $scheme;
proxy_redirect off;
proxy_max_temp_file_size 0;

[SOLVED] Re: Authentication fails depending on FQDN entered by end user

Upstream SUSE server showed this line in the logs:
"No issuer certificate for certificate in certification path found".

This was solved by entering the full chain of certificates (root, intermediate, server) into the crt file that the proxy presents to the upstream server to identify itself.
The relevant line in the nginx server block:

proxy_ssl_certificate /etc/nginx/ssl/public.crt;
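
For reference, a minimal sketch of building such a chained certificate file (file names are assumptions, not from the original post; the server certificate goes first, followed by the intermediate and root):

cat server.crt intermediate.crt root.crt > /etc/nginx/ssl/public.crt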

Re: Cookies problem

You mean the backend server keeps session entries even after they have expired? I think that's the backend server's fault rather than nginx's.

include module in Debian stretch setup

Hi all,

I have Debian stretch, where nginx was installed the usual way with apt-get install after adding the repository to the sources.list file. nginx is version 1.13.7.

Now I would like to include the following module: ngx_stream_core_module

On the site http://nginx.org/en/docs/stream/ngx_stream_core_module.html there are the following instructions:
This module is not built by default, it should be enabled with the --with-stream configuration parameter.

Unfortunately I have no idea what I need to do exactly.

Can anybody help me and tell me what I need to do to get this working?

Many thanks.
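
For reference, a minimal sketch of how to check what your installed binary already supports (the nginx.org packages are generally built with stream support; the load_module line only applies if the module was built as a dynamic one):

# list the configure arguments of the installed binary
nginx -V 2>&1 | grep -o -- '--with-stream[^ ]*'

# if the module is present but dynamic (--with-stream=dynamic), load it
# at the very top of /etc/nginx/nginx.conf, outside any block:
# load_module modules/ngx_stream_module.so;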

Re: Configuring nginx/php-fpm for high traffic site (5000+ concurrent users)

Hi,

I have a similar project and am looking for a solution. Did you solve the issue, and how?

thanks in advance

regards

Murat

Issues with multiple port passes and using Let's Encrypt

Hello,

We currently use a single host to run a Confluence and JIRA server (Atlassian products) on ports 8080 and 7080. We are not using SSL yet and would like to set this up using Let's Encrypt. Let's Encrypt uses port 80 to renew its certificates once every 60 days or so.

Here is what we are trying to do:
1. All current traffic hitting port 8080 or 7080 gets transferred to HTTPS (443) and handed off to the correct application by reading the URL
2. We still allow port 80 to be open to Let's Encrypt so that it can automatically renew
3. Since JIRA and Confluence used to operate on port 8080 and 7080, we now have to proxy_pass them over to ports 8100 and 7100 respectively

I am running into an issue with the NGINX portion not correctly handing off, and I think there's an issue with my nginx.conf configuration.

Here it is. Please let me know if you notice anything wrong:

---



user nginx;
worker_processes 2;

error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

# include /usr/share/nginx/modules/*.conf;

events {
worker_connections 1024;
}

http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';

access_log /var/log/nginx/access.log main;

sendfile on;
# tcp_nopush on;
# tcp_nodelay on;

keepalive_timeout 65;
# types_hash_max_size 2048;

include /etc/nginx/mime.types;
default_type application/octet-stream;

include /etc/nginx/conf.d/*.conf;

index index.html index.htm;

gzip on;
gzip_types
text/plain
text/css
text/js
text/xml
text/javascript
application/javascript
application/x-javascript
application/json
application/xml
application/xml+rss;

# server_names_hash_bucket_size 128;


# Initial listener to hand off Let's Encrypt renewal
server {
listen 80 default_server;
server_name test.domain.com;

location /.well-known/acme-challenge {
root /var/www/letsencrypt;
}
}

#Second listener to redirect all HTTP traffic to HTTPS and over to the correct proxy_pass by reading the FQDN of the request
server {
listen 80;
return 301 https://$host$request_uri;
}

# Listener on port 8080 redirecting JIRA traffic to correct HTTPS handoff
server {
listen 8080;
return https://$host$request_uri;
}

# Listener on port 7080 redirecting Confluence traffic to correct HTTPS handoff
server {
listen 7080;
return https://$host$request_uri;
}

# Listener on 443 with proxy_pass setup to hand it off to port 8100 (new JIRA port)
server {
listen 443 ssl http2;
server_name test.domain.com;

location / {
proxy_pass http://localhost:8100;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_redirect off;
client_max_body_size 10M;
proxy_connect_timeout 30s;
proxy_read_timeout 60s;
satisfy any;
allow all;
}

## 500 error page - using default HTML directory for CentOS; change if desired. Sample error page and image background included in repository
error_page 500 502 503 504 /50x.html;
location ~ /50x.(html|png) {
root /usr/share/nginx/html;

}

ssl_certificate /etc/letsencrypt/live/test.domain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/test.domain.com/privkey.pem;

## SSL Configuration
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;

# Diffie-Hellman parameter for DHE ciphersuites
ssl_dhparam /etc/nginx/dhparam.pem;

# Protocol and Cipher configuration
ssl_protocols TLSv1.1 TLSv1.2;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK';
ssl_prefer_server_ciphers on;

# HSTS - instructs browsers to only connect to you via HTTPS in the future
add_header Strict-Transport-Security max-age=15768000;


resolver 8.8.8.8;
}

# Listener on 443 with proxy_pass setup to hand it off to port 7100 (new Confluence port)
server {
listen 443 ssl http2;
server_name test1.domain.com;

location / {
proxy_pass http://localhost:7100;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_redirect off;
client_max_body_size 10M;
proxy_connect_timeout 30s;
proxy_read_timeout 60s;
satisfy any;
allow all;
}

## 500 error page - using default HTML directory for CentOS; change if desired. Sample error page and image background included in repository
error_page 500 502 503 504 /50x.html;
location ~ /50x.(html|png) {
root /usr/share/nginx/html;

}

ssl_certificate /etc/letsencrypt/live/test.domain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/test.domain.com/privkey.pem;

## SSL Configuration
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;

# Diffie-Hellman parameter for DHE ciphersuites
ssl_dhparam /etc/nginx/dhparam.pem;

# Protocol and Cipher configuration
ssl_protocols TLSv1.1 TLSv1.2;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK';
ssl_prefer_server_ciphers on;

# HSTS - instructs browsers to only connect to you via HTTPS in the future
add_header Strict-Transport-Security max-age=15768000;


resolver 8.8.8.8;
}


}
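
For reference, one detail worth checking in the port-redirect blocks above: "return URL;" without a status code sends a 302, so an explicit 301 with the intended hostname may be clearer (a sketch following the hostnames used elsewhere in this config, not a confirmed fix):

server {
listen 8080;
return 301 https://test.domain.com$request_uri;
}

server {
listen 7080;
return 301 https://test1.domain.com$request_uri;
}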

kevent() reported about an closed connection error

Hi,
We are using FreeBSD, nginx, and php-fpm on our server for a PHP application. Sometimes "kevent() reported about an closed connection (54: Connection reset by peer) while reading response header from upstream..." appears in the log file and I get a "502 Bad Gateway" error. A simple php-fpm restart solves the problem.
Any solution?
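
For reference, a few php-fpm pool settings that are commonly reviewed when upstream workers drop connections like this (the path and values are illustrative assumptions, not a diagnosis of this particular setup):

; e.g. /usr/local/etc/php-fpm.d/www.conf (assumed FreeBSD layout)
pm = dynamic
pm.max_children = 20 ; raise if the pool runs out of workers under load
pm.max_requests = 500 ; recycle workers periodically to contain leaks
request_terminate_timeout = 60s ; kill runaway scripts instead of letting them wedge a worker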

Stale Blocking in Proxy Cache

Hello everyone, I'm in big trouble. I'm using nginx as a proxy cache and have enabled the directives: proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504; proxy_cache_lock on; proxy_cache_background_update on;. When the cache expires, stale content is served, but at the same time the triggering request is blocked until the content is updated. The other requests come back as STALE/UPDATING, but this one request that hangs is giving me a lot of headaches. How can I solve this?
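
For reference, the lock-related directives that bound how long a request can be held while a cache element is being refreshed (a sketch of the relevant knobs, not a confirmed fix for this case):

proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
proxy_cache_background_update on;
proxy_cache_lock on;
proxy_cache_lock_age 5s; # after this, one more request may be passed to the upstream
proxy_cache_lock_timeout 5s; # after this, a waiting request is released (its response is not cached)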

DENY ALL ONLY ACCEPT SOME IP

Hello,
Is it possible to add something to default.conf so that I can deny all access to my page and only allow some IPs?

Here is my default.conf, but I don't know how to do it...

server {
set $rootpath "/var/www/ipmp";
root $rootpath;
listen 80;

if ($request_uri ~ (/).*) {
rewrite ^ https://$host$request_uri? permanent;
}
if ($request_uri ~ (/mobile/).+) {
rewrite ^ https://$host$request_uri? permanent;
}
if ($request_uri ~ (/interactive/).*) {
rewrite ^ https://$host$request_uri? permanent;
}

include /etc/nginx/part.d/*.part;
}
server {
set $rootpath "/var/www/ipmp";
root $rootpath;
listen 443 ssl;
keepalive_timeout 70;

server_name $host;
ssl_certificate /ha_shared/ipmp/config/certificates/cert.csr;
ssl_certificate_key /ha_shared/ipmp/config/certificates/cert.key;
ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_session_cache shared:SSL:20m;
ssl_session_timeout 10m;

include /etc/nginx/part.d/*.part;
include /etc/nginx/part.d/onlyssl/*.part;
}

Where do I need to add that deny all? My root path is /var/www/ipmp. I hope someone can help.
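
For reference, a minimal sketch using allow/deny at server level (the addresses are placeholders, not from the original post; the same lines go in both the port 80 and port 443 server blocks, before the include lines):

allow 192.0.2.10; # a single permitted client
allow 198.51.100.0/24; # a permitted subnet
deny all; # everyone else receives 403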

root path customized

Hello, I have configured my DNS CNAME: *.mydomain.com

Is this possible with nginx?

If the URL does not have a subdomain, use this root: /var/www/mydomain.com
If the URL has www, redirect to the version without www (that I already did)
If the URL has the subdomain system (system.mydomain.com), use this root: /var/www/system
If the URL has any other subdomain (company.mydomain.com or ong.mydomain.com), use this root: /var/www/companies

Is that possible?

Thank you
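
For reference, a minimal sketch of how the roots could be split across server blocks (paths and names are taken from the post; the www redirect is shown for completeness):

server {
listen 80;
server_name www.mydomain.com;
return 301 http://mydomain.com$request_uri; # drop the www
}

server {
listen 80;
server_name mydomain.com;
root /var/www/mydomain.com;
}

server {
listen 80;
server_name system.mydomain.com; # exact names win over wildcards
root /var/www/system;
}

server {
listen 80;
server_name *.mydomain.com; # any other subdomain
root /var/www/companies;
}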

Missing /etc/nginx/sites-available/default

I was following a tutorial to set up nginx on Ubuntu 16.04.

I am supposed to edit this file: /etc/nginx/sites-available/default,
but it's not there; there is no folder like sites-available.
What should I do?
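
For reference, a minimal sketch of the two usual options; which one applies depends on which package installed nginx (the sites-available layout comes from Ubuntu's own package, while the nginx.org packages use conf.d), and the paths below are the standard ones rather than anything from the original post:

# option 1: put your server block where the nginx.org package expects it
sudo nano /etc/nginx/conf.d/mysite.conf

# option 2: recreate the Debian/Ubuntu layout and include it from nginx.conf
sudo mkdir -p /etc/nginx/sites-available /etc/nginx/sites-enabled
# then, inside the http { } block of /etc/nginx/nginx.conf, add:
# include /etc/nginx/sites-enabled/*;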

Nginx reverse proxy

Hello,
I am trying to configure nginx, which is installed on Ubuntu 16.10.
I have the following infrastructure

wan
|
nginx reverse proxy with domain ssl.example.com
|
web server http.example.com


http.example.com points to ssl.example.com's IP address.

When the client opens http://http.example.com, it should be redirected to https://http.example.com directly.
The client will establish SSL with my reverse proxy ssl.example.com (I have already installed a Let's Encrypt cert).
My reverse proxy should then request http.example.com, with no SSL.

It's like Cloudflare.

So, what configuration should I use?
Also, how can I load balance across two web servers behind the reverse proxy?
thanks.
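
For reference, a minimal sketch of the HTTP-to-HTTPS redirect, SSL termination, and a two-server upstream for load balancing (backend addresses and certificate paths are placeholders/assumptions; the certificate must cover the name the client actually uses):

upstream backend {
server 10.0.0.11:80; # first web server (placeholder)
server 10.0.0.12:80; # second web server (placeholder)
}

server {
listen 80;
server_name http.example.com;
return 301 https://$host$request_uri;
}

server {
listen 443 ssl;
server_name http.example.com;
ssl_certificate /etc/letsencrypt/live/http.example.com/fullchain.pem; # assumed path
ssl_certificate_key /etc/letsencrypt/live/http.example.com/privkey.pem; # assumed path

location / {
proxy_pass http://backend; # plain HTTP to the backends
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
}
}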

Static files slooooooow

Hi,
I have recently moved my site from shared hosting to a VDS with Nginx. The performance improvement on every page that does not contain heavy elements is very obvious; pages load much faster. However, there is something wrong with the static files. Starting with the ~170 KB font file: it takes a few seconds for the font to "apply" when I visit the site in a fresh anonymous tab. And it is far worse with bigger files: PDF files take ages to load.

This Pingdom report ( https://tools.pingdom.com/#!/dWuIkE/https://www.bykasov.com/2016/oda-sobakam-severa ) shows that there are several attempts to access the PDF file. Why is that?

While the average text-page load on shared hosting was slower, loading these static files took far less time there (even on pages with several PDFs at once, like category pages).

Apparently there is something wrong with my configuration and I would appreciate any help.

My nginx.conf:

# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
worker_connections 1024;
}

http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';

access_log /var/log/nginx/access.log main;

sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;

include /etc/nginx/mime.types;
default_type application/octet-stream;

server_names_hash_bucket_size 64;

# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
charset utf-8;

server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
root /usr/share/nginx/html;

# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
include /etc/nginx/hhvm.conf;

location / {
}

error_page 404 /404.html;
location = /40x.html {
}

error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}

# Settings for a TLS enabled server.
#
# server {
# listen 443 ssl http2 default_server;
# listen [::]:443 ssl http2 default_server;
# server_name _;
# root /usr/share/nginx/html;
#
# ssl_certificate "/etc/pki/nginx/server.crt";
# ssl_certificate_key "/etc/pki/nginx/private/server.key";
# ssl_session_cache shared:SSL:1m;
# ssl_session_timeout 10m;
# ssl_ciphers HIGH:!aNULL:!MD5;
# ssl_prefer_server_ciphers on;
#
# # Load configuration files for the default server block.
# include /etc/nginx/default.d/*.conf;
#
# location / {
# }
#
# error_page 404 /404.html;
# location = /40x.html {
# }
#
# error_page 500 502 503 504 /50x.html;
# location = /50x.html {
# }
# }

gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types image/svg+xml text/plain text/xml text/css text/javascript application/xml application/xhtml+xml application/rss+xml application/javascript application/x-javascript application/x-font-ttf application/vnd.ms-fontobject font/opentype font/ttf font/eot font/otf;

}


My site conf file:

server {
listen 80;
server_name bykasov.com www.bykasov.com;
return 301 https://www.bykasov.com$request_uri;
}

server {
listen 443 ssl http2;
server_name bykasov.com www.bykasov.com;

ssl_certificate /etc/letsencrypt/live/bykasov.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/bykasov.com/privkey.pem;

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;

ssl_dhparam /etc/ssl/certs/dhparam.pem;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;

ssl_stapling on;
ssl_stapling_verify on;
add_header Strict-Transport-Security max-age=15768000;
access_log (.....removed....);

# The rest of your server block
root (....removed....);
index index.php index.html index.htm;

directio 300k;
#output_buffers 2 1M;

#sendfile on;
#sendfile_max_chunk 256k;

location ^~ /.well-known/acme-challenge/ {
}

location / {
try_files $uri $uri/ /index.php?$args;
}

error_page 404 /404.html;
location = /50x.html {
root /(...removed....);
}

location ~* /wp-includes/.*.php$ {
deny all;
access_log off;
log_not_found off;
}

location ~* /wp-content/.*.php$ {
deny all;
access_log off;
log_not_found off;
}

location ~ ^/(wp-config\.php) {
deny all;
access_log off;
log_not_found off;
}

location ~ ^/(wp-login\.php) {
# allow (.....removed.....);
deny all;
}

location ~ \.php$ {
try_files $uri =404;
fastcgi_pass unix:/var/run/hhvm/hhvm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}


location ~ \.(js|css|png|jpg|jpeg|gif|ico|html|woff|woff2|ttf|svg|eot|otf)$ {
add_header "Access-Control-Allow-Origin" "*";
expires 1M;
access_log off;
add_header Cache-Control "public";
}

}



The directio / output_buffers / sendfile part is something I've tried, but I could not see it making any difference.
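
For reference, directio 300k makes nginx read files of 300 KB and larger with O_DIRECT and disables sendfile for them, which bypasses the OS page cache and can slow down delivery of fonts and PDFs; a minimal sketch of reverting to plain sendfile delivery (something to try, not a confirmed fix):

sendfile on;
tcp_nopush on;
# directio 300k; # disabled: O_DIRECT bypasses the page cache for files >= 300k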