Channel: Nginx Forum - How to...

Return all unfound pages to 444

I occasionally have issues with certain IP addresses trying to access locations for scripts that have vulnerabilities. You can see some of these below:

GET /bannerslideradmin/adminhtml_bannerslider/index HTTP/1.1
GET /iwdall/adminhtml_support/index HTTP/1.1
GET /soldtogether/adminhtml_order/index HTTP/1.1

I am running Magento and Nginx 1.13. In my /etc/nginx/sites-enabled/mydomain.com.conf file, I have added the following to block some of the common directories that the scanners are looking for.

# Denied locations require a "^~" to prevent regexes (such as the PHP handler below) from matching
# http://nginx.org/en/docs/http/ngx_http_core_module.html#location
location ^~ /app/ { return 444; }
location ^~ /service-unavailable/ { return 444; }
location ^~ /a2billing/ { return 444; }
location ^~ /sales/guest/form { return 444; }
location ^~ /administrator/ { return 444; }
location ^~ /wp-login.php { return 444; }
location ^~ /wp-admin/ { return 444; }
location ^~ /wp-content/ { return 444; }
location ^~ /wordpress/ { return 444; }
location ^~ /assets/ { return 444; }
location ^~ /plugins/ { return 444; }
location ^~ /wp/ { return 444; }
location ^~ /scripts/ { return 444; }
location ^~ /blog/ { return 444; }
location ^~ /phpmyadmin/ { return 444; }
location ^~ /backup/ { return 444; }
location ^~ /backups/ { return 444; }

This is fine when a request matches one of these locations. However, there are many more locations that aren't on this list, and for those my website returns a nice and pretty 404 page with the website logo, fancy CSS, JavaScript and everything else that goes with a modern website. That means RAM gets used on the VPS, RAM usage goes up, and the server gets slower.

I would like to drop all unknown locations with 444 so that no response is sent back to the client and minimal resources are used. How can I do this?
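One hedged sketch of the catch-all approach being asked about: enumerate the legitimate entry points explicitly, then let every other URI fall through to a `location /` that returns 444. The paths below are assumptions about a typical Magento docroot, not this poster's actual layout:

```nginx
# Sketch, not a drop-in config: whitelist known entry points, then drop
# everything else with 444 (nginx closes the connection, sends nothing,
# and never renders the heavy 404 page).
location = /index.php {
    # ... existing PHP/FastCGI handler goes here ...
}
location /media/ { try_files $uri =404; }
location /skin/  { try_files $uri =404; }
location /js/    { try_files $uri =404; }

# Anything that didn't match an explicit location above:
location / {
    return 444;
}
```

Caveat: Magento's SEO-friendly URL rewrites normally rely on `location /` forwarding to `index.php`, so a strict allow-list like this only works if all legitimate URLs can be enumerated; otherwise the blocklist approach already in use is the safer pattern.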

SSL nginx on tomcat Server

Hi,

After configuring Nginx SSL with Tomcat 7:

If I type the URL https://test.rockwell.co.in/testril, the page works fine and is secured. But when I log in to my application, the URL changes to http://test.rockwell.co.in:5323/testril/, which is not expected and not secured.

Where am I going wrong? Please guide me.

Nginx config:

# Tomcat we're forwarding to
upstream tomcat_server {
    server 127.0.0.1:9090 fail_timeout=0;
}

server {
    listen 443 ssl;
    server_name rockwell.co.in;

    # HTTPS setup
    ssl on;
    ssl_certificate rbundle.crt;
    ssl_certificate_key testserver.key;

    ssl_session_timeout 5m;

    ssl_protocols SSLv2 SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';

    location / {
        # Forward SSL so that Tomcat knows what to do
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://test.rockwell.co.in:5323;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Real-IP $remote_addr;

        proxy_redirect off;
        proxy_connect_timeout 240;
        proxy_send_timeout 240;
        proxy_read_timeout 240;
    }
}

Tomcat server conf:

<Service name="Catalina">

    <Connector port="5323" protocol="HTTP/1.1"
               connectionTimeout="20000"
               URIEncoding="UTF-8"
               redirectPort="8443"
               acceptCount="100"
               compressableMimeType="text/html,text/xml,text/javascript,application/x-javascript,application/javascript"
               compression="on"
               compressionMinSize="2048"
               disableUploadTimeout="true"
               enableLookups="false"
               maxHttpHeaderSize="8192"
               Server=" "
               usehttponly="true" />

    <!-- A "Connector" using the shared thread pool -->
    <Connector executor="tomcatThreadPool"
               port="9090" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443" />

    <Engine name="Catalina" defaultHost="localhost">

        <Realm className="org.apache.catalina.realm.LockOutRealm">
            <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
                   resourceName="UserDatabase"/>
        </Realm>

        <Host name="localhost" appBase="webapps"
              unpackWARs="true" autoDeploy="true">

            <Valve className="org.apache.catalina.valves.RemoteIpValve"
                   remoteIpHeader="x-forwarded-for"
                   ProxiesHeader="x-forwarded-by"
                   protocolHeader="x-forwarded-proto"
                   protocolHeaderHttpsValue="https"/>

            <!-- Note: The pattern used is equivalent to using pattern="common" -->
            <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
                   prefix="localhost_access_log." suffix=".txt"
                   pattern="%h %l %u %t &quot;%r&quot; %s %b" />

        </Host>
    </Engine>
</Service>

Re: SSL nginx on tomcat Server

https://serverfault.com/questions/172542/configuring-nginx-for-use-with-tomcat-and-ssl
See the connector section.
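For reference, the "connector section" fix usually means telling Tomcat that this connector sits behind an HTTPS proxy, so that any redirects Tomcat generates (e.g. after login) use the proxy's public scheme, host and port instead of the connector's own. A sketch against the connector from the question; the attribute values are assumptions based on the posted setup:

```xml
<!-- Sketch: tell Tomcat that requests on this connector arrived via an
     HTTPS proxy at test.rockwell.co.in:443, so redirect URLs it builds
     become https://test.rockwell.co.in/... rather than http://...:5323/... -->
<Connector port="5323" protocol="HTTP/1.1"
           connectionTimeout="20000"
           URIEncoding="UTF-8"
           proxyName="test.rockwell.co.in"
           proxyPort="443"
           scheme="https"
           secure="true" />
```

Separately, note that the nginx config defines an `upstream tomcat_server` block but then proxies to the public hostname on port 5323; pointing `proxy_pass` at the upstream (or at 127.0.0.1) would keep port 5323 off the public URL entirely.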

Requests not passed to FCGI

upstream site-main-5.6 {
    server php-5-6.example.org:1008;
}

server {
    listen 80;
    server_name site-main.example.org;

    root "/home/site-main/htdocs";

    location ^~ /gallery/test/ {
        default_type "text/plain";
        #return 200 "@app root.";
        try_files $uri @app;
        autoindex off;
    }

    location @app {
        default_type "text/plain";
        #return 200 "@app root.";

        rewrite_log on;
        error_log "/home/site-main/logs/error.debug.log" debug;

        include extra/fastcgi_php_fpm;

        fastcgi_param SCRIPT_FILENAME "/home/site-main/app/app.php";

        fastcgi_pass site-main-5.6;
    }

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    # deny access to .ht* files
    location ~ /\.ht {
        deny all;
    }
}


This configuration just does not work unless the local webroot mirrors the remote FastCGI filesystem.
nginx doesn't even TRY to pass the request to the backend unless it sees a matching local file.
What am I missing?

ftp proxy with nginx

Hello,

I have a Proxmox virtualisation setup.
Each virtual machine runs an FTP service.
On the node I use nginx (1.13) to proxy HTTP/HTTPS between the virtual machines and it works well, but FTP does not work.

1. Is it possible?
2. Does anyone have an example configuration? (The idea is that ftp.domain.fr should be proxied to the first VM, and ftp-preprod.domain.fr must be routed to the second VM.)

Best,
Bruno

Re: ftp proxy with nginx

I strongly suggest eradicating FTP with extreme prejudice.
Use SFTP or SCP at least.

Re: ftp proxy with nginx

The protocol is a separate problem; whether it's FTP or SFTP, I think the routing issue is the same. How do I route to the correct VM?

Re: ftp proxy with nginx

No, the problem is different, because protocols are inherently different.
SSH is easily proxied. FTP is not.

Re: Requests not passed to FCGI

Found my mistake. I had

if (!-f $document_root$fastcgi_script_name) {
    return 404;
}

hidden deep within my includes. I didn't notice it until now.

Punycode domains

Hey,
I pointed the following domain at my nginx server: www.xn--xample-9gg.com (just an example).
When I browse this domain it isn't displayed as its native name, which is www.ҽxample.com.

I changed the configuration file to include the following:
server_name www.ҽxample.com www.xn--xample-9gg.com;

However, still no success.

Any idea what the issue is here?

Thanks!
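For what it's worth: on the wire, the Host header is always the ASCII (punycode/ACE) form, so that is the name nginx needs to match; whether the address bar then displays the Unicode form is purely a browser decision. Browsers deliberately keep confusable mixed-script names in punycode as an anti-spoofing measure (the ҽ here is a Cyrillic letter), so no server-side configuration will change the display. A minimal sketch of the server-side part:

```nginx
server {
    listen 80;
    # The Host header sent by clients is the ACE/punycode form,
    # so this is the only name nginx actually needs to match:
    server_name www.xn--xample-9gg.com;
    # ... rest of the site config ...
}
```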

Complex Proxying (NAT) with NGINX

Hi everyone,

I have a challenge I need to solve with NGINX. I have tried just about everything and I can't get this to fully work.

I have a front-end-facing NGINX server which hosts several sites.
I need one site to proxy to a back-end server that is not on the same network but reachable over an MPLS link, and to load that site under "https://www.example.com/hiddensite/". That back-end server is a full website with a DB and reports. It runs plain unencrypted HTTP, since it is in-house and was built that way. Outsiders do not have direct access to the back-end server I'm trying to NAT-proxy with NGINX.

I have gotten the site's homepage to load through the NGINX proxy, but users cannot click links and browse the site at all.

Please help me if anyone know how to get this done properly.

[internet]---[firewall]---[nginx server]---[lan1]---[MPLS]---[lan2]---[hiddensite server]
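A sketch of the usual sub-path reverse-proxy setup (the back-end address is a placeholder). The trailing slashes on both `location` and `proxy_pass` matter: they strip /hiddensite/ before the request reaches the back end. The "links don't work" symptom is typically the back end emitting absolute paths like /reports/foo that escape the /hiddensite/ prefix; `sub_filter` can rewrite those in HTML responses, assuming they arrive uncompressed:

```nginx
location /hiddensite/ {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto https;
    # Trailing slash strips /hiddensite/ before forwarding.
    # Back-end address over the MPLS link is an assumption:
    proxy_pass http://10.0.0.10/;

    # Rewrite absolute links in HTML so they stay under /hiddensite/.
    # Requires the back end to send uncompressed responses:
    proxy_set_header Accept-Encoding "";
    sub_filter 'href="/' 'href="/hiddensite/';
    sub_filter 'src="/'  'src="/hiddensite/';
    sub_filter_once off;
}
```

If the back-end app builds links with its own absolute hostname (not just absolute paths), that hostname would need rewriting too, or the app reconfigured with a base URL of /hiddensite/.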

Re: ftp proxy with nginx

I found this interesting module; it seems to work with FTP, but...

https://www.nginx.com/resources/admin-guide/tcp-load-balancing/

...I don't know how to configure it to use different subdomains as the source.

The server_name directive is not allowed inside the stream module.


So at the moment I have this:

stream {
    server {
        listen 21;
        proxy_pass ip_vm_1:21;
    }
    server {
        listen 20;
        proxy_pass ip_vm_1:20;
    }
}

When I introduce this, it doesn't work:

stream {
    server {
        listen 21;
        server_name preprod.domain.fr;
        proxy_pass ip_vm_1:21;
    }
    server {
        listen 20;
        server_name preprod.domain.fr;
        proxy_pass ip_vm_1:20;
    }
    server {
        listen 21;
        server_name prod.domain.fr;
        proxy_pass ip_vm_2:21;
    }
    server {
        listen 20;
        server_name prod.domain.fr;
        proxy_pass ip_vm_2:20;
    }
}

Result: "server_name is not allowed here" -> KO

How can I differentiate the source subdomain?
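The stream module (and plain FTP itself) has no concept of a host name: by the time the TCP connection arrives, the name the client resolved is gone, which is why `server_name` is rejected there. The usual workaround is to give each subdomain its own IP address (or port) on the node and key the `listen` directives on that. A sketch with placeholder addresses:

```nginx
stream {
    # DNS: ftp-preprod.domain.fr -> 192.0.2.1 (placeholder address)
    server {
        listen 192.0.2.1:21;
        proxy_pass ip_vm_1:21;
    }
    # DNS: ftp.domain.fr -> 192.0.2.2 (placeholder address)
    server {
        listen 192.0.2.2:21;
        proxy_pass ip_vm_2:21;
    }
}
```

Note that FTP also opens separate data connections (active port 20 or a passive port range), which this control-channel proxying does not cover; that is part of why the earlier replies suggested SFTP instead.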

NGINX not starting git-http-server

I'm working on a Git server behind Nginx using the git-http-backend script.

Currently I have a Passenger server serving a Rails app at port 2222. Behind the /git/ folder, however, I want to serve git repositories. The thing is, Nginx doesn't seem to start the script. Whether I use a socket file or a different localhost port, I get a 502 error. This is showing in the Nginx error log:

2017/06/07 18:43:03 [error] 2147#0: *3 no live upstreams while connecting to upstream, client: 127.0.0.1, server: localhost, request: "GET /git/me/repo HTTP/1.1", upstream: "fastcgi://localhost", host: "localhost:2222"

It seems nginx is not starting the process to handle the git files.

This is my location part of the nginx setup:

location ~ /git(/.*) {
    include fastcgi.conf;
    include fastcgi_params;
    #fastcgi_pass 127.0.0.1:8888;
    fastcgi_param SCRIPT_FILENAME /Library/Developer/CommandLineTools/usr/libexec/git-core/git-http-backend;
    # export all repositories under GIT_PROJECT_ROOT
    fastcgi_param GIT_HTTP_EXPORT_ALL "";
    fastcgi_param GIT_PROJECT_ROOT /Users/userx/Documents/Projecten/repositories;
    fastcgi_param PATH_INFO $1;

    fastcgi_keep_conn on;
    fastcgi_connect_timeout 20s;
    fastcgi_send_timeout 60s;
    fastcgi_read_timeout 60s;
    #fastcgi_pass 127.0.0.1:9001;
    fastcgi_param REMOTE_USER $remote_user;
    #fastcgi_pass unix:/var/run/fcgi/fcgiwrap.socket;
    fastcgi_pass localhost:9001;
}

I can't figure this out alone; can anybody share their thoughts on this?
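One thing worth checking: nginx never starts FastCGI processes itself; `fastcgi_pass` only connects to something that must already be listening, and "no live upstreams" is what you get when nothing is. git-http-backend is a plain CGI program, so it is normally run behind a FastCGI-to-CGI bridge such as fcgiwrap, started separately. A sketch under that assumption (the socket path is a placeholder):

```nginx
# fcgiwrap must already be listening on this socket before nginx can use it,
# e.g. started separately with something like:
#   spawn-fcgi -s /var/run/fcgiwrap.socket -- /usr/local/sbin/fcgiwrap
# nginx only connects to the socket; it will not launch git-http-backend itself.
location ~ /git(/.*) {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME /Library/Developer/CommandLineTools/usr/libexec/git-core/git-http-backend;
    fastcgi_param GIT_HTTP_EXPORT_ALL "";
    fastcgi_param GIT_PROJECT_ROOT /Users/userx/Documents/Projecten/repositories;
    fastcgi_param PATH_INFO $1;
    fastcgi_pass unix:/var/run/fcgiwrap.socket;
}
```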

microcache pread() bad file descriptor error

Hello,

I'm getting the error below for multiple files on a server, and I was wondering if anyone has any ideas what could be causing it or how to fix it. It's obviously related to the microcache, given the location it's complaining about, and turning off caching fixes it, but beyond that point I'm stumped. Running nginx/1.10.0 (Ubuntu) on Ubuntu 16.04:
2017/06/07 14:08:54 [crit] 32493#32493: *5961 pread() "/var/cache/CACHENAME/6/dc" failed (9: Bad file descriptor), client: IP , server: SITENAME.dev, request: "GET /themes/basic/js/build/loadIn.js?v=1.x HTTP/1.1", host: "SITENAME.dev", referrer: "http://SITENAME.dev/"


Example of the microcache config from the server:

gzip off;

# Setup var defaults
set $no_cache "";

# If non GET/HEAD, don't cache & mark user as uncacheable for 1 second via cookie
if ($request_method !~ ^(GET|HEAD)$) {
    set $no_cache "1";
}

# Drop no-cache cookie if need be
# (for some reason, add_header fails if included in prior if-block)
if ($no_cache = "1") {
    add_header Set-Cookie "_mcnc=1; Max-Age=2; Path=/";
    add_header X-Microcachable "0";
}

# Bypass cache if no-cache cookie is set
if ($http_cookie ~* "_mcnc") {
    set $no_cache "1";
}

# Bypass cache if flag is set
proxy_no_cache $no_cache;
proxy_cache_bypass $no_cache;

# Set cache zone
proxy_cache CACHENAME;
# Set cache key to include identifying components
proxy_cache_key $scheme$host$request_method$request_uri;
# Only cache valid HTTP 200 responses for 1 second
proxy_cache_valid 200 1s;
# Serve from cache if currently refreshing
proxy_cache_use_stale updating;

proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto http;
proxy_set_header X-Forwarded-Port 80;
proxy_set_header Host $host;
}

Nginx is not picking error_page

Hi Team,

I could use your help on this matter. I am trying to implement a redirect based on the 405 error page, but Nginx does not execute my redirection.
Please help me fix this; I have been stuck on it for the last 2 days.

location / {
    resolver 10.79.157.2 valid=30s;
    set $upstream_core "ssoapp.devsso.veri.internal:9443";
    proxy_pass https://$upstream_core;
    error_page 502 /DEV.502.nginx.html;
    error_page 405 /DEV.502.nginx.html;
    proxy_read_timeout 1200;
    proxy_send_timeout 1200;
    proxy_connect_timeout 1200;
    proxy_ignore_client_abort on;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
}

location = /DEV.502.nginx.html {
    root /opt/applications/nginx/nginx-verison-1.8.0/html/;
}

location = @app {
    return 301 /;
}


Delay after reconnection to NGINX

We have an NGINX server with Flowplayer. Flowplayer is configured with a 0.5 second buffer. But if I disconnect the NGINX server from the network and reconnect it again, I get a big delay on the client of more than 10 seconds. Is there a way to reduce the delay, or to force the player to reconnect to NGINX and play from the time NGINX was reconnected? Thanks.

How to send http request in c?

I've been stuck on this problem for days. I think I should use ngx_http_connection_t, ngx_http_request_t and related functions, but I can't find any useful documentation for this (ironic: an HTTP server with no documentation at all on how to send an HTTP request). I read the source code, but with no luck. I'm not a decision maker, so I can't introduce OpenResty to our project. Does anyone know how to achieve this?

redirect http to https, but exclude API

We switched a site from HTTP to HTTPS, but for compatibility reasons we need some API endpoints to still be reachable over HTTP.

Our current redirect is:

server {
    listen 80;
    server_name www.mysite.com;

    location ~ /.well-known {
        root /var/www/html;
        allow all;
    }

    return 301 https://$server_name$request_uri;
}

What do I need to change or add to make sure POST requests to http://www.mysite.com/api/reportNew are not redirected to HTTPS?
I tried some variants with location and root, but somehow never succeeded.
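A sketch of one way to do it with plain location blocks; the /api/ handler details are an assumption, since how that path is actually served isn't shown. The key change is moving the blanket redirect out of server scope (where it runs before any location matching) into a `location /`, and giving /api/ its own location:

```nginx
server {
    listen 80;
    server_name www.mysite.com;

    location ~ /.well-known {
        root /var/www/html;
        allow all;
    }

    # Keep the API reachable over plain HTTP; adjust the handler to
    # however the HTTPS server block serves it (proxy, FastCGI, ...):
    location /api/ {
        proxy_pass http://127.0.0.1:8080;   # placeholder back end
    }

    # Everything else goes to HTTPS.
    location / {
        return 301 https://$server_name$request_uri;
    }
}
```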

Re: redirect http to https, but exclude API

http {
    map $request_uri $requri {
        default 1;
        /well-known 0;
    }
    ...
    server {
        listen 80;
        server_name www.mydomain.eu;
        root '/webroot/www.mydomain.eu';
        if ($requri) { return 301 https://www.mydomain.eu$request_uri; }
        location / {
            try_files $uri $uri/ =404;
            index index.html index.htm;
        }
    }
}