Channel: Nginx Forum - How to...
Viewing all 4759 articles
Browse latest View live

How to produce stale event for nginx

I am studying nginx by reading and debugging the source code. I want to trigger a stale event; what is the simplest way to make that happen?
Thanks very much

NGINX with Web File Manager - Sprutio

I'm using an excellent open source web file manager (Sprutio) on my VPS (Debian 9, MariaDB, PHP 7, NGINX, ISPConfig). But I do not understand how to configure it to work over SSL, since Sprutio's documentation is not good.

Below is the web link file manager:
https://sprut.io/en/install

Below is the Git link where there is a discussion about SSL:
https://github.com/LTD-Beget/sprutio/issues/65
https://github.com/LTD-Beget/sprutio/issues/70

Best Regards!
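Absent official SSL documentation, a common pattern is to put a TLS-terminating nginx server block in front of Sprutio's plain-HTTP frontend. This is purely a sketch: the hostname, certificate paths, and the backend port 8080 are my assumptions; use whatever port your Sprutio install actually listens on.

```nginx
server {
    listen 443 ssl;
    server_name files.example.com;            # illustrative hostname

    ssl_certificate     /etc/ssl/certs/sprutio.crt;
    ssl_certificate_key /etc/ssl/private/sprutio.key;

    location / {
        # forward everything to the plain-HTTP Sprutio frontend
        proxy_pass http://127.0.0.1:8080;     # assumed Sprutio port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```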

Redirect escaped_fragment

Hi everyone :)

I have some URLs like this:

https://www.mydomain.com/23/something?amp%25252525253B_escaped_fragment_=&p=3
https://www.mydomain.com/23/something?%253Fp%253D5=
https://www.mydomain.com/something?_escaped_fragment_=

I want to redirect these URLs to:

https://www.mydomain.com/23/something
https://www.mydomain.com/23/something
https://www.mydomain.com/something

How can I do this?
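A sketch of one way to do this in nginx alone. The first rule covers the _escaped_fragment_ cases (examples 1 and 3); the second pattern is only a guess at catching the double-encoded leftover query of example 2:

```nginx
# inside the existing server block for www.mydomain.com:

# any query string containing _escaped_fragment_ (even wrapped in
# layers of percent-encoding) -> redirect to the bare path
if ($args ~* "_escaped_fragment_") {
    return 301 $uri;    # $uri is the path without the query string
}

# guess at the second example: the query is only percent-encoded
# junk such as %253Fp%253D5=
if ($args ~ "^%25") {
    return 301 $uri;
}
```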

Is it possible to build some simple (?) logic on nginx (plus?) instead of creating backend services?

Hi!
I need to split a primary POST JSON request into 2 subrequests (a GET and a POST).
We have some proprietary software and need to interact with it using two-step logic:

1. Send a question with procId, like: GET https://backend1/gettask?process=[procId]. The value of procId should be taken from the primary request (from the URL or from the JSON; either way, it's our software and our design). The response is JSON like '[{"taskid": "aa1-aa2", "descry": "some info", ...}]'; from this JSON we take the value of "taskid" (in this example, "aa1-aa2").

2. Send a POST to https://backend2/[taskId value]/GetData with a Content-Type: application/json header and the JSON body from the primary POST request. The response of this second request should be forwarded back to our software as the final response.
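In open source nginx this kind of request orchestration is usually done with the njs scripting module (js_content plus r.subrequest). Below is a rough, untested sketch: the location names are mine, and proxy_pass with a variable needs a resolver; adapt to your setup.

```nginx
# nginx.conf fragment; requires the njs module
# (load_module modules/ngx_http_js_module.so;)
js_import twostep from conf.d/twostep.js;

server {
    listen 8080;

    location /process {
        js_content twostep.handle;   # entry point for the primary POST
    }

    location /step1 {
        internal;
        proxy_pass https://backend1/gettask;
    }

    location ~ ^/step2/(.+)$ {
        internal;
        resolver 127.0.0.53;         # needed: proxy_pass uses a variable
        proxy_pass https://backend2/$1/GetData;
    }
}
```

```js
// conf.d/twostep.js - njs sketch
async function handle(r) {
    // step 1: look up the task id for this process
    const res = await r.subrequest('/step1',
                                   { args: 'process=' + r.args.process });
    const taskid = JSON.parse(res.responseText)[0].taskid;

    // step 2: POST the original JSON body to backend2/<taskid>/GetData
    const out = await r.subrequest('/step2/' + taskid,
                                   { method: 'POST', body: r.requestText });

    r.headersOut['Content-Type'] = 'application/json';
    r.return(out.status, out.responseText);
}

export default { handle };
```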

Elegant way to inject IP filtering in vhosts

I have a dozen vhosts which sometimes require IP filtering (maintenance, ...) using 'geo'.

So

- 'geo' lives in the http context and gives a variable a value depending on the IP filters => that declaration is in an include file "Geo"

- then there is an 'error_page / location / rewrite / if / return' block => it is in an include file "IP_filter"

To activate filtering, "include Geo;" is added in 'http', and "include IP_filter;" is injected in each 'server'.

That's only 1 + N vhost lines to add, but I was wondering whether there is a more elegant way to inject the filtering into each 'server' (without modifying the files each time), like some "trick" in the 'http' block(?).

Of course I could include the same empty file "Injection" into each 'server', and only fill that file with a nested "include IP_filter" and reload the config, when necessary. But that doesn't seem very clean (to me at least).

Any suggestion is welcome.

Thank you.
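For reference, a minimal sketch of the Geo + IP_filter pattern described above; the variable name, addresses, and the 503 response are illustrative:

```nginx
# file "Geo", included once in the http context:
# $maintenance is 1 for everyone except the whitelisted ranges
geo $maintenance {
    default        1;
    127.0.0.1      0;
    192.0.2.0/24   0;   # admin / office range (illustrative)
}

# file "IP_filter", included in each server{} while filtering is active:
if ($maintenance) {
    return 503;
}
```

As far as I know, stock nginx has no mechanism to auto-inject a snippet into every server block, so the shared empty-include trick (fill the file, reload, empty it again) is the usual workaround.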

How to get THE original $request_uri ?

Despite documentation claiming that `$request_uri` is a "full original request URI (with arguments)", nginx in reality reduces it to path+query only.
I see why it does that, for the configuration purposes, but for my use case, I'd like to pass an actual request URI to the backend.
Is there a way?
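If the missing parts are just the scheme and host, one workaround is to reconstruct the absolute URI from the pieces nginx does expose and pass it in a header. A sketch; the header name and upstream are illustrative:

```nginx
location / {
    # rebuild scheme://host/path?query and hand it to the backend
    proxy_set_header X-Original-URI $scheme://$http_host$request_uri;
    proxy_pass http://backend;   # "backend" is illustrative
}
```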

Resource consumption during execution of many subrequests

Hello everyone.

I have some problem about subrequest.

While processing a single request, my module's access-phase handler() issues subrequests sequentially, dynamically obtaining data from the file cache or from upstream, calling ngx_http_output_filter() many times, and continuously sending data back to the client little by little.

This seems to work at first glance, but memory, file descriptors, and disk resources keep accumulating until the request finishes and the connection with the client is closed.


1. Leaking memory

Since nginx does not release the memory allocated from r->pool until processing of the request is complete, in a handler that keeps returning a response indefinitely the memory usage continues to increase.

To work around this, I tried to free the r->out and r->cache->buf buffers with ngx_pfree() in the subrequest finalize handler, but memory allocated by ngx_palloc_small() cannot be freed that way.

In my module, the total memory allocated by ngx_palloc_small() in one subrequest is now more than 4 MB, not counting r->out and r->cache->buf.

I would like to be able to specify a separate, new pool when creating a subrequest, but nginx shares the pool between the main request and its subrequests, so it is difficult to free only the memory allocated by a subrequest.

Is there a good solution to this problem?
Can I somehow control it with a cleanup handler?
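On the memory question, one possible direction is to allocate the subrequest-private data from your own pool and register a cleanup handler, so the data does not pile up in r->pool. An untested sketch; my_pool_cleanup is a hypothetical name and error handling is abbreviated:

```c
/* sketch: give per-subrequest data its own pool and destroy it from
 * a cleanup handler instead of waiting for the main request to end */

static void
my_pool_cleanup(void *data)
{
    ngx_destroy_pool((ngx_pool_t *) data);
}

/* ... in the code that sets up the subrequest: */
ngx_pool_t          *pool;
ngx_pool_cleanup_t  *cln;

pool = ngx_create_pool(4096, r->connection->log);
if (pool == NULL) {
    return NGX_ERROR;
}

cln = ngx_pool_cleanup_add(r->pool, 0);
if (cln == NULL) {
    ngx_destroy_pool(pool);
    return NGX_ERROR;
}

cln->handler = my_pool_cleanup;
cln->data = pool;

/* allocate per-subrequest buffers from "pool" instead of r->pool, and
 * call ngx_destroy_pool() yourself when the subrequest finalizes,
 * rather than relying on the cleanup at end of the main request */
```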


2. Increase of fd and disk

nginx writes a temporary file to store data fetched from upstream etc., and then renames it to the standard hash-based name.

But the fds of these temporary files are not close()d while the request is being processed, so repeatedly renaming to the same file name keeps both the old and the new fd (and their disk space) in use.

$ ls -l /proc/nginx-worker-pid/fd

21 /var/cache/nginx/temp-hash-file (deleted)
22 /var/cache/nginx/temp-hash-file (deleted)
23 /var/cache/nginx/temp-hash-file
24 /var/cache/nginx/temp-hash-file (deleted)
:
:

Without caching, this kind of thing did not happen.

As long as the caching subrequests keep repeating, the disk growth cannot be prevented.

I close the following fds in the finalize handler of the subrequest, and it seems to solve the problem, but is this approach correct?

ngx_close_file(r->upstream->pipe->temp_file->file.fd);
ngx_close_file(r->cache->file.fd);
r->upstream->pipe->temp_file->file.fd = NGX_INVALID_FILE;
r->cache->file.fd = NGX_INVALID_FILE;

Michihiro

NGINX Standalone Webserver

I am trying to set up NGINX as a standalone web server for static content. I am able to download (GET) files from NGINX. However, I get a 301 when I do a POST and a 405 when I try PUT. Can someone please advise a configuration that allows me to upload files using PUT or POST?

location /static/data {
alias /static-data/;
client_body_temp_path /tmp/;
client_body_in_file_only on;
client_body_buffer_size 128k;
client_max_body_size 100M;
autoindex on;
}

Re: NGINX Standalone Webserver

POST and PUT by definition are NOT static methods.
Your question contradicts your initial statement of wanting to serve static content.

Re: NGINX Standalone Webserver

I agree with you. So my requirement is to upload binary files from one source (through HTTP PUT/POST), which can then be served to other clients (via HTTP GET).
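For the upload side, stock nginx can accept PUT through ngx_http_dav_module (POST to a static location will still return 405, so the uploading client would use PUT). A sketch reusing the paths from the earlier config; the directives are real, the access values are illustrative:

```nginx
location /static/data {
    alias /static-data/;
    autoindex on;
    client_max_body_size 100M;

    # accept uploads: PUT stores the request body at the target URI
    dav_methods PUT DELETE MKCOL;
    create_full_put_path on;
    dav_access user:rw group:rw all:r;
}
```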

Securing Nginx Web Server

We have hosted our platform on Google Cloud. It's a startup and it's a pretty lean setup:

1 X Nginx | acting as a web server | Public facing subnet

1 x Database server | internal subnet



I am 100% sure that this is not a recommended practice, because in traditional on-premise setups we never put our web server public facing; it was always behind our firewalls. But I am puzzled as to what my options are to protect my web server.

Can anyone please guide me on achieving the below?

Internet users <------> Firewall ? or Another Nginx Server ? <-------> Nginx Web server <------> DB.

As this is a startup without funding, please help me with some low cost / open source options.

Re: Securing Nginx Web Server

Note: it depends on your hit rate.



-> Internet <--> PFSense (OpenSource firewall) <-> NGINX <-> Database
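Whatever sits in front, the edge nginx itself can also be hardened at zero cost. A few commonly used directives; the rate and addresses are illustrative:

```nginx
http {
    server_tokens off;    # do not advertise the nginx version
    # at most 10 requests/second per client IP, tracked in a 10 MB zone
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

    server {
        listen 80;
        location / {
            limit_req zone=perip burst=20 nodelay;   # absorb short bursts
            proxy_pass http://10.0.0.5:8080;         # illustrative app server
        }
    }
}
```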

Passing errors to user received from upstream generated by POST requests

Hi

I have a setup where nginx acts as a reverse proxy for two application servers and is configured to try the other application server if the first one fails. The problem is I can't pass errors generated by the application to the user in case it was generated by a POST request. Here's a scenario I'm facing:

1. User sends POST request to nginx
2. nginx passes said request to an upstream server
3. Upstream server responds with an HTTP 500 and a stack trace
4. nginx sends an HTTP 500 message to the user without the stack trace. Instead it contains a generic HTTP 500 html body

The scenario does not happen when a user sends, for example, an HTTP PUT request; in that case the stack trace is passed along to the user. I understand nginx handles non-idempotent requests differently, as is evident in the configuration parameters of proxy_next_upstream. If I add non_idempotent to proxy_next_upstream, the stack trace is indeed passed on to the user, but then the POST request is passed to multiple upstreams in case of an error, which is not something I want.

Is there a way for nginx to pass the original error message from an upstream to a user when the error was generated by a POST request?

Here's the relevant parts of my configuration:

upstream webapp {
server 10.0.0.2:1234;
server 10.0.0.3:1234;
}

server {
listen 80;
server_name example.com;
return 301 https://$host$request_uri;
}

server {
listen 443 ssl;
server_name example.com;
ssl_certificate certs/example.crt;
ssl_certificate_key certs/example.key;

location / {
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://webapp/;
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
}
}
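One hedged workaround: if the goal is to relay the application's own 500 body, stop listing http_500 in proxy_next_upstream, so a 500 response counts as a valid answer and is passed through instead of triggering (non-retryable) failover and nginx's generic error page. A sketch of the location block:

```nginx
location / {
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://webapp/;
    # fail over only on connection-level errors; upstream HTTP errors
    # (including 500 + stack trace) are returned to the client as-is
    proxy_next_upstream error timeout invalid_header;
}
```

The trade-off is that a server answering 500 will no longer be skipped for idempotent requests either.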

nginx stream pass client ip address to backend

I have this configuration:

https://pastebin.com/rjKDaUPC

I use it to terminate the SSL connection to port 443 on the public IP address and then direct the SSL traffic to Apache or a VPN server based on hostname; this is known as SSL preread.

Everything is fine, but on Apache every request arrives from client 127.0.0.1, which is the IP of the nginx server. What do I have to do to pass the real client IP address to the Apache server?

Thanks a lot.

Load balance HTTP requests to different API endpoints

I'm new to NGINX and am trying to load balance between different API endpoints. For example, if I have two API endpoints that return data to a GET request, I should be able to send a GET request to my NGINX server and see it alternate between the two APIs.

For simplicity's sake, I am only using one API endpoint for now. Once I can get this one working, then I'll add a second one. Here is my conf file:



user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##

    upstream apis {
        server api.openweathermap.org/data/2.5/weather?q=Berlin&APPID={API KEY};
    }

    server {
        listen 80;
        location / {
            proxy_pass http://apis;
        }
    }

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    gzip on;
    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

Re: nginx stream pass client ip address to backend

You could try using proxy protocol. Though I'm unsure about it in case of SSL connection.
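A sketch of that idea: the stream server adds the PROXY protocol header towards the backend, and Apache (2.4.31+) reads it via mod_remoteip. Addresses are illustrative, and this assumes the ssl_preread map from your paste stays as-is:

```nginx
stream {
    server {
        listen 443;
        ssl_preread on;
        proxy_pass 127.0.0.1:8443;   # or your $name-based map target
        proxy_protocol on;           # prepend the "PROXY ..." header line
    }
}
```

On the Apache side, `RemoteIPProxyProtocol On` makes the listener expect that header. Note that every backend behind this listener, including the VPN server branch, must then understand PROXY protocol, otherwise those connections break.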

Re: Load balance HTTP requests to different API endpoints

Upstream specifies SERVERS, not URLs.
For nginx-specific pastes, you can use https://paste.ngx.cc/
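Concretely: an upstream block holds host:port entries only, and the path plus query string belong on proxy_pass (or the incoming request). A corrected sketch of the config above, with {API KEY} left as in the original:

```nginx
upstream apis {
    # one entry per API host; a second endpoint would be another "server" line
    server api.openweathermap.org:80;
}

server {
    listen 80;
    location / {
        # the path and query live here, not in the upstream block
        proxy_pass http://apis/data/2.5/weather?q=Berlin&APPID={API KEY};
        proxy_set_header Host api.openweathermap.org;
    }
}
```

Alternating between two endpoints with *different* paths needs more than an upstream, e.g. split_clients choosing between two internal locations, since one upstream implies one request URI.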

Nginx Serving Static files Issue

Hi,

I am using nginx to serve my application's static files, currently using the alias option. But we have a requirement to limit this nginx static file server to requests from certain origins, e.g. serve only requests coming from xyz.com.

I have achieved this in my application block, but I am not able to do it for static files; I used the same if condition with alias, but it doesn't work. Any workarounds are appreciated.

Thanks,
Rejoy

location /app {
    if (condition) {
        proxy_pass http://myapp;
    }
}
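If the check only needs the Referer header, it can be applied directly in the static-file location instead of behind a proxy. A sketch with illustrative paths, using ngx_http_referer_module:

```nginx
location /static/ {
    alias /var/www/static/;   # illustrative path
    valid_referers xyz.com *.xyz.com;
    if ($invalid_referer) {
        return 403;           # not requested from xyz.com pages
    }
}
```

Keep in mind the Referer header is client-supplied and spoofable, so this deters hot-linking rather than determined clients.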

Nginx Proxy

Hello.

I have 2 servers with nginx installed, each with its own external IP (192.168.0.1, 192.168.1.1),
1 domain - example.com, and
3 web services: test1.example.com, test2.example.com, test3.example.com

and BIG problem :)

Right now i have DNS setup:
test1.example.com - 192.168.0.1
test2.example.com - 192.168.0.1
test3.example.com - 192.168.0.1

I need to change DNS setup to:
test1.example.com - 192.168.1.1
test2.example.com - 192.168.1.1
test3.example.com - 192.168.1.1

But web services must remain on 192.168.0.1

How can I configure that setup?

Thanks for any help :)

Re: Nginx Proxy

proxy_pass http://192.168.0.1/ for each server_name, and retain the Host: header.
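In other words, each vhost on the new address just proxies back to the old box. A sketch for one of the three (repeat per server_name):

```nginx
# on the server holding 192.168.1.1
server {
    listen 80;
    server_name test1.example.com;

    location / {
        proxy_pass http://192.168.0.1;
        proxy_set_header Host $host;       # lets 192.168.0.1 pick the vhost
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```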