September 21, 2024, 7:25 pm
Can't edit my post, so will reply instead:
- tried to create an SSL certificate and test connectivity. NPM reported that a server was detected but returned an unexpected status code: "invalid domain or IP". I'm using a domain provided by my router, and have already set up port forwarding for ports 30022 and 443 to the server running NPM.
- pinging my domain returns the public IP. However, if I telnet to my domain on port 443, the connection fails. There's no firewall/router setting that blocks this (and the connection has worked under other installation attempts), so I don't understand why this happens despite the port forwarding
↧
September 21, 2024, 7:37 pm
- telnetting my public ip on port 443 also results in a failed connection
↧
↧
September 21, 2024, 11:56 pm
So testing across different NPM installations:
When NPM is installed as a TrueNAS app:
- when trying to create a certificate, the server reachability check fails. The error is that a server can be found but returned an unexpected status code ‘invalid domain or IP’
- ports 443 and 30022 (as required by the app) have been forwarded to the device running NPM; however, I’m not sure if the port forwarding is actually working properly
- a check with www.portchecktool.com shows port 443 is blocked, but port 30022 is OK
So to check this isn’t an error with my router settings, I also tried NPM installation in a Docker container:
- same error when creating a certificate as above
- port 443 has been forwarded to the device/container running NPM. (port 30022 not required with the Docker installation)
- this time with the portchecktool, port 443 is shown to be clear
So in:
1) the TrueNAS app installation, the app somehow blocks or is not listening for traffic on port 443; and
2) the Docker installation, port 443 is clear but NPM can’t process the certificate? Would anyone be able to make sense of this?
↧
September 27, 2024, 2:57 am
Hi there,
Is there any way of preventing a redirect to any server except one specific server?
That is, to deny all redirects, and connections, to anything but a specific target server?
Any input or suggestions for this area would be both greatly appreciated and interesting.
Thanks for reading!
Regards,
Rob.
↧
Hi,
I have 2 VMs on a server, both running Ubuntu 23.10. One of them has Nextcloud installed, accessed from the Internet at https://nextcloud.midiominio.com:1234.
On the other VM I have nginx working properly. I want to know if I can access the Nextcloud web interface through nginx, in order to reach Nextcloud from the Internet at https://cloud.otherdomain.com (with no port, or the standard HTTPS port 443).
Can I achieve that with nginx? I have achieved it with simpler sites, but I can't get it to work with some others (like Nextcloud). I guess it has something to do with the certificates that sites like Nextcloud manage.
Thank you very much for your time and answers.
↧
↧
October 17, 2024, 5:56 am
Hi all,
I have a React JS website hosted on an Ubuntu VPS with NGINX as the web server. The videos work fine on most browsers and devices, but I’m facing an issue where they won’t play in any browser on iPhones and iMacs (Safari, Chrome, etc.). This problem seems specific to Apple devices. I'm new to nginx, so please help me solve this issue.
Below is my nginx config code:
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen 80;
server_name 43.225.52.129;
location / {
# Backend nodejs server
proxy_pass http://127.0.0.1:5000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
# Headers for video playback
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Methods' 'GET, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'Range';
add_header 'Accept-Ranges' 'bytes';
}
}
server {
listen 443 ssl;
server_name 43.225.52.129;
ssl_certificate /etc/letsencrypt/live/orangevideos.in/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/orangevideos.in/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
location / {
# Backend nodejs server
proxy_pass http://127.0.0.1:5000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
# Headers for video playback
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Methods' 'GET, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'Range';
add_header 'Accept-Ranges' 'bytes';
}
# MIME types for video files
types {
video/mp4 mp4;
video/webm webm;
video/ogg ogg;
}
}
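Safari is stricter than other browsers about HTTP byte-range support for media: if the proxy chain can't answer Range requests with 206 responses, video playback tends to fail on Apple devices. A minimal sketch, assuming the videos are served under a hypothetical /videos/ path by the same Node backend, that makes nginx force range support at the proxy:

```nginx
# Hypothetical location for the video files; forces byte-range replies
# even when the Node upstream ignores Range headers (Safari needs them).
location /videos/ {
    proxy_pass http://127.0.0.1:5000;
    proxy_http_version 1.1;
    proxy_force_ranges on;   # enable ranges regardless of upstream headers
    proxy_set_header Host $host;
}
```

Note that `add_header 'Accept-Ranges' 'bytes'` only advertises range support; it doesn't make the upstream actually honor Range headers, which is the part Safari depends on.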
↧
October 17, 2024, 11:12 am
I don't suppose anyone has found a solution to this?
↧
October 17, 2024, 5:53 pm
I am currently considering an implementation to load balance between two specified backend servers using Nginx. I understand that this is possible depending on the configuration of the upstream directive, but it hasn't been working as expected, so I would appreciate any advice or tips.
I want to dynamically specify the FQDN of the server in the upstream directive as an argument for proxy_set_header Host, but I'm having trouble achieving this. Specifically, when I have the following configuration in upstream, I would like proxy_set_header Host to include the FQDN of the server that is being load balanced within upstream. For example, if it is load balancing to server test1.example.com:443, I want proxy_set_header Host to be set to test1.example.com.
------
upstream backends {
server test1.example.com:443;
server test2.example.com:443;
}
------
By the way, when I specified only one server in upstream and hardcoded the argument for proxy_set_header Host, it worked fine.
If you have any advice regarding the above, please let me know. Thank you!
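Stock nginx does not expose the upstream peer it is about to pick in a variable usable by proxy_set_header, so the Host header cannot follow the round-robin choice made inside an upstream block. One hedged workaround, a sketch rather than the canonical answer, is to do the 50/50 split with split_clients so the chosen FQDN sits in a variable that both proxy_pass and Host can reuse (the resolver address is an assumption):

```nginx
# Goes in the http {} block: balance ~50/50 by hashing client address
# plus URI, then reuse the chosen name for proxy_pass and Host.
split_clients "${remote_addr}${request_uri}" $backend_host {
    50%  test1.example.com;
    *    test2.example.com;
}

server {
    listen 80;
    resolver 127.0.0.53;   # assumed local resolver; variable proxy_pass needs one
    location / {
        proxy_pass https://$backend_host:443;
        proxy_set_header Host $backend_host;
        proxy_ssl_server_name on;   # send SNI matching the chosen name
    }
}
```

The trade-off is that proxy_pass with a variable resolves at request time, and the passive health checks of the upstream block are lost.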
↧
October 18, 2024, 6:14 pm
Hi there!
So I originally posted my config files and have a huge thread back and forth here:
https://users.rust-lang.org/t/cant-connect-to-rust-executable-running-on-live-server/119939/12
Basically, I have a websocket server that runs on ws://0.0.0.0:8000/ws
I want to have ufw and nginx allow secure connections over wss and route them to the running websocket process.
I've tried what seems like every combination of nginx config settings and endpoints, but nothing works. Also, in the final logs it seems like ufw is blocking things, even though I have all these rules allowing many ports in ufw...
I am trying to use the domain with subdomain "quackers-beta.jimlynchcodes.com".
I can see locally that it is indeed running, but I just can't seem to access it from anywhere outside...
So what can I do to fix this? What are the proper nginx config settings I need?
Thanks
↧
↧
October 18, 2024, 6:16 pm
This is what my nginx service config looks like. I've tried a LOT of different ones:
located at: /etc/nginx/sites-available/quackers-beta.jimlynchcodes.com
```
server {
listen 443 ssl;
server_name quackers-beta.jimlynchcodes.com;
ssl_certificate /etc/letsencrypt/live/quackers-beta.jimlynchcodes.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/quackers-beta.jimlynchcodes.com/privkey.pem;
ssl_protocols TLSv1.2 TLSv1.3;
location /ws {
proxy_pass http://127.0.0.1:8000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 86400; # Prevent timeout for long-lived connections
}
location / {
proxy_pass http://0.0.0.0:8000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
error_log /var/log/nginx/quackers_error.log debug;
access_log /var/log/nginx/quackers_access.log;
}
```
↧
October 18, 2024, 6:17 pm
and here's my /etc/nginx/nginx.conf:
user www-data;
worker_processes auto;
worker_cpu_affinity auto;
pid /run/nginx.pid;
error_log /var/log/nginx/error.log debug;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
types_hash_max_size 2048;
server_tokens build; # Recommended practice is to turn this off
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1.2 TLSv1.3; # Dropping SSLv3 (POODLE), TLS 1.0, 1.1
ssl_prefer_server_ciphers off; # Don't force server cipher order.
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
##
# Gzip Settings
##
gzip on;
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
#mail {
# # See sample authentication script at:
# # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
# # auth_http localhost/auth.php;
# # pop3_capabilities "TOP" "USER";
# # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
# server {
# listen localhost:110;
# protocol pop3;
# proxy on;
# }
#
# server {
# listen localhost:143;
# protocol imap;
# proxy on;
# }
#}
↧
October 21, 2024, 3:59 pm
↧
October 22, 2024, 3:40 am
Hello all
I am facing a problem whose origin I don't understand, although I was able to solve it.
Maybe one of you has an idea of how to look for the root of the difference?
Any reason why the same configure flags can result in a different NGX_MODULE_SIGNATURE?
I did the same thing on Debian 12 + nginx 1.26.2 and I don't have the problem there, so perhaps something related to the OS?
Step by step of the problem and resolution:
- When migrating a Docker image from Alpine 3.19 + nginx 1.24.0 to Alpine 3.20 + nginx 1.26.2, we started receiving "is not binary compatible" on startup of nginx with our module
- This is normally related to different configure options between the nginx binary and the dynamic lib
- We double-checked the flags and they are actually the same between nginx 1.24.0 for Alpine 3.19 and nginx 1.26.2 for Alpine 3.20
-- nginx_flags="$(nginx -V 2>&1 | grep -oP 'configure arguments: \K(.*)' | sed -e 's/--add-dynamic-module=\S*//g')"
- We checked the gcc version, everything ok as well
- So, checking the source code we found the NGX_MODULE_SIGNATURE def and it seems to be the root of the message
- To find the difference, we did:
-- strings /usr/sbin/nginx| grep -F '8,4,8'
-- strings /etc/nginx/modules/foo_module.so | grep -F '8,4,8'
8,4,8,0010111111010111001111111111111110
8,4,8,0010111111010111001111111111101110
- The 30th value was different, meaning that NGX_MODULE_SIGNATURE_30 was not set:
-- #if (NGX_HTTP_HEADERS)
-- #define NGX_MODULE_SIGNATURE_30 "1"
- Solution: add the following line to the config of the module, the compilation worked and Nginx started as expected.
-- have=NGX_HTTP_HEADERS . auto/have
↧
↧
October 23, 2024, 1:20 am
It seems like the auto/modules changed between versions:
1.26.2: NGX_HTTP_HEADERS is only set under the NGX_COMPAT condition:
if [ $NGX_COMPAT = YES ]; then
have=NGX_COMPAT . auto/have
have=NGX_HTTP_GZIP . auto/have
have=NGX_HTTP_DAV . auto/have
have=NGX_HTTP_REALIP . auto/have
have=NGX_HTTP_X_FORWARDED_FOR . auto/have
have=NGX_HTTP_HEADERS . auto/have
have=NGX_HTTP_UPSTREAM_ZONE . auto/have
have=NGX_STREAM_UPSTREAM_ZONE . auto/have
fi
1.24.0: it is set under both the HTTP_V2 and NGX_COMPAT conditions:
if [ $HTTP_V2 = YES ]; then
have=NGX_HTTP_V2 . auto/have
have=NGX_HTTP_HEADERS . auto/have
ngx_module_name=ngx_http_v2_module
ngx_module_incs=src/http/v2
ngx_module_deps="src/http/v2/ngx_http_v2.h \
src/http/v2/ngx_http_v2_module.h"
ngx_module_srcs="src/http/v2/ngx_http_v2.c \
src/http/v2/ngx_http_v2_table.c \
src/http/v2/ngx_http_v2_encode.c \
src/http/v2/ngx_http_v2_module.c"
ngx_module_libs=
ngx_module_link=$HTTP_V2
. auto/module
fi
if [ $NGX_COMPAT = YES ]; then
have=NGX_COMPAT . auto/have
have=NGX_HTTP_GZIP . auto/have
have=NGX_HTTP_DAV . auto/have
have=NGX_HTTP_REALIP . auto/have
have=NGX_HTTP_X_FORWARDED_FOR . auto/have
have=NGX_HTTP_HEADERS . auto/have
have=NGX_HTTP_UPSTREAM_ZONE . auto/have
have=NGX_STREAM_UPSTREAM_ZONE . auto/have
fi
↧
October 23, 2024, 9:15 am
First, I am new here. If this topic is in the wrong forum, please let me know where to post.
I set up an nginx server for my (not so great looking) website on Arch Linux. I then used certbot to get an SSL certificate. Lastly, I set up a custom 404 page. https://grace-central.net works, yet www.grace-central.net does not. Furthermore, www.grace-central.net does not display my custom 404 page, but instead the default one. https://grace-central.net/not-a-real-page.html does display my custom 404 page. I will attach my config files.
Note: etcNginx.cconf is actually nginx.conf in /etc/nginx. The other nginx.conf is in /srv/http/grace-central/nginx.conf
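A common cause of exactly this symptom, offered as a hedged guess since the attached configs aren't visible here, is a server_name that lists only the bare domain: requests for www then fall through to nginx's default server, which knows nothing about the custom 404 page. A sketch, assuming the certificate's SAN list covers both names and DNS has a record for www (all paths are assumptions):

```nginx
server {
    listen 443 ssl;
    # both names must be listed, or www.* falls through to the default server
    server_name grace-central.net www.grace-central.net;
    ssl_certificate     /etc/letsencrypt/live/grace-central.net/fullchain.pem;  # assumed path
    ssl_certificate_key /etc/letsencrypt/live/grace-central.net/privkey.pem;
    root /srv/http/grace-central;
    error_page 404 /404.html;   # hypothetical custom page name
}
```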
↧
November 3, 2024, 4:33 am
Good morning, I have installed nginx with the RTMP module on an Ubuntu server.
I need to stream to nginx with user & password authentication.
Can it be done?
Is there any documentation or guidance on how to achieve it?
Thanks, Alberto
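The nginx-rtmp module has no built-in user database, but it can delegate publish authentication to an HTTP backend via on_publish: nginx-rtmp notifies your URL with the stream parameters and allows publishing only on a 2xx reply. A sketch (the auth endpoint and the credentials-in-URL scheme are assumptions):

```nginx
rtmp {
    server {
        listen 1935;
        application live {
            live on;
            # nginx-rtmp POSTs the stream name and query args here;
            # publishing is allowed only if this endpoint answers 2xx
            on_publish http://127.0.0.1:8080/rtmp-auth;   # hypothetical auth service
        }
    }
}
```

The encoder would then publish to something like rtmp://host/live/streamkey?user=alberto&pass=secret, and the auth service validates the query arguments it receives.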
↧
November 3, 2024, 9:30 am
Hi, all. After the upgrade to nginx 1.26.2 I started facing this warning:
nginx: [warn] protocol options redefined for 0.0.0.0:443 in /path/to/123456.conf:8
Here's how configuration is laid out. There's one http clause under which a number of name-based servers are included from /path/to/*.conf and then comes this server:
server {
listen 80 default accept_filter=httpready rcvbuf=8k;
location / { deny all; }
}
server {
listen 443 ssl default_server accept_filter=dataready;
ssl_certificate ssl/self-ssl.crt;
ssl_certificate_key ssl/self-ssl.key;
ssl_stapling off;
location / { deny all; }
}
/path/to/*.conf files look like this:
server {
http2 on;
listen 80;
listen 443;
...
I can put ssl after 443 - it doesn't matter, it will still give the same warning. If I duplicate the listen clause from http
listen 443 ssl accept_filter=dataready;
nginx errs out with:
nginx: [emerg] duplicate listen options for 0.0.0.0:443 in /path/to/nginx.conf:47
If I completely remove the listen directives from /path/to/*.conf, the said name-based hosts aren't found, resulting in HTTP 403.
So how do I get it working without warnings? For some reason listen 80 is OK; nginx doesn't complain about it at all.
Thanks.
↧
↧
November 4, 2024, 12:02 pm
Problem solved! If all listen 443 lines in all files have the same options, there will be no warnings. In my case:
listen 80;
listen 443 ssl;
in all included files and the "catch-all":
listen 80 default accept_filter=httpready rcvbuf=8k;
listen 443 ssl default accept_filter=dataready;
at http level. Options other than ssl were ignored (and even resulted in a duplicate error if I tried to copy the accept_filter part, for instance).
↧
November 13, 2024, 12:35 pm
I would still like to know the answer to this question, please.
When we are talking about PHP-FPM: does setting fastcgi_keep_conn on; even make sense when using a Unix socket file in fastcgi_pass?
Anybody? Does keepalive make sense or not?
↧
November 15, 2024, 12:46 am
I have nginx installed OK, running with my johnrose.mywire.org server block. When I try to install php-fpm it fails:
Active: failed (Result: exit-code) since Fri 2024-11-08 09:42:43 GMT; 20ms ago
Docs: man:php-fpm7.4(8)
Process: 3385 ExecStart=/usr/sbin/php-fpm7.4 --nodaemonize --fpm-config /etc/php/7.4/fpm/php-fpm.conf (code=exited, status=78)
I have now purged php-fpm & php7.4-fpm as well as deleting the directory /etc/php with its contents. But I still get this problem.
Any ideas please?
↧
November 15, 2024, 2:52 am
Somebody ran a benchmark, which still shows improvements when using keepalive with PHP socket files:
Test case                                    | RPS               | Latency
TCP  / fastcgi_keep_conn OFF / keepalive OFF | 8791.1 ± 161.45   | 225µs ± 48µs
TCP  / fastcgi_keep_conn ON  / keepalive OFF | 8514.02 ± 79.07   | 232µs ± 45µs
TCP  / fastcgi_keep_conn ON  / keepalive ON  | 10356.21 ± 115.86 | 190µs ± 51µs
Unix / fastcgi_keep_conn OFF / keepalive OFF | 9571.68 ± 66.48   | 206µs ± 40µs
Unix / fastcgi_keep_conn ON  / keepalive OFF | 9376.97 ± 73.16   | 211µs ± 42µs
Unix / fastcgi_keep_conn ON  / keepalive ON  | 10333 ± 155.5     | 191µs ± 65µs
With the following additional information:
- Unix sockets are generally a bit faster than TCP sockets.
- Enabling fastcgi_keep_conn and keepalive provides a measurable performance boost.
- This improvement is more noticeable with TCP sockets than with Unix sockets.
See: https://github.com/yegor-usoltsev/nginx-upstream-keepalive/issues/1
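In nginx terms, keepalive only applies when the socket is referenced through a named upstream, and fastcgi_keep_conn on is what stops nginx from closing the connection after each request. A minimal sketch, with the socket path and pool size as assumptions:

```nginx
upstream php_fpm {
    server unix:/run/php/php-fpm.sock;   # assumed PHP-FPM socket path
    keepalive 8;                         # idle connections cached per worker
}

server {
    listen 80;
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass php_fpm;
        fastcgi_keep_conn on;   # without this, nginx closes the connection
                                # after every request and keepalive is moot
    }
}
```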
↧
↧
November 16, 2024, 12:08 pm
Hi all!
First time poster and still trying to learn nginx so bear with me here.
Previously, I had a bunch of web services hosted on Docker, proxied using Traefik. I now want to branch into learning Kubernetes and a couple of other technologies so now have a couple of different places traffic may need to be directed to.
I've installed nginx and am attempting to set it up as an ingress controller (if that's the right term) which will take all incoming traffic and route it to the appropriate server (whether that's the Kubernetes cluster, the Docker server, or something else entirely).
Both the Kubernetes cluster and the Docker server have their own reverse proxies (specifically Traefik) for directing traffic to the appropriate container.
How do I set up nginx so that it passes on all the information that the downstream proxies need to function as normal?
Is it also possible to implement a catch all rule such that if an incoming request does not match a given config, forward it to one of the downstream proxies?
Thanks in advance! I'm still only getting started with nginx and it's very different from Traefik, so I'm having to adjust a lot.
Toby
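A hedged sketch of both ideas (all names and addresses are assumptions): the downstream Traefik instances route on the Host header, so the main thing to preserve is Host plus the X-Forwarded-* family, and a default_server block acts as the catch-all for anything no other server_name claims:

```nginx
upstream k8s_traefik    { server 10.0.0.10:443; }   # hypothetical Traefik endpoints
upstream docker_traefik { server 10.0.0.20:443; }

server {
    listen 443 ssl;
    server_name *.k8s.example.com;                    # assumed naming scheme
    ssl_certificate     /etc/nginx/certs/example.pem; # assumed cert paths
    ssl_certificate_key /etc/nginx/certs/example.key;
    location / {
        proxy_pass https://k8s_traefik;
        proxy_set_header Host $host;                  # downstream Traefik routes on this
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

# catch-all: any name no other server block matches goes to Docker/Traefik
server {
    listen 443 ssl default_server;
    ssl_certificate     /etc/nginx/certs/example.pem;
    ssl_certificate_key /etc/nginx/certs/example.key;
    location / {
        proxy_pass https://docker_traefik;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```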
↧
November 17, 2024, 7:06 am
Right now I have lots of server blocks completely random in conf.d e.g.:
site-1.conf
site-2.conf
...
But I need more organization, e.g.:
conf.d/user-1/site-1.conf
conf.d/user-1/site-2.conf
conf.d/user-2/site-1.conf
Unfortunately, this does not seem to work unless I write an include for each of the paths in a specific config file.
Is it possible to get nginx to read all the server blocks in the subfolders by writing e.g. only "conf.d/*" or something similar?
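nginx's include accepts a glob, but each directory level needs its own wildcard; a sketch assuming the layout above, where one extra include line covers all the user subfolders:

```nginx
# inside the http {} block of nginx.conf
include /etc/nginx/conf.d/*.conf;     # existing flat files
include /etc/nginx/conf.d/*/*.conf;   # conf.d/user-1/site-1.conf, conf.d/user-2/... etc.
```

A bare conf.d/* pattern would also try to include non-.conf files, so keeping the .conf suffix in the pattern is safer.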
Best regards
↧
November 24, 2024, 12:58 pm
Hi,
I have nginx as a reverse proxy. The proxy requires authentication with Authorization: Basic.
I can access the proxied server fine until it gets to the request /api/config/config_entries/flow.
The request from the browser contains an Authorization: Bearer header, and the proxy refuses the request (code 401).
How can I make nginx pass the Bearer token through and not consume it itself?
Nginx show those logs :
2024/11/24 20:08:46 [info] 55109#100183: *26059 no user/password was provided for basic authentication, client: xxx.xxx.xxx.xxx, server: myserver.com, request: "POST /api/config/config_entries/flow HTTP/1.1", host: "myserver.com", referrer: "https://myserver.com/config/integrations/dashboard"
2024/11/24 20:08:46 [info] 55109#100183: *26059 delaying unauthorized request, client: xxx.xxx.xxx.xxx, server: myserver.com, request: "POST /api/config/config_entries/flow HTTP/1.1", host: "myserver.com", referrer: "https://myserver.com/config/integrations/dashboard"
2024/11/24 20:11:50 [info] 55109#100183: *26086 client closed connection while waiting for request, client: xxx.xxx.xxx.xxx, server: 0.0.0.0:443
My nginx.conf is:
server {
listen 443 ssl;
server_name myserver.com;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
ssl_certificate /path/fullchain.pem;
ssl_certificate_key /path/privkey.pem;
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 5m;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305;
ssl_prefer_server_ciphers off;
auth_basic "Nope";
auth_basic_user_file htpasswd;
auth_delay 5s;
location / {
proxy_pass http://proxied_server/;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
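Since auth_basic consumes the Authorization header, a Basic-protected location can never let a Bearer token through on the same path. One approach, sketched here with the path prefix as an assumption, is to exempt the API location from basic auth inside the server block so the token reaches the backend untouched:

```nginx
# Carve-out added inside the server block: the application's Bearer
# token passes through instead of being rejected by basic auth.
location /api/ {
    auth_basic off;                  # overrides the server-level auth_basic
    proxy_pass http://proxied_server/api/;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```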
↧
December 8, 2024, 3:23 am
Hello!
I recently installed nginx on my Fedora server (running in a VM) and I want to set up a reverse proxy to my Home Assistant server. Here is my nginx config:
server {
listen 80;
server_name dom.jeansibelius.net;
location / {
proxy_pass http://something.something.something.100:8123/; # this is the ip address of my home assistant server
}
}
I have set up port forwarding on my router (80 => nginx_server:80, 443 => nginx_server:443).
Also I have set up subdomain (dom.jeansibelius.net) to point to my public ip address.
So when I go to http://dom.jeansibelius.net/ I should see my home assistant, but I do not...
I only see "Secure site not available" and after accepting I see "connection was reset".
Nothing in /var/log/nginx/access.log and in /var/log/nginx/error.log.
Can you guys help me?
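Two hedged observations: "Secure site not available" usually means the browser tried https:// on a port where nothing terminates TLS (the config above has no listen 443 block, yet 443 is forwarded to the nginx host), and Home Assistant's frontend also needs websocket upgrade headers through the proxy. A sketch for the plain-HTTP side, with the upstream address as a placeholder:

```nginx
server {
    listen 80;
    server_name dom.jeansibelius.net;
    location / {
        proxy_pass http://192.168.1.100:8123;   # placeholder for the HA address
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # Home Assistant's UI is websocket-based
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

Home Assistant itself also has to trust the proxy (the http: section of configuration.yaml with use_x_forwarded_for and trusted_proxies), otherwise it rejects proxied requests.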
↧
↧
December 13, 2024, 1:27 am
Hello,
I installed "Datagerry" (cmdb) via deb on Ubuntu and it's working.
When I add a new field (for laptop/hardware), I get the following error message when I save it:
An Error Occurred
Bad Request
Request Line is too large (4222 > 4094)
So what can I do?
I would like to add more fields.
I assume that it is a Nginx “configuration issue”. So I added these lines to the nginx.conf (also in sites-enabled/default) but it wasn’t helpful:
client_max_body_size 0;
client_header_buffer_size 8k;
client_body_buffer_size 16k;
large_client_header_buffers 8 16k;
So the request line in “Request Line is too large (4222 > 4094)” is definitely smaller than my configured 8k/16k buffers.
So where is it configured if not in nginx.conf? (or not in sites-enabled/default)
nginx -t said that everything is right.
nginx version: nginx/1.24.0 (Ubuntu)
Ubuntu 24.04 (ubuntu-noble-24.04-amd64-server-20241109)
Datagerry 2.2.0
Thank you and best,
Jumo
↧
December 16, 2024, 2:28 am
OK, this is solved.
Simply add this line to the webserver section in Datagerry's cmdb.conf:
limit_request_line = 8190
Nothing about this is found in Datagerry's documentation; it was a guess, and it's working now. (The 4094 default and the wording of the error suggest the limit was enforced by Datagerry's embedded application server rather than by nginx, which would explain why the nginx buffer settings had no effect.)
↧
December 29, 2024, 7:53 am
I have a couple of VPSes running AlmaLinux 9.x with RPM NGINX as a reverse proxy, which I use as my personal web proxies. I want to add OpenConnect (ocserv) as a backend service so that I can use these VPSes as personal VPNs or personal web proxies, but I can't figure out the correct code to use in the NGINX config file.
The VPSes have one single public IP address. I want to use SNI to determine which backend gets the traffic. I want to use *acme.sh* with a DNS challenge to obtain LE certs.
Below is my current config file for the web proxies:
```
user nginx;
worker_processes auto;
error_log /var/log/nginx-error.log info;
pid /var/run/nginx.pid;
events {
accept_mutex on;
multi_accept on;
worker_connections 1024;
}
http {
keepalive_timeout 60;
access_log /var/log/nginx-access.log combined;
server {
listen 80;
listen [::]:80;
server_name www.example.com;
return 301 https://$http_host$request_uri;
}
server{
listen 443 ssl;
listen [::]:443 ssl;
server_name www.example.com;
ssl_certificate /root/.acme.sh/www.example.com_ecc/fullchain.cer;
ssl_certificate_key /root/.acme.sh/www.example.com_ecc/www.example.com.key;
ssl_protocols TLSv1.3;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
add_header Strict-Transport-Security "max-age=31536000";
location /hGtmb {
proxy_redirect off;
proxy_http_version 1.1;
proxy_pass http://localhost:14722;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
location / {
sub_filter $proxy_host $host;
sub_filter_once off;
#proxy_pass https://www.bing.com;
proxy_pass http://localhost:81;
#proxy_set_header Host $proxy_host;
#proxy_set_header X-Real-IP $remote_addr;
#proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
#proxy_set_header X-Forwarded-Port $server_port;
#proxy_set_header X-Forwarded-Proto $scheme;
}
}
}
```
I have the Apache web server currently listening on localhost:81 for regular https traffic.
The *location /hGtmb* entry is for the Shadowsocks/v2ray proxy server. Everything works as it should but when I try to add ocserv to the mix, I kill everything. I'm not sure what I'm doing wrong or if RPM NGINX is capable of doing what I'm attempting to do.
I am basically trying to recreate what they've done with HAProxy:
h**ps://docs.openconnect-vpn.net/recipes/ocserv-multihost/
h**ps://www.linuxbabe.com/linux-server/ocserv-vpn-server-apache-nginx-haproxy
I've been working on this for about a month now. I just can't seem to find a working example/tutorial using NGINX. First I started with Nginx Proxy Manager but no one on the Github discussion board has responded to my request for advice.
So I guess I first should ask, can RPM NGINX do what I want? If so, can someone point me to a tutorial, a working config or tweak my current config by adding code that should get me going in the right direction?
Thanks in advance!
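What the HAProxy recipes do can be reproduced in stock nginx with the stream module and ssl_preread: a TCP front on 443 peeks at the SNI without terminating TLS and hands the connection either to ocserv or to the https server, which moves to another port. A sketch; the hostnames and ports are assumptions, and the build must include ngx_stream_ssl_preread_module:

```nginx
# Goes at top level of nginx.conf, alongside (not inside) the http {} block.
stream {
    map $ssl_preread_server_name $backend {
        vpn.example.com   ocserv;      # SNI used by the VPN clients
        default           https_web;   # everything else
    }

    upstream ocserv    { server 127.0.0.1:4443; }  # ocserv moved off 443
    upstream https_web { server 127.0.0.1:8443; }  # the http{} server, moved off 443

    server {
        listen 443;
        ssl_preread on;        # read the ClientHello, don't decrypt
        proxy_pass $backend;
    }
}
```

The existing server { listen 443 ssl; ... } block would then change to listen 8443 ssl, and ocserv would listen on 4443; each backend still terminates TLS with its own certificate, as in the HAProxy setups linked above.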
↧
December 30, 2024, 9:11 am
Hello
Coming from Caddy, there is a feature where I can include a file while passing arguments to it, so designing template files is very neat and straightforward.
Imagine I have 100 domains: domain1.com, domain2.com, ... domain100.com.
And I want to use nginx as a reverse proxy for all of them.
How can I create a simple config so that, with the minimum of repeated text, I can configure them all consistently (including logging, expires, compression, and all the features I want)?
So far the only idea I've found is passing $server_name to all configurable parameters except server_name and error_log, and having one file with all of them, like:
server {
include /path_to_templated elements that accept $server_name
server_name domain1.com
error_log /var/log/nginx/error-domain1.com.log;
}
server {
include /path_to_templated elements that accept $server_name
server_name domain2.com
error_log /var/log/nginx/error-domain2.com.log;
}
...
server {
include /path_to_templated elements that accept $server_name
server_name domain100.com
error_log /var/log/nginx/error-domain100.com.log;
}
From what I've read, server_name and error_log are among the few directives that don't accept variables, hence templating like this is impossible. (The only templating I've seen is designed for separate instances of nginx passing ENV variables, which I don't find too useful in this scenario because I cannot conditionally control them inside nginx.)
Anyway, it's not terrible this way, but since I'm an nginx newcomer I was simply wondering if there is something better than this.
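Since server_name and error_log don't take variables, a common workaround is to generate the per-domain files from one shared template at deploy time instead of templating inside nginx. A minimal sketch in shell; the output directory, domain list, and snippet path are all assumptions:

```shell
#!/bin/sh
# Hedged sketch: stamp out one tiny server block per domain, keeping
# everything shared in a single snippet file that each block includes.
OUT_DIR="${OUT_DIR:-./conf.d}"   # assumed output dir; /etc/nginx/conf.d in production
mkdir -p "$OUT_DIR"
for d in domain1.com domain2.com domain100.com; do
  cat > "$OUT_DIR/$d.conf" <<EOF
server {
    include /etc/nginx/snippets/common.conf;
    server_name $d;
    error_log /var/log/nginx/error-$d.log;
}
EOF
done
```

nginx then just includes the generated files, and /etc/nginx/snippets/common.conf holds everything shared; directives that do accept variables (access_log, for instance) can reference $server_name inside the snippet.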
↧
↧
January 10, 2025, 12:26 am
hi there. an nginx neophyte here.
i seem to have a problem having my custom error page display for my virtual host.
the error page (error.html) resides in /some/path/outside/of/var/www/html/
the custom error page (error.html) is displayed with my default server block code (and triggered using http://myipaddress/missingpage.html, where missingpage.html doesn't exist), but fails to show using the virtual host server block shown below (and triggered using https://somedomain.tld/missingpage.html):
server {
listen 80;
listen 443 ssl;
ssl_certificate /some/path/to/ssl/certificate/certificate.pem;
ssl_certificate_key /some/path/to/ssl/certificate/privatekey.pem;
root /some/path/to/domain/html/files;
index index.html;
server_name somedomain.tld;
location / {
try_files $uri $uri/ =404;
}
location ~ ^/scripts/.*\.pl$ {
gzip off;
include /etc/nginx/fastcgi_params;
fastcgi_pass unix:/var/run/fcgiwrap.socket;
fastcgi_index index.pl;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
include /etc/nginx/custom/customerrorpage.conf;
}
where customerrorpage.conf contains the following content
error_page 404 405 500 501 /error.html;
location /error.html {
root /some/path/outside/of/var/www/html/;
internal;
}
when expecting a 404 (page not found) error, my custom error page is not shown, but instead i see the nginx output "404 Not Found" being displayed.
i've tried all manner of combos regarding error page location (inside and outside of /var/www/html/), code placement, file permissions/ownerships, etc. but i can't seem to get my custom error page being displayed for my virtual host code block.
with all of my attempted code combos, when applied to the default server block (and triggered using http://myipaddress/missingpage.html), the custom error page is always shown, but is never shown for my virtual host code block.
the (virtual host) domain functions as expected within web browsers, but that dang custom error page is a'hidin'
any suggestions would be greatly a'ppreciated.
thanks.
thed
↧