Misery loves company! Mine is Verizon, and there was a setting causing me trouble recently, but it's probably unrelated to yours (it was DNS rebind protection).
No, I thought the routing was to forward traffic from the Tailscale 100.x.x.x subnet (not sure I'm using that word correctly) to the resources I want to access (in my case, my local 192.168.x.x addresses).
Yes, the machine that is running Docker/Tailscale is serving as an exit node and it hosts all the other services I want to access, which are also in containers.
That’s what I was counting on! Guess I just have to look at it as a learning opportunity.
Yeah, I’ve tried the 100.x.x.x IP and their Tailscale URLs, neither of which works.
Yes, it does (I’ve been checking with `sysctl net.ipv4.ip_forward`, but I guess it’s the same thing). It seems the issue may be that IPv6 forwarding isn’t enabled within the container: it’s enabled on the host, but the Docker logs say IPv6 forwarding is not enabled.
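Since the container logs complain specifically about IPv6 forwarding, one thing worth trying (an assumption on my part, not something I’ve verified against this exact setup) is setting the forwarding sysctls on the container itself in the compose file. Docker allows per-container namespaced `net.*` sysctls, so the container doesn’t have to inherit the host’s setting:

```yaml
# Hypothetical fragment for the tailscale service in docker-compose.yml.
# Docker permits namespaced net.* sysctls per container, which should
# enable forwarding inside the container's network namespace.
tailscale-authkey1:
  sysctls:
    - net.ipv6.conf.all.forwarding=1
    - net.ipv4.ip_forward=1
```

If the container is using `network_mode: host` this wouldn’t apply, but with the compose setup below it has its own network namespace, so these sysctls take effect there.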
Thanks, I did check that my machine has IP forwarding enabled, and it does. I also ran those lines to create the config file, but that didn’t change anything. And I do have the lines in my compose file to advertise routes.
Sorry for the misformatted code.
```yaml
tailscale-authkey1:
  image: tailscale/tailscale:latest
  hostname: myhost
  environment:
    - TS_AUTHKEY=xx
    - TS_STATE_DIR=/var/lib/tailscale
    - TS_USERSPACE=false
    - TS_EXTRA_ARGS=--advertise-exit-node,--accept-routes
    - TS_ROUTES=192.168.0.0/24
  volumes:
    - ts-authkey-test:/var/lib/tailscale
    - /dev/net/tun:/dev/net/tun
  cap_add:
    - NET_ADMIN
    - SYS_MODULE
  restart: unless-stopped

nginx-authkey-test:
  image: nginx
  network_mode: service:tailscale-authkey1
```
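One thing that stands out (a guess on my part, not a confirmed diagnosis): the Tailscale container passes `TS_EXTRA_ARGS` through to `tailscale up`, and in the examples I’ve seen multiple flags are separated by spaces, not commas. A comma-joined value would reach `tailscale up` as one unrecognized flag, which could quietly break the exit-node advertisement. A sketch of the change:

```yaml
# Assumed fix: space-separate the flags in TS_EXTRA_ARGS
# (a comma makes the whole string a single argument to `tailscale up`).
environment:
  - TS_EXTRA_ARGS=--advertise-exit-node --accept-routes
  - TS_ROUTES=192.168.0.0/24
```

Worth checking the container logs after restarting; if the comma was the problem, `tailscale up` errors about an unknown flag should disappear.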
pirateMonkey@lemmy.world (OP) to Selfhosted@lemmy.world • Having trouble setting up Nginx • 4 months ago

Part of the idea here is to get comfortable with what’s happening in a safe/unexposed environment before trying something that I would expose to the internet, and I’m of the understanding that you can do it this way (pass it to the internet, which will then return that internal IP that Nginx should route appropriately).
I may be (probably am) worrying too much about this, but doesn’t that remove much of the benefit of running services in containers? My understanding is that one benefit of containerization is that if one service is somehow compromised, the others remain isolated. Running the service that lets you inside on bare metal gives single-point access to the drives those other services rely on, and that’s at the most likely point someone could get into your network. Alternatively, if Tailscale is containerized and someone gets in, they have access to the other services’ front ends but not the data they rely on, since Tailscale itself doesn’t have that access.