Replies: 16 comments 32 replies
-
This is awesome. I'm going to try this out right now, hopefully this can get some more attention!
-
Your post helped get my setup working. Thank you!!! I did a few things differently:
-
Has anyone managed to get this working with podman? Everything seems to work fine, but I don't have internet access when using the tailscale container as an exit node routed through gluetun.
-
I've just spent the last 2 days trying to work out how to do this without leaking DNS, but I've accomplished it now. This might have saved me a bit of time. I don't have --accept-routes enabled and it seems to work so far.
-
got it working with proton-vpn, thank you
-
There's a wrong assumption in this example. Each environment variable can only be declared once, so the second declaration overrides the first and you lose --advertise-tags and --accept-routes. Instead, combine the flags into a single declaration, as in the sketch below.
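A sketch of the pitfall and the fix; duplicate-key collapsing is standard YAML behavior, and the tag value here is illustrative rather than from the original post:

```yaml
environment:
  # WRONG: a YAML mapping can't hold the same key twice; only the last
  # declaration survives, silently dropping the other flags:
  #   TS_EXTRA_ARGS: "--advertise-tags=tag:vpn --accept-routes"
  #   TS_EXTRA_ARGS: "--advertise-exit-node"
  # RIGHT: carry all flags in one declaration
  TS_EXTRA_ARGS: "--advertise-exit-node --advertise-tags=tag:vpn --accept-routes"
```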
-
I edited my post after I managed to get it to work (12th April, includes the comment from Write below), so the compose file above should be working! If you are not using headscale, make sure to remove the login-server setting. If you are not using pihole/a self-hosted DNS server, make sure to remove the custom DNS line.
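Since the inline settings were lost on paste, hedged guesses at the kind of lines meant here (the flag and variable names exist in tailscale and gluetun respectively; the values are illustrative):

```yaml
tailscale:
  environment:
    # headscale users point tailscale at their own coordination server; drop this otherwise
    TS_EXTRA_ARGS: "--login-server=https://headscale.example.com"  # illustrative URL
gluetun:
  environment:
    # a pihole/self-hosted resolver is typically wired in like this; drop it otherwise
    DNS_ADDRESS: "192.168.1.53"  # illustrative resolver IP
```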
-
Exactly what I was looking for. Thank you.
-
I'm trying to get this working, but I'm getting very slow download speeds although upload seems reasonable. Did anyone else experience this and manage to solve it?
-
Since posting, I have removed the custom DNS line from my gluetun configuration and am now using the VPN's default DNS. The custom one was slow and was causing failed healthchecks; since I have other containers that depend on gluetun being healthy, they were continually having problems. I lost the ability to connect to those containers via tailscale using the container names, but if you remember the IP and port, or bookmark them, it still works. Not sure this will fix your speed issue, but it helped mine.
-
I've got this working pretty well from the local network, but externally the tailscale traffic to the exit node goes through the VPN tunnel, so using it as an exit node off-site is diabolically slow. With tailscale ping, the external hosts fall back to DERP, and the IP address they obtain appears to be the VPN endpoint. Has anyone found a way to reach the exit node without going through the VPN backwards?
-
Just adding my one cent of contribution here for those facing indirect connections via a DERP server: the one solution that worked for me was enabling the IPv6 stack in Docker. It also requires an IPv6-compatible VPN provider. In my case, I tested Mullvad and it's working fine.
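For reference, a minimal sketch of enabling IPv6 on a compose network (the ULA prefix below is an arbitrary example, and the Docker daemon itself must have IPv6 support turned on):

```yaml
networks:
  vpn:
    enable_ipv6: true
    ipam:
      config:
        - subnet: 172.31.0.0/16
        - subnet: fd00:c0de:ca11::/64  # example ULA prefix; pick your own
```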
-
Finally got it working, at the cost of a bit of my sanity. Also, since it involves a bunch of iptables mods, proceed with caution: I DO NOT know what I'm doing. Basically, it runs the gluetun and tailscale containers in the same bridge network and forwards non-STUN traffic to gluetun. It works by changing tailscale's default gateway to gluetun's IP and enabling forwarding on gluetun.
compose.yaml:

```yaml
services:
  gluetun:
    container_name: gluetun
    image: qmcgaw/gluetun:latest
    environment:
      VPN_SERVICE_PROVIDER: ""
      SERVER_COUNTRIES: ""
      VPN_TYPE: "wireguard"
      WIREGUARD_PRIVATE_KEY: ""
      WIREGUARD_PRESHARED_KEY: ""
      WIREGUARD_ADDRESSES: ""
      WIREGUARD_IMPLEMENTATION: "kernelspace"
      WIREGUARD_MTU: "1320"
      DOT: "off"
      DNS_KEEP_NAMESERVER: "on"
      TZ: "${TIMEZONE:?err}"
    networks:
      vpn:
        ipv4_address: 172.31.0.2
    volumes:
      - ${CONFIG_HOME:?err}/gluetun:/gluetun
    configs:
      - source: gluetun-rules
        target: /iptables/post-rules.txt
        mode: 0755
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun
    sysctls:
      net.ipv4.ip_forward: 1
      net.ipv4.conf.all.src_valid_mark: 1

  tailscale:
    container_name: tailscale
    image: tailscale/tailscale:latest
    environment:
      TS_AUTHKEY: ""
      TS_HOSTNAME: ""
      TS_STATE_DIR: "/var/lib/tailscale"
      TS_EXTRA_ARGS: "--advertise-tags=tag:<TAG> --advertise-exit-node"
      TS_USERSPACE: "false"
    networks:
      vpn:
        ipv4_address: 172.31.0.3
    volumes:
      - ${CONFIG_HOME:?err}/tailscale:/var/lib/tailscale
    depends_on:
      gluetun:
        condition: service_healthy
        restart: true
    entrypoint: /tailscale_wrapper.sh
    configs:
      - source: tailscale-wrapper
        target: /tailscale_wrapper.sh
        mode: 0755
    cap_add:
      - NET_ADMIN
      - NET_RAW
    devices:
      - /dev/net/tun:/dev/net/tun
    sysctls:
      net.ipv4.ip_forward: 1
      net.ipv4.conf.all.src_valid_mark: 1

configs:
  gluetun-rules:
    content: |
      iptables -I FORWARD 1 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
      iptables -I FORWARD 2 -i eth0 -o tun0 -j ACCEPT
      iptables -I FORWARD 3 -i tun0 -o eth0 -j ACCEPT
      iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE
  tailscale-wrapper:
    content: |
      #!/bin/sh
      set -e

      cleanup() {
        echo "[wrapper]: Received shutdown signal, stopping daemon..."
        kill -TERM $BOOT_PID 2>/dev/null || true
        wait $BOOT_PID 2>/dev/null || true
        exit 0
      }
      trap cleanup EXIT TERM INT QUIT

      echo "[wrapper]: Starting containerboot"
      /usr/local/bin/containerboot 2>&1 &
      BOOT_PID=$!

      echo "[wrapper]: waiting for tailscale"
      until tailscale status >/dev/null 2>&1; do sleep 1; done

      echo "[wrapper]: waiting for routing on tailscale0"
      # if you don't have any node running 24/7, just a `sleep 10` would do
      until ping -c 1 -W 2 "<KNOWN_TAILNET_IP>" >/dev/null 2>&1; do sleep 1; done

      ip route add default via 172.31.0.1 dev eth0 table 100
      ip route add 172.31.0.0/16 dev eth0 proto kernel scope link src 172.31.0.3 table 100
      ip rule add from all fwmark 0x80000/0xff0000 lookup 100 pref 4200
      ip route replace default via 172.31.0.2 dev eth0
      iptables -t nat -I POSTROUTING 1 -o eth0 -j MASQUERADE
      echo "[wrapper]: applied forwarding rules"

      echo "[wrapper]: running tailscale netcheck, on the house"
      tailscale netcheck
      wait $BOOT_PID

networks:
  vpn:
    name: _vpn  # the leading underscore keeps the interface as `eth0` in the container netns
    ipam:
      config:
        - subnet: 172.31.0.0/16
          gateway: 172.31.0.1
```
Starts and stops reliably with no issues, and periodic netchecks pass. Am daily-driving it for now, let's see.
-
I set up a netbird exit node using this guide, tweaking it a bit for netbird instead of wireguard. I know netbird isn't made for this, but it's neat: I can VPN into my self-hosted servers and such, then right-click, change to the gluetun exit node, and boom, it's like a normal all-in-one VPN for me without any other VPN software running. The cool part is that I can still access my servers perfectly fine while all of my outbound traffic is shoved through the VPN via the exit node. Really great stuff. Cheers!
-
Rationale: when away from home, be able to access all services on the home server, plus all other machines on the home network, while at the same time, without changing any settings, browsing the internet privately through a VPN. Previously, with both the mobile VPN client and the Tailscale mobile client installed, switching between the two was inconvenient, and after a certain number of switches networking would break, requiring a reboot.
Tailscale allows the use of Mullvad servers as exit nodes, but the subscription must be purchased through Tailscale; when the feature was first offered, they were enabling Mullvad customers who had already paid in advance. Tailscale is awesome for being able to access the home network from anywhere without exposing it to the internet, but with the stock exit-node feature, traffic to the wider internet is not private.
So, I was looking for a way to use Mullvad via Gluetun as a Tailscale exit node. After some trial and error, I have it working. At least, when I load mullvad.net/en/check, with Tailscale enabled on my mobile device, I appear to be using Mullvad and it passes all the security checks.
Here is my setup...
gluetun docker compose (relevant parts only, indentation broken on paste)
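The snippet itself didn't survive the paste, so here is a minimal sketch of such a gluetun service, assuming Mullvad over WireGuard; the key, address, and LAN subnet are placeholders, not values from the original post:

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    container_name: gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=mullvad
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=<your_key>          # placeholder
      - WIREGUARD_ADDRESSES=<your_address>        # placeholder
      - FIREWALL_OUTBOUND_SUBNETS=192.168.1.0/24  # keep the LAN reachable alongside the tunnel
```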
tailscale docker compose (relevant parts only)
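Likewise a hedged sketch of the tailscale service; TS_ROUTES and the exit-node flag come from the dashboard steps below, while sharing gluetun's network namespace via network_mode is my assumption about how the two containers are joined:

```yaml
  tailscale:
    image: tailscale/tailscale
    container_name: tailscale
    network_mode: "service:gluetun"  # assumption: send tailscale traffic out through gluetun
    cap_add:
      - NET_ADMIN
    volumes:
      - ./tailscale-state:/var/lib/tailscale
    environment:
      - TS_AUTHKEY=<your_auth_key>     # placeholder
      - TS_STATE_DIR=/var/lib/tailscale
      - TS_HOSTNAME=<server_hostname>  # shows up in the admin console below
      - TS_ROUTES=192.168.1.0/24       # the subnet approved in the dashboard
      - TS_EXTRA_ARGS=--advertise-exit-node
```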
tailscale dashboard settings -- machines (https://login.tailscale.com/admin/machines):
click on your server (hostname set as above)
subnet routes 192.168.1.0/24 (same as TS_ROUTES above) --> check box
use as exit node --> check box
tailscale dashboard -- dns (https://login.tailscale.com/admin/dns)
nameservers > global nameservers --> enter IP of your server
override local DNS --> (enable slider)
MagicDNS --> disable
tailscale mobile client settings
open app > 3 dots in upper right > use exit node... > click on server hostname
AdGuardHome (or PiHole) settings (Docker container on same server as Gluetun and Tailscale)
filters > DNS rewrites --> domain: *.lan (any custom local suffix) answer: 192.168.1.87 (server LAN IP)
Not necessary but makes it easy to browse to jellyfin.lan or jf.lan rather than 192.168.1.87:8096, for example.
caddy-docker-proxy settings (https://github.com/lucaslorentz/caddy-docker-proxy)
e.g. jellyfin docker compose (partial)
the compose labels (sketched below) auto-generate the corresponding snippet in the Caddyfile when the jellyfin container starts. You can also write the snippet manually and mount it into the Caddy container.
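Both the compose partial and the generated Caddyfile snippet were lost on paste; here is a hedged sketch of the label pattern caddy-docker-proxy documents, with an illustrative hostname and jellyfin's default port:

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin
    labels:
      caddy: jellyfin.lan
      caddy.reverse_proxy: "{{upstreams 8096}}"  # jellyfin's HTTP port
      caddy.tls: "internal"  # Caddy's internal CA, hence the root-certificate step below
```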
Install root certificate on devices that will access resources through Caddy, so that security warnings are not generated
get root certificate from caddy container > /data/caddy/pki/authorities/local/root.crt (one-liner below)
google how to install to your OS and/or browsers
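For the extraction step, assuming the Caddy container is simply named caddy, something like:

```sh
docker cp caddy:/data/caddy/pki/authorities/local/root.crt ./root.crt
```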
Honestly, I'm not sure how exactly everything works together here, so this may be a spaghetti mess that could be simplified or improved. But for me at least, it works. Happy to receive any feedback.
There is one open issue, #1854, which links to a post I also found helpful: https://lemmy.world/post/7281194. However, their setup doesn't allow remote access to the entire home network, nor does it allow custom DNS settings.