
  • Just a small number of base images (ubuntu:, alpine:, debian:) are routinely synced, and everything else is built in CI from Containerfiles, which are backed up. So as long as the backups are intact, I can recover from a loss of the image store even without internet access.

    I also have two-tier container image storage anyway, which gives redundancy for the built images, but that's more of a side effect of workarounds… Anyway, the “source of truth” docker-registry that gets pushed to is only exposed internally: to the one host that needs to do authenticated pushes, and to the second layer of pull-through caches that the internal servers actually pull from (sketched below). So backups aside, images in active use already exist in at least three copies (push-registry, pull-registry, and whoever's running them). The mirrored public images are a separate chain altogether.

    This has been running for a while, so it's all hand-wired from component services. A dedicated Forgejo deployment looks like it could cover a large part of the above in one package today. Plus it conveniently syncs external git dependencies.
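
    In case it's useful to anyone, a minimal sketch of a pull-through-cache tier using the official registry:2 image. The port, cache path, and upstream URL are placeholder choices for illustration, not a description of my exact setup; the upstream can just as well be the internal push-registry instead of docker.io:

    ```sh
    # Run the Docker registry in proxy ("mirror") mode.
    # REGISTRY_PROXY_REMOTEURL overrides proxy.remoteurl in the registry
    # config via environment variable.
    docker run -d --name pull-cache \
      -p 5000:5000 \
      -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
      -v /srv/registry-cache:/var/lib/registry \
      registry:2
    ```

    Internal hosts can then point at it with `"registry-mirrors": ["http://cache-host:5000"]` in /etc/docker/daemon.json (hostname assumed).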




  • The advantages of using something like Terraform are repeatability, reliability across environments, and rollbacks.

    Very valuable things for a stress-free life, especially if this is for more than just entertainment and gimmicks.

    I’d rather stare at the terminal screen for many hours of my choosing than suddenly have to do it at a bad time for one… 2… 3… (oh god damn, the networking was relying on that weird undocumented parameter I changed and forgot about years ago, wasn’t it) hours. Oh, and a 0-day just dropped for that service you’re running on the net. The one you built from source (or worse, got from an upstream that is now MIA). Better upgrade fast and reboot for that new kern… She won’t boot again. The boot drive really had to crap out right now, didn’t it? Do we reinstall everything from scratch, start Frankensteining, or just bring out the scotch at this point?

    I’ve also been at this for a while. I never regretted putting anything into infra-as-code or config management; plenty of times I wished I had. But yeah, complexity can be insidious. Going for high availability and a container-cluster service mesh across the board, on the other hand, was probably a mistake…
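
    For anyone unfamiliar, the day-to-day loop that buys you those properties looks roughly like this (a generic sketch, not any particular setup):

    ```sh
    # Every change is previewed as an exact diff before it touches anything.
    terraform init                      # fetch providers/modules (first run only)
    terraform plan -out=change.tfplan   # record exactly what would change
    terraform apply change.tfplan       # apply only the reviewed plan

    # A "rollback" is just re-applying the previous known-good config from git:
    git revert HEAD
    terraform plan -out=rollback.tfplan
    terraform apply rollback.tfplan
    ```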





  • One way to go about the network security aspect:

    Make a separate LAN (optionally a VLAN) for your internal hosted services, separate from the one you use to access the internet with your main computer. At the start this LAN will probably only have two machines (three if you bring the NAS into the picture separately from Jellyfin):

    • The server running Jellyfin, not connected to your main network or the internet.

    • A “bastion host” which has at least two network interfaces: one connected outward and one inward. This is not a router (no IP forwarding), and it should be separate from your main router. This is the bridge. Here you can run an (optional) VPN gateway, an SSH server, and an HTTP reverse proxy to expose Jellyfin to the outside world (sketched below). If things on the inside need to reach out (e.g. for package updates), you can run an HTTP forward proxy for that.
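
    For the reverse-proxy piece, a one-line sketch with Caddy (nginx or HAProxy work just as well); the internal IP here is made up, and 8096 is Jellyfin's default HTTP port:

    ```sh
    # Forward plain HTTP on the bastion's outward side to Jellyfin on the inside.
    caddy reverse-proxy --from :80 --to 192.168.10.2:8096
    ```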

    When it’s just two machines you can connect them directly with a LAN cable; when you have more, you add a cheap network switch.

    If you don’t have enough hardware to split machines up like this, you can do similar things with VMs on one box, but that’s a lot of extra complexity for a beginner, and you probably have enough new things to familiarize yourself with as it is. Separating physically instead of virtually is a lot simpler to understand and also more secure.

    I recommend firewalld for the system firewall.
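
    On the bastion, that can be as simple as putting each interface in its own zone. The interface names here are assumptions; the zones are firewalld's built-ins:

    ```sh
    # eth0 faces outward, eth1 faces the internal services LAN.
    firewall-cmd --permanent --zone=public --change-interface=eth0
    firewall-cmd --permanent --zone=internal --change-interface=eth1

    # Expose only what the bastion itself serves outward (SSH + the reverse proxy).
    firewall-cmd --permanent --zone=public --add-service=ssh
    firewall-cmd --permanent --zone=public --add-service=http

    firewall-cmd --reload
    ```

    Since IP forwarding stays off, nothing routes between the two sides; only the services the bastion itself runs are reachable from outside.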