There is a post about getting overwhelmed by 15 containers and people not wanting to turn the post into a container measuring contest.
But now I am curious: what are your counts? I would guess those of you running k8s would win out by pod scaling.
docker ps -q | wc -l
For those wanting a quick count (the -q flag lists only container IDs, so the header line doesn't inflate the count by one).
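For the k8s crowd from the question above, a rough equivalent (assuming kubectl is already pointed at your cluster) would be:
# -A covers all namespaces; --no-headers keeps the header row out of the count
kubectl get pods -A --no-headers | wc -l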
Four LXCs
I have 43 running, and this was a great reminder to do some cleanup. I can probably reduce my count by 5-10.
61 containers in 26 docker files.
49. I imagine running all of those bare metal would be hard with the dependencies.
13 in a Docker LXC; most of my stuff runs on 13 other dedicated LXCs.
- Because I’m old, crusty, and prefer software deployments in a similar manner.
I salute you and wish you the best in never having a dependency conflict.
I’ve been resolving them since the late 90s, no worries.
I use Debian
My worst dependency conflict was a libcurl SSL error when trying to build on a precompiled base Docker image.
Agreed. I'm tired after work. Debian/YunoHost is good enough.
At work it's hundreds of Docker containers, but CI/CD takes care of all that.
Isn’t that harder?
It depends a lot on what you want to do and a little on what you’re used to. It’s some configuration overhead so it may not be worth the extra hassle if you’re only running a few services (and they don’t have dependency conflicts). IME once you pass a certain complexity level it becomes easier to run new services in containers, but if you’re not sure how they’d benefit your setup, you’re probably fine to not worry about it until it becomes a clear need.
Me too!
140 running containers and 33 stopped (that I spin up sometimes for specific tasks or testing new things), so 173 total on Unraid. I have them grouped into:
- 118 Auto-updates (low chance of breaking updates, or a non-critical service that only I would notice if it breaks)
- 55 Manual-updates (either it's family-facing, e.g. Jellyfin; it has a high chance of breaking updates; it updates so infrequently that I want to know when that happens; or it's something I want particular control over, like updating Jellyfin only when nobody's in the middle of watching something)
I subscribe to all their github release pages via FreshRSS and have them grouped into the Auto/Manual categories. Auto takes care of itself and I skim those release notes just to keep aware of any surprises. Manual usually has 1-5 releases each day so I spend 5-20 minutes reading those release notes a bit more closely and updating them as a group, or holding off until I have more bandwidth for troubleshooting if it looks like an involved update.
Since I put anything that might cause me grief if it breaks in the manual group, I can also just not pay attention to the system for a few days and everything keeps humming along. I just end up with a slightly longer manual update list when I come back to it.
I’ve never looked into adding GitHub releases to FreshRSS. Any tips for getting that set up? Is it pretty straightforward?
I just added this URL for Jellyfin and it “just worked”:
https://github.com/jellyfin/jellyfin/releases
If not, appending .atom or .rss should do the trick:
https://github.com/jellyfin/jellyfin/releases.atom
https://github.com/jellyfin/jellyfin/releases.rss
thanks, I’ll look into it. Much appreciated
I added the bookmarklet to my bookmarks bar so it’s pretty easy to just navigate to the releases page on GitHub and hit the button. I change the “visibility” setting to “show in its category” so things stay in their lanes rather than all going into a communal main feed, but otherwise leave it as default.
I did have to add some filters to the categories so it wouldn’t flag all the -dev/-rc releases, but that’s it. The filters that work for me are:
intitle:prototype- intitle:-build-number intitle:rc5 intitle:rc6 intitle:rc7 intitle:rc8 intitle:rc9 intitle:-dev. intitle:Beta intitle:preview- intitle:rc1 intitle:rc2 intitle:rc3 intitle:rc4 intitle:"Release Candidate" intitle:Alpha intitle:-rc intitle:-alpha intitle:-beta intitle:develop- intitle:"Development release" intitle:Pre-Release
All of you bragging about 100+ containers, please may I inquire as to what the fuck that’s about? What are you doing with all of those?
Kube makes it easy to have a lot, since anything that needs to run on every node (DaemonSets) just gets deployed to every node. As odd as it sounds, the number of containers provides redundancy that makes the hobby easy. If a Zimaboard dies or messes up, I just nuke it, and I don’t care what’s on it.
In my case, most things that I didn’t explicitly make public are running on Tailscale using their own Tailscale containers.
Doing it this way, each one gets its own address and I don’t have to worry about port numbers. I can just type http://cars/ (yes, I know, not secure; not worried about it) and get to my LubeLogger instance. But it also means I have 20ish copies of just the Tailscale container running.
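A minimal sketch of that sidecar pattern, assuming the official tailscale/tailscale image; the LubeLogger image path and the auth key are placeholders:
# Tailscale sidecar: joins the tailnet as its own machine named "cars"
docker run -d --name ts-cars \
  -e TS_AUTHKEY=tskey-auth-XXXX \
  -e TS_HOSTNAME=cars \
  -v ts-cars-state:/var/lib/tailscale \
  tailscale/tailscale
# the service shares the sidecar's network namespace, so it's reachable
# at http://cars/ via MagicDNS with no published ports on the host
docker run -d --name lubelogger \
  --network container:ts-cars \
  ghcr.io/hargata/lubelogger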
On top of that, many services, like Nextcloud, are broken up into multiple containers. I think Nextcloud-aio alone spins up something like 5 or 6 containers in addition to the master container, which tends to inflate the count.
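If you ever want to see how much a single stack inflates the total, Compose labels each container with its project name, so something like this gives a per-stack count (project name is hypothetical; AIO spawns its containers directly, so this fits ordinary Compose stacks best):
docker ps -q --filter label=com.docker.compose.project=nextcloud | wc -l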
Ironic that Nextcloud AIO spins up multiple…
deleted by creator
Possibly. I don’t remember that being an option when I was setting things up last time.
From what I’m reading, it sounds like it’s just acting as a slightly simplified DNS server/reverse proxy for individual services on the tailnet. Sounds interesting. I’m not sure it’s something I’d want to use on the backend (what happens if Tailscale goes down? Does that DNS go down too?), but for family members I’ve set up on the tailnet, it sounds like an interesting option.
Much as I like Tailscale, it seems like using this may introduce a few too many failure points that rely on a single provider. Especially one that isn’t charging me anything for what they provide.
A little of this, a little of that…I may also have a problem… >_>;
The List
Quickstart
- dockersocket
- ddns-updater
- duckdns
- swag
- omada-controller
- netdata
- vaultwarden
- GluetunVPN
- crowdsec
Databases
- postgresql14
- postgresql16
- postgresql17
- Influxdb
- redis
- Valkey
- mariadb
- nextcloud
- Ntfy
- PostgreSQL_Immich
- postgresql17-postgis
- victoria-metrics
- prometheus
- MySQL
- meilisearch
Database Admin
- pgadmin4
- adminer
- Chronograf
- RedisInsight
- mongo-express
- WhoDB
- dbgate
- ChartDB
- CloudBeaver
Database Exporters
- prometheus-qbittorrent-exporter
- prometheus-immich-exporter
- prometheus-postgres-exporter
- Scraparr
Networking Admin
- heimdall
- Dozzle
- Glances
- it-tools
- OpenSpeedTest-HTML5
- Docker-WebUI
- web-check
- networking-toolbox
Legally Acquired Media Display
- plex
- jellyfin
- tautulli
- Jellystat
- ErsatzTV
- posterr
- jellyplex-watched
- jfa-go
- medialytics
- PlexAniSync
- Ampcast
- freshrss
- Jellyfin-Newsletter
- Movie-Roulette
Education
- binhex-qbittorrentvpn
- flaresolverr
- binhex-prowlarr
- sonarr
- radarr
- jellyseerr
- bazarr
- qbit_manage
- autobrr
- cleanuparr
- unpackerr
- binhex-bitmagnet
- omegabrr
Books
- BookLore
- calibre
- Storyteller
Storage
- LubeLogger
- immich
- Manyfold
- Firefly-III
- Firefly-III-Data-Importer
- OpenProject
- Grocy
Archival Storage
- Forgejo
- docmost
- wikijs
- ArchiveTeam-Warrior
- archivebox
- ipfs-kubo
- kiwix-serve
- Linkwarden
Backups
- Duplicacy
- pgbackweb
- db-backup
- bitwarden-export
- UnraidConfigGuardian
- Thunderbird
- Open-Archiver
- mail-archiver
- luckyBackup
Monitoring
- healthchecks
- UptimeKuma
- smokeping
- beszel-agent
- beszel
Metrics
- Unraid-API
- HDDTemp
- telegraf
- Varken
- nut-influxdb-exporter
- DiskSpeed
- scrutiny
- Grafana
- SpeedFlux
Cameras
- amcrest2mqtt
- frigate
- double-take
- shinobipro
HomeAuto
- wyoming-piper
- wyoming-whisper
- apprise-api
- photon
- Dawarich
- Dawarich-Sidekiq
Specific Tasks
- QDirStat
- alternatrr
- gaps
- binhex-krusader
- wrapperr
Other
- Dockwatch
- Foundry
- RickRoll
- Hypermind
Plus a few more that I redacted.
I look at this list and cry a little bit inside. I can’t imagine having to maintain all of this as a hobby.
Dococd + renovate goes brrr
From a quick glance I can imagine many of those services don’t need much maintenance if any. E.g. RickRoll likely never needs any maintenance beyond the initial setup.
Things and stuff. There’s the web front end, the API to the back end, the database, the Redis cache, the MQTT message queues.
And that’s just for one of my web crawlers.
/S
Not bragging. It is what it is. I run a plethora of things and that’s just on the production server. I probably have an additional 10 on the test server.
100 containers isn’t really a lot. Projects often use 2-3 containers. That’s only something like 30-50 services.
“Only”
0, it’s all organised nicely with NixOS.
Boooo, you need some chaos in your life. :D
That’s why I have one host called theBarrel and it’s just 100 Chaos Monkeys and nothing else.
This is the way.
It’s fun in a way that defies comparison.
I have 1 podman container on NixOS because some obscure software has a packaging problem with ffmpeg and the NixOS maintainers removed it.
docker: command not found
I know using work as an example is cheating, but around 1400-1500 to 5000-6000 depending on load throughout the day.
At home it’s 12.
I was watching a video yesterday where an org was churning through 30K containers a day because they hadn’t profiled their application correctly and scaled their containers based on a misunderstanding of how Linux handles CPU scheduling.
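A classic version of that mistake, if I had to guess at what the video meant: treating --cpus as pinned cores when it's actually a CFS bandwidth quota, so an undersized limit shows up as throttling spikes rather than a service that's merely slow. On a cgroup v2 host you can see the quota directly:
# --cpus=0.5 becomes "50000 100000": 50ms of CPU time per 100ms period
docker run --rm --cpus=0.5 alpine cat /sys/fs/cgroup/cpu.max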
Yeah that shit is more common than people think.
A big part of the business of cloud providers is that most orgs have no idea how to do shit. Their enterprise consultants are also wildly variable in competence.
There was also a large amount of useless bullshit I’ve needed to cut since being hired at my current spot, but the number of containers is actually warranted. We really do have that traffic, which is both happy and sad: business is booming, but I have to deal with this.
I am like Oprah yelling “you get a container, you get a container, Containers!!!” at my executables.
I create aliases using toolbox so I can run most utils easily and securely.
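For anyone curious, a bare-bones sketch of that alias trick (box name and package are just examples):
# create a box, install a tool inside it, then alias it from the host shell
toolbox create -c utils
toolbox run -c utils sudo dnf install -y ripgrep
alias rg='toolbox run -c utils rg'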
Toolbox?
Edit: Oh cool! Thanks for sharing.
https://github.com/containers/toolbox
Podman toolboxes, which layer a container over your user file system, allowing you to make toolbox-specific changes to the system that only affect that toolbox.
I think it’s originally meant for development of desktop environments and OS features, but you can put most command-line apps in them without much feature breakage.
I always saw them pitched by Fedora as the blessed way to run CLI applications on an immutable host.
That’s why I use them, but they’re missing the on-ramp to get this working nicely for regular users.
E.g. how do I install neovim with toolbox and get Wayland clipboard working, without doing a bunch of manual work? It’s easy to add to my ostree, but that’s not really the way it should be.
I ended up making a bunch of scripts to manage this, but now I feel like I’m one step away from just using NixOS.
Zero.
About 35 NixOS VMs though, each running either a single service (e.g. Paperless) or a suite (Sonarr and so on plus NZBGet, VPN,…).
There are additionally a couple of client VMs. All of these are distributed across 3 Proxmox hosts accessing the same iSCSI target for VM storage.
SSL and WireGuard are terminated at a physical firewall box running OPNsense, so with very few exceptions, the VMs do not handle any complicated network setup.
A lot of those VMs have zero state; those that do have backups of just that state automated to the NAS (simply via rsync), and from there everything is backed up again through borg to an external storage box.
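A sketch of that two-hop backup, with made-up paths and host names:
# VM pushes only its state directory to the NAS
rsync -a --delete /var/lib/paperless/ nas:/backups/paperless/
# the NAS then ships everything to the external storage box with borg
borg create --stats ssh://user@storagebox/./backups::paperless-{now} /backups/paperless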
In the stateless case, deploying a new VM is a single command; in the stateful case, it’s the same command, wait for it to come up, SSH in (keys are part of the VM images), and run restore-<whatever>.
On an average day, I spend 0 minutes managing the homelab.
Why VMs instead of containers? Seems like way more processing overhead.
Eh… not really. QEMU does a really good job with VM virtualization.
I believe I could easily build containers instead of VMs from the nix config, but I actually do like having a full VM: since it’s running a full OS instead of an app, all the usual nix tooling just works on it.
Also: In my day job, I actually have to deal quite a bit with containers (and kubernetes), and I just… don’t like it.
Yeah, just wondered, because containers hook into the kernel in a way that has almost no overhead, whereas a VM has to run an entire OS. But hey, I get it; fixing stuff inside a container can be a pain.
Is this in a repo somewhere we can have a look?
I’ll DM you… Not sure I want to link those two accounts publicly 😄
On an average day, I spend 0 minutes managing the homelab.
0 is the goal. Well done !
Edit: Ha! Some masochist down-voted that.
Zero. Either it’s just a service with no wrappers, or a full VM.
Why a full VM, that seems like a ton of overhead
For some convoluted networking things it’s easier for me to have a full “machine” as it were
How it started : 0
Max : 0
Now : 0
ISO 27002 and provenance validation goes brrrrr
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
- DNS: Domain Name Service/System
- LXC: Linux Containers
- NAS: Network-Attached Storage
- Plex: Brand of media server package
- SSH: Secure Shell for remote terminal access
- SSL: Secure Sockets Layer, for transparent encryption
- VPN: Virtual Private Network
- VPS: Virtual Private Server (opposed to shared hosting)
- k8s: Kubernetes container management package
9 acronyms in this thread; the most compressed thread commented on today has 4 acronyms.