Just a small number of base images (ubuntu:, alpine:, debian:) are routinely synced, and anything else is built in CI from Containerfiles. Those are backed up, so as long as the backups are intact I can recover from loss of the image store even without internet access.
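(For a concrete sketch of the sync half: `skopeo sync` with a YAML source list covers it; the hostname and tags below are placeholders, not my actual list.)

```yaml
# sync.yml — which public base images to mirror, and from where.
# Run with something like:
#   skopeo sync --src yaml --dest docker sync.yml registry.internal.example/mirror
docker.io:
  tls-verify: true
  images:
    library/ubuntu:
      - "24.04"
    library/alpine:
      - "3.20"
    library/debian:
      - "bookworm"
```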
I also have two-tier container image storage, which gives redundancy for the built images, though that's more a side effect of workarounds… Anyway, the “source of truth” docker-registry that gets pushed to is only exposed internally: to the one party that needs to do authenticated pushes, and to the second layer of pull-through caches that the internal servers actually pull from. So backups aside, images in active use already exist in at least three copies (push registry, pull registry, and whoever's running them). The mirrored public images are a separate chain altogether.
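(The pull-through layer itself is plain distribution/registry in proxy mode; a minimal sketch of its config, again with placeholder names and a hypothetical read-only account:)

```yaml
# config.yml for a pull-through cache fronting the internal push registry.
version: 0.1
storage:
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
proxy:
  # Upstream is the authenticated "source of truth" registry;
  # from the clients' point of view this cache is read-only.
  remoteurl: https://push-registry.internal.example
  username: cache-puller
  password: changeme
```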
This has been running for a while, so it's all hand-wired from component services. A dedicated Forgejo deployment looks like it could cover a large part of the above in one package today. Plus it conveniently syncs external git dependencies.

Right. If this had been a locally hosted scenario, it would be like posting to complain about the service of your electricity company or ISP, which could similarly be reasonably considered on- or off-topic. But I think this sub is more in the spirit of “there is no cloud, just someone else's computer”. I'm with the mod on this one.