There’s also archinstall, which ships with the latest os image; it’s just like any other installer and holds your hand through the process.
It’s really very simple to get arch installed


My salt is just a memorized password that I add on top of the one stored in pass


This is what I do. If someone can get through pass with my password-protected gpg key, plus my passwords being partials (I salt them), plus otp, then they can have my access
I have a simple bash script that manages folders and files and routes them to whatever location they belong in. I run the script and it does all the symlinking for me. This is what I do for systemd unit files and my own dotfiles
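Roughly something like this, as a minimal sketch (the repo layout and paths here are made up for illustration, not my actual script):

```sh
#!/usr/bin/env bash
# Sketch of a dotfile/unit-file symlinker: keep everything in one repo,
# map each top-level directory to a destination, and link whatever is under it.
set -euo pipefail

repo="$HOME/dotfiles"

# repo subdirectory -> where its contents should live
declare -A routes=(
  [home]="$HOME"
  [config]="$HOME/.config"
  [systemd]="$HOME/.config/systemd/user"
)

for dir in "${!routes[@]}"; do
  dest="${routes[$dir]}"
  find "$repo/$dir" -type f | while read -r src; do
    target="$dest/${src#"$repo/$dir/"}"
    mkdir -p "$(dirname "$target")"
    ln -sfn "$src" "$target"   # -f replaces an existing link, -n avoids descending into dirs
    echo "linked $target -> $src"
  done
done
```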
For anyone looking for a simple rss-to-email digest I recommend this service: https://pico.sh/feeds
Stand up a local git lfs server or figure out a different way to store large files. I generally avoid lfs
Why not just run bare repos on your n100? That’s what I do. I have no need for a code forge with code collab features when it’s just me pushing; a rough setup is sketched below the links
https://git-scm.com/book/en/v2/Git-on-the-Server-Setting-Up-the-Server
If you want a web viewer, use a static site git viewer like https://pgit.pico.sh/
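If it helps, the basic flow is something like this (hostname, repo path, and branch name are placeholders; assumes plain ssh access to the box):

```sh
# On the server (the n100): create a bare repo to push to.
ssh n100 'git init --bare ~/git/myproject.git'

# On your machine: add it as a remote over ssh and push.
git remote add origin n100:git/myproject.git
git push -u origin main

# Cloning elsewhere later works the same way.
git clone n100:git/myproject.git
```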
While not the same, I use an rss-to-email service that hits the minimal sweet spot for me
It seems like there might be exceptions to the “no partial upgrades” rule that have not been discussed: you can pin your kernel version, primarily to give packages like zfs time to catch up to the latest kernel
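As a sketch, that pin is just an IgnorePkg line in pacman.conf (exact package names depend on which kernel and zfs packages you actually run, e.g. linux-lts or zfs-dkms):

```sh
# In /etc/pacman.conf, hold the kernel back:
#
#   IgnorePkg = linux linux-headers
#
# A full upgrade then skips those packages:
sudo pacman -Syu

# Once the zfs packages support the newer kernel, remove the IgnorePkg
# line and run the upgrade again so the kernel catches back up:
sudo pacman -Syu
```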


I’ve never used bcachefs and only recently read about some of the drama. I wish the project the best but at this point it is hard to beat zfs
Here’s my journey from arch to proxmox back to arch: https://bower.sh/homelab
I was in your shoes and decided to simplify my system. It’s really hard to beat arch and I missed having full control over the system. Proxmox is awesome but it felt like overkill for my use cases. If I want to experiment with new distros I would probably just run distrobox or qemu directly. Proxmox does a lot, but it ended up just being a gui on top of qemu with some built-in backup systems. But if you end up using zfs anyway … what’s the benefit?
If you want low effort and high value, get a synology 2-bay. If you want full control over the host OS, run Debian/arch with zfs


I didn’t use any of the terms you used in your post. I’m not using those products, partly for the reasons I discussed, but also because I don’t see them as particularly useful beyond the cult of personality building them.
Librefox has been awesome. Once you get the hang of enabling cookies for specific sites it mostly just works, although Fastmail keeps logging me out for some reason


Here’s my homelab journey: https://bower.sh/homelab
Basically, containers plus a GPU are annoying to deal with, and GPU passthrough to a VM is even more annoying. Most modern hobbyist GPUs also do not support being split across guests. At the end of the day it’s a bunch of tinkering, which is valuable if that’s your goal. I learned what I wanted; now I’m back to arch running everything with systemd and quadlet (rough example below)
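For reference, a quadlet setup is roughly this; the unit name, image, and port are placeholders, and this is the rootless per-user flavor:

```sh
# Drop a .container file where quadlet looks for user units.
mkdir -p ~/.config/containers/systemd

cat > ~/.config/containers/systemd/web.container <<'EOF'
[Unit]
Description=Example web container

[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
EOF

# Quadlet generates web.service from the .container file on daemon-reload.
systemctl --user daemon-reload
systemctl --user start web.service
```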


I’m of mixed views about this. Omarchy is popular purely because of DHH. I don’t see anything of benefit beyond the notoriety of a famous dev.
There’s also some dissenting opinion about DHH in general that taints the project: https://drewdevault.com/2025/09/24/2025-09-24-Cloudflare-and-fascists.html
Being based on hyprland also has some potential social issues.
I don’t get why cloudflare didn’t donate to arch instead.
Sorry but this is a ridiculous argument. What entity has dropped nukes on an entire population? Who is the current president of the US? Insane take.