Wondered if folks had a preferred method for implementing a GitOps approach for homelabs consisting mostly of docker compose stacks.

We run Kubernetes at work, and I’m tempted to migrate to that as it provides a slew of great GitOps tools, but I’ve been putting off that migration as I haven’t had the time to invest.

Is learning Ansible and going that route still the way to go?

  • iii@mander.xyz · 1 month ago

    At one of my clients, who wants everything on-prem, I use GitLab CI with Ansible. It took 3 days to set up and requires tinkering, but all in all, I like the versatility, consistency, and transparency of this approach.
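
    Roughly, the CI side reduces to a single deploy job that runs the playbook against an inventory. A minimal sketch, assuming the runner has SSH access to the hosts (file names and branch are placeholders, not my actual config):

    # .gitlab-ci.yml (sketch)
    deploy:
      image: python:3.12-slim
      before_script:
        - pip install ansible
      script:
        # inventory.ini and site.yml are placeholder names
        - ansible-playbook -i inventory.ini site.yml
      rules:
        - if: $CI_COMMIT_BRANCH == "main"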

    If I were starting over, I’d use pyinfra instead of Ansible, but that’s a minor difference.

  • beeng@discuss.tchncs.de · 1 month ago

    K3s with Flux inside. There’s a fun video on YouTube from the GOTO conference by a guy with a nice, easy-to-follow repo. Might be a bit much, but… I’m not sure of anything that does GitOps with plain compose.
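
    For anyone curious, the core of a Flux setup is just two objects: a GitRepository pointing at your repo, and a Kustomization that applies a path from it. A minimal sketch (repo URL, names, and path are placeholders):

    apiVersion: source.toolkit.fluxcd.io/v1
    kind: GitRepository
    metadata:
      name: homelab
      namespace: flux-system
    spec:
      interval: 5m
      url: https://github.com/yourname/homelab   # placeholder repo
      ref:
        branch: main
    ---
    apiVersion: kustomize.toolkit.fluxcd.io/v1
    kind: Kustomization
    metadata:
      name: apps
      namespace: flux-system
    spec:
      interval: 10m
      sourceRef:
        kind: GitRepository
        name: homelab
      path: ./apps       # directory of manifests to reconcile
      prune: true        # delete cluster objects removed from git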

  • Matt The Horwood@lemmy.horwood.cloud · 1 month ago

    I do this with Semaphore; the playbook below checks out the git repo and reapplies the compose file.

    • docker_hosts is the host to run this on
    • project_dir is where the compose file is on disk

    I don’t use GitHub, but my own Git server that isn’t on the internet.

    ---
    - name: Update docker images
      hosts: "{{ docker_hosts }}"
      gather_facts: false
      tasks:
        # Pull the latest compose files from the git remote
        - name: Read-write git checkout from github
          ansible.builtin.git:
            repo: git@github.com:yourname/docker.git
            dest: /home/username/git/docker

        # Re-apply the compose project, always pulling newer images
        - name: Create and start services
          community.docker.docker_compose_v2:
            project_src: "{{ project_dir }}"
            build: never
            pull: always
            state: present
          register: output

        # Surface anything compose wrote to stderr
        - name: Show compose output
          ansible.builtin.debug:
            var: output.stderr_lines
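
    If you were running it by hand instead of from Semaphore, you’d pass those two variables in as extra vars, something like ansible-playbook update.yml -e "docker_hosts=docker project_dir=/home/username/git/docker/myapp" (the playbook file name and paths are placeholders, not my actual setup).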
    
    
  • chakli@lemmy.world · 1 month ago

    Just wondering: after how many containers does GitOps start to make sense? I have a dozen containers, and I check for updates once a month manually. I update the compose/Docker files by hand and bring my containers up in stages, because my Git server and my container registry are themselves containers. Also, my dev is my prod env.

    • iii@mander.xyz · 1 month ago

      I think it depends on the rate of change rather than the number of containers.

      At home I do things manually as things change maybe 3 or 4 times a year.

      Professionally, I usually do set up automated DevOps, because updates and deployments happen almost daily.

    • thejml@sh.itjust.works · 1 month ago

      I feel like, for me at least, GitOps for containers is peace of mind. I run a small Kubernetes cluster as my home lab, and all the configs are in git. If need be, I know (because I tested it) that if something happens to the cluster and I lose it all, I can spin up a new cluster, apply the configs from git, and be back up and running. Because I do deployments directly from git, I know that everything in git is up to date and versioned, so I can roll back.

      I previously ran a set of docker containers with compose and then swarm, and I always worried something wouldn’t be recoverable. Adding GitOps here reduced my “what if?” quotient tremendously.

      • chakli@lemmy.world · 1 month ago

        How many hosts do you manage? What k8s tools do you use? I have just one host. I use bind mounts in docker compose for container-generated config/data/cache, for which I don’t have a backup; if it’s gone, I have to start from scratch. But I try to keep most config in git.

        • thejml@sh.itjust.works · 1 month ago

          Currently, I have a Proxmox cluster of three 1L Dell nodes with 6 kube nodes on it (3 masters, 3 workers). It lets me migrate services off a host so I can take it out, do upgrades/maintenance, and put it back without hearing about downtime from the family/friends.

          For storage, I’ve got a Synology NAS with NFS set up, and the pods are configured to use that for their storage if they need it (so Jellyfin, Immich, etc.). I do regular backups of the NAS with rsync, so if that goes down, I can restore or stand up a new NAS with NFS and it’ll be back to normal.
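
          For anyone wanting the shape of the NFS wiring, a statically bound volume plus claim looks roughly like this (server address, paths, sizes, and names are placeholders, not my actual config):

          apiVersion: v1
          kind: PersistentVolume
          metadata:
            name: jellyfin-media
          spec:
            capacity:
              storage: 1Ti
            accessModes:
              - ReadWriteMany
            persistentVolumeReclaimPolicy: Retain
            nfs:
              server: 192.168.1.50      # placeholder NAS address
              path: /volume1/media      # placeholder export path
          ---
          apiVersion: v1
          kind: PersistentVolumeClaim
          metadata:
            name: jellyfin-media
            namespace: media
          spec:
            accessModes:
              - ReadWriteMany
            storageClassName: ""        # bind to the static PV above
            volumeName: jellyfin-media
            resources:
              requests:
                storage: 1Ti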