I recognize this will vary depending on how much you self-host, so I’m curious about the range of experiences, from people who self-host only a few things to those who self-host many.

Also, how would you compare it to maintaining your other systems (e.g. personal computer, phone, etc.)?

    • Scrubbles@poptalk.scrubbles.tech

      Highly recommend doing infrastructure-as-code. It makes it really easy to git commit and save a previously working state, so you can backtrack when something goes wrong.
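
      A minimal sketch of the idea, with made-up service names and tags: keep a docker-compose.yml like this in a git repo with the image versions pinned, so every known-good state is a commit you can go back to.

        # docker-compose.yml - one commit per known-good state
        services:
          nextcloud:
            image: nextcloud:28.0.4        # pin a tag rather than "latest" so a rollback means something
            volumes:
              - ./data/nextcloud:/var/www/html
            restart: unless-stopped

      Rolling back is then roughly: revert the bad commit in git and run docker compose up -d again.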

      • Kaldo@kbin.social

        Got any decent guides on how to do it? I guess a docker compose file can do most of the work there; I’m just not sure about volume backups and other dependencies in the OS.

          • Kaldo@kbin.social

            Oh, I think I tried at one point, and when the guide started talking about inventory, playbooks and hosts in the first step it broke me a little xd

            • kernelle@lemmy.world

              I get it. The inventory is just a list of all the servers and PCs you’re trying to manage, and the playbooks contain every step you would take if you were configuring everything manually.
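
              A bare-bones sketch of an inventory and a playbook (the hostnames and the example task are made up):

                # inventory.yml - the machines you want Ansible to manage
                all:
                  hosts:
                    homeserver.lan:
                    vps1.example.com:

                # playbook.yml - the steps you'd otherwise do by hand on each box
                - hosts: all
                  become: true
                  tasks:
                    - name: Make sure docker is installed
                      ansible.builtin.apt:
                        name: docker.io
                        state: present
                        update_cache: true

              Then running everything is one command: ansible-playbook -i inventory.yml playbook.yml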

              I’ll be honest, when you first set it up it’s daunting, but that’s the thing! You only need to do it once; then you can deploy and redeploy anything you have in minutes.

              Edit: found this useful resource

    • seaQueue@lemmy.world

      +1. Automate your backup rotation, set up your monitoring and alerting, and then ignore everything until something actually goes wrong. I touch my lab a handful of times a year when it’s time for major updates; otherwise it basically runs itself.

  • Max-P@lemmy.max-p.me

    Very minimal. Mostly just run updates every now and then and fix what breaks which is relatively rare. The Docker stacks in particular are quite painless.

    Couple websites, Lemmy, Matrix, a whole email stack, DNS, IRC bouncer, NextCloud, WireGuard, Jitsi, a Minecraft server and I believe that’s about it?

    I’m a DevOps engineer at work, managing 2k+ VMs that I can more than keep up with. I’d say it varies more with experience and how it’s set up than how much you manage. When you use Ansible and Terraform and Kubernetes, the count of servers and services isn’t really important. One, five, ten, a thousand servers, it matters very little since you just run Ansible on them and 5 minutes later it’s all up and running.

    • Footnote2669@lemmy.zip

      +1 for docker and minimal maintenance. Only updates or new containers might break stuff; if you don’t touch it, it will be fine. Of course there might be some container-specific problems, depending on what you want to run. And I’m not a devops engineer like Max 😅

  • 0110010001100010@lemmy.world

    Typically, very little. I have ~40 containers in my Docker stack and by and large it just works. I upgrade stuff here and there as needed. I am getting ready to do a hardware refresh, but again, with Docker that’s pretty painless.

    Most of the time spent in my lab is trying out new things. I’ll find a new something that looks cool and go down the rabbit hole with it for a while. Then back to the status quo.

  • CarbonatedPastaSauce@lemmy.world

    It’s bursty; I tend to do a lot of work on stuff when I do a hardware upgrade, but otherwise it’s set-it-and-forget-it for the most part. The only servers I pay any significant attention to in terms of frequent maintenance and security checks are the MTAs in the DMZ for my email. Nothing else is exposed to the internet for inbound traffic except a game server VM that’s segregated (credential-wise and network-wise) from everything else, so if it does get compromised it poses very minimal danger to the rest of my network. Everything either has automated updates, or, for servers I want more control over, I update them manually when the mood strikes me or when a big vulnerability that affects my software hits the news.

    TL;DR: averaged over a year, I maybe spend 30-60 minutes a week on self-hosting maintenance tasks for 4 physical servers and about 20 VMs.

  • Crogdor@lemmy.world

    Mostly nothing, except for Home Assistant, which seems to shit the bed every few months. My other services are Docker containers or Proxmox LXCs that just work.

  • thirdBreakfast@lemmy.world

    I run two local physical servers, one production and one dev (and a third prod2 kept in case of a prod1 failure), and two remote production/backup servers all running Proxmox, and two VPSs. Most apps are dockerised inside LXC containers (on Proxmox) or just docker on Ubuntu (VPSs). Each of the three locations runs a Synology NAS in addition to the server.

    Backups run automatically, and I manually run apt updates on everything each weekend with a single ansible playbook. Every host runs a little golang program that exposes the memory and disk use percent as a JSON endpoint, and I use two instances of Uptime Kuma (one local, and one on fly.io) to monitor all of those with keywords.
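
    The endpoint program can be tiny - something along these lines (port and path are arbitrary, and it assumes Linux since it reads /proc/meminfo and does a statfs on the root filesystem):

      // memdisk.go - expose memory and disk use percent as JSON for Uptime Kuma keyword checks
      package main

      import (
          "bufio"
          "encoding/json"
          "net/http"
          "os"
          "strconv"
          "strings"
          "syscall"
      )

      type stats struct {
          MemUsedPct  float64 `json:"mem_used_pct"`
          DiskUsedPct float64 `json:"disk_used_pct"`
      }

      // memUsedPct derives used-memory percent from MemTotal and MemAvailable in /proc/meminfo.
      func memUsedPct() float64 {
          f, err := os.Open("/proc/meminfo")
          if err != nil {
              return -1
          }
          defer f.Close()
          vals := map[string]float64{}
          sc := bufio.NewScanner(f)
          for sc.Scan() {
              fields := strings.Fields(sc.Text())
              if len(fields) >= 2 {
                  n, _ := strconv.ParseFloat(fields[1], 64)
                  vals[strings.TrimSuffix(fields[0], ":")] = n
              }
          }
          if vals["MemTotal"] == 0 {
              return -1
          }
          return (vals["MemTotal"] - vals["MemAvailable"]) / vals["MemTotal"] * 100
      }

      // diskUsedPct reports used percent of the root filesystem via statfs.
      func diskUsedPct() float64 {
          var fs syscall.Statfs_t
          if err := syscall.Statfs("/", &fs); err != nil {
              return -1
          }
          total := float64(fs.Blocks) * float64(fs.Bsize)
          if total == 0 {
              return -1
          }
          return (total - float64(fs.Bavail)*float64(fs.Bsize)) / total * 100
      }

      func main() {
          // Serve both percentages as a single JSON object.
          http.HandleFunc("/stats", func(w http.ResponseWriter, r *http.Request) {
              json.NewEncoder(w).Encode(stats{MemUsedPct: memUsedPct(), DiskUsedPct: diskUsedPct()})
          })
          http.ListenAndServe(":9000", nil)
      }

    Uptime Kuma’s keyword monitor then just needs to see mem_used_pct in the response, and it alerts if the host stops answering or the JSON disappears.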

    So:

    • Weekly: 10 minutes to run the update playbook. I usually also ssh into the VPSs, have a look at the Fail2Ban stats and reboot them if needed, and glance at each of the Proxmox GUIs to check the backups have been running as expected.
    • Monthly: stop the local prod machine and switch to the prod2 machine (restored from backups) for a few days. Probably 30 minutes each way, most of it waiting for backups.
    • Every three months or so, or sooner if I hear of a security update: look through my container versions and see if I want to update them. They’re on docker compose, so the steps are just back up the LXC, then docker down, pull, up - probably 5 minutes per container.
    • Yearly: consider whether I need to do operating system upgrades - e.g. to Proxmox 8, or a new Debian or Ubuntu LTS.
    • Yearly: visit the remotes and do a proper check, clean-up and update.

  • Opisek@lemmy.world

    As others said, the initial setup may consume some time, but once it’s running, it just works. I dockerize almost everything and have automatic backups set up.

  • dlundh@lemmy.world

    A lot less since I started using docker instead of running separate VMs for everything. Fewer systems to update is bliss.

  • Lem453@lemmy.ca

    Maybe 1 hr every month or two to update things.

    Things like my OPNsense router are best updated when no one else is using the network.

    The docker containers I like to update manually after checking the release logs. Doesn’t take long and I often find out about cool new features perusing the release notes.

    Projects will sometimes have major updates that break things and I strongly prefer having everything super stable until I have time to sit down and update.

    11 stacks, 30+ containers. Borg backups run automatically to various repositories. zfs-auto-snapshot also runs automatically to create rapid snapshots.

  • DeltaTangoLima@reddrefuge.com

    Not heaps, although I should probably do more than I do. Generally speaking, on Saturday mornings:

    • Between 2am-4am, Watchtower on all my docker hosts pulls updated images for my containers, and notifies me via Slack
    • For containers I don’t care about, Watchtower auto-updates them as well, at which point I simply check the service is running and purge the old images
    • For mission-critical containers (Pi-hole, Home Assistant, etc), I manually update the containers and verify functionality, before purging old images
    • I then check for updates on my OPNsense firewall, and do a controlled update if required (needs me to jump onto a specific wireless SSID to be able to do so)
    • Finally, my two internet-facing hosts (Nginx reverse proxy and Wireguard VPN server) auto-update their OS and packages using unattended-upgrades, so I test inbound functionality on those
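
    For anyone curious, the Watchtower piece of that is roughly the following compose service (the schedule and hook URL are illustrative; check the Watchtower docs for the exact variable names):

      # watchtower service - weekly auto-updates with Slack notifications
      services:
        watchtower:
          image: containrrr/watchtower
          volumes:
            - /var/run/docker.sock:/var/run/docker.sock
          environment:
            - WATCHTOWER_SCHEDULE=0 0 2 * * 6      # early Saturday morning
            - WATCHTOWER_CLEANUP=true              # purge old images after updating
            - WATCHTOWER_NOTIFICATIONS=slack
            - WATCHTOWER_NOTIFICATION_SLACK_HOOK_URL=https://hooks.slack.com/services/...
          restart: unless-stopped

    Mission-critical containers can be excluded with a com.centurylinklabs.watchtower.enable=false label, which is one way to keep them manual-update only.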

    What I still want to do is develop some Ansible playbooks to deploy unattended-upgrades across my fleet (40-ish Debian/docker LXCs). I fear I have some tech debt growing on those hosts, but I’ve fallen into the convenient trap of knowing my internet-facing gear is always up to date, and I can be lazy about the rest.
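
    A first pass at that playbook can be pretty small - something like this, where the group name is whatever your inventory uses:

      # unattended.yml - roll out unattended-upgrades to the Debian/docker LXCs
      - hosts: lxc_hosts                # hypothetical inventory group
        become: true
        tasks:
          - name: Install unattended-upgrades
            ansible.builtin.apt:
              name: unattended-upgrades
              state: present
              update_cache: true

          - name: Turn on periodic unattended upgrades
            ansible.builtin.copy:
              dest: /etc/apt/apt.conf.d/20auto-upgrades
              content: |
                APT::Periodic::Update-Package-Lists "1";
                APT::Periodic::Unattended-Upgrade "1";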

  • smileyhead@discuss.tchncs.de

    I spend a huge amount of time configuring and setting up stuff, as it’s my biggest hobby. But I’ve got good enough that when I set something up it can stay for months without any maintenance. Most of what I do to keep things running is adding more storage if something turns out to be used more than planned.

  • N-E-N@lemmy.ca

    As a complete noob trying to build a TrueNAS server: none, and then suddenly lots when I don’t know how to fix something that broke.