#homelab

Replied in thread

Alright, I've been trawling through logs for quite a while now, and I think it's not one specific issue, but simply that all of this stuff together is too much for a Pi 4. The Ceph MONs are regularly failing their liveness probes, and so is Vault, because the local CSI plugins keep losing their MONs. The etcd and kube-apiserver logs are full of timeouts. It looks like my setup is not actually sustainable.
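For anyone debugging something similar, a couple of hedged starting points (assumes kubectl access to the cluster and the sysstat tools on the node):

    # spot failed probes and restart storms cluster-wide
    kubectl get events -A --field-selector reason=Unhealthy
    kubectl get pods -A --sort-by='.status.containerStatuses[0].restartCount'
    # etcd is very sensitive to fsync latency; watch the Pi's storage directly
    # (SD cards and slow USB disks are common culprits)
    iostat -x 5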

Now to decide what to do about it.

Continued thread

Next, I run #syncthing on my laptop and on my #homelab #intelN100 #n100 mini PC / server, which sits in the cupboard and is very #lowpower. It runs #proxmox and also hosts a #samba share, which lets any other device on the network see the media.

With syncthing running, I always have two copies of the media, but for backup I was using #rclone to send an encrypted copy to #googledrive, which I am in the process of switching over to #nextcloud running on #hetzner.
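For reference, the rclone side of that is a crypt remote layered over the Drive remote; a minimal sketch, with example remote names and paths:

    # one-time setup via the interactive wizard: a Google Drive remote ("gdrive"),
    # then a crypt remote ("gdrive-crypt") pointing at gdrive:backup
    rclone config
    # push an encrypted copy of the media
    rclone sync /mnt/media gdrive-crypt: --progress
    # verify the encrypted copy against the local files
    rclone cryptcheck /mnt/media gdrive-crypt: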

🧵 3/4

Replied in thread

Aha. The first issue was that, for some reason, I did not install kube-vip on 2 of the 3 CP nodes, so there was only ever one host holding/serving the k8s VIP. I'm also thinking about getting some load balancing going for the k8s API. Right now, if I understand it correctly, whichever node holds the VIP via kube-vip gets all of the API requests. Perhaps I could improve stability by load-balancing the API. That's of course not possible with ARP mode, so some more reading is necessary.
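If it helps anyone: since kube-vip in ARP mode only ever answers from the current leader, the usual workaround is a TCP load balancer in front of all the control-plane endpoints. A hedged haproxy sketch, with made-up node IPs:

    # /etc/haproxy/haproxy.cfg (fragment): spread API traffic over all CP nodes
    frontend k8s-api
        bind *:6443
        mode tcp
        default_backend k8s-cp
    backend k8s-cp
        mode tcp
        balance roundrobin
        option tcp-check
        server cp1 10.0.0.11:6443 check
        server cp2 10.0.0.12:6443 check
        server cp3 10.0.0.13:6443 check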

Continued thread

Okay, from initial investigation it looks like the crash this morning at 10:17 AM happened because kube-vip failed its leader election due to timeouts contacting the k8s API, and consequently the k8s API IP went offline. That wreaked havoc in the cluster for a bit. I'm still not 100% sure whether the I/O overload I'm seeing on the CP node was created by the k8s API going down, or whether it caused the API to go down.
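To untangle that chicken-and-egg, the kernel's pressure stall info and timestamped logs can at least place the I/O stall relative to the API outage (PSI needs kernel 4.20+; the kube-vip pod name depends on how it was deployed):

    cat /proc/pressure/io                        # time tasks spent stalled on I/O
    dmesg -T | grep -i 'blocked for more than'   # hung-task warnings, timestamped
    kubectl -n kube-system logs <kube-vip-pod> --timestamps | grep -iE 'lease|leader'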

The new control plane on my Pi 4 is not really stable. I woke up to one sealed Vault instance because it crashed in the middle of the night, and then an hour ago a lot of my services went down for a period, possibly due to I/O overload on one of the CP Pis. Need to dig deeper into what happened there now.
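(A Vault that comes back sealed after a crash is expected behaviour rather than data loss; recovery is roughly:)

    vault status            # shows "Sealed: true" after an unclean restart
    vault operator unseal   # repeat with key shares until the unseal threshold is met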

I actually really enjoy running my #Homelab on #Kubernetes, but the overhead of all the cloud-native clutter on a single node is pretty steep. Right now I'd love to just burn it all down and go back to Docker Compose. But do I really feel like manually converting ~40 k8s Deployments into Compose syntax? 👀

Bruh, I might've wasted my time learning how to pass a GPU through to an #LXC container on #Proxmox (as well as mount an SMB/CIFS share) and writing up a guide (haven't been able to test the GPU part yet, only the latter), all through some seemingly magic #Linux fu with user/group mappings and custom configs, if it turns out you can achieve the same result just as easily through a standard graphical wizard on PVE.

It's 4am, so I'll prolly find time later in the day, or rather the evening (open house to attend at noon), to try the wizard and 1) add a device passthrough on an LXC container for my #AMD iGPU (until my #Intel #ArcA380 GPU arrives) and see whether the root user + service user on the container can access it / use it for transcoding in #Jellyfin / #ErsatzTV, and 2) add SMB/CIFS storage at the Proxmox Datacenter level, tho my #NAS is also just a Proxmox VM in the same cluster (not sure if this is a bad idea?), and see if I can mount that storage into the LXC container that way.
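For anyone curious, the manual "Linux fu" route mostly boils down to a few lines in /etc/pve/lxc/<vmid>.conf; a rough sketch (device path and GID are examples):

    # newer style: a device entry, handed to the container's render group
    dev0: /dev/dri/renderD128,gid=104
    # older manual style: allow the DRM character devices (major 226) and bind-mount
    lxc.cgroup2.devices.allow: c 226:* rwm
    lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file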

#Homelab folks who have done this before, feel free to share some tips or wtv!

Project "Talos Most of the #HomeLab Things" has taken another step closer to fruition. The new NAS head machine arrived yesterday so I put in an HBA and a 10Gbit card and installed #TrueNAS.

This morning I swapped the ZFS array from its old home (nibbler) to the new NAS head (morbo), imported the pools and set up NFSv4 exports. After mounting those shares on nibbler my #Jellyfin LXC booted right up.
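The underlying moves, roughly, with example pool and mount names (TrueNAS handles the actual export config through its UI):

    # on morbo: adopt the relocated ZFS array
    zpool import tank
    # on nibbler: mount the NFSv4 export served by TrueNAS
    mount -t nfs4 morbo:/mnt/tank/media /mnt/media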

The next step is to convert Jellyfin into a #Kubernetes workload with an NFS-backed PVC. After I've got that working for everything but transcodes I'll be able to pave nibbler with Talos and get transcodes back, then work on the rest of the media stack.
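A hedged sketch of what that NFS-backed claim might look like (server, path, and size are example values):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: jellyfin-media
    spec:
      capacity:
        storage: 1Ti
      accessModes: ["ReadWriteMany"]
      nfs:
        server: morbo
        path: /mnt/tank/media
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: jellyfin-media
    spec:
      accessModes: ["ReadWriteMany"]
      storageClassName: ""
      volumeName: jellyfin-media
      resources:
        requests:
          storage: 1Ti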

Future steps are to pave hypnotoad and lrrr with Talos, put TrueNAS on the backup machine (crushinator) and maybe put control plane nodes into TrueNAS VMs.

Little things in my homelab:
- Home Assistant (home automation)
- Jellyfin (media player)
- Nextcloud (photos, calendar, contacts, RSS, documents, Kanban, ...)
- Searxng (metasearch engine that aggregates other search engines)
- LibreTranslate (local LLM translator)
- Forgejo (repository manager)
- Motion (motion detection for IP cameras)
- Prosody and Synapse (XMPP and Matrix for video calls, chat, etc.)
- And more little things to manage all of the above (Gatus, Nagios, Grafana, Prometheus, Graphite, Puppet, ...)
#HomeLab