A tree-based network structure. #tech #devops #homelab #solarpunk #solarcyber
OpenSSL 3.5 is out; it will be the new LTS, replacing 3.0. It supports PQC (post-quantum crypto) and QUIC (at last!), but I haven't tested those features yet.
Alright, I've been trawling through logs for quite a while now, and I think it's not one specific issue, but just "all this stuff together is too much for a Pi 4". The Ceph MONs are regularly failing their liveness probes, and so is Vault, thanks to the local CSI plugins losing their MONs. The etcd and kube-apiserver logs are full of timeouts. It looks like my setup is not actually sustainable.
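For anyone curious, here's a sketch of one way to tally those probe failures from cluster events (using the kubernetes Python client; the rook-ceph namespace is an assumption about the Ceph deployment):

```python
# Count liveness-probe failures per pod from recent cluster events.
from collections import Counter
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside the cluster
v1 = client.CoreV1Api()

failures = Counter()
for ev in v1.list_namespaced_event("rook-ceph").items:  # namespace assumed
    if ev.reason == "Unhealthy" and "Liveness probe failed" in (ev.message or ""):
        failures[ev.involved_object.name] += ev.count or 1

for pod, n in failures.most_common():
    print(f"{pod}: {n} liveness failures")
```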
Now to decide what to do about it.
Next, I run #syncthing on my laptop and on my #homelab server: a very #lowpower #intelN100 #n100 mini PC that runs in the cupboard under #proxmox. It also has a #samba share, which lets any other network device see the media.
With syncthing running, I always have two copies of the media, but for backup I was using #rclone to send an encrypted copy to #googledrive, which I am in the process of switching over to #nextcloud running on #hetzner.
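For reference, the off-site copy boils down to something like this (a minimal sketch; the paths and remote names are placeholders, with "gdrive-crypt" standing in for an rclone crypt remote wrapped around the Google Drive one):

```python
# One-shot encrypted sync of the media folder to the crypt remote.
import subprocess

subprocess.run(
    ["rclone", "sync", "/srv/media", "gdrive-crypt:media",
     "--transfers", "4", "--checkers", "8", "-v"],
    check=True,  # raise if rclone exits non-zero
)
```

Pointing the destination at a Nextcloud WebDAV remote later means only the remote name changes.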
3/4
Get y'er $112 5TB drives before the tariffs fuck everything up!
Aha. The first issue was that, for some reason, I did not install kube-vip on 2 of the 3 CP nodes, so there was only ever one host holding/serving the k8s VIP. I'm also thinking about getting some load balancing going for the k8s API. Right now, whichever host holds the VIP for the API via kube-vip gets all of the requests, if I understand it correctly. Perhaps I could improve stability by load-balancing the VIP. That's of course not possible with ARP mode, so some more reading is necessary.
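One way to sanity-check the leader election is to peek at the lease kube-vip maintains (a sketch; the lease name "plndr-cp-lock" in kube-system is an assumption based on kube-vip defaults, so check your own config):

```python
# Print which node currently holds kube-vip's control-plane lease.
from kubernetes import client, config

config.load_kube_config()
lease = client.CoordinationV1Api().read_namespaced_lease(
    name="plndr-cp-lock", namespace="kube-system"  # assumed defaults
)
print("current VIP holder:", lease.spec.holder_identity)
print("last renewed:      ", lease.spec.renew_time)
```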
Okay, from initial investigations it looks like the crash this morning at 10:17 AM was due to kube-vip failing to do its leader election because of timeouts when contacting the k8s API, with the k8s API IP consequently going offline. That wreaked havoc in the cluster for a bit. I'm still not 100% sure whether the I/O overload I'm seeing on the CP node was created by the k8s API going down, or whether it caused the API to go down.
Made use of another #maintenanceWindow and upgraded #opnsense to 25.1.5_4.
Boring as usual ;)
The new control plane on my Pi 4 is not really stable. I woke up to one sealed Vault instance because it crashed in the middle of the night, and then an hour ago a lot of my services went down for a period, possibly due to I/O overload on one of the CP Pis. Need to dig deeper into what happened there now.
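To catch the sealed-instance case sooner, something like this could poll Vault's seal status (a sketch against Vault's HTTP API; the address is a placeholder for wherever Vault actually listens):

```python
# Alert-worthy check: is Vault sealed, and how far along is the unseal?
import requests

r = requests.get("http://vault.local:8200/v1/sys/seal-status", timeout=5)
r.raise_for_status()
status = r.json()
print("sealed:", status["sealed"],
      "| unseal progress:", f'{status["progress"]}/{status["t"]}')
```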
I actually have a lot of fun running my #Homelab on #Kubernetes, but the overhead of all the cloud-native paraphernalia on a single node is pretty hefty. Right now I'd love to just torch it all and go back to Docker Compose. But do I really feel like manually converting ~40 k8s Deployments into Compose syntax?
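If it ever came to that, scripting the bulk of the conversion seems feasible. A very rough sketch (it handles only image, env vars, and container ports; volumes, probes, and the rest would still need manual care):

```python
# Emit a docker-compose service stub from a k8s Deployment manifest.
import sys
import yaml  # pip install pyyaml

dep = yaml.safe_load(open(sys.argv[1]))
c = dep["spec"]["template"]["spec"]["containers"][0]

service = {
    "image": c["image"],
    "environment": {e["name"]: e.get("value", "") for e in c.get("env", [])},
    "ports": [f'{p["containerPort"]}:{p["containerPort"]}'
              for p in c.get("ports", [])],
}
print(yaml.safe_dump({"services": {dep["metadata"]["name"]: service}}))
```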
This week's stupidity… I made a mini 5U HomeLab for my HomeAssistant and PiHole boxes that sit inside an Ikea storage crate.
Includes a Pi 5 16GB, Pi 3B+, Pi Zero 2 W and a Pimoroni Badger for stats.
#ikea #ikeahack #homelab #minilab #homeassistant #pihole #network #3dprinting #3dprinted #raspberrypi #pimoroni #diy #maker @geerlingguy
@0xamit everyone who does a little #selfhosting has played this version of Russian roulette in their #homelab.
It’s the perfect crime where you are the victim, the investigator, and also the prime suspect.
It's done. I've just shut down the k8s control plane node VMs after the Pi4 nodes finally had everything running.
The migration is done.
Now onto the cleanup list.
Bruh, I might've wasted my time learning how to pass a GPU through to an #LXC container on #Proxmox (as well as mount an SMB/CIFS share) and writing up a guide (haven't been able to test yet, 'cept with the latter) - all via some seemingly magic #Linux fu with user/group mappings and custom configs - if it turns out you can achieve the same result just as easily, graphically, with a standard wizard on PVE.
It's 4am, so I'll prolly try to find time later in the day - or rather the evening (open house to attend at noon) - to try the wizard: 1) add a device passthrough on an LXC container for my #AMD iGPU (until my #Intel #ArcA380 GPU arrives) and see if the root user + service user on the container can access it / use it for transcoding in #Jellyfin/#ErsatzTV, and 2) add SMB/CIFS storage on the Proxmox Datacenter - though my #NAS is also just a Proxmox VM in the same cluster (not sure if this is a bad idea?) - and see if I can mount that storage to the LXC container that way.
#Homelab folks who have done this, feel free to give some tips or wtv if you've done this before!
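For context, the manual route boils down to appending a few lines to the container's config; commonly documented versions of those lines look like this (a sketch, as untested as the guide: container ID, device numbers, and the bind mount are assumptions - check `ls -l /dev/dri` for the real major:minor pair):

```python
# Append the usual render-node passthrough lines to an LXC config.
from pathlib import Path

conf = Path("/etc/pve/lxc/101.conf")  # container ID assumed
lines = [
    "lxc.cgroup2.devices.allow: c 226:0 rwm",    # card0 (226:0 is typical)
    "lxc.cgroup2.devices.allow: c 226:128 rwm",  # renderD128
    "lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir",
]
conf.write_text(conf.read_text().rstrip("\n") + "\n" + "\n".join(lines) + "\n")
```

From what I've read, the newer PVE wizard writes an equivalent device entry for you, which is exactly the thing I want to verify.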
I just posted Part 4 of my homelab journey!
This update describes how I pivoted my #LenovoThinkStation to a daily driver running #Pop_OS plus some more hardware modifications. I also list the essential apps to keep my workflows flowing.
Project "Talos Most of the #HomeLab Things" has taken another step closer to fruition. The new NAS head machine arrived yesterday so I put in an HBA and a 10Gbit card and installed #TrueNAS.
This morning I swapped the ZFS array from its old home (nibbler) to the new NAS head (morbo), imported the pools and set up NFSv4 exports. After mounting those shares on nibbler, my #Jellyfin LXC booted right up.
The next step is to convert Jellyfin into a #Kubernetes workload with an NFS-backed PVC (see the sketch at the end of this post). After I've got that working for everything but transcodes, I'll be able to pave nibbler with Talos and get transcodes back, then work on the rest of the media stack.
Future steps are to pave hypnotoad and lrrr with Talos, put TrueNAS on the backup machine (crushinator) and maybe put control plane nodes into TrueNAS VMs.
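The NFS-backed PV/PVC mentioned above might look like this via the Kubernetes Python client (a sketch; the server, export path, namespace, and size are placeholders for whatever morbo actually exports):

```python
# Create an NFS PersistentVolume and a claim bound to it by name.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

core.create_persistent_volume(client.V1PersistentVolume(
    metadata=client.V1ObjectMeta(name="jellyfin-media"),
    spec=client.V1PersistentVolumeSpec(
        capacity={"storage": "2Ti"},  # size assumed
        access_modes=["ReadWriteMany"],
        nfs=client.V1NFSVolumeSource(server="morbo.local",     # assumed
                                     path="/mnt/tank/media"),  # assumed
    ),
))
core.create_namespaced_persistent_volume_claim("media", client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="jellyfin-media"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],
        storage_class_name="",  # pin to the pre-created PV, not a dynamic class
        volume_name="jellyfin-media",
        resources=client.V1ResourceRequirements(requests={"storage": "2Ti"}),
    ),
))
```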
I think if we could manage to bottle the post-"Oh god, I think I just nuked my storage cluster" feeling of elation when it works again, we might have a really potent new drug on our hands.
Little things in my homelab:
- HomeAssistant (home automation)
- Jellyfin (media player)
- Nextcloud (photos, calendar, contacts, RSS, documents, Kanban, ...)
- SearXNG (metasearch engine that aggregates other engines)
- LibreTranslate (local machine translation)
- Forgejo (repository manager)
- Motion (motion detection for IP cameras)
- Prosody and Synapse (XMPP and Matrix for video calls, chat, etc.)
- And more little things to manage all of the above (Gatus, Nagios, Grafana, Prometheus, Graphite, Puppet, ...)
#HomeLab
New blog post!
In which I talk in detail about how I deployed Nextcloud locally in my homelab using LXD and NextcloudPi.
I go through the whole process of setting up LXD, importing the NCP image, installing, configuring, and scheduling backups for my own personal cloud.
https://stfn.pl/blog/67-deploying-nextcloud-locally-with-lxd/
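As a taste of the backup step from the post, the whole container can be exported as one tarball (a sketch; the instance name and target directory are assumptions):

```python
# Dump a dated full backup of the LXD instance via `lxc export`.
import subprocess
from datetime import date

target = f"/backups/nextcloudpi-{date.today()}.tar.gz"
subprocess.run(["lxc", "export", "nextcloudpi", target], check=True)
```

Dropping that into a cron entry covers the scheduling part.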