#arca380

Mika:
Done this with the #AMD 5600G's iGPU; now testing #Jellyfin hardware transcoding with my #Intel #ArcA380 GPU. One question though: is it normal for CPU usage to be rather high, at least in the early part of a stream (while transcoding), even with hardware transcoding?

I'm just trying to figure out whether it is actually hardware transcoding. I'm assuming it is, because the admin dashboard shows it transcoding to AV1, and I'm sure my Ryzen 7 1700 would not be able to handle that, especially with 4 streams playing concurrently. But CPU usage is rather high for the first minute or two (or more) after a stream starts, and it drops afterwards. Video playback is perfectly fine, no stuttering or anything like that.

My passthrough method is the same as with the 5600G: a simple passthrough to the #LXC container, then to the #Docker container running Jellyfin. I don't think I noticed this high CPU usage when testing the 5600G, and the only minor configuration difference between the two is that I used #VAAPI with the 5600G and disabled #AV1 encoding (since I don't think it supports it), while on the Arc A380 I'm using Intel's #QSV and have enabled AV1 encoding.

Am I correct to assume that hardware transcoding is indeed working? Because again, I'm quite certain my Ryzen 7 1700 would definitely NOT be able to handle this, especially since I only give the LXC container 2 cores.
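A quick way to sanity-check this beyond the dashboard is to look at the ffmpeg process Jellyfin spawns and see whether it was launched with hardware-acceleration arguments. A minimal sketch, assuming the transcoder shows up in the process list as ffmpeg and that QSV/VAAPI jobs carry markers like "qsv", "vaapi", or "hwaccel" in their arguments (flag names can differ between Jellyfin versions):

```python
#!/usr/bin/env python3
# Sketch: list running ffmpeg processes and flag the ones that look like
# hardware transcodes. Assumes Jellyfin's transcoder appears as "ffmpeg"
# in /proc and that QSV/VAAPI jobs include arguments containing "qsv",
# "vaapi" or "hwaccel" -- adjust the markers if your setup differs.
import os

HW_MARKERS = ("qsv", "vaapi", "hwaccel")

def ffmpeg_processes():
    """Yield (pid, argv) for every process whose binary looks like ffmpeg."""
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/cmdline", "rb") as f:
                raw = f.read()
        except OSError:
            continue  # process exited or is not readable
        argv = [a for a in raw.decode(errors="replace").split("\x00") if a]
        if argv and "ffmpeg" in os.path.basename(argv[0]):
            yield pid, argv

for pid, argv in ffmpeg_processes():
    hw_args = [a for a in argv if any(m in a.lower() for m in HW_MARKERS)]
    kind = "hardware flags found" if hw_args else "no hardware flags (software?)"
    print(f"ffmpeg pid {pid}: {kind}")
    for arg in hw_args:
        print(f"  {arg}")
```

Even with the encode fully on the GPU, a short burst of CPU at the start of a stream isn't necessarily a red flag: remuxing, audio transcoding, and the initial buffer-building phase (before the transcode starts throttling) can still keep the CPU busy.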
Mika:
I have finally caved in and dove down the rabbit hole of #Linux Containers (#LXC) on #Proxmox while exploring how to split a GPU across multiple servers, and... I now totally understand the Proxmox setups made up exclusively of LXCs rather than VMs lol - they're just so pleasant to set up and use, and, superficially at least, very efficient.

I now have a #Jellyfin and #ErsatzTV setup running on LXCs with working iGPU passthrough of my server's #AMD Ryzen 5600G APU. My #Intel #ArcA380 GPU has also arrived, but I'm probably going to hold off on adding it until I decide which node to add it to and schedule the shutdown, etc. In the future, I might even explore (re)building a #Kubernetes (#RKE2) cluster on LXC nodes instead of VMs - and whether that's viable or perhaps better.

Anyway, I've updated my #Homelab wiki with guides pertaining to LXCs, including creating one, passing a GPU through to multiple unprivileged LXCs, and adding an #SMB share for the entire cluster and mounting it, also on unprivileged LXC containers.

🔗 https://github.com/irfanhakim-as/homelab-wiki/blob/master/topics/proxmox.md#linux-containers-lxc
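As a companion to the passthrough guide, here's a minimal sketch of the kind of check that can be run inside the container (as the Jellyfin/ErsatzTV service user) to confirm the render node made it through and is actually accessible. It assumes the GPU is exposed under /dev/dri; exact node names like renderD128 will vary:

```python
#!/usr/bin/env python3
# Sketch: run inside the LXC container (as the Jellyfin/ErsatzTV service
# user) to confirm the passed-through GPU nodes exist and are usable.
# Assumes the devices are exposed under /dev/dri; node names will vary.
import glob
import grp
import os
import pwd
import stat

user = pwd.getpwuid(os.geteuid()).pw_name
nodes = sorted(glob.glob("/dev/dri/renderD*") + glob.glob("/dev/dri/card*"))

if not nodes:
    print("No /dev/dri nodes found -- passthrough is probably not working.")

for node in nodes:
    st = os.stat(node)
    try:
        group = grp.getgrgid(st.st_gid).gr_name
    except KeyError:
        group = str(st.st_gid)  # GID not mapped inside the unprivileged container
    perms = stat.filemode(st.st_mode)
    usable = "OK" if os.access(node, os.R_OK | os.W_OK) else "NO ACCESS"
    print(f"{node}: {perms} group={group} -> {usable} for user {user}")
```

If the node shows up but access is denied, the usual culprit on unprivileged containers is the group mapping: the render/video GID on the host not lining up with the one inside the container.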
Mika:
Bruh, I might've wasted my time learning how to pass a GPU through to an #LXC container on #Proxmox (as well as mount an SMB/CIFS share) and writing up a guide (haven't been able to test it yet, except for the latter) - all with some seemingly magic #Linux fu involving user/group mappings and custom configs - if it turns out you can achieve the same result just as easily, graphically, with a standard wizard on PVE.

It's 4am. I'll probably find time later in the day, or rather the evening (open house to attend at noon), to try the wizard and 1) add a device passthrough on an LXC container for my #AMD iGPU (until my #Intel #ArcA380 GPU arrives) and see whether the root user and the service user on the container can access it and use it for transcoding on #Jellyfin/#ErsatzTV, and 2) add SMB/CIFS storage at the Proxmox Datacenter level - though my #NAS is also just a Proxmox VM in the same cluster (not sure if that's a bad idea?) - and see if I can mount that storage into the LXC container that way.

#Homelab folks who have done this, feel free to share some tips if you've been through it before!
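For what it's worth, both routes end up as lines in the container's config on the PVE host, so it's easy to compare what the wizard writes against the manual setup. A hedged sketch that just reads the config and reports which style is in place - it assumes the usual /etc/pve/lxc/<vmid>.conf location, that the GUI device-passthrough wizard writes devN: entries, and that the manual route uses raw lxc.cgroup2.devices.allow / lxc.mount.entry lines (key names may differ across PVE releases):

```python
#!/usr/bin/env python3
# Sketch: run on the Proxmox host to see how a container's GPU passthrough
# and mount points are configured. Assumptions: the config lives at
# /etc/pve/lxc/<vmid>.conf, the GUI wizard writes "devN:" entries, the
# manual route uses raw "lxc.cgroup2.devices.allow"/"lxc.mount.entry"
# lines, and mount points appear as "mpN:" entries.
import re
import sys

vmid = sys.argv[1] if len(sys.argv) > 1 else "100"  # hypothetical default VMID
conf = f"/etc/pve/lxc/{vmid}.conf"

gui_devices, raw_lxc, mount_points = [], [], []
with open(conf) as f:
    for line in (l.strip() for l in f):
        if re.match(r"dev\d+:", line):        # e.g. dev0: /dev/dri/renderD128,gid=104
            gui_devices.append(line)
        elif line.startswith(("lxc.cgroup2.devices.allow", "lxc.mount.entry")):
            raw_lxc.append(line)
        elif re.match(r"mp\d+:", line):       # e.g. mp0: <storage>:...,mp=/mnt/media
            mount_points.append(line)

print(f"--- {conf} ---")
print("GUI-style device passthrough:", *(gui_devices or ["(none)"]), sep="\n  ")
print("Manual LXC device/mount lines:", *(raw_lxc or ["(none)"]), sep="\n  ")
print("Mount points:", *(mount_points or ["(none)"]), sep="\n  ")
```

If the Datacenter-level SMB/CIFS storage route works out, the resulting container mount should also show up as one of those mpN: lines, which makes it easy to compare against a manual bind mount.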