This is an automated archive made by the Lemmit Bot.
The original was posted on /r/selfhosted by /u/crazyclown87 on 2025-08-07 14:20:56+00:00.
Setup:
- Dell T3500 workstation (X5670 6c/12t, 24GB RAM, GTX 1050 Ti)
- Proxmox 8.4.5 with Ubuntu 24.04.2 VM (8 cores, 18GB RAM)
- LXC container managing storage share, VM mounted to share
- Docker Compose running Jellyfin + *arr stack
- Server at my shop (AT&T fiber: 942↓/890↑ Mbps)
- Streaming to home via Starlink (356↓/24↑ Mbps)
- Content: 1080p movies and shows
The Problem: Casting from my Samsung S22 Ultra to Chromecast was stuttering terribly. CPU hitting 130% on a single core while the GPU sat around 50%. Playing on the phone worked fine (even when transcoding, once I fixed the bitrate in the player), but any casting = stutter fest. I do realize that from a technology standpoint I'm running prehistoric hardware: the Dell T3500 had its heyday around 2010, the X5670 is from 2010, and the not-as-old 1050 Ti is from 2016.
What I Tried:
- Upgraded from GTX 950 to 1050 Ti (didn’t help)
- Verified hardware acceleration was enabled in Jellyfin
- Checked bandwidth, drivers, GPU passthrough - all good
- Monitored with htop and nvtop during playback
The Revelation: The issue wasn’t the hardware - it was content format vs device compatibility. Most of my media was HEVC with EAC3 audio in MKV containers. Even with GPU handling video decode/encode, the CPU was getting destroyed by:
- Audio transcoding (EAC3 → AAC) - single-threaded bottleneck
- Container remuxing (MKV → MP4) - single-threaded
- Chromecast's strict format requirements
Real-time transcoding forced everything through single-core CPU processes, while batch encoding could use all cores efficiently.
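If you want to see which files in your own library hit this combo, here's a rough probe sketch (assumes ffprobe is installed and on the PATH; it only flags HEVC video or EAC3 audio, nothing fancier):

```python
#!/usr/bin/env python3
"""Quick codec check: does a file have the problem combo (HEVC video and/or EAC3 audio)?"""
import json
import subprocess
import sys

def probe(path: str) -> dict:
    # ffprobe with JSON output; requires ffprobe on the PATH
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_streams", "-show_format", "-of", "json", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)

def needs_reencode(path: str) -> bool:
    info = probe(path)
    # Last stream of each type wins - good enough for a rough library scan
    codecs = {s.get("codec_type"): s.get("codec_name") for s in info["streams"]}
    return codecs.get("video") == "hevc" or codecs.get("audio") == "eac3"

if __name__ == "__main__":
    for f in sys.argv[1:]:
        print(f, "-> re-encode" if needs_reencode(f) else "-> direct play candidate")
```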
The Solution:
Pre-encoded problematic files to universal format:
```bash
ffmpeg -i input.mkv -c:v libx264 -profile:v high -level 4.1 -pix_fmt yuv420p -crf 20 -c:a aac -ac 2 -b:a 128k -f mp4 -movflags +faststart output.mp4
```
This creates H264 8-bit + stereo AAC in MP4 - compatible with everything.
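A minimal sketch of how the overnight batch run could look (hypothetical library path; assumes ffmpeg is on the PATH, and it writes the MP4 next to the original instead of replacing it):

```python
#!/usr/bin/env python3
"""Overnight batch re-encode: run the ffmpeg command above over every .mkv under a library root."""
import pathlib
import subprocess

LIBRARY = pathlib.Path("/mnt/media/movies")  # hypothetical mount point

for mkv in sorted(LIBRARY.rglob("*.mkv")):
    out = mkv.with_suffix(".mp4")
    if out.exists():
        continue  # already converted on a previous run
    subprocess.run(
        ["ffmpeg", "-i", str(mkv),
         "-c:v", "libx264", "-profile:v", "high", "-level", "4.1",
         "-pix_fmt", "yuv420p", "-crf", "20",
         "-c:a", "aac", "-ac", "2", "-b:a", "128k",
         "-f", "mp4", "-movflags", "+faststart",
         str(out)],
        check=True,
    )
    print(f"done: {out.name}")
```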
Results: Perfect direct play on all devices. No more transcoding, no more stuttering. The T3500 handles overnight batch encoding beautifully using all cores.
System Monitoring:
Built a Python script combining sensors output and system stats. The T3500 has surprisingly good sensor support - shows temps for all 6 RAM sticks (26-28°C), CPU cores (max 69°C under load), and both system fans.
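A minimal sketch of that kind of monitor (not my exact script; assumes psutil is installed and lm-sensors is configured, with the nvidia-smi line optional):

```python
#!/usr/bin/env python3
"""Minimal hardware monitor: CPU/RAM via psutil, temps/fans via the kernel hwmon sensors, GPU via nvidia-smi."""
import subprocess
import psutil

def gpu_stats() -> str:
    # Optional: query the 1050 Ti through nvidia-smi, if the driver tools are installed
    try:
        return subprocess.run(
            ["nvidia-smi", "--query-gpu=utilization.gpu,temperature.gpu", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
    except (FileNotFoundError, subprocess.CalledProcessError):
        return "n/a"

print(f"CPU load: {psutil.cpu_percent(interval=1):.0f}%")
print(f"RAM used: {psutil.virtual_memory().percent:.0f}%")
print(f"GPU (util, temp): {gpu_stats()}")

# Temperatures and fan speeds exposed by the motherboard/DIMM sensors
for chip, readings in psutil.sensors_temperatures().items():
    for r in readings:
        print(f"{chip} {r.label or 'temp'}: {r.current}°C")
for chip, fans in psutil.sensors_fans().items():
    for f in fans:
        print(f"{chip} {f.label or 'fan'}: {f.current} RPM")
```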
Questions for the community:
- What client do you use to consume your Jellyfin media?
- Anyone else hit this transcoding bottleneck with mixed format libraries?
- Better approaches than pre-encoding everything?
- Worth setting up Tdarr for automated re-encoding?
- Is running a media server at a separate location common?
- VM vs LXC for media server workloads - any performance difference?
- Workflow automation question: Has anyone successfully integrated automatic pre-encoding into their *arr workflow? I'm thinking of adding a Python script that runs after NZBGet downloads but before Sonarr/Radarr import - encode to a compatible format, replace the original, then let the normal rename/move happen (rough sketch below). Is this feasible or am I overcomplicating things? The alternative would be Tdarr monitoring download folders, but I'm wondering about timing issues with the *arr import process.
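For concreteness, roughly what I have in mind for that last question - purely a sketch, assuming NZBGet's post-processing hook exposes the completed download directory via the NZBPP_DIRECTORY environment variable and treats exit code 93 as success (check the NZBGet docs before relying on either):

```python
#!/usr/bin/env python3
"""Sketch of an NZBGet post-processing script: re-encode MKVs in the finished download before *arr import."""
import os
import pathlib
import subprocess
import sys

POSTPROCESS_SUCCESS, POSTPROCESS_ERROR = 93, 94  # NZBGet post-processing exit codes (assumed)

def reencode(mkv: pathlib.Path) -> None:
    out = mkv.with_suffix(".mp4")
    subprocess.run(
        ["ffmpeg", "-i", str(mkv),
         "-c:v", "libx264", "-profile:v", "high", "-level", "4.1",
         "-pix_fmt", "yuv420p", "-crf", "20",
         "-c:a", "aac", "-ac", "2", "-b:a", "128k",
         "-f", "mp4", "-movflags", "+faststart",
         str(out)],
        check=True,
    )
    mkv.unlink()  # drop the original so Sonarr/Radarr only ever see the MP4

def main() -> int:
    download_dir = os.environ.get("NZBPP_DIRECTORY")
    if not download_dir:
        return POSTPROCESS_ERROR  # not called by NZBGet
    for mkv in pathlib.Path(download_dir).rglob("*.mkv"):
        reencode(mkv)
    return POSTPROCESS_SUCCESS

if __name__ == "__main__":
    sys.exit(main())
```

Deleting the MKV before import is the part I'm unsure about - whether it races the *arr import depends on how the post-processing step is ordered, which is exactly the timing question above.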
Key Takeaway: Sometimes the “hardware problem” can actually be a workflow problem. Spent money on GPU upgrade when the real solution was understanding codec compatibility and avoiding real-time transcoding entirely.