This is an automated archive made by the Lemmit Bot.
The original was posted on /r/unraid by /u/nmmnmnmmn on 2023-09-14 17:31:06.
I’m considering rebuilding my server. Currently I just have a 2x SSD cache pool (RAID 1) that holds the appdata, domains, isos, and system shares, plus an array of HDDs where everything else goes. My Plex transcodes just go to /mnt/cache/appdata/binhex-plex/tmp, and my media share (torrents, libraries) goes straight to the array. It doesn’t touch the cache, since I know the mover won’t work if a file is in use. My Plex library is mostly 1080p, but there’s definitely a lot of 4K HDR/DV content that does get transcoded sometimes. I have a good number of users, and peak times can see 3-5 simultaneous streams that are a mix of transcodes and direct play.
My assumption is that dual Gen 4 x4 NVMe drives for my cache pool would be plenty of speed to keep the same setup as above? Or, more nuanced:
Plex transcodes
I keep hearing about people transcoding to RAM, but every time I’ve tried it, it fills up my RAM and crashes my entire server. It seems like Plex doesn’t delete the transcode files until the stream is done, so a single 4K movie stream will eventually fill my RAM before the movie is over. I only have 16 GB of RAM (7 GB free), and even with more it would concern me that enough simultaneous 4K transcodes could crash the server. Is something set up incorrectly, or would I just need 64 GB+ of RAM to be safe?
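For what it’s worth, the usual way to make RAM transcoding safe is to give the container a size-capped tmpfs instead of an unbounded RAM directory: when the cap is hit, the one transcode fails instead of the whole server. A sketch, assuming the binhex-plex container on Unraid’s Docker template and a hypothetical 4 GB cap (tune to your RAM):

```shell
# Hypothetical: added to the container's "Extra Parameters" field in the
# Unraid Docker template. Mounts a RAM-backed tmpfs at /transcode that
# can never grow past 4 GB, no matter how long a stream runs.
--mount type=tmpfs,destination=/transcode,tmpfs-size=4g

# Then point Plex's "Transcoder temporary directory"
# (Settings > Transcoder) at /transcode.
```

With only 16 GB of RAM, something like a 2-4 GB cap is probably the most you’d want; a stream that outgrows the cap dies, but the server stays up.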
I’ve also heard of people putting transcodes on a dedicated drive separate from appdata. How necessary is that, even?
Plex appdata
I’ve heard of people putting specifically Plex’s appdata on a separate NVMe, because when someone browses Plex there are a lot of random reads to load posters and metadata, so dedicating a drive to it means no other process can get in the way; plus the low latency of an NVMe helps. But how necessary is this? Would having all my appdata on a single NVMe pool be just fine?
As far as latency goes, I’d be surprised if going from SSD -> NVMe would really improve perceived Plex poster load times when the end user is on WiFi or physically far from the server. Does it really help?
Torrents
I’ve also heard of putting torrents somewhere else because seeding causes a lot of random reads. I’ve always kept my torrents and library folders in the same media share, because this makes sonarr/radarr imports instant. If I put them on a different share, it does a slow copy of the file (and I’m not sure if hardlinks would still work?). Storage has gotten pretty cheap, though, so I do wonder if a big 2-4 TB SSD/NVMe as a “seedbox” would be ideal.
How necessary is this one? Do random reads really hurt an HDD array that much? I do have ~3.4k torrents at the moment, but typically only 1-4 are actively seeding at any given time.
In conclusion
I’ve always seen Unraid as a very “set it and forget it” solution. I’ve never considered messing around with how it uses storage. For example, I’ve also heard of people putting important files on a dedicated encrypted HDD, but I always thought you want files to be as spread out as possible for drive failure safety.
Overall it seems like these ideas boil down to:

- Cold storage / sequential writes and reads (backups, torrent downloads, file cache, Plex media)
- Random writes (Plex transcodes)
- Random reads (appdata, Plex appdata, torrent seeding)

And most of the server’s sequential reads/writes are slow enough that HDDs perform just fine.
I’ve even thought of splitting my server in two: a personal cold-storage server (very important files, dual parity, prioritizing things like ECC RAM) and a Plex media server (exclusively torrents, the *arrs, Plex, Tautulli, etc.). But that always sounded excessive. Has anyone done this?
Any tips or insight is appreciated!