This is an automated archive made by the Lemmit Bot.

The original was posted on /r/machinelearning by /u/lapurita on 2024-11-08 18:27:06+00:00.


Let's say we have a dataset that is much larger than our available disk storage. For example:

  • Dataset: 1PB
  • Our disk storage: 10TB
  • GPU RAM: 8×80 GB (not super relevant to this discussion)

What are the usual approaches to training on something like this? Intuitively, what I can think of is doing the following in parallel somehow:

  • prefetch block n, train on block n-1, delete block n-2 from disk (roughly as in the sketch below)
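A minimal sketch of that rolling window, assuming the dataset is pre-split into blocks that each fit on disk. The three callables (`download_block`, `train_on_block`, `delete_block`) are placeholders for whatever the actual pipeline uses; they are not a real library API.

```python
import threading


def train_rolling_window(num_blocks, download_block, train_on_block, delete_block):
    """Overlap cloud -> disk prefetching with training on the previous block.

    download_block(i): pull block i from cloud storage onto local disk
    train_on_block(i): run the training loop over the files of block i
    delete_block(i):   remove block i from local disk
    """
    download_block(0)  # block 0 has to be on disk before training can start
    for n in range(1, num_blocks + 1):
        prefetch = None
        if n < num_blocks:
            # Prefetch block n in the background while we train on block n-1.
            prefetch = threading.Thread(target=download_block, args=(n,))
            prefetch.start()

        train_on_block(n - 1)

        if n - 2 >= 0:
            delete_block(n - 2)  # block n-2 has been consumed; free the disk space

        if prefetch is not None:
            prefetch.join()  # make sure block n is fully on disk before the next step
```

As long as downloading a block takes less time than training on one, the download never becomes the bottleneck and peak disk usage stays at roughly two to three blocks.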

Let's say we use PyTorch, so we have a PyTorch Dataset that has all the paths to where the data is stored in the cloud. Do we need to write a prefetcher/deleter that downloads from the cloud and stores data on disk, run it in a separate process, and then have a DataLoader for training that just assumes it can read from disk (because the prefetcher does its job correctly)? Having the DataLoader read from S3 directly would be bad for GPU utilization, right?
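Something like the sketch below is what I have in mind: a separate prefetcher process that pulls objects from S3 onto local disk, and a Dataset that only ever touches local files. The bucket name, cache directory, and the use of `torch.load` as the decoding step are all assumptions for illustration.

```python
import multiprocessing as mp
from pathlib import Path

import boto3
import torch
from torch.utils.data import Dataset, DataLoader

CACHE_DIR = Path("/scratch/cache")  # local-disk staging area (assumption)


def prefetch_worker(bucket, keys, done_queue):
    """Runs in its own process: downloads S3 objects to local disk and
    announces each finished file on the queue."""
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    s3 = boto3.client("s3")
    for key in keys:
        local_path = CACHE_DIR / Path(key).name
        s3.download_file(bucket, key, str(local_path))
        done_queue.put(local_path)


class LocalBlockDataset(Dataset):
    """Assumes the prefetcher has already put this block's files on local disk."""

    def __init__(self, files):
        self.files = list(files)

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        # torch.load is just a stand-in for whatever decoding the data needs.
        return torch.load(self.files[idx])


# Usage sketch: start the prefetcher, then build a DataLoader per block of local files.
# done_queue = mp.Queue()
# proc = mp.Process(target=prefetch_worker, args=("my-bucket", keys, done_queue))
# proc.start()
# loader = DataLoader(LocalBlockDataset(block_files), batch_size=32, num_workers=8)
```

The deleter would be the mirror image of the prefetcher: once training signals that a block is done, its files get removed from CACHE_DIR.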

To take a step back, I'm assuming this is an ordinary, often-occurring "problem" for every company that trains on large datasets, so I'm skeptical of writing all of this code myself. I feel like there should be standard out-of-the-box solutions for this, but I can't really find anything that matches perfectly.