Description
First, congrats on this amazing repo! 🎉
While testing the code locally, I ran into a disk space issue due to the size of HuggingFaceM4/the_cauldron. Setting data_cutoff_idx in the config file does reduce the training size, but the entire dataset is still downloaded upfront, so the disk footprint is unchanged.
This defeats the purpose of cutting off the dataset size for quick experiments or for running on low-spec machines.
Suggestion:
It would be great to integrate streaming=True from datasets.load_dataset, or to offer a config option that loads a small subset via Hugging Face's streaming mode. People could then fetch and train on only a portion of the dataset without needing the full disk space.
Since this repo is also an amazing learning resource, supporting streaming would make it more accessible to students and devs with limited local storage.
Thanks again for your great work! 🚀