Training machine learning models on massive datasets is expensive and time-consuming. Dataset distillation addresses this by creating a small synthetic dataset that matches the performance of training on the full dataset. Recent methods use diffusion models to generate distilled datasets, but existing approaches produce redundant training signals: disjoint subsets of their samples capture 70–80% overlapping information.
We propose learnability-driven dataset distillation, which constructs synthetic datasets incrementally through successive stages. Starting from a small distilled dataset, we train a model and then generate new samples guided by learnability scores that identify what the current model can still learn from, creating an adaptive curriculum. We introduce learnability-guided diffusion, which balances informativeness for the current model against validity under a reference model, automatically generating curriculum-aligned samples.
Our approach reduces redundancy by 39.1%, enables specialization across training phases, and achieves state-of-the-art results on ImageNet-1K (60.1%), ImageNette (87.2%), and ImageWoof (72.9%).
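To make the scoring idea concrete, the sketch below shows one plausible form of a learnability score: a weighted combination of the current model's loss on a candidate (informativeness) and a reference model's confidence on it (validity). The function names, the weighting scheme, and the cross-entropy formulation are illustrative assumptions, not the paper's actual method or notation.

```python
import numpy as np

def learnability_score(p_current, p_reference, label, alpha=0.5):
    """Hypothetical learnability score for one candidate sample.

    p_current, p_reference: class-probability vectors from the current and
    reference models; label: the candidate's class index. alpha weights
    informativeness against validity (an assumed, illustrative trade-off).
    """
    # Cross-entropy of the current model on the label: high when the
    # current model has not yet learned this sample.
    informativeness = -np.log(p_current[label] + 1e-12)
    # Reference-model confidence on the label: high when the sample is a
    # valid, recognizable instance of its class.
    validity = p_reference[label]
    return alpha * informativeness + (1.0 - alpha) * validity

def select_top_k(candidates, k):
    """Rank (p_current, p_reference, label) tuples and keep the top-k
    for the next distillation stage."""
    return sorted(candidates,
                  key=lambda c: -learnability_score(*c))[:k]
```

Under this sketch, a sample the reference model recognizes but the current model misclassifies scores highest, which is the curriculum intuition the abstract describes.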
@inproceedings{chansantiago2026learnability,
  title     = {Learnability-Guided Diffusion for Dataset Distillation},
  author    = {Chan-Santiago, Jeffrey A. and Shah, Mubarak},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2026},
}