Flat minima are known to enhance generalization and robustness in supervised learning, yet they remain largely unexplored in generative models. In this work, we systematically investigate the role of loss-surface flatness in generative models, both theoretically and empirically, with a particular focus on diffusion models. We show theoretically that flatter minima improve robustness against perturbations in the target prior distribution, yielding two benefits: reduced exposure bias, where errors in noise estimation accumulate over sampling iterations, and significantly improved resilience to model quantization, preserving generative performance even under strong quantization constraints. We further observe that Sharpness-Aware Minimization (SAM), which explicitly controls the degree of flatness, effectively enhances flatness in diffusion models, whereas other well-known methods such as Stochastic Weight Averaging (SWA) and Exponential Moving Average (EMA), which promote flatness indirectly via ensembling, are less effective. Through extensive experiments on CIFAR-10, LSUN Tower, and FFHQ, we demonstrate that flat minima in diffusion models indeed improve not only generative performance but also robustness.
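For intuition, below is a minimal PyTorch sketch of one SAM update applied to the standard noise-prediction (ε-prediction) objective used to train diffusion models. The `q_sample` forward-diffusion helper, the timestep range, and the radius `rho` are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torch.nn.functional as F

def sam_step(model, optimizer, x0, q_sample, rho=0.05):
    """One SAM update on the epsilon-prediction diffusion loss (sketch)."""
    noise = torch.randn_like(x0)
    t = torch.randint(0, 1000, (x0.size(0),), device=x0.device)
    xt = q_sample(x0, t, noise)  # forward diffusion (assumed helper)

    # Step 1: gradient at the current weights, used to find the local
    # worst-case perturbation within an L2 ball of radius rho.
    F.mse_loss(model(xt, t), noise).backward()
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm(2) for g in grads]))
    with torch.no_grad():
        eps = []
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = p.grad * (rho / (grad_norm + 1e-12))
            p.add_(e)          # climb to the perturbed (sharper) point
            eps.append(e)
    optimizer.zero_grad()

    # Step 2: gradient at the perturbed point, then restore the weights
    # and take the actual descent step.
    loss = F.mse_loss(model(xt, t), noise)
    loss.backward()
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)      # undo the perturbation
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

The two-step structure (ascend to the worst-case weights, then descend from there) is what makes SAM an explicit flatness control, in contrast to the averaging-based EMA and SWA baselines below.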
**SAM Achieves Better FID Scores.** FID (lower is better) across datasets and numbers of sampling steps T'.

Dataset | CIFAR-10 (32×32) | | LSUN Tower (64×64) | | FFHQ (64×64) | |
---|---|---|---|---|---|---
T' | 20 steps | 100 steps | 20 steps | 100 steps | 20 steps | 100 steps |
ADM | 34.47 | 8.80 | 36.65 | 8.57 | 30.81 | 7.53 |
+EMA | 10.63 | 4.06 | 7.87 | 2.49 | 19.03 | 6.19 |
+SWA | 11.00 | 3.78 | 8.72 | 2.31 | 17.93 | 5.49 |
+IP | 20.11 | 7.23 | 25.77 | 7.00 | 15.03 | 13.55 |
+IP+EMA | 9.10 | 3.46 | 7.66 | 2.43 | 11.72 | 4.00 |
+IP+SWA | 9.04 | 3.07 | 8.55 | 2.34 | 12.99 | 3.54 |
+SAM | 9.01 | 3.83 | 16.02 | 4.79 | 11.59 | 5.29 |
+SAM+EMA | 7.00 | 3.18 | 6.66 | 2.30 | 11.41 | 5.04 |
+SAM+SWA | 7.27 | 2.96 | 6.50 | 2.27 | 12.15 | 4.17 |
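The EMA and SWA rows in the table average weights over training rather than explicitly penalizing sharpness. A minimal sketch of both updates is shown below; the decay constant and the running-mean schedule are common choices, not necessarily the paper's exact configuration.

```python
import torch

@torch.no_grad()
def ema_update(ema_model, model, decay=0.9999):
    """Exponential moving average: w_ema <- decay * w_ema + (1 - decay) * w."""
    for pe, p in zip(ema_model.parameters(), model.parameters()):
        pe.mul_(decay).add_(p, alpha=1.0 - decay)

@torch.no_grad()
def swa_update(swa_model, model, n_averaged):
    """Running uniform mean over checkpoints: w_swa <- (n * w_swa + w) / (n + 1)."""
    for ps, p in zip(swa_model.parameters(), model.parameters()):
        ps.mul_(n_averaged / (n_averaged + 1.0)).add_(p, alpha=1.0 / (n_averaged + 1.0))
```

In a typical setup the averaged copy is initialized as `copy.deepcopy(model)`, `ema_update` is called after every optimizer step, `swa_update` at a fixed checkpoint interval, and only the averaged weights are used for sampling.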
Figure: loss plots under parameter perturbation for CIFAR-10.
Figure: norm of the predicted noise for CIFAR-10.
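Curves like the loss-under-perturbation plot above can be produced by perturbing the trained weights along a random direction of increasing radius and recording the loss at each radius. The sketch below assumes a generic `loss_fn(model, batch)` standing in for the diffusion training objective; scaling the direction to each weight's norm is one common convention.

```python
import torch

@torch.no_grad()
def loss_under_perturbation(model, loss_fn, batch, radii):
    """Loss along one random direction in weight space, at each radius."""
    originals = [p.clone() for p in model.parameters()]
    direction = []
    for p in model.parameters():
        d = torch.randn_like(p)
        d.mul_(p.norm() / (d.norm() + 1e-12))  # match each weight's scale
        direction.append(d)
    losses = []
    for r in radii:
        for p, p0, d in zip(model.parameters(), originals, direction):
            p.copy_(p0 + r * d)
        losses.append(loss_fn(model, batch).item())
    for p, p0 in zip(model.parameters(), originals):  # restore weights
        p.copy_(p0)
    return losses
```

A flatter minimum shows a slower rise in this curve, which is the qualitative signature compared across SAM, EMA, and SWA in the figure.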
Figure: a conceptual illustration of the theoretical analysis. Theorem 1 (Corollary 1) translates a perturbation in parameter space into a set of perturbed distributions; Theorem 2 (Corollary 2) shows that flat minima lead to robustness against the resulting distribution gap.
A Δ-flat minimum achieves ε-distribution-gap robustness, with ε upper-bounded in terms of the flatness level Δ.
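For intuition, one standard way to formalize the two notions is sketched below. This is an assumed, textbook-style formalization; the paper's precise definitions of Δ-flatness, the divergence D, and the explicit bound on ε may differ.

```latex
% Assumed formalizations, for intuition only; not the paper's exact statement.
% Delta-flatness: the loss barely rises anywhere in a Delta-ball
% around the minimum theta*.
\[
  \max_{\|\delta\|_2 \le \Delta}
    \mathcal{L}(\theta^\ast + \delta) - \mathcal{L}(\theta^\ast)
  \;\approx\; 0.
\]
% epsilon-distribution-gap robustness: the loss stays controlled for any
% target distribution q within divergence epsilon of the training
% distribution p, so a flatter minimum (larger Delta at the same loss)
% tolerates a larger gap epsilon.
\[
  \sup_{D(q \,\|\, p) \le \varepsilon}
    \mathcal{L}_q(\theta^\ast) - \mathcal{L}_p(\theta^\ast)
  \;\le\; C(\Delta).
\]
```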
@article{lee2025understanding,
title={Understanding Flatness in Generative Models: Its Role and Benefits},
author={Lee, Taehwan and Seo, Kyeongkook and Yoo, Jaejun and Yoon, Sung Whan},
journal={arXiv preprint arXiv:2503.11078},
year={2025}
}