The evaluation of deep generative models has been extensively studied in the machine learning community. While existing evaluation methods focus on centralized learning problems where training data are stored by a single client, many applications of generative models concern distributed learning settings, e.g. federated learning, where training data are collected by and distributed among several clients. In this seminar, we discuss the evaluation of generative models in distributed contexts. We show that different aggregations of standard evaluation scores, such as the Fréchet inception distance (FID), can lead to inconsistent rankings of generative models in a distributed network. We present numerical results on benchmark datasets and generative model training schemes to support our theoretical findings on the evaluation of generative models in distributed learning settings.
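The ranking inconsistency the abstract refers to can be illustrated with a toy construction (not taken from the talk; the client distributions and models below are hypothetical). In one dimension the Fréchet distance between Gaussians has a closed form, and a model matching the pooled data's moments can beat a rival under pooled-data FID while losing under the average of per-client FIDs:

```python
import math

def fid_1d(mu1, sigma1, mu2, sigma2):
    # Frechet distance between two 1-D Gaussians, the closed form
    # FID reduces to in one dimension:
    # (mu1 - mu2)^2 + (sigma1 - sigma2)^2
    return (mu1 - mu2) ** 2 + (sigma1 - sigma2) ** 2

# Hypothetical setup: two clients hold data N(0, 1) and N(10, 1);
# the pooled mixture has mean 5 and variance 1 + 25 = 26.
clients = [(0.0, 1.0), (10.0, 1.0)]
pooled = (5.0, math.sqrt(26.0))

# Model A matches the pooled moments; model B matches the
# per-client spread (variance 1) centred between the clients.
model_A = (5.0, math.sqrt(26.0))
model_B = (5.0, 1.0)

def mean_client_fid(model):
    # Aggregation 1: average the FID computed on each client's data.
    return sum(fid_1d(*model, mu, s) for mu, s in clients) / len(clients)

def pooled_fid(model):
    # Aggregation 2: FID against the pooled data's moments.
    return fid_1d(*model, *pooled)

print(mean_client_fid(model_A), mean_client_fid(model_B))  # ~41.80 vs 25.00: B ranks first
print(pooled_fid(model_A), pooled_fid(model_B))            # 0.00 vs ~16.80: A ranks first
```

The two aggregations reverse the ranking of A and B, which is the kind of inconsistency the talk studies in realistic federated settings.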
Assistant Professor @ The Chinese University of Hong Kong