
In machine learning, we work with models and use data to train them. In many machine learning models, the parameters do not lie in Euclidean space; in some cases they lie on a differentiable manifold. Furthermore, the natural data used to develop these models do not necessarily lie in Euclidean space either; this is known as the manifold hypothesis in the literature. Manifolds are spaces that locally resemble Euclidean space. In this talk, I briefly discuss differentiable manifolds and manifold optimization, illustrated with a small sketch below. Then, I give some example applications of manifold optimization in machine learning. I also discuss the manifold hypothesis and how it is useful in many applications, including generative models, adversarial robustness, and out-of-distribution detection.
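As a concrete illustration of manifold optimization (my own minimal sketch, not an example taken from the talk): the unit sphere is perhaps the simplest differentiable manifold used as a parameter space, and Riemannian gradient ascent on it for the Rayleigh quotient f(x) = xᵀAx recovers the leading eigenvector of a symmetric matrix A. The step size and iteration count below are arbitrary choices for this toy problem.

```python
import numpy as np

# Riemannian gradient ascent on the unit sphere, maximizing the
# Rayleigh quotient f(x) = x^T A x. The maximizer is the leading
# eigenvector of the symmetric matrix A.

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A = (A + A.T) / 2               # symmetrize so eigenvectors are real

x = rng.standard_normal(5)
x /= np.linalg.norm(x)          # initialize on the manifold (unit sphere)

step = 0.1
for _ in range(500):
    egrad = 2 * A @ x                    # Euclidean gradient of f
    rgrad = egrad - (x @ egrad) * x      # project onto the tangent space at x
    x = x + step * rgrad                 # move along the tangent direction
    x /= np.linalg.norm(x)               # retraction: map back onto the sphere

# Sanity check against the top eigenvector from an eigendecomposition
top = np.linalg.eigh(A)[1][:, -1]
print(abs(x @ top))                      # close to 1.0 when converged
```

The projection-then-retraction pattern shown here (take the Euclidean gradient, project it onto the tangent space, then retract back onto the manifold) is the basic template that general manifold optimization methods follow on other manifolds as well.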

Assistant Professor @ University of Tehran