distinguished lecture series presents

Anima Anandkumar

Caltech

Research Area

Large-scale machine learning, non-convex optimization and high-dimensional statistics

Visit

Wednesday, April 13th, 2022 to Thursday, April 14th, 2022

Location

MS 6627 and Zoom https://ucla.zoom.us/j/9264073849

abstracts
Heralding Scientific Breakthroughs through AI at Supercomputing Scale: Many scientific applications currently rely on brute-force numerical methods run on high-performance computing (HPC) infrastructure. Even with growing hardware capabilities, these methods are reaching their limits, e.g., for fine-scale climate prediction and large-molecule quantum chemistry. Can artificial intelligence (AI) methods augment or even entirely replace these brute-force calculations to obtain million-x speed-ups? Can such speed-ups enable groundbreaking new discoveries? I will present exciting recent advances that build new foundations in AI applicable to a wide range of problems, such as fluid dynamics and quantum chemistry.
Neural operator: Learning in infinite dimensions with applications to PDEs: Standard neural networks assume finite-dimensional inputs and outputs and are hence unsuitable for modeling phenomena such as those arising from the solutions of partial differential equations (PDEs). We introduce neural operators, which learn operators, i.e., mappings between infinite-dimensional function spaces. By framing neural operators as non-linear compositions of kernel integrations, we establish that they can universally approximate any operator. They are independent of the resolution or grid of the training data and allow zero-shot generalization to higher-resolution evaluations. We find that the Fourier neural operator can solve turbulent fluid flows with a 1000x speedup compared to numerical solvers. I will outline several applications where neural operators have shown orders-of-magnitude speedups.
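The key idea behind the Fourier neural operator is to perform the kernel integration as a pointwise multiplication in Fourier space, keeping only a fixed number of low-frequency modes. Below is a minimal, illustrative PyTorch sketch of one such 1D spectral layer; this is not the speaker's released implementation, and the class name, parameter names, and initialization scale are assumptions for exposition.

```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """One Fourier layer: FFT -> linear mixing of low modes -> inverse FFT."""
    def __init__(self, in_channels, out_channels, modes):
        super().__init__()
        self.modes = modes  # number of low-frequency Fourier modes retained
        scale = 1.0 / (in_channels * out_channels)
        # complex weights that mix channels independently for each kept mode
        self.weights = nn.Parameter(
            scale * torch.randn(in_channels, out_channels, modes,
                                dtype=torch.cfloat)
        )

    def forward(self, x):
        # x: (batch, in_channels, n_gridpoints); assumes modes <= n//2 + 1
        x_ft = torch.fft.rfft(x)  # transform to Fourier space
        out_ft = torch.zeros(x.shape[0], self.weights.shape[1],
                             x_ft.shape[-1], dtype=torch.cfloat,
                             device=x.device)
        # multiply the lowest `modes` frequencies by the learned weights
        out_ft[:, :, :self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, :self.modes], self.weights
        )
        # return to physical space on the original grid
        return torch.fft.irfft(out_ft, n=x.shape[-1])
```

Because the learned weights act on a fixed set of Fourier modes rather than on a fixed grid, the same layer can be evaluated on a finer discretization than it was trained on, which is the resolution independence and zero-shot super-resolution property described in the abstract.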
recordings & notes
Lecture 1
Lecture 2
Lecture 3