
Rio Yokota

Professor at the Global Scientific Information and Computing Center, Tokyo Institute of Technology

“Training vision transformers with synthetic images”

Wednesday 15 February, 8.30am – 9.05am

Abstract

Transformers have become the dominant neural network architecture not only in natural language processing, but also in computer vision and other modalities. The potential of transformers lies in their scaling laws: pre-training larger models on larger datasets leads not only to increased accuracy on a wide range of downstream tasks, but also to the emergence of new capabilities. However, the need for very large datasets poses many challenges, including societal bias, copyright, and privacy issues in data scraped from the Internet. The cost of cleaning datasets to avoid these issues becomes prohibitive at scale, making data curation the next major challenge in deep learning. In this talk, I will discuss the possibility of using synthetic datasets that are free of such issues.
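As a one-line sketch of what “scaling laws” refers to here (an illustration, not part of the talk itself): empirically, test loss falls predictably as a power law in model size and dataset size. One common parameterization, from Hoffmann et al. (2022), is

\[
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
\]

where N is the number of model parameters, D is the amount of training data, and E, A, B, α, β are empirically fitted constants. The B/D^β term shrinks only as the dataset grows, which is why ever-larger, and therefore ever-harder-to-curate, datasets sit at the heart of the challenge the abstract describes.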

Bio

Rio Yokota is a Professor at the Global Scientific Information and Computing Center, Tokyo Institute of Technology. His research interests lie at the intersection of high performance computing, linear algebra, and machine learning. He is the developer of numerous libraries for fast multipole methods (ExaFMM), hierarchical low-rank algorithms (Hatrix), and information matrices in deep learning (ASDFGHJKL) that scale to the full system on the largest supercomputers today. He has been optimizing algorithms on GPUs since 2006, and was part of a team that received the Gordon Bell Prize in 2009 for work on the first GPU supercomputer. Rio is a member of ACM, IEEE, and SIAM.

Website

Slides