Colloquium on Digital Transformation Science

  • April 8, 3 pm CT

    Recent Advances in Analysis of Implicit Bias of Gradient Descent on Deep Networks

    Matus Telgarsky, Assistant Professor of Computer Science, University of Illinois at Urbana-Champaign


    The purpose of this talk is to highlight three recent directions in the study of implicit bias, a promising approach to developing a tight generalization theory for deep networks interwoven with optimization. The first direction is a warm-up with purely linear predictors: here, the implicit bias perspective gives the fastest known hard-margin SVM solver! The second direction is on the early training phase with shallow networks: here, implicit bias leads to good training and testing error, with not just narrow networks but also arbitrarily large ones. The talk concludes with deep networks, providing a variety of structural lemmas that capture foundational aspects of how weights evolve for any width and sufficiently large amounts of training. This is joint work with Ziwei Ji.

    Matus Telgarsky is an Assistant Professor of Computer Science at the University of Illinois at Urbana-Champaign, specializing in deep learning theory. He received his PhD from the University of California, San Diego, under Sanjoy Dasgupta. He co-founded the Midwest ML Symposium in 2017 with Po-Ling Loh and organized a Simons Institute summer 2019 program on deep learning with Samy Bengio, Aleksander Madry, and Elchanan Mossel. He received an NSF CAREER Award in 2018.
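The linear-predictor phenomenon mentioned in the abstract can be illustrated with a minimal sketch: on linearly separable data, plain gradient descent on the logistic loss converges in direction to the hard-margin SVM solution, even though nothing in the loss mentions margins. The toy dataset and learning rate below are illustrative assumptions, not material from the talk (which concerns a faster, specialized solver).

```python
import numpy as np

# Hypothetical toy data: linearly separable, chosen so the
# hard-margin SVM direction is (4, 1) / sqrt(17).
X = np.array([[1.0, 0.0], [0.5, 2.0], [-1.0, 0.0], [-0.5, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain gradient descent on the average logistic loss.
w = np.zeros(2)
lr = 1.0
for _ in range(200_000):
    margins = y * (X @ w)
    grad = -(X.T @ (y * sigmoid(-margins))) / len(y)
    w -= lr * grad

w_hat = w / np.linalg.norm(w)
w_star = np.array([4.0, 1.0]) / np.sqrt(17.0)  # max-margin direction

print(np.dot(w_hat, w_star))     # cosine similarity: close to 1
print((y * (X @ w_hat)).min())   # normalized margin: near 4/sqrt(17)
```

The norm of `w` diverges (the logistic loss has no finite minimizer on separable data), but the normalized iterate rotates toward the maximum-margin direction, at a slow logarithmic rate — one motivation for the accelerated solvers discussed in the talk.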

Quick Links:

  • DTI Webpage
  • Information on Call for Proposals
  • Proposal Matchmaking
  • DTI Training Materials Overview (password protected)
  • C3 Administration (password protected)

