
Colloquium on Digital Transformation Science

  • April 29, 3 pm CT

    Understanding Deep Learning through Optimization Bias

    Nathan Srebro, Professor, Toyota Technological Institute at Chicago


    How and why are we succeeding in training huge non-convex deep networks? How can deep neural networks with billions of parameters generalize well, despite having enough capacity to overfit any data set? What is the true inductive bias of deep learning? And does it all just boil down to a big fancy kernel machine? In this talk, I will highlight the central role that optimization geometry and optimization dynamics play in determining the inductive bias of deep learning, showing how specific optimization methods can allow generalization even in underdetermined, overparameterized models.

    Nathan Srebro, Professor, Toyota Technological Institute at Chicago, is interested in statistical and computational aspects of machine learning, and the interaction between them. He has done theoretical work in statistical learning theory and in algorithms, devised novel learning models and optimization techniques, and has worked on applications in computational biology, text analysis, and collaborative filtering. Before coming to TTIC, Srebro was a postdoctoral fellow at the University of Toronto and a visiting scientist at IBM Research.
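The abstract's claim that the choice of optimization method can supply the inductive bias has a classic linear illustration (a minimal sketch, not taken from the talk itself): gradient descent on an underdetermined least-squares problem, initialized at zero, converges to the minimum-Euclidean-norm solution among the infinitely many interpolating solutions. The random Gaussian design and step size below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100                      # fewer samples than parameters: underdetermined
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Gradient descent on the squared loss, started at zero
w = np.zeros(d)
lr = 1e-3                           # small enough for the spectrum of X.T @ X
for _ in range(10_000):
    w -= lr * X.T @ (X @ w - y)

# The minimum-norm interpolant, computed via the pseudoinverse
w_min_norm = np.linalg.pinv(X) @ y

print(np.allclose(X @ w, y))        # gradient descent interpolates the data
print(np.allclose(w, w_min_norm))   # ...and finds the min-norm solution
```

Because every gradient `X.T @ (X @ w - y)` lies in the row space of `X`, iterates started at zero never leave that subspace, which forces convergence to the minimum-norm interpolant; the talk concerns analogous (and far harder) characterizations for deep networks.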

Quick Links:

  • DTI Webpage

  • Information on Call for Proposals

  • Proposal Matchmaking

  • DTI Training Materials Overview (password protected)

  • C3 Administration (password protected)
