
Lunch at the Lab

Lunch at the Lab is a weekly lunchtime gathering of members of the Laboratory and anyone else interested. We meet for a brown-bag lunch and an informal seminar and discussion. If you would like to be added to our email list for ‘Lunch at the Lab’ announcements or other upcoming activities, please email us. The Lunch at the Lab seminar series is supported by funding from MITACS.

Recent Lunch at the Lab Events

Created, organized, and coordinated by Anatoliy Swishchuk since October 7th, 2004

Edited and maintained by Qi Guo (since Fall 2018) and Aiden Huffman (Spring and Summer 2018)

Fall 2020

December 1, 2020

Speaker:  Joshua McGillivray

TOPIC: Review of ‘Deep Learning’ book by I. Goodfellow, Y. Bengio and A. Courville, The MIT Press, 2016: Chapter 17: Monte Carlo Methods, and Chapter 18: Confronting the Partition Function, Sections 18.1 (The Log-Likelihood Gradient) and 18.2 (Stochastic Maximum Likelihood and Contrastive Divergence)

Abstract: Many problems in applied mathematics are infeasible to solve algebraically, so we must rely on numerical methods to approximate a solution. Some numerical methods incorporate randomness to account for the unpredictability of the world around us. Monte Carlo methods are a class of randomized algorithms that use repeated random sampling to approximate a solution within a certain error. Monte Carlo methods are ubiquitous in machine learning and are all but a necessity for solving fundamental problems underlying the machine learning process. This presentation gives an overview of Monte Carlo methods within the context of deep learning and discusses some problems that arise from their use. The log-likelihood gradient (Section 18.1) and stochastic maximum likelihood and contrastive divergence (Section 18.2) from Chapter 18 will also be discussed.
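A minimal sketch of the core idea (an illustrative example, not material from the talk): a Monte Carlo method approximates an expectation by averaging a function over random samples, with an error that shrinks at a rate of roughly 1/sqrt(n). The function and distribution below are hypothetical choices made only for illustration.

    import numpy as np

    def monte_carlo_expectation(f, n_samples, rng=None):
        """Approximate E[f(x)] for x ~ N(0, 1) by averaging f over random samples."""
        rng = rng or np.random.default_rng(0)
        samples = rng.standard_normal(n_samples)
        return f(samples).mean()

    # E[x^2] = 1 for a standard normal; the estimate improves as n grows,
    # with error shrinking at a rate of O(1/sqrt(n)).
    for n in (100, 10_000, 1_000_000):
        print(n, monte_carlo_expectation(lambda x: x**2, n))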

November 17, 2020

Speaker:  Myles Sjogren

TOPIC: Review of ‘Deep Learning’ book by I. Goodfellow, Y. Bengio and A. Courville, The MIT Press, 2016: Chapter 16: Structured Probabilistic Models for Deep Learning

Abstract: Deep learning draws upon many modelling formalisms that researchers can use to guide their design efforts and describe their algorithms. One of these formalisms is the idea of structured probabilistic models. A structured probabilistic model is a way of describing a probability distribution, using a graph to describe which random variables in the distribution interact with each other directly. This chapter provides basic background on some of the most central ideas of graphical models and focuses on the concepts that have proven most useful to the deep learning research community. The use of graphs to describe probability distributions is common practice when dealing with complex scenarios involving high-dimensional data with rich structure. Structured probabilistic modelling can simplify the task of developing models in which variables interact both directly and indirectly, and it provides a formal framework in which models have significantly fewer parameters and can therefore be estimated reliably from less data. This presentation gives an overview of the basics of structured probabilistic modelling and talks briefly about how it can be adapted for use in applications of deep learning.
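A minimal sketch of the graph-to-factorization idea (a hypothetical example, not from the talk): for three binary variables with chain structure a → b → c, the directed graph asserts the factorization p(a, b, c) = p(a) p(b | a) p(c | b), which needs only 1 + 2 + 2 = 5 parameters instead of the 2^3 − 1 = 7 required for a full joint table; the saving grows rapidly with the number of variables. The probability tables below are arbitrary illustrative numbers.

    import numpy as np

    # Conditional probability tables for the chain a -> b -> c (illustrative values).
    p_a = np.array([0.6, 0.4])            # p(a)
    p_b_given_a = np.array([[0.7, 0.3],   # p(b | a = 0)
                            [0.2, 0.8]])  # p(b | a = 1)
    p_c_given_b = np.array([[0.9, 0.1],   # p(c | b = 0)
                            [0.4, 0.6]])  # p(c | b = 1)

    def joint(a, b, c):
        """Evaluate p(a, b, c) via the chain factorization p(a) p(b|a) p(c|b)."""
        return p_a[a] * p_b_given_a[a, b] * p_c_given_b[b, c]

    # Sanity check: the factorized joint sums to 1 over all 2^3 assignments.
    total = sum(joint(a, b, c) for a in (0, 1)
                               for b in (0, 1)
                               for c in (0, 1))
    print(total)  # 1.0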


Past Events