#49 - Meta-Gradients in RL - Dr. Tom Zahavy (DeepMind)

The race is on: we are on a collective mission to understand and create artificial general intelligence. Dr. Tom Zahavy, a Research Scientist at DeepMind, thinks that reinforcement learning is the most general learning framework we have today, and in his opinion it could lead to artificial general intelligence. He believes there is no task that could not be solved by simply maximising a reward.

Back in 2012, when Tom was an undergraduate and before the deep learning revolution, he attended an online lecture on how CNNs automatically discover representations. It was an epiphany: he decided in that moment to become an ML researcher. Tom's view is that the ability to recognise patterns and discover structure is the most important aspect of intelligence, and that has been his quest ever since. He is particularly focused on using diversity preservation and meta-gradients to discover this structure.

In this discussion we dive deep into meta-gradients in reinforcement learning.

Video version and TOC @ https://www.youtube.com/watch?v=hfaZwgk_iS0

About the Podcast

Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis. Our approach is unrivalled in terms of scope and rigour – we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field with the hype surgically removed. MLST is run by Tim Scarfe, Ph.D. (https://www.linkedin.com/in/ecsquizor/) and features regular appearances from Keith Duggar, who holds a Ph.D. from MIT (https://www.linkedin.com/in/dr-keith-duggar/).