#101 DR. WALID SABA - Extrapolation, Compositionality and Learnability

Machine Learning Street Talk (MLST) - A podcast by Machine Learning Street Talk (MLST)

MLST Discord: https://discord.gg/aNPkGUQtc5
Patreon: https://www.patreon.com/mlst
YT: https://youtu.be/snUf_LIfQII

We had a discussion with Dr. Walid Saba about whether or not MLP neural networks can extrapolate outside of the training support, and what it means to extrapolate in a vector space (a small code sketch of one common definition follows the timestamps). We then discussed the concept of vagueness in cognitive science: for example, what does it mean to be "rich", and what is a "pile of sand"? Finally, we discussed behaviourism and the "reward is enough" hypothesis.

References:

A Spline Theory of Deep Networks [Balestriero]
https://proceedings.mlr.press/v80/balestriero18b/balestriero18b.pdf

The animation of the spline theory that we showed was created by Ahmed Imtiaz Humayun (https://twitter.com/imtiazprio), and we will be showing an interview with Imtiaz and Randall very soon!

[00:00:00] Intro
[00:00:58] Interpolation vs Extrapolation
[00:24:38] Type 1 / Type 2 generalisation and compositionality / Fodor / Systematicity
[00:32:18] Keith's brain teaser
[00:36:53] Neural Turing machines / discrete vs continuous / learnability
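On the interpolation vs extrapolation point: one common formalisation (the one used in Balestriero and colleagues' work on this debate) says a model interpolates at a test point only when that point lies inside the convex hull of the training samples, and extrapolates otherwise. Below is a minimal, illustrative sketch of that membership test, posed as a linear-programming feasibility problem; the helper name `in_convex_hull` and the random data are our own assumptions for illustration, not code from the episode:

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(x, X):
    """True iff x = sum_i w_i * X[i] for some weights w_i >= 0 summing to 1,
    i.e. x lies in the convex hull of the rows of X."""
    n = X.shape[0]
    # Feasibility LP: the objective is irrelevant, so minimise 0 subject to
    # the convex-combination constraints on the weights w.
    A_eq = np.vstack([X.T, np.ones((1, n))])  # coordinate rows + simplex row
    b_eq = np.concatenate([x, [1.0]])
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs")
    return res.success

# In high dimension a fresh sample almost never lands inside the hull of a
# realistically sized training set, which is the crux of the argument that
# deep networks effectively always extrapolate under this definition.
rng = np.random.default_rng(0)
X_train = rng.standard_normal((1000, 50))  # 1000 training points in 50-d
x_new = rng.standard_normal(50)
print(in_convex_hull(x_new, X_train))      # almost always False
```

Under this definition, the number of samples needed before a new point falls inside the hull grows roughly exponentially with dimension, which is why the discussion in the episode centres on what "outside the training support" should even mean for deep networks.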
