#041 - Biologically Plausible Neural Networks - Dr. Simon Stringer

Dr. Simon Stringer obtained his Ph.D. in mathematical state space control theory and has been a Senior Research Fellow at Oxford University for over 27 years. Simon is the director of the Oxford Centre for Theoretical Neuroscience and Artificial Intelligence, which is based within the Oxford University Department of Experimental Psychology. His department covers vision, spatial processing, motor function, language and consciousness, and in particular how the primate visual system learns to make sense of complex natural scenes. Dr. Stringer's laboratory houses a team of theoreticians who are developing computer models of a range of different aspects of brain function, and his lab is investigating the neural and synaptic dynamics that underpin brain function. An important topic here is the feature-binding problem, which concerns how the visual system represents the hierarchical relationships between features: the visual system must represent hierarchical binding relations across the entire visual field, at every spatial scale and level in the hierarchy of visual primitives. We discuss the emergence of self-organised behaviour, complex information processing, invariant sensory representations and hierarchical feature binding, all of which emerge when you build biologically plausible neural networks with temporal spiking dynamics. (A minimal sketch of the STDP rule discussed in the episode appears after the timestamps below.)

00:00:09 Tim Intro
00:09:31 Show kickoff
00:14:37 Hierarchical feature binding and timing of action potentials
00:30:16 Hebb to spike-timing-dependent plasticity (STDP)
00:35:27 Encoding of shape primitives
00:38:50 Is imagination working in the same place in the brain?
00:41:12 Comparison to supervised CNNs
00:45:59 Speech recognition, motor system, learning mazes
00:49:28 How practical are these spiking NNs?
00:50:19 Why simulate the human brain?
00:52:46 How much computational power do you gain from differential timings?
00:55:08 Adversarial inputs
00:59:41 Is a generative / causal component needed?
01:01:46 Modalities of processing, i.e. language
01:03:42 Understanding
01:04:37 Human hardware
01:06:19 Roadmap of NNs?
01:10:36 Interpretability methods for these new models
01:13:03 Won't GPT just scale and do this anyway?
01:15:51 What about trace learning and transformation learning?
01:18:50 Categories of invariance
01:19:47 Biological plausibility

https://www.youtube.com/watch?v=aisgNLypUKs
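As a rough illustration of the spike-timing-dependent plasticity rule mentioned at 00:30:16, here is a minimal sketch of the classic pair-based STDP weight update. The parameter values and function names are illustrative assumptions, not taken from Dr. Stringer's models.

```python
import numpy as np

# Minimal sketch of pair-based spike-timing-dependent plasticity (STDP).
# All parameter values below are illustrative assumptions.

A_PLUS = 0.01    # potentiation amplitude (assumed)
A_MINUS = 0.012  # depression amplitude (assumed)
TAU_MS = 20.0    # STDP time constant in milliseconds (assumed)

def stdp_dw(t_pre: float, t_post: float) -> float:
    """Weight change for one pre/post spike pair (times in ms).

    If the presynaptic spike precedes the postsynaptic spike
    (dt > 0), the synapse is potentiated (LTP); if the order is
    reversed, it is depressed (LTD). The magnitude decays
    exponentially with the time difference.
    """
    dt = t_post - t_pre
    if dt > 0:
        return A_PLUS * np.exp(-dt / TAU_MS)
    return -A_MINUS * np.exp(dt / TAU_MS)

# Example: a pre-spike 5 ms before a post-spike strengthens the synapse,
# while the reverse ordering weakens it.
print(stdp_dw(t_pre=10.0, t_post=15.0))  # positive (LTP)
print(stdp_dw(t_pre=15.0, t_post=10.0))  # negative (LTD)
```

Because the update depends on the precise ordering and timing of spikes rather than on firing rates alone, rules of this family let repeated causal pre-before-post firing strengthen predictive connections, which connects to the episode's question (00:52:46) about how much computational power differential spike timings can add.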

About the Podcast

Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis. Our approach is unrivalled in terms of scope and rigour: we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field with the hype surgically removed. MLST is run by Tim Scarfe, Ph.D. (https://www.linkedin.com/in/ecsquizor/) and features regular appearances from MIT Doctor of Philosophy Keith Duggar (https://www.linkedin.com/in/dr-keith-duggar/).