Open-Ended AI: The Key to Superhuman Intelligence? - Prof. Tim Rocktäschel

Machine Learning Street Talk (MLST) - A podcast by Machine Learning Street Talk (MLST)

Prof. Tim Rocktäschel, AI researcher at UCL and Google DeepMind, talks about open-ended AI systems. These systems aim to keep learning and improving on their own, like evolution does in nature.

Ad: Are you a hardcore ML engineer who wants to work for Daniel Cahn at SlingshotAI building AI for mental health? Give him an email! - [email protected]

TOC:
00:00:00 Introduction to Open-Ended AI and Key Concepts
00:01:37 Tim Rocktäschel's Background and Research Focus
00:06:25 Defining Open-Endedness in AI Systems
00:10:39 Subjective Nature of Interestingness and Learnability
00:16:22 Open-Endedness in Practice: Examples and Limitations
00:17:50 Assessing Novelty in Open-ended AI Systems
00:20:05 Adversarial Attacks and AI Robustness
00:24:05 Rainbow Teaming and LLM Safety
00:25:48 Open-ended Research Approaches in AI
00:29:05 Balancing Long-term Vision and Exploration in AI Research
00:37:25 LLMs in Program Synthesis and Open-Ended Learning
00:37:55 Transition from Human-Based to Novel AI Strategies
00:39:00 Expanding Context Windows and Prompt Evolution
00:40:17 AI Intelligibility and Human-AI Interfaces
00:46:04 Self-Improvement and Evolution in AI Systems

Show notes (New!):
https://www.dropbox.com/scl/fi/5avpsyz8jbn4j1az7kevs/TimR.pdf?rlkey=pqjlcqbtm3undp4udtgfmie8n&st=x50u1d1m&dl=0

REFS:
00:01:47 - UCL DARK Lab (Rocktäschel) - AI research lab focusing on RL and open-ended learning - https://ucldark.com/
00:02:31 - GENIE (Bruce) - Generative interactive environment from unlabelled videos - https://arxiv.org/abs/2402.15391
00:02:42 - Promptbreeder (Fernando) - Self-referential LLM prompt evolution - https://arxiv.org/abs/2309.16797
00:03:05 - Picbreeder (Secretan) - Collaborative online image evolution - https://dl.acm.org/doi/10.1145/1357054.1357328
00:03:14 - Why Greatness Cannot Be Planned (Stanley) - Book on open-ended exploration - https://www.amazon.com/Why-Greatness-Cannot-Planned-Objective/dp/3319155237
00:04:36 - NetHack Learning Environment (Küttler) - RL research in procedurally generated game - https://arxiv.org/abs/2006.13760
00:07:35 - Open-ended learning (Clune) - AI systems for continual learning and adaptation - https://arxiv.org/abs/1905.10985
00:07:35 - OMNI (Zhang) - LLMs modeling human interestingness for exploration - https://arxiv.org/abs/2306.01711
00:10:42 - Observer theory (Wolfram) - Computationally bounded observers in complex systems - https://writings.stephenwolfram.com/2023/12/observer-theory/
00:15:25 - Human-Timescale Adaptation (Rocktäschel) - RL agent adapting to novel 3D tasks - https://arxiv.org/abs/2301.07608
00:16:15 - Open-Endedness for AGI (Hughes) - Importance of open-ended learning for AGI - https://arxiv.org/abs/2406.04268
00:16:35 - POET algorithm (Wang) - Open-ended approach to generate and solve challenges - https://arxiv.org/abs/1901.01753
00:17:20 - AlphaGo (Silver) - AI mastering the game of Go - https://deepmind.google/technologies/alphago/
00:20:35 - Adversarial Go attacks (Dennis) - Exploiting weaknesses in Go AI systems - https://www.ifaamas.org/Proceedings/aamas2024/pdfs/p1630.pdf
00:22:00 - Levels of AGI (Morris) - Framework for categorizing AGI progress - https://arxiv.org/abs/2311.02462
00:24:30 - Rainbow Teaming (Samvelyan) - LLM-based adversarial prompt generation - https://arxiv.org/abs/2402.16822
00:25:50 - Why Greatness Cannot Be Planned (Stanley) - 'False compass' and 'stepping stone collection' concepts - https://www.amazon.com/Why-Greatness-Cannot-Planned-Objective/dp/3319155237
00:27:45 - AI Debate (Khan) - Improving LLM truthfulness through debate - https://proceedings.mlr.press/v235/khan24a.html
00:29:40 - Gemini (Google DeepMind) - Advanced multimodal AI model - https://deepmind.google/technologies/gemini/
00:30:15 - How to Take Smart Notes (Ahrens) - Effective note-taking methodology - https://www.amazon.com/How-Take-Smart-Notes-Nonfiction/dp/1542866502
(truncated, see shownotes)
