506 Episodes

  1. Direct Preference Optimization with Unobserved Preference Heterogeneity: The Necessity of Ternary Preferences (Published: 24.10.2025)
  2. The Coverage Principle: How Pre-Training Enables Post-Training (Published: 24.10.2025)
  3. The Era of Real-World Human Interaction: RL from User Conversations (Published: 24.10.2025)
  4. Agent Learning via Early Experience (Published: 24.10.2025)
  5. Demystifying the Mechanisms Behind Emergent Exploration in Goal-conditioned RL (Published: 22.10.2025)
  6. Rewriting History: A Recipe for Interventional Analyses to Study Data Effects on Model Behavior (Published: 22.10.2025)
  7. A Definition of AGI (Published: 22.10.2025)
  8. Provably Learning from Language Feedback (Published: 21.10.2025)
  9. In-Context Learning for Pure Exploration (Published: 21.10.2025)
  10. On the Role of Preference Variance in Preference Optimization (Published: 20.10.2025)
  11. Training LLM Agents to Empower Humans (Published: 20.10.2025)
  12. Richard Sutton Declares LLMs a Dead End (Published: 20.10.2025)
  13. Demystifying Reinforcement Learning in Agentic Reasoning (Published: 19.10.2025)
  14. Emergent coordination in multi-agent language models (Published: 19.10.2025)
  15. Learning-to-measure: in-context active feature acquisition (Published: 19.10.2025)
  16. Andrej Karpathy's insights: AGI, Intelligence, and Evolution (Published: 19.10.2025)
  17. Front-Loading Reasoning: The Synergy between Pretraining and Post-Training Data (Published: 18.10.2025)
  18. Representation-Based Exploration for Language Models: From Test-Time to Post-Training (Published: 18.10.2025)
  19. The attacker moves second: stronger adaptive attacks bypass defenses against LLM jailbreaks and prompt injections (Published: 18.10.2025)
  20. When can in-context learning generalize out of task distribution? (Published: 16.10.2025)

Cut through the noise. We curate and break down the most important AI papers so you don’t have to.