109 Episodes

  1. Managing frontier model training organizations (or teams)

    Published: 19.3.2025
  2. Gemma 3, OLMo 2 32B, and the growing potential of open-source AI

    Published: 13.3.2025
  3. Interviewing Eugene Vinitsky on self-play for self-driving and what else people do with RL

    Published: 12.3.2025
  4. Elicitation, the simplest way to understand post-training

    Published: 10.3.2025
  5. Where inference-time scaling pushes the market for AI companies

    Published: 5.3.2025
  6. GPT-4.5: "Not a frontier model"?

    Published: 28.2.2025
  7. Character training: Understanding and crafting a language model's personality

    Published: 26.2.2025
  8. Claude 3.7 thonks and what's next for inference-time scaling

    Published: 24.2.2025
  9. Grok 3 and an accelerating AI roadmap

    Published: 18.2.2025
  10. An unexpected RL Renaissance

    Published: 13.2.2025
  11. Deep Research, information vs. insight, and the nature of science

    Published: 12.2.2025
  12. Making the U.S. the home for open-source AI

    Published: 5.2.2025
  13. Why reasoning models will generalize

    Published: 28.1.2025
  14. Interviewing OLMo 2 leads: Open secrets of training language models

    Published: 22.1.2025
  15. DeepSeek R1's recipe to replicate o1 and the future of reasoning LMs

    Published: 21.1.2025
  16. Let me use my local LMs on Meta Ray-Bans

    Published: 15.1.2025
  17. (Voiceover) DeepSeek V3 and the actual cost of training frontier AI models

    Published: 9.1.2025
  18. The state of post-training in 2025

    Published: 8.1.2025
  19. Quick recap on the state of reasoning

    Published: 2.1.2025
  20. (Voiceover) 2024 Interconnects year in review

    Published: 31.12.2024


Audio essays about the latest developments in AI and interviews with leading scientists in the field. Breaking the hype, understanding what's under the hood, and telling stories. www.interconnects.ai
