Future of Life Institute Podcast

A podcast by Future of Life Institute

210 Episodes

  1. Vincent Boulanin on the Dangers of AI in Nuclear Weapons Systems

    Published: 1.12.2022
  2. Robin Hanson on Predicting the Future of Artificial Intelligence

    Published: 24.11.2022
  3. Robin Hanson on Grabby Aliens and When Humanity Will Meet Them

    Published: 17.11.2022
  4. Ajeya Cotra on Thinking Clearly in a Rapidly Changing World

    Published: 10.11.2022
  5. Ajeya Cotra on how Artificial Intelligence Could Cause Catastrophe

    Published: 3.11.2022
  6. Ajeya Cotra on Forecasting Transformative Artificial Intelligence

    Published: 27.10.2022
  7. Alan Robock on Nuclear Winter, Famine, and Geoengineering

    Published: 20.10.2022
  8. Brian Toon on Nuclear Winter, Asteroids, Volcanoes, and the Future of Humanity

    Published: 13.10.2022
  9. Philip Reiner on Nuclear Command, Control, and Communications

    Published: 6.10.2022
  10. Daniela and Dario Amodei on Anthropic

    Published: 4.3.2022
  11. Anthony Aguirre and Anna Yelizarova on FLI's Worldbuilding Contest

    Published: 9.2.2022
  12. David Chalmers on Reality+: Virtual Worlds and the Problems of Philosophy

    Published: 26.1.2022
  13. Rohin Shah on the State of AGI Safety Research in 2021

    Published: 2.11.2021
  14. Future of Life Institute's $25M Grants Program for Existential Risk Reduction

    Published: 18.10.2021
  15. Filippa Lentzos on Global Catastrophic Biological Risks

    Published: 1.10.2021
  16. Susan Solomon and Stephen Andersen on Saving the Ozone Layer

    Published: 16.9.2021
  17. James Manyika on Global Economic and Technological Trends

    Published: 7.9.2021
  18. Michael Klare on the Pentagon's view of Climate Change and the Risks of State Collapse

    Published: 30.7.2021
  19. Avi Loeb on UFOs and if they're Alien in Origin

    Published: 9.7.2021
  20. Avi Loeb on 'Oumuamua, Aliens, Space Archeology, Great Filters, and Superstructures

    Published: 9.7.2021

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.