106 - Why GPT and other LLMs (probably) aren't sentient

Philosophical Disquisitions - A podcast by John Danaher

In this episode, I chat to Robert Long about AI sentience. Robert is a philosopher who works on issues in the philosophy of mind, cognitive science and AI ethics. He is currently a philosophy fellow at the Centre for AI Safety in San Francisco, and he completed his PhD at New York University. We do a deep dive on the concept of sentience, why it is important, and how we can tell whether an animal or AI is sentient. We also discuss whether it is worth taking the topic of AI sentience seriously. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.

Relevant Links
Robert's webpage
Robert's substack
