51. AGI Safety and Alignment with Robert Miles

In this episode we chat with Robert Miles about why we even want artificial general intelligence, how general AI can be seen as narrow AI whose input is the whole world, and why predictions about AI often sound like science fiction. We cover terms like AI safety, the control problem, AI alignment, and the specification problem; discuss the shortage of people working in AI alignment; explain why AGI doesn't need to be conscious; and more.

Om Podcasten

Discourse on AI ethics. News, explanations, and interviews with academics, authors, business leaders, creatives, and engineers on the subjects of autonomous algorithms, artificial intelligence, machine learning, technology ethics, and more.