FBL91: Connor Leahy - The Existential Risk of AI Alignment

The Feedback Loop by Singularity - A podcast by Singularity University

This week our guest is Connor Leahy, an AI researcher and the founder of Conjecture, who is dedicated to studying AI alignment. Alignment research focuses on understanding how to build advanced AI systems that pursue the goals they were designed for rather than engaging in undesired behavior. Often this means ensuring they share human values and ethics so that our machines don't cause serious harm to humanity.

In this episode, Connor provides candid insights into the current state of the field, including the concerning lack of funding and human resources going into alignment research. Among many other things, we discuss how the research is conducted, the lessons we can learn from animals, and the policies and processes humans need to put in place if we are to prevent what Connor sees as a highly plausible existential threat.

Find out more about Conjecture at conjecture.dev or follow Connor and his work at twitter.com/NPCollapse

Apply for registration to our exclusive South By Southwest event on March 14th @ www.su.org/basecamp-sxsw

Apply for an Executive Program Scholarship at su.org/executive-program/ep-scholarship

Learn more about Singularity: su.org

Host: Steven Parton - LinkedIn / Twitter

Music by: Amine el Filali
