Is Power-Seeking AI an Existential Risk?

AI Safety Fundamentals: Alignment - A podcast by BlueDot Impact

This report examines what I see as the core argument for concern about existential risk from misaligned artificial intelligence. I proceed in two stages. First, I lay out a backdrop picture that informs such concern. On this picture, intelligent agency is an extremely powerful force, and creating agents much more intelligent than us is playing with fire, especially given that if their objectives are problematic, such agents would plausibly have instrumental incentives to seek power over humans.
