“AI companies’ eval reports mostly don’t support their claims” by Zach Stein-Perlman
EA Forum Podcast (All audio) - A podcast by EA Forum Team

AI companies claim that their models are safe on the basis of dangerous capability evaluations. OpenAI, Google DeepMind, and Anthropic publish reports intended to show their eval results and explain why those results imply that the models' capabilities aren't too dangerous.[1] Unfortunately, the reports mostly don't support the companies' claims. Crucially, the companies usually don't explain why they think the results, which often seem strong, actually indicate safety, especially for biothreat and cyber capabilities. (Additionally, the companies are undereliciting and thus underestimating their models' capabilities, and they don't share enough information for people on the outside to tell how bad this is.)

Bad explanation/contextualization

OpenAI biothreat evals: OpenAI says "several of our biology evaluations indicate our models are on the cusp of being able to meaningfully help novices create known biological threats, which would cross our high risk threshold." It doesn't say how it concludes this (or what results [...]

---

Outline:
(00:54) Bad explanation/contextualization
(04:34) Dubious elicitation

The original text contained 6 footnotes which were omitted from this narration.

---

First published: June 9th, 2025

Source: https://forum.effectivealtruism.org/posts/7PPD6JfzXsqghqbCu/ai-companies-eval-reports-mostly-don-t-support-their-claims

---

Narrated by TYPE III AUDIO.