My Bet: AI Size Solves Flubs

Astral Codex Ten Podcast - A podcast by Jeremiah

https://astralcodexten.substack.com/p/my-bet-ai-size-solves-flubs?s=r

In A Guide To Asking Robots To Design Stained Glass Windows, I described how DALL-E gets confused easily and makes silly mistakes. But I also wrote that:

I'm not going to make the mistake of saying these problems are inherent to AI art. My guess is a slightly better language model would solve most of them... For all I know, some of the larger image models have already fixed these issues. These are the sorts of problems I expect to go away with a few months of future research.

Some readers pushed back: why did I think this? For example, Vitor:

Why are you so confident in this? The inability of systems like DALL-E to understand semantics in ways requiring an actual internal world model strikes me as the very heart of the issue. We can also see this exact failure mode in the language models themselves. They only produce good results when the human asks for something vague with lots of room for interpretation, like poetry or fanciful stories without much internal logic or continuity [...] I'm registering my prediction that you're being . . . naive now. Truly solving this issue seems AI-complete to me. I'm willing to bet on this (ideas on operationalization welcome).