Sakana, Strawberry, and Scary AI

Astral Codex Ten Podcast - A podcast by Jeremiah


Sakana (website, paper) is supposed to be “an AI scientist”. Since it can’t access the physical world, it can only do computer science. Its human handlers give it a computer program. It prompts itself to generate hypotheses about the program (“if I change this number, the program will run faster”). Then it uses an AI coding submodule to test its hypotheses. Finally, it uses a language model to write them up in typical scientific paper format.

Is it good? Not really. Experts who read its papers say they’re trivial, poorly reasoned, and occasionally make things up (the creators defend themselves by saying that “less than ten percent” of the AI’s output is hallucinations). Its writing is meandering, repetitive, and often self-contradictory. Like the proverbial singing dog, we’re not supposed to be impressed that it’s good; we’re supposed to be impressed that it can do it at all.

https://www.astralcodexten.com/p/sakana-strawberry-and-scary-ai
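The workflow described above (self-prompt for hypotheses, test them with a coding submodule, write the results up with a language model) can be sketched roughly as follows. This is a minimal illustration, not Sakana’s actual code: the llm() and run_experiment() helpers are hypothetical placeholders standing in for a language-model call and the coding submodule.

```python
# Rough sketch of a hypothesize -> test -> write-up loop, as described above.
# llm() and run_experiment() are hypothetical stand-ins, not a real API.

def llm(prompt: str) -> str:
    """Placeholder for a language-model call (assumption, not Sakana's API)."""
    return f"[model output for: {prompt[:40]}...]"

def run_experiment(program: str, hypothesis: str) -> dict:
    """Placeholder for the coding submodule: apply the proposed change and
    measure the result. Here it just returns dummy runtimes."""
    return {"baseline_runtime": 1.00, "modified_runtime": 0.95}

def ai_scientist(program: str, n_ideas: int = 3) -> str:
    # 1. Self-prompt for hypotheses about the given program.
    hypotheses = [
        llm(f"Suggest a change that might make this program faster:\n{program}")
        for _ in range(n_ideas)
    ]

    # 2. Test each hypothesis and collect the results.
    results = [(h, run_experiment(program, h)) for h in hypotheses]

    # 3. Ask the language model to write the findings up as a paper.
    return llm(f"Write a scientific paper describing these experiments: {results}")

if __name__ == "__main__":
    print(ai_scientist("def sort(xs): return sorted(xs)"))
```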
