Rishi Bommasani on Foundation Models

The Gradient: Perspectives on AI - A podcast by The Gradient

In episode 19 of The Gradient Podcast, we talk to Rishi Bommasani, a second-year Ph.D. student in the CS Department at Stanford, where he is advised by Percy Liang and Dan Jurafsky. His research focuses on understanding AI systems and their social impact, as well as using NLP to further scientific inquiry. Over the past year, he helped build and organize the Stanford Center for Research on Foundation Models (CRFM).

Sections:
(00:00:00) Intro
(00:01:05) How did you get into AI?
(00:09:55) Towards Understanding Position Embeddings
(00:14:23) Long-Distance Dependencies don't have to be Long
(00:18:55) Interpreting Pretrained Contextualized Representations via Reductions to Static Embeddings
(00:30:25) Master's Thesis
(00:34:05) Start of PhD and work on foundation models
(00:42:14) Why were people interested in foundation models
(00:46:45) Formation of CRFM
(00:51:25) Writing report on foundation models
(00:56:33) Challenges in writing report
(01:05:45) Response to reception
(01:15:35) Goals of CRFM
(01:25:43) Current research focus
(01:30:35) Interests outside of research
(01:33:10) Outro

Papers discussed:
* Towards Understanding Position Embeddings
* Long-Distance Dependencies don't have to be Long: Simplifying through Provably (Approximately) Optimal Permutations
* Interpreting Pretrained Contextualized Representations via Reductions to Static Embeddings
* Generalized Optimal Linear Orders
* On the Opportunities and Risks of Foundation Models
* Reflections on Foundation Models

Get full access to The Gradient at thegradientpub.substack.com/subscribe