Dr. Rachael Tatman: Data Scientists at Kaggle, Computational Sociolinguistics

26.1 AI Podcast - A podcast by Brian Ray and Don Sheu


This week’s episode may be our most cerebral to date, probably thanks to hosting our first University of Washington Husky, a Ph.D. graduate. Dr. Rachael Tatman shares snippets of her experience at Kaggle and discusses stochastic approaches to ML models, errors in ML models, understanding prediction, and the importance of reproducibility. A big takeaway references Dr. Tatman’s PyCon talk, “Put down the deep learning: When not to use neural networks and what to do instead.” We explore how many businesses benefit from a simple linear regression model instead of investing millions of dollars in compute on a deep learning approach. Our conversation with Dr. Tatman also covers linguistics and whether Don will ever get accurate real-time translations of his Korean friends’ tweets. So far, our conclusion is that we’re far from the singularity. Episode seven of the 26.1 AI Podcast delivers lots of value for business leaders contemplating their future AI strategy.
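As a rough illustration of that takeaway (a hypothetical sketch using scikit-learn on synthetic data, not code from the episode), a linear-regression baseline can be trained and evaluated in a handful of lines; only if it falls short does a heavier model become worth the extra compute.

```python
# Hypothetical sketch: fit a cheap linear-regression baseline before
# considering deep learning. Data here is synthetic stand-in tabular data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                       # 5 tabular features
true_coefs = np.array([2.0, -1.0, 0.5, 0.0, 3.0])
y = X @ true_coefs + rng.normal(scale=0.1, size=1000)  # mostly linear signal

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

baseline = LinearRegression().fit(X_train, y_train)
print("baseline MAE:", mean_absolute_error(y_test, baseline.predict(X_test)))
```

If a baseline like this already meets the business requirement, the added cost and opacity of a neural network may not be justified.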
