58. David Duvenaud - Using generative models for explainable AI

Towards Data Science - A podcast by The TDS team

In the early 1900s, all of our predictions were the direct product of human brains. Scientists, analysts, climatologists, mathematicians, bankers, lawyers and politicians did their best to anticipate future events and plan accordingly. Take physics, for example, where every task we now think of as part of the learning process, from data collection and cleaning to feature selection and modeling, had to happen inside a physicist's head. When Einstein introduced gravitational fields, what he was really doing was proposing a new feature to be added to our model of the universe, and the gravitational field equations he put forward at the same time were an update to that very model. Einstein didn't come up with his new model (or "theory", as physicists call it) of gravity by running model.fit() in a Jupyter notebook. In fact, he never outsourced any of the computations needed to develop it to machines.

Today, that's somewhat unusual, and most of the predictions the world runs on are generated in part by computers. But only in part: until we have fully general artificial intelligence, machine learning will always be a mix of two things: first, the constraints that human developers impose on their models, and second, the calculations that go into optimizing those models, which we outsource to machines. The human touch is still a necessary and ubiquitous component of every machine learning pipeline, but it's ultimately limiting: the more of the learning pipeline that can be outsourced to machines, the more we can take advantage of computers' ability to learn faster, and from far more data, than human beings.

But designing algorithms that are flexible enough to do that requires serious outside-the-box thinking, exactly the kind of thinking that University of Toronto professor and researcher David Duvenaud specializes in. I asked David to join me for the latest episode of the podcast to talk about his research on more flexible and robust machine learning strategies.
