The AutoML Podcast

A show about the science and engineering behind AutoML.
MLGym: A New Framework and Benchmark for Advancing AI Research Agents
October 31, 2025 · 01:28:33 · 60.83 MB

Is AutoML dead, and have LLMs killed it? MLGym is a benchmark and framework for testing this theory. Roberta Raileanu and Deepak Nathani discuss how well current LLMs are doing at solving ML tasks, what the biggest roadblocks are, and what that means for AutoML generally. Check out the paper: https://arxi...

Leverage Foundational Models for Black-Box Optimization
September 22, 2025 · 00:56:48 · 39.03 MB

Where and how can we use foundation models in AutoML? Richard Song, researcher at Google DeepMind, has some answers. Starting from his position paper on leveraging foundation models for optimization, we chat about what makes foundation models valuable for AutoML, how the next steps could look li...

Nyckel - Building an AutoML Startup
March 07, 2025 · 01:20:59 · 55.64 MB

Oscar Beijbom talks about what it's like to run an AutoML startup: Nyckel. Beyond that, we chat about the differences between academia and industry, what truly matters in applications, and more. Check out Nyckel at: https://www.nyckel.com/

Neural Architecture Search: Insights from 1000 Papers
December 03, 2024 · 01:15:44 · 52.04 MB

Colin White, head of research at Abacus AI, takes us on a tour of Neural Architecture Search: its origins, its most important paradigms, and the future of NAS in the age of LLMs. If you're looking for a broad overview of NAS, this is the podcast for you!

Quick-Tune: Quickly Learning Which Pretrained Model to Finetune and How
August 08, 2024 · 00:53:04 · 36.46 MB

There are so many great foundation models across many different domains - but how do you choose one for your specific problem? And how can you best finetune it? Sebastian Pineda has an answer: Quick-Tune can help select the best model and tune it for specific use cases. Listen to find out when this will ...

Discovering Temporally-Aware Reinforcement Learning Algorithms
June 24, 2024 · 00:51:15 · 35.22 MB

Designing algorithms by hand is hard, so Chris Lu and Matthew Jackson talk about how to meta-learn them for reinforcement learning. Many of the concepts in this episode apply to meta-learning approaches as a whole, though: "how expressive can we be and still perform well?", "how can we get...