
Is AutoML dead, and have LLMs killed it? MLGym is a benchmark and framework for testing this theory. Roberta Raileanu and Deepak Nathani discuss how well current LLMs are doing at solving ML tasks, what the biggest roadblocks are, and what that means for AutoML generally. Check out the paper: https://arxi...
Where and how can we use foundation models in AutoML? Richard Song, researcher at Google DeepMind, has some answers. Starting from his position paper on leveraging foundation models for optimization, we chat about what makes foundation models valuable for AutoML and how the next steps could look li...
Oscar Beijbom talks about what it's like to run an AutoML startup: Nyckel. Beyond that, we chat about the differences between academia and industry, what truly matters in application, and more. Check out Nyckel at: https://www.nyckel.com/
Colin White, head of research at Abacus.AI, takes us on a tour of Neural Architecture Search: its origins, its important paradigms, and the future of NAS in the age of LLMs. If you're looking for a broad overview of NAS, this is the podcast for you!
There are so many great foundation models across many different domains - but how do you choose one for your specific problem? And how can you best fine-tune it? Sebastian Pineda has an answer: Quicktune can help select the best model and tune it for specific use cases. Listen to find out when this will ...
Designing algorithms by hand is hard, so Chris Lu and Matthew Jackson talk about how to meta-learn them for reinforcement learning. Many of the concepts in this episode apply to meta-learning approaches as a whole, though: "how expressive can we be and still perform well?", "how can we get...