Powering AI with the World's Largest Computer Chip with Joel Hestness - #684

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence) - A podcast by Sam Charrington

Today we're joined by Joel Hestness, principal research scientist and lead of the core machine learning team at Cerebras. We discuss Cerebras' custom silicon for machine learning, the Wafer Scale Engine 3, and how the latest version of the company's single-chip platform for ML has evolved to support large language models. Joel shares how the WSE-3 differs from other AI hardware solutions, such as GPUs, TPUs, and AWS' Inferentia, and talks through the homogeneous design of the WSE chip and its memory architecture. We discuss software support for the platform, including support for open-source ML frameworks like PyTorch and for different types of transformer-based models. Finally, Joel shares some of the research his team is pursuing to take advantage of the hardware's unique characteristics, including weight-sparse training, optimizers that leverage higher-order statistics, and more. The complete show notes for this episode can be found at twimlai.com/go/684.