#165 Andy Hock: Will AI Revolutionize How We Do Business?

PODCAST:Eye On A.I.
TITLE:#165 Andy Hock: Will AI Revolutionize How We Do Business?
DATE:2024-01-17 00:00:00
URL:
MODEL:gpt-4-gizmo


In episode 165 of the podcast "Eye on AI," host Craig Smith interviews Andy Hock, Head of Product at Cerebras Systems, about the company's wafer-scale chip technology. The episode, titled "Will AI Revolutionize How We Do Business?", aired on January 17, 2024, and explores how Cerebras' technology is influencing the AI and machine learning sectors.

Key points from the interview include:

  1. Cerebras' Wafer-Scale Engine: Cerebras' processor is unique, designed from the ground up specifically for AI, differentiating it from traditional GPUs and CPUs. It features 850,000 cores, 40 gigabytes of on-chip memory, and all cores are interconnected directly over silicon. This design allows it to function like a cluster of AI compute on one device.

  2. Advantages Over Traditional GPUs: Unlike GPUs, the Cerebras chip is optimized for the sparse, tensor-based linear algebra operations at the heart of AI workloads. Its architecture supports both training and inference, with a focus on large-scale AI models. Because large models can run on a single machine, programming is simpler and the inefficiencies of distributed GPU systems are avoided.

  3. Efficient AI Model Training: Cerebras' technology enables significantly faster training of large models such as GPT-style language models. Its simple programming and distribution model, combined with high compute efficiency, accelerates AI research and development.

  4. Cerebras' Business and Deployment Model: Cerebras offers its technology both directly and through cloud partnerships, catering to a range of customers from traditional supercomputing centers to cloud-based users. Customers retain ownership of their data and trained models, offering flexibility in application and further development.

  5. Partnerships and Market Reach: Cerebras is engaged with various organizations, including government labs and commercial enterprises, to provide AI compute resources. Their partnership with G42 in the UAE is notable for developing large state-of-the-art language models and applications in various fields such as medicine and climate science.

  6. Future of AI Compute Infrastructure: Hock envisions a future where data centers will have heterogeneous compute infrastructures, combining various types of accelerators to handle a diverse range of AI and HPC workloads efficiently.

  7. National AI Infrastructure: The conversation also touches on the importance of national AI infrastructure. Cerebras is in discussions with various governments, including the United States, about building national compute resources to power AI-driven economic and societal advancements.

  8. Sponsorship: The episode is sponsored by Babbel, a language-learning app that provides 10-minute lessons designed by language experts.

In summary, the episode offers an in-depth look at how Cerebras' wafer-scale engine technology is reshaping AI and machine learning, enabling faster, more efficient model training and influencing the development of AI infrastructure on a global scale.