Only a handful of companies can afford to build and study advanced AI models, leaving independent and nonprofit researchers at a disadvantage.
Graphics processing units (GPUs) are essential for training and deploying AI models, but they are expensive. Big Tech companies such as Meta, Microsoft, and xAI have spent billions of dollars amassing hundreds of thousands, or even millions, of GPUs.
Many foundational concepts underpinning today's generative AI boom were first explored in academia.
Recent advances have come largely from scaling: building bigger models, training on more data, and using more compute.
Author's summary: California explores public GPU infrastructure for AI research.