
Elon Musk has reportedly assembled a new supercomputer, Colossus, bringing 100,000 Nvidia GPUs online in just four months. In a tweet, Musk said the supercomputer, based in Memphis, Tennessee, is now online, completed 122 days after the city first announced the project in June. He also claimed that “Colossus is the most powerful AI training system in the world.” Each of those Nvidia GPUs costs around $30,000, implying Musk spent at least $3 billion to build the new supercomputer.
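As a quick sanity check on that figure, here is a back-of-envelope sketch; the roughly $30,000 unit price is the article's own estimate, and actual bulk pricing varies by volume and vendor.

```python
# Back-of-envelope GPU cost estimate for Colossus.
# Assumes ~$30,000 per H100, the per-unit price cited above;
# real negotiated bulk pricing is not public.
gpu_count = 100_000
price_per_gpu_usd = 30_000

total_usd = gpu_count * price_per_gpu_usd
print(f"Estimated GPU spend: ${total_usd / 1e9:.1f} billion")  # -> $3.0 billion
```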
The supercomputer was built from 100,000 Nvidia H100 GPUs, among the most sought-after chips for training new AI models. Running it will also require substantial electricity and cooling.
The Colossus supercomputer is set to grow over time. According to Musk’s tweet, the facility will “double in size to 200k” GPUs in the coming months, with 50,000 of the added chips being Nvidia H200 GPUs, which offer upgraded memory.
Musk built the facility for xAI, his latest venture focused on generative AI, including the contentious pro-free-speech chatbot Grok. By pooling so many GPUs, xAI aims to speed up the training of Grok and its other AI projects.
It's unlikely that Musk's supercomputer is the most powerful AI training system in the world, given that companies like Meta, Microsoft, and OpenAI are also buying up hundreds of thousands of Nvidia GPUs for their own AI projects. Even so, Colossus demonstrates how quickly the industry is standing up new AI training facilities. In a separate tweet, Musk noted that xAI had received forecasts suggesting it would take 12 to 18 months to get a supercomputer of this scale fully operational.