
AMD will collaborate with Rapt AI, a maker of GPU workload management software, to improve AI model training and inference on AMD's Instinct GPUs. The companies will integrate Rapt AI's workload automation platform with AMD's GPUs. The Rapt AI software will help enterprises allocate resources, manage GPUs, and resolve performance bottlenecks, giving customers a scalable, cost-effective way to deploy AI applications. Future work between the companies will also target memory utilization.
While AI adoption continues at a rapid pace, infrastructure challenges are holding back innovation: high demand for compute resources, the complexity of GPU management, and costly inefficiencies.
Key highlights include:
· Cost-Effective AI: Combines AMD Instinct GPUs with Rapt AI’s intelligent workload automation to maximize GPU utilization, reduce TCO, and optimize resource allocation for AI inference and training.
· Simplified Deployment & Scalability: Streamlines GPU management across on-premises and multi-cloud environments, enabling seamless AI deployment and increased inference performance through intelligent job density and resource allocation.
· Optimized & Future-Ready: Provides out-of-the-box performance benefits with AMD Instinct GPUs and ongoing collaboration for future optimizations in GPU scheduling, memory utilization and more.
Rapt AI software supports AMD and Nvidia GPUs in private or cloud data centers. It also supports the tensor processing units (TPUs) available on Google Cloud and AWS's Trainium AI accelerators. A TPU is an application-specific integrated circuit, or ASIC, designed specifically for the high-volume mathematical and logical processing that AI applications demand.