Nutanix announced the Nutanix GPT-in-a-Box solution for customers looking to jump-start their artificial intelligence (AI) and machine learning (ML) innovation while maintaining control over their data. The new offering is a full-stack, software-defined, AI-ready platform, paired with services to help organizations size and configure the hardware and software infrastructure needed to deploy a curated set of large language models (LLMs) using leading open source AI and MLOps frameworks on the Nutanix Cloud Platform. It allows customers to easily procure AI-ready infrastructure to fine-tune and run generative pre-trained transformers (GPT), including LLMs, at the edge or in their datacenter.
Many enterprises are grappling with how to quickly, efficiently and securely take advantage of generative AI and AI/ML applications, especially for use cases that cannot run in the public cloud because of data sovereignty, governance and privacy concerns. New use cases emerge every day as organizations look to leverage generative AI to improve customer service, developer productivity, operational efficiency and more. From automated transcription of internal documents to high-speed search of multimedia content and automated analysis, many organizations see the opportunity in AI but are held back by growing concerns about intellectual property leakage, compliance and privacy. Additionally, organizations looking to build an AI-ready stack often struggle with how best to support ML administrators and data scientists, while the prospect of large AI investment costs has stalled many enterprises' AI and ML strategies.
The Nutanix GPT-in-a-Box solution delivers ready-to-use customer-controlled AI infrastructure for the edge or the core data center and allows customers to run and fine-tune AI and GPT models while maintaining control over their data. Nutanix provides a full complement of security and data protection offerings ideal for AI data protection.
“Helping customers tackle the biggest challenges they face in IT is at the core of what we do, from managing increasing multicloud complexity, to data protection challenges, and now adoption of generative AI solutions while keeping control over data privacy and compliance,” said Thomas Cornely, SVP, Product Management at Nutanix. “Nutanix GPT-in-a-Box is an opinionated AI-ready stack that aims to solve the key challenges with generative AI adoption and help jump-start AI innovation.”
This new solution includes:
* The industry-leading Nutanix Cloud Infrastructure platform, with the Nutanix Files Storage and Objects Storage solutions, the Nutanix AHV hypervisor and Kubernetes, along with NVIDIA GPU acceleration, which can be sized from small to large scale.
* Nutanix services to help customers size their cluster and deploy an opinionated stack with the leading open source deep learning and MLOps frameworks, inference server, and a curated set of large language models such as Llama2, Falcon and MPT.
* Ability for data scientists and ML administrators to immediately consume these models with their choice of applications, enhanced terminal UI, or standard CLI.
* The ability to run other GPT models on the platform, as well as fine-tune these models using internal data hosted on the included Nutanix Files or Objects Storage services.
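To make the consumption model above concrete, here is a minimal sketch of how a data scientist might call a hosted LLM from a script. It assumes the inference server exposes an OpenAI-compatible chat-completions API, a format many open source inference servers support; the endpoint URL, model name, and payload shape are illustrative assumptions, not confirmed details of the Nutanix offering.

```python
import json

# Hypothetical endpoint -- illustrative only, not the actual
# Nutanix GPT-in-a-Box interface.
INFERENCE_URL = "http://inference.example.internal/v1/chat/completions"

def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Build a chat-completion payload in the widely used
    OpenAI-compatible format (an assumed interface here)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("llama2-7b-chat",
                             "Summarize this internal document.")
# A client would POST this payload to INFERENCE_URL, e.g. with
# requests.post(INFERENCE_URL, json=payload); the network call is
# omitted so the sketch stays self-contained.
print(json.dumps(payload, indent=2))
```

The same payload could be sent from any application, notebook, or standard CLI tool such as curl, which matches the "choice of applications, enhanced terminal UI, or standard CLI" consumption described above.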
The Nutanix GPT-in-a-Box solution builds on the full-stack scalability, performance, resilience and ease of use that the Nutanix Cloud Platform is known for. Nutanix’s expertise with scalable infrastructure across public cloud, datacenter and edge use cases delivers an ideal environment to fine-tune and run AI applications while maintaining control over the data. In fact, in a recent survey, 78% of Nutanix customers indicated they were likely to run their AI/ML workloads on the Nutanix Cloud Platform.
Nutanix’s expertise and involvement in the open source AI community provide customers with a strong foundation on which to build their AI strategy. Key contributions include: participation in the MLCommons (AI standards) advisory board; co-founding and technical leadership in defining the ML Storage Benchmarks and Medicine Benchmarks; serving as a co-chair of the Kubeflow (MLOps) Training and AutoML working groups at the Cloud Native Computing Foundation (CNCF).