NVIDIA's experimental ChatRTX chatbot now supports a wider range of AI models for RTX GPU owners. It previously relied on Mistral or Llama 2 for document analysis; the update adds Google's Gemma, ChatGLM3, and OpenAI's CLIP, the last of which enables faster photo search. The app, formerly called "Chat with RTX," requires an RTX 30- or 40-series GPU with at least 8GB of VRAM.
The demo app works by setting up a local chatbot server that users access through a browser. Users can feed it their local documents and even YouTube videos, turning it into a powerful search tool that provides summaries and answers questions based on their personal data.
The integration of Google's Gemma model into ChatRTX is particularly significant. The app, designed to run directly on high-performance laptops or desktop PCs, simplifies the process of running these models locally, presenting a chatbot interface that lets users pick whichever model best fits the data they want to analyze or search.
Available for download from NVIDIA's website as a 36GB file, ChatRTX now also supports ChatGLM3, an open bilingual (English and Chinese) large language model. NVIDIA has also incorporated OpenAI's Contrastive Language-Image Pre-training (CLIP) model, which lets users search and interact with their local photos by content, since CLIP already recognizes what images depict without requiring manual labeling.
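The idea behind CLIP-based photo search is that the model maps both images and text queries into a single shared embedding space, so a search reduces to ranking photos by similarity to the query vector. The sketch below illustrates that ranking step with hypothetical, hand-made embeddings (filenames and vectors are invented for illustration; a real system would obtain them from CLIP's image and text encoders):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Mock embeddings; in practice these would come from CLIP's image encoder.
photo_embeddings = {
    "beach.jpg":    [0.9, 0.1, 0.0],
    "mountain.jpg": [0.1, 0.9, 0.1],
    "city.jpg":     [0.0, 0.2, 0.9],
}

# Mock text-query embedding, e.g. for "sunny beach", from CLIP's text encoder.
query_embedding = [0.8, 0.2, 0.1]

# Rank photos by similarity to the query; the best match comes first.
ranked = sorted(
    photo_embeddings.items(),
    key=lambda item: cosine_similarity(query_embedding, item[1]),
    reverse=True,
)
print(ranked[0][0])  # → beach.jpg
```

Because the embeddings live in one space, the same ranking logic works whether the query is text or another image, which is what makes CLIP suited to the kind of local photo search ChatRTX describes.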
In its most recent update, NVIDIA has enhanced ChatRTX with voice query support by integrating Whisper, an AI-based speech recognition system. This new feature allows users to search their data using their voice, adding a new level of convenience and functionality to the chatbot.