Nvidia’s Chat with RTX: A Local, Fast, Custom Generative AI


Nvidia has unveiled Chat with RTX, a generative AI chatbot designed for Windows PCs. The application runs entirely on the user's machine, letting enterprises harness AI capabilities directly on employees' hardware and boost productivity without relying on external platforms. Users can personalize the chatbot by pointing its large language models (LLMs) at their own data sources while keeping that data on the PC, and can quickly search for answers without sharing anything with third parties or requiring an internet connection.

According to Jesse Clayton, an Nvidia product manager, Chat with RTX runs locally on Windows RTX PCs and workstations, ensuring fast results while maintaining data privacy. Unlike cloud-based LLM services, this approach enables users to process sensitive data on their local PC, avoiding the need to share it externally.

The application supports two open-source LLMs, Mistral and Llama 2, and requires an Nvidia GeForce RTX 30 Series GPU or newer with at least 8GB of video RAM. It runs on Windows 10 or 11 with the latest Nvidia GPU drivers, combining retrieval-augmented generation (RAG), Nvidia TensorRT-LLM software, and Nvidia RTX acceleration.
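The RAG pattern mentioned above works by retrieving the most relevant local document for a query and passing it to the LLM as context. The following is a minimal, illustrative sketch of that pattern; the function names and word-overlap scoring are assumptions for demonstration, not Nvidia's actual implementation, which uses learned embeddings and TensorRT-LLM.

```python
# Toy retrieval-augmented generation (RAG) sketch: score each local
# document by word overlap with the query, then build an LLM prompt
# that includes the best match as context. Illustrative only.

def tokenize(text):
    """Lowercase and split text into a set of words."""
    return set(text.lower().split())

def retrieve(query, documents):
    """Return the document sharing the most words with the query."""
    q = tokenize(query)
    return max(documents, key=lambda d: len(q & tokenize(d)))

def build_prompt(query, documents):
    """Assemble a context-augmented prompt for a local LLM."""
    context = retrieve(query, documents)
    return f"Context: {context}\nQuestion: {query}\nAnswer:"

docs = [
    "My partner recommended the restaurant Lotus of Siam in Las Vegas.",
    "Meeting notes: quarterly budget review scheduled for March.",
]
prompt = build_prompt("What restaurant was recommended in Las Vegas?", docs)
```

A production system would replace word overlap with vector embeddings and feed the prompt to the local Mistral or Llama 2 model, but the retrieve-then-generate flow is the same.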

Chat with RTX allows users to type queries instead of searching through notes or saved content. For instance, a user can ask, "What was the restaurant my partner recommended while in Las Vegas?" The chatbot then examines the local files the user has specified and delivers contextually relevant answers.

This local, personalized AI aligns with Nvidia's strategy to position itself as a key supplier of hardware and software for rapidly evolving genAI technology. By making AI accessible across various platforms, from cloud to edge computing, Nvidia aims to democratize the technology. The company sees itself contributing to the evolution of genAI, with applications like Chat with RTX addressing privacy concerns associated with AI-based chatbots.

The application supports various file formats, including .txt, .pdf, .doc/.docx, and .xml. Users can augment the chatbot's library simply by pointing the application at a folder containing their files. Additionally, users can provide a YouTube playlist URL, and Chat with RTX will load transcriptions of the videos in the playlist, enabling queries based on the content they cover.
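Pointing an indexer at a folder, as described above, amounts to recursively collecting files whose extensions match the supported set. This sketch illustrates that step; the helper name and behavior are assumptions for illustration, not Chat with RTX's actual API.

```python
# Illustrative folder scan: gather only the file types the article
# lists as supported, recursing into subdirectories.
from pathlib import Path

SUPPORTED = {".txt", ".pdf", ".doc", ".docx", ".xml"}

def collect_sources(folder):
    """Return sorted paths of supported files under `folder`."""
    return sorted(
        p for p in Path(folder).rglob("*")
        if p.is_file() and p.suffix.lower() in SUPPORTED
    )
```

Each collected file would then be parsed (and, for YouTube playlists, each video's transcript fetched) before being indexed for retrieval.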

Chat with RTX aims to provide users with their personal AI assistant, enhancing productivity while addressing privacy and security concerns. The local execution of the chatbot reduces the risk of exposing sensitive information externally, making it an attractive solution for enterprises concerned about data privacy.


As genAI-based applications gain popularity, concerns about security and privacy continue to rise. Localized solutions like Chat with RTX present a potential way forward, allowing organizations to benefit from AI capabilities while maintaining control over sensitive data in a secure, private environment.

