Over the past decade, artificial intelligence has made remarkable strides, and natural language processing (NLP) has become one of its top applications, powering assistants from Siri and Alexa to Cortana and Google Assistant. Now Nvidia is entering the field: its new AI-powered speech and language technology is slated to strengthen voice-assisted search in Microsoft Bing and a new generation of chatbots. With this move, chatbot systems that respond more like a real human come within reach.
According to the company, the new system could power chatbots that converse more naturally than existing AI systems. NVIDIA says its platform has been optimised to run queries against massive datasets. Until now, chatbots, intelligent personal assistants, and search engines have struggled to operate with human-level comprehension because very large AI models could not be run in real time. To overcome this challenge, the GPU maker added key optimisations to its AI platform that deliver complete AI inference in just over two milliseconds.
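To make the real-time constraint concrete, here is a minimal sketch of how one might check whether a model's inference latency fits a fixed budget. The `dummy_model` function and the 10 ms budget are illustrative assumptions, not details from NVIDIA's platform; the article's point is that real systems must answer each query within a few milliseconds.

```python
import time

# Hypothetical latency budget for "real-time" conversational AI (assumption).
LATENCY_BUDGET_MS = 10.0

def dummy_model(query: str) -> str:
    # Placeholder for real inference work on an actual language model.
    return query.upper()

def p95_latency_ms(fn, query: str, runs: int = 200) -> float:
    """Measure the 95th-percentile latency of fn(query) over several runs."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(query)
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return samples[int(0.95 * len(samples)) - 1]

latency = p95_latency_ms(dummy_model, "what is conversational ai?")
print(f"p95 latency: {latency:.3f} ms, within budget: {latency <= LATENCY_BUDGET_MS}")
```

A production system would measure latency on the real model and hardware, but the budget-versus-percentile framing is the same.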
Bryan Catanzaro, vice president of applied deep learning research at Nvidia, said, “Large language models are revolutionising AI for natural language. They are helping us solve exceptionally difficult language problems, bringing us closer to the goal of truly conversational AI.”
With this latest move, NVIDIA’s platform becomes the first to train one of the most advanced AI language models, BERT, within an hour and to complete AI inference in just over two milliseconds. This level of performance makes it possible for developers to bring state-of-the-art language understanding to large-scale applications that can reach hundreds of millions of consumers globally.
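A key ingredient of BERT-style language understanding is subword tokenization, which lets a fixed vocabulary cover rare words. The following is a minimal sketch of greedy longest-match WordPiece-style splitting, the scheme BERT uses; the tiny vocabulary here is hypothetical, chosen only to make the examples work.

```python
# Hypothetical toy vocabulary; real BERT vocabularies hold ~30,000 pieces.
VOCAB = {"play", "##ing", "##ed", "the", "conversation", "##al"}

def wordpiece(word, vocab=VOCAB, unk="[UNK]"):
    """Split one word into subword tokens by greedy longest match."""
    tokens, start = [], 0
    while start < len(word):
        end, piece = len(word), None
        while start < end:
            candidate = word[start:end]
            if start > 0:
                candidate = "##" + candidate  # mark continuation pieces
            if candidate in vocab:
                piece = candidate
                break
            end -= 1  # shrink the candidate until it matches
        if piece is None:
            return [unk]  # no piece matched: the whole word is unknown
        tokens.append(piece)
        start = end
    return tokens

print(wordpiece("playing"))         # ['play', '##ing']
print(wordpiece("conversational"))  # ['conversation', '##al']
```

Tokenization is only the front end, of course; the expensive part that NVIDIA optimised is running the transformer layers that follow it.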
Microsoft is among the earliest adopters of NVIDIA’s performance advances, and some of the world’s most innovative startups are also harnessing the company’s platform to build highly intuitive, immediately responsive language-based services for their customers.
This rate of advancement in AI research is putting the power of language understanding and conversational interfaces into the hands of developers, enabling data scientists to build custom AI models that work much like voice assistants such as Alexa and Siri. Professionals such as doctors and lawyers could interact with expert agents that understand both the terminology and the context of a conversation, and this new kind of user experience is likely to become part of line-of-business applications in the years to come.