At its recent Google I/O keynote, Google demonstrated the latest version of its voice assistant, which will roll out later in 2019. The company also said it is working to expand the Assistant's understanding of personal references. Google said the next-generation Assistant will debut on new Pixel phones that arrive after the current Pixel 3 line later this year. The company is clearly accelerating toward its goal of a virtual assistant that can handle complex tasks efficiently by voice.
Google is also improving its machine learning software so that both its own applications and third-party software can employ better ML techniques. In particular, the company is making a major push to shift ML operations from the cloud onto users' mobile devices, which lets ML-centric applications run faster and keep data more private.
Google I/O has focused on machine learning for the last three years. The company believes that its team of ML experts, vast amounts of data, and its own custom silicon place it well to seize the opportunities machine learning presents.
Google has already shipped a range of ML-powered products: Android offers voice recognition and Google Assistant, and Google Photos has an ML-based search function. Last year Google launched Duplex, software that can make a reservation on a customer's behalf.
Google Is Now Pushing Its ML Work Further in Two Areas:
Shifting ML Activity onto Smartphones
Google's efforts over the years have made its neural networks more accurate, deeper, and more complex. While this approach has produced impressive results, the resulting networks are often too large to run on mobile devices.
Therefore, the tech giant is working to shift much of that computation on-device. Basic on-device voice recognition already underpins Android, but the company's virtual assistant still needs an internet connection. Google said a new offline mode of Google Assistant will arrive in late 2019, and that the feature will be up to ten times faster for certain tasks.
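To give a feel for why smaller models can run on a phone: one common technique for shrinking a trained network, used across the industry, is post-training quantization, where 32-bit float weights are stored as 8-bit integers plus a scale factor. This is an illustrative sketch in plain Python, not Google's actual pipeline:

```python
def quantize_int8(weights):
    """Symmetric post-training quantization: store float weights as int8.

    Storage drops from 4 bytes to 1 byte per weight; the scale factor
    lets the on-device runtime recover approximate float values.
    """
    scale = max(abs(w) for w in weights) / 127.0
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float weights on device."""
    return [v * scale for v in quantized]

# Example: four weights shrink to four int8 values plus one scale.
weights = [0.5, -1.27, 0.03, 1.0]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
```

Real toolchains add calibration and per-layer scales, but the trade-off is the same: a small loss of precision in exchange for a model roughly a quarter of the size, which is what makes offline, on-device inference practical.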
Using ML to Help Disabled/Disadvantaged People
The Google I/O keynote also covered the ways ML technology can aid a range of disadvantaged people, including those who are deaf, those who cannot read, cancer patients, and others.
Google Translate already lets users translate text into another language; with a new advancement, it will also be able to read the text aloud, in either the original or the translated language.
• Google recently launched the Live Transcribe app, which provides real-time captions of conversations for people who are hard of hearing.
• A new feature called Live Caption lets Android users see real-time transcriptions of any audio playing on the phone.
• Another feature, Live Relay, lets deaf users treat a phone call like a text chat, with the caller's words transcribed as chat messages in real time.
• The company is also exploring how machine learning can help people with degenerative conditions that prevent them from speaking altogether.
Google's mission is to organize the world's information and make it accessible to as many people as possible. The company is clearly signaling that its ML-centric projects, spanning new chips, algorithms, and platforms, are just getting started. These technologies still have plenty of room to grow, and Google believes the right blend of talent, resources, and technology can produce marvels for the world.