
Google has introduced two new AI-powered features in Google Translate, a service people use to translate around 1 trillion words across Translate, Search, and visual translations in Lens and Circle to Search. The new features, backed by advanced reasoning and multimodal capabilities, aim to help users with live conversations and language learning.
Google said it has built upon the existing live conversation experience, introducing the ability to hold a real-time, back-and-forth conversation with audio and on-screen translations in the Translate app. This will help users connect meaningfully with people from across cultures.
The company’s advanced AI models now make it even easier to hold a live conversation in over 70 languages, including Hindi, Arabic, French, Korean, Spanish, and Tamil. The new live translation capabilities are available in the US, India, and Mexico.
Users hear the translation aloud and also see a transcript of the conversation in both languages on their device. The feature switches seamlessly between the two languages, making conversations feel natural.
According to Google, Translate has also been designed to identify conversational pauses, accents, and intonations. This is possible because Translate’s live capabilities are backed by Google’s advanced voice and speech recognition models, which are trained to isolate sounds.
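The back-and-forth flow described above — detect which language is being spoken, translate the turn, speak the result aloud, and keep a transcript in both languages — can be sketched as a simple loop. Everything in the sketch below is a hypothetical illustration: `detect_language` and `translate` are toy stand-ins for the speech-recognition and translation models the article describes, not Google APIs.

```python
# Conceptual sketch of an auto-switching, two-way live-translation loop.
# The phrase table and both helper functions are invented for illustration;
# a real system would use speech-recognition and translation models.

TOY_PHRASES = {
    "hello": ("en", "hola"),
    "hola": ("es", "hello"),
    "gracias": ("es", "thank you"),
}

def detect_language(utterance: str) -> str:
    """Toy language detector: looks the phrase up in a tiny table."""
    lang, _ = TOY_PHRASES.get(utterance.lower(), ("en", utterance))
    return lang

def translate(utterance: str) -> str:
    """Toy translator: returns the paired phrase from the table."""
    _, translated = TOY_PHRASES.get(utterance.lower(), ("en", utterance))
    return translated

def live_conversation(utterances):
    """Each turn is language-detected, translated, and appended to a
    transcript that keeps both languages, mirroring the on-screen
    transcript the article describes."""
    transcript = []
    for utterance in utterances:
        lang = detect_language(utterance)   # which speaker is talking?
        translated = translate(utterance)   # render into the other language
        transcript.append((lang, utterance, translated))
        print(f"[{lang}] {utterance} -> {translated}")  # stand-in for audio playback
    return transcript

transcript = live_conversation(["Hello", "Hola", "Gracias"])
```

The key design point the article highlights is that neither speaker has to tap a button to switch languages: detection happens per turn, so the conversation alternates naturally.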
Google is also piloting a new language practice feature designed to meet users’ individual learning needs. Whether they are early learners, people looking to practise conversation, or advanced speakers brushing up their vocabulary, users can now turn to Translate for tailored listening and speaking practice sessions.
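Conceptually, a session tailored to a learner's level might pick exercises from level-appropriate pools. The sketch below is purely illustrative — the levels and exercise names are invented, and this is not how Google's pilot feature is implemented:

```python
# Hypothetical sketch: selecting listening/speaking exercises by level.
# Levels and exercise pools are invented for illustration only.

EXERCISES = {
    "beginner": ["listen: greetings", "speak: introduce yourself"],
    "intermediate": ["listen: short dialogue", "speak: describe your day"],
    "advanced": ["listen: news clip", "speak: debate a topic"],
}

def practice_session(level: str, count: int = 2):
    """Return up to `count` exercises tailored to the learner's level,
    falling back to beginner material for unknown levels."""
    pool = EXERCISES.get(level, EXERCISES["beginner"])
    return pool[:count]

print(practice_session("intermediate"))
```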