"If you talk to a man in a language he understands, that goes to his head. If you talk to him in his own language, that goes to his heart." - Nelson Mandela
Language, as the saying goes, can build bridges and mend hearts; the words just need to be spoken. Before the rise of modern technology, understanding or learning a new language was hard for most people. But as technology advanced, the language barrier between people, and across oceans, began to thin.
What significance does language have in the tech world?
However, even technology has its limitations when it comes to language. Many programs and software products exist only in the most universally acknowledged language, which is usually English. That makes them difficult to use for the many people who do not speak English, especially when operating AI-integrated technology.
Well, good news is on its way for people across the world who want to operate AI technology in their regional or national language but have been held back by language barriers.
Google Taking the AI Leap to Break Language Barriers
AI and language have always been at the heart of Google's products. However, recent advances in machine learning, particularly the development of multi-functional "large language models" (LLMs), have placed a newfound emphasis on these domains.
What are Google’s early plans?
As per Interviews and Reports
“By having a single model that is exposed to and trained on many different languages, we get much better performance on our low-resource languages,” Zoubin Ghahramani, VP of Research at Google, told The Verge in an interview.
He added, “The way we get to 1,000 languages is not by building 1,000 different models. Languages are like organisms, they’ve evolved from one another and they have certain similarities. And we can find some pretty spectacular advances in what we call zero-shot learning when we incorporate data from a new language into our 1,000 language model and get the ability to translate [what it’s learned] from a high-resource language to a low-resource language.”
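The intuition behind this kind of cross-lingual transfer can be sketched with a toy example. The snippet below is purely an illustration of the idea that related languages overlap, not Google's actual method: it builds a character-trigram "vocabulary" from a high-resource language (Spanish here) and shows that a closely related language (Portuguese) is already largely covered by it, while an unrelated one (Finnish) is not.

```python
# Toy illustration (not Google's actual training method): a shared
# character-trigram vocabulary built from one high-resource language
# already covers much of a related low-resource language, hinting at
# why a single multilingual model transfers between similar languages.

def trigrams(text):
    """Return the set of overlapping 3-character windows in a string."""
    return {text[i:i + 3] for i in range(len(text) - 2)}

# Pretend Spanish is our high-resource training language.
high_resource_corpus = [
    "la casa es grande",
    "el gato come pescado",
    "buenos dias amigo",
]
vocab = set()
for sentence in high_resource_corpus:
    vocab |= trigrams(sentence)

def coverage(sentence):
    """Fraction of a sentence's trigrams already in the shared vocabulary."""
    grams = trigrams(sentence)
    return len(grams & vocab) / len(grams)

# Portuguese is closely related to Spanish; Finnish is not.
portuguese = "a casa e grande"
finnish = "talo on suuri"

print(f"Portuguese coverage: {coverage(portuguese):.2f}")
print(f"Finnish coverage:    {coverage(finnish):.2f}")
```

Running this shows the Portuguese sentence reuses most of the Spanish trigrams while the Finnish one reuses almost none, a crude stand-in for the representational sharing that makes zero-shot transfer between related languages possible.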
What’s more to it?
Recent research has shown that this approach scales well, and the sheer size of Google's model could offer considerable gains over past work.
Large-scale, ambitious projects like this have become typical of tech companies racing to dominate AI research, and they draw on these firms' unique advantages, such as access to vast computing power and training data.
A comparable project is the ongoing research by Meta, Facebook's parent company, to develop a "universal speech translator."
Some Issues to Deal with
Putting together a model at this scale will not be without its problems for Google.
The Plan Further
Google says it has no definitive plans yet for where this model's functionality will be applied. For now, it expects the AI model to yield improvements across a range of Google products, from Google Translate to YouTube captions and beyond.
“One of the really interesting things about large language models and language research in general is that they can do lots and lots of different tasks,” Ghahramani said in another interview. “The same language model can turn commands for a robot into code; it can solve math problems; it can do translation. The really interesting thing about language models is that they’re becoming repositories of a lot of knowledge, and by probing them in different ways you can get to different bits of useful functionality.”
All in all, tech and language experts have yet to test Google's ambitious project and give it their nod.
Will it really make AI language-friendly to the extent that it brings the tech world closer together?