For the past two years, Google Translate has been using machine learning to provide more accurate translations. But the algorithm is now turning phrases, and even random combinations of words that make no sense, into meaningful and somewhat scary messages.
The flaw caught the eye of Motherboard, which reports that Google Translate is converting nonsensical input into well-structured sentences. Examples include repeated strings of words like “dog” and “ag,” which Google Translate interprets as text in certain foreign languages.
While some people have attributed this to unearthly and demonic powers, a subreddit named “TranslateGate” suspects that the outputs could come from text Google learned by peeking into private messages and emails.
However, a Google spokesperson has denied this possibility, claiming, “Google Translate learns from examples of translations on the web and does not use ‘private messages’ to carry out translations, nor would the system even have access to that content. This is simply a function of inputting nonsense into the system, to which nonsense is generated.”
It is entirely possible that these random and vaguely striking outputs were planted by miscreants or disgruntled employees at Google. Alternatively, they could be the work of mischievous users abusing the “Suggest an edit” button. But such tampering would be unlikely to escape Google’s notice.
Experts, however, suggest that the behavior likely stems from the neural network trying to make sense of incoherent input — a known tendency of commercial neural networks to find order in chaos. It is likely that Google used religious texts — the Bible, in particular — to train its system on low-resource languages like Maori or Somali, since such texts are among the few that have been translated into those languages.
Andrew Rush, an assistant professor at Harvard, says, “The vast majority of these [black boxes] will look like human language, and when you give it a new one it is trained to produce something, at all costs, that also looks like human language.”