
How Does Earpiece Translation Technology Learn?

The technology behind audio translation is not new: Skype launched its ‘real-time’ translator back in 2014, though it featured a slight delay as it translated the speech, and made occasional mistakes as it struggled to pick up the words and phrases used. But the technology has grown significantly, and earlier this year Skype added Japanese to its collection of real-time spoken languages, bringing the total up to 10, alongside the 60 instant-messaging languages it already supports.

Now, companies are trying to use that audio translation technology to build earpieces that can recognise and translate languages in real time, such as Waverly Labs’ Pilot earpiece. Here, we look at how this learning takes place, and what the effect earpiece translation will have on the translation and interpretation industry as a whole.

How earpiece translation technology works

Using a series of complex algorithms, machines are able to ‘learn’ common phrases spoken in a variety of languages and identify the closest translated phrase. The simplest approach is to translate the sequence word for word, but this ignores the grammar and context of the sentence, which can produce a very different end result. To combat this, the machine uses a deep-learning technique known as sequence-to-sequence learning to translate a string of words and phrases together.
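A toy example makes the difference concrete. The snippet below (an illustrative sketch only, with invented lookup tables rather than a real translation model) contrasts word-for-word lookup with a phrase-aware lookup for the French phrase “chat noir”:

```python
# Illustrative sketch: why word-for-word translation fails where a
# phrase-aware approach succeeds. The dictionaries are invented for
# this demo and are not part of any real translation system.

WORD_LOOKUP = {"chat": "cat", "noir": "black"}

# Translating the whole phrase allows the noun/adjective order to change.
PHRASE_LOOKUP = {("chat", "noir"): "black cat"}

def word_for_word(words):
    """Translate each word independently, ignoring grammar and word order."""
    return " ".join(WORD_LOOKUP[w] for w in words)

def phrase_aware(words):
    """Translate the sequence as a whole, so word order can be corrected."""
    return PHRASE_LOOKUP[tuple(words)]

print(word_for_word(["chat", "noir"]))  # "cat black" -- wrong order
print(phrase_aware(["chat", "noir"]))   # "black cat" -- correct English
```

A real sequence-to-sequence system learns this reordering statistically from large parallel corpora rather than from hand-written tables, but the principle is the same: the unit of translation is the sequence, not the word.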

Using this ‘deep learning’, machines are also able to work out the probability of the next word in a sentence or phrase after seeing just the first few words. They can also swap the order of nouns and adjectives so that a phrase follows the rules of, and makes sense in, the target language. More rules are added to the algorithm as more languages are introduced for the machines to translate.
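Next-word prediction can be sketched with a simple bigram model: count how often each word follows another in a corpus, then turn the counts into probabilities. This is a minimal stand-in for what a neural model does at much larger scale; the three-sentence corpus here is invented for the demo.

```python
from collections import Counter, defaultdict

# Toy corpus; a real system would train on millions of sentences.
corpus = [
    "where is the station",
    "where is the hotel",
    "where is my bag",
]

# Count bigrams: how often each word follows the previous one.
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        following[a][b] += 1

def next_word_probs(word):
    """Estimate P(next word | current word) from the bigram counts."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("is"))  # {'the': 0.666..., 'my': 0.333...}
```

Having seen “where is”, the model already knows “the” is the most likely next word, which is exactly the kind of early prediction that lets real-time translators keep latency low.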

Once the algorithms have been developed, the technology can be worked into different machines to be used for translation. For example, Siri was recently updated to be able to translate directly back to the user (Siri had previously pulled up relevant external links). In the case of the Pilot earpiece, the technology is simply placed inside an earpiece which is linked via Bluetooth to a smartphone. With the help of the Pilot app, users can speak to each other and hear the translations via their earpieces.
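The flow described above can be sketched as a three-stage pipeline: speech recognition, translation, then speech synthesis. In the sketch below, all three stage functions are placeholders standing in for real speech-to-text, machine-translation and text-to-speech services; the function names and the phrasebook are invented for illustration, not taken from the Pilot app.

```python
# Minimal sketch of the earpiece translation pipeline (all stages are
# placeholders; a real device would call recognition, translation and
# synthesis services over the Bluetooth-linked smartphone app).

def speech_to_text(audio):
    # Placeholder: pretend the audio has already been recognised.
    return audio["recognised_text"]

def translate(text, target_lang):
    # Placeholder lookup; a real system would use a trained model.
    phrasebook = {("bonjour", "en"): "hello"}
    return phrasebook.get((text, target_lang), text)

def text_to_speech(text):
    # Placeholder: return the text that would be synthesised aloud.
    return f"[spoken] {text}"

def pipeline(audio, target_lang):
    """Capture speech, translate it, and speak the result back."""
    return text_to_speech(translate(speech_to_text(audio), target_lang))

print(pipeline({"recognised_text": "bonjour"}, "en"))  # "[spoken] hello"
```

Each stage introduces its own latency and error rate, which is why early systems like Skype’s 2014 translator lagged noticeably behind the speaker.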

The effect of earpiece translation on the industry

News of the Pilot earpiece first broke last year, after Waverly Labs secured over $4 million in funding from supporters on Indiegogo. Since then, the company has been collecting pre-orders of the earpieces, ready to be shipped later this year. Written machine translations have previously been criticised for failing to take into account non-verbal communication, which by some estimates makes up as much as 93 per cent of a conversation. Being able to have a face-to-face conversation with someone through the Pilot app mitigates this issue, and helps build more meaningful relationships.

While there has been a buzz around the technology and the earpieces, critics have already noted drawbacks to relying on machines to interpret conversations. Perhaps the biggest criticism is that machines may not always take dialects into consideration, which can lead to incorrect translations. The technology also struggles to keep up with the development and evolution of language, and may not pick up new words. Human translators and interpreters, by contrast, are experts in the various dialects of their chosen language and can respond immediately to any new phrases that crop up.

For those needing an interpreter for important business decisions, it’s recommended to stick with human interpreters rather than machine translation technology. However, machines can be helpful for anyone looking to pick up a few small phrases while on holiday.


Simon Davies is a London-based freelance writer with an interest in startup culture, issues and solutions.
