Machine Translation (MT) has come a long way since its birth in the 1950s, and over the decades different theories and practices have come and gone. Most recently, there's been quite a bit of talk about Neural Machine Translation (NMT), a new method that uses deep learning to translate foreign-language text.
Google started using NMT late last year and lauded the move as a big step toward better MT quality. NMT produces translations that are more accurate than its predecessor, Phrase-Based Machine Translation (PBMT), thanks to its ability to consider complete sentences at a time rather than translating phrase by phrase.
PBMT is one form of Statistical MT, an approach that has dominated the field since the late 1980s. Before NMT came along, Statistical MT was the method translation practitioners and researchers were most interested in.
Statistical MT uses predictive algorithms to teach a computer how to translate text. Its models are created, or learned, from parallel bilingual text corpora and used to produce the most probable output, based on the bilingual examples seen during training.
Using this already-translated text, a statistical model predicts how to translate new foreign-language text. Statistical MT has different subgroups, including word-based, phrase-based, syntax-based and hierarchical phrase-based systems.
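To make "learning from bilingual examples" concrete, here is a minimal sketch of the core idea behind a phrase table: translation probabilities are estimated by relative frequency over phrase pairs extracted from a parallel corpus. The phrase pairs below are hypothetical toy data, not from any real corpus.

```python
from collections import Counter

# Hypothetical aligned phrase pairs extracted from a parallel corpus.
phrase_pairs = [
    ("la maison", "the house"),
    ("la maison", "the house"),
    ("la maison", "the home"),
    ("maison bleue", "blue house"),
]

# Count co-occurrences and estimate P(target | source) by
# relative frequency -- the heart of a phrase table.
pair_counts = Counter(phrase_pairs)
source_totals = Counter(src for src, _ in phrase_pairs)

phrase_table = {
    (src, tgt): n / source_totals[src]
    for (src, tgt), n in pair_counts.items()
}

# "The most probable output": pick the highest-scoring candidate.
best = max(
    (tgt for (src, tgt) in phrase_table if src == "la maison"),
    key=lambda tgt: phrase_table[("la maison", tgt)],
)
print(best)  # -> the house (probability 2/3)
```

Real systems combine many such probabilities with a language model and reordering scores, but the counting step above is the statistical core.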
The benefit of Statistical MT is its automation. One drawback is that the system needs bilingual material to work from, and such content can be hard to find for less common languages. Unlike rule-based MT, which relies on hand-crafted linguistic rules, Statistical MT is data-driven: it derives its own translation segments from corpora of existing translations.
NMT is the newest method of MT and is said to produce much more accurate translations than Statistical MT. It is loosely modeled on the neural networks of the human brain, with information passing through successive "layers" of processing before an output is produced.
NMT uses deep learning techniques to learn how to translate from the same kind of parallel bilingual text, inferring linguistic patterns on its own rather than relying on separately engineered components. It can translate faster than the statistical method, and its biggest benefits are this speed and the quality of its output.
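The "layers" idea above can be sketched in a few lines: an input representation is transformed by each layer in turn, and the output layer is converted into a probability distribution over possible target words. All dimensions and weights here are hypothetical toy values for illustration; a real NMT system learns its weights from bilingual training data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: a 4-dim input passed through two 8-unit hidden
# layers to a 5-word output "vocabulary" (all hypothetical).
x = rng.normal(size=(4,))        # input representation
W1 = rng.normal(size=(4, 8))     # first layer weights
W2 = rng.normal(size=(8, 8))     # second layer weights
W_out = rng.normal(size=(8, 5))  # output layer weights

def layer(h, W):
    # Each layer transforms its input and applies a
    # nonlinearity before passing the result onward.
    return np.tanh(h @ W)

h1 = layer(x, W1)
h2 = layer(h1, W2)
logits = h2 @ W_out

# Softmax turns the output layer into a probability
# distribution over the toy target vocabulary.
probs = np.exp(logits) / np.exp(logits).sum()
print(probs.round(3))
```

Training adjusts the weight matrices so that the probabilities assigned to correct translations rise, which is how the network "teaches itself" without hand-written rules.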
NMT is said by many to be the way of the future, and the process will no doubt continue to advance in its capabilities.
MT can also be done using a “hybrid” method that combines both techniques to create a desired result. The efficacy of each depends on a number of factors, including the languages used and available linguistic resources, or example text.
Even though it's been around for quite some time, MT is still a maturing technology. Advances in the field have been impressive, but using MT as a sole means of translation isn't yet an option: whatever method is used, the margin for error is still too large to depend on MT for documents that will be published or used externally.
With that said, MT does provide a “gist” translation and acts as a great way to decipher documents quickly and cost-efficiently. Knowing how to use it effectively depends on project scope, cost and end goals.
ULG will be hosting Machine Translation 101, a webinar on MT’s best practices on Feb. 16. You can register here.