Saturday, December 02, 2006

Translation by machine

An article by Evan Ratliff in the December issue of Wired Magazine explores the problems and possibilities of computer translation and how Meaningful Machines is attempting to create a computer program that can navigate human languages with all their idioms and multiple meanings. An excerpt:

From its genesis at the post-World War II dawn of computing – when ambitious researchers believed it would take only a few years to crack the language problem – until the late 1980s, machine translation, or MT, consisted almost entirely of what are known as rule-based systems. As the name implies, such translation engines required human linguists to combine grammar and syntax rules with cross-language dictionaries. The simplest rules might state, for example, that in French, adjectives generally follow nouns, while in English, they typically precede them. But given the ambiguity of language and the vast number of exceptions and often contradictory rules, the resulting systems ranged from marginally useful to comically inept.


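The adjective-ordering rule mentioned in the excerpt gives a feel for how such rule-based engines worked, and a few lines of code are enough to caricature one. The sketch below is only an illustration, not any real MT system: it uses an invented seven-word French-English lexicon and a single noun-adjective reordering rule, and anything outside that tiny dictionary, every idiom and every ambiguous word, simply falls through untranslated.

```python
# Toy rule-based French -> English translation: a tiny bilingual dictionary
# plus one reordering rule (French adjectives generally follow the noun;
# English adjectives precede it). Vocabulary is purely illustrative.

LEXICON = {
    "le": "the",
    "la": "the",
    "chat": "cat",
    "souris": "mouse",
    "noir": "black",
    "grise": "grey",
    "mange": "eats",
}

NOUNS = {"chat", "souris"}
ADJECTIVES = {"noir", "grise"}


def translate(sentence: str) -> str:
    """Apply the noun-adjective reordering rule, then do word-for-word lookup."""
    tokens = sentence.lower().rstrip(".").split()

    # Rule: when an adjective directly follows a noun, swap them into English order.
    i = 0
    while i < len(tokens) - 1:
        if tokens[i] in NOUNS and tokens[i + 1] in ADJECTIVES:
            tokens[i], tokens[i + 1] = tokens[i + 1], tokens[i]
            i += 2
        else:
            i += 1

    # Dictionary lookup; unknown words pass through untranslated.
    english = [LEXICON.get(tok, tok) for tok in tokens]
    return " ".join(english).capitalize() + "."


if __name__ == "__main__":
    # "Le chat noir mange la souris grise." -> "The black cat eats the grey mouse."
    print(translate("Le chat noir mange la souris grise."))
```

Every exception to that one rule (and French has plenty of adjectives that do precede the noun) would need yet another hand-written rule, which is exactly the excerpt's point about exceptions and contradictory rules.
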
In spite of the advances made in recent years by easily available web-based translation services such as Babelfish, translating the poetic language of art song and aria texts into English still works best when done the old-fashioned way: spending the time to learn the nuts and bolts of foreign languages and using that knowledge to bring at least part of the original meaning and sensibility back into English.

1 comment:

  1. I guess that the only way to obtain a really high-quality machine translation, comparable to human translation, is to improve the "good old" rule-based method. Statistical and other formal approaches are perhaps even more limited, because natural language itself is not formal: it allows an unlimited number of combinations that cannot be found in already existing texts (especially in parallel texts). If we want computers to translate natural texts as accurately as human translators do, we have to imitate the mental steps humans make in the process of translation. Modern rule-based machine translation systems, such as PROMT, Systran or T1, use a restricted number of rules, perhaps only 10% of the information that can be extracted from a text, whereas a human translator uses everything, including external information about the real world. That's why humans translate correctly, whereas computers do not. As for the statistical method, it may also be of some use for checking the results of translation.
