From its genesis at the post-World War II dawn of computing – when ambitious
researchers believed it would take only a few years to crack the language
problem – until the late 1980s, machine translation, or MT, consisted almost
entirely of what are known as rule-based systems. As the name implies, such
translation engines required human linguists to combine grammar and syntax rules
with cross-language dictionaries. The simplest rules might state, for
example, that in French, adjectives generally follow nouns, while in English,
they typically precede them. But given the ambiguity of language and the vast
number of exceptions and often contradictory rules, the resulting systems ranged
from marginally useful to comically inept.
In spite of the advances made in recent years by readily available web-based translation services such as Babelfish, translating the poetic language of art song and aria texts into English still works best when done the old-fashioned way: spending the time to learn the nuts and bolts of foreign languages, and using that knowledge to bring at least a part of the original's meaning and sensibility back into English.