There are many differences between human translation and machine translation (MT). For one thing, human translation has been with us for thousands of years, while machine translation has existed for only about half a century.
The Decree of King Ptolemy V, issued in 196 BC and written in ancient Egyptian hieroglyphs, Demotic script, and ancient Greek, is one of the earliest known examples of human translation; it was recovered on a stone slab known as the Rosetta Stone.
Other well-known examples of old translations include the translation of a Buddhist scripture known as the Lotus Sutra from Sanskrit into ancient Chinese by Dharmarakṣa, aka Zhu Fahu, in 286 AD, and the translation of the Old and New Testaments from Hebrew and ancient Greek into Latin, known as the Vulgate, by St. Jerome (Eusebius Sophronius Hieronymus) at the end of the 4th century. It should be noted that St. Jerome learned Hebrew and ancient Greek only later in life, and that Latin was not his native language: his native language was Illyrian, and he learned Latin only after he moved to Rome as a young man.
While the mental processes that human interpreters and translators employ in their similarly wired heads to interpret speech or translate written documents from one language into another have probably changed little, if at all, in the last few thousand years, the technical tools that interpreters and translators use for their work have changed greatly.
Instead of writing on stone or wooden slabs, which must have been quite a hassle, translators later started writing on papyrus, which was invented in Egypt around 3,000 BC, and paper, invented in China in 105 AD. After a thousand years or so during which translators and non-translators alike used a quill and ink to write documents such as the Magna Carta and the Declaration of Independence, nobody uses a quill for writing anymore, although some people still use a fountain pen with ink, and some still use a typewriter, a machine invented around the year 1800.
Computers and word processing software largely replaced typewriters by the end of the 1980s, and machine translation has become widely available on the Internet in the last two decades.
The translators who translated the Decree of King Ptolemy V more than 22 centuries ago would have been much better off had they been able to use tools such as a typewriter, a computer with word processing software, or even MT software. But I think that the absence of these tools did not have much influence on the quality of their translation. The most important thing, then and now, was how well they knew the source and target languages and the subject they were translating, and whether they were really trained as translators or were amateurs without much education and professional training. I suspect that, unlike now, only people who were educated and trained as translators would have applied for the job back then, since a mistranslation was probably punishable by death.
The simple facts about how translations are created, namely during processes taking place in the human head, have not changed at all since the times of King Ptolemy V. The mistake that people make now, at the beginning of the 21st century, is that they confuse MT, which is just another ingenious tool in a long series of tools created over the centuries by ingenious humans, with the end result: a real translation from one language into another, which can be created only by the well-functioning and highly trained brain of a competent and experienced human translator.
It is an understandable mistake. The MT product looks just like a real translation, that is, until you start reading it. In some cases, machine translations can appear to be very close to human translation. I usually download a machine-translated version of the patents I am translating, if one is available, and compare the MT product to the original language word by word at the beginning of my translation.
Machine translations of patents from Japanese, German, French, and other languages are very useful not only to monolingual people, because they do provide a lot of information about the original document, but also to specialized translators such as myself, because they suggest possible translations of technical terms in a wide range of fields such as medicine, biochemistry, physics, and electrical engineering.
But these terms must first be validated by a human translator before they can be used in a real translation. Unlike a human translator, a computer remembers and will instantaneously find, for example, the Latin name of a particular bone in your cranium, or of a sea alga, or of a bacterial strain, based on the Japanese or Chinese characters, because there is only one possible match in this case; and if both the characters and the Latin name have been stored in its memory by a human operator, the computer will find them correctly most of the time.
But when there are several matches for words that are commonly used in both languages, it is essentially impossible to design software that will place every technical term translated from one language into another in the proper context without first using the one element that computers lack: human thinking.
What I am doing when I use MT in my work is in fact post-editing of the MT product. While I look at the MT product a lot when I start translating, after a few hundred words I usually refer to it only when I am not sure about a certain term, because constantly comparing the original text to the MT product and to my own translation would slow me down too much.
This is why I think that the new business model based on the premise that editing of machine-translated texts by human translators will be more effective (read “cheaper”) than human translation is not a viable one. I think that editing the detritus that is often left behind by MT is more time-consuming than a translation done from scratch by a human translator, and that the result of such editing is always inferior to translations created by highly educated and highly experienced human translators.
For some purposes, there may be a place for “somewhat inferior but much cheaper” translations obtained when the MT product has been edited by humans working for a low hourly rate.
But probably not in my line of work. Patent translations are used, for example, as evidence in court to validate or invalidate technical claims, and thus entire patents, in patent litigation, or to file patents in various countries.
Most people who deal with patents, such as intellectual property managers of large corporations, patent lawyers, and inventors, realize that there is simply too much money at stake to leave translation to computers with specialized software, even if the product of machine translation is then revised by a human translator.
I don’t think that translations of patents obtained in this manner would even be usable in court at all. It should be a very simple matter to have a lawsuit dismissed during patent litigation if the “evidence” is a document that has been “translated” by machines and software.