Longread: Alice in Machine Translation Land





    In the sometimes heavily emotional discussion about machine translation, professional translators generally end up agreeing that machine translation will eventually take on certain tasks. They believe that translation engines will take over a small share of the work that is now being done by people. But these translators immediately add that there will always be fields and specialisms in which machines cannot replace humans. One of those is literary translation.
    But… is that really the case? If robots are taking over even deeply human tasks like caring, why shouldn’t they take over specific translation tasks as well? In this article I discuss the results of translating literature with a machine.


    This article is a written version of the talk I planned to present at ELIA Together in Athens last week. For personal reasons I was unable to attend, so here is what I was planning to say.

    Pieter Beens at ELIA Together 2018

     

    Book translations by a machine

    The first question that springs to mind when it comes to translating books with the help of machine translation is: “Why on earth should you do that?” Although that is a reasonable question, there is another question that precedes it: why on earth should you even consider it?

    Book translations have always been a source of pride for the author, the publisher and the literary translator. The job of the professional book translator has always been regarded with attention, admiration and some sense of jealousy. For many people inside and outside the translation industry, being a book translator is the summit of what you can achieve as a translator. Despite the decreasing attention to reading in general and literature in particular, books still carry some kind of magic. Many people still dream of writing a great novel and becoming immortal in the minds of millions. And if one in a million authors becomes that famous, it is a great honour to translate their work into your native language. Indeed, translating books, and in particular bestselling books, carries a sense of prestige. In some countries translators are as famous as the authors themselves.
    It might therefore not be surprising that many translators, whatever their specialism, maintain that the field of literary translation will forever exclude translation robots.

    A different reason for that strong position is that book translations require a different approach from other translations. It is commonly thought that translating a book demands different skills and expertise than a technical translation, and that a book translation must also be far more creative. It must be said that translating a book does require another way of working than translating technical manuals or content for fashion websites. And that is exactly what robots have proven not to be good at. In my article Feeding the translation robot, I wrote about the difficulties machine translation has with creativity. It is exactly that creativity that translation engines cannot mimic.

    A last reason why book translations and machine translation are considered a bad match is an emotional one. As noted, book translations bring a sense of pride to authors, publishers and translators. At the same time they also influence their readers on an emotional level. Readers often feel attracted to the physical book and to the main character alike. Admitting to using a translation engine to translate an outstanding piece of work would not only annoy the author, who feels that their process of thinking, writing and deleting can never be matched by a robot, but it would also undo the magic of a book translation. Who will ever feel the emotions a great book can convey if they know that the translation was at least partially made by a computer?

    These three reasons seem compelling enough to set the thought of machine translation for literature completely aside.
    That brings us back to the question posed at the beginning of this section: why on earth should you even consider book translations by a machine?
    The answer is based both on my personal experience and on some sense of curiosity.

    My experience as a book translator

    In the past seven years I have had the honour to translate a dozen books from English into Dutch. All those books, mainly for well-known Dutch publishing houses and world-famous authors, were in the broad field of non-fiction, and especially in the fields of history and politics.
    It must be said that non-fiction books need as much creativity as fiction books in order to be successful. Books only become bestsellers when they are authentic, deal with the right subjects and are written by authorities in their respective fields. With the right mix of ingredients they can become a steady source of income for their authors. But in contrast to fiction, non-fiction books need to stick to the truth. They deal with facts, persons, times and places, and that often gives them a different tone than fiction.
    My experience has taught me that few things beat a human translation, and I still believe that literature is best translated by people of flesh and blood. They are able to creep under the skin of an author and understand his style of thinking and writing. Literary translators have skills and a knowledge of literature and language that cannot be matched by a computer or robot. But it is exactly that quality, which a translation engine lacks, that makes me curious: what happens to literature when we try to have a robot translate it? It is probably the same question that triggered the developers of the Asibot: what happens when we feed a robot tons of books and ask it to write one?

    Looking for a match between man and machine

    Until now most professionals in the translation and publishing industry have argued that book translations will always be safeguarded from machines. That argument in itself calls for a test.
    With recent developments in the field of machine translation, where translation engines have become better and better, the discussion about digitized book translations becomes even more acute. Of course we can be convinced that book translations cannot be handled by computers, but is that really the case?
    My interest in the results of machine translation, rather than the technology behind it, and my willingness to test the claim that “the robots” cannot translate books, formed the basis for this small piece of research.

    A road paved by technological advancements

    As frequent readers of my blogs might know, a couple of years ago I invested in software to create my own translation engines. This software, Slate Desktop, works offline and uses a translator’s very own translation memories to create a translation engine (you can read the review here). This way the software does not create security vulnerabilities or risks of hacked, stolen and/or lost content. It thoroughly analyses the translation memories you feed it, ensuring it learns your unique style and wording. Like almost all translation engines, Slate Desktop performs best when it is fed an extensive and highly specialized memory. The simple fact that I had software to create my own secure offline translation engine, combined with some large book translations at hand, made me curious what would happen if a robot could mimic my style and wording. Could a translation engine work as a partner for literary translation?

    Before answering this question, two things must be said. First of all, Slate Desktop by its very nature cannot translate a single word without input. In order to translate any type of text it first must have learned the meaning of words, the structure of sentences and other essential linguistic details. So if it is set to translate a particular text, it should have the right knowledge to fulfil that task. In other words: if Slate Desktop is to translate a book, it should first have learned from similar book translations in order to find the right words in the right order.
    That also means that the translation engine cannot translate a book one hundred percent on its own. If we want to produce a great literary translation by machine we should do a big part of it ourselves in order to be able to teach the translation engine all it should know to do a proper job.

    Secondly, translating literature with a translation engine does not follow an entirely different process from a normal book translation. A normal translation process requires a translation, internal editing and correction, external editing, typesetting and a last correction round. Translating a book with a robot requires pre-translation, internal editing and correction (“post-machine editing”), external editing, typesetting and a final correction. Basically the process is the same, but in step 1 we use a robot, and in step 2 we are talking about post-machine editing instead of human editing.

    Traditional book translation process versus machine translation process

    While other translation engine providers, like SDL LanguageCloud, could have helped me a lot in translating literature because they boast enormous translation memories with billions of words, I decided not to rely on cloud-based solutions. If something went wrong and the text ended up online, it could have severe consequences for the publishing houses, the authors and me. I simply wanted to avoid that and therefore chose my offline solution.

    Travelling to machine translation land

    So after some careful consideration I decided to take the test. In the past couple of years I’ve received some great book translation projects (apart from the books mentioned on this website). I chose to test a translation engine on three books: a book about research on education, a book on political developments a couple of centuries ago and an authorized biography of a former presidential candidate. Unfortunately I cannot disclose the titles yet owing to contractual constraints, so let’s stick to the “education book”, the “upheaval book” and the “candidate’s book”.
    While the subjects of the books differed greatly, they all had something in common: the chapters were divided by subject and ended with a summary.
    For all these books I applied the same methodology. I opened the source text in Trados Studio and translated a specific chapter by hand. Once finished I exported the translation memory and fed it to Slate Desktop. Slate Desktop then created a translation engine with the specific terms and my particular style of translation. I then pre-translated the summary or last chapter with Slate Desktop and edited it.

    Literary translation with Slate Desktop

    Because of the structure of the books I had to repeat this process for every single chapter I translated, continually expanding the master translation memory for the book. Because each chapter had its own subject, simply applying a translation memory with the contents of another chapter did not work: that memory contained too little terminology relevant to the chapter at hand, so Slate Desktop was doomed to fail even before it started. To get the best results I had to translate part of each chapter and teach my robot new terms every single time, so that it could partner with me in translating the rest of the book.
    This resulted in fairly small corpora of some 10,000 words for the translation engines. The small size was somewhat compensated by the specialization: each corpus contained only content specific to the book and its chapters, so the chance of mistranslations was fairly low.
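    The per-chapter loop can be sketched in a few lines of Python. This is a toy illustration only: `build_engine` and `pretranslate` are hypothetical stand-ins I made up, not the real Slate Desktop or Trados Studio APIs, and a word-for-word lookup table is of course nothing like a real statistical engine.

```python
# Toy sketch of the per-chapter workflow: translate a chapter by hand,
# feed the memory to the engine, pre-translate the next part, edit, repeat.
# NOTE: build_engine and pretranslate are hypothetical stand-ins, not the
# real Slate Desktop or Trados Studio APIs.

def build_engine(memory):
    """Derive a naive word-for-word lookup table from (source, target) pairs."""
    engine = {}
    for source, target in memory:
        for s, t in zip(source.split(), target.split()):
            engine[s] = t
    return engine

def pretranslate(engine, text):
    """Replace every known source word; unknown words pass through untouched."""
    return " ".join(engine.get(word, word) for word in text.split())

# Chapter 1, translated by hand, seeds the master translation memory.
memory = [("the first chapter", "het eerste hoofdstuk")]
engine = build_engine(memory)

# Pre-translating the summary: unknown terms stay in English and must be
# fixed during post-machine editing.
print(pretranslate(engine, "the first summary"))  # "het eerste summary"

# The edited summary is fed back, enlarging the corpus for the next chapter.
memory.append(("the first summary", "de eerste samenvatting"))
engine = build_engine(memory)
print(pretranslate(engine, "the summary"))        # "de samenvatting"
```

    The point of the sketch is the feedback loop: every chapter translated and edited by hand enlarges the memory that the next pre-translation draws on, which is why each book needed its own ever-growing corpus.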

    Measuring the success of a literary translation robot

    In book translation processes a translation round is usually followed by an editing round with feedback on the quality and content of a book translation. As was explained earlier, that round was still included in this test of literary translation with a machine, but in order to measure the quality of the machine translation I chose to measure the difference between the raw machine translation and my own edits of this translation. This evaluation did not say much about the quality of my translation, but it did about the quality of the machine’s translation. The quality of my translation was still evaluated by the editor of the publishing house. Basically, a new step was added to the process described above.
    In order to measure the quality, I made use of the “editing distance”. This measure compares the original machine translation and the edited machine translation, and calculates a number that indicates the extent to which a particular segment was edited. The tool I used for this was Post Edit Compare, a useful tool that compares two files and then calculates the Post-Edit Modifications percentage. After each comparison a report can be generated with a detailed mathematical overview of all edits in the text. The developers of the tool describe the Post-Edit Modification percentage (PEM%) as follows:

    “The Post-Edit Modifications percentage in principal is calculated using an algorithm respecting the ‘Damerau–Levenshtein’ edit distance, by counting the minimum number of operations needed to transform one string into the other where an operation is defined as an insertion, deletion, or substitution of a single character, or a transposition of two adjacent characters.
    Understanding the ‘Damerau–Levenshtein’ edit distance, we can then calculate the PEM % (i.e. the weight of changes made to each translation during the post-edit phase represented as a percentage).
    A single character, tag & placeable represent the smallest single unit when calculating the distance.
    Note: to include the influence of changes made to tags or placeables – they are represented as single units”

    The editing distance score is calculated by taking the number of character edits needed to turn the raw machine segment into the edited segment (d) and dividing it by the number of characters in the edited segment (n). That ratio, expressed as a percentage, is the weight of the edits (w), which is subtracted from 100 (the score of a completely unedited segment). So: PEM % = 100 − 100 × (d/n).
    The higher the PEM %, the smaller the editing distance and the better the machine translation. A low PEM % means that there was a lot to change in the raw output, indicating that the machine translation was poor.
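    To make the metric concrete, here is a minimal Python sketch of the calculation: the widely used “optimal string alignment” variant of the Damerau–Levenshtein distance, followed by a 100 − 100 × (d/n) score. The function names are mine, and this is not the actual Post Edit Compare code, which among other things counts tags and placeables as single units.

```python
# Sketch of the edit-distance calculation behind a PEM-style score.
# NOTE: illustrative only; not the actual Post Edit Compare implementation.

def damerau_levenshtein(a, b):
    """Optimal string alignment distance: minimum number of insertions,
    deletions, substitutions and adjacent transpositions to turn a into b."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]

def pem_score(machine, edited):
    """100 means the post-editor changed nothing; 0 means everything."""
    dist = damerau_levenshtein(machine, edited)
    return max(0.0, 100.0 - 100.0 * dist / max(len(edited), 1))

print(damerau_levenshtein("machien", "machine"))  # 1 (one transposition)
print(round(pem_score("Het huis is rood",
                      "Het huis was rood"), 1))   # 88.2
```

    Note how a transposition of two adjacent characters counts as a single operation, exactly as the developers’ description above specifies.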

    Header of Post-edit comparison report

    The results of machine-translating literature

    After I had translated a couple of chapters for a specific book and built a small specific translation memory with highly specific terminology and syntax, I applied it to the last chapter of a part of the book, or to the last subsection of a specific chapter. I then edited it the normal way in order to create the best possible translation. In some cases during this process, it soon became clear that the translation engine did a poor job: the corpus of the translation engine was sometimes simply too small to generate a good translation with the right terms. On the other hand, the engine created sentences with my own syntax, vocabulary and style, which made the translation still sound natural. A summary per test is provided below.

    Education book

    For this book I was able to put together only a small corpus. At the same time the source text was rather poor, with bad hyphenation and stray tags. The result was a machine translation containing considerable garbage, in a style that was barely recognizable as my own. In the end a good deal of editing was needed to bring the machine translation up to standard.

    Upheaval book

    For this book a corpus of at least 31,000 words was used (growing larger with each chapter that was machine-translated). While the machine translation was sometimes good, for some segments it was still very poor owing to the introduction of new terms that had not been translated before. Much editing was therefore still needed to produce a good-quality translation.

    Candidate’s book

    For this smaller book, only a small corpus was used. The lack of a large corpus and the specific content of each chapter made a good machine translation difficult. Thanks to the straightforward syntax, translating sentences in a natural way was not that difficult for the translation engine, but the small corpus still produced strange sentences that needed much editing.

    As these small summaries show, the machine translation was challenging, mostly because the corpora were really small. That is, however, a recurring problem when translating literature with a translation engine. You first need to build a specific corpus with a particular style and terminology, but the specific nature and subject of a book, and its limited size, make building such a corpus very difficult. Putting together dozens of literary translations by different translators of different books in the genre can improve the output, but it harms the translation because it mixes different terms, styles and highly specific treatments of the source texts they were derived from.

    As expected, the post-edit comparison reports showed a very high number of heavily edited segments, with PEM scores varying from as low as 34% to as high as 95%. A couple of segments are displayed below:

    Results of different machine translations of books

    Will Alice dwell in machine translation land forever?

    After having tested machine translation on several book translation projects, one thing is clear: while a custom-made offline translation engine can certainly prove its worth, a dedicated translation engine for books is a challenge that cannot easily be overcome. Whatever the size of a book translation project, the very nature of literature makes it difficult to create an engine that can do a good job. If we are going to build a translation engine that can translate a book flawlessly, we need a massive translation memory from which machine translation software can extract the right terminology and syntax and the particular literary style of the translator. Software is increasingly able to do that, but the input available from a specific book translation project is too small to deliver the quality we want to ensure.
    That challenge could be overcome by making use of the general translation memories of different authors, subjects and literary translators, but that contradicts the very notion of literary translation. The only viable alternative is a massive translation memory from a translator who has translated an author’s whole oeuvre, but even then chances are that machine translation will fail to do its job.
    Another solution would be to make use of online resources and translation engines in the cloud, but that puts the secrets and works of authors at risk and that is the last thing translators, publishing houses and authors want.
    The machine translation of literature is therefore not a viable solution for the time being. The road to machine translation land is easy, but the way out is still impossible to find.

    Pieter Beens

    About Pieter Beens

    Freelance translator English-Dutch. Works for high-profile clients worldwide. Professional. Punctual. Passionate.

    3 thoughts on “Longread: Alice in Machine Translation Land”

    1. Your articles on MT are always quite educative. I am one of those who believe that MT cannot replace human translation, but gradually I am beginning to understand that despite this stance, MT has a lot of advantages and that it actually helps translators achieve a better output, with human input, of course.

    2. I love this article so much. Above all, I love this comment. We should all come to terms with the fact that machine translation can only ASSIST human translators and NOT replace them.

    3. The size of a training corpus is not the biggest limitation. Let’s say someone creates a huge TM with seemingly all it needs to study and learn. Well, MT works because of consistency not size. It seems like an oxymoron, but it needs consistent variation in the training corpus. And the variations must be within the same sentence pair, and within a bilingual concordance window of 5 or 6 words. That’s boring, sorry.

      But that boils down to this. If you choose a translated phrase this time because it’s a better match for the preceding or succeeding sentence, and then the next time you choose a different translation for the same source because (a) it feels better with the surrounding sentences or (b) you want to intentionally add variety (like my 8th grade English teacher told me to do), then you confound the machine learning that drives MT.

      MAYBE MT at zero cost per word can help inspire creativity, but it’s a horrible creative literary translation environment for post editing.
