TRANSLATING BY MACHINE: A RAPPROCHEMENT
Is post-editing in your future? Will you soon be cleaning up after the machines? The AMTA Conference addressed this and more. BY MIKE DILLINGER, FOREWORD BY YVES AVÉROUS
What’s not to like about a conference taking place in Waikiki, Hawaii? Besides, the program put together last October by AMTA, the Association for Machine Translation in the Americas, was quite compelling. After a recent general meeting presentation touting the merits of post-editing, that is, fixing the serviceable translations some machines achieve today, the matter seemed to need closer examination. The MT trend also reached the halls of the ATA conference that followed a month later. Not coincidentally, Jiri Stejskal, President of the ATA, attended the Waikiki conference to represent us.
Many professional translators are already encountering machine translation daily in their regular work, via the MT features newly embedded in SDL Trados. The rest of us turned our backs long ago on automatic web translators and their funny results. Still, the technology has evolved since it squandered its credibility, and it deserves a second look. More importantly, it is the workflow from machine translation to post-editing, and the right tools for a simpler process, that remain to be honed. There are promising contenders to follow, such as local company PROMT, and a wide array of research. Former AMTA President Mike Dillinger guides you through the whole meeting. YA
MT at Work
The Association for Machine Translation in the Americas held its eighth biennial conference last October in Waikiki, Hawaii. The Association gathers researchers and users of translation software from around the world to discuss progress and prospects for the technology. Interest in machine translation is clearly on the rise: this conference was the best attended and had the most presentations of any in the Association’s 15-year history. Five days of programming included dozens of presentations from both researchers and users of machine translation, as well as tutorial sessions on topics ranging from the history of machine translation to Arabic text processing. There were also workshops on organizing successful deployments of translation technology and on new techniques for evaluating translation quality, plus a public showcase of translation products that are in development or available on the market. ATA President Jiri Stejskal represented the translator community and spoke at a panel discussion on “What We Need from MT.” All of the presentations are available on the conference website.
For the first time, the conference focused on MT at work, with a majority of presentations from the many people who actually put machine translation to good use on a day-to-day basis in industry, government operations, and translation agencies. Very few presentations were about situations where MT was the only form of translation used. This seems to reflect today’s reality: machine translation is most often used alongside translation memory as an adjunct to human translation. Consequently, many presentations focused on how to bridge the gap between translators and translation software.
One important topic was system usability: building MT systems to be easier to use and to better serve translators’ needs. MT systems today already include familiar translation memory technology and extend it by using linguistic or statistical models to suggest translations for unmatched input segments. Several presentations described translators’ use of and reactions to different kinds of translation tools, to help developers understand what translators are used to and find most helpful. Other presentations described how to get translators involved directly in system development, for example by having them determine the most important errors to fix. Examples included the system in use at the Pan American Health Organization (PAHO) and one under development at Traslán Translation in Ireland.
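The workflow described above, where a translation memory handles matched segments and an MT engine fills in the rest, can be sketched in a few lines. This is a minimal illustration with invented names and data, not the design of any system mentioned here; real TM tools use more sophisticated matching than the simple string-similarity score shown.

```python
from difflib import SequenceMatcher

# Toy translation memory: previously translated source/target pairs.
TM = {
    "Close the file before exiting.": "Cierre el archivo antes de salir.",
    "Save your work frequently.": "Guarde su trabajo con frecuencia.",
}

FUZZY_THRESHOLD = 0.75  # commercial TM tools typically use 70-85% matches

def translate_segment(segment, machine_translate):
    """Return (translation, origin) where origin is 'TM' or 'MT'."""
    best_score, best_target = 0.0, None
    for src, tgt in TM.items():
        score = SequenceMatcher(None, segment, src).ratio()
        if score > best_score:
            best_score, best_target = score, tgt
    if best_score >= FUZZY_THRESHOLD:
        return best_target, "TM"             # exact or fuzzy match found
    return machine_translate(segment), "MT"  # unmatched: hand off to the MT engine

# Stand-in MT engine for demonstration only.
demo_mt = lambda s: f"[MT] {s}"

print(translate_segment("Close the file before exiting.", demo_mt))
print(translate_segment("The weather is nice today.", demo_mt))
```

In practice the MT output would then go to a post-editor, which is exactly where the training and tooling questions discussed below come in.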
Another important topic was translator training. Newer MT systems with statistical tools can use post-edited translations directly to continuously refine and improve the system’s performance, but high-quality translated input is essential. Consequently, post-editing MT output and training translators for post-editing were more prominent topics than ever before, particularly among the diverse government users of MT who were present. There was a consensus that more involvement of translators is needed to develop effective training and certification programs for post-editing. Post-editing tools specifically designed for translators’ needs have begun to appear in some MT systems, such as the PAHO system. A fast-growing segment of the translation market calls for translators who can understand and work with MT in organizations with very high translation volumes.
Evaluation of translation adequacy is an ongoing concern in the MT community, both to measure progress and to identify the specific areas most in need of improvement. As translation volume increases, fully automatic evaluation, however difficult, is becoming more and more important. The National Institute of Standards and Technology (NIST) organized an innovative workshop at this meeting to promote and compare new kinds of translation evaluation software, as part of an ongoing effort to drive technological progress in the field through systematic, objective comparisons of different technologies. Current automatic measures of translation adequacy, such as BLEU and METEOR, correlate well with expert human translators’ ratings.
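To give a feel for how such automatic measures work, here is a minimal, self-contained sketch of BLEU-style scoring: modified n-gram precision combined with a brevity penalty. This is an illustrative simplification (single reference, crude smoothing), not the official NIST scoring implementation.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams occurring in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """BLEU-style score of a candidate sentence against one reference."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams, ref_ngrams = ngrams(cand, n), ngrams(ref, n)
        overlap = sum((cand_ngrams & ref_ngrams).values())  # clipped matches
        total = max(sum(cand_ngrams.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)       # smooth zero counts
    # Brevity penalty: discourages candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    # Geometric mean of the n-gram precisions, scaled by the penalty.
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

ref = "the cat is on the mat"
print(bleu("the cat is on the mat", ref))   # identical candidate scores 1.0
print(bleu("there is a cat on the mat", ref))  # partial overlap scores lower
```

Scores like these are cheap to compute over millions of segments, which is why automatic metrics matter so much as translation volume grows, even though no single number can capture everything a human rater sees.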
The Association for Machine Translation in the Americas will continue to focus on bridging the gap between human and machine translation when it hosts the international Machine Translation Summit XII in Ottawa, Canada, in August 2009. One innovation for this meeting is an additional track of presentations focusing on tools in translation practice and on technology in translator training. Go to the conference website to send in your suggestions for sessions and workshops. We look forward to seeing you there! MD