In China, the word “Mandarin” is rarely used and may have originally been derived from a word in the Sanskrit language.

By Song White

Years ago when I was asked if I speak Mandarin, I was puzzled. What’s Mandarin? I later learned that “Mandarin” means the official spoken Chinese language. I am not alone—many of my Chinese friends have the same confusion. That’s because we don’t call it “Mandarin” in China. Instead, it’s “Putonghua,” meaning “common spoken language.”


It is one of the oldest dilemmas in literary translation: if it is the translator’s mission to remain invisible, how does one recognize—and value—the translator’s work? 

By Anne Milano Appel

“Good humor is a paradox,” writes humor aficionado Mel Helitzer. “The unexpected juxtaposition of the reasonable next to the unreasonable.” The literary translator must indeed be equipped with good humor to be able to hover in that paradoxical and perpetual state between visibility and invisibility. If I needed yet another reminder of this chronic condition (which I don’t!), it came in the form of a recent article by Umberto Eco in the Italian weekly L’espresso (“Cappelli alti di forma”, Sept. 28, 2007)i. In it the well-known semiotician, philosopher, medievalist, and writer (perhaps best known for his ambitious novel The Name of the Rose) makes a statement whose obviousness on the surface may seem equivalent to “aria fritta” (fried air), as the Italians say when they mean “so what else is new?” Something that is “fritta e rifritta” (fried and refried) is an old story, old news. As Eco puts it (I’m translating of course):

“The translator’s job is therefore difficult and paradoxical, since he should do all he can to make himself invisible, … and yet he would (justly) like this invisibility to be rewarded with a certain visibility. Yet the translator’s success lies precisely in achieving invisibility…”ii

Now I have two reactions to this statement. The first is the duh-uh factor: so the translator’s job is difficult and paradoxical. Tell me about it. The second, and more serious, issue has to do with the use of the word “rewarded” (premiati). As I see it, the “certain visibility” sought by the translator should not be considered a “reward,” but something he rightfully deserves for his success at being “invisible.” Eco himself uses the word “justly” in describing the translator’s desire for visibility. Indeed, it seems to me that a distinction should be maintained between the invisibility of the translator’s hand in the work he produces – something that is decidedly desirable – and the fair, just and merited attribution of the work that is rightly due him. There is a vast difference between striving for invisibility in the act of translation (not letting your hand show through) and being treated as invisible when it comes to having your name identified with the work you’ve produced. Unfortunately, the “invisibility” that is most associated with the translator is all too often not his skill in hiding his hand, but rather the lack of attribution. For example, some publications, here and abroad, regularly neglect to include the translator’s name when referring to a book, and many publishing houses refuse to put the translator’s name on the cover. As a colleague recently put it: this type of recognition should not be considered a “reward” but, given the circumstances, it often ends up being regarded as such. And therein lies the intriguing paradox: if the translator is invisible (“good,” in Eco’s world), who then is able to notice him, and presumably accord him some form of visibility?

Erasing the tracks

Invisibility in the text is certainly something to strive for. One way to see if a translator has “erased” his own tracks is to check his body of work. If the translations are of works by different authors yet they all display (betray? traduttore, traditore…) the same hand, chances are the “voice” you’re hearing is the translator’s and not the author’s. This cookie-cutter approach lies at the far end of the spectrum from invisibility and transparency. Malcolm Jones, writing about two new English translations of Tolstoy’s War and Peace, mentions Constance Garnett’s approach as what may be considered an example of cookie cutting:

“Garnett was a woman in a hurry—she translated some 70 Russian books into English—but what she gained in speed, she lost in subtlety. Her version of “War and Peace” isn’t bad, but it’s not exactly Tolstoy either. It has a sort of one-size-fits-all quality”. (Malcolm Jones, “Lost in Translations”, Newsweek, Oct. 15, 2007)iii

Invisibility in rendering the author’s text is prized and justly so. Wyatt Mason, for example, reviewing Margaret Jull Costa’s translation of Javier Marías’ Your Face Tomorrow: Dance and Dream, notes that Marías’ style is “faithfully rendered by Margaret Jull Costa, his principal English translator, who achieves a rare feat: presence and near invisibility.” (Wyatt Mason, “Interpreter of Malefactors”, The New York Times, August 27, 2006)iv More recently, Kathryn Harrison’s review of a new work by Mario Vargas Llosa praised translator Edith Grossman as having produced “the fluid artistry readers have come to expect from her renditions of Latin American fiction.” (Kathryn Harrison, “Dangerous Obsession,” The New York Times Book Review, October 14, 2007)v The reviewer goes on to speak of “a remaking rather than a recycling”, and though she is referring to Vargas Llosa’s recreation of Emma Bovary’s story, the words could readily be applied to the translator’s craft as well:

“The genius of ‘Madame Bovary,’ as Vargas Llosa describes it in ‘The Perpetual Orgy,’ is the ‘descriptive frenzy … the narrator uses to destroy reality and recreate it as a different reality.’ In other words, Flaubert was a master of realism not because he reproduced the world around him, but because he used language to create an alternate existence, a distillate whose emotional gravity transcends that of life itself.”vi

The writer creates an alternate existence, much like that created by the literary translator. Just as A is to B (the real world is to the author), so C is to D (the author’s text is to the translator). By engaging in a form of rewriting or re-creation of the original text (while remaining invisible) the translator gives the writer a voice in another language. It has been said that the act of translation allows the translator to have a love affair with the author’s words. Indeed, there is a sensual component to the process, since words appeal to the senses and have a voluptuous quality. On one level it is all about seduction and attraction. It is paradoxical then that the translator should vanish after weeks and months of living in close, intimate contact with the author, attempting to render the subtle meanderings of his mind… after the “I” has become “we” and distinctions have blurred.

Part accomplice

The invisibility of the translator in the text stands in distinct contrast to the invisibility that is all too often represented by the denial of due recognition for the work he has produced – a recognition that is not only fair but merited. In his article in L’espresso Eco also writes: “For years one of the battles translators have waged has been that of having their name on the title page (not as co-author but at least as an essential intermediary)…”vii This “not as co-author” is interesting and telling. Certainly many authors (and many translators) would agree with Eco. Others, on the other hand, are more generous and more acknowledging. For Claudio Magris, for example, the translator is a co-author. In Ilide Carmignani’s interview of Magris, which I translated for Absinthe: New European Writing (March 2007), the writer states: “unquestionably, both when one translates and when one is translated, there is a strong sense that the translator is truly a co-author, part accomplice, part rival, part lover…”.viii

Accomplice, rival, lover… heady stuff. Definitely at the other extreme from the prosaic intermediary, middleman or go-between. There is perhaps a second paradox to be noted here. Without the translator, the author would be invisible! José Saramago, the Portuguese novelist and winner of the 1998 Nobel Prize for literature, once stated that writers create national literatures with their language, but world literature is written by translators.ix French filmmaker Robert Bresson,x whose films were characterized by a profound intensity, wrote that his aim was to “Make visible what … might perhaps never have been seen.” And Swiss artist Paul Klee: “Art does not reproduce the visible; rather, it makes visible [sic]”.xi The translator too “makes visible” the author. Another way to think about how the translator brings visibility to the author while remaining in the shadows is to imagine the act of translation as a mask. The mask as an age-old form of disguise and masquerade is worn over the face to conceal an individual’s identity and, by its own features, create a new persona. In this metaphor, when the wearer (translator) is attired in the mask (engaged in the act of translation), there is a loss of his previous identity and the birthing of a new one (the author’s new voice). And so we have the Masquerade of Translation. 

If there is indeed a second paradox to be found in the literary translator’s craft, the words of physicist Edward Teller come to mind: “Two paradoxes are better than one; they may even suggest a solution.” Solutions, anyone?


i. Umberto Eco, “Cappelli alti di forma”, in the column La Bustina di Minerva, L’espresso, Sept. 28, 2007, online at
  ii. Ibid.: “Dura e paradossale fatica è dunque quella del traduttore, il quale dovrebbe fare il massimo per rendersi invisibile… eppure vorrebbe (giustamente) che questa invisibilità fosse premiata con una certa visibilità. Eppure il successo del traduttore è proprio il raggiungimento dell’invisibilità…”
  iii. Malcolm Jones, “Lost in Translations”, Newsweek, Oct. 15, 2007.
  iv. “Interpreter of Malefactors”, Review by Wyatt Mason, The New York Times, August 27, 2006.
  v. “Dangerous Obsession”, The Bad Girl by Mario Vargas Llosa, Review by Kathryn Harrison, The New York Times Book Review, October 14, 2007.
  vi. Ibid.
  vii. Eco, op. cit.: “Una delle battaglie dei traduttori è stata da anni quella per avere il nome sul frontespizio (non come co-autore ma almeno come mediatore fondamentale)…”
  viii. “Part Accomplice, Part Rival: the Translator is a True Co-Author: An interview with Claudio Magris”, Anne Milano Appel, translator. Absinthe: New European Writing, Dwayne Hayes, ed., March 2007. (Translation of an interview by Ilide Carmignani that originally appeared in Comunicare. Letterature. Lingue, Il Mulino, Bologna, n. 6, 2006, pp. 221-226.)
  ix. Saramago reportedly made the statement in May 2003 during a speech to attendees at the Fourth Latin American Conference on Translation and Interpretation in Buenos Aires.
  x. Bresson felt a deep feeling of responsibility to his audience, and thought that it was not the aim of his filmmaking to impress the viewers with his brilliance or the brilliance of his performers, but to make them share something of his own vision. Cited at:
  xi. “L’art ne reproduit pas le visible ; il rend visible…”

“Glocalization”: The Power of Centralized Website Localization

By Anna Schlegel

How do transnational companies maintain their brand integrity across the multiple localizations of their Internet presence? A case study from VeriSign.

To put the globalization challenge for VeriSign’s website in perspective, it is first necessary to understand that the firm protects, with digital certificates, the secure websites of a majority of companies that have a presence on the Internet. This means more than 750,000 web servers, including 93 percent of the Fortune 500. VeriSign operates the largest independently owned specialized network in the world, routing billions of connections from carrier to carrier—between protocols and across national boundaries. The company monitors 300 million retail transactions and delivers more than 200 million mobile-originated intercarrier messages and more than one million multimedia messages every day.

In 2003, VeriSign had just four international sites—in France, Germany, Japan, and the U.K.—in addition to its corporate website, supported by a single Global Project Manager. Today, the company has more than 18 international sites organized under a centralized global web operation, supported by five language service providers across the following countries: Australia, Brazil, Denmark, France, Germany, Hong Kong, India, Japan, Mexico, Norway, Singapore, Spain, Sweden, Switzerland, Taiwan and the U.K. The team has grown to include an international executive producer, developers, designers, a localization team and project managers in various geographical regions.

Starting at the bottom

Prior to my joining the company in 2004, VeriSign was using an array of consultants and tools to launch websites with little in-country support or dedicated international developers, as the company was not yet fully staffed internationally. Issues such as international tax, customs, and legal matters were not being addressed consistently (if at all), so appropriate content definition was a complex issue.

Upon my arrival I was told, “Look at what any of the global top five companies do and implement something similar. And, while you’re at it, choose and implement a global content management system (GCMS).” In other words, I had to start from scratch, developing a global website strategy and virtually every one of its component parts, from budget, affiliates, and team resources to workflow automation and vendor management to localization, website maintenance, and, of course, buy-in from both corporate leadership and international management.

Aligning goals

I determined that the best way to implement such sweeping changes was to ensure that my globalization strategy meshed with the business plans of both senior and line management. I attended many presentations at the director level and above, always making sure to ask the presenter how international needs and requirements figured into their plans; many times, of course, they didn’t. I presented executives with visual evidence of current and future pages in order to educate them on what was currently wrong with the various websites, and how they could be changed to support the VeriSign brand.

It took two and a half years, some burnout (the workload and strategic planning during this period were handled by just two staffers, one vendor, and no translation tools) and a lot of hard work to clean up everything and to gain management support to build a real team with a real budget. An enormous resource was the International Product Commercialization (IPC) Group. By joining this organization, I was able to have a voice in creating VeriSign’s global brand in terms of what the company could market and legally sell around the world.

Another important factor in our eventual success was creating a vision and mission statement for international operations, and continuously sharing it companywide at every opportunity. Our team set up glossaries and style guides, and I recruited as many allies as I could throughout the organization, focusing on the brand managers (who were key) and in-country personnel. Where positions weren’t filled, we used contractors. We built our budget dollar by dollar, until we could finally support the team that was required to carry out a globalization strategy appropriate for our company.

Keys to success

As proof of VeriSign’s success with our global website strategy, we saw the number of words processed double from 1.1 million to over 2 million between 2004 and 2006. In 2003, there were no stakeholders for this endeavor; by 2006, there were 14 major stakeholders actively engaged in the localization process. The number of countries supported jumped from seven in 2004 to 16 in 2006. Non-core language support now includes Arabic, Catalan, Chinese, Czech, Farsi, Finnish, Greek, Hebrew, Hindi, Hungarian, Japanese, Korean, Norwegian, Polish, Russian and Turkish. Infrastructure tools include glossaries and style guides in four areas: keywords, website buttons, products/services/descriptions, and a monolingual glossary to support translators. We recently launched an initiative to deploy a global content management system to replace the old, email-based system, with full approval from top executives.

As of the beginning of last year, our corporate site had 5.3 million unique visitors and 30 million page views. The international sites, including 10 in the EMEA region (Europe/Middle East/Africa) and two in Latin America, had 557,000 unique visitors and 2.7 million page views. While the corporate site has experienced a modest 3 percent increase in visitors and a 1 percent increase in page views year-over-year, the increases for the international sites were 74 percent and 41 percent, respectively! France was up by 288 percent, Switzerland by 139 percent and Germany by 122 percent.

Thirty-seven percent of all traffic on VeriSign’s intranet is now generated from international offices, and 34 percent of the traffic on the company’s sales portal is from outside of the U.S. In terms of lead generation, 308,000 web leads were collected worldwide in 2006.

Along the way, our team took on responsibility for translation as well. Engineering depends on our team to provide guidance, and they share the same language service providers. Our team mandates the translation processes, QA processes, and more. There are now three Web Globalization Managers based in EMEA and some funding for usability studies in that region, as well as in Latin America and Asia-Pacific.

What has been responsible for the success of our team’s work? Perhaps the most important factor has been the engagement of the in-country marketing teams. By creating Service Level Agreements, providing tools and access for each region to request localized content, offering local training, preparing a glossary repository, and conducting frequent meetings with colleagues in the regions, our team members have been able to effectively integrate their goals and operations with those of corporate.

Lessons learned

In carrying out our plan, our team learned a number of important lessons. These included, among others, the notions that globalization must be supported by top-level management; a vision and a mission for international operations are crucial, and must be integrated with the company’s corporate vision; metrics must be communicated through lead generation, chat forums, and the company’s support site; and tools must be tested before buying, based on client requirements—not those of the vendors.

As we launched more websites, our team focused on overcoming new challenges, which encompassed the need for consistency across all sites, a sensitivity to audience diversity, attention to revenue and strategy, success metrics, regional representation, and doing more with less. Throughout it all, though, we were able to effectively demonstrate how centralized website localization builds the VeriSign brand through the creation of consistent positioning, messaging, and voice and tone—all of which, in turn, helped build brand awareness and recognition. Such efforts also protected the company’s key trademark assets through proper content translation, mitigating the company’s worldwide exposure.

A clear globalization strategy and execution supports international expansion as well, through the extension of product offerings on a global basis via acquisition and monthly IPC approvals. Simultaneous website launches and support of worldwide sales activity also contribute to this.

Maturing with our markets

As the U.S. market matures, VeriSign’s corporate management looks to international markets as the company’s new frontier for expansion. Building on our team’s success over the past several years, we are now concentrating on issues such as contained English (that is, engaging writers to reduce the amount of content they create), expanding in Asia, allowing more customization at the local level, and, as always, maintaining a high concentration on quality.

Googling Machine Translation

By Paula Dieli

Mention the words “machine translation,” and a translator’s thoughts will range from job security to the ridiculously funny translations we’re able to produce with so-called online translation tools. Should we be worried that machines will take over our jobs? Paula Dieli thinks not, and explains why in this report.

I recently attended a presentation on “Challenges in Machine Translation,” sponsored by the International Macintosh Users Group (IMUG), at which Dr. Franz Josef Och, Senior Staff Research Scientist at Google Research, presented some of the challenges Google is facing in its machine translation (MT) research, and how some of these challenges are being addressed. Excitement about successes in machine translation research first surged back in 1954 with a report in the press regarding the Georgetown University/IBM experiment, which had used a computer to translate Russian into English. Since then, over the past 50 years, we have continued to read about the great advances that will be possible in “the next 20 years,” but these great advances never came to pass. When the Internet came of age, online translation tools surfaced and we translators amused ourselves by seeing what crazy translations we could come up with by entering seemingly simple phrases.

The linguistics of MT

So why did the research never produce anything really viable? It was based on a linguistic approach; that is, an analysis of the structure of a language followed by an attempt to map it into machine language such that one could input a source language text and out would come a wonderful translation in the target language, albeit with a few minor errors. As we all know, a language is filled with so many cultural, contextual, idiomatic, and exceptional uses that this task became virtually impossible, and no real progress has been made with this approach in the past 50 years.

Dr. Geoffrey Nunberg, adjunct full professor at UC Berkeley, linguist, researcher, and consulting professor at Stanford University, had this to say at a recent NCTA presentation: “I asked a friend of mine, who is the dean of this [MT] field, once, ‘if you asked people working in machine translation how long it will be until we have perfect, idiomatic machine translation of text …?’, they would all say about 25 years. And that’s been a constant since 1969.”

The data-driven approach

In recent years, MT researchers have begun to take a different approach, which can be loosely compared to the work you do as a translator when you use a tool such as SDL Trados WinAlign or Translator’s Workbench. That is, you use a data-driven methodology. As you translate, you store your translations in a translation memory (TM), so that if that same or a similar translation appears again, the tool will notify you and let you use that translation as is, or modify it slightly to match the source text. The more you translate similar texts in a particular domain, the more likely it is that you will find similar translations already in your TM.
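The translation-memory idea described above can be sketched in a few lines of Python. This is only an illustration of the principle, not SDL Trados’s actual matching algorithm: the segment pairs are invented, and real TM tools use far more sophisticated (and configurable) similarity scoring than the simple character-level ratio used here.

```python
from difflib import SequenceMatcher

# A toy translation memory: source segments mapped to stored translations.
# (Hypothetical English/French pairs, for illustration only.)
tm = {
    "The house is for sale.": "La maison est à vendre.",
    "The apartment is for rent.": "L'appartement est à louer.",
}

def fuzzy_lookup(segment, memory, threshold=0.75):
    """Return the best (source, translation, score) above the threshold,
    or None if nothing in the memory is similar enough."""
    best = None
    for src, tgt in memory.items():
        score = SequenceMatcher(None, segment.lower(), src.lower()).ratio()
        if score >= threshold and (best is None or score > best[2]):
            best = (src, tgt, score)
    return best

# A close-but-not-exact segment surfaces as a "fuzzy match" that the
# translator can lightly edit instead of translating from scratch.
match = fuzzy_lookup("The house is for rent.", tm)
```

The more you translate in one domain, the denser the memory becomes and the more often such lookups succeed, which is exactly the leverage the data-driven MT systems discussed below exploit at massive scale.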

Similarly, if before you began to translate a weekly online newsletter of real estate announcements, for example, you searched the Internet for already existing translations in your language pair and then aligned them and input them, via WinAlign, into your TM, you might find that much of the work had already been done for you. Imagine now if you were to input 47 billion words worth of these translations. Your chances of being able to “automatically” translate much of your source text would certainly increase. This is the approach that Google is taking.
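The alignment-and-reuse workflow just described can also be sketched, under heavy simplification: pair up corresponding sentences from an existing source document and its published translation, store the pairs, and reuse them when the same segments recur. The documents below are invented real-estate snippets, and a real aligner such as WinAlign must cope with mismatched sentence counts and formatting, which this 1:1 sketch ignores.

```python
# Toy alignment: zip corresponding sentences from a source document and
# its existing translation into a memory (assumes a clean 1:1 alignment).
source_doc = [
    "Two-bedroom flat, city center.",
    "Garden, garage, and cellar.",
    "Contact the agency for a viewing.",
]
target_doc = [
    "Appartement deux pièces, centre-ville.",
    "Jardin, garage et cave.",
    "Contactez l'agence pour une visite.",
]
memory = dict(zip(source_doc, target_doc))

def pretranslate(segments, memory):
    """Fill in segments already present in the memory; leave the rest None."""
    return [(s, memory.get(s)) for s in segments]

new_text = [
    "Garden, garage, and cellar.",      # seen before: reused automatically
    "Close to schools and transport.",  # new: left for the human translator
]
results = pretranslate(new_text, memory)
```

Scale this idea up from three sentence pairs to billions of words of parallel text, and you have the essence of the approach described next.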

Google’s goal, as stated by Dr. Och, is “to organize the world’s information and make it universally accessible and useful.” Now before you go thinking you’re out of a job, their data-driven approach has proven successful only for certain language pairs, and only in certain specialized domains. They have achieved success in what they call “hard” languages, that is, from Chinese to English and from Arabic to English, in domains such as blogging, online FAQs, and interviews by journalists.

Dr. Och reported that their progress was due to “learning from examples rather than from a rule-based approach.” He admits that “more data is better data.” He went on to say that adding 2 trillion words to their data store would result in a 1 percent improvement for specific uses such as the ones described above. They see a year-to-year improvement of 4 percent by doubling the amount of data in their data store, or “corpus.” The progress reported by Dr. Och is supported by a study conducted by the NIST (National Institute of Standards and Technology) in 2005. Google received the highest BLEU (Bilingual Evaluation Understudy) scores using their MT technology to translate 100 news articles in the language pairs mentioned above. A BLEU score ranges from 0 (lowest) to 1 (highest) and is calculated by comparing the machine’s output against reference human translations (a brevity penalty is applied to short output, which would otherwise artificially inflate the score).

Challenges and limitations

So what are the limitations of this data-driven approach? When asked by a member of the audience if Google’s technology could be used to translate a logo, Dr. Och instantly replied that such a translation would require a human translator. It’s clear that Google’s approach handles a very specific type of translation. Similar data-driven MT implementations can be used to translate highly specialized or technical documents with a limited vocabulary which wouldn’t be translated 100 percent correctly, but which would be readable enough to determine whether the document is of interest. In that case, a human translator would be needed to “really” translate it.

The Google approach described above deals with a tremendous amount of data and a very targeted use. It works only for some languages—German, for example, has been problematic—and in order to improve in more than just small increments, human intervention is required to make corrections to errors generated by this approach. One example that Dr. Och provided—the number “1,173” was consistently incorrectly translated into the word “Swedes”—confirms that a machine can’t do it all.

And if you think for a minute about the amount of Internet-based data being generated on just an hourly basis, it’s great to have machines around to handle some of the repetitive (read: uninteresting) work, and let us translators handle the rest. That still leaves plenty of work for us humans.

Alternative technologies

There are other approaches to MT, including example-based technology, which relies on a combination of existing translations (such as you have in your translation memory) along with a linguistic approach that compares an unmatched segment against a set of heuristics, or rules, based on the grammar of the target language. Some proponents of this approach concede that large amounts of data would be needed to make it successful, and have all but abandoned their research. Once again, we can see that any approach that relies even partially on linguistics has not met with a reasonable level of success.

Other advances occurring in the MT arena include gisting and post-editing. MT can be used successfully in some settings where the gist of a document is all that is needed in order to determine if it is of enough interest to warrant a human translation. There are also MT systems on the market that produce translations that require post-editing by human translators who spend (often painful) time “fixing” these translations, correcting the linguistic errors that such a system invariably produces. While this may not be the translation work you’re looking for, I know of at least one large translation agency that provides specific training for this type of post-editing to linguists willing to do this kind of work. This is another example that shows that while machines play a part, there is still a role for human translators in the overall process.

Still other advancements include the licensing of machine translation technology based on a data-driven approach, which can be tailored to work with existing translations and terminology databases at a specific company. As with the Google solution, such technologies typically work on a limited set of languages. However, if they can help translate some of the less interesting, repetitive information out there, with more information being produced at a continually increasing rate, have no fear; there will still be plenty of work for human translators to do!

The road ahead

Where does that leave us? From the typewriter to word processors to CAT (Computer-Assisted or Computer-Aided Translation) tools and the pervasiveness of the Internet, our livelihood has been transformed, in a positive way. We are more productive and able to work on more interesting translations than ever before.

I encourage you to embrace technology; understand how it is helping to make information accessible, and learn how technology can help translators do the work that only humans can do.

more information

The calendar of upcoming International Macintosh Users Group (IMUG) presentations can be found at

You can get the official results of the 2005 Machine Translation Evaluation from the National Institute of Standards and Technology (NIST) at

The Civilian Language Reserve Corps, Part II

By Stafford Hemmer

In the May issue of Translorial, we learned of the history and mission of the Civilian Language Reserve Corps, the U.S. government’s 2004 initiative to widen the pool of qualified volunteer language professionals in the wake of the September 11th attacks. In this concluding segment, we hear from representatives of the program and the president of ATA about this unusual effort to invigorate American foreign language abilities.

On May 8, 2007, the Office of the Assistant Secretary of Defense (Public Affairs) for the Department of Defense issued an official News Release: “DoD Announces Pilot Language Corps.” The corps was initially proposed to Congress shortly after the devastation of September 11, 2001, with the U.S. Department of Defense one of several agencies working jointly to originate “a vital new approach to address the nation’s needs for professionals with language skills … an integral component of the Department of Defense’s language roadmap, and the President’s National Security Language Initiative.”

According to Gail McGinn, Deputy Under Secretary of Defense for Plans, “the department is confident that a successful Language Corps will not only address gaps in federal preparedness, but also serve to reinforce the importance of language skills in the American population and the U.S. education system.” Yet true to the Leviathan nature of the U.S. bureaucracy, organizing, funding, approving, revising, debating, and moving forward with the Corps has turned into a multi-year process. Even the name of the group—originally the “Civilian Linguists Reserve Corps”—has been changed several times and is now the “National Language Service Corps,” according to Robert Slater, Director of the National Security Education Program.

Further, while the original charter stated that “the pilot Corps will include no fewer than 1,000 members drawn from all sectors of the U.S. population,” to date no volunteers have been recruited; enrollment is not likely to start until 2008.

According to DoD information, the newly christened NLSC, which “will be an entirely civilian organization managed by the DoD for the federal sector, composed of members who will voluntarily join and renew their membership,” begins with a pilot effort involving approximately 10 languages (see Part I). Although not able to indicate which languages have been identified for the pilot project, Mr. Slater confirmed that “the final list of languages is still in development, and will be announced in the fall.”

Organization and structure

The NLSC is basically divided into two groups of participants: the “national pool” and the “dedicated pool.” All volunteers will have their skills certified by the NLSC, and it is likely that renewal procedures will involve coursework or projects that hone or elevate current skill sets. But while the national pool of volunteers is intended for deployment in the event of “war, national emergency, or other national needs,” the dedicated pool will consist of a smaller number of participants, who will serve specific federal agencies on a contractual basis, and “agree to perform specific responsibilities and duties.”

According to Mr. Slater, “the major difference between the two pools is the nature of the contractual relationship involving the individual member. In the case of the national pool, members are not obligated to serve. They will be activated only depending upon their availability. In the case of dedicated members, they will actually enter into contractual relationships with specific federal agencies. They will be expected to be available up to the days specified in their contract.” Volunteers in both pools will be expected to travel, both within the U.S. and abroad.

When asked if volunteers in either pool will be involved in the interrogation of enemy combatants, or of other individuals detained for what the U.S. government deems terrorist-related activities, Mr. Slater replied, “we are not nearly at a point where this question can be answered.”

The ATA viewpoint

Back in July 2006, ATA President Marian Greenfield announced to the organization’s membership that the government would soon be enrolling volunteers in the CLRC. Since that message, Ms. Greenfield reports, “there was no measurable response from membership, other than members who were grateful to know about such translation/interpreting volunteer opportunities, particularly those that could potentially lead to paying jobs.” Compensation for the “volunteer” work is, in fact, still planned under the NLSC. “Compensation plans are still under development,” explained Mr. Slater. “The assumption at this point is that national pool members will be compensated only if they are activated. However, all members will derive other benefits from membership in the Corps.”

Ms. Greenfield remains optimistic about the prospects for the NLSC and interested linguists, although there is no official ATA position on the project. As Ms. Greenfield explains, “If the [NLSC] works as planned, it will be of tremendous value to those who need help during times of local and/or national emergencies. It has the potential to create jobs for ATA members. And, once again, the important role that professional translators and interpreters play in bridging the languages, customs, and cultures of different communities will be highlighted.”

The Civilian Linguist Reserve Corps, Part I

By Stafford Hemmer

In an attempt to widen the pool of qualified volunteer language professionals in the wake of the September 11th attacks, the U.S. government in 2004 instituted the Civilian Linguist Reserve Corps. In this first of a two-part series, we examine the CLRC’s history and mission. In the concluding segment, in the September Translorial, we’ll hear from many parties involved in this unusual effort to invigorate America’s foreign language abilities.

In July 2006, NCTA members who also belong to the ATA received an email appeal from ATA President Marian Greenfield. As a follow-up to the ATA’s successful response to the Red Cross request for volunteers, Ms. Greenfield extended an invitation to interested translators and interpreters to consider joining the national Civilian Linguist Reserve Corps. “CLRC volunteers may be called upon during a national crisis of one sort or another, such as supporting preparations for evacuations before and after natural disasters,” she explained. According to the CLRC’s own mission statement, the Corps aims “to provide and maintain a readily available civilian corps of certified expertise in languages determined to be important to the security of the nation.”

The Corps is operated today under the auspices of the National Security Language Initiative, launched by the Bush Administration in 2004 as an endeavor to “dramatically increase the number of Americans learning critical-need foreign languages.” In this context, “critical need” refers to nine specific languages, including Arabic, Chinese, Russian, Hindi, Korean, Urdu, and Farsi. The NSLI is a joint effort of the U.S. Departments of State, Defense, and Education, as well as the Office of the Director of National Intelligence. The initiative comprises: 1) programs to encourage the learning and teaching of foreign languages; 2) scholarships, exchanges, and projects to promote international learning and exposure; 3) the creation of “feeder programs” to educational institutions, from kindergarten through university level; and finally 4) “strategic partnerships” between the national government and U.S. universities to promote instruction in “critical languages.” The CLRC itself falls under this last prong of the NSLI agenda. In fiscal year 2007, the Bush administration requested $114 million from Congress to fund this program.

The National Guard model
On the face of it, and as reflected in Ms. Greenfield’s email, this battalion of linguists would operate much like the National Guard, except that during national crises it would take command of language-related issues rather than public disorder. Its genesis actually precedes the NSLI itself, in a 2001 proposal to Congress by the National Security Education Program of the Department of Defense’s National Defense University. Following the government-funded initial feasibility study, NSEP’s Dr. Robert Slater, in his testimony of April 1, 2004, asked the House Permanent Select Committee on Intelligence to “consider how effective and beneficial it would have been for the nation if, on September 12, 2001, the Director of the FBI had been able to request an immediate call-up of a select number of Arabic specialists who were commissioned as part of a Civilian Linguist Reserve Corps.”

Dr. Slater’s words had their effect on Capitol Hill. When the feasibility study, operational plan, and implementation plan were completed, the time had come in mid-2006 to launch the CLRC’s pilot program. Over the following three years, the Corps aims to assemble a roster of no fewer than 1,000 linguists in the nine critical-need languages by the year 2010. Enrolled language professionals would be matched to the requirements stipulated by the more than 80 federal government departments, bureaus, and agencies that need their services. Reservists would have to be certified not only for language acumen, but also for national loyalty, in order to garner the necessary U.S. Government security clearance. With that imprimatur, members of the Corps would be available to take on sensitive defense-related work. Skills would have to be maintained and certified on a consistent basis. In exchange for the demanding level of paperwork, background clearance, and ongoing skills maintenance, candidates in the program would be treated as federal civilian employees, receiving pay, benefits, and other incentives when finally called into service.

Mobilizing the Corps
According to a press release during the feasibility stage, the Corps was touted as an opportunity for U.S. civilians to help out during national emergencies—hurricanes Katrina and Rita being recent examples of such situations. To be clear, the CLRC would not be a military reserve; its members would have the right to refuse deployment, but should they do so, they would be required to reimburse the government for their training and education. Despite the non-military nature of the Corps’ charter, however, there appears to be some evidence that the Department of Defense’s intentions for this program may include grooming these language specialists to work on more delicate security matters—such as, for example, interrogations of so-called “enemy combatants” in the war on terror. Whether this falls within the purview of a “volunteer” corps is a matter for further investigation.