Sunday, May 31, 2009

Stopping the Word Count Insanity

By Andrzej Zydron,
xml-Intl Ltd.

In the localization industry, there is a total lack of consistency among word or character counts, not only between rival products, but even among different versions of the same product. The same can be said for word processing software: word and character counts differ among vendors and versions. An additional problem is that none of this software provides any proper verifiable specification as to how the actual metrics are determined. You have to accept them as they are.

This is effectively the same situation that existed for weights and measures before the French Revolution established a sane and uniform system that everyone could agree upon, one that we still use today (with minor exceptions).

Trying to establish a measure for the size of a given localization task poses a real problem for the professional who is trying to calculate a price. The differences in word and character counts among different translation or word processing tools can be as much as 20 percent. And such a gap can mean the difference between profitability and loss.

Realizing that this problem needed to be addressed by an independent industry body, LISA OSCAR undertook the task, in 2004, of establishing a standard that everyone can agree on and that can be independently verified.

Nearly three years later, we finally have a far-reaching and thoroughly reviewed approach to this problem. The core of the new standard comes under the umbrella concept of Global Information Management Metrics Exchange, or GMX for short.

We all know that word and character counts are not the only measure of a given localization task. Thus, GMX comprises three standards:

* GMX-V (for volume)
* GMX-Q (for quality)
* GMX-C (for complexity)

GMX-V is the first of the three standards to be completed. Work will commence in 2007 on GMX-Q and GMX-C. Quality (GMX-Q) will deal with the level of quality required for a task. For example, the quality required for the translation of a legal document is much higher than that for technical documentation with a relatively small audience. Complexity (GMX-C) will take into consideration the source and format of the original document and its subject matter. For example, a document dealing with a tight, highly specialized domain is far more complex to translate than user instructions for a simple consumer device.

The GMX family of standards relies on an XML vocabulary for the exchange of metric data. Using the three standards together, it will be possible to have a uniform measure for defining the specific aspects of a localization task, to the point where one can completely automate all the pricing aspects of the task and exchange this data electronically.

GMX-V

GMX-V is designed to fulfill two primary roles:

* Establish a verifiable way of calculating the primary word and character counts for a given electronic document.
* Establish a specific XML vocabulary that enables the automatic exchange of metric data

As with all good standards, GMX-V is itself based on other well established standards:

* Unicode 5.0 normalized form
* Unicode Technical Report 29 – Text Boundaries
* OASIS XML Localization Interchange File Format (XLIFF) 1.2
* LISA OSCAR Segmentation Rules Exchange (SRX) 2.0

WORDS AND CHARACTERS

GMX-V mandates both word and character counts. Character counts convey the most precise definition of a localization task, whereas word counts are the most commonly used metric in the industry.

OTHER METRICS

The XML exchange notation of GMX-V allows for the exchange of all metrics relating to a given localization task, such as page counts, file counts, screen shot counts, etc.

CANONICAL FORM

One of the main problems with calculating word and character counts is the sheer range of differing proprietary file formats. Trying to establish a standard that addresses all formats is impossible. GMX-V required a canonical form that effectively levels the playing field. Such a common format is available through the OASIS XLIFF standard, which is now supported by all of the localization tool providers.

Within XLIFF, inline codes are interpreted as inline XML elements. The inline elements are not included in the word and character counts, but form a separate inline element count of their own. The frequency of inline elements can have an impact on the translation workload, so a separate count is useful when sizing a job. Punctuation and white space characters are also featured as additional categories.
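
To make the treatment of inline codes concrete, here is a minimal sketch that pulls the translatable text out of an XLIFF 1.2 source element and tallies the inline codes separately, using only Python's standard library. The split between "code" elements and "wrapper" elements below is a simplification for illustration, not the normative GMX-V counting algorithm.

```python
import xml.etree.ElementTree as ET

NS = "urn:oasis:names:tc:xliff:document:1.2"
# XLIFF 1.2 inline elements whose content is native formatting code, not text.
CODE_ELEMS = {f"{{{NS}}}{t}" for t in ("ph", "x", "bpt", "ept", "it", "bx", "ex")}
# Inline elements that merely wrap translatable text (e.g. a bold span).
WRAPPERS = {f"{{{NS}}}{t}" for t in ("g", "mrk")}

def extract(elem):
    """Return (translatable_text, inline_code_count) for an XLIFF <source>."""
    text, codes = elem.text or "", 0
    for child in elem:
        if child.tag in CODE_ELEMS:
            codes += 1                      # tally the code, skip its content
        elif child.tag in WRAPPERS:
            inner, inner_codes = extract(child)
            text += inner                   # wrapped text still counts as text
            codes += inner_codes + 1
        text += child.tail or ""            # text after the element is translatable
    return text, codes

src = ET.fromstring(
    f'<source xmlns="{NS}">Press <g id="1">Enter</g> to continue<x id="2"/>.</source>')
print(extract(src))   # ('Press Enter to continue.', 2)
```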

GMX-V addresses all issues related to counting words and characters in the XLIFF canonical format. Since the sentence is the commonly accepted atomic unit of translation, GMX-V proposes sentence-level granularity for counting purposes within XLIFF.
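
Purely to illustrate what sentence-level granularity means in practice, the fragment below breaks a text unit into sentence-sized segments with a crude regular expression. It is not an SRX implementation: real SRX rules are exchanged as XML and handle abbreviations, ellipses and language-specific exceptions.

```python
import re

# Crude stand-in for SRX segmentation: break after ., ! or ? when followed by
# white space and an uppercase letter.
SENT_BREAK = re.compile(r"(?<=[.!?])\s+(?=[A-Z])")

def segments(text):
    return [s for s in SENT_BREAK.split(text.strip()) if s]

print(segments("Click Save. Your file is stored. Continue?"))
# ['Click Save.', 'Your file is stored.', 'Continue?']
```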

GMX-V does not preclude producing metrics directly from non-XLIFF files, as long as the format for counting is based on the XLIFF canonical form for each text unit being counted. This can be done dynamically on the fly, and it requires an audit file for verification purposes.

WORDS

GMX-V uses “Unicode Technical Report 29 – Text Boundaries” (UAX #29) to define words and characters. This provides a clear and unambiguous definition of word and grapheme (character) boundaries.
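
As a rough illustration only, the sketch below counts words and characters in a plain text unit. It approximates TR29 word boundaries with Unicode-aware alphanumeric runs; a conformant implementation would use a full UAX #29 segmenter such as ICU's word break iterator.

```python
import re
import unicodedata

# Rough stand-in for UAX #29 word segmentation: a "word" is a maximal run of
# Unicode letters/digits, optionally joined by internal apostrophes or hyphens.
WORD_RE = re.compile(r"[^\W_]+(?:['\u2019-][^\W_]+)*")

def gmx_style_counts(text):
    """Return simple GMX-V-style totals for one text unit (illustrative only)."""
    text = unicodedata.normalize("NFC", text)    # counts assume normalized Unicode
    words = WORD_RE.findall(text)
    word_chars = sum(len(w) for w in words)
    white = sum(1 for ch in text if ch.isspace())
    return {
        "words": len(words),
        "characters": word_chars,                # characters inside words only
        "white_space": white,
        "punctuation": len(text) - word_chars - white,
    }

print(gmx_style_counts("The quick brown fox doesn't jump."))
# {'words': 6, 'characters': 27, 'white_space': 5, 'punctuation': 1}
```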

LOGOGRAPHIC SCRIPTS

Word counts have little relevance for Chinese, Japanese, Korean (CJK) and Thai source text. For these languages, GMX-V recommends using only character counts.

There is a proposal before ISO TC 37, submitted by Professor Sun Maosong, relating to the automatic identification of word boundaries for CJK languages. Should this recommendation become a standard, GMX-V should reference it for the provision of CJK word counts.

QUANTITATIVE AND QUALITATIVE MEASUREMENTS

GMX-V counts fall into two categories: how many (quantitative) and what type (qualitative). The primary count is unqualified: how many characters and words are in the file? This is the minimal conformance level proposed for GMX-V.

A typical translatable document will contain a variety of text elements. Some of these elements will contain non-translatable text, some will have been matched from translation memory, and some will have been fuzzy matched by the customer. Therefore, it is important to be able to categorize the word and character counts according to type, in order to provide a figure in words and characters for a given localization task. GMX-V also provides an extension mechanism that enables user-defined categories.

COUNT CATEGORIES

Apart from the total-word-count and total-character-count values, GMX-V also includes the count categories listed below (a minimal accumulation sketch in code follows the list):

* In-context exact matches – An accumulation of the word and character count for text units that have been matched unambiguously with a prior translation and that require no translator input.

* Leveraged matches – An accumulation of the word and character count for text units that have been matched against a leveraged translation memory database.

* Repetition matches – An accumulation of the word count for repeating text units that have not been matched in any other form. Repetition matching is deemed to take precedence over fuzzy matching.

* Fuzzy matches – An accumulation of the word and character count for text units that have been fuzzy matched against a leveraged translation memory database.

* Alphanumeric-only text units – An accumulation of the word and character counts for text units that have been identified as containing only alphanumeric words.

* Numeric-only text units – An accumulation of the word and character counts for text units that have been identified as containing only numeric words.

* Punctuation characters – An accumulation of the punctuation characters.

* White Spaces – An accumulation of white space characters.

* Measurement-only – An accumulation of the word and character count from measurement-only text units.

* Other Non-translatable words – An accumulation of other non-translatable word and character counts.

* Automatically treatable text – A count of automatically treatable inline elements, such as date, time, measurements, or simple and complex numeric values.
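
A minimal sketch of how such categorized counts might be accumulated per text unit is shown below. The field names ("tm_score", "is_repetition" and so on), the 75 percent fuzzy threshold and the heuristics for numeric and alphanumeric units are illustrative assumptions, not part of GMX-V; only the precedence of repetition over fuzzy matching follows the list above.

```python
CATEGORIES = ("in_context_exact", "leveraged", "repetition", "fuzzy",
              "numeric_only", "alphanumeric_only", "untranslated")

def categorize(unit):
    """Assign one category to a text unit (order of checks is an assumption)."""
    words = unit["text"].split()
    if unit["in_context_exact"]:
        return "in_context_exact"
    if unit["tm_score"] == 100:
        return "leveraged"
    if unit["is_repetition"]:                     # repetition outranks fuzzy
        return "repetition"
    if unit["tm_score"] >= 75:                    # illustrative fuzzy threshold
        return "fuzzy"
    if words and all(w.isdigit() for w in words):
        return "numeric_only"
    if words and all(w.isalnum() and any(c.isdigit() for c in w) for w in words):
        return "alphanumeric_only"
    return "untranslated"

def accumulate(units):
    """Sum word and character counts per category over all text units."""
    totals = {cat: {"words": 0, "characters": 0} for cat in CATEGORIES}
    for unit in units:
        cat = categorize(unit)
        totals[cat]["words"] += unit["words"]
        totals[cat]["characters"] += unit["characters"]
    return totals
```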

VERIFIABILITY

Any measurement standard must have a reference implementation, as well as an authoritative body that tests and validates the measuring instruments. In the US, this is provided by the National Institute of Standards and Technology. In order to be successful, GMX-V must provide for a certification authority that will (1) maintain reference documents with known metrics and (2) provide an online facility to test given XLIFF documents. In this way, both customers and suppliers can be confident that GMX-V provides an unambiguous and reliable way of quantifying a localization or global-information-management task.

NON-VERIFIABLE METRICS AND EXCHANGE NOTATION

There are many instances where it is not possible to verify metrics data electronically, such as screen shot counts, page counts, etc. GMX-V allows for the annotation and exchange of all relevant metrics for a given localization task.

SUMMARY

GMX-V has been widely peer reviewed and was published for open public comment for eighteen months. Much valuable feedback has been submitted and incorporated into the standard. All major localization tool providers have been consulted to ensure that there are no obstacles to implementing it. GMX-V provides a specification that can be used by word processing vendors as well as localization tool suppliers: a consistent and unambiguous common standard for word and character counts.

Further details of GMX-V are available at the following URL: www.lisa.org/standards/gmx

ClientSide News Magazine - http://www.clientsidenews.com/


Saturday, May 30, 2009

GMS Spotlight. Staying ahead of the curve

By Eric Richard,

VP, Engineering,
Idiom Technologies, Inc.,
Waltham, Massachusetts, U.S.A.

www.idiominc.com

Working in the translation and localization industry is like constantly working in a pressure cooker. Customers want to get more content translated into more languages with higher quality on faster schedules. And, while the volume of content is scaling up, the costs of translating that content cannot scale up at the same rates.

What makes this problem even more challenging is that this isn’t a short term issue; the amount of content that is going to be translated is going to increase again next year and the year after that and the year after that, for the foreseeable future.

Because of this, translation providers are constantly under pressure to find ways of eking that next round of efficiency out of their processes and cost out of their suppliers to meet the never-ending demands for more, more, more.

The first year a customer asks for a rate cut, it might be possible to squeeze your suppliers to get a better rate from them. But, you can only go back to that well so often before there is nothing left to squeeze.

The next year, you might be able to squeeze some efficiency out of your internal operations. Maybe you can cut a corner here or there to stay ahead of the curve. But, again, there are only so many corners to cut before you are really hurting your ability to deliver quality results.

So, what happens when you run out of corners to cut and low-hanging fruit to pick? How do you deal with the never-ending demands to do more for less? How can you get a non-linear improvement in your efficiencies to help get ahead of the curve?

THE ANSWER IS TECHNOLOGY.

In the ’80s, the technology solution of choice was translation memory (TM). By deploying TM solutions, translators could reuse their previous work and suddenly process a higher volume of work than before.

Over the years, translation memory has spread throughout the entire localization supply chain. Translators and LSPs now use client-side TM in their translation workbenches to improve their efficiency. And more and more enterprises are realizing that if they own their own TM, they can cut down on their costs and increase the quality and consistency of their translations.

The great news in all of this is that efficiency across the board has increased.

The tough part is that most of the low-hanging fruit in terms of gaining efficiencies may already be behind some early adopter companies. The reason? TM-based solutions are becoming more and more ubiquitous throughout the translation and localization supply chain. That said, however, there are still many companies out there who are ready to drive even more efficiency from the supply chain and, in some cases, start looking for ways to increase top line revenue opportunities.

Once early leaders recognized the value of TM, the search was on for the next big technology solution that could help them stay ahead of the curve. And the solution came in the form of applying workflow to the localization process; by automating previously manual steps, companies could achieve major increases in productivity and quality. Steps previously performed by a human could be performed by machines, reducing the likelihood of errors and freeing up those people to work on the hard problems that computers can’t solve.

Companies who have deployed workflow solutions into their localization processes regularly see immediate improvements. This rarely means reducing staff. Instead, it often means pushing through more content into more languages faster than before with the same staff.

For many organizations that have not yet deployed workflow solutions, this is a great opportunity to improve their efficiencies. Like TM, however, workflow has already crossed the chasm and is moving into the mainstream. Large localization organizations have already deployed workflow solutions and many have even gone through second round refinements to their systems to get most of the big wins already.

For those customers who have already deployed a workflow solution, the real question is "What’s next?" What is the next generation solution that is going to help them deal with the increases in content and keep their advantage in the market?

It is my belief that the next big wave is going to come by combining together the previous two solutions – translation memory and workflow – with another emerging technology: machine translation (MT).

Creating an integrated solution that provides the benefits of both translation memory and machine translation in the context of a workflow solution will provide companies with the ability to make headway into the content stack and start translating more and more content that was previously not even considered for translation.

There are many models in which these technologies can be mixed together.

The simplest, and least disruptive, model is to flow machine translation results into the exact same process that is used today. The result is a process that has been dubbed "machine assisted human translation". The process starts just as it would today with the content being leveraged against a translation memory and resulting in a variety of different types of matches (exact, fuzzy, etc.). But, before providing these results to the translator, this new process takes the most expensive segments – those that do not have a suitable fuzzy match from TM – and runs those segments through machine translation. The end result is that there is never a segment that needs to be translated from scratch; the translator will always have content to start from.
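
The routing described here fits in a few lines. The sketch below is schematic, not a product API: tm_lookup and machine_translate stand in for whatever TM and MT services are actually deployed, and the 75 percent fuzzy threshold is an illustrative assumption.

```python
def pretranslate(segments, tm_lookup, machine_translate, fuzzy_floor=75):
    """Machine-assisted human translation: every segment reaches the translator
    with some starting point, never a blank line.

    tm_lookup(segment)         -> (match_text, score 0-100) or (None, 0)
    machine_translate(segment) -> raw MT output
    """
    prepared = []
    for seg in segments:
        match, score = tm_lookup(seg)
        if score == 100:
            origin = "tm-exact"              # reuse as-is; a reviewer may still check
        elif score >= fuzzy_floor:
            origin = "tm-fuzzy"              # translator edits the fuzzy match
        else:                                # the most expensive segments go to MT
            match, origin = machine_translate(seg), "mt"
        prepared.append({"source": seg, "draft": match, "origin": origin})
    return prepared
```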

Obviously the devil is in the details here, and the real success of this model will be tied directly to the quality of the results from machine translation. If the machine translation engine results can provide a good starting point for translation, this approach has the ability to increase the productivity of translators.

On the flip side, the most radical model would be to combine machine translation and translation memory together but without any human translator or reviewer involved. The key to this approach is to take a serious look at an issue that is traditionally treated as sacrosanct: translation quality.

"It is my belief that the next big wave is going to come by combining together the previous two solutions-translation memory and workflow-with another emerging technology: machine translation"

In traditional translation processes, quality is non-negotiable. It is simply a non-starter to talk about translating your website, product documentation, software UI, or marketing collateral in anything other than a high quality process.

However, does this same requirement hold true of all of the content that you want to translate? Are there specific types of content for which the quality level is slightly less critical?

Specifically, are there types of content you would not normally translate, but for which the value of having a usable translation is more valuable than having no translation? For example, there may be types of content for which time-to-market of a reasonable translation is more important than taking the time to produce a high quality translation.

For content that fits into these categories, you might consider an approach like the one described above to produce what Jaap van der Meer of TAUS calls "fully automatic useful translation (FAUT)."

It is absolutely critical to understand that this is not proposing that we replace humans with machines for translation. Instead, this is looking at how we can use technology to solve a problem that is too expensive to have humans even try to solve today; this is digging into the enormous mass of content that isn’t even considered for translation today because it would be cost prohibitive to do using traditional means.

The best part of combining machine translation and translation memory with workflow is that the workflow can be used to determine which content should use which processes. The traditional content for which high quality is imperative can go down one path while content that has other requirements can go down another path.

"Translation memory and workflow are by no means mainstream at this point"

You might think that this is science fiction or years from reality, but the visionary companies in the localization industry are already deploying solutions just like this to help them deal with their translation problems today. They see this approach as a fundamental part of how they will address the issue of the volume of content that needs to be translated.

This solution is in the midst of crossing the chasm from the early adopters to the mainstream market. While translation memory and workflow are by no means mainstream at this point, some of the early adopters of content globalization and localization technologies are already looking for the next advantage, a way to keep up with steadily increasing demands. Clearly, these companies should strongly consider integrating machine translation into the mix.

ABOUT IDIOM® TECHNOLOGIES, INC.

Idiom® Technologies is the leading independent supplier of SaaS and on-premise software solutions that enable our customers and partners to accelerate the translation and localization process so content rapidly reaches markets worldwide. Unlike other companies serving this market, Idiom offers freedom of choice by embracing relevant industry standards, supporting popular content lifecycle solutions and partnering with the industry’s leading language service providers.

As a result, WorldServer™ GMS solutions are fast becoming an industry standard, allowing customers to expand their international market reach while reducing costs and improving quality. WorldServer is used every day by organizations possessing many of the most recognizable global brands to more efficiently create and manage multilingual websites (e.g., AOL, eBay and Continental), localize software applications (e.g., Adobe, Beckman Coulter and Motorola) and streamline translation and localization of corporate and product documentation (e.g., Autodesk, Cisco and Business Objects).

Idiom is headquartered in Waltham, Massachusetts, with offices throughout North America and in Europe. WorldServer solutions are also available through the company’s Global Partner Network™. For more information, please visit www.idiominc.com.

ABOUT ERIC RICHARD - VP, ENGINEERING, IDIOM TECHNOLOGIES

Eric Richard joined Idiom from Chicago-based SPSS, where he served as Chief Architect. Previously, he wore several hats as co-founder, Vice President of Engineering, and Chief Technology Officer at NetGenesis (acquired by SPSS), where he directed the company's technology development.

In 2001, Eric was a finalist in the Ernst & Young New England Entrepreneur of the Year Awards. He is a graduate of the Massachusetts Institute of Technology.

ClientSide News Magazine - www.clientsidenews.com


Friday, May 29, 2009

The Need for Translation Services In a Global Economy

By Shohreh Fleming

The pace of economic migration in the global economy is quickening and the voice of the world is becoming more cosmopolitan. As businesses, corporations and government bodies expand, the requirement for them to communicate with their evolving populace in a meaningful way becomes all the more pertinent, not to mention challenging and exciting.

If a country is to prosper it needs to engage with its population. Where a system for dialogue exists to serve a multi-cultural society, there will also exist a harmony of expression and application.

The birth of the internet has made a seemingly endless stream of information available to anyone who has access to a computer with a web connection. The internet is also vital for business. It creates a new arena where companies are able to showcase their particular products and services in fresh and innovative ways to new and diverse audiences.

From launching an ad campaign in Polish to translating an e-mail from a client in Moscow, the challenge of utilising the internet to widen the appeal of your company must be met if growth and success are to form any part of an organisation's agenda.

The presence of translation service providers on the net is yet another boon to the global economy, giving voice to ideas, plans and proposals the world over and providing a much-needed platform for far-reaching and meaningful communication with the rest of the planet. They aim to create a bridge over which ideas can cross without obstruction, paving the way for free, open and creative communication without boundaries or obstacles.

The advantages of language services for the broader global economy are perhaps not immediately apparent. But if a company, whether large or small, is to engage with its public in a meaningful way, it needs to approach them on their own terms, not marginalise them and thereby reduce their presence in the world economy.

Companies express their cultural sensitivity by providing their diversifying customer base with materials tailored to suit their own languages. Indeed, for a company not to offer this kind of information would be short-sighted and not make good business sense.

The aim of any company or corporation is to offer the same high quality product or service to its expanding customer base. If that base consists largely of individuals from many different countries, then a solution needs to be found.

For example, a company sees a gap in a potential market for a new or existing product; the problem presented to it is simple: extending the message that the company wishes to express in a culturally sensitive and appropriate manner.

It would not simply be enough to translate an ad campaign into Albanian, Bambara or Chechen without an understanding of the kind of world your message is going to be heard in. Enter the translation services provider.

With the advent of the global economy the role of the translator is an almost indispensable one. The need for businesses to communicate with their client base means that there will always be a requirement for a translator.

This means a lot of work for professional language services: a company setting up a South-East Asian headquarters would need to overhaul the bulk of its commercial as well as training materials without losing any of its corporate timbre. Such an undertaking is costly and important to execute correctly from beginning to end.

Simply translating material would not be sufficient; a translator would need to know what kind of message the company is anxious to convey to their new client base and set the tone for the promotion or corporate identity. Certain symbols or ideas that are seen as the norm to one country may be highly offensive to another. In this manner, cultural sensitivity is extremely important.

So, the role of a translation services provider is not simply to transpose a set of texts or materials from one language to another, but also to engage with the culture that speaks that language. In partnership with the business or government body, this creates an invaluable link to the population, so that the organisation may speak openly and clearly to all.

About the author:

Shohreh Fleming is CEO and co-founder of Prestige Network Ltd., and Translation Services-UK.com.


Thursday, May 28, 2009

At Arm’s Length or Close to the Vest? The Optimal Relationship between Clients and Vendors

By Anil Singh-Molares,
EchoMundi, LLC,
Bellevue, WA, U.S.A.

Anil[at]Echomundi.com
http://www.echomundi.com/

The relationships between vendors and clients go through their ebbs and flows (more insourcing, followed by more outsourcing, followed by…). As predictable as the swings of a pendulum, all of us - clients and vendors - go through our normal gyrations back and forth. And it is all in an attempt to find that elusive, but allegedly perfect, middle ground - but where is it? And beyond the question of where to place work (inside or outside), the question is more about the right tenor of vendor-client relationships--at arm’s length or close to the vest? The answer, I will argue, is both, in the right proportion.

While running vendor relations at Microsoft in the mid-1990s I developed that company’s "strategic localization partner" program. Through this program, many of the constituent companies of what would later become BGS, the Mendez group, and Lionbridge were developed with significant input from Microsoft: Opera, Translingua, Meta, Gecap etc… With only one notable exception, all of these companies were very successful in the nineties. The hallmarks of the "strategic partner" program that we established were designed to lower the barriers between vendor and clients by emphasizing common teams and objectives, and an approach of "if they succeed, we win" rather than "if we make the vendors fail, we win (as our jobs will be more secure)." We emphasized very tight communication links, virtual teams, frequent trips and training to each other’s sites, as well as bonuses and other incentives, such as guaranteed profitability in some cases. The proverbial "us and them" did not exist - rather we all belonged to the same team.

Did it work? Yes and No. Some Microsoft divisions embraced the concept, some rejected it. But the concept of vendors as extensions of client teams (rather than simple providers to them) did begin to take hold. And this approach yielded many notable achievements, particularly in the consumer space, for instance with the creation of Microsoft’s encyclopedia, where we had dedicated vendor teams worldwide for a period of 7 years. In this context, it is noteworthy that in those instances where we pushed the "close to the vest" concept, both clients and vendors achieved their common objectives: good quality at reasonable cost for the client, and increased profitability for the vendors. Similarly, instances where the "at arm’s length" concept was used invariably resulted in higher costs and lower quality for the client and significantly reduced profitability for the vendors. In addition, the "arm’s length" approach also produced considerable churn in the vendor base of those groups using this approach, as vendors left in frustration or were ousted in favour of the "next best thing."

Where are we today?

Before joining Microsoft in 1991 I ran a translation company in the Boston area. Now at the helm of EchoMundi LLC, an international services company, I find the contrast between my experiences as a vendor and as a client instructive in answering the question of how close vendors and clients have become, or should become:

What has changed:

* The industry is far more mature and professional. Localization as a discipline, rising wages and respect for language specialists, and growing sophistication in tools and approach are all readily apparent. The process has been streamlined and codified to a large extent. Various CMS, TM and project management tools have also helped reduce costs and increase consistency.

* As a rule, there also appears to be more frequent contact between clients and vendors, more training sessions, conferences, meetings, trips etc.

* However, many types of interaction between clients and vendors now seem driven principally by increasing efforts to "measure and quantify" quality, productivity, etc. In this context, the notion of "service" is now largely defined as success in meeting the client’s metrics on a job-to-job basis (as indeed we are all measured one job at a time), and not as much on creative problem solving, flexibility, adaptability, transparency and innovation.

What remains the same:

There are certain limitations inherent in the client-vendor relationship that cannot be overcome. That is, when one party pays the bills it has the right to set expectations of service as it deems fit. Conversely, once they have accepted the terms and conditions of a particular client, service companies have an obligation to respond to their client’s requirements to the best of their ability. This fundamental axiom is unchangeable.

And now to return to our basic question: At Arm’s Length or Close to the Vest?

The "Arm’s Length" approach in its ideal application has obvious benefits: each party treats the other as a professional entity, there are clear expectations and deliverables, an optimized use of technology, and tightly controlled costs and profit margins. The downside, however, is glaring: if you as the client don’t develop strong and lasting relationships with your vendors, they won’t be your vendors for long (either because you will tire of them or they will tire of you). By maintaining too much distance from your vendors, they are never motivated to really integrate with your approach. In short, they can become clinical and dispassionate (if not unmotivated and indifferent). One additional drawback of this approach is that it also easily lends itself to bureaucracy run amok, where it can become more important to "follow the rules" than to "get the job done" - surely self-defeating.

The "Close to the Vest" approach in its ideal form seeks to eliminate the barriers between clients and vendors. Through frequent interaction, joint training, and team building the vendor becomes an extension of the client’s team. Both parties share the pains and rewards of individual projects. Both put themselves on the line to a greater degree in innovative problem solving and troubleshooting. And by building relationships for the long haul, the investments that each party makes in the other are more resilient. The downside to this approach, however, can be possible "subjectivity" in measuring work and an unwillingness of one party to honestly hold the other party accountable when mistakes occur.

The Ideal

Really what we all (clients and vendors) want is a combination of both the "Arm’s Length" and "Close to the Vest" approaches - that is, deliverables and costs that can easily, objectively and professionally be measured on the one hand, combined with cordial personal relationships, which are essential for effective problem-solving, on the other. This "middle ground" will vary according to the individual requirements of clients and the capacity of the vendors that they select to meet those requirements, but it is clearly a combination of the benefits of both approaches.

Clients and Vendors that hew to this joint approach will find increasing satisfaction in their relationships on all levels: quality, cost, profitability and service. In this context all of us should strive to be "understanding professionals" rather than exclusively one thing or the other.

Born in Holland and raised in Europe and the United States, Anil Singh-Molares is a global citizen, a global entrepreneur and businessperson. From 1991-2003, Anil worked as a Senior Director at Microsoft Corporation, where he implemented Microsoft’s "strategic localization partner" program. Since leaving the software giant, he founded and serves as CEO of EchoMundi LLC, a rapidly growing international services firm that helps corporations do business abroad. He can be reached at Anil@echomundi.com.

Copyright © 2006 Anil Singh-Molares. All rights reserved.

This article was originally published in GALAxy newsletter:
www.gala-global.org/GALAxy-newsletter.html


Wednesday, May 27, 2009

Translations.com – Alchemy Merger Story

By Tony O’Dowd,
CEO and President - Alchemy Software Development

By Phil Shawe,
Co-founder of TransPerfect,
President and CEO of Translations.com

By Keith Becoski, ClientSide News

www.translations.com

CSN: Tony, I saw it mentioned that the purchase process for Alchemy was a competitive situation and that Translations.com was the high bidder. Was there anything else driving the Board’s decision besides maximizing their investment?

TONY: There were a number of factors that drove this decision from our side. For starters, Translations.com is one of a few localization service providers that invest heavily in technology solutions. It was also important to us that we brought something complementary to the table. While Alchemy is a market leader in delivering next-generation TM technology to over 80% of the world’s leading technology companies, Translations.com boasts one of the most widely-adopted workflow platforms in GlobalLink. Since there’s little cross-over in functionality, integrating these two technologies will be rapid from a development perspective, yet powerful for our combined clients. Lastly, Translations.com’s track record of executing successful industry mergers, retaining virtually 100% of staff and clients, and supporting incoming entrepreneurs as they continue to operate their divisions autonomously, also helped us to solidify our decision to merge.

CSN: Phil, what was it about Alchemy that made Translations.com stretch a bit financially to make this merger a reality?

PHIL: First and foremost, our mergers are about the people. With Tony, co-founder Enda McDonnell, and the rest of the Alchemy team, we saw a talented group of localization technology veterans who shared our focus on innovation, growth, and client satisfaction. Beyond the wealth of technology talent, Alchemy’s proven and profitable business model is unique among the localization industry’s technology providers. While Alchemy’s leadership in the Visual Localization Tool market is well-established, it gave us extra comfort that we’ve relied on Alchemy technology internally for over five years and have first-hand experience with how effectively CATALYST streamlines the localization process. Lastly, it’s not only Alchemy’s past achievements that impressed us, but also its prospects for the future. We’re very excited to be building on Alchemy’s success and investing in future Alchemy software product offerings.

CSN: Tony, you’ve stated that you intend to stay on with the business post-close. As a shareholder of Alchemy, who has now seen a return from that investment, why stay aboard?

TONY: I’m way too young to think about simply hanging up my hat. What would I do? So the motivation for me in doing this merger was more about opportunity than it was about exiting and doing something else. While I may not have always enjoyed all of the administrative tasks associated with running a company, I have been in the localization industry for 22 years and I’ve always enjoyed it immensely. So for me, the decision to stay on and to be part of driving the growth and development of one of the world’s premier players in this industry is an easy one. And as Phil said, it’s all about the people. My due diligence about the people I’d be working with, as well as the spirit of the merger discussions themselves, led me to believe that this is an interesting and talented group of people for me to join up with.

CSN: And why do you feel this move is right for Alchemy clients?

TONY: Again, Translations.com and Alchemy can combine our R&D spend and deliver more innovative technologies for our clients. Translations.com is a profitable, private company with a very healthy balance sheet. In other words, our clients can be confident that when they are making an investment in technology, they are doing so with a partner who has consistently been financially stable. Not motivated by meeting quarterly numbers for the public markets, Translations.com has the advantage of being long-term focused and, as part of our transaction, has pledged long-term investment in Alchemy R&D. Additionally, the combination of our technology with the GlobalLink GMS product suite will enable our clients to achieve greater levels of efficiency and scalability in their localization processes. I also believe Translations.com’s post-merger history of retaining employees, management, and clients makes this the right move for our clients.

CSN: OK, but you’ve failed to touch on the issue on everyone’s mind, what about the loss of independence?

TONY: Our clients value innovation more than independence. Alchemy will operate as an independent division within Translations.com and will continue to develop, distribute, and support our own products. Additionally, the senior management team, such as me and Enda McDonnell, will remain in our existing roles, continuing to exercise our leadership and vision over Alchemy CATALYST and Alchemy Language Exchange. Unlike recent localization industry acquisitions which resulted in large-scale layoffs, we shall be investing in and expanding the development efforts at Alchemy and launching new and exciting technologies later in the year.

CSN: Generally, though, the technology in this industry does seem to be getting gobbled up by the service providers. Who benefits from this?

TONY: Speaking about the Translations.com/Alchemy deal, our clients are the ultimate beneficiaries of this merger. Technology is playing an increasingly important role in the optimization and efficiency of our clients’ localization processes. Even small and medium-sized companies see growth opportunities in overseas markets. To take advantage of these growth opportunities they need to localize quickly, cost-effectively, and with high quality. Technology will drive these efficiencies, making localization more accessible to a wider range of companies and enterprises.

Combining these technology advantages with a full service offering will suit some of our clients. However, we are mindful of the fact that choice is important to many of our clients and that is why Alchemy will remain a fully independent division within Translations.com and our tools will continue to be service provider agnostic.

Because we don’t have overlapping technology, our clients do not need to be concerned about which product lines will be supported in the future, and which will be killed off. Stability, security and a defined roadmap for future development of our combined software offerings will also work to the benefit of our clients.

CSN: What do the language service providers need to know about this and what do the end clients need to know?

TONY: Probably both groups need to know the same things. Alchemy has developed CATALYST into the optimization tool of choice for the localization process, and this development has served all who manage localization, whether they are an LSP or an end client. So what all localization stakeholders need to know is that Alchemy and Translations.com intend to work together collectively to continue investing in and driving the evolution of CATALYST and Alchemy Language Exchange, which are not captive and are used in conjunction with other LSPs’ services.

PHIL: We also feel that increased competition in the localization technology sector will drive more innovation, and this transaction is likely to result in increased competition.

CSN: Phil, how will this merger differ from the SDL/Idiom merger, which is leaving a perceived lack of independence and choice?

PHIL: Translations.com has a reputation of merging with companies and retaining virtually 100% of the entrepreneurial skills and enthusiasm of the existing teams and management. This has proven to be a very successful strategy. While I don’t know that it’s accurate to say that this approach to M&A is unique, it certainly does differ from the approach of SDL, the obvious comparison here given their recent and past technology acquisitions. In fairness, they are a public company with a requirement to operate and to consolidate acquired businesses in a way that makes sense to investors. As a private company, Translations.com is free to take a more long-term approach, and we see the value in supporting entrepreneurs and their businesses.

Furthermore, the Alchemy/Translations.com merger differs from – again the natural, but not entirely analogous, comparison – SDL/Idiom, in that this merger has not manifested a direct contradiction of a promise. Many clients and partners asked Idiom directly if they intended to sell the company to a service provider. Idiom sold their solutions on a promise to remain independent. Alchemy made no such promise because, without the same access to confidential partner information that is inherent in the way WorldServer functions, there was never any reason for CATALYST to be sold with a pledge of independence.

CSN: How has the recent SDL/Idiom merger affected Translations.com?

PHIL: As far as companies performing services through an Idiom platform, Translations.com is probably among the largest in the world. However, you never saw a public partnership announced. One reason is that Idiom competes directly with our GlobalLink suite of products. However, another reason was that we felt we couldn’t predict the future actions of venture capitalists that controlled Idiom, and envisioned the potential of them selling out to a competing LSP.

Now, of course we’re concerned that SDL has, in effect, purchased our pricing information and other knowledge we once considered confidential, because it is stored on Idiom servers. As there is nothing legally preventing SDL from making use of this information to compete for service revenues, we expect them to cross-sell aggressively into those accounts.

When you step back and think about it, Idiom was losing over $5 million a year and SDL has competing and overlapping technology, so why buy the company for over $20 million? It may be that the real value in the deal for SDL shareholders is simply the future ability to cross-sell more services through 1) the built-in dependency and high switching costs associated with being a technology vendor and 2) the access to once-confidential proprietary competitor information.

Note that there is nothing “wrong” with what SDL is doing by pursuing this strategy. Quite the contrary, having spent $20+ million of their shareholders’ money, they now have a fiduciary obligation to maximize that value, and make the most of their new-found client relationships and competitor information.

After the Idiom deal, we feel how we’ve always felt about SDL: their technology is primarily about three things: a Trojan Horse with which to establish difficult-to-break relationships to better sell services, an image necessary to fetch them a higher valuation in the public markets (i.e. a software vs. services valuation), and a vehicle that they’ve quite cleverly used to get competitors to help finance their R&D and operations.

In summary, we respected SDL as a tough competitor before they bought Idiom, and we expect them to continue to be a tough competitor. As always, we look forward to the challenge of going head-to-head with them in the marketplace, on both services and technology.

CSN: So Tony, what’s the “real story” in terms of value to the market place? How and why will this be a positive alliance for the industry?

TONY: The ‘real story’ is about offering choice. Our clients want to manage their localization content more efficiently across multiple localization service providers. A vendor-agnostic solution, using web-based architecture and built on open standards, that offers enterprise-level scalability is key to their continued growth. This is where Translations.com and Alchemy have invested heavily over the past few years.

CSN: Phil, what does this merger mean in terms of your competitive position?

PHIL: Over the past several years, Translations.com has been fortunate enough to be one of the fastest growing players in the localization industry. Enterprise localization clients are increasingly aware of the value we bring to the table. With the addition of a market leader such as Alchemy, we expect to see this trend continue.

ClientSide News Magazine - www.clientsidenews.com


Tuesday, May 26, 2009

The evolution of localization tools

By Michael Trent,
Lingobit Technologies

Some time ago, only a few people knew about software localization tools, but now such tools have become an essential part of the software development process. This article describes the transformation of localization software from simple tools developed in-house to powerful software suites that support multiple platforms and languages, provide advanced functionality and make software localization affordable for any company.

First steps

Localization revolves around combining language and technology to produce a product that can cross language and cultural barriers. Initially, software companies treated localization as an afterthought: once the original application had been released in English and the developers had gone on vacation, translators were put to work to produce German, French, Chinese and other versions. At first, translators simply changed text strings directly in the source code, which was a time-consuming and error-prone process. It required translators to understand the programming language and to review huge amounts of source code just to translate a few lines of text.

Locating translatable text embedded in software source code was very difficult, and source-code localization made code updates and version management a nightmare. As a result, localization at that time was very expensive in both time and money. It often produced unsatisfactory results and introduced new bugs into the software.
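
The contrast is easy to see in code. In the early model the translatable string lives in the source, so every new language means touching and re-testing the program; once strings are externalized, only resource files change. The sketch below uses Python's standard gettext module purely to illustrate that separation (the "myapp" domain and "locale" directory are placeholders); it is not how the early in-house tools described here worked.

```python
import gettext

# Early model: the English text is hard-coded, so producing a German build
# means editing the source itself.
def greet_hardcoded():
    print("Welcome! Your file has been saved.")

# Externalized model: the code refers to a message id; translations live in
# separate catalogs (.po/.mo files), one per language, outside the code.
t = gettext.translation("myapp", localedir="locale", languages=["de"],
                        fallback=True)     # falls back to English if no catalog
_ = t.gettext

def greet_localized():
    print(_("Welcome! Your file has been saved."))

greet_localized()
```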

The first localization tools that appeared on the market were no more than simple utilities that simplified parts of this process by locating text strings and managing code updates. They were limited in functionality, were mostly developed for in-house use and, in most cases, targeted one particular product. For all these limitations, even those first localization tools allowed developers to reduce localization costs significantly.

The shift of computer software away from centralized corporate and academic environments to users' desks called for a shift in product features and functionality. Desktop computer users needed software that would enable them to do their work more efficiently, and that software also had to be in their local language. Releasing software in multiple languages became necessary not only for big software developers such as Microsoft or IBM, but also for smaller software companies. This triggered the development of the first commercial localization tools.

The first commercial localization tools used binary localization of executable files rather than localization of the source code, because this approach separated localization from software development. Translators were no longer required to know programming languages, and many technical complexities were hidden from them. Binary localization led to a considerable reduction in the number of errors caused by localization and made it possible to easily synchronize translations when software updates were released.

Localization vs. CAT tools

Companies that developed Computer Aided Translation (CAT) tools also tried to enter the software localization market, but most of them failed because their tools were designed for a different purpose. In a CAT system, the output is the translated text, whereas for a localization tool translation is only an intermediate stage. The objective of localization is to adapt the product for local markets. This means not only translating text, but also resizing dialogs, changing images and handling many other details. To do so, localization engineers get a copy of the software, extract the translatable text from multiple files, do the translation, merge the translated files back into the software build and produce localized copies of the application.

One of the major strengths of CAT systems is translation memory, but it is only partially useful in software localization, for several reasons. A translation memory database from one product cannot be reused in other products and, what is more, even within the same application the same text is often translated differently.

Riding the dot-com wave, localization tools evolved, and by the end of the 1990s they had taken over and absorbed CAT tool functionality. Today, traditional CAT tools no longer play a significant role in the localization industry.

Product-centric localization

Products developed today utilize multiple technologies and combine managed and unmanaged code, web components and even code targeting different operating systems. In large projects there are hundreds of files that require localization, and old tools that use file-by-file localization and target specific platforms are no longer up to the job. A new crop of software localization products adds support for folder-based localization and multiple development platforms, and unifies all localization efforts by supporting the translation of help files and online documentation.

Folder-based localization tools

When a project has hundreds of localizable files in different directories, it becomes very difficult to manage without folder-based localization. Tools that support folder-based localization automatically track new, removed and changed files, synchronize translations between files and keep the project structure intact.

When multiple people work on the development of a large application, it's difficult for localization engineers to track which files with localizable text have been added to or removed from the project. This used to be time-consuming and error-prone work, but tools with support for folder-based localization automate the process by detecting new files, determining whether they contain text for translation and then adding them to the project.
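
A minimal sketch of what that tracking involves: walk the project tree, fingerprint each localizable file and compare the result with the manifest saved on the previous run. Real tools do considerably more (format filtering, text extraction, preserving project structure), and the file extensions and manifest name below are placeholders.

```python
import hashlib
import json
from pathlib import Path

LOCALIZABLE = {".resx", ".rc", ".properties", ".xml"}   # placeholder extensions

def scan(root):
    """Map relative path -> content hash for every localizable file under root."""
    files = {}
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix.lower() in LOCALIZABLE:
            digest = hashlib.sha1(path.read_bytes()).hexdigest()
            files[str(path.relative_to(root))] = digest
    return files

def diff_against_manifest(root, manifest="l10n_manifest.json"):
    """Report new, removed and changed localizable files since the last scan."""
    current = scan(root)
    try:
        previous = json.loads(Path(manifest).read_text())
    except FileNotFoundError:
        previous = {}
    new = sorted(set(current) - set(previous))
    removed = sorted(set(previous) - set(current))
    changed = sorted(p for p in current.keys() & previous.keys()
                     if current[p] != previous[p])
    Path(manifest).write_text(json.dumps(current, indent=2))
    return new, removed, changed

print(diff_against_manifest("."))
```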

Support for multiple formats

One of the traits that characterize the localization industry today is support for multiple development platforms. In the past, most applications were developed on a single platform, but over time products have become more complex. Many products today contain both legacy code and new code in different programming languages. What's more, as more products move onto the web, with its multitude of languages and frameworks, support for different platforms becomes even more important.

Localization on mobile devices

There are more mobile devices than computers in the world, and many products have a mobile version. While most people who work on a computer have at least a basic knowledge of English, the majority of mobile phone users do not speak English at all. Support for the .NET Compact Framework, Windows CE and Java ME is standard in modern localization tools.

Help and documentation

Some software localization products have added support for the localization of documentation, websites and help. While CAT tools are better suited for translating large amounts of running text, localization tools are better at translating text in structured form. What's more, using localization tools for help and documentation allows companies to standardize on one product and lower support costs.

Conclusion

Over a short period, localization tools have come a long way, from simple utilities for in-house localization teams to complex product-centric systems providing tools for the entire localization process. Technologies such as binary localization and translation memory have dramatically increased localization efficiency. What's more, modern localization tools now compete with CAT systems in the documentation and web content translation space, offering the developer a unified environment for localizing an entire software product.

ClientSide News Magazine - www.clientsidenews.com


Monday, May 25, 2009

Translation Buyers' Views on Technology Independence

By Ben Sargent,
Senior Analyst,
Common Sense Advisory

In the latter half of 2008, Common Sense Advisory asked buyers of translation services for their views on technology independence among their software and language vendors. Over half of the 30-plus respondents hailed from North America; 35 percent were from Europe; the balance were scattered across that amorphous continent known as "Rest of World."

Our first question asked, "How important is it for your technology vendor to be a different company than the firm that provides translation services?" About 60 percent said technology independence was "somewhat important" or "very important."



Stated more bluntly, we were asking buyers what they felt about using technology tied to a specific language services provider (LSP), such as Lionbridge, Sajan, SDL, or Translations.com. Given the high proportion of "very important" responses coupled with zero buyers stating "very unimportant," the balance of opinion among buyers tilts radically toward concern on this topic, even though 39 percent said it was "not important."

Next, we asked buyers if a guarantee of independence from the vendor would influence their purchasing decision. Over 80 percent indicated that it would.



Expect non-service vendors to take advantage of this buying criterion, in both marketing platitudes and in the more hand-to-hand combat of direct selling. However, not all buyers will take such promises at face value. Last year, two notable independent software companies, Alchemy and Idiom, were swallowed by large LSPs (Translations.com and SDL, respectively). When we asked how recent mergers and acquisitions (M&A) had affected their views regarding vendor independence, 60 percent of buyers told us industry consolidation had raised their skepticism about any vendor's ability to remain independent over time.

Other reactions included 25 percent who said they were pushed to explore internal development options; 35 percent who set off to look for new independent vendors; and 45 percent for whom it triggered an exploration of open source solutions. Only 16 percent say the consolidation did not alter their views on vendor independence.



But apparently many companies did not need M&A activities to trigger their interest in open source. When we asked if it was likely their company would use open-source software for translation automation projects, nearly three out of four said they are "somewhat likely" or "very likely" to do so. This receptiveness could bode well for an open-source GlobalSight — if Welocalize succeeds in mobilizing a community and eliciting a sense of ownership beyond itself.



Fewer companies are developing their own solutions for translation automation. Half said they were not, 30 percent said they were, and 20 percent claimed to be thinking about it.


So, our survey comes down to two conflicting datapoints:

* Our research on translation management systems (TMS) shows that most high-scoring systems are offered by language service providers, not by independent software vendors (ISVs). Suppliers such as Lionbridge, Sajan, SDL, and Translations.com are not only LSPs, but leading proponents of TMS in the openly available enterprise or captive "house" categories (house systems are available only through service agreements with those LSPs).

* However, buyers unequivocally tell us they worry about vendor independence and that it affects buying decisions.

This cognitive dissonance explains the difficult selling environment that LSPs find themselves in when pushing their proprietary technology approaches. It also explains why unaffiliated software vendors have feet of clay when it comes to the question of independence. Across, Beetext, and Kilgray have no financial ties to LSPs — yet. Maybe these new players will be the ones who finally turn the corner and prove that ISVs can survive in this service-oriented marketplace. But over the last decade, LSPs have harvested pretty much every leading software vendor in the space — more than 10 companies in all. Common Sense Advisory anticipates that acquisition by an LSP is still the most likely "exit strategy" for any globalization software vendor (GSV) operating today.

Published - April 2009

ClientSide News Magazine - www.clientsidenews.com


Sunday, May 24, 2009

The translator’s point of view: goodbye quality, hello Quality!

By Estelle Renard

estelle [at] traducteurs-av . org

Languages & the Media 2008 - 7th International Conference on Language Transfer in Audiovisual Media

As presented by Estelle Renard on behalf of the ATAA

Last year, the sensation at the French box office was not a Hollywood blockbuster, but a small comedy about language differences and the prejudices and bonds they produce. Bienvenue chez les Ch'tis was a huge success, and over half the French population went to see it. This film, relying as it does on language and linguistic jokes, should have been lost in translation. It was not. Thanks to the competence of the English translator and the director's attention to the subtitling, the subtitles were so good that a Guardian journalist suggested that this tour de force deserved the creation of a whole new Oscar category for subtitlers. It is because it was so well translated that this film has had the chance of an international career.

If this story proves something, it is not the refinement of the French people's tastes, but the value of the work of audiovisual translators.

And indeed,

- it is not only that, without translation, an audiovisual product will not cross the borders of the country where it was created;

- nor that, without a good translation, the program will be aired but not appreciated as it should be, and sometimes not even understood;

- translation is even more than that: it adds value to what we call a “product”, if we want to use the language of business.

This story is also interesting, because the comedy of cultural differences and especially those embodied in language is the ultimate challenge for an audiovisual translator. It demonstrates that what we do is something that is, essentially, not quantifiable. This 'something' that cannot be quantified is also at the heart, the very core of the industry in which we work. Creativity and efficiency cannot be measured or quantified in industrial and business language.

So how can we evaluate something that is not quantifiable? This question seems relevant, but in our industry, it leads us down the wrong path. In this sector, all companies, whatever their size, boast about the high-quality translations they provide. At the same time, they boast that they can achieve that quality for a price that defies all odds, shrinking year after year. My question is: what is behind that boast? I would like to demonstrate how quality, as defined by the industry, always results in a cut in the rate paid to the translator. Why is this the case?

The key words of global translation companies are:

- Standardization / globalization

- Productivity

- Technology

Let us see how each of them works in regard to audiovisual translation and if they are a means to achieve efficiency. Can they achieve quality?

Standardization

The issue here is not technical standardization such as in file or video formats, which obviously aid the circulation of audiovisual programs. I am talking about the standardization of intellectual work.

The use of templates provides an eloquent example of the confusion between quality and cost cutting. The main (and only) advantage of a template is that spotting has to be done only once, no matter how many languages the program is translated into. When using a template, translators have to fit their subtitles into spotting that was designed for another language.

- English template: Bad Girl (8 characters)

- Polish translation: Niegrzeczna dziewczynka (22 characters)

In the example above, the Polish words take much longer to read than the English ones. With a template, that extra time is not available: the template cannot be changed. It is obviously a bad idea to provide the same template for languages that are so different. Quality spotting is adapted to each language, not the other way around. Templates are the exact opposite of what would ensure a smooth and enjoyable experience for the viewer.
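
To make the arithmetic behind this example concrete, here is a minimal sketch in Python. The reading speed of 15 characters per second and the timing are assumptions for illustration only; real subtitling norms vary by language, medium, and client, and no particular subtitling tool works exactly this way.

# A toy check of whether a subtitle fits the spotting inherited from a template.
# READING_SPEED_CPS is an assumed reading speed; the template is assumed to have
# been timed to the English line.
READING_SPEED_CPS = 15  # characters per second (assumption)

def seconds_needed(subtitle):
    """Minimum on-screen time a viewer needs to read the subtitle."""
    return len(subtitle) / READING_SPEED_CPS

def fits_spotting(subtitle, slot_seconds):
    """True if the subtitle can be read within the time slot fixed by the template."""
    return seconds_needed(subtitle) <= slot_seconds

slot = seconds_needed("Bad Girl")                        # the English line sets the slot
print(fits_spotting("Bad Girl", slot))                   # True
print(fits_spotting("Niegrzeczna dziewczynka", slot))    # False: needs roughly three times as long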

Therefore, standardization is a way to save money but not to produce a good translation. The only thing it can deliver is productivity.

What does productivity mean for a translator?

The translator is an individual, not a company. For him, there are no economies of scale. Higher volume does not mean higher profit: program for program, he will not make more profit translating ten films than translating just one. He earns the same for each film, and his margin does not improve the more films he translates.

Productivity has a meaning from an industrial point of view but not for the translator.

Perhaps technology can help the translator. What can it do for him?

Well, not much. Technology is a means, a tool. Subtitling software for instance is an excellent tool, but it is like a car: you can have the most technologically advanced car in the world but if you don't know where you're going, you will just go nowhere more quickly. It is true that software allows translators to work in more comfortable conditions, but it cannot help them to produce better translations.

Let us assume that technology allows us to work faster. It could then be argued that it helps the translator to do a better job: they are paid the same and work faster. This means they can reinvest the time gained in reviewing their translation many times. But the point is, for audiovisual translators, technology has always meant a dramatic drop in rates and in the time allocated for each job. In France, the rates are a third of what they were 10 years ago. Has any employee in any other sector seen their salary cut by 70% in ten years? If we don't react, the same will happen in dubbing, with the rapid growth of virtual dubbing software.

In this conference, we have seen many amazing machines and pieces of software, but I know of something even more amazing: the human brain. A machine transcodes; the brain of a translator takes a sentence in its context and transfers it to another language. Languages are not just words strung together: they are inextricably linked with a culture and are constantly evolving. They are the flesh of a civilization, and at the core of the very essence of humanity.

In a nutshell, standardization, globalization, productivity and blind trust in the wonders of technology are the criteria of the industry, but they cannot be applied to the work of the mind, and therefore not to translation.

If we are here today questioning whether or not quality can still be achieved, it is because of global companies such as SDI, Softitler and others and the blindness of networks regarding what are ultimately their own interests. The question of “quality” (with a small q) is the elegant screen behind which these global companies make big profits. Here, the issue is not that translation costs too much, it is how to make the most money out of it, providing the biggest possible profit for their shareholders. This may seem obvious but I strongly believe that we should not see this situation from their point of view. These companies are the cancer that is eating this industry alive. Why use such a shocking term? Because the way they run their business puts the whole industry in danger.

Quality cannot be achieved without a system of values. What is valued here? Not the viewers and certainly not the translators. Recently, SDI Media Group placed an advert inviting young translators to move to the Philippines for a year. There, the company would provide them with a computer, an internet connection and lots of paid-per-minute programs. Scuba diving lessons and weekend trips were also on the agenda, but not at the company's expense. They considered the opportunity so exciting that they did not think that stating the rates paid was necessary. It is an insight into the way these companies envision the trade of the audiovisual translator. Do they think it is a hobby?

These companies create an environment where the only way to compete is to pay the lowest rates, and where smaller companies eventually disappear. As a result, the subtitles are, for the most part, appalling. How is it possible to blame the translators? They simply deliver a quality that reflects the rate they are paid. “If you want to pay peanuts, hire monkeys,” says the proverb. This policy is hastening the end of the very business model they helped to create, because consumers also want to reduce their costs, or even not pay at all. And why should they? Why buy a DVD with a translation no better than a fansubbed version? It is so much easier to download it from home, for free.

What is to be done?

It seems obvious that we have to escape this business model, this vicious circle. The role of the translator has to be re-evaluated and recognized. He is the one who conveys and gives meaning to the whole process of language transfer in the media. It is imperative that he should have the right tools to work with. To do a good job, a competent and dedicated translator simply needs two things:

- time

- money

Time. It is the only thing that can allow a translator to go through all the steps that guarantee a good translation. One of them is proofreading, for instance by a fellow translator: through this crucial step, subtitles or dubbing can be considerably enhanced.

Money. Translators should always be paid by the subtitle or word. They do not make socks. They should not be paid by the kilogram or, in this case, the minute. It is not a mechanical process repeated again and again as if on a production line. Each sentence, each subtitle is different, is a new adventure. Being paid per subtitle or word is a way to have their work properly recognized and appreciated.

This is all wishful thinking of course. It will not happen like this.

Translators must take action to gain the self respect that the industry does not give them.

The first step is to say no.

Case study: SDI office in France in 2003.

There were 30 translators working full time. Not only for that office, but in that office: we knew each other. When we learned that SDI was going to cut our rates for the third time, all the translators working there agreed to leave the company. Overnight 28 out of the 30 translators were gone.

SDI was, at the time, my only client. I did not work for 4 months afterwards, but what I gained was priceless. I gained self respect, respect for my trade and respect for the viewers/consumers. Those who have done something like this just once in their lives know how good it feels. You can look at yourself in the mirror with a big smile on your face.

Of course, if one person says no, it does not mean much to a company. But if a lot of people say no, then it starts to be a problem.

So the second step is: unite!

ATAA (French Audiovisual Translators Association) was founded two years ago, in June 2006. We were able to create an initially small network that continues to grow today. The so-called individualism of the translator has proved to be a fiction.

We now have 160 members and a mailing list of more than 500 translators.

The first achievement of the ATAA was to share information: a tremendous amount of information is exchanged through our forum and during our meetings. This simple service has made a huge difference. Now we all know what is going on in other companies and how much other translators are paid, and we can organize ourselves and act accordingly.

We also meet a lot: we take every opportunity to organise meetings, and simply get to know each other. Because what we discovered was: it is a small step from meeting in the flesh, to having the guts to say no.

Beyond this national association, we are trying to organize ourselves internationally. Thanks to the great initiative taken by our Scandinavian colleagues, we started an International League of Subtitlers that continues to grow. This international network has allowed us to meet and to compare working conditions. In the not too distant future, we hope to take positive action together.

estelle [at] traducteurs-av . org

Published - April 2009


Saturday, May 23, 2009

Where Do Translators Fit into Machine Translation?

By Alex Gross

http://language.home.sprynet.com/
alexilen@sprynet.com

Original and Supplementary Questions
Submitted to the MT Summit III Conference,

Washington, 1991

Here are the original questions for this panel as submitted to the speakers:

1. At the last MT Summit, Martin Kay stated that there should be "greater attention to empirical studies of translation so that computational linguists will have a better idea of what really goes on in translation and develop tools that will be more useful for the end user." Does this mean that there has been insufficient input into MT processes by translators interested in MT? Does it mean that MT developers have failed to study what translating actually entails and how translators go about their task? If either of these is true, then to what extent and why? New answers and insights for the MT profession could arise from hearing what human translators with an interest in the development of MT have to say about these matters. It may well turn out that translators are the very people best qualified to determine what form their tools should take, since they are the end users.

2. Is there a specifically "human" component in the translation process which MT experts have overlooked? Is it reasonable for theoreticians to envision setting up predictable and generic vocabularies of clearly defined terms, or could they be overlooking a deep-seated human tendency towards some degree of ambiguity—indeed, in those many cases where not all the facts are known, an inescapably human reliance on it? Are there any viable MT approaches to duplicate what human translators can provide in such cases, namely the ability to bridge this ambiguity gap and improvise personalized, customized case-specific subtleties of vocabulary, depending on client or purpose? Could this in fact be a major element of the entire translation process? Alternately, are there some more boring "machine-like" aspects of translation where the computer can help the translator, such as style and consistency checking?

3. How can the knowledge of practicing translators best be integrated into current MT research and working systems? Is it to be assumed that they are best employed as prospective end-users working out the bugs in the system, or is there also a place for them during the initial planning phases of such systems? Can they perhaps as users be the primary developers of the system?

4. Many human translators, when told of the quest to have machines take over all aspects of translation, immediately reply that this is impossible and start providing specific instances which they claim a machine system could never handle. Are such reactions merely the final nerve spasms of a doomed class of technicians awaiting superannuation, or are these translators in fact enunciating specific instances of a general law as yet not fully articulated?

Since we now hear claims suggesting that FAHQT (fully automatic high quality translation) is creeping in again through the back door, it seems important to ask whether there has in fact ever been sufficient basic mathematical research, much less algorithmic underpinnings, by the MT Community to determine whether FAHQT, or anything close to it, can be achieved by any combination of electronic stratagems (transfer, AI, neural nets, Markov models, etc.).

Must translators forever stand exposed on the firing line and present their minds and bodies to a broadside of claims that the next round of computer advances will annihilate them as a profession? Is this problem truly solvable in logical terms, or is it in fact an intractable, undecidable, or provably unsolvable question in terms of "Computable Numbers" as set out by Turing, based on the work of Hilbert and Goedel? A reasonable answer to this question could save boards of directors and/or government agencies a great deal of time and money.

SUPPLEMENTAL QUESTIONS:

It was also envisioned that a list of Supplemental Questions would be prepared and distributed not only to the speakers but everyone attending our panel, even though not all of these questions could be raised during the session, so as to deepen our discussion and provide a lasting record of these issues.

FAHQT: Pro and Con

Consider the following observation on FAHQT: "The ideal notion of fully automatic high quality translation (FAHQT) is still lurking behind the machine translation paradigm: it is something that MT projects want to reach." (1) Is this a true or a false observation?

Is FAHQT merely a matter of time and continued research, a direct and inevitable result of a perfectly asymptotic process?

Will FAHQT ever be available on a hand-held, calculator-sized computer? If not, then why not?

To what extent is the belief in the feasibility of FAHQT a form of religion or perhaps akin to a belief that a perpetual motion device can be invented?

Technical Linguistic Questions

Let us suppose a writer has chosen to use Word C in a source text because s/he did not wish to use Word A or Word B, even though all three are shown as "synonyms." It turns out that all three of these words overlap and semantically interrelate quite differently in the target language. How can MT handle such an instance, fairly frequently found in legal and diplomatic usage?

Virtually all research in both conventional and computational linguistics has proceeded from the premise that language can be represented and mapped as a linear entity and is therefore eminently computable. What if it turns out that language in fact occupies a virtual space as a multi-dimensional construct, including several fractal dimensions, involving all manner of non-linear turbulence, chaos, and Butterfly Effects?

Post-Editors and Puppeteers

Let's assume you saw an ad for an Automatic Electronic Puppeteer that guaranteed to create and produce endless puppet plays in your own living room. There would be no need for a puppeteer to run the puppets and no need for you even to script the plays, though you would have the freedom to intervene in the action and change the plot as you wished. Since the price was acceptable, you ordered this system, but when it arrived, you found that it required endless installation work and calls to the manufacturers to get it working. But even then, you discovered that the number of plays provided was in fact quite limited, your plot change options even more so, and that the movements of the puppets were jerky and unnatural. When you complained, you were referred to fine print in the docs telling you that to make the program work better, you would have to do one of two things: 1) master an extremely complex programming language or 2) hire a specially trained puppeteer to help you out with your special needs and to be on hand during your productions to make the puppets move more naturally. Does this description bear any resemblance to the way MT has functioned and been promoted in recent years?

A Practical Example

Despite many presentations on linguistic, electronic and philosophical aspects of MT at this conference, one side of translation has nonetheless gone unexplored. It has to do with how larger translation projects actually arise and are handled by the profession. The following story shows the world of human translation at close to its worst, and it might be imagined at first glance that MT could easily do a much better job and simply take over in such situations, which are far from atypical in the world of translation. But, as we shall see, such appearances may be deceptive. To our story:

A French electrical firm was recently involved in a hostile takeover bid and lawsuit with its American counterpart. Large numbers of boxes and drawers full of documents all had to be translated into English by an almost impossible deadline. Supervision of this work was entrusted to a paralegal assistant at the French company's New York law firm, a person with no previous knowledge of translation. The documents ran the gamut from highly technical electrical texts and patents to records of previous lawsuits, company correspondence, advertisements, product documentation, and speeches by the company's directors.

Almost every French-to-English translator in the NYC area was asked to take part. All translators were required to work at the law firm's offices so as to preserve confidentiality. Mere translation students worked side by side with newly accredited professionals and journeymen with long years of experience. The more able quickly became aware that much of the material was far too difficult for their less experienced colleagues. No consistent attempt was made to create or distribute glossaries. Wildly differing wages were paid to translators, with little connection to their ability. Several translation agencies were caught up in a feverish battle to handle most of the work and desperately competed to find translators.

No one knows the quality of the final product, but it cannot have been routinely high. Some translators and agencies have still not been fully paid. As the deadline drew closer, more and more boxes of documents appeared. And as the final blow, the opposing company's law firm also came onto the scene with boxes of its own documents that needed translation. But these newcomers imposed one nearly impossible condition, also for reasons of confidentiality: no one who had translated for the first law firm would be permitted to translate for them.

Now let us consider this true-life tale, which occurred just three months ago, and see how—or whether—MT could have handled things better, as is sometimes claimed. Let's be generous and remove one enormous obstacle at the start by assuming that all these boxes of documents were in fact in machine-readable form (which, of course, they weren't). Even if we grant MT this ample head start, there are still a number of problems it would have had trouble coping with:

1. How could a sufficient number of competent post-editors be found or trained before the deadline?

2. How could a sufficiently large and accurate MT dictionary be compiled before the deadline? Doesn't creating such a dictionary require finishing the job first and then saving it for the next job, in the hope that it will be similar?

3. The simpler "mom and pop" and small-agency structure of the human translation world was nonetheless able to field at least some response to this challenge because of its large slack capacity. Would an enormously powerful and expensive mainframe computer have the same slack capacity, i.e., could it be kept inactive for long periods of time until such emergencies occurred? If so, how would this be reflected in the prices charged for its services?

4. How would MT companies have dealt with the secrecy requirement, that translation must be done in the law firm's office?

5. How would an MT Company comply with the demand of the second law firm, that the same post-editors not be used, and still land the job?

6. Supposing the job proved so enormous that two MT firms had to be hired—assuming they used different systems, different glossaries, different post-editors, how could they have collaborated without creating even more work and confusion?

Larger Philosophical Questions

Is it in any final sense a reasonable assumption, as many believe, that progress in MT can be gradual and cumulative in scope until it finally comes to a complete mastery of the problem? In other words, is there a numerical process by which one first masters 3% of all knowledge and vocabulary building processes with 85% accuracy, then 5% with 90% accuracy, and so on until one reaches 99% with 99% accuracy? Is this the whole story of the relationship between knowledge and language, or are there possibly other factors involved, making it possible for reality to manifest itself from several unexpected angles at once? In other words, are we dealing with language as a linear entity when it is in fact a multi-dimensional one?

Einstein maintained that he didn't believe God was playing dice with the universe. Is it possible that by using AI rule-firing techniques with their built-in certainty and confidence values, computational linguists are playing dice with the meaning of that universe?

It would be possible to design a set of "Turing Tests" to gauge the performance of various MT systems as compared with human translation skills. The point of such a process, as with all Turing Tests, would be to determine if human referees could tell the difference between human and machine output. All necessary safeguards, handicaps, alternate referees, and double blind procedures could be devised, provided the will to take part in such tests actually existed. True definitions for cost, speed, accuracy, and post-editing needs might all have at least a chance of being estimated as a result of such tests. What are the chances of their taking place some time in the near future?

"Computerization is the first stage of the industrial revolution that hasn't made work simpler." Does this statement, paraphrased from a book by a Harvard Business School professor, (2) have any relevance for MT? Is it correct to state that several current MT systems actually add one or more levels of difficulty to the translation process before making it any easier?

While translators may not be able to articulate precisely what kind of interface for translation they most desire, they can certainly state with great certainty what they do NOT want. What they do not want is an interface that is any of the following:

harder to learn and use than conventional translation;
more likely to make mistakes than the above;
lending less prestige than the above;
less well paid than the above.

Are these also concerns for MT developers?

What real work has been done in the AI field in terms of treating translation as a Knowledge Domain and translators as Domain Experts and pairing them off with Knowledge Engineers? What qualifications were sought in either the DE's or the KE's?

Are MT developers using the words "asymptote" and "asymptotic" in their correct mathematical sense, or are they rather using them as buzzwords to impart a false air of mathematical precision to their work? Is the curve of their would-be asymptote steadily approaching a representation of FAHQT or something reasonably similar, or could it just turn out to be the edge of a semanto-linguistic Butterfly Effect drawing them inexorably into what Shannon and Weaver recognized as entropy, perhaps even into true Chaos?

Must not all translation, including MT, be recognized as a subset of two far larger sets, namely writing and human mediation? In the first case, does it not therefore become pointless to maintain that there are no accepted standards for what constitutes a "good translation," when of course there are also no accepted standards for what constitutes "good writing?" Or for that matter, no accepted standards for what constitutes "correct writing practices," since all major publications and publishing houses have their own in-house style manuals, with no two in total agreement, either here or in England. And is not translation also a specialized subset of a more generalized form of "mediation," merely employing two natural languages instead of one? In which case, may it belong to the same superset which includes "explaining company rules to new employees," public relations and advertising, or choosing exactly the right time to tell Uncle Louis you're marrying someone he disapproves of?

Are not the only real differences between foreign language translation and such upscale mediation that two languages are involved and the context is usually more limited? In either case (or in both together), what happens if all the complexities that can arise from superset activities descend into the subset and also become "translation problems" at any time? How does MT deal with either of these cases?

Does the following reflection by Wittgenstein apply to MT: "A sentence is given me in code together with the key. Then of course in one way everything required for understanding the sentence has been given me. And yet I should answer the question `Do you understand this sentence?': No, not yet; I must first decode it. And only when e.g. I had translated it into English would I say `Now I understand it.'

"If now we raise the question `At what moment of translating do I understand the sentence? we shall get a glimpse into the nature of what is called `understanding.'" To take Wittgenstein's example one step further, if MT is used, at what moment of translation does what person or entity understand the sentence? When does the system understand it? How about the hasty post-editor? And what about the translation's target audience, the client? Can we be sure that understanding has taken place at any of these moments? And if understanding has not taken place, has translation?

Practical Suggestions for the Future

1. The process of consultation and cooperation between working translators and MT specialists which has begun here today should be extended into the future through the appointment of Translators in Residence in university and corporate settings, continued lectures and workshops dealing with these themes on a national and international basis, and greater consultation between them in all matters of mutual concern.

2. In the past, many legislative titles for training and coordinating workers have gone unused during each Congressional session in the Departments of Labor, HEW, and Commerce. If there truly is a need for retraining translators to use MT and CAT products, it behooves system developers—and might even benefit them financially—to find out if such funding titles can be used to help train translators in the use of truly viable MT systems.

3. It should be the role of an organization such as MT Summit III to launch a campaign aimed at helping people everywhere to understand what human translation and machine translation can and cannot do so as to counter a growing trend towards fast-word language consumption and use.

4. Concomitantly, those present at this Conference should make their will known on an international scale that there is no place in the MT Community for those who falsify the facts about the capabilities of either MT or human translators. The fact that foreign language courses, both live and recorded, have been deceitfully marketed for decades should not be used as an excuse to do the same with MT. I have appended a brief Code of Ethics document for discussion of this matter.

5. Since AI and expert systems are on the lips of many as the next direction for MT, a useful first step in this direction might be the creation of a simple expert system which prospective clients might use to determine if their translation needs are best met by MT, human translation, or some combination of both. I would be pleased to take part in the design of such a program.

DRAFT CODE OF ETHICS:

1. No claims about existing or pending MT products should be made which indicate that MT can reduce the number of human translators or the total cost of translation work unless all costs for the MT project have been scrupulously revealed, including the total price for the system, fees or salaries for those running it, training costs for such workers, training costs for additional pre-editors or post-editors including those who fail at this task, and total costs of amortization over the full period of introducing such a system.

2. No claims should be made for any MT system in terms of "percentage of accuracy," unless this figure is also spelled out in terms of number of errors per page. Any unwillingness to recognize errors as errors shall be considered a violation of this condition, except in those cases where totally error-free work is not required or requested.

3. No claim should be made that any MT system produces "better-quality output" than human translators unless such a claim has been thoroughly quantified to the satisfaction of all parties. Any such claim should be regarded as merely anecdotal until proved otherwise.

4. Researchers and developers should devote serious study to the issue of whether their products might generate less sales resistance, public confusion, and resentment from translators if the name of the entire field were to be changed from "machine translation" or "computer translation" to "computer assisted language conversion."

5. The computer translation industry should bear the cost of setting up an equitably balanced committee of MT workers and translators to oversee the functioning of this Code of Ethics.

6. Since translation is an intrinsically international industry, this Code of Ethics must also be international in its scope, and any company violating its tenets on the premise that they are not valid in its country shall be considered in violation of this Code. Measures shall be taken to expose and punish habitual offenders.

Respectfully Submitted by
Alex Gross, Co-Director
Cross-Cultural Research Projects
alexilen@sprynet.com

NOTES:

(1) Kimmo Kettunen, in a letter to Computational Linguistics, vol. 12, No. 1, January-March, 1986

(2) Shoshana Zuboff: In the Age of the Smart Machine: The Future of Work and Power, Basic Books, 1991.


Friday, May 22, 2009

Machine Translation: Ingredients for Productive and Stable MT deployments - Part 3

By Mike Dillinger,
PhD & Laurie Gerber,
Translation Optimization Partners

This is the final part of the first in a new series of articles on how to achieve successful deployments of machine translation in various use cases. Different types of source documents and different uses for the translations lead to varying approaches to automation. In the first part of this article, we talked about why it is so important to automate translation of knowledge bases.

Pioneering companies have shown that automating translation is the best way to make product knowledge bases available to global markets. Customers consistently rate machine-translated and English-language technical information as equally useful. A typical installation for automatic translation weaves together stored human translations that you have already paid for with machine translation of new sentences, to get the best of both approaches.

Steps to Success

Set your expectations. The documents in knowledge bases have distinctive characteristics when compared to other product support documentation, starting with the fact that they are written by engineers. These engineers may be experts in a technical domain, but they haven’t ever been trained in technical writing and are often not native speakers of English.

High-speed, high-volume translation simply cannot be perfect, no matter what mix of humans and machines we use. This is why emphasis in evaluation has shifted to measuring translation "usefulness", rather than absolute linguistic quality. The effective benchmark is no longer whether expert linguists detect the presence or absence of errors. The new, more practical criterion is whether non-expert customers find a translation to be valuable, in spite of its linguistic imperfections. We see time and time again that they most certainly do. You’ll confirm this with your own customers when you do beta testing of your installation.

Set realistic expectations for automatic translation: there will be many errors, but customers will find the translations useful anyway.

Start small. Start with only one language and focus on a single part of your content. Success is easier to achieve when you start with a single "beachhead" language. Starting small has little to do with machine translation and much more to do with simplifying change management: work out the details on a small scale before approaching a bigger project.

In our consulting practice, we’ve seen two main ways of deciding where to start: focusing on customer needs or on internal processes. For the customer-needs approach, your decision is guided by questions like: Which community of customers suffers most from the lack of local-language materials? Which community costs you the most in support calls? In translation expenses? Which has the least web content already translated? The decision is guided by the most important customer support issues.

For the internal-process approach, your decision is guided by questions like: Which languages are we most familiar with? Which do we have most translations for? What languages are our staff strongest in? Which in-country groups collaborate best? The decision in this case is to build on your strengths.

Start small to build a robust, scalable process.

Choose an MT vendor. The International Association for Machine Translation sponsors a Compendium of Translation Software that is updated regularly. In it, you can find companies large and small that have developed a range of products for translating many languages. You will see companies such as Language Weaver, Systran, ProMT, AppTek, SDL, and many others. How can you choose between them?

Linguistic quality of the translations is the first thing that many clients want to look at. Remember that what you see during initial testing is not what you will eventually offer to your customers. And even a careful linguistic analysis of translation output quality may not tell you much about whether the system can help you achieve your business goals. Evaluation of translation automation options is much more complex than having a translator check some sentences. You may want to hire a consultant to help with evaluation, while bringing your staff up to speed on the complexities of multilingual content.

For knowledge-base translation, scalability and performance are important issues to discuss with each vendor. Most vendors can meet your criteria for response time or throughput, but they may need very different hardware to do so.

You can narrow down or prioritize the list of vendors by using other criteria:

* Choose vendors who can translate the specific languages that you are interested in. If you want to translate into Turkish or Indonesian, you won’t have as many options as into Spanish or Chinese.

* Check that you have what the vendor needs. Some MT systems (from Language Weaver, for example) need a large collection of documents together with their translations. If you aren’t translating your documents by hand already, then you may not have enough data for this kind of system. Other MT systems (from Systran or ProMT, for example) can use this kind of data, but don’t require you to have it.

* Check how many other clients have used the product for knowledge base translation – to judge how much experience the vendor has with your specific use case. The best-known vendors have experience with dozens of different installations, so try to get information about the installations that are most similar to yours. Ask, too, for referrals to existing customers who can share their stories and help prepare you better for the road ahead. MT is changing rapidly, so you shouldn’t reject a product only because it’s new. But the way that these questions are addressed or dismissed will give some insight into how the vendor will respond to your issues.

* Think through how you will approach on-going improvements after your MT system is installed. If you want to actively engage in monitoring and improving translation quality, some MT vendors (Systran or ProMT, for example) offer a range of tools to help. Other MT vendors (Language Weaver, for example) will periodically gather your new human translations and use them to update the MT system for you, with some ability to correct errors on your own.

Of course, price and licensing terms will be important considerations. Be aware that each vendor calculates prices differently: they may take into account how many servers you need, how many language pairs (ex: English>Spanish and Spanish>English is one language pair), how many language directions (ex: English>Spanish and Spanish>English are two language directions), how many people will use the system, how many different use cases, additional tools you may need, the response times or throughput that you need, etc. Experience shows that the best approach is to make a detailed description of what you want to do and then ask for quotes.

Adapt the MT system to your specific needs before you go live. Whatever MT system you choose, you or the vendor (or both) will have to adapt it to your specific vocabulary and writing style. Just as human translators need extra training for new topics and new technical vocabulary, MT systems need to have the vocabulary in your documents to translate them well. Some vendors call this process of adapting the MT system to your specific needs training, others call it customization.

An MT system starts with a generic knowledge of generic English. Your knowledge base, on the other hand, has thousands of special words for your unique products as well as the jargon that your engineers and sales people have developed over many years. The goal is to bridge this linguistic gap between your organization’s writing and generic English.

Different vendors take different approaches to bridging this gap. Some MT systems ("statistical MT" – from Language Weaver, for example) take large amounts of your translated documents and feed them into tools that quickly build statistical models of your words and how they’re usually translated. If you don’t have a sizeable collection of translated documents, though, it’s difficult to build a good statistical MT system. All MT systems can make use of your existing terminology lists and glossaries with your special words and jargon. And many MT systems (from Systran or ProMT, for example) can extract dictionaries directly from your translated documents. Hybrid MT systems, which are just emerging in the market, also build statistical models in order to combine the best of both techniques, and they are more practical when you don’t have a sizeable collection of translated documents to start from.
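
As a purely illustrative sketch of that last idea (extracting dictionary candidates from translations you already have), the toy Python below counts which target-language words co-occur with a source term across aligned sentence pairs. The sentence pairs are invented, and this is not any vendor's actual algorithm; real systems use word-alignment models and far more data.

# Toy dictionary-candidate extraction from aligned sentence pairs (illustration only).
from collections import Counter

aligned_pairs = [
    # (English source, Spanish translation) -- invented examples
    ("restart the print server", "reinicie el servidor de impresión"),
    ("the print server is offline", "el servidor de impresión está desconectado"),
    ("check the server logs", "revise los registros del servidor"),
]

def candidate_translations(term, pairs):
    """Count target-language words that co-occur with `term` in the source."""
    counts = Counter()
    for source, target in pairs:
        if term in source.lower().split():
            counts.update(target.lower().split())
    return counts

# 'servidor' surfaces at the top as the likely translation of 'server'.
print(candidate_translations("server", aligned_pairs).most_common(3))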

Go live. Do this in stages, starting with an internal test by the main stakeholders. Then move into "beta" testing with a password-protected site for a handful of real product users. Be sure to have a disclaimer that openly announces that the document is an automated translation and may contain errors. (At the same time, you will want to promote the availability of the content in the user’s language as a new benefit.) Actively seek out their feedback to identify specific problems, and address the ones that they cite most frequently. At this stage, your users may mention that there are errors in the translation; try to get them to identify specific words and/or sentences.

In knowledge-base deployments, a small proportion of the content (<10%) is widely read and the vast majority of the content is rarely read. The current best practice is to establish a threshold of popularity or minimum hit rate that will trigger human translation of the few most-popular articles for a better overall customer experience.
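
As a rough sketch of that practice (article identifiers, hit counts, and the threshold below are all invented for illustration), the routing decision can be as simple as this:

# Route the most-read articles to human translation; leave the long tail to MT.
monthly_hits = {
    "KB-1001": 12400,
    "KB-1002": 35,
    "KB-1003": 910,
    "KB-1004": 4,
}

HUMAN_TRANSLATION_THRESHOLD = 1000  # assumed minimum hits per month

for article, hits in sorted(monthly_hits.items(), key=lambda item: item[1], reverse=True):
    route = "human translation" if hits >= HUMAN_TRANSLATION_THRESHOLD else "machine translation"
    print(f"{article}: {hits} hits -> {route}")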

This is the time to do a reality check: offer a feedback box on each translated page. It is most helpful if you ask for the same feedback on your source-language pages for comparison. If the translated page is rated much lower than the original page, then the difference may signal a problem in translation.
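
A similarly minimal sketch of that reality check, assuming hypothetical ratings on a 1-to-5 scale and an assumed one-point gap as the alarm threshold:

# Flag translated pages whose feedback trails the source-language original.
def average(scores):
    return sum(scores) / len(scores)

source_ratings = {"KB-1001": [4, 5, 4, 4], "KB-1003": [4, 4, 5]}
translated_ratings = {"KB-1001": [4, 4, 3, 4], "KB-1003": [2, 3, 2]}

GAP_THRESHOLD = 1.0  # assumed: a full point of difference warrants a review

for article in source_ratings:
    gap = average(source_ratings[article]) - average(translated_ratings[article])
    if gap >= GAP_THRESHOLD:
        print(f"{article}: translated page trails the original by {gap:.1f} points; review the MT output")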

Keep improving quality. Inevitably, products and jargon will change and you will identify recurring errors. Translation quality management is an on-going activity with two main parts: managing quality of the original documents and managing the parts of the MT system.

We’ll leave discussion of document quality management for a future article. When engineers respond to emergent problems with knowledge-base articles, it is not practical to impose stringent authoring guidelines. But you can encourage them to work from a standard terminology list (terms that the customers know, which may be different from terms that the engineers use). This will make the source-language documents easier to understand, and will improve the translations, as well.

For rule-based or hybrid MT systems, you will want to manage (or outsource management of) key components like the dictionary. As errors or changes arise, updating the dictionary will improve translation quality. For statistical MT systems, you will want to manage carefully any human translated content and "feed" it into the system. The more data you use, the better these systems get.

Repeat for another language. With the first language, you will work out the kinks in your process. Once you see how very appreciative the customers are for content in their own language, you can get to work on the next language. Now you know the drill, you know the tools, and you know what to look for. The next language will take you only 25% of the effort you put into deploying the first one.

Links

Will Burgett & Julie Chang (Intel). AMTA Waikiki, 2008. The Triple-Advantage Factor of MT: Cost, Time-to-Market, and FAUT.

Priscilla Knoble & Francis Tsang (Adobe). Hitting the Ground Running in New Markets: Do Your Global Business Processes Measure Up? LISA San Francisco, 2008.

Chris Wendt (Microsoft). AMTA Waikiki, 2008. Large-scale deployment of statistical machine translation: Example Microsoft.

Authors:

Mike Dillinger, PhD, and Laurie Gerber are Translation Optimization Partners, an independent consultancy specialized in translation processes and technologies. Both principals are leaders in translation automation and past presidents of the Association for Machine Translation in the Americas, with 40 years’ experience in creating and managing technical content, developing translation technologies, and deploying translation processes. We develop solutions in government and commercial environments to meet the needs of translation clients and content users. Our offices are in Silicon Valley and San Diego. Contact us for further information:

Mike Dillinger mike [at] mikedillinger . com

Laurie Gerber gerbl [at] pacbell . net

Mike needs more places to grind this axe: Authors and authoring are often treated as an unimportant afterthought, in spite of the central role of high-quality content in brand management, marketing, sales, training, customer satisfaction, customer support, operational communications, and everything else.

Published - April 2009

ClientSide News Magazine - www.clientsidenews.com
