
Translation is expensive; why don’t we use Google Translate instead?

by Tasos Tzounis, Project Manager at Commit

We have all had texts that needed translation at some point in our lives.

In those cases, certain questions arise:

  • How much will it cost?
  • Will it be good but also affordable?
  • Will I have it on time, at a low price and in excellent quality?

While searching for the best solution, there are various alternatives to choose from, and we try to settle on either the most affordable one or the one that best meets our needs. But do we have all the information necessary to make an informed decision?

Those who are familiar with the Internet and its capabilities know Google Translate. Google Translate is a Google service that translates words or sentences from and to almost any language. You just type or paste your text in the appropriate field and then choose the source and target languages. It has become such a large part of our lives that we have all heard the following phrases in one wording or another: “I’ll look it up on Google Translate”; “why don’t you use Google Translate?”; “translating a simple text is very expensive, so I’ll do it myself, and with Google Translate I will pull it off”; “why do translators ask for so much money when there is Google Translate?”. If we explore the subject more closely, a large percentage of buyers believe that translators either use Google Translate or mistake Google Translate for translation memories. And the question remains: why pay for translators when there is Google Translate? Can it take the place of a professional translator?

In recent years, considerable progress has been made in the training of translation engines, and the resulting reduction in cost and delivery time has grown the demand for automatic translation. But can this become a reality? In fact, the translation quality of Google Translate has improved quite a lot, particularly in widely spoken language combinations, such as French or English, and remarkably so when the target language is English. But what happens with less widely spoken languages or languages with complex grammar and syntax? Greek, for example, uses cases and specific rules and demonstrates peculiarities that, at this moment, a computer cannot work out on its own. Also, in many languages one word has more than one meaning or changes its meaning depending on the syntax; and this is where the famous Google Translate falls short compared to a professional translator.

Many now realize that Google Translate is not the solution and that the automatic translation it provides cannot replace the human factor. Nevertheless, the issue of cost and time remains, and many claim that the translation should be performed with Google Translate and then edited by a translator. However, this solution also seems ineffective. Most of the time, for the reasons mentioned above, the translator ends up translating from scratch and, of course, being remunerated for translation rather than editing services. The cost is then the same for the client, and significant time has been needlessly spent on pointless experiments.

But what happens when the text to be translated is technical and contains legal, economic, or medical terminology? Can Google Translate detect the corresponding terms and render them properly in the target language in order to create a meaningful text with cohesion and coherence? Can it inspire the same trust as a translator? In such texts, the terminology is specific and often provided by the client. In other cases, the translator has compiled a terminology library from previous projects. Google Translate doesn’t have the ability to integrate this terminology. Besides, most of the time it fails to render these terms correctly or to understand that a word refers to a product name or a trademark that doesn’t need to be translated. Therefore, with texts that require particular attention and baffle even an experienced translator, the use of Google Translate is fraught with danger. Medical reports or case studies leave no room for mistakes. Using Google Translate to reduce costs may not be prudent, as the consequences of an error exceed the cost of a translation by a professional. In every transaction, there is trust that is built over time. So, when we have a technical text, we have to do research, assess translators and choose the right person for the job. Especially in cases where more than one text needs to be translated, and we have to reach out to a translator many times, we need to choose the most suitable one to meet our needs; something that is impossible with Google Translate, as we cannot trust it blindly.

Another drawback of this “all-in-one translation engine” is that it cannot follow any instructions provided. Technical texts are usually accompanied by several directives, such as whether or not to translate measurement units, chemical compounds, etc. In these cases, a specialized translator outperforms Google Translate for the following reason: the translator can also perform research, while Google Translate memorizes terms and places them in the text without understanding their meaning or the outcome created by this “mishmash”.
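One common mitigation for this terminology blindness is to shield do-not-translate terms behind placeholders before the text reaches the engine, and restore them afterwards. The sketch below is a minimal illustration of that idea; the term list, placeholder format and example sentence are invented for demonstration:

```python
import re

def protect_terms(text, dnt_terms):
    """Replace do-not-translate terms with numbered placeholders."""
    mapping = {}
    for i, term in enumerate(dnt_terms):
        placeholder = f"__DNT{i}__"
        pattern = re.compile(re.escape(term), re.IGNORECASE)
        if pattern.search(text):
            text = pattern.sub(placeholder, text)
            mapping[placeholder] = term
    return text, mapping

def restore_terms(text, mapping):
    """Put the protected terms back after machine translation."""
    for placeholder, term in mapping.items():
        text = text.replace(placeholder, term)
    return text

protected, mapping = protect_terms("Install AcmeCloud on the device.", ["AcmeCloud"])
# protected == "Install __DNT0__ on the device."
translated = protected  # ...the protected text would be sent to the MT engine here...
print(restore_terms(translated, mapping))
```

A production pipeline would also need to handle inflected forms and overlapping terms, which is exactly the kind of judgment a professional translator applies as a matter of course.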

However, the main issue with using Google Translate is confidentiality. When working with a translator, the customer ensures the privacy of their personal data through contracts. This is not the case with Google Translate, since Google keeps the data collected whenever you choose to upload, send or store the content of your file, and has the right to use and reproduce your text. This is also clear in Google’s terms of service:

“When you upload, submit, store, send or receive content to or through our Services, you give Google (and those we work with) a worldwide licence to use, host, store, reproduce, modify, create derivative works (such as those resulting from translations, adaptations or other changes that we make so that your content works better with our Services), communicate, publish, publicly perform, publicly display and distribute such content.”

Lastly, the use of Google Translate does not only affect the translation outcome in terms of quality; it also raises copyright issues, as your text can be modified and republished.

Having explored the negative points of Google Translate, in my opinion it has one very positive aspect. It can be used as a dictionary to search for individual words, as it provides a variety of interpretations. When searching for the translation of a term, it offers more than one rendering. Also, its translations of individual terms are generally correct, and surprisingly it seems to be more comprehensive than other online dictionaries. However, it cannot be used as a CAT tool or a translation memory, though it works perfectly as a multilingual online dictionary.

In conclusion, automatic translation is indeed free, but it has not yet succeeded in replacing the value and quality of a human translation. We will just have to wait and see what the future holds!

In-country Review: a must, a pain or both?

by Clio Schils, Chief Development Officer at Commit

When we look at the history of the process of translation and localization, primarily at the quality assurance step, we have come a very long way in the past three decades. I vividly remember a story from a good friend and now retired localization manager at a medical company. He mentioned to me that when he started running the translations for that company, the team literally had to “cut – with scissors – and paste” pieces of text for reuse in new, updated versions of manuals. The content was then finalized with new text that still had to be translated. The new text was translated and “more or less” reviewed back then as well, but one can imagine the challenges and risks of such a process.

The concept of Quality Assurance has since been further developed, refined and optimized by industry stakeholders on the client and vendor sides, and the process of refinement is still ongoing: translation software programs have emerged and are still emerging, QA standards are being implemented, and numerous commercial QA tools are being marketed and sold to those who understand that high quality is key. Still, in addition to all the tools and standards, there is one historical component of the process that is still there and offers the essential added value to any QA process: the human reviewer.

We all know that the essence of good translated output is a well-written source – the known “garbage in, garbage out” theory. However, for the sake of argument, let’s assume we have a translation that was based on a perfect source. We now move to the next step in the process, the review. Leaving aside the question of why we need a review in the first place when we have a “perfect” translation based on a perfect source, we go straight to the review step itself.

There are many criteria that co-define the type or depth of a review. As a rule of thumb, one could say that the higher the risk impact of a wrong translation, the more in-depth the review required. A malfunctioning vacuum cleaner will not have the same impact as a wrong interpretation due to a bad translation of a patient’s medical-technical manual. In the latter case, a mistake in the instructions could potentially have fatal consequences. Therefore, the in-country review is a must.

As per the example above, the in-country subject matter expert review is mandatory for highly regulated content “to the extent possible”. This step is conducted after the linguistic review by a subject matter expert. The emphasis lies on the technical aspects, functioning, use and terminology of the product rather than the linguistic elements.

Unfortunately, the in-country review step is not without challenges:

  1. The ideal subject matter reviewer is the in-country expert on the client side. In most cases, these experts have other responsibilities, and reviewing product content comes on top of their core duties. It is a challenging balancing act.
  2. More and more “exotic” languages are required. Clients and buyers of translation services do not always have experts readily available in these countries.
  3. The limited availability of expert reviewers poses challenges to the overall turnaround time (TAT) of a translation project and could endanger the market release date of a client’s product.
  4. High turnover among the in-country reviewers of some companies leads to longer lead times and potentially reduced reuse efficiency due to differences of opinion regarding translations.

There are ways to ease the pain to some extent, some of which are:

  1. facilitate the process by providing specific proofreading guidelines and validated “do-not-touch” technical glossaries. This will also be useful in cases of an unstable reviewer pool.
  2. come up with other ways to execute this important step, e.g. use specialized third-party in-country review companies, or use the best specialized linguists and offer them product training to master the features and functions of the product.
  3. allow for reasonable time to execute a specific subject matter review task and document these pre-agreed lead times in a binding SLA, for example “up to 10k words, review time 3 working days”. When the generous deadline is not met, the project manager has the go-ahead to continue the process without any repercussions.
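The SLA lead times suggested in point 3 can be captured in a tiny lookup table; the tiers and the scaling rule below are illustrative assumptions, not industry standards:

```python
# Illustrative SLA tiers: (max word count, review time in working days)
SLA_TIERS = [
    (2_000, 1),
    (10_000, 3),
    (25_000, 5),
]

def review_deadline_days(word_count: int) -> int:
    """Return the pre-agreed review time for a job of the given size."""
    for max_words, days in SLA_TIERS:
        if word_count <= max_words:
            return days
    # Beyond the largest tier, add 2 days per started 10k-word block.
    extra_blocks = -(-(word_count - 25_000) // 10_000)  # ceiling division
    return 5 + 2 * extra_blocks

print(review_deadline_days(8_500))  # → 3 (the "up to 10k words, 3 working days" tier)
```

Writing the tiers down this explicitly is really the point of the SLA: once both sides have pre-agreed the numbers, the project manager can enforce them without negotiation on every job.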

Finally, to summarize the answer to the question “In-country Review: a must, a pain or both?”: my answer is “both”, but the job needs to be done. Even today, and despite the challenges, an in-country review by a highly qualified subject matter expert offers a substantial contribution to the process. It reflects not only on the overall quality of the content but also on the company’s branding and reputation. Translated product documentation remains a very powerful marketing tool. It allows for deeper local market penetration, thus bringing the product within reach of local end-users.

Plunet Summit 2018 – In review!

by Eftychia Tsilikidou, Project Coordinator at Commit

Plunet Summit 2018 took place on May 24th and 25th in an unexpectedly sunny Berlin, where comfortable temperatures created a very pleasant atmosphere for this vibrant and lively event!

The program was structured with presentations on Plunet features and Plunet users’ best practices, workshops, panel discussions, round tables and networking.

Plunet released its new version 7.3, with its main focus on GDPR, but they didn’t stop there. They showcased many new features that improve performance and boost automation and integration with major CAT tools.

The best practices track included two very interesting presentations by Dan Milczarski and Eugenia Echave. Dan presented the way Plunet automation helped manage change at CQ Fluency, easing the natural resistance that arises when it comes to change. Following the ADKAR (Awareness – Desire – Knowledge – Ability – Reinforcement) methodology, the focus was to make all teams understand the need for change, to appeal not only to logic but, most importantly, to emotion, to make everyone aware of their roles, to provide knowledge AND skills to perform certain roles, and to ensure the change is maintained.

With great honesty and transparency, Eugenia explained all the dilemmas and reasoning that led her company to choose the merger path with a competitor, and illustrated how a complete and organized system helped evaluate all the details and make the right decision at the right time.

The workshops track focused on certain Plunet functions and hidden “gems” through interesting games and exercises. Even for advanced users, the workshops proved to be very useful revealing unknown settings that could help save time and make a big difference in everyday management tasks.

The Summit closed with an interesting panel discussion about managing extensive growth, where representatives from different companies shared the experience and knowledge gained from the growth their companies have achieved in recent years. It is very interesting to see that although we are all engaged in the same field and practically do the same thing, we all choose very different approaches and implement different ways to achieve our goals.

This is what makes translation a very interesting and motivating industry with many young and passionate people who are really committed to what they do!

A very big thank you to the entire Plunet team for the great event, the knowledge, care and hospitality!

Our experience at the ATD 2018

by Yuko Baba, Project Manager at Commit

The outside of the San Diego Convention Center was flooded with thousands of people from all over the world on Monday morning; and yes, we were among them. Who could blame us for our excitement and anticipation? Even regular attendees of ATD were surprised by whom ATD had invited this year for the opening keynote – the 44th President of the United States, Barack Obama. This 75th anniversary of ATD became a very special one for us.

As President Barack Obama walked onto the stage, the crowd cheered and gave him a standing ovation. We were sharing the same room with the former president, and it was a big deal! The attendees could not get enough of him as he gave the opening keynote. He spoke about learning, resilience and values as he shared his upbringing, family and experience in the White House. One of the things he urged was to hold on to values that have been tested and proven by previous generations – values that do not change: values like “be honest”, “be hardworking”, “be kind”, “carry the weight”, “be responsible”, “be respectful”, and “be useful”. He shared that such values are reflected in our day-to-day interactions and the kinds of habits we form, which transcend any issues or situations and, as a consequence, become our baseline and foundation. “Those are things that will get you through hard times as well as good times”, he said. Those values will “sustain effort and ultimately give purpose to what we do”, which makes us go beyond superficial benefits like getting paid. It is easy to put those values away and seek short-term results, but with those values, we become successful in life. To say that he is a great speaker would be an understatement. It was a very in-depth, insightful and inspiring speech. To be honest, we wish he had spoken longer!

This year’s ATD welcomed over 13,000 talent development professionals from all over the world and offered more than 300 sessions with 202 exhibitors. Needless to say, all of the sessions were about talent development and its related fields; however, this was good information to be aware of, as we provide translation services to the talent development industry. Especially with regard to changing industry trends driven by the upcoming technologies of virtual reality and Artificial Intelligence – and how the industry’s e-learning programs and materials will be impacted – our industry will also have to make the necessary adjustments to grow alongside our clients. It was indeed a good learning opportunity to explore how we can use those new technologies to our advantage to improve our services. Through sessions like “Overcoming the Headache of Video Editing and Content Reviews” by Daniel Witterborn from TechSmith and “What’s Wrong with This Course – Quality Testing and Editing Strategies for Designers and Developers” by Hadiya Nuriddin from Focus Learning Solutions, we had an opportunity to discover the challenges and difficulties clients face when developing an eLearning program. It was also interesting to learn that most eLearning program developers and designers do not have a formal Quality Assurance process in place. This is something we can consider when taking on an eLearning project, in order to provide recommendations and offer solutions to our clients. Overall, all of the sessions were very interesting and will be applied to our business practice.

Commit had a booth set up alongside the talent development training companies, software companies, universities and fellow translation companies, giving away lots of cool swag! We had a good time networking with the people who came by our booth and those who sat next to us during sessions and at lunch tables. We are grateful to those who came to visit us at our booth. We hope you had as wonderful and meaningful a conference as we did! We hope to see you next year at ATD 2019 in Washington DC!

Elia’s ND for Executives Catania – In review

This year, Elia’s Networking Days for Executives was held at The Romano Palace Luxury Hotel in picturesque Catania, Sicily. Commit was represented by our Chief Strategist Spyros Konidaris and our CEO Vasso Pouli.

The event featured two tracks, one on the Translation industry and company strategies and one on Financial strategies, and we attended both.

The first track was dedicated to the overall company strategy for LSPs and what the future has in store for the industry. During the first day, the two moderators, the experienced and savvy professionals Kimon Fountoukidis from Argos and Dominique Hourant from TransPerfect, laid down the main issues faced by today’s LSPs, including, but not limited to, organic growth and M&As, differentiating USPs, growth pathways, competition challenges, and many more. The second day was devoted to the attendees; several of them took the podium and opened up to share their personal experiences with many of the topics discussed the previous day. The track really took off with this exercise, as sharing is really at the heart of this event and what provided the best value for all. After two full days, we left with many things to think about and apply to our company strategy.

The event also included a panel discussion with Iris Orriss from Facebook, Richard Brooks from K International, and Geert Vanderhaeghe from Lexitech. The discussion was representative of our industry, as it included opinions from both the buyer and the supplier side, especially with Geert being relatively new to the industry. There were amazing takeaways here as well, as the conclusion was that no matter the size of the LSP, there is value to be added in providing services to the client.

The second track, Financial Strategies: The Golden Quest, was delivered – very successfully indeed – by Gráinne Maycock, VP of Sales at Sajan, and Robert Ganzerli, seasoned industry expert and former owner of Arancho Doc. Though rich in presentation content, the track very soon took the form of an open discussion and honest sharing of best practices, where P&L, EBIT(DA), accountability, monitoring, KPIs, budgets, operating (whatever) and taxes suddenly seemed appealing and interesting. Corporate and financial strategy was at the heart of the track, reminding us that there is no one-size-fits-all. So it was indeed both a relief and a challenge to realize that we must make our own and make it our own! Ooh, and the Minions were a very nice and fitting touch – those who were there know.

We spent two full days sharing knowledge, hearing different opinions and networking – we wonder what’s in store for the next edition of Elia’s Networking Days for Executives!

What to keep in mind when assigning your first post-editing task

by Dimitra Kalantzi, Linguist at Commit

Maybe your business or translation agency is toying with the idea of experimenting with Machine Translation (MT) and post-editing. Or maybe, after careful thought and planning, you’ve developed your own in-house MT system or built a custom engine with the help of an MT provider and are now ready to assign your first post-editing tasks. However simple or daunting that endeavor might seem, here are some things you should bear in mind:

  1. Make sure the translators/post-editors you involve are already specialized in the particular field, familiar with your business or your end-client’s business and its texts, and willing to work on post-editing tasks. Involving people with no specialization in the specific field and no familiarity with your/your client’s texts, language style and terminology is bound to adversely affect your post-editing efforts. Ideally, the post-editors you rely on will be the same people you already work with, trust and appreciate for their good work.
  2. Forget any assumptions you might have about the suitability of texts for MT post-editing. For example, IT and consumer electronics are often among the verticals for which custom MT engines are built, and it’s usually taken for granted that software texts are suitable for post-editing purposes. However, this might not hold true for all your software texts, or even for any of them, and suitability should be judged on a case-by-case basis. For instance, some software texts contain many user interface (UI) strings that consist of a limited number of words (in some cases only one word) and are notoriously difficult to translate even for professional translators, especially when the target language is morphologically richer than the source language and there’s no context, as is often the case, leading to a multitude of queries. It would seem that such texts are hardly suitable for post-editing or should, at the very least, not be prioritized for post-editing purposes.
  3. Define your MT and post-editing strategy. If your overall goal is to get the gist of your texts and you’re not concerned with style and grammar, then light post-editing might be right for you (but you’ll always need to clearly specify what constitutes an error to be post-edited and what falls outside the scope of post-editing, which might be tricky). If, on the other hand, you’re after high-quality translation and/or the output of your MT system is (still) poor, then full post-editing might be best for you. Also bear in mind that post-editing the MT output is not your only choice. In fact, instead of giving translators/post-editors the machine translated text, you can provide the source text as usual in the CAT tool of your choice and set the MT system to show a suggestion each time the translator opens a new segment for translation.
  4. Offer fair prices for post-editing. As a matter of fact, the issue of fair compensation and how post-editors should be remunerated for their work is still hotly debated. Some argue for a per-hour rate, others for a per-word rate. Some believe that post-editing always involves a reduced rate, for others it means a normal, or even increased translation rate. It all depends on the type of post-editing used (light vs full, normal post-editing vs translation suggestions), the quality of the MT output and its post-editability, the suitability of a particular text for post-editing, the language pair involved, etc. And, of course, translators/post-editors should be paid extra for providing further services, such as giving detailed feedback for a post-editing task.
  5. Last but not least, if you’re a translation agency, you should always have the approval of your end-client before using MT and post-editing to translate their texts. It also goes without saying that if you’ve signed an agreement with a client which forbids the use of any kind of MT or if the use of MT is expressly forbidden in the purchase order accompanying a job you receive from a client, you should comply with the terms and conditions you’ve accepted and should not make use of MT.
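To make the per-word vs. per-hour debate concrete, here is a small comparison sketch; the rates and throughput figures are invented for illustration, not recommendations:

```python
def per_word_fee(word_count, rate_per_word):
    """Compensation under a (possibly discounted) per-word scheme."""
    return word_count * rate_per_word

def per_hour_fee(word_count, words_per_hour, rate_per_hour):
    """Compensation under an hourly scheme, given measured throughput."""
    hours = word_count / words_per_hour
    return hours * rate_per_hour

# Illustrative numbers only: a 5,000-word job at a reduced
# post-editing word rate vs. an hourly rate at a measured throughput.
words = 5_000
print(f"per-word: {per_word_fee(words, 0.06):.2f}")       # → per-word: 300.00
print(f"per-hour: {per_hour_fee(words, 800, 45.0):.2f}")  # → per-hour: 281.25
```

Which scheme is fairer depends heavily on the measured post-editing throughput: the better the MT output, the faster the post-editor works, and the more the hourly total diverges from a flat discounted word rate.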

Post-editing MT output is by no means a straightforward endeavor, and this post has barely touched the tip of the iceberg. Let go of your assumptions, find out as much as you can, involve everyone in the new workflow and ask for their honest feedback, be ready to experiment and change your plans accordingly, and let the adventure begin!

Successful Localization Client Management Realignment Practices

by David Serra, Senior Business Development Manager at Commit

A localization client-vendor relationship is one of the most important facets of client management. Clients’ translation programs are always evolving, both inside and outside the client’s organization. And of course, the goal is a strong business partnership that stands the test of time and naturally leads to same-account revenue growth.

The initial goal when taking on an account in transition is to create an enterprise translation program based on KPIs drawn from localization industry standards. For example, clients with a multiple-vendor model without centralized systems tend not to have integrated terminology management and little to no Translation Memory maintenance. For the sake of a client’s content, that content needs to be leveraged across all translation vendors. Otherwise, the client’s translated content can contain inconsistencies, which could result in increased costs and poor translation quality. This is often referred to in the localization industry as one of the steps of taking your clients through a localization maturity model and continuum. Client Management always needs to be prepared with a plethora of solutions: cyclical business review meetings, glossary maintenance, value-add linguist participation in terminology management, and on-site client-vendor localization strategy meetings. Good communication between key stakeholders is essential to a productive collaboration and business partnership.

Below are some examples of how to realign client localization accounts:

Step 1: Define critical challenges – Assess how to produce high quality translations for specific market needs

Step 2: Avoid defensive retort or rationalizations – Identify only issues needing fixes.

Step 3: Agree that any account realignment is not done in one meeting – Outline goals and schedule follow-up sessions as a systematic assessment of the account.

Step 4: Analyze project resources – Determine areas for improvement (People, Tools, and Processes).

Step 5: Develop transfer of knowledge – This includes TMs, glossaries, style guides, and quality review of legacy translations.

These initiatives are part of a wider industry trend from providing services to providing solutions. There is always a strategic need for translation quality in order to contend in competitive markets, and localization service providers need a multi-layered understanding of a client’s business needs so they can provide solutions tailored to those specific localization business needs. Communication is one of the building blocks of a fruitful business partnership, along with assembling the best team to deliver localization services coupled with the right business objectives, the right tools and the best localization business practices.

Does technology threaten translation?

by Dina Kessaniotou, Project Coordinator at Commit

The question of whether technology threatens translation depends on many different factors, and, basically, on how people conceive its purpose. The answer is indissolubly tied to how effectively the involved parties can leverage its advantages, identify its disadvantages and set the limits.

If we take a step back and consider when technology first impacted the translation process, we can see the big benefits it has brought about: the broad use of the Internet, even if not a translation-specific tool, resulted in a tremendous change in translation compared to the old-fashioned, paper-based ways, in terms of quantity, speed and quality of search. There was an exponential increase in the volume of information available to linguists. Search became much easier and much more effective, as huge amounts of data, with multiple possibilities for customization, were made instantly and directly accessible.

At a later stage, translation-specific technology, namely CAT tools, offered many valuable advantages to all players in the translation production cycle. Linguists were able to accumulate their knowledge and previous research effort, store it, and organize it in a way that allowed them to easily retrieve and reuse it in the future. They could therefore eliminate repetitive work, increase their speed, reduce turnaround times for their clients and, of course, maintain consistency – one of the most painful tasks in many types of content. This means savings in time and costs for all parties involved in the translation process. It also means the ability to focus on brand-new content. As there are still huge volumes of untranslated content, clients will normally be more willing to push this content into production in the near future. This is what happened during the last few years and will probably continue to happen, given that major providers in different fields of specialization have realized the importance of localization. Based on the above, translation technology is clearly a faster way to growth.
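The reuse described above rests on fuzzy lookup against stored segment pairs, which is the kernel of any translation memory. A minimal sketch using Python’s standard difflib (the segments, the French translations and the 75% threshold are all illustrative):

```python
from difflib import SequenceMatcher

# A toy translation memory: source segment -> stored translation.
TM = {
    "Press the power button to turn on the device.":
        "Appuyez sur le bouton d'alimentation pour allumer l'appareil.",
    "Remove the battery before cleaning.":
        "Retirez la batterie avant le nettoyage.",
}

def tm_lookup(segment, threshold=0.75):
    """Return (similarity, stored translation) for the best fuzzy TM hit, or None."""
    best = None
    for source, target in TM.items():
        score = SequenceMatcher(None, segment.lower(), source.lower()).ratio()
        if score >= threshold and (best is None or score > best[0]):
            best = (score, target)
    return best

hit = tm_lookup("Press the power button to turn off the device.")
if hit:
    score, suggestion = hit
    print(f"{score:.0%} match: {suggestion}")
```

Real CAT tools use more sophisticated similarity measures and index the TM for speed, but the principle is the same: surface similarity above a threshold yields a reusable suggestion, so previously translated work never has to be redone from scratch.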

The natural evolution of CAT tools was the development of Machine Translation systems. At this point, and especially at the earliest stages of development, the usefulness of technology started to be questioned. Many linguists thought it really threatened translation, as it aimed to replace the human brain. In fact, there is still no machine that can catch all the nuances and intangible elements of a language and adapt them to a different language. Even the most “flat” texts evoke specific feelings and emotions that should be properly conceived and transferred. So there is no way for machines to “threaten” translation. This doesn’t mean that MT technology can’t be fruitful, especially if linguists are constantly involved in MT development. Instead of being skeptical about machines, we should rather make them work for us. What might need to change is the way linguists offer their knowledge. Some years ago, it might have been difficult to perceive how CAT tools would increase efficiency and profitability, not only for clients but also for linguists. Nowadays, a considerable number of linguists cannot imagine their lives without tools.

The fundamental purpose of technology is to be continuously aligned with the challenges of the market and contribute dynamically to the linguists’ efforts for high quality services – and not just to cut costs by delivering automated results in one step. The only thing that can downgrade its usefulness is the lack of understanding of its real mission. When used effectively, technology can bring exclusively positive results and is really a valuable and profitable investment for all involved parties.

ELIA Together 2018 – In review!

by Effie Salourou, Customer Operations Manager at Commit

ELIA Together, the premium event that brings together language service companies and freelance linguists, took place last week, and we couldn’t have been more excited! You see, it was hosted in our hometown, Athens, and we got to welcome and meet old and new business partners, colleagues and friends.

The venue

The event was hosted at the Megaron Athens International Conference Centre (MAICC), undoubtedly a stunning, state-of-the-art venue for conferences and events. With three different halls covering the three tracks of the conference (Specialisation, Trends and Technology), there was a session for every taste!

The food

You can’t go wrong with Greek food! The menu on both days included fresh salads, mouth-watering appetizers, typical dishes for meat lovers, lots of options for vegetarians, and luscious desserts!

The program

The third edition of Together, themed Specialise to Excel, featured 31 different sessions. Here are some of the sessions we managed to attend:

  • Óscar Jiménez Serrano gave the keynote speech on Technology disruption in translation and interpreting, citing many successful examples (and some not so successful ones) from his personal career.
  • Wolfgang Steinhauer’s session had a very intriguing title, as he promised to show us how to drastically increase our productivity and manage to translate 10,000 words per day! His method and point of view were very interesting, and this is something we will definitely investigate further.
  • Another informative session was the one presented by Sarah Henter, which was an introduction to clinical trials. She focused on what makes the linguistic work on clinical trials so special, what kind of texts and target audiences there are and what knowledge linguists need to acquire in order to efficiently work in this area.
  • Josephine Burmester and Jessica Mann gave a presentation on Marketing localization and the complexities of this field. They gave very vivid examples taken from the German advertising industry and showed us how something global can become local (or not!).
  • Daniela Zambrini focused on the purposes of Simplified Technical English, illustrating the structure of the ASD-STE100 Specification and its advantages for translators and technical authors. This session was quite interactive since at the end we had to re-write sentences according to the Specification.
  • If you wanted to learn more about patent translation, you had to attend Timothy Hodge’s presentation called “You don’t need to be a rocket scientist to translate patents”. Showing interesting facts and examples from our everyday life, he gave us an insight into the life of a patent translator, along with some tricks for finding and using the right terminology when translating a patent document.
  • This year, Commit presented a session as well! Our CEO Vasso Pouli addressed an important point about specialisation: the huge value we can add by combining vertical, task and technology knowledge. She made an interesting point by showing how we can expand our localization services by adding new skills to our portfolio.

Our booth/our team

Commit had a booth and we got to showcase our new corporate image and marketing material. We got to meet and greet lots of familiar faces as well as new business contacts that we hope will lead to fruitful collaborations. We would like to thank everyone who visited our booth and, of course, the ELIA organization that made this conference possible. Ευχαριστώ (thank you)!

Linguistic validation services in the Life Sciences localization industry

by Nicola Kotoulia, Project Coordinator at Commit

Among the many specialized localization services available in the Life Sciences sector, we also come across those referred to as Cognitive debriefing, Backtranslation & Reconciliation, and Readability testing. How familiar are you with these methods? What does each mean, why is it required and what does it entail?

Translation errors can change the meaning of important content in clinical trial settings, resulting in medical complications or the rejection of an entire clinical research project. Ambiguity in translated health questionnaires or instruments can mean that items or questions are interpreted in more than one way, jeopardizing patient safety and clinical trial data integrity. Unclear and hard-to-use translated drug leaflets mean that users may not be able to make safe and informed decisions about their medicines.

In order to help avoid such hazards, IRBs, medical ethical committees, regulatory authorities and applicable legislation require that validation methods in accordance with FDA and ISPOR (International Society for Pharmacoeconomics and Outcomes Research) guidelines are put in place for translated documentation, such as Patient Reported Outcomes (PROs), Clinician Reported Outcomes (ClinROs), Quality of Life (QOL) questionnaires and package leaflets (PL) of medicinal products.

Cognitive debriefing (also known as pilot testing) is a qualitative method for assessing respondents’ interpretation of an assessment, using a small sample of patients. It helps determine whether respondents understand the translated questionnaire in the same way the original would be understood, and tests the target audience’s comprehension of the translation. The goal is to ensure that data collected from PROs are comparable across the various language groups involved in trials. Steps of the process include:

  • Developing a debriefing protocol tailored to the target questionnaires/instruments, subject pool, mode of administration, anticipated problem items etc.
  • Recruiting participants, including in-country professionals experienced in interviewing techniques and patients who match the target population.
  • Conducting the interview (in person or otherwise) during which respondents complete the questionnaire/instrument and answer questions to explain their understanding of each question or item. They restate in their own words what they think each translated item means. This way the interviewer discovers errors and difficulties and locates points that are confusing or misunderstood.
  • Generating a report with demographic and medical details of the interviewees, a detailed account of patients’ understanding of all items, including information about the number of subjects interviewed, their age, time for completing the task and any difficulties that came up. It may also include investigator recommendations or solutions for resolving confusion or difficulties.
  • Review and finalization, during which a project manager checks the report’s completeness and ensures that the detected problems are addressed, making revisions as needed for clear, precise and well-understood final translations.
  • Creation of a summary report, in which the service provider details the methodology used as well as the results of the cognitive debriefing.

Backtranslation and reconciliation is a very effective and stringent process that provides additional quality and accuracy assurance for sensitive content, such as Informed Consent Forms (ICFs), questionnaires, surveys and PROs used in clinical trials. It is a process for checking the faithfulness of the target text against the source, focusing mainly on conceptual and cultural equivalence and less on linguistic equivalence.

In a back translation, the translated content (the forward translation) is translated back into the original language by a separate, independent translator. The back translator must be a native speaker of the source language and have an excellent command of the target language. They should stick more closely to the source than they would for a regular translation, in order to accurately reflect the forward translator’s choices, without attempting to explain or clarify confusing statements or to produce a “polished” output.

The next step, “reconciliation”, refers to the process of noting any differences in meaning between the two source-language versions. The original text is compared to the back-translated text, and any discrepancies are recorded in a discrepancy report. Discrepancies may be due to ambiguity in the source text, errors introduced by the forward translator, or back translation errors. The reconciler flags issues such as differences in meaning, inconsistent or incorrect terminology, unsuitable register, missing or added information, and ambiguities or errors in the back translation. Several rounds of back and forth between the linguists may be needed to reconcile the versions, with edits and adjustments made as needed to optimize the final translation.
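As a rough illustration of the comparison step, candidate discrepancies can be triaged automatically before the human reconciler reviews them. The sketch below (function name, segment data and the 0.8 threshold are invented for the example) flags original/back-translated pairs with low surface similarity; actual reconciliation remains a human judgment about meaning, not string distance:

```python
import difflib

def discrepancy_report(original_segments, backtranslated_segments, threshold=0.8):
    """Flag segment pairs whose surface similarity falls below the threshold.

    Each flagged entry is a candidate for the reconciler's discrepancy report;
    pairs above the threshold may still hide meaning shifts, so this is only a triage aid.
    """
    report = []
    pairs = zip(original_segments, backtranslated_segments)
    for idx, (orig, back) in enumerate(pairs, start=1):
        score = difflib.SequenceMatcher(None, orig.lower(), back.lower()).ratio()
        if score < threshold:
            report.append({
                "segment": idx,
                "original": orig,
                "backtranslation": back,
                "similarity": round(score, 2),
            })
    return report

# Example: segment 2 diverges noticeably and gets flagged for human review.
print(discrepancy_report(
    ["Take one tablet daily.", "Take with food."],
    ["Take one tablet daily.", "Consume together with a meal."],
))
```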

Readability testing in the pharmaceutical field verifies that the medical information contained in a drug leaflet is usable by potential users of the medication, that is, that they can understand and act upon the information provided. It is a critical step in the process of designing product literature.

Since 2005, manufacturers of medicinal products have been legally required to have their patient information leaflets (PILs) readability-tested in order to acquire product approval. According to Directive 2004/27/EC, these leaflets must be “legible, clear and easy to use”, and the manufacturer has to submit a readability test report to the authorities.

Readability testing may be carried out by the sponsor or CRO, or a language service provider undertaking the localization of the documentation. The process steps can vary, but stages may include:

  • Preparation of the PL, during which the text of the leaflet is carefully edited and checked, spelling and grammatical errors are corrected, and sentences are rephrased to ensure compliance with the appropriate EMA template.
  • Drafting of questionnaires covering the most important details of the product and its use. These questions must be answered correctly by any user to ensure correct use of the product.
  • Pilot testing for assessing the prototype in terms of clarity, simplicity, safety, non-ambiguity, etc. Results are used to further revise the leaflet.
  • Actual readability testing, conducted with subjects of different ages who are native speakers of the language of the leaflet. Participants are interviewed on key questions about the product. They should be able to answer most questions correctly, and no question should consistently cause problems. The goal is to achieve 90% correct responses.
  • Generating reports that detail the test results, based on which final edits are made.
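The 90% target above comes down to a simple per-question tally across participants. The following sketch (the data structure and function name are invented for illustration, not a regulatory procedure) aggregates answers and flags questions that consistently cause problems:

```python
def score_readability_test(answers, target=0.90):
    """Score a readability test.

    answers: one dict per participant, mapping question id -> True (correct) / False.
    Returns overall and per-question correctness, plus questions below the target rate.
    """
    questions = answers[0].keys()
    # Share of participants who answered each question correctly
    per_question = {
        q: sum(a[q] for a in answers) / len(answers) for q in questions
    }
    overall = sum(per_question.values()) / len(per_question)
    # Items consistently causing problems would need the leaflet to be revised
    problem_items = [q for q, rate in per_question.items() if rate < target]
    return {
        "overall": overall,
        "per_question": per_question,
        "problem_items": problem_items,
        "passes": overall >= target and not problem_items,
    }
```

For example, with ten participants where one misses a single question, the overall rate is 95% and no individual item falls below the target, so the leaflet would pass this (simplified) check.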

The above processes provide an additional safety net for clients in the clinical and pharmaceutical industries, helping them meet regulatory requirements and allowing them to focus on their registration and marketing preparation plans.