How to negotiate quality expectations with clients


Translation is an inherently subjective process, which means that, by default, the quality of translated content is subjective as well. This poses a problem for language service providers (LSPs) attempting to guarantee a quality standard to their customers.

An LSP can’t just say, “We provide quality translations,” because what does that really mean? You and your customer may have drastically different ideas.

Some components of quality come closer to being objective than others: Does the translated sentence convey the same meaning as the original? Are there grammatical errors? But as Web-Translations explains,

“Just because a translation accurately conveys the intended meaning of the original source text does not necessarily make it good quality — a quality translation is more than just maintaining meaning — it has to meet the defined specifications and be fit for purpose.”

This dilemma is not new. Since the beginning of the industry, translation providers and buyers have had to negotiate which specifications they will use to measure the quality of the work. But the rising dominance of machine translation is complicating the process further.

Raw machine-translated content, without human editing, can serve as a fast, cheap way to translate content that would previously have been outside a customer’s budget. As it becomes viable to translate new and larger volumes of content, we are shifting to a more flexible spectrum of quality standards.

Ultimately, the quality that you want is the quality that your customer is happy with. While the customer is the final arbiter of quality, an LSP can often guide these requirements and encourage specific quality assessment processes.

Setting expectations with your customers

Discuss which metric to use

There are many models designed to measure the quality of a translation, most of which are based on error typology and largely ignore style. Certain models are popular in particular industries or for particular content types. For instance, the SAE J2450 metric was developed by the Society of Automotive Engineers (SAE), and the LISA QA Model was created specifically for the hardware and software industries.

Multidimensional Quality Metrics (MQM) is a framework that assists with the structuring and application of quality assessment. MQM is not a metric itself, “but rather provides a comprehensive catalog of quality issue types, with standardized names and definitions, that can be used to describe particular metrics for specific tasks.”

Similarly, the TAUS Dynamic Quality Framework (DQF) was specifically designed to handle multiple levels of quality standards for machine translation processes.
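To make this concrete, here is a minimal sketch of how an error-typology metric in the spirit of these frameworks can be turned into a score: each error gets an issue type and a severity weight, and penalty points are summed and normalized by word count. The issue types, weights, and pass threshold below are illustrative assumptions, not official values from any framework; in practice, these are exactly the parameters you negotiate with your client.

```python
# Illustrative error-typology scoring sketch. The severity weights and
# threshold are assumptions for demonstration, not official MQM values.

SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 10}

def quality_score(errors, word_count):
    """Score = 100 minus penalty points per 100 words, floored at 0."""
    penalty = sum(SEVERITY_WEIGHTS[severity] for _issue, severity in errors)
    return max(0.0, 100.0 - (penalty / word_count) * 100)

# A 250-word passage with two minor terminology issues and one major
# accuracy (mistranslation) issue:
errors = [("terminology", "minor"), ("terminology", "minor"), ("accuracy", "major")]
score = quality_score(errors, word_count=250)
print(f"Score: {score:.1f}")          # Score: 97.2

PASS_THRESHOLD = 95.0                 # assumed, agreed per project
print("PASS" if score >= PASS_THRESHOLD else "FAIL")   # PASS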

For some projects, it may be more useful to measure quality based on context rather than on errors in the translation output. Mihail Vlad, VP of Machine Learning Solutions at SDL, offers several situation-based alternatives:

  • Quality for post-editing is measured by gains in translator productivity.

  • Quality for multilingual eDiscovery is measured by the accuracy of identifying the right documents.

  • Quality for multilingual text analytics is measured by how effectively the analyst can identify relevant information.

  • Quality for multilingual chat is measured by the feedback rating of the end customer.

LSPs can educate their clients on ways of measuring quality that can provide them more value, such as lower costs or higher end-user satisfaction levels.

Agree on internal and external review processes

In internal project-based evaluations, editors review and grade translators’ work as part of a project’s editing/proofreading stage. They may use a scoring system they’ve created from scratch or one derived from an existing model. Project managers can track these scores and set standards based on a translator’s average.
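As a sketch of what this tracking might look like, assuming a simple per-project 0–100 scoring model like the one above (the translator names and the 90-point minimum below are hypothetical):

```python
# Minimal sketch of tracking translators' per-project scores.
# The 90-point minimum is an assumed, negotiated standard.
from collections import defaultdict
from statistics import mean

project_scores = defaultdict(list)   # translator -> list of scores

def record_score(translator, score):
    project_scores[translator].append(score)

def meets_standard(translator, minimum=90.0):
    """True if a translator's average score meets the agreed minimum."""
    return mean(project_scores[translator]) >= minimum

record_score("translator_a", 96.5)
record_score("translator_a", 88.0)
print(mean(project_scores["translator_a"]))   # 92.25
print(meets_standard("translator_a"))         # True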

Some CAT tools and translation memory systems have one or more of these metrics built in and can be used to grade a translation.

Clients may also run their own project-based evaluations. In many cases, clients will trust the LSP implicitly based on its references and reputation. However, larger companies may use local distributors, such as local resellers or team members at their local branch, or hire third-party reviewers to audit the quality of an LSP’s work.

When working with external reviewers, it’s important that the agreed-upon expectations between an LSP and its client extend to the reviewers as well. Two common problems can arise when reviewers are not on the same page. First, I’ve seen local resellers introduce errors into a translation because they are not language experts. Second, especially with post-editing work on lower-priority content, an LSP and its client may have agreed on a lower standard of translation but not informed the reviewer. This can cause problems even when the LSP is providing exactly the services the client asked for.

Negotiate standards per project based on content type and price

Not all content needs to be translated and edited to meet the highest quality standard. With machine translation, massive amounts of content can be translated faster than ever before, but the quality of these translations can vary.

In other words, it may be good enough if low-priority content, such as website comments or forum posts, is understandable without being edited for fluency and grammar.

Especially when you use machine translation with varying levels of post-editing, you can work on multiple types of content for the same client using different pricing tiers and quality expectations.

Other internal quality assessments

While project-based assessments are where LSPs negotiate with their clients, other types of assessments matter in maintaining high-caliber services.

Initial assessments

When recruiting a new translator, LSPs will require a test translation to measure the translator’s general competence. Companies may use a simple pass/fail threshold or create a more elaborate sliding scale to distinguish their preferred vendors. The assessment is graded using an internally created or industry-standard metric; it may be helpful to use one of the same metrics you use on client work.

In a previous article on machine translation and post-editing, we discussed how to measure the quality of a machine translation engine, which is another type of non-project-based assessment.
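For reference, automatic metrics such as BLEU or chrF are a common way to benchmark an engine’s output against human reference translations. Below is a minimal sketch using the open-source sacreBLEU library; the sentences are placeholders, and a real evaluation would use a held-out test set of realistic size.

```python
# Score an MT engine's output against human references with corpus-level
# BLEU, using sacreBLEU (pip install sacrebleu). Sentences are placeholders.
import sacrebleu

hypotheses = [                    # the MT engine's output
    "The cat sat on the mat.",
    "Please restart the application.",
]
references = [[                   # one set of human reference translations
    "The cat sat on the mat.",
    "Please restart the app.",
]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU: {bleu.score:.1f}")  # higher is better, on a 0-100 scale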

Random assessments

It can be helpful to periodically submit samples of finished translations to an external editor or a different editing team for evaluation. If you consistently work with the same editors and your clients are not reviewing your work, you have no benchmark against which to gauge the quality of your output. LSPs can use random assessments to double-check the work of editors as well as translators and to ensure that their quality standard is consistently being met.

In summary, the industry’s relationship with quality is evolving alongside improvements in technology and changing customer needs. As Common Sense Advisory reported,

“One-size-fits-all quality models are insufficient to meet the variety of needs companies face today. A flexible model will allow you to tune processes without having to retool them every time.”