Here are a few thoughts on the Agile translation mindset as it relates to the Localization industry. If you read about Agile Management, you often hear that agile is not a process or a standard, but rather a mindset organized around four core values:
- people over process;
- working product over documentation;
- customer collaboration over contract negotiation; and
- responding to change over following detailed plans.
The agile principles that grew out of the software development industry have become applicable to other business situations because they are not prescriptive about how they should be applied. Agile principles are more about organizing people around a set of values to facilitate rapid delivery and frequent, meaningful feedback. So, could agile translate into an Agile Translation mindset?
1. Agile as opposed to Waterfall
Despite Agile rapidly becoming a ubiquitous term in business management, there are many different approaches to agile management, and not every project or business situation lends itself to agile principles. But the rise of agile in industries other than software development has proven that agile is a mindset that can deliver meaningful innovation, especially when needs are changing.
That is not to say that the Translation and Localization industry welcomes rapidly changing needs. Traditional translation, editing and proofing models are often most efficient around a final source state where everything is “locked in” for translation. One challenge in the localization and translation industry, however, is that translation is often seen as an after-service: companies typically do not engage with translation until everything has been developed. Traditional waterfall (sequential, engineering-design) thinking also affects localization and translation, from software that is developed uniquely around an English context to interfaces that are not built for language switching. Many early decisions in development can have an effect on the success, implementation and turnaround time of the localization process.
2. Need for Rapid Development and Testing in Agile Translation
Particularly for software and systems, if the system is required to support other languages, localization vendors and the end users in those markets should be considered stakeholders in the process. The role of the product owner is to take into account the needs of not only the end user or internal user, but also the stakeholders who will have to work with the product or develop it further. This is not in conflict with agile principles. The fact that agile favors agility over detailed planning doesn’t mean that future needs should be ignored. Instead, the need for localization may fit within the product vision. There are various ways to prioritize work in agile, and a minimum viable product might at some point include a localization component that can be tested on a small scale before moving further.
Another aspect of localization is relevancy. What good is a system developed for a particular market if it doesn’t support common practices in other markets? This can also be a question of standardization across practices, so it’s not always a development problem. But localization will invariably cause systems to break down if relevancy is not planned for. If a particular solution is developed around a common problem, does the problem easily scale to other markets? At what point in the process does context become a problem? In the case of website development, the layout may need some tweaking, and that is a problem that can be solved later. But what about eCommerce or content management? At what point do you decide how your system will support other encodings and languages, and when do you decide to test?
3. Translation Memory Technology in Agile Translation
One challenge in rapid delivery of translations is how to deal with consistency of language. Consistency and quality are at the heart of translation management. Agile principles do not mandate doing poor work quickly. Instead, the work has to be meaningful, or valuable and qualitatively viable, in order to be testable.
How does that work in Translation Technology? The translation and localization industry has tried to solve translation workflows across teams by putting shared Translation Memory assets into the cloud for everyone to access. One standard policy for this type of software is to update translation assets as soon as they are delivered by a team member. This allows everyone on the team to have the most recent translations available to leverage in future iterations.
However, that does create problems when one team member introduces changes to terminology that need to be reviewed and approved, and subsequently implemented into other products. The management of Translation Memory and Termbase (glossary) assets can be challenging and needs to be well planned, with specific roles and responsibilities; otherwise the translation assets become a problematic repository of inconsistent and confusing terminology. As a matter of prioritization, it’s common for localization to focus mostly on the language of operational and navigational elements, because inconsistent terminology there can lead to an ineffective experience. Translation could therefore, by extension, be part of the UX development team’s work in determining need.
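The update-then-review tension described above can be sketched as a small data model: new translations are stored as soon as a team member delivers them, but only reviewer-approved segments are leveraged into future work. This is a minimal illustrative sketch, not the API of any particular TM product; the class and method names are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Segment:
    source: str
    target: str
    status: str = "pending"  # stays "pending" until a terminology reviewer approves


class TranslationMemory:
    """Minimal shared TM: submissions are visible immediately,
    but only approved segments are reused (leveraged)."""

    def __init__(self):
        self.segments = {}

    def submit(self, source, target):
        # Team members push translations as soon as they deliver them.
        self.segments[source] = Segment(source, target)

    def approve(self, source):
        # A reviewer signs off on terminology before it propagates.
        self.segments[source].status = "approved"

    def leverage(self, source):
        # Only approved matches are reused, keeping terminology consistent.
        seg = self.segments.get(source)
        return seg.target if seg and seg.status == "approved" else None
```

In this sketch, `tm.leverage("Save")` returns nothing until a reviewer calls `tm.approve("Save")`, which is one simple way to keep an unreviewed terminology change from spreading into other products.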
4. Updating Translations
We’ve written before about the challenges of updating translations to meet new needs. Agile thinking can help break updates into well-managed iterations that provide rapid translations with quality in mind (selecting the right translators) without negatively affecting the future state (through inconsistent terminology). Data hygiene is important, especially in experimental and rapid-paced development. The challenge of updating past materials to meet new needs lies in how you deploy these updates. Do you develop and re-translate, or do you patch up translations? What if the new content is not easy to extract, update and return to the system? Do we learn more from making smaller updates in an iterative process or from doing all the updates at once?
Here’s one small example that came to mind about learning from the update process. While it is not related to software, you can imagine how complex updates could be organized in different ways. In updating annual brochures for Open Enrollment, most of the changes were consistent across the brochures for each market (State), with a few exceptions. There are different approaches to prioritizing these updates.
- You could decide to update one version and then apply that version to all the other State versions, but that requires you to identify the changes between the different State versions, including changes that were made before. The benefit is that you are working with the latest design and you can create a fairly relevant product more quickly.
- The other option is to process each file individually and make updates, but that can be costly, as it means typesetting each version and applying repetitive updates to each one. However, it might be a more accurate process if you are worried about missing specific updates that were made to particular files.
- Or you could update the old versions with new content and apply the new design. Most of the time, the last option makes the most sense, unless the updates are substantial and the differences between versions are relatively minor. One benefit of updating each version one by one was that systematically implementing roughly the same updates to each version also required “testing” (quality assurance) of each version as it came through the pipeline. Any issues not previously caught could be backtracked to earlier versions, and any issues found during testing made the update of the next version more accurate, improving the overall result. But it does require every product to wait (waste) until the whole project is complete before release.
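The first option above hinges on identifying the changes between a base version and each State version. As a small illustrative sketch (not tied to any particular tool), Python’s standard `difflib` can surface those differences so the same update can be applied, and QA’d, consistently; the brochure text below is hypothetical.

```python
import difflib


def changed_lines(base: str, variant: str) -> list[str]:
    """Return the added/removed lines between a base brochure and a
    State version, ignoring the diff header lines."""
    diff = difflib.unified_diff(
        base.splitlines(), variant.splitlines(), lineterm=""
    )
    return [
        line
        for line in diff
        if line.startswith(("+", "-"))
        and not line.startswith(("+++", "---"))
    ]


base = "Open Enrollment 2023\nDeadline: Dec 15"
california = "Open Enrollment 2023\nDeadline: Jan 31"
print(changed_lines(base, california))
# one removed line ("-Deadline: Dec 15") and one added line ("+Deadline: Jan 31")
```

Running the same comparison for every State version gives a checklist of what actually changed, which is exactly the information you need before deciding whether to re-translate, patch, or re-apply a design.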
Also taken from software development principles, the Global Communication Maturity Model (GCMM) evolved out of the Capability Maturity Model. Our model was created as an attempt to give organizations a way to fit and rate the level of their Localization and Translation practices within their existing processes. The goal of the GCMM is to give organizations a rough idea of their own practices, how those practices influence the translation and localization process, and what areas (we call them Readiness Areas) they could improve. How does this relate to Agile? In CMMI (Capability Maturity Model Integration), the goal is more to define system improvements (often referred to as the WHAT), whereas Agile (often paired with the Scrum methodology) is more about improving the agile process itself (HOW are we going to do things). In localization terms, an organization can think about what processes (or technology) can be internalized to solve the localization problem. With localization standards becoming more integrated into software (for instance, the use of XLIFF standards, as in WPML for WordPress) and further integration of automated systems like Translation Management Software or Project Management software, organizations are able to incorporate more localization practices earlier in the process.
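As a concrete illustration of the XLIFF integration mentioned above, here is a minimal sketch that reads source/target pairs out of an XLIFF 1.2 document using Python’s standard library. The sample document is a simplified, hypothetical fragment; real XLIFF files carried by tools like WPML contain much more metadata.

```python
import xml.etree.ElementTree as ET

# A simplified, hypothetical XLIFF 1.2 fragment for illustration.
XLIFF = """<?xml version="1.0"?>
<xliff version="1.2" xmlns="urn:oasis:names:tc:xliff:document:1.2">
  <file source-language="en" target-language="nl"
        datatype="plaintext" original="ui.txt">
    <body>
      <trans-unit id="1">
        <source>Save</source>
        <target>Opslaan</target>
      </trans-unit>
    </body>
  </file>
</xliff>"""

NS = {"x": "urn:oasis:names:tc:xliff:document:1.2"}


def extract_pairs(xliff_text: str) -> dict:
    """Map each source segment to its target (None if untranslated)."""
    root = ET.fromstring(xliff_text)
    pairs = {}
    for unit in root.iterfind(".//x:trans-unit", NS):
        source = unit.find("x:source", NS).text
        target = unit.find("x:target", NS)
        pairs[source] = target.text if target is not None else None
    return pairs


print(extract_pairs(XLIFF))  # {'Save': 'Opslaan'}
```

Because the format is a stable OASIS standard, this kind of extraction is what lets Translation Management Software hand segments to translators and merge the results back automatically, which is precisely the early-integration opportunity the GCMM is meant to surface.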
Agile Principles in Translation
There isn’t a lot of substantial information out there on Agile Translation. The Agile Manifesto, now almost 20 years old, has evolved within software development and into other fields as well, but implementing it is not a guaranteed formula for success, and larger teams require more process and documentation than smaller teams. One major role in agile management is to eliminate the impediments that stifle agile practices. It is also true that agile benefits greatly from its unique constraints, chiefly the limited focus of small iterations (typically two weeks) that necessitate quick but valuable and viable outcomes.
Localization and Translation teams have often been responsible for creating their own workflows for dealing with outside technological innovations. Innovations in content management have led to integrated standards (XML and XLIFF, for instance) with localization and translation features that enable a more globalized development cycle (for that, you need Global Competence). However, because localization deals with workflows that are specifically designed to make outcomes more predictable, organized and of higher quality, these integrations have proven quite difficult (see our blog about testing XLIFF).
If organizations are looking to be more efficient in developing their global systems using agile principles, it would be beneficial for solution owners and development teams to engage with localization professionals to understand how to incorporate the localization workflow into their agile practices. This could be an exciting development for the Localization industry, further increasing the need for expertise and problem solving in earlier stages of development and helping agile teams be more efficient with their resources. It also challenges the Localization industry to think more holistically about the applications of translation software, translation talent and automation solutions in various situations. The conversation around translation automation has long been a contentious debate about the effect of machine translation on the need for professional human translation, and it’s not clear whether the two sides necessarily need to bridge that gap (read about my thoughts on translation productivity and machine translation). Maybe it’s more important to understand how the localization industry can benefit from machine translation and automated workflows to free up development resources, while at the same time looking for agility in translation teams to deliver high quality in more iterative processes with well-established Translation Memory policies and processes.