Tarjama turns to AWS machine learning to scale and innovate faster in language translation

DUBAI, UAE – 2021 – Tarjama, a smart language solutions company, today announced that it is migrating the vast majority of its machine learning (ML) environment to Amazon Web Services (AWS). The move enables Tarjama to take full advantage of Amazon SageMaker, a fully managed service for building, training, and deploying ML models, to provide relevant real-time language technology for translators. It will also accelerate the growth of the company’s language technology offering and its ability to scale services rapidly, with greater flexibility to experiment with ML models.

Tarjama’s growing range of language technologies, which supports the translation of more than 55 languages, includes advanced neural machine translation (NMT), a translation management system (TMS), optical character recognition (OCR), auto-subtitling, and more. In late 2020, Tarjama started using Amazon SageMaker to train and deploy deep learning production models for its machine translation solution. This brought greater scalability: the same team was able to train three times as many models as before and respond faster to client demands. Amazon SageMaker enabled the Tarjama team to spin up ML projects as quickly and as often as needed, with no limits on the number of training jobs and experiments, and gave its ML team an integrated development environment with an intuitive, intelligent interface.
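For illustration, the sketch below shows how a training job of this kind might be launched with the SageMaker Python SDK. It is a minimal example only; the entry-point script, S3 path, instance type, and hyperparameters are hypothetical placeholders, not Tarjama’s actual configuration.

```python
# Illustrative sketch: launching a managed SageMaker training job with the
# SageMaker Python SDK. Script name, S3 path, and hyperparameters are
# hypothetical placeholders.
import sagemaker
from sagemaker.pytorch import PyTorch

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # IAM role with SageMaker permissions

estimator = PyTorch(
    entry_point="train_nmt.py",        # hypothetical training script
    role=role,
    instance_count=1,
    instance_type="ml.p3.2xlarge",     # single-GPU training instance
    framework_version="1.8.1",
    py_version="py3",
    hyperparameters={"epochs": 10, "batch-size": 32},
)

# Each call starts an independent, fully managed training job, so many
# experiments can run concurrently without queuing on a local server.
estimator.fit({"train": "s3://example-bucket/nmt/train/"})
```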

The Tarjama ML team can now train deep learning production models for additional natural language processing (NLP) tasks, as well as more candidate models for machine translation. The impact of the enhanced models was seen in the linguistics department, where linguists’ productivity increased by an average of 40% over three months, from 250 words per hour to 350 words per hour. Managed Spot Training in Amazon SageMaker provides another significant benefit, allowing Tarjama to take advantage of unused compute capacity on AWS and reducing ML model training costs by 70% compared with on-demand instances.
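As a rough sketch of how Managed Spot Training is switched on in the SageMaker Python SDK, the example below enables Spot capacity with a checkpoint location so an interrupted job can resume. The role ARN, checkpoint path, and time limits shown are assumptions for illustration only.

```python
# Illustrative sketch: enabling Managed Spot Training on a SageMaker estimator.
# Role ARN, checkpoint S3 URI, and time limits are hypothetical placeholders.
from sagemaker.pytorch import PyTorch

estimator = PyTorch(
    entry_point="train_nmt.py",             # hypothetical training script
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder role
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    framework_version="1.8.1",
    py_version="py3",
    use_spot_instances=True,                # train on spare (Spot) capacity
    max_run=4 * 3600,                       # max training time, in seconds
    max_wait=8 * 3600,                      # max total time incl. waiting for Spot
    checkpoint_s3_uri="s3://example-bucket/checkpoints/",  # resume after interruption
)
```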

Prior to using Amazon SageMaker, Tarjama relied exclusively on its in-house data center for model training and experimentation, a long and complex process that required excessive time and cost for server maintenance and limited scalability because several training jobs could not run concurrently.

Abdallah Nasir, Machine Learning Engineer at Tarjama, said, “Amazon SageMaker has accelerated our team’s productivity across the entire lifecycle of our ML projects – from data preparation to model creation and training. Compared to managing and using a local server with limited GPUs, Amazon SageMaker enables us to work more efficiently as a team, delivering high-quality results at scale while assuring us that we are getting the highest levels of security.”

Vinod Krishnan, Head of Middle East and North Africa at AWS, said, “Tarjama has been at the forefront of using AWS Cloud technologies to develop leading translation services in the Middle East. Today, its team is taking advantage of the benefits of machine learning by using Amazon SageMaker to accelerate the growth of its language technologies and gain the flexibility to scale faster and experiment more often. We look forward to continuing to support Tarjama as it builds more innovative language solutions for its regional and global customers.”

About Tarjama: Tarjama is a leading smart language solutions company helping organizations scale rapidly with multilingual content of every format and language. Leveraging its line-up of innovative language technology along with its network of expert linguists, Tarjama delivers language solutions that meet international standards of quality, speed, and cost-efficiency. To find out more about Tarjama, visit www.tarjama.com.