Unbabel Launches COMET for Machine Translation Evaluation
Unbabel, provider of a translation platform for multilingual customer service at scale, today released COMET (Crosslingual Optimized Metric for Evaluation of Translation), an open-source neural framework and metric for machine translation (MT) evaluation.
COMET stands to replace earlier MT evaluation metrics such as METEOR (Metric for Evaluation of Translation with Explicit ORdering) and BLEU (Bilingual Evaluation Understudy). Building on recent breakthroughs in large-scale cross-lingual neural language modeling, it captures the meaning similarity between texts with enough granularity to accurately predict human experts' judgments of translation quality.
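The core idea can be sketched in a few lines: unlike BLEU's surface n-gram matching, COMET encodes the source sentence, the MT output, and a human reference, then predicts a quality score from how semantically close they are. The toy below is only an illustration of that three-input design, not Unbabel's implementation or API; the character-trigram `embed` function stands in for the large cross-lingual neural encoder COMET actually trains, and the fixed weights in `quality_score` are arbitrary.

```python
# Toy illustration of a COMET-style metric: score an MT hypothesis using
# the source AND a reference, via a shared embedding space.
# All names here are hypothetical; the real COMET uses a trained
# cross-lingual neural encoder, not character trigrams.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Stand-in encoder: bag of character trigrams (padded, lowercased)."""
    padded = f"  {text.lower()}  "
    return Counter(padded[i:i + 3] for i in range(len(padded) - 2))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def quality_score(src: str, mt: str, ref: str) -> float:
    """Blend MT-reference and MT-source similarity into one score,
    loosely mirroring COMET's use of all three inputs (weights arbitrary)."""
    return 0.7 * cosine(embed(mt), embed(ref)) + 0.3 * cosine(embed(mt), embed(src))

good = quality_score("O gato está no tapete.",
                     "The cat is on the mat.",
                     "The cat is on the mat.")
bad = quality_score("O gato está no tapete.",
                    "A dog runs fast.",
                    "The cat is on the mat.")
assert good > bad  # the faithful translation scores higher
```

A real neural encoder would also reward paraphrases that share no surface trigrams with the reference, which is precisely the granularity gap COMET closes over BLEU.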
"We are launching COMET as an open-source, ready-to-use, trained model because it can greatly help drive and accelerate MT research and development to levels of accuracy not seen before. We believe that COMET should be adopted as a new standard measure for assessing the quality of MT systems across multiple languages," said Alon Lavie, vice president of language technologies at Unbabel, co-creator of METEOR and consulting professor at Carnegie Mellon University, in a statement. "Unbabel is deeply committed to maintaining its leadership in this space and removing the misconception that MT means low quality when it comes to translation."