Robust MT Evaluation with Sentence-level Multilingual Augmentation
論文.icon
Date
2022-02-01
Abstract
Automatic translations with critical errors may lead to misinterpretations and pose several risks for the user. As such, it is important that Machine Translation (MT) evaluation systems are robust to these errors in order to increase the reliability and safety of MT systems. Here we introduce SMAUG, a novel Sentence-level Multilingual AUGmentation approach for generating translations with critical errors, and apply this approach to create a test set for evaluating the robustness of MT metrics to these errors. We show that current state-of-the-art metrics are improving in their capability to distinguish translations with and without critical errors and to penalize the former accordingly. We also show that metrics tend to struggle with errors related to named entities and numbers, and that there is high variance in the robustness of current methods to translations with critical errors.
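SMAUG itself generates critical errors with multilingual generation models, but the evaluation protocol the abstract implies (corrupt a good translation with a meaning-altering error, then check whether the metric scores it strictly lower than the original) can be sketched with a toy number-swap perturbation. The sketch below is an illustrative assumption, not the paper's code: the names perturb_number and robustness_rate, and the metric(source, hypothesis) interface, are all hypothetical.
code:robustness_sketch.py
 import random
 import re
 
 def perturb_number(translation: str, rng: random.Random) -> str | None:
     """Introduce a critical error by swapping one number in the translation.
     Returns None if the sentence contains no number to perturb."""
     matches = list(re.finditer(r"\d+", translation))
     if not matches:
         return None
     m = rng.choice(matches)
     original = int(m.group())
     replacement = original
     while replacement == original:  # make sure the value actually changes
         replacement = rng.randint(0, max(10, original * 2))
     return translation[:m.start()] + str(replacement) + translation[m.end():]
 
 def robustness_rate(metric, pairs, seed: int = 0) -> float:
     """Fraction of corrupted translations that the metric penalizes, i.e.
     scores strictly below the uncorrupted translation. `metric` is any
     callable metric(source, hypothesis) -> float, higher meaning better."""
     rng = random.Random(seed)
     penalized = total = 0
     for source, translation in pairs:
         corrupted = perturb_number(translation, rng)
         if corrupted is None:
             continue
         total += 1
         if metric(source, corrupted) < metric(source, translation):
             penalized += 1
     return penalized / total if total else 0.0
A metric that is robust in the abstract's sense should drive this rate toward 1.0; per the paper's findings, number and named-entity errors are precisely where current metrics tend to fall short.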
What is it?
How does it improve on prior work?
What is the key to the technique or method?
How was its effectiveness validated?
Any points of discussion?
Which papers should be read next?
Authors