A Study of BERT for Context-Aware Neural Machine Translation

Xueqing Wu (University of Illinois Urbana-Champaign); Yingce Xia (Microsoft Research Asia)*; Jinhua Zhu (University of Science and Technology of China); Lijun Wu (Microsoft Research); Shufang Xie (Microsoft Research Asia); Tao Qin (Microsoft Research Asia)

Abstract

Context-aware neural machine translation (NMT), which aims to translate sentences using contextual information, has attracted much attention recently. A key problem for context-aware NMT is how to effectively encode and aggregate the contextual information. BERT has been shown to be an effective feature extractor for natural language understanding tasks, but it has not been well studied for context-aware NMT. In this work, we study how to leverage BERT to encode the contextual information for NMT and explore three commonly used methods for aggregating the contextual features. We conduct experiments on five translation tasks and find that concatenating all contextual sentences into a single longer sequence and encoding it with BERT yields the best translation results. In particular, we achieve state-of-the-art BLEU scores on several widely investigated tasks, including IWSLT'14 German-English translation, News Commentary v11 English-German translation, and OpenSubtitle English-Russian translation.
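
To make the best-performing strategy concrete, the sketch below shows one way the concatenated context encoding could look. This is a minimal sketch under stated assumptions, not the paper's implementation: the checkpoint name, the use of [SEP] as a sentence separator, and the HuggingFace transformers API are illustrative choices, and in the paper the resulting features are further aggregated into the NMT model rather than used directly.

```python
# Minimal sketch (assumptions noted above): concatenate the preceding
# context sentences and the current source sentence into one longer
# sequence, then encode it with BERT to obtain contextual features.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
bert = BertModel.from_pretrained("bert-base-multilingual-cased")

context = ["Er kam gestern an.", "Alle waren überrascht."]  # preceding sentences
current = "Heute hält er einen Vortrag."                    # sentence to translate

# Join context and current sentence into a single sequence; [SEP]
# marks the sentence boundaries inside the concatenation.
text = " [SEP] ".join(context + [current])
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    outputs = bert(**inputs)

# One contextual feature vector per subword token; these would then be
# aggregated by (e.g., attended over by) the NMT encoder/decoder.
context_features = outputs.last_hidden_state  # shape: (1, seq_len, hidden_size)
print(context_features.shape)
```

A design point this sketch reflects: encoding the context and the current sentence jointly lets BERT's self-attention relate tokens across sentence boundaries, which is what distinguishes this strategy from encoding each context sentence separately and aggregating afterwards.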