Computer Science Graduate Seminar

Monday, July 6, 2020, 11:00am

Alignment-Based Neural Networks for Machine Translation

  • Speaker: Tamer Alkhouli, M.Sc.



After more than a decade in which phrase-based systems dominated the machine translation scene, neural machine translation has emerged as the new paradigm. Not only does state-of-the-art neural machine translation demonstrate superior performance compared to conventional phrase-based systems, but it also presents an elegant end-to-end model that captures complex dependencies between source and target words. Neural machine translation offers a simpler modeling pipeline, making its adoption appealing for both practical and scientific reasons. Concepts like word alignment, a core component of phrase-based systems, are no longer required in neural machine translation. While this simplicity is viewed as an advantage, disregarding word alignment can come at the cost of less controllable translation. Phrase-based systems generate translations composed of word sequences that also occur in the training data. Neural machine translation, on the other hand, is more flexible and can generate translations without exact correspondence in the training data. This flexibility enables such models to produce more fluent output, but it also leaves the translation free of pre-defined constraints. The lack of an explicit word alignment makes it potentially harder to relate generated target words to the source words. With the wider deployment of neural machine translation in commercial products, there is increasing demand to give users more control over the generated translation, such as enforcing or excluding the translation of certain terms.

This dissertation takes a step towards addressing controllability in neural machine translation. We introduce alignment as a latent variable in neural network models and describe an alignment-based framework for neural machine translation. The models are inspired by the conventional IBM and hidden Markov models used to generate word alignments for phrase-based systems; however, they build on recent neural network architectures capable of capturing more complex dependencies. In this sense, this work can be viewed as an attempt to bridge the gap between conventional statistical machine translation and neural machine translation. We demonstrate that introducing alignment explicitly maintains neural machine translation performance while making the models more explainable through improved alignment quality. We show that such improved alignment can be beneficial in real tasks where the user wants to influence the translation output. We also introduce recurrent neural networks into phrase-based systems in two different ways. First, we propose a method to integrate complex recurrent models, which capture long-range context, into the phrase-based framework, which considers only short context. Second, we use neural networks to rescore phrase-based translation candidates, and we compare this rescoring to the direct integration approach.
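For readers unfamiliar with the latent-variable treatment of alignment mentioned above, the classical hidden Markov alignment model that inspires this line of work factorizes the translation probability as sketched below. The notation is a standard textbook form, not necessarily the talk's own:

```latex
% Source sentence f_1^J, target sentence e_1^I, and a latent alignment
% a_1^J, where a_j is the target position aligned to source position j.
% The translation probability marginalizes over all alignments:
p(f_1^J \mid e_1^I)
  = \sum_{a_1^J} \prod_{j=1}^{J}
      \underbrace{p(a_j \mid a_{j-1}, I)}_{\text{alignment (jump) model}}
      \cdot
      \underbrace{p(f_j \mid e_{a_j})}_{\text{lexicon model}}
```

In the alignment-based neural framework, the two component distributions are parameterized by neural networks rather than count-based tables, so the alignment remains an explicit, interpretable quantity while richer context can be conditioned on.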


The computer science faculty invite all interested parties to attend.