Source and Target Contributions to NMT Predictions
This is a post for the paper
Analyzing the Source and Target Contributions to Predictions in Neural Machine Translation.
In NMT, the generation of each target token is based on two types of context: the source sentence and the prefix of the target sentence (the tokens generated so far).
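Formally, this is the standard autoregressive factorization: writing the source as $x$, the target as $y$, and the already-generated prefix as $y_{<t}$,

$$P(y \mid x, \theta) = \prod_{t=1}^{|y|} P(y_t \mid x, y_{<t}, \theta).$$

The question is how much each prediction $P(y_t \mid x, y_{<t}, \theta)$ draws on $x$ versus $y_{<t}$.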
We show how to evaluate the relative contributions of source and target to NMT predictions and find that:
- models suffering from exposure bias are more prone to over-relying on target history (and hence to hallucinating) than models in which exposure bias is mitigated;
- models trained with more data rely more heavily on the source and do so more confidently;
- the training process is non-monotonic, with several distinct stages.
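The paper obtains per-token attribution scores with layer-wise relevance propagation (LRP), under which the relevance assigned to all input tokens sums to a fixed total for each prediction. As a minimal sketch of just the final bookkeeping step, not the attribution method itself, the helper below (the names `src_relevance` and `tgt_relevance` are hypothetical; the scores themselves would come from LRP) turns raw per-token relevances into the source and target shares discussed above:

```python
def contribution_shares(src_relevance, tgt_relevance):
    """Split total relevance between the source and the target prefix.

    src_relevance: raw relevance scores, one per source token.
    tgt_relevance: raw relevance scores, one per target-prefix token.
    Returns (source_share, target_share); the two shares sum to 1.
    """
    total = sum(src_relevance) + sum(tgt_relevance)
    if total == 0:
        raise ValueError("relevances sum to zero; shares are undefined")
    source_share = sum(src_relevance) / total
    return source_share, 1.0 - source_share


# Toy example: hypothetical relevances for a 3-token source
# and a 2-token target prefix.
src_share, tgt_share = contribution_shares([0.4, 0.3, 0.1], [0.15, 0.05])
print(src_share, tgt_share)  # here the source dominates the prediction
```

Tracking `source_share` over training steps, over positions in the target sentence, or across models is then enough to reproduce the kinds of comparisons the findings above describe.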