Advancing Neural Machine Translation in Speed and Accuracy
Language is a crowning achievement of human civilization and a defining characteristic of human intelligence. There are over 5000 languages spoken in the world, yet most people speak only a handful, so translation is critical to overcoming communication barriers. Unfortunately, translation is laborious and challenging, and cannot be done manually at scale. As a result, machine translation (MT) has been an active area of research in artificial intelligence.
This research focuses on methods to improve MT. It builds on neural machine translation (NMT), which uses a neural network to map sentences in one language to sentences in another. Typically, the model is trained on a parallel corpus: text in a source language paired with human translations into a target language.
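To make the training setup concrete, here is a minimal sketch of what a parallel corpus looks like to an NMT model: aligned sentence pairs converted into integer token IDs. The corpus, the sentences, and the `build_vocab` helper are illustrative, not part of this research.

```python
# Toy parallel corpus: each example pairs a source sentence
# with its human translation (sentences are illustrative).
corpus = [
    ("the cat sleeps", "le chat dort"),
    ("the dog runs",   "le chien court"),
]

def build_vocab(sentences):
    """Map each distinct token to an integer ID, in order of first appearance."""
    vocab = {}
    for sent in sentences:
        for tok in sent.split():
            vocab.setdefault(tok, len(vocab))
    return vocab

src_vocab = build_vocab(s for s, _ in corpus)
tgt_vocab = build_vocab(t for _, t in corpus)

# The network never sees raw text, only integer sequences.
encoded = [([src_vocab[tok] for tok in src.split()],
            [tgt_vocab[tok] for tok in tgt.split()])
           for src, tgt in corpus]
print(encoded[0])  # first (source, target) pair as token IDs
```

A real system would tokenize into subword units and train an encoder-decoder network on millions of such pairs, but the data format is the same: one reference translation per source sentence.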
Though powerful, NMT still has unsolved problems. One is inherent to its training process: a sentence often has multiple equally valid translations, yet the system sees only a single reference and treats every other valid translation as wrong. I will investigate ways to address this problem by developing new models and evaluating them empirically.
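The single-reference problem can be illustrated with a toy example. The candidate translations and probabilities below are hypothetical, chosen only to show how the standard cross-entropy objective rewards just one reference string even when the model places substantial probability on an equally valid paraphrase.

```python
import math

# Hypothetical model probabilities for three candidate translations
# of the same source sentence. "good evening" is an equally valid
# paraphrase, but the training data lists only "good night".
probs = {
    "good night":   0.4,   # the single reference translation
    "good evening": 0.4,   # equally valid, but treated as wrong
    "bad night":    0.2,
}

reference = "good night"

# Cross-entropy loss counts only the probability of the reference...
loss = -math.log(probs[reference])
print(f"loss = {loss:.3f}")

# ...even though the total mass on acceptable translations is higher.
acceptable = probs["good night"] + probs["good evening"]
print(f"mass on valid translations = {acceptable:.1f}")
```

The model is penalized as if 60% of its probability mass were on errors, when in fact 80% sits on acceptable outputs; that mismatch between the training signal and translation quality is the gap this research targets.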
Message to Sponsor
- Major: Computer Science
- Sponsor: Chen Fund
- Mentor: Jitendra Malik