Multi-task Learning for Translating English into Arabic
Machine learning techniques usually require a large number of training samples to achieve their full benefit. When training samples are limited, they are often insufficient to learn a reliable model; consequently, there has been growing interest in machine learning methods that can exploit knowledge from related tasks to improve performance. Multi-task learning was proposed to address this problem. It is a machine learning paradigm in which several tasks are learned simultaneously, exploiting the commonalities between them. When the tasks are related, learning them jointly can be advantageous compared with learning each task independently. In this paper, we propose translating text from a source language (English) into a target language (Arabic) using multi-task learning. To build a relation extraction system between the words in the texts, we train three related tasks (part-of-speech tagging, chunking, and named entity recognition) in parallel on annotated data using hidden Markov models. Experiments on the text translation task show that the proposed approach improves translation performance with the help of these related tasks.
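To make the tagging component concrete, the sketch below shows Viterbi decoding for a hidden Markov model, the kind of model used here for each sequence-labeling subtask (part-of-speech tagging, chunking, named entity recognition). The tag set and all probabilities are toy assumptions for illustration only; they are not parameters estimated in this work.

```python
# Minimal Viterbi decoder for a hidden Markov model, illustrating the
# part-of-speech tagging subtask. All probabilities below are assumed
# toy values, not figures from the paper.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most probable state (tag) sequence for the observations."""
    # V[t][s] = best probability of any path ending in state s at time t
    V = [{s: start_p[s] * emit_p[s].get(obs[0], 0.0) for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p] * trans_p[p][s] * emit_p[s].get(obs[t], 0.0), p)
                for p in states
            )
            V[t][s] = prob
            back[t][s] = prev
    # Trace the best path back from the final column
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        last = back[t][last]
        path.insert(0, last)
    return path

# Toy example: tag a three-word English sentence with a two-tag model
states = ["NOUN", "VERB"]
start_p = {"NOUN": 0.6, "VERB": 0.4}
trans_p = {"NOUN": {"NOUN": 0.3, "VERB": 0.7},
           "VERB": {"NOUN": 0.8, "VERB": 0.2}}
emit_p = {"NOUN": {"students": 0.5, "books": 0.4, "read": 0.1},
          "VERB": {"students": 0.1, "books": 0.2, "read": 0.7}}

print(viterbi(["students", "read", "books"], states, start_p, trans_p, emit_p))
# → ['NOUN', 'VERB', 'NOUN']
```

In a multi-task setting, one such model per subtask is trained in parallel on the same annotated data, and the resulting tag sequences supply the shared linguistic features that support the translation task.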