Research on Machine Translation Model Based on Neural Network

Author(s):  
Zhuoran Han ◽  
Shenghong Li
2020 ◽  
Vol 1646 ◽  
pp. 012143
Author(s):  
Jiatong Liu ◽  
Zhonghao Wang ◽  
Yanchang Cui ◽  
Minghong Fan ◽  
Beizhan Wang

PLoS ONE ◽  
2020 ◽  
Vol 15 (11) ◽  
pp. e0240663
Author(s):  
Beibei Ren

With the rapid development of big data and deep learning, breakthroughs have been made in research on speech and text, the two fundamental attributes of language. Language is an essential medium of information exchange in teaching activities. The aim is to promote the transformation of the training mode and content of the translation major and the application of translation services across industries. Building on previous research, an SCN-LSTM (Skip Convolutional Network and Long Short-Term Memory) deep neural network translation model is constructed by training on a real dataset and the public PTB (Penn Treebank) dataset. The model's performance, translation quality, and adaptability in practical teaching are analyzed to provide a theoretical basis for the research and application of the SCN-LSTM translation model in English teaching. The results show that the neural network's capability for translation teaching is nearly twice that of the traditional N-tuple translation model, and that the fusion model clearly outperforms single models in performance, translation quality, and teaching effect. Specifically, the accuracy of the SCN-LSTM translation model reaches 95.21%, its perplexity is 39.21% lower than that of the LSTM (Long Short-Term Memory) model, and its adaptability is 0.4 times that of the N-tuple model. Receiving the highest satisfaction scores in practical teaching evaluations, the SCN-LSTM translation model achieves a favorable effect on translation teaching for English majors. In summary, learning the language characteristics of teachers' and students' translations significantly improves the model's performance and quality, offering ideas for applying machine translation in professional translation teaching.
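The abstract does not detail the SCN-LSTM architecture, but its recurrent component is a standard LSTM cell. A minimal numpy sketch of a single LSTM time step (gate layout, dimensions, and initialization are illustrative assumptions, not the paper's actual model):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step: four gates computed from input x and previous state h."""
    H = h.size
    z = W @ x + U @ h + b          # stacked pre-activations for all four gates
    i = sigmoid(z[:H])             # input gate
    f = sigmoid(z[H:2 * H])        # forget gate
    o = sigmoid(z[2 * H:3 * H])    # output gate
    g = np.tanh(z[3 * H:])         # candidate cell state
    c_new = f * c + i * g          # cell state update
    h_new = o * np.tanh(c_new)     # hidden state output
    return h_new, c_new

# run a toy sequence through the cell (random weights, arbitrary sizes)
rng = np.random.default_rng(0)
D, H = 4, 3
W = rng.normal(size=(4 * H, D))
U = rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for t in range(5):
    h, c = lstm_step(rng.normal(size=D), h, c, W, U, b)
```

The forget gate `f` is what lets the cell state carry information across long distances, which is the property the LSTM half of the fusion model contributes.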


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Yanbo Zhang

Amid the current artificial intelligence boom, machine translation is a research direction within natural language processing with important scientific and practical value. In practical applications, the variability of language, the limited capacity for representing semantic information, and the scarcity of parallel corpus resources all constrain machine translation's path toward practicality and popularization. In this paper, we first mine source-language text data deeply, expressing complex, high-level, and abstract semantic information with an appropriate text representation model. Then, for machine translation tasks with abundant parallel corpora, we exploit annotated datasets to build a more effective transfer-learning-based end-to-end neural machine translation model with a supervised algorithm. Next, for translation tasks on languages with scarce parallel corpus resources, transfer learning techniques are used to prevent neural networks from overfitting during training and to improve the generalization ability of end-to-end neural machine translation models under low-resource conditions. Finally, for language pairs where parallel corpora are extremely scarce but monolingual corpora are plentiful, the research focuses on unsupervised machine translation techniques, which will be a future research trend.
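The abstract does not say which parameters are transferred or frozen; one common transfer-learning pattern is to initialize the low-resource child model from an overlapping high-resource parent and freeze shared layers to curb overfitting. A sketch with hypothetical parameter names (all names and values invented for illustration):

```python
# Hypothetical parent-model parameters keyed by layer name
parent = {"enc.embed": [0.1, 0.2], "enc.layer1": [0.3], "dec.embed": [0.4]}

# Child model has one extra layer not present in the parent
child_keys = ["enc.embed", "enc.layer1", "dec.embed", "dec.out"]

# Initialize overlapping parameters from the parent, the rest from scratch
child = {k: list(parent[k]) if k in parent else [0.0] for k in child_keys}

# Freeze the encoder so low-resource training cannot overfit it
frozen = {k for k in child if k.startswith("enc.")}
trainable = [k for k in child if k not in frozen]
```

Only `trainable` parameters would receive gradient updates; the frozen encoder acts as a regularizer carried over from the high-resource task.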


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Wenxia Pan

English machine translation is a natural language processing research direction with important scientific and practical value amid the current artificial intelligence boom. The variability of language, the limited ability to express semantic information, and the lack of parallel corpus resources all limit the usefulness and popularity of English machine translation in practical applications. The self-attention mechanism has received much attention in English machine translation because its highly parallelizable computation reduces training time and it can capture the semantic relevance of all words in the context. Unlike recurrent neural networks, however, self-attention ignores the position and structure information between context words. To let the model use inter-word position information, English machine translation models based on self-attention represent the absolute position of words with sine and cosine position coding. This method can reflect relative distance but not direction. A new English machine translation model is therefore proposed, based on a logarithmic position representation method combined with the self-attention mechanism; it retains both the distance and direction information between words and the efficiency of self-attention. Experiments show that a nonstrict phrase extraction method can effectively extract phrase translation pairs from n-best word alignment results and that an extraction constraint strategy can further improve translation quality. Compared with traditional phrase extraction methods based on a single alignment, nonstrict phrase extraction over n-best alignments significantly improves translation quality.
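The directionality problem the abstract describes can be seen directly in the standard sine/cosine encoding: the inner product of two position vectors depends only on the absolute distance between them, so "3 ahead" and "3 behind" are indistinguishable. A numpy sketch of the standard encoding (dimensions chosen arbitrarily):

```python
import numpy as np

def sinusoidal_positions(n_pos, d_model):
    """Standard absolute position encoding: sin on even dims, cos on odd dims."""
    pos = np.arange(n_pos)[:, None].astype(float)
    i = np.arange(d_model)[None, :]
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    pe = np.zeros((n_pos, d_model))
    pe[:, 0::2] = np.sin(angle[:, 0::2])
    pe[:, 1::2] = np.cos(angle[:, 1::2])
    return pe

pe = sinusoidal_positions(50, 16)
# sin(a)sin(b) + cos(a)cos(b) = cos(a - b), and cos is even,
# so the dot product is symmetric in the offset: direction is lost.
fwd = pe[10] @ pe[13]   # 3 positions ahead
bwd = pe[10] @ pe[7]    # 3 positions behind
```

`fwd` and `bwd` come out identical, which is exactly the limitation the proposed logarithmic position representation is meant to overcome.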


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Haidong Ban ◽  
Jing Ning

With the rapid development of Internet technology and economic globalization, international exchanges in various fields have become increasingly active, and the need for communication across languages has become increasingly clear. As an effective tool, automatic translation performs equivalent translation between different languages while preserving the original semantics, which is very important in practice. This paper focuses on a Chinese-English machine translation model based on deep neural networks. Using an end-to-end encoder-decoder framework, we build a neural machine translation model that learns its mapping function automatically: the data are converted into distributed word vectors, and the neural network directly performs the mapping between the source language and the target language. Experiments verify that adding part-of-speech information improves translation performance. As the number of network layers is increased from two to four, the improvement ratios of the models are 5.90%, 6.1%, 6.0%, and 7.0%, respectively. Among them, the model using an independent recurrent neural network as its network structure shows the largest improvement, so the system has high availability.
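The data flow of the encoder-decoder mapping can be sketched with a toy, untrained numpy model: source tokens become distributed word vectors, the "encoder" pools them into a state, and one decoder step projects that state onto target-vocabulary scores. All vocabularies, sizes, and the mean-pooling encoder are invented for illustration and bear no relation to the paper's actual network:

```python
import numpy as np

rng = np.random.default_rng(1)
src_vocab = {"我": 0, "爱": 1, "猫": 2}
tgt_vocab = ["i", "love", "cats", "</s>"]
d = 8                                          # embedding size (arbitrary)

E_src = rng.normal(size=(len(src_vocab), d))   # distributed source word vectors
W_out = rng.normal(size=(len(tgt_vocab), d))   # decoder output projection

src_ids = [src_vocab[w] for w in ["我", "爱", "猫"]]
enc_state = E_src[src_ids].mean(axis=0)        # toy "encoder": mean of vectors
scores = W_out @ enc_state                     # one decoder step: target scores
pred = tgt_vocab[int(np.argmax(scores))]       # greedy pick (meaningless untrained)
```

Training would adjust `E_src` and `W_out` end to end so that the mapping from source vectors to target scores becomes the learned translation function.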


2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Zhiwang Xu ◽  
Huibin Qin ◽  
Yongzhu Hua

In recent years, neural machine translation has become the mainstream method in the field, but low-resource translation still faces the challenges of insufficient parallel corpora and data sparsity. Existing machine translation models are usually trained on word-granularity segmented datasets. However, different segmentation granularities carry different grammatical and semantic features; considering word granularity alone restricts the efficient training of neural machine translation systems. Addressing the data sparsity caused by the lack of Uyghur-Chinese parallel corpora and by complex Uyghur morphology, this paper proposes a multistrategy training method over several segmentation granularities (syllables, marked syllables, words, and syllable-word fusion) and, to overcome the shortcomings of traditional recurrent and convolutional neural networks, builds a Transformer Uyghur-Chinese neural machine translation model based entirely on the multihead self-attention mechanism. Evaluation results on the CCMT2019 Uyghur-Chinese bilingual datasets show that the multigranularity training method significantly outperforms translation systems using a single segmentation granularity, and that the Transformer model obtains a higher BLEU score than a Self-Attention-RNN-based Uyghur-Chinese translation model.
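The comparison above is reported in BLEU. A minimal sentence-level BLEU sketch (clipped n-gram precision, geometric mean, brevity penalty; the smoothing constant and `max_n=2` are simplifications of the usual 4-gram corpus BLEU):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(candidate, reference, max_n=2):
    """Minimal BLEU: clipped n-gram precisions, geometric mean, brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum(min(c, ref[g]) for g, c in cand.items())  # clipped counts
        total = max(sum(cand.values()), 1)
        precisions.append(max(overlap / total, 1e-9))           # smooth zeros
    bp = 1.0 if len(candidate) >= len(reference) else \
        math.exp(1.0 - len(reference) / max(len(candidate), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

Because BLEU is computed over token n-grams, the segmentation granularity (syllable vs. word) directly changes what counts as a matching n-gram, which is why granularity choice matters for the reported scores.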


2006 ◽  
Vol 32 (4) ◽  
pp. 527-549 ◽  
Author(s):  
José B. Mariño ◽  
Rafael E. Banchs ◽  
Josep M. Crego ◽  
Adrià de Gispert ◽  
Patrik Lambert ◽  
...  

This article describes in detail an n-gram approach to statistical machine translation. The approach consists of a log-linear combination of a translation model based on n-grams of bilingual units, referred to as tuples, along with four specific feature functions. State-of-the-art translation performance is demonstrated with Spanish-to-English and English-to-Spanish translations of the European Parliament Plenary Sessions (EPPS).
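The log-linear combination itself is just a weighted sum of feature log-scores, with the decoder choosing the highest-scoring hypothesis. A sketch with invented hypothesis scores and feature weights (the feature names in the comment are assumptions, not the article's exact four features):

```python
def loglinear_score(feature_log_probs, weights):
    """Log-linear model: weighted sum of per-feature log scores."""
    return sum(w * h for w, h in zip(weights, feature_log_probs))

# Two hypothetical translation hypotheses, each scored by four features
# (e.g. tuple n-gram model, target LM, word bonus, lexicon model -- assumed)
hyps = {
    "hyp_a": [-2.1, -1.3, -0.5, -1.8],
    "hyp_b": [-2.5, -1.0, -0.5, -2.2],
}
weights = [1.0, 0.8, 0.4, 0.6]   # feature weights, normally tuned e.g. by MERT
best = max(hyps, key=lambda h: loglinear_score(hyps[h], weights))
```

In a real system the weights would be optimized on a development set to maximize a translation metric rather than set by hand.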

