A Study on the Intelligent Translation Model for English Incorporating Neural Network Migration Learning

2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Yanbo Zhang

Under the current artificial intelligence boom, machine translation is a research direction of natural language processing with important scientific and practical value. In practical applications, the variability of language, the limited capacity for representing semantic information, and the scarcity of parallel corpus resources all constrain machine translation from becoming practical and widespread. In this paper, we first conduct deep mining of source-language text data to express complex, high-level, and abstract semantic information with an appropriate text representation model. Then, for machine translation tasks with a large parallel corpus, we exploit the annotated datasets to build a more effective transfer-learning-based end-to-end neural machine translation model with a supervised algorithm. Next, for machine translation tasks in languages with scarce parallel corpus resources, transfer learning techniques are used to prevent overfitting of the neural network during training and to improve the generalization ability of end-to-end neural machine translation models under low-resource conditions. Finally, for language pairs where parallel corpora are extremely scarce but monolingual corpora are plentiful, the research focuses on unsupervised machine translation techniques, which we expect to be a future research trend.
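The transfer-learning step described above, initializing a low-resource model from one trained on a resource-rich pair and keeping shared components fixed to curb overfitting, can be sketched roughly as follows. The parameter names and freezing policy here are illustrative assumptions, not the paper's actual configuration:

```python
def transfer_parameters(parent, child, freeze_prefixes=("encoder.embed",)):
    """Copy overlapping weights from a trained parent model into a child
    model for a low-resource language pair, and report which parameters
    should stay frozen during fine-tuning.

    `parent` and `child` are plain name->weights dictionaries standing in
    for real model state; the prefixes to freeze are an assumption.
    """
    for name, weights in parent.items():
        if name in child:
            child[name] = list(weights)  # initialize from the parent model
    frozen = {n for n in child
              if any(n.startswith(p) for p in freeze_prefixes)}
    return child, frozen
```

In a real system the frozen parameters would simply be excluded from the optimizer, while the remaining parameters are fine-tuned on the small parallel corpus.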

2020 ◽  
Vol 29 (07n08) ◽  
pp. 2040005
Author(s):  
Zhen Li ◽  
Dan Qu ◽  
Yanxia Li ◽  
Chaojie Xie ◽  
Qi Chen

Deep learning technology has driven the development of neural machine translation (NMT), and the end-to-end (E2E) approach has become its mainstream. E2E systems use word vectors as the initial values of the input layer, so the quality of the word vector model directly affects the accuracy of E2E-NMT. Researchers have proposed many approaches to learning word representations and have achieved significant results; however, the drawbacks of these methods still limit the performance of E2E-NMT systems. This paper focuses on word embedding technology and proposes the PW-CBOW word vector model, which captures richer semantic information. We apply these word vector models to the IWSLT14 German-English, WMT14 English-German, and WMT14 English-French corpora and evaluate the performance of the PW-CBOW model. In current E2E-NMT systems, the PW-CBOW word vector model improves performance.
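As background for the CBOW family of models the paper builds on (the PW-CBOW variant itself is not specified here), a CBOW training instance predicts a center word from its surrounding context window. Extracting such pairs from a token sequence can be sketched as:

```python
def cbow_pairs(tokens, window=2):
    """Yield (context_words, center_word) training pairs for a CBOW-style
    model: each center word is predicted from up to `window` tokens on
    each side."""
    pairs = []
    for i, center in enumerate(tokens):
        context = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
        if context:
            pairs.append((context, center))
    return pairs
```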


PLoS ONE ◽  
2020 ◽  
Vol 15 (11) ◽  
pp. e0240663
Author(s):  
Beibei Ren

With the rapid development of big data and deep learning, breakthroughs have been made in research on phonetics and text, the two fundamental attributes of language. Language is an essential medium of information exchange in teaching activity. The aim is to promote the transformation of the training mode and content of the translation major and the application of the translation service industry in various fields. Building on previous research, the SCN-LSTM (Skip Convolutional Network and Long Short-Term Memory) translation model, a deep neural network, is constructed by training on a real dataset and the public PTB (Penn Treebank) dataset. The model's performance, translation quality, and adaptability in practical teaching are analyzed to provide a theoretical basis for the research and application of the SCN-LSTM translation model in English teaching. The results show that the capability of the neural network for translation teaching is nearly twice that of the traditional N-gram translation model, and the fusion model performs much better than the single model in performance, translation quality, and teaching effect. Specifically, the accuracy of the SCN-LSTM translation model is 95.21%, its translation perplexity is 39.21% lower than that of the LSTM (Long Short-Term Memory) model, and its adaptability is 0.4 times that of the N-gram model. With the highest satisfaction in practical teaching evaluation, the SCN-LSTM translation model has achieved a favorable effect on translation teaching for the English major. In summary, the performance and quality of the translation model are improved significantly by learning the language characteristics of translations by teachers and students, providing ideas for applying machine translation in professional translation teaching.
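The "translation confusion" figure above refers to perplexity, the standard intrinsic measure for language and translation models; lower is better. A minimal computation, assuming natural-log token probabilities, looks like:

```python
import math

def perplexity(token_log_probs):
    """Perplexity = exp of the average negative log-probability the model
    assigns to each reference token (natural log assumed)."""
    n = len(token_log_probs)
    return math.exp(-sum(token_log_probs) / n)
```

For example, a model that assigns probability 0.25 to every token of a four-token sentence has perplexity 4: it is, on average, as uncertain as a uniform choice among four options.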


2021 ◽  
Vol 11 ◽  
Author(s):  
Shunyao Luan ◽  
Xudong Xue ◽  
Yi Ding ◽  
Wei Wei ◽  
Benpeng Zhu

Purpose: Accurate segmentation of the liver and liver tumors is critical for radiotherapy. Liver tumor segmentation, however, remains a difficult open problem in medical image processing because of factors such as the complex and variable location, size, and shape of liver tumors, the low contrast between tumors and normal tissue, and blurred or hard-to-define lesion boundaries. In this paper, we propose a neural network (S-Net) that incorporates attention mechanisms for end-to-end segmentation of liver tumors from CT images.

Methods: First, this study adopted a classical encoding-decoding structure to realize end-to-end segmentation. Next, we introduced an attention mechanism between the contraction path and the expansion path so that the network could encode a longer range of semantic information into the local features and find the correspondence between different channels. Then, we introduced long skip connections between the layers of the contraction path and the expansion path so that the semantic information extracted in both paths could be fused. Finally, a morphological closing operation was applied to remove narrow interruptions and thin gaps, eliminating small cavities and reducing noise.

Results: We tested the network architecture on the MICCAI 2017 Liver Tumor Segmentation (LiTS) challenge dataset, the 3DIRCADb dataset, and a dataset of doctors' manual contours from Hubei Cancer Hospital. We calculated the Dice Global (DG) score, Dice per Case (DC) score, volumetric overlap error (VOE), average symmetric surface distance (ASSD), and root mean square error (RMSE) to evaluate the accuracy of the architecture for liver tumor segmentation. The segmentation DG for tumors was 0.7555, DC was 0.613, VOE was 0.413, ASSD was 1.186, and RMSE was 1.804. For small tumors, DG was 0.3246 and DC was 0.3082; for large tumors, DG was 0.7819 and DC was 0.7632.

Conclusion: S-Net obtains more semantic information through the attention mechanism and long skip connections. Experimental results show that this method effectively improves tumor recognition in CT images and could be applied to assist doctors in clinical treatment.
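The Dice and VOE figures reported above compare a predicted tumor mask against a reference contour. On flattened binary masks they reduce to the following; this is a minimal sketch of the metric definitions, not the authors' evaluation code:

```python
def dice_and_voe(pred, ref):
    """Dice coefficient and volumetric overlap error for two binary masks,
    given as flat sequences of 0/1 voxel labels.

    Dice = 2|A∩B| / (|A|+|B|); VOE = 1 - |A∩B| / |A∪B|.
    """
    inter = sum(p and r for p, r in zip(pred, ref))
    union = sum(p or r for p, r in zip(pred, ref))
    total = sum(pred) + sum(ref)
    dice = 2 * inter / total if total else 1.0
    voe = 1 - inter / union if union else 0.0
    return dice, voe
```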


Author(s):  
Jing Wu ◽  
Hongxu Hou ◽  
Feilong Bao ◽  
Yupeng Jiang

The Mongolian-Chinese statistical machine translation (SMT) system has its limitations because of complex Mongolian morphology, scarce parallel corpus resources, and significant syntactic differences. To address these problems, we propose a template-based machine translation (TBMT) system and combine it with the SMT system to achieve better translation performance. The proposed TBMT model includes a template extraction model and a template translation model. In the template extraction model, we present a novel method of aligning and abstracting static words from a bilingual parallel corpus to extract templates automatically. In the template translation model, our specially designed method of filtering out low-quality matches enhances translation performance. Moreover, we apply lemmatization and Latinization to address data sparsity and perform fuzzy matching. Experimentally, the coverage of the TBMT system is over 50%, and the combined SMT system translates all the remaining uncovered source sentences. The TBMT system outperforms the phrase-based and hierarchical phrase-based SMT baselines by +3.08 and +1.40 BLEU points, respectively, and the combined TBMT-SMT system outperforms the same baselines by +2.49 and +0.81 BLEU points.
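The template translation stage filters out low-quality matches before a template is applied. A rough sketch of such fuzzy matching, using a generic character-level similarity ratio from the standard library in place of the paper's specially designed filter:

```python
from difflib import SequenceMatcher

def best_template(source, templates, threshold=0.75):
    """Return the highest-scoring template for a source sentence, or None
    when even the best match falls below the quality threshold.

    The similarity measure and threshold are illustrative assumptions."""
    best, best_score = None, 0.0
    for tpl in templates:
        score = SequenceMatcher(None, source, tpl).ratio()
        if score > best_score:
            best, best_score = tpl, score
    return (best, best_score) if best_score >= threshold else (None, best_score)
```

Sentences for which no template clears the threshold would fall through to the combined SMT system, matching the division of labor described above.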


2020 ◽  
pp. 1-11
Author(s):  
Zheng Guo ◽  
Zhu Jifeng

In recent years, with the development of the Internet and intelligent technology, Japanese translation teaching has gradually explored a new teaching mode. Guided by natural language processing and intelligent machine translation, machine translation based on statistical models has gradually become one of the primary auxiliary tools in Japanese translation teaching. To solve the problems of small scale, slow speed, and incomplete domain coverage in traditional parallel-corpus machine translation, this paper constructs a Japanese translation teaching corpus based on a bilingual non-parallel data model and uses this corpus to train the Moses machine translation model for Japanese translation teaching, obtaining a better auxiliary effect. During construction, for the non-parallel corpus, we use a translation retrieval framework based on a word-graph representation to extract parallel sentence pairs from the corpus, and then build a translation retrieval model based on the bilingual non-parallel data. Experimental results from training the Moses translation model with the Japanese translation corpus show that the bilingual non-parallel data model constructed in this paper has good translation retrieval performance: compared with the existing algorithm, the BLEU score of the extracted parallel sentence pairs increases by 2.58 points. In addition, the retrieval method based on the translation-option word-graph structure proposed in this paper is time-efficient and performs better in assisting Japanese translation teaching.
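Extracting parallel sentence pairs from a non-parallel corpus requires scoring candidate pairs. A much-simplified lexical-coverage score over an assumed bilingual dictionary (the paper's word-graph retrieval is more elaborate) could look like:

```python
def pair_score(src_tokens, tgt_tokens, lexicon):
    """Fraction of source tokens that have at least one dictionary
    translation present in the candidate target sentence.

    `lexicon` maps a source token to a set of possible translations;
    it is a stand-in for a real probabilistic translation lexicon."""
    tgt = set(tgt_tokens)
    hits = sum(1 for s in src_tokens if tgt & set(lexicon.get(s, ())))
    return hits / len(src_tokens) if src_tokens else 0.0
```

Candidate pairs scoring above a chosen threshold would be admitted into the pseudo-parallel training corpus for Moses.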


Author(s):  
Hao Zheng ◽  
Yong Cheng ◽  
Yang Liu

While neural machine translation (NMT) has made remarkable progress in translating a handful of high-resource language pairs recently, parallel corpora are not always available for many zero-resource language pairs. To deal with this problem, we propose an approach to zero-resource NMT via maximum expected likelihood estimation. The basic idea is to maximize the expectation with respect to a pivot-to-source translation model for the intended source-to-target model on a pivot-target parallel corpus. To approximate the expectation, we propose two methods to connect the pivot-to-source and source-to-target models. Experiments on two zero-resource language pairs show that the proposed approach yields substantial gains over baseline methods. We also observe that when trained jointly with the source-to-target model, the pivot-to-source translation model also obtains improvements over independent training.
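In symbols, with pivot sentence z, source x, and target y, the objective sketched above maximizes the expected source-to-target likelihood under the pivot-to-source model on the pivot-target corpus (the notation here is ours, not necessarily the paper's):

```latex
J(\theta_{x \to y}) \;=\; \sum_{(z,\, y) \in D_{z,y}}
  \mathbb{E}_{x \sim P(x \mid z;\, \theta_{z \to x})}
  \bigl[ \log P(y \mid x;\, \theta_{x \to y}) \bigr]
```

The two approximation methods mentioned in the abstract correspond to different ways of handling this expectation, which is intractable to enumerate exactly over all candidate source sentences x.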


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Haidong Ban ◽  
Jing Ning

With the rapid development of Internet technology and economic globalization, international exchanges in various fields have become increasingly active, and the need for communication between languages has become increasingly clear. As an effective tool, automatic translation can perform equivalent translation between different languages while preserving the original semantics, which is very important in practice. This paper focuses on a Chinese-English machine translation model based on deep neural networks. We use an end-to-end encoder-decoder framework to create a neural machine translation model that learns its mapping automatically: the data is converted into word vectors in a distributed representation, and the network maps directly from the source language to the target language. Experiments show that adding part-of-speech information improves the performance of the translation model. As the number of network layers is increased from two to four, the improvement ratios of the models are 5.90%, 6.1%, 6.0%, and 7.0%, respectively. Among them, the model using an independent recurrent neural network as the network structure has the largest improvement rate, so the system has high usability.
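One common way to inject the part-of-speech information mentioned above is to concatenate a POS embedding onto each word embedding before it enters the encoder. A minimal sketch with toy lookup tables; the table contents, dimensions, and the concatenation scheme itself are illustrative assumptions rather than the paper's stated design:

```python
def input_vector(word, pos, word_emb, pos_emb):
    """Concatenate a word embedding with a part-of-speech embedding to
    form the encoder input vector for one token.

    `word_emb` and `pos_emb` are toy dict-based lookup tables standing in
    for trained embedding matrices."""
    return word_emb[word] + pos_emb[pos]  # list concatenation
```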


2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Zhiwang Xu ◽  
Huibin Qin ◽  
Yongzhu Hua

In recent years, machine translation based on neural networks has become the mainstream method in the field, but the challenges of insufficient parallel corpora and data sparsity remain in low-resource translation. Existing machine translation models are usually trained on word-granularity segmented datasets. However, different segmentation granularities carry different grammatical and semantic features and information, and considering word granularity alone restricts the efficient training of neural machine translation systems. Aiming at the data sparsity caused by the lack of Uyghur-Chinese parallel corpora and by complex Uyghur morphology, this paper proposes a multistrategy segmentation-granularity training method over syllables, marked syllables, words, and syllable-word fusion and, to address the shortcomings of traditional recurrent and convolutional neural networks, builds a Transformer-based Uyghur-Chinese neural machine translation model that relies entirely on the multihead self-attention mechanism. Evaluation results on the CCMT2019 Uyghur-Chinese bilingual dataset show that the multigranularity training method is significantly better than translation systems trained on any single segmentation granularity, while the Transformer model obtains a higher BLEU score than a Uyghur-Chinese translation model based on Self-Attention-RNN.
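Multistrategy granularity training pools the same sentence segmented several ways into one training set. With placeholder segmenters standing in for the real syllable- and word-level Uyghur segmenters (which are not shown here), the idea is roughly:

```python
def multi_granularity_corpus(sentences, segmenters):
    """Apply every segmentation strategy to every sentence and pool the
    resulting (strategy, token_list) samples into one training corpus.

    `segmenters` maps a strategy name to a callable that tokenizes a
    sentence under that granularity."""
    samples = []
    for sent in sentences:
        for name, seg in segmenters.items():
            samples.append((name, seg(sent)))
    return samples
```

The pooled corpus exposes the model to several granularities of the same content, which is the mechanism the paper credits for mitigating data sparsity.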


2018 ◽  
Vol 6 ◽  
pp. 225-240 ◽  
Author(s):  
Eliyahu Kiperwasser ◽  
Miguel Ballesteros

Neural encoder-decoder models of machine translation have achieved impressive results while learning linguistic knowledge of both the source and target languages in an implicit, end-to-end manner. We propose a framework in which our model begins by learning syntax and translation interleaved, gradually putting more focus on translation. Using this approach, we achieve considerable BLEU improvements on a relatively large parallel corpus (WMT14 English-to-German) and in a low-resource setup (WIT German-to-English).
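The interleaving schedule, starting with syntax and gradually shifting focus to translation, can be realized by a decaying task-selection probability. One simple linear schedule, offered as an illustration rather than the authors' exact schedule:

```python
import random

def syntax_probability(step, total_steps):
    """Probability of picking the syntax task, decaying linearly from 1
    at the start of training to 0 at the end."""
    return max(0.0, 1.0 - step / total_steps)

def pick_task(step, total_steps, rng=random):
    """Choose which task to train on at this step."""
    if rng.random() < syntax_probability(step, total_steps):
        return "syntax"
    return "translation"
```

Early steps almost always train the syntax objective; by the end, every step trains translation, matching the "gradually putting more focus on translation" behavior described above.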

