The Construction of Machine Translation Model and Its Application in English Grammar Error Detection

2021 · Vol 2021 · pp. 1-11 · Author(s): Fei Long

To address the low accuracy, recall, and F1 scores of traditional English grammar error detection methods, a new machine translation model is constructed and applied to English grammar error detection. Within an encoder-decoder framework, the model is built through word-vector generation, encoder language-model construction, decoder language-model construction, word alignment, and an output module. On this basis, the model is trained to detect English grammatical errors through dependency analysis and alternative-word generation. Experimental results show that the proposed method achieves higher accuracy, recall, and F1 than the comparison methods on errors involving articles, prepositions, nouns, verbs, and subject-verb agreement, indicating that the proposed method is of high practical value.
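The abstract names the construction steps but not the architecture. As a rough illustration, a minimal encoder-decoder in PyTorch could look like the following; the GRU cells, the dimensions, and the alternative-word decoding step are assumptions, not details from the paper.

```python
# Minimal encoder-decoder sketch in PyTorch (illustrative assumptions only;
# the abstract names the steps but not the architecture).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)         # word-vector generation
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)  # encoder language model

    def forward(self, src):
        outputs, hidden = self.rnn(self.embed(src))
        return outputs, hidden

class Decoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)  # decoder language model
        self.out = nn.Linear(hid_dim, vocab_size)               # output module

    def forward(self, tgt, hidden):
        output, hidden = self.rnn(self.embed(tgt), hidden)
        return self.out(output), hidden

# One decoding step: a token whose highest-scoring alternative differs from
# the input token becomes a candidate grammatical error.
enc, dec = Encoder(1000), Decoder(1000)
src = torch.randint(0, 1000, (2, 7))     # a batch of two token sequences
_, h = enc(src)
logits, _ = dec(src[:, :1], h)           # predict alternatives for position 0
```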

2021 · Vol 2021 · pp. 1-9 · Author(s): Chen Hongli

To address the low correction accuracy and long correction time of traditional English grammar error correction systems, this paper designs an English grammar error correction system based on deep learning. The method analyzes the business requirements and functions of the system and then designs its overall architecture accordingly, comprising an English grammar error correction module, a service access module, and a feedback filtering module. A multilayer feedforward neural network is used to construct a language model that judges whether a word sequence forms a normal sentence, thereby completing the correction of English grammatical errors. Experimental results show that the designed system corrects English grammatical errors with high accuracy and at high speed.
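The abstract specifies only that a multilayer feedforward network serves as the language model. A minimal sketch of such a model, and of scoring a sentence with it, might look like this; the window size, dimensions, and scoring rule are assumptions.

```python
# Feedforward language-model sketch (hypothetical sizes; the abstract gives
# no architectural details beyond "multilayer feedforward network").
import torch
import torch.nn as nn

class FeedForwardLM(nn.Module):
    """Predicts the next word from a fixed window of preceding words."""
    def __init__(self, vocab_size, context=3, emb_dim=64, hid_dim=128):
        super().__init__()
        self.context = context
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.mlp = nn.Sequential(
            nn.Linear(context * emb_dim, hid_dim),
            nn.Tanh(),
            nn.Linear(hid_dim, vocab_size),
        )

    def forward(self, window):                  # window: (batch, context)
        e = self.embed(window).flatten(1)       # concatenate context embeddings
        return self.mlp(e)                      # next-word logits

def sentence_log_prob(model, ids):
    """Sum log P(w_t | previous window); a low score suggests an abnormal sentence."""
    total = 0.0
    for t in range(model.context, len(ids)):
        window = torch.tensor([ids[t - model.context:t]])
        logits = model(window)
        total += torch.log_softmax(logits, dim=-1)[0, ids[t]].item()
    return total

model = FeedForwardLM(vocab_size=100)
print(sentence_log_prob(model, [4, 8, 15, 16, 23, 42]))
```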


Complexity · 2021 · Vol 2021 · pp. 1-11 · Author(s): Yanping Ye

At the vocabulary level, neural machine translation lacks an explicit word-alignment structure, which can make its translations unfaithful to the source. This paper proposes a framework that integrates a word-alignment structure into neural machine translation at the vocabulary level. Under the proposed framework, the neural machine translation decoder receives external word-alignment information at each step of the decoding process, alleviating the missing-alignment problem. Specifically, the word-alignment structure of statistical machine translation serves as the external alignment information and is introduced into the decoding step of neural machine translation. The model is based on neural machine translation, with the statistical machine translation word-alignment structure integrated on top of the neural network's continuous word representations. During decoding, the statistical machine translation system provides word-alignment information appropriate to the neural decoder's current state and recommends target-language words based on that alignment, guiding the decoder to estimate its target-language words more accurately. Experiments compare data processing methods based on language models and sentence similarity, as well as the effectiveness of machine translation models based on the fusion principle. The comparative results show that the language-model and sentence-similarity data processing method effectively guarantees data quality and indirectly improves the performance of the machine translation model, and that the neural machine translation model integrated with the statistical machine translation word-alignment structure translates better than the other models compared.
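The abstract does not specify how the recommendations enter the decoder. One common way to realize this kind of fusion is to interpolate the NMT softmax with a distribution built from the SMT alignment table; the fixed gate below is an assumption, shown only to make the mechanism concrete.

```python
# Sketch of fusing SMT lexical recommendations into one NMT decoding step.
import numpy as np

def fused_step(nmt_probs, smt_recommendations, gate=0.3):
    """Interpolate NMT next-word probabilities with SMT-recommended words.

    nmt_probs:           (vocab_size,) softmax output of the NMT decoder
    smt_recommendations: {word_id: prob} derived from the SMT word-alignment
                         table for the current step
    gate:                assumed fixed mixing weight (a real system might learn it)
    """
    smt_probs = np.zeros_like(nmt_probs)
    for word_id, p in smt_recommendations.items():
        smt_probs[word_id] = p
    total = smt_probs.sum()
    if total > 0:
        smt_probs /= total                        # renormalize recommendations
    fused = (1.0 - gate) * nmt_probs + gate * smt_probs
    return fused / fused.sum()

# Example: the alignment information strongly recommends target word 5.
nmt = np.full(8, 1.0 / 8)                         # uniform NMT distribution
print(fused_step(nmt, {5: 0.9, 2: 0.1}))
```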


2010 · Vol 93 (1) · pp. 17-26 · Author(s): Yvette Graham

Sulis: An Open Source Transfer Decoder for Deep Syntactic Statistical Machine Translation

In this paper, we describe an open source transfer decoder for deep syntactic transfer-based statistical machine translation. Transfer decoding involves the application of transfer rules to a source-language (SL) structure. The N-best target-language (TL) structures are found via a beam search over TL hypothesis structures, which are ranked by a log-linear combination of feature scores, such as a translation model and a dependency-based language model.
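The ranking step can be made concrete with a small sketch: each target-language hypothesis carries feature scores, and the beam keeps the hypotheses with the highest log-linear combination. The feature names and weights below are illustrative, not Sulis's actual configuration.

```python
# Log-linear hypothesis ranking for a beam search (illustrative features).
import heapq

def log_linear_score(features, weights):
    """score(h) = sum over features i of lambda_i * log f_i(h)."""
    return sum(weights[name] * value for name, value in features.items())

def prune_beam(hypotheses, weights, beam_size=5):
    """Keep the beam_size best TL hypothesis structures."""
    return heapq.nlargest(
        beam_size, hypotheses,
        key=lambda h: log_linear_score(h["features"], weights),
    )

weights = {"translation_model": 0.6, "dep_language_model": 0.4}
hyps = [
    {"structure": "tl_tree_a",
     "features": {"translation_model": -2.1, "dep_language_model": -3.0}},
    {"structure": "tl_tree_b",
     "features": {"translation_model": -1.5, "dep_language_model": -4.2}},
]
print(prune_beam(hyps, weights, beam_size=1))
```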


Author(s): Herry Sujaini

Statistical machine translation (SMT) has been widely used by researchers and practitioners in recent years. SMT quality is determined by several important factors, two of which are the language model and the translation model. Improving the translation model has received considerable research attention, but optimizing the language model for machine translation has received much less. SMT systems typically use a trigram language model as the standard. In this paper, we conduct experiments with four strategies to analyze the role of the language model in an Indonesian-Javanese translation system and show improvements over the baseline system with its standard language model. The results of this research indicate that the use of 3-gram language models is highly recommended in SMT.
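To make the comparison of language-model orders concrete, the toy sketch below trains add-alpha-smoothed n-gram models of different orders on a miniature corpus and scores a sentence under each; the corpus and the smoothing are placeholders, not the paper's setup.

```python
# Toy comparison of n-gram language-model orders on a miniature corpus.
import math
from collections import Counter

def train_ngram_counts(sentences, n):
    grams, contexts = Counter(), Counter()
    for s in sentences:
        tokens = ["<s>"] * (n - 1) + s.split() + ["</s>"]
        for i in range(len(tokens) - n + 1):
            grams[tuple(tokens[i:i + n])] += 1
            contexts[tuple(tokens[i:i + n - 1])] += 1
    return grams, contexts

def log_prob(sentence, grams, contexts, n, vocab_size, alpha=1.0):
    """Add-alpha smoothed log probability of a sentence under the n-gram model."""
    tokens = ["<s>"] * (n - 1) + sentence.split() + ["</s>"]
    lp = 0.0
    for i in range(len(tokens) - n + 1):
        g = tuple(tokens[i:i + n])
        lp += math.log((grams[g] + alpha) /
                       (contexts[g[:-1]] + alpha * vocab_size))
    return lp

corpus = ["saya makan nasi", "saya minum teh", "dia makan nasi"]
vocab = {w for s in corpus for w in s.split()} | {"</s>"}
for n in (2, 3):   # compare bigram vs. trigram, as in the paper's experiments
    grams, contexts = train_ngram_counts(corpus, n)
    print(n, log_prob("saya makan nasi", grams, contexts, n, len(vocab)))
```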


2020 · Vol 4 (3) · pp. 519 · Author(s): Permata Permata · Zaenal Abidin

In this research, automatic translation from the Lampung dialect into Indonesian was carried out using the statistical machine translation (SMT) approach. Lampung-to-Indonesian translation can be done with a dictionary; an alternative is to use a Lampung-Indonesian parallel corpus with the SMT approach. The SMT approach proceeds in several phases: a pre-processing phase that prepares the parallel corpus, a training phase that processes the parallel corpus to obtain a language model and a translation model, a testing phase, and finally an evaluation phase. Testing used 25 simple sentences without out-of-vocabulary (OOV) words, 25 simple sentences with OOV words, 25 compound sentences without OOV words, and 25 compound sentences with OOV words. The Bilingual Evaluation Understudy (BLEU) scores for translating Lampung sentences into Indonesian were 77.07% on the 25 simple sentences without OOV, 72.29% on the 25 simple sentences with OOV, 79.84% on the 25 compound sentences without OOV, and 80.84% on the 25 compound sentences with OOV.
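BLEU scoring of a candidate translation against a reference can be reproduced with standard tooling, for example NLTK; the sentences below are invented placeholders, not the paper's Lampung-Indonesian test data.

```python
# BLEU scoring sketch using NLTK (placeholder sentences).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["dia", "sedang", "pergi", "ke", "pasar"]]   # Indonesian reference
hypothesis = ["dia", "pergi", "ke", "pasar"]              # SMT system output

smooth = SmoothingFunction().method1   # avoids zero scores on short sentences
score = sentence_bleu(reference, hypothesis, smoothing_function=smooth)
print(f"BLEU: {score:.2%}")
```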


2020 · Vol 12 (12) · pp. 215 · Author(s): Wenbo Zhang · Xiao Li · Yating Yang · Rui Dong · Gongxu Luo

Recently, model pretraining has been successfully applied to unsupervised and semi-supervised neural machine translation. A cross-lingual language model uses a pretrained masked language model to initialize the encoder and decoder of the translation model, which greatly improves translation quality. However, because of a mismatch in the number of layers, the pretrained model can only initialize part of the decoder’s parameters. In this paper, we use a layer-wise coordination transformer and a consistent-pretraining translation transformer instead of a vanilla transformer as the translation model. The former has only an encoder; the latter has an encoder and a decoder whose parameters are exactly shared. Both models guarantee that all parameters of the translation model can be initialized by the pretrained model. Experiments on the Chinese–English and English–German datasets show that, compared with the vanilla transformer baseline, our models achieve better performance with fewer parameters when the parallel corpus is small.
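The consistent-pretraining idea can be sketched by letting one layer stack serve as both encoder and decoder, so a pretrained masked language model with the same layout can initialize every parameter. The sketch below is an assumption-laden simplification, not the paper's exact model.

```python
# Sketch of the parameter-consistency idea: one transformer layer stack is
# reused for encoding and decoding, so every translation-model parameter has
# a counterpart in a pretrained masked language model. Sizes are assumptions.
import torch
import torch.nn as nn

class SharedEncDec(nn.Module):
    def __init__(self, vocab_size, d_model=512, nhead=8, num_layers=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.layers = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True),
            num_layers,
        )
        self.out = nn.Linear(d_model, vocab_size)

    def encode(self, src):
        return self.layers(self.embed(src))

    def decode(self, tgt):
        # The same stack runs over target tokens with a causal mask, so the
        # decoder introduces no extra layers that pretraining cannot cover.
        mask = nn.Transformer.generate_square_subsequent_mask(tgt.size(1))
        return self.out(self.layers(self.embed(tgt), mask=mask))

model = SharedEncDec(vocab_size=1000)
# A pretrained masked-LM checkpoint with matching shapes could initialize
# every parameter, e.g. model.load_state_dict(mlm_state, strict=False).
src = torch.randint(0, 1000, (2, 9))
print(model.decode(src).shape)           # (2, 9, 1000)
```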


2017 · Vol 26 (1) · pp. 65-72 · Author(s): Jinsong Su · Zhihao Wang · Qingqiang Wu · Junfeng Yao · Fei Long · ...
