Design and Application of English Grammar Error Correction System Based on Deep Learning

2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Chen Hongli

To address the low correction accuracy and long correction time of traditional English grammar error correction systems, this paper designs an English grammar error correction system based on deep learning. The business requirements and functions of the system are analyzed first, and the overall architecture is then designed according to the results of that analysis, comprising an English grammar error correction module, a service access module, and a feedback filtering module. A multilayer feedforward neural network is used to construct a language model that judges whether a language sequence is a well-formed sentence, thereby completing the correction of English grammatical errors. Experimental results show that the designed system corrects English grammatical errors with high accuracy and at high speed.
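A minimal sketch of the core idea behind the correction module, assuming a fixed-window multilayer feedforward language model that scores how "normal" a token sequence is (hyperparameters, layer sizes, and function names below are illustrative assumptions, not the paper's implementation):

```python
import torch
import torch.nn as nn

class FeedforwardLM(nn.Module):
    """Predict the next token from a fixed window of previous tokens."""
    def __init__(self, vocab_size, window=4, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.window = window
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.mlp = nn.Sequential(
            nn.Linear(window * embed_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, vocab_size),
        )

    def forward(self, context_ids):              # (batch, window)
        e = self.embed(context_ids).flatten(1)   # (batch, window * embed_dim)
        return self.mlp(e)                       # logits over the next token

def sentence_log_prob(model, token_ids, window=4):
    """Sum log P(w_t | previous `window` tokens); higher = more 'normal'."""
    log_p = 0.0
    for t in range(window, len(token_ids)):
        context = torch.tensor([token_ids[t - window:t]])
        log_probs = torch.log_softmax(model(context), dim=-1)
        log_p += log_probs[0, token_ids[t]].item()
    return log_p
```

In such a setup, a candidate correction would be preferred over the original sentence when it receives a noticeably higher language-model score.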

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Fei Long

To address the low accuracy, recall, and F1 score of traditional English grammar error detection methods, a new machine translation model is constructed and applied to English grammar error detection. Within an encoder-decoder framework, the model is built through word-vector generation, encoder language model construction, decoder language model construction, word alignment, and an output module. On this basis, the model is trained to detect English grammatical errors through dependency analysis and alternative-word generation. Experimental results show that the proposed method achieves higher accuracy, recall, and F1 scores than the comparison methods when detecting grammatical errors involving articles, prepositions, nouns, verbs, and subject-verb agreement, indicating that it is of high practical value.
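A hedged sketch of the encoder-decoder skeleton described above (GRU-based, with attention and word alignment omitted for brevity); the layer sizes and class names are assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)

    def forward(self, src_ids):                  # (batch, src_len)
        _, h = self.rnn(self.embed(src_ids))
        return h                                 # final hidden state

class Decoder(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)   # output module

    def forward(self, tgt_ids, h):               # teacher-forced decoding
        o, _ = self.rnn(self.embed(tgt_ids), h)
        return self.out(o)                       # logits per target position
```

At detection time, as the abstract describes, alternative words generated for dependency-analyzed positions would be scored by the trained model, and a large score gap between an alternative and the original word flags a likely grammatical error.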


Author(s):  
Zhijie Lin ◽  
Kaiyang Lin ◽  
Shiling Chen ◽  
Linlin Li ◽  
Zhou Zhao

End-to-end deep learning approaches to Automatic Speech Recognition (ASR) have become a new trend. In these approaches, which are now active in many areas, the language model can be considered an important and effective means of semantic error correction. Many existing systems use a single language model. In this paper, however, multiple language models (LMs) are applied during decoding: one LM selects appropriate candidate answers, and the others, which consider both context and grammar, make the final decision. Experiments on a general location-based dataset show the effectiveness of our method.
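A simplified illustration of the multi-LM decoding idea, assuming each LM is a callable that returns a log-probability-style score for a transcript (the two-stage split, shortlist size, and weights are illustrative assumptions):

```python
def rescore_nbest(hypotheses, selector_lm, extra_lms, weights):
    """hypotheses: list of (text, acoustic_score) pairs from the ASR decoder."""
    # Stage 1: the selector LM keeps the most plausible candidates.
    shortlist = sorted(
        hypotheses,
        key=lambda h: h[1] + selector_lm(h[0]),
        reverse=True,
    )[:10]

    # Stage 2: the remaining LMs (context- and grammar-aware) make the final call.
    def combined(h):
        text, acoustic = h
        return acoustic + selector_lm(text) + sum(
            w * lm(text) for lm, w in zip(extra_lms, weights)
        )

    return max(shortlist, key=combined)[0]
```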


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Yining Du

With the development of neural networks in deep learning, machine learning has become a major focus of researchers. In college English grammar detection, spoken grammar has the highest error rate. This paper therefore optimizes a multilayer perceptron (MLP) with a genetic algorithm (GA) within a deep learning neural network and studies intelligent image-based correction of spoken college English grammar. The GA-MLP-NN algorithm is first discussed and analyzed, and the optimized algorithm is then used to build a prediction model for spoken grammar error correction. The results show that GA-MLP-NN provides excellent accuracy for the overall grammar error correction model. The paper then applies deep learning technology to build an intelligent image-based error correction model for spoken college English grammar. The results show that intelligent correction of spoken grammar is both fast and accurate.
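A hedged sketch of how a GA can optimize MLP settings in a GA-MLP-NN setup: a genetic algorithm evolves MLP hyperparameters (hidden size, learning rate) against a validation fitness. The function `evaluate_mlp` below is a hypothetical stand-in for training the MLP and returning validation accuracy on the spoken-grammar data; all parameter ranges are assumptions.

```python
import random

def evaluate_mlp(hidden_size, learning_rate):
    # Placeholder fitness: in practice, train an MLP with these settings
    # and return its validation accuracy on the grammar-correction task.
    return random.random()

def genetic_search(pop_size=20, generations=30, mutation_rate=0.2):
    population = [
        {"hidden": random.choice([32, 64, 128, 256]),
         "lr": 10 ** random.uniform(-4, -1)}
        for _ in range(pop_size)
    ]
    for _ in range(generations):
        scored = sorted(population,
                        key=lambda c: evaluate_mlp(c["hidden"], c["lr"]),
                        reverse=True)
        parents = scored[: pop_size // 2]                    # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = {"hidden": a["hidden"], "lr": b["lr"]}   # crossover
            if random.random() < mutation_rate:              # mutation
                child["lr"] *= 10 ** random.uniform(-0.5, 0.5)
            children.append(child)
        population = parents + children
    return max(population, key=lambda c: evaluate_mlp(c["hidden"], c["lr"]))
```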


AI ◽  
2021 ◽  
Vol 2 (1) ◽  
pp. 1-16
Author(s):  
Juan Cruz-Benito ◽  
Sanjay Vishwakarma ◽  
Francisco Martin-Fernandez ◽  
Ismael Faro

In recent years, the use of deep learning in language models has gained much attention. Some research projects claim that they can generate text that can be interpreted as human writing, enabling new possibilities in many application areas. Among the different areas related to language processing, one of the most notable in applying this type of modeling is programming languages. For years, the machine learning community has been researching this software engineering area, pursuing goals like applying different approaches to auto-complete, generate, fix, or evaluate code programmed by humans. Considering the increasing popularity of the deep learning-enabled language model approach, we found a lack of empirical papers that compare different deep learning architectures for creating and using language models based on programming code. This paper compares different neural network architectures, such as Average Stochastic Gradient Descent (ASGD) Weight-Dropped LSTMs (AWD-LSTMs), AWD-Quasi-Recurrent Neural Networks (QRNNs), and the Transformer, while using transfer learning and different forms of tokenization, to see how they behave in building language models on a Python dataset for code-generation and fill-mask tasks. Considering the results, we discuss the strengths and weaknesses of each approach and the gaps we found in evaluating the language models or applying them in a real programming context.
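A brief illustration of the two evaluation tasks mentioned above (code generation and fill-mask) using the Hugging Face `transformers` library; the checkpoints shown are generic public models used only to demonstrate the task setup, not the architectures trained in the paper, and running this downloads the models:

```python
from transformers import pipeline

# Fill-mask: predict a masked token inside a Python snippet.
fill = pipeline("fill-mask", model="roberta-base")
print(fill("def add(a, b): <mask> a + b")[0]["token_str"])

# Code generation: continue a Python snippet left to right.
gen = pipeline("text-generation", model="gpt2")
print(gen("def fibonacci(n):", max_new_tokens=30)[0]["generated_text"])
```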


2020 ◽  
Vol 14 (4) ◽  
pp. 471-484
Author(s):  
Suraj Shetiya ◽  
Saravanan Thirumuruganathan ◽  
Nick Koudas ◽  
Gautam Das

Accurate selectivity estimation for string predicates is a long-standing research challenge in databases. Supporting pattern matching on strings (such as prefix, substring, and suffix) makes this problem much more challenging, thereby necessitating a dedicated study. Traditional approaches often build pruned summary data structures such as tries, followed by selectivity estimation using statistical correlations. However, this produces insufficiently accurate cardinality estimates, resulting in the selection of sub-optimal plans by the query optimizer. Recently proposed deep learning based approaches leverage techniques from natural language processing, such as embeddings, to encode the strings and use them to train a model. While this is an improvement over traditional approaches, there is still a large scope for improvement. We propose Astrid, a framework for string selectivity estimation that synthesizes ideas from traditional and deep learning based approaches. We make two complementary contributions. First, we propose an embedding algorithm that is query-type (prefix, substring, and suffix) and selectivity aware. Consider three strings 'ab', 'abc', and 'abd' whose prefix frequencies are 1000, 800, and 100, respectively. Our approach would ensure that the embedding for 'ab' is closer to 'abc' than to 'abd'. Second, we describe how neural language models could be used for selectivity estimation. While they work well for prefix queries, their performance on substring queries is sub-optimal. We modify the objective function of the neural language model so that it can be used for estimating selectivities of pattern matching queries. We also propose a novel and efficient algorithm for optimizing the new objective function. We conduct extensive experiments over benchmark datasets and show that our proposed approaches achieve state-of-the-art results.
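A hedged sketch of the selectivity-aware embedding intuition from the 'ab'/'abc'/'abd' example: strings whose prefix frequencies are close (1000 vs. 800) should embed closer together than strings whose frequencies differ sharply (1000 vs. 100). The character-level encoder, the use of a triplet loss, and all sizes below are illustrative assumptions, not Astrid's actual training objective.

```python
import torch
import torch.nn as nn

class CharEncoder(nn.Module):
    """Map a character-ID sequence to a fixed-size string embedding."""
    def __init__(self, n_chars=128, embed_dim=32, out_dim=64):
        super().__init__()
        self.embed = nn.Embedding(n_chars, embed_dim)
        self.rnn = nn.GRU(embed_dim, out_dim, batch_first=True)

    def forward(self, char_ids):                 # (batch, max_len)
        _, h = self.rnn(self.embed(char_ids))
        return h.squeeze(0)                      # (batch, out_dim)

def to_ids(s, max_len=8):
    ids = [ord(c) % 128 for c in s][:max_len]
    return torch.tensor([ids + [0] * (max_len - len(ids))])

encoder = CharEncoder()
triplet = nn.TripletMarginLoss(margin=1.0)
# Anchor 'ab' is pulled toward 'abc' (similar prefix frequency)
# and pushed away from 'abd' (very different prefix frequency).
loss = triplet(encoder(to_ids("ab")), encoder(to_ids("abc")), encoder(to_ids("abd")))
loss.backward()
```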


2022 ◽  
Vol 12 ◽  
Author(s):  
Minghui Du ◽  
Yiqun Qian

The study aims to explore the role of Massive Open Online Courses (MOOCs) based on deep learning in college students' English grammar teaching. Data are collected through a survey. Analysis of the experimental data shows that students have a low sense of happiness and satisfaction and are unwilling to practice oral English or study language points; they regard college English learning merely as preparation for CET-4 and CET-6 rather than as a learning goal in itself. After the necessity of and problems in English grammar teaching are discussed, the advantages of MOOC-based flipped classrooms for grammar teaching are examined. A teaching platform is constructed to study the foreign-language teaching mode under MOOCs, and classroom teaching is combined with the advantages of MOOCs, following the principle of "teaching students according to their aptitude," to improve the listening, speaking, reading, writing, and translation skills of foreign-language majors. The results show that high-quality online teaching resources and a deep learning-based teaching environment can provide a variety of interactive tools through which students communicate with their peers and teachers online. Open online communication, classroom discussion, and situational simulation can enhance teachers' deep learning ability, such as the ability to communicate and convey ideas. Constructivism with interaction at its core helps students grasp new knowledge, and extensive communication and interaction are important ways of learning and thinking. The new model gives students a profound learning experience, expands MOOC teaching resources worldwide, and maximizes interaction between online and offline teachers and students, rooting knowledge in the campus and combining online resources with campus classroom teaching. Students can learn new material through autonomous learning and discussion before class, which greatly broadens the time and space available for learning. In and after class, knowledge is internalized and consolidated through group cooperation, inquiry learning, scenario simulation, presentation, and evaluation, helping students master new knowledge and highlighting their dominant position in learning.


Author(s):  
Martin Tomlinson ◽  
Cen Jung Tjhai ◽  
Marcel A. Ambroze ◽  
Mohammed Ahmed ◽  
Mubarak Jibril

Radiology ◽  
2021 ◽  
Author(s):  
Sophie You ◽  
Evan M. Masutani ◽  
Marcus T. Alley ◽  
Shreyas S. Vasanawala ◽  
Pam R. Taub ◽  
...  
