An Approach Based on Multilevel Convolution for Sentence-Level Element Extraction of Legal Text

2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Zhe Chen ◽  
Hongli Zhang ◽  
Lin Ye ◽  
Shang Li

In the judicial field, as the volume of legal text data grows, the extraction of legal text elements plays an increasingly important role. In this paper, we propose a sentence-level model for legal text element extraction based on the structure of multilabel text classification. Our proposed model contains an encoder and an improved decoder. The encoder applies multilevel convolutional neural networks (CNN) and Long Short-Term Memory (LSTM) as feature extraction networks to extract local neighborhood and context information from legal text, and the decoder applies an LSTM with multiattention and a fully connected layer with an improved initialization method to decode and generate label sequences. To the best of our knowledge, this is one of the first attempts to apply a multilabel classification algorithm to element extraction of legal text. To verify the effectiveness of our model, we conduct experiments not only on three real legal text datasets but also on a general multilabel text classification dataset. The experimental results demonstrate that our proposed model outperforms baseline models on the legal text datasets and is competitive with baseline models on the general multilabel text classification dataset, which indicates that it is useful for multilabel classification of both ordinary texts and legal texts, including short texts whose words vary in character length.
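A minimal PyTorch sketch of the kind of encoder the abstract describes appears below; all dimensions, kernel sizes, and the per-label sigmoid head are illustrative assumptions on our part (the paper's actual decoder is an attentive LSTM that emits a label sequence, which a simple multilabel head stands in for here).

```python
import torch
import torch.nn as nn

class ElementExtractor(nn.Module):
    """Sketch only: multilevel-CNN + BiLSTM encoder with a multilabel head."""
    def __init__(self, vocab_size, emb_dim=128, hidden=128, n_labels=20):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Parallel convolutions with different kernel sizes capture local
        # neighborhoods at several granularities (the "multilevel" CNN).
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, hidden, k, padding=k // 2) for k in (1, 3, 5)
        )
        self.encoder = nn.LSTM(3 * hidden, hidden, batch_first=True,
                               bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, n_labels)

    def forward(self, tokens):                        # tokens: (B, T)
        x = self.embed(tokens).transpose(1, 2)        # (B, E, T)
        feats = torch.cat([torch.relu(c(x)) for c in self.convs], dim=1)
        enc, _ = self.encoder(feats.transpose(1, 2))  # (B, T, 2*hidden)
        return torch.sigmoid(self.classifier(enc.mean(dim=1)))  # label probs
```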

2021 ◽  
Author(s):  
Benjamin Clavié ◽  
Marc Alphonsus

We aim to highlight an interesting trend to contribute to the ongoing debate around advances within legal Natural Language Processing. Recently, the focus for most legal text classification tasks has shifted towards large pre-trained deep learning models such as BERT. In this paper, we show that a more traditional approach based on Support Vector Machine classifiers reaches performance competitive with deep learning models. We also highlight that the error reduction obtained by using specialised BERT-based models over baselines is noticeably smaller in the legal domain than in general language tasks. We discuss several hypotheses for these results to inform future work.
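For reference, the kind of traditional SVM baseline the paper argues for can be set up in a few lines of scikit-learn; the feature and regularization choices below are illustrative, not the authors' exact configuration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# TF-IDF unigrams/bigrams feeding a linear SVM: a strong, cheap baseline.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True, min_df=2),
    LinearSVC(C=1.0),
)
# clf.fit(train_texts, train_labels)   # train_texts: list of strings
# preds = clf.predict(test_texts)
```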


2020 ◽  
Vol 34 (08) ◽  
pp. 13332-13337
Author(s):  
Neil Mallinar ◽  
Abhishek Shah ◽  
Tin Kam Ho ◽  
Rajendra Ugrani ◽  
Ayush Gupta

Real-world text classification tasks often require many labeled training examples that are expensive to obtain. Recent advancements in machine teaching, specifically the data programming paradigm, facilitate the rapid creation of training data sets via a general framework for building weak models, also known as labeling functions, and denoising them through ensemble learning techniques. We present a fast, simple data programming method for augmenting text data sets by generating neighborhood-based weak models with minimal supervision. Furthermore, our method employs an iterative procedure to identify sparsely distributed examples from large volumes of unlabeled data. These iterative data programming techniques improve newer weak models as more labeled data is confirmed by a human in the loop. We show empirical results on sentence classification tasks, including those from a task of improving intent recognition in conversational agents.
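To make the data programming idea concrete, here is a toy sketch of labeling functions denoised by majority vote; the functions, labels, and keywords are invented for illustration and are not the paper's neighborhood-based weak models.

```python
import numpy as np

ABSTAIN = -1

def lf_contains_refund(text: str) -> int:
    """Toy labeling function: weakly votes 'complaint' (label 1)."""
    return 1 if "refund" in text.lower() else ABSTAIN

def lf_contains_thanks(text: str) -> int:
    """Toy labeling function: weakly votes 'praise' (label 0)."""
    return 0 if "thank" in text.lower() else ABSTAIN

def majority_vote(texts, lfs):
    """Denoise weak votes by majority; all-abstain examples stay -1."""
    labels = []
    for t in texts:
        votes = [v for lf in lfs if (v := lf(t)) != ABSTAIN]
        labels.append(int(np.bincount(votes).argmax()) if votes else ABSTAIN)
    return np.array(labels)

# majority_vote(["thanks a lot", "I want a refund"],
#               [lf_contains_refund, lf_contains_thanks])  # -> array([0, 1])
```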


2019 ◽  
Vol 28 (3) ◽  
pp. 395-411
Author(s):  
Charles Chang ◽  
Michael Masterson

Political scientists often wish to classify documents based on their content to measure variables, such as the ideology of political speeches or whether documents describe a Militarized Interstate Dispute. Simple classifiers often serve well in these tasks. However, if words occurring early in a document alter the meaning of words occurring later in the document, using a more complicated model that can incorporate these time-dependent relationships can increase classification accuracy. Long short-term memory (LSTM) models are a type of neural network model designed to work with data that contains time dependencies. We investigate the conditions under which these models are useful for political science text classification tasks with applications to Chinese social media posts as well as US newspaper articles. We also provide guidance for the use of LSTM models.
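As a concrete illustration of such a model, a minimal Keras LSTM classifier might look like the following; the vocabulary size, dimensions, and binary output are placeholder assumptions, not the authors' exact setup.

```python
import tensorflow as tf

# Minimal sketch of an LSTM document classifier of the kind discussed.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=20000, output_dim=128),
    tf.keras.layers.LSTM(64),            # carries order-dependent context
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```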


2019 ◽  
Vol 27 (1) ◽  
pp. 81-88 ◽  
Author(s):  
Hans Moen ◽  
Kai Hakala ◽  
Laura-Maria Peltonen ◽  
Henry Suhonen ◽  
Filip Ginter ◽  
...  

Abstract

Objective: This study focuses on the task of automatically assigning standardized (topical) subject headings to free-text sentences in clinical nursing notes. The underlying motivation is to support nurses when they document patient care by developing a computer system that can assist in incorporating suitable subject headings that reflect the documented topics. Central to this study is a performance evaluation of several text classification methods to assess the feasibility of developing such a system.

Materials and Methods: Seven text classification methods are evaluated using a corpus of approximately 0.5 million nursing notes (5.5 million sentences) with 676 unique headings, extracted from a Finnish university hospital. Several of these methods are based on artificial neural networks. Evaluation is first done automatically for all methods; then a manual error analysis is done on a sample.

Results: We find that a method based on a bidirectional long short-term memory network performs best, with an average recall of 0.5435 when allowed to suggest 1 subject heading per sentence and 0.8954 when allowed to suggest 10 subject headings per sentence. However, other methods achieve comparable results. The manual analysis indicates that the predictions are better than what the automatic evaluation suggests.

Conclusions: The results indicate that several of the tested methods perform well in suggesting the most appropriate subject headings at the sentence level. Thus, we find it feasible to develop a text classification system that can support the use of standardized terminologies and save nurses time and effort on care documentation.
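One plausible reading of the recall-at-k evaluation described above is sketched below; the array shapes and names are our assumptions, not the study's evaluation code.

```python
import numpy as np

def recall_at_k(scores: np.ndarray, gold: list, k: int) -> float:
    """Average recall when the model may suggest k headings per sentence.

    scores: (n_sentences, n_headings) predicted scores
    gold:   list of sets of true heading indices, one set per sentence
    """
    recalls = []
    for row, true_set in zip(scores, gold):
        if not true_set:
            continue                      # skip sentences with no gold headings
        top_k = set(np.argsort(row)[::-1][:k])
        recalls.append(len(top_k & true_set) / len(true_set))
    return float(np.mean(recalls))
```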


2021 ◽  
Vol 4 (1) ◽  
pp. 13 ◽  
Author(s):  
Mukul Jaggi ◽  
Priyanka Mandal ◽  
Shreya Narang ◽  
Usman Naseem ◽  
Matloob Khushi

Stock price prediction can be made more efficient by considering price fluctuations and understanding people's sentiments. Few models understand financial jargon or have labelled datasets concerning stock price change. To overcome this challenge, we introduce FinALBERT, an ALBERT-based model trained to handle financial-domain text classification tasks by labelling Stocktwits text data based on stock price change. We collected Stocktwits data spanning over ten years for 25 different companies, including the five major FAANG companies (Facebook, Amazon, Apple, Netflix, Google). These datasets were labelled with three labelling techniques based on stock price changes. Our proposed model FinALBERT is fine-tuned with these labels to achieve optimal results. We experimented with the labelled dataset by training it on traditional machine learning, BERT, and FinBERT models, which helped us understand how these labels behave with different model architectures. The competitive advantage of our labelling method is that it can help analyse historical data effectively, and the mathematical function can be easily customised to predict stock movement.
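As an illustration of price-change-based labelling, the sketch below tags each trading day by the next day's relative close-price change; the column names, threshold, and three-way scheme are assumptions on our part, not the paper's exact labelling techniques.

```python
import pandas as pd

def label_by_price_change(df: pd.DataFrame, threshold: float = 0.01) -> pd.Series:
    """Label each trading day by the next day's relative close-price move."""
    nxt = df["close"].pct_change().shift(-1)   # next-day return, aligned to today
    return pd.cut(
        nxt,
        bins=[-float("inf"), -threshold, threshold, float("inf")],
        labels=["down", "neutral", "up"],
    )
```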


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Takwa Mohamed ◽  
Sabah Sayed ◽  
Akram Salah ◽  
Essam H. Houssein

Viral evolution remains a major obstacle to the viability of antiviral drugs. The ability to anticipate this evolution would assist the early detection of drug-resistant strains and may help keep antiviral drugs the most effective treatment plan. In recent years, a deep learning model called the seq2seq neural network has emerged and has been widely used in natural language processing. In this research, we borrow this approach to predict next-generation sequences using a seq2seq LSTM neural network, treating the sequences as text data. We use one-hot vectors to represent the sequences as input to the model, which preserves the position of each nucleotide in the sequences. Two RNA virus sequence datasets are used to evaluate the proposed model, which achieved encouraging results. The achieved results illustrate the potential of the LSTM neural network for DNA and RNA sequences in solving other sequencing issues in bioinformatics.
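A small sketch of the one-hot sequence encoding described above; the alphabet handling and unknown-symbol policy are our assumptions.

```python
import numpy as np

NUCLEOTIDES = "ACGU"  # RNA alphabet; use "ACGT" for DNA

def one_hot_encode(seq: str) -> np.ndarray:
    """Encode a nucleotide sequence as one-hot vectors, preserving the
    position of each base in the sequence."""
    index = {n: i for i, n in enumerate(NUCLEOTIDES)}
    out = np.zeros((len(seq), len(NUCLEOTIDES)), dtype=np.float32)
    for pos, base in enumerate(seq.upper()):
        if base in index:                 # unknown symbols stay all-zero
            out[pos, index[base]] = 1.0
    return out
```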


2021 ◽  
Vol 38 (6) ◽  
pp. 1809-1817
Author(s):  
Praveen Kumar Yechuri ◽  
Suguna Ramadass

The advent of social networking and the internet has resulted in a huge shift in how consumers express their loyalty and where firms acquire a reputation. Customers frequently leave comments about businesses, and entrepreneurs respond in kind. These write-ups can be useful to those with the ability to analyse them. However, analysing textual content without computers and the associated tools is time-consuming and difficult. The goal of Sentiment Analysis (SA) is to discover client feedback, points of view, or complaints that describe a product in a more negative or optimistic light; merely reading feedback or examining ratings cannot deliver such a result at scale. There was a time when standard techniques, such as linear regression and Support Vector Machines (SVM), were effective for automatically discovering knowledge from written explanations, but these older approaches have now been largely replaced by deep neural networks. Convolutional and recurrent networks are useful for tasks like machine translation, caption creation, and language modelling; however, plain RNNs suffer from vanishing or exploding gradients on long sequences. This research uses a deep recurrent network for movie review sentiment prediction, namely a Long Short-Term Memory (LSTM) network; LSTM models are well suited to modelling long sequential data. Generally, sentence vectorization approaches are used to overcome the inconsistency of sentence form. We investigated the effect of hyperparameters such as layer dropout and activation functions, tested the model with different neural network settings, and present the results in several ways to account for the data. The IMDB movie review database serves as the basis for all experimental studies of the proposed model.
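A minimal Keras version of the kind of IMDB LSTM model discussed, with layer dropout exposed as a tunable hyperparameter; the exact rates, units, and activations below are placeholders, not the study's tuned settings.

```python
import tensorflow as tf

# Load the IMDB reviews as integer word indices and pad to fixed length.
(x_train, y_train), _ = tf.keras.datasets.imdb.load_data(num_words=10000)
x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train, maxlen=200)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(10000, 64),
    tf.keras.layers.LSTM(64, dropout=0.2, recurrent_dropout=0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=3, batch_size=64)
```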


Author(s):  
Zhenguo Yan ◽  
Yue Wu

Convolutional Neural Networks (CNNs) effectively extract local features from input data. However, CNNs based on word embeddings and convolution layers can perform poorly on text classification tasks compared with traditional baseline methods. We address this problem and propose a model named NNGN that simplifies the convolution layer in the CNN by replacing it with a pooling layer that extracts n-gram embeddings in a simpler way and obtains document representations via linear computation. We implement two settings in our model to extract n-gram features. In the first setting, which we refer to as seq-NNGN, we consider word order within each n-gram. In the second setting, BoW-NNGN, we do not consider word order. We compare the performance of these settings on different classification tasks with those of other models. The experimental results show that our proposed model achieves better performance than state-of-the-art models.
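A rough PyTorch sketch of the BoW-NNGN idea (order-insensitive n-gram pooling in place of convolution) appears below; dimensions and the linear head are our illustrative choices, not the paper's specification.

```python
import torch
import torch.nn as nn

class BoWNGramPooler(nn.Module):
    """Sketch: replace convolution with pooling over summed n-gram embeddings."""
    def __init__(self, vocab_size, emb_dim=128, n=2, n_classes=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.n = n
        self.out = nn.Linear(emb_dim, n_classes)

    def forward(self, tokens):                      # tokens: (B, T)
        e = self.embed(tokens)                      # (B, T, E)
        # Sum embeddings within each n-gram window (order-insensitive),
        # then average the n-gram vectors into a document representation.
        grams = e.unfold(1, self.n, 1).sum(dim=-1)  # (B, T - n + 1, E)
        return self.out(grams.mean(dim=1))          # (B, n_classes)
```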


This work elaborates on the integration of a rudimentary Convolutional Neural Network (CNN) with Long Short-Term Memory (LSTM), resulting in a new paradigm in the well-explored field of image classification. LSTM is a kind of Recurrent Neural Network (RNN) with the potential to memorize long-term dependencies. We observed that LSTMs are able to complement the feature extraction ability of CNNs when used in a layered order: LSTMs have the capacity to selectively remember patterns for a long duration of time, and CNNs are able to extract the important features from the input. This layered LSTM-CNN structure, when used for image classification, has an edge over a conventional CNN classifier. The proposed model combines two families of artificial neural networks, recurrent and convolutional, which makes it robust and suitable for a wide spectrum of classification tasks. To validate these results, we tested our model on two standard datasets and compared the results with other classifiers to establish the significance of our proposed model.
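A hedged Keras sketch of one way to layer a CNN and an LSTM for image classification, reading convolutional feature-map rows as a sequence; the input shape and layer sizes assume something like 28x28 grayscale images and are not the paper's exact architecture.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu",
                           input_shape=(28, 28, 1)),   # -> (26, 26, 32)
    tf.keras.layers.MaxPooling2D(),                    # -> (13, 13, 32)
    tf.keras.layers.Reshape((13, 13 * 32)),            # rows become timesteps
    tf.keras.layers.LSTM(64),                          # sequence over rows
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```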


2020 ◽  
Author(s):  
Pathikkumar Patel ◽  
Bhargav Lad ◽  
Jinan Fiaidhi

Over the last few years, RNN models have been used extensively and have proven to be well suited to sequence and text data. RNNs have achieved state-of-the-art performance in several applications such as text classification, sequence-to-sequence modelling, and time series forecasting. In this article we review different Machine Learning and Deep Learning based approaches for text data and look at the results obtained from these methods. This work also explores the use of transfer learning in NLP and how it affects the performance of models on a specific application of sentiment analysis.
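As a concrete example of transfer learning for sentiment analysis, the sketch below loads a pretrained transformer for fine-tuning with the Hugging Face transformers library; the model name and two-class setup are just one common choice, not necessarily what this article evaluates.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load a pretrained encoder with a fresh two-class classification head.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

batch = tokenizer(["a gripping, well-acted film"],
                  return_tensors="pt", padding=True, truncation=True)
outputs = model(**batch)          # logits over {negative, positive}
```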

