A comparative study of automated legal text classification using random forests and deep learning

2022, Vol 59 (2), pp. 102798
Haihua Chen, Lei Wu, Jiangping Chen, Wei Lu, Junhua Ding
2021
Benjamin Clavié, Marc Alphonsus

We highlight an interesting trend to contribute to the ongoing debate on advances in legal Natural Language Processing. Recently, the focus for most legal text classification tasks has shifted towards large pre-trained deep learning models such as BERT. In this paper, we show that a more traditional approach based on Support Vector Machine classifiers reaches performance competitive with deep learning models. We also highlight that the error reduction obtained by using specialised BERT-based models over baselines is noticeably smaller in the legal domain than on general language tasks. We discuss some hypotheses for these results to support future discussions.
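The "more traditional approach" the abstract refers to can be sketched as a linear SVM over TF-IDF features. This is a minimal illustration, not the authors' code: the toy documents, labels, and pipeline settings below are assumptions chosen only to show the technique.

```python
# Hedged sketch of an SVM text classifier: TF-IDF features feeding a
# linear SVM, via scikit-learn. Texts and labels are toy placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [
    "The tenant shall pay rent on the first day of each month.",
    "The defendant is charged with breach of contract.",
    "Seller warrants that the goods are free of defects.",
    "The court finds the appellant liable for damages.",
]
labels = ["contract", "litigation", "contract", "litigation"]

# Word unigrams and bigrams; sublinear TF damps very frequent terms.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True),
    LinearSVC(),
)
clf.fit(texts, labels)

prediction = clf.predict(["The lessee agrees to pay a monthly fee."])[0]
print(prediction)
```

Such pipelines train in seconds on CPU, which is part of why they remain a strong baseline against large pre-trained models.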

2020
Pathikkumar Patel, Bhargav Lad, Jinan Fiaidhi

Over the last few years, RNN models have been used extensively and have proven well suited to sequence and text data. RNNs have achieved state-of-the-art performance in several applications such as text classification, sequence-to-sequence modelling, and time series forecasting. In this article we review different machine learning and deep learning approaches to text data and examine the results obtained with these methods. This work also explores the use of transfer learning in NLP and how it affects model performance on a specific application: sentiment analysis.
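The core mechanism behind the RNN models the abstract surveys is a recurrent hidden state updated once per token. The following is a minimal NumPy sketch of that idea for a sentiment-style binary score, not code from the article; all shapes, weights, and token IDs are illustrative assumptions.

```python
# Hedged sketch: forward pass of a simple (Elman-style) RNN over a
# token-ID sequence, producing a sigmoid sentiment score. Weights are
# random placeholders; a real model would learn them by backpropagation.
import numpy as np

rng = np.random.default_rng(0)
vocab_size, embed_dim, hidden_dim = 50, 8, 16

E = rng.normal(0, 0.1, (vocab_size, embed_dim))      # embedding table
W_xh = rng.normal(0, 0.1, (embed_dim, hidden_dim))   # input -> hidden
W_hh = rng.normal(0, 0.1, (hidden_dim, hidden_dim))  # hidden -> hidden
w_out = rng.normal(0, 0.1, hidden_dim)               # hidden -> logit

def rnn_sentiment(token_ids):
    """Run the recurrence over the sequence; score the final state."""
    h = np.zeros(hidden_dim)
    for t in token_ids:                  # one recurrent step per token
        h = np.tanh(E[t] @ W_xh + h @ W_hh)
    logit = h @ w_out
    return 1.0 / (1.0 + np.exp(-logit))  # sigmoid -> score in (0, 1)

score = rnn_sentiment([3, 17, 42, 5])
print(round(float(score), 3))
```

Because the same `W_hh` is reused at every step, the hidden state carries context across the whole sequence; that weight sharing is also what transfer learning in NLP exploits when pre-trained weights are reused on a new task.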

2021, Vol 58 (3), pp. 102481
Washington Cunha, Vítor Mangaravite, Christian Gomes, Sérgio Canuto, Elaine Resende, …
