compression model
Recently Published Documents


TOTAL DOCUMENTS: 169 (last five years: 40)

H-INDEX: 21 (last five years: 2)

2022 ◽  
Vol 13 (1) ◽  
pp. 0-0

This paper develops a novel image compression model with four major phases: (i) segmentation, (ii) feature extraction, (iii) region-of-interest (ROI) classification, and (iv) compression. The image is first segmented into two regions by an adaptive active contour model (ACM); producing two separate regions is what enables a dedicated ROI classification phase. For classification, gray-level co-occurrence matrix (GLCM) features are extracted from the segmented parts and passed to a neural network (NN) trained with a new training algorithm. As the main novelty, JA and WOA are merged into a hybrid J-WOA that tunes the ACM (weighting factor and maximum iteration) and optimizes the weights in the NN training algorithm; the resulting model is referred to as J-WOA-NN. This classifier accurately identifies the ROI regions. During compression, the ROI regions are handled by the lossless JPEG-LS algorithm, while the non-ROI regions are handled by a wavelet-based lossy compression algorithm. Decompression is carried out by reversing the same process.
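As a rough illustration of the feature-extraction phase, the sketch below computes a GLCM for a small quantized region and derives two common descriptors, contrast and homogeneity. The toy region, the horizontal offset, and the choice of descriptors are assumptions for illustration, not the paper's exact configuration.

```python
# Sketch: GLCM features for one segmented region (illustrative assumptions).

def glcm(region, levels, dx=1, dy=0):
    """Count co-occurrences of gray levels at offset (dx, dy), normalized."""
    m = [[0] * levels for _ in range(levels)]
    h, w = len(region), len(region[0])
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                m[region[y][x]][region[ny][nx]] += 1
    total = sum(sum(row) for row in m)
    return [[c / total for c in row] for row in m]

def glcm_features(p):
    """Contrast and homogeneity, two common GLCM descriptors."""
    n = len(p)
    contrast = sum(p[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))
    homogeneity = sum(p[i][j] / (1 + abs(i - j)) for i in range(n) for j in range(n))
    return contrast, homogeneity

# Toy 4x4 region quantized to 4 gray levels.
region = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [2, 2, 3, 3],
    [2, 2, 3, 3],
]
contrast, homogeneity = glcm_features(glcm(region, levels=4))
```

In the full pipeline such descriptors would be computed per segmented region and fed to the NN classifier to decide which region is the ROI.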


2021 ◽  
Vol 11 (21) ◽  
pp. 9910
Author(s):  
Yo-Han Park ◽  
Gyong-Ho Lee ◽  
Yong-Seok Choi ◽  
Kong-Joo Lee

Sentence compression is a natural language processing task that produces a short paraphrase of an input sentence by deleting words from it while ensuring grammatical correctness and preserving the meaningful core information. This study introduces a graph convolutional network (GCN) into the sentence compression task to encode syntactic information such as dependency trees. Because we extend the GCN to handle directed edges, a compression model with GCN layers can distinguish between parent and child nodes in a dependency tree when aggregating adjacent nodes. Furthermore, by increasing the number of GCN layers, the model can gradually collect higher-order information from the dependency tree as node information propagates through the layers. We implement sentence compression models for Korean and for English. Each model consists of three components: a pre-trained BERT model, GCN layers, and a scoring layer. The scoring layer determines whether a word should remain in the compressed sentence based on the word vector, which contains contextual and syntactic information encoded by the BERT and GCN layers. To train and evaluate the proposed model, we used the Google sentence compression dataset for English and a Korean sentence compression corpus containing about 140,000 sentence pairs. The experimental results demonstrate that the proposed model achieves state-of-the-art performance for English. To the best of our knowledge, this is the first sentence compression model for Korean based on a deep learning model trained with a large-scale corpus.
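The directed-edge aggregation described above can be sketched as a single GCN layer in which messages from a node's head (parent) and from its dependents (children) use separate weight matrices, so the layer can tell the two apart. The dimensions, weights, tree, and ReLU activation below are illustrative assumptions, not the authors' exact formulation.

```python
# Sketch: one directed-edge GCN layer over a dependency tree
# (illustrative assumptions).

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def vadd(a, b):
    return [u + v for u, v in zip(a, b)]

def directed_gcn_layer(h, heads, W_self, W_parent, W_child):
    """h: node vectors; heads[i] = index of node i's head (-1 for the root)."""
    n = len(h)
    children = [[] for _ in range(n)]
    for i, p in enumerate(heads):
        if p >= 0:
            children[p].append(i)
    out = []
    for i in range(n):
        msg = matvec(W_self, h[i])                       # self loop
        if heads[i] >= 0:                                # message from parent
            msg = vadd(msg, matvec(W_parent, h[heads[i]]))
        for c in children[i]:                            # messages from children
            msg = vadd(msg, matvec(W_child, h[c]))
        out.append([max(0.0, v) for v in msg])           # ReLU
    return out

# Tiny 3-node tree: node 0 is the root, nodes 1 and 2 depend on it.
h = [[1, 0], [0, 1], [1, 1]]
heads = [-1, 0, 0]
I2 = [[1, 0], [0, 1]]
W_parent = [[2, 0], [0, 2]]
out = directed_gcn_layer(h, heads, I2, W_parent, I2)
```

Stacking several such layers lets a token's representation absorb information from grandparents and grandchildren in the dependency tree, matching the higher-order propagation described in the abstract.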

