decoder architecture
Recently Published Documents

Total documents: 394 (last five years: 152)
H-index: 22 (last five years: 4)

2022, Vol. 183, pp. 228-239
Author(s): Zhuo Zheng, Yanfei Zhong, Shiqi Tian, Ailong Ma, Liangpei Zhang

Symmetry, 2021, Vol. 14 (1), p. 1
Author(s): Wenxuan Zhao, Yaqin Zhao, Liqi Feng, Jiaxi Tang

Existing dehazing algorithms are problematic because dense haze is unevenly distributed across images, and deep convolutional dehazing networks rely too heavily on large-scale datasets. To address these problems, this paper proposes a generative adversarial network based on a deep symmetric encoder-decoder architecture for removing dense haze. To restore the clear image, a four-layer down-sampling encoder is constructed to extract the semantic information lost to the dense haze. In the symmetric decoder module, an attention mechanism is introduced to adaptively assign weights to different pixels and channels, so as to handle the uneven distribution of haze. Finally, the model is trained within a generative adversarial framework so that it learns effectively on small-scale datasets. The experimental results show that the proposed dehazing network not only effectively removes unevenly distributed dense haze from real-scene images, but also performs well on real-scene datasets with few training samples, with evaluation indexes better than those of other widely used comparison algorithms.
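
The pixel- and channel-wise attention described above can be sketched compactly. Below is a minimal, hypothetical PyTorch module (layer sizes and names are illustrative assumptions, not the authors' code) showing how a symmetric decoder stage might re-weight channels and spatial locations so that densely hazed regions receive more attention.

```python
import torch
import torch.nn as nn

class PixelChannelAttention(nn.Module):
    """Adaptively re-weights channels and pixels of a decoder feature map.

    A sketch of the kind of attention the abstract describes; the layer
    sizes and the reduction ratio are illustrative, not the paper's code.
    """
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: squeeze spatial dims, excite per channel.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Pixel attention: a 1x1 conv yields one weight per spatial location.
        self.pixel_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)   # emphasize informative channels
        x = x * self.pixel_gate(x)     # emphasize heavily hazed regions
        return x

# Usage: insert after each up-sampling block of the symmetric decoder.
feat = torch.randn(2, 64, 128, 128)
out = PixelChannelAttention(64)(feat)  # same shape, re-weighted
```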


2021, Vol. 5 (3), pp. 352-361
Author(s): Saadet Aytaç ARPACI, Songül VARLI

2021, Vol. 8 (2), pp. 303-315
Author(s): Jingyu Gong, Zhou Ye, Lizhuang Ma

A significant performance boost has been achieved in point cloud semantic segmentation by utilizing the encoder-decoder architecture and novel convolution operations for point clouds. However, co-occurrence relationships within a local region, which can directly influence segmentation results, are usually ignored by current work. In this paper, we propose a neighborhood co-occurrence matrix (NCM) to model local co-occurrence relationships in a point cloud. We generate a target NCM and a prediction NCM from the semantic labels and the prediction map, respectively. Kullback-Leibler (KL) divergence is then used to maximize the similarity between the target and prediction NCMs so that the co-occurrence relationships are learned. Moreover, for large scenes where the NCMs of a sampled point cloud and of the whole scene differ greatly, we introduce a reverse form of KL divergence which better handles this difference when supervising the prediction NCMs. We integrate our method into an existing backbone and conduct comprehensive experiments on three datasets: Semantic3D for outdoor space segmentation, and S3DIS and ScanNet v2 for indoor scene segmentation. Results indicate that our method significantly improves upon the backbone and outperforms many leading competitors.
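
To make the NCM idea concrete, here is a hedged PyTorch sketch of how a neighborhood co-occurrence matrix and its forward/reverse KL supervision could be computed. The neighbor search, shapes, and normalization below are assumptions for illustration, not the paper's released code.

```python
import torch
import torch.nn.functional as F

def neighborhood_cooccurrence(probs: torch.Tensor,
                              neighbor_idx: torch.Tensor) -> torch.Tensor:
    """Normalized class co-occurrence matrix over local neighborhoods.

    probs:        (N, C) per-point class distribution; use one-hot labels
                  for the target NCM and softmax outputs for the prediction
                  NCM (which keeps the loss differentiable).
    neighbor_idx: (N, k) indices of each point's k nearest neighbors.
    Returns a (C, C) joint distribution over (center class, neighbor class).
    """
    neigh = probs[neighbor_idx]                     # (N, k, C)
    ncm = torch.einsum('nc,nkd->cd', probs, neigh)  # accumulate pair mass
    return ncm / ncm.sum().clamp(min=1e-8)

def ncm_kl_loss(pred_ncm, target_ncm, reverse=False, eps=1e-8):
    """KL divergence between the two NCMs; reverse=True gives the
    reverse-KL form the abstract suggests for large scenes."""
    p, q = (pred_ncm, target_ncm) if reverse else (target_ncm, pred_ncm)
    return torch.sum(p * (torch.log(p + eps) - torch.log(q + eps)))

# Usage sketch with hypothetical shapes; neighbor_idx stands in for a kNN search.
N, k, C = 4096, 16, 13
logits = torch.randn(N, C, requires_grad=True)
labels = torch.randint(0, C, (N,))
neighbor_idx = torch.randint(0, N, (N, k))
pred_ncm = neighborhood_cooccurrence(logits.softmax(-1), neighbor_idx)
target_ncm = neighborhood_cooccurrence(F.one_hot(labels, C).float(), neighbor_idx)
loss = ncm_kl_loss(pred_ncm, target_ncm)
```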


2021, pp. 1-10
Author(s): Zhiqiang Yu, Yuxin Huang, Junjun Guo

It has been shown that the performance of neural machine translation (NMT) drops starkly in low-resource conditions. Thai-Lao is a typical low-resource language pair with a tiny parallel corpus, which leads to suboptimal NMT performance. However, Thai and Lao have considerable similarities in linguistic morphology, and a bilingual lexicon for the pair is relatively easy to obtain. To exploit this, we first build a bilingual similarity lexicon composed of pairs of similar words. We then propose a novel NMT architecture that leverages the similarity between Thai and Lao. Specifically, besides the prevailing sentence encoder, we introduce an extra similarity lexicon encoder into the conventional encoder-decoder architecture, through which the semantic information carried by the similarity lexicon can be represented. We further provide a simple mechanism in the decoder to balance the information delivered from the input sentence and from the similarity lexicon. Our approach fully exploits the linguistic similarity captured by the lexicon to improve translation quality. Experimental results demonstrate that our approach achieves significant improvements over a state-of-the-art Transformer baseline and previous similar work.
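
The abstract does not spell out the balancing mechanism, but a common realization is a learned context gate. The following PyTorch sketch (names and dimensions are illustrative assumptions) mixes the attention context from the sentence encoder with that from the similarity lexicon encoder inside a decoder layer.

```python
import torch
import torch.nn as nn

class BalancedContextGate(nn.Module):
    """Gate that mixes sentence-encoder and lexicon-encoder contexts.

    A minimal sketch of the balancing mechanism the abstract describes;
    not the authors' implementation.
    """
    def __init__(self, d_model: int):
        super().__init__()
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, sent_ctx: torch.Tensor, lex_ctx: torch.Tensor) -> torch.Tensor:
        # sent_ctx, lex_ctx: (batch, tgt_len, d_model) attention outputs
        # over the sentence encoder and the similarity-lexicon encoder.
        g = torch.sigmoid(self.gate(torch.cat([sent_ctx, lex_ctx], dim=-1)))
        return g * sent_ctx + (1.0 - g) * lex_ctx  # learned per-dimension mix
```

In such a design each decoder layer would attend over both encoders' outputs and pass the two resulting contexts through the gate before the feed-forward sublayer.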


2021, Vol. 13 (23), p. 4917
Author(s): Weichao Wu, Zhong Xie, Yongyang Xu, Ziyin Zeng, Jie Wan

Recently, unstructured 3D point clouds have been widely used in remote sensing applications. However, incomplete point clouds inevitably arise, primarily because of viewing-angle and occlusion limitations, so point cloud completion is an urgent problem in point cloud data applications. Most existing deep learning methods first generate a rough framework from the global characteristics of the incomplete point cloud and then refine it into a complete point cloud. However, such completions are undesirably biased toward the average of existing objects, so the results lack local detail. We therefore propose a multi-view-based shape-preserving point completion network with an encoder-decoder architecture, termed the point projection network (PP-Net). PP-Net completes and optimizes the defective point cloud in a projection-to-shape manner in two stages. First, a new feature point extraction method is applied to projections of the point cloud to extract feature points in multiple directions. Second, more realistic complete point clouds with finer profiles are produced by encoding and decoding the feature points from the first stage. Meanwhile, projection losses in multiple directions are combined with an adversarial loss to optimize the model parameters. Qualitative and quantitative experiments on the ShapeNet dataset indicate that our method achieves good results among learning-based point cloud completion methods in terms of chamfer distance (CD) error. Furthermore, PP-Net is robust to the deletion of multiple parts and to different levels of incompleteness.
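
As a rough illustration of projection-based supervision, the sketch below projects point clouds onto axis-aligned planes and scores them with the chamfer distance named in the abstract. PP-Net's actual projection directions and loss weighting are not specified here, so everything below is an assumption.

```python
import torch

def project_points(points: torch.Tensor, drop_axis: int) -> torch.Tensor:
    """Orthographic projection of an (N, 3) cloud by dropping one coordinate.

    Returns the (N, 2) projection onto the plane orthogonal to drop_axis
    (0=x, 1=y, 2=z). A stand-in for PP-Net's multi-direction projection.
    """
    keep = [a for a in range(3) if a != drop_axis]
    return points[:, keep]

def chamfer_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Symmetric chamfer distance between point sets (N, d) and (M, d),
    the evaluation metric named in the abstract."""
    d = torch.cdist(a, b)  # (N, M) pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

# Projection loss over three directions between predicted and target clouds.
pred, target = torch.rand(1024, 3), torch.rand(2048, 3)
proj_loss = sum(chamfer_distance(project_points(pred, ax),
                                 project_points(target, ax))
                for ax in range(3))
```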


2021, Vol. 11 (23), p. 11111
Author(s): Yakun Wang, Yajun Du, Jinrong Hu, Xianyong Li, Xiaoliang Chen

Predicting the future emotions of social media users has been attracting increasing attention from academics. Previous studies have focused on the characteristics of an individual's own emotion changes, while the role of the individual's neighbors has not yet been thoroughly researched. To fill this gap, a surrounding-aware individual emotion prediction model (SAEP) based on a deep encoder-decoder architecture is proposed to predict individuals' future emotions. In particular, two memory-based attention networks are constructed: a time-evolving attention network and a surrounding attention network, which extract the features of the emotional changes of users and of their neighbors, respectively. These features are then incorporated into the emotion prediction task. In addition, a novel LSTM variant is introduced as the encoder of the proposed model, which can effectively extract complex patterns of users' emotional changes from irregular time series. Extensive experimental results show that the proposed approach outperforms five alternative methods, improving micro-F1 by approximately 4.21-14.84% on a dataset built from Twitter and by 7.30-13.41% on a dataset built from Microblog. Further analyses validate the effectiveness of the proposed time-evolving and surrounding contexts, as well as the factors that may affect the prediction results.
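
The abstract leaves the LSTM variant unspecified; one common way to handle irregularly spaced events is a time-decayed memory, shown below purely as a stand-in (PyTorch, illustrative names), not as the SAEP encoder itself.

```python
import torch
import torch.nn as nn

class TimeAwareLSTMCell(nn.Module):
    """LSTM cell whose memory decays with the elapsed time between events.

    A hypothetical sketch of handling irregular time series; the paper's
    actual LSTM variant may differ.
    """
    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.cell = nn.LSTMCell(input_size, hidden_size)
        self.decay = nn.Linear(1, hidden_size)

    def forward(self, x, dt, state):
        # x: (B, input_size) event features; dt: (B, 1) time since last event.
        h, c = state
        gamma = torch.exp(-torch.relu(self.decay(dt)))  # decay factor in (0, 1]
        return self.cell(x, (h, c * gamma))             # fade stale memory
```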


2021, pp. 369-376
Author(s): Utkarsh Maheshwari, Piyush Goel, R Annie Uthra, Vinay Vasanth Patage, Sourabh Tiwari, ...

Sensors, 2021, Vol. 21 (20), p. 6839
Author(s): Aisha Al-Mohannadi, Somaya Al-Maadeed, Omar Elharrouss, Kishor Kumar Sadasivuni

Cardiovascular diseases (CVDs) account for a large share of deaths worldwide. Common carotid artery (CCA) segmentation and intima-media thickness (IMT) measurement are therefore widely used for the early diagnosis of CVDs through analysis of IMT features. Computer vision algorithms are not yet widely applied to CCA images for this type of diagnosis because of the complexity of the task and the lack of suitable datasets. The advancement of deep learning techniques has made accurate early diagnosis from images possible. In this paper, a deep-learning-based approach is proposed to perform semantic segmentation of the intima-media complex (IMC) and to calculate the carotid IMT (cIMT). To overcome the lack of large-scale datasets, an encoder-decoder-based model using multi-image inputs is proposed, which helps the model learn well from different features. The obtained results were evaluated using several image segmentation metrics, which demonstrate the effectiveness of the proposed architecture. In addition, the cIMT is computed, and the experiments show that the proposed model is robust and fully automated compared with state-of-the-art work.
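
A minimal sketch of the multi-image-input encoder-decoder idea: several co-registered CCA images are stacked along the channel axis and segmented by a small encoder-decoder. The depth, channel counts, and number of inputs below are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiInputSegNet(nn.Module):
    """Tiny encoder-decoder that fuses several input images of the same
    CCA region by stacking them along the channel axis."""
    def __init__(self, num_inputs: int = 3, num_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(num_inputs, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, num_classes, 2, stride=2),
        )

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # images: (B, num_inputs, H, W) -> per-pixel class logits (B, C, H, W)
        return self.decoder(self.encoder(images))

# Usage: three grayscale views/frames of the same artery stacked as channels.
x = torch.randn(1, 3, 256, 256)
logits = MultiInputSegNet()(x)
```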

