MLVCNN: Multi-Loop-View Convolutional Neural Network for 3D Shape Retrieval

Author(s):  
Jianwen Jiang ◽  
Di Bao ◽  
Ziqiang Chen ◽  
Xibin Zhao ◽  
Yue Gao

3D shape retrieval has attracted much attention and become a hot topic in the computer vision field recently. With the development of deep learning, 3D shape retrieval has also made great progress, and many view-based methods have been introduced in recent years. However, how to represent 3D shapes better is still a challenging problem. At the same time, the intrinsic hierarchical associations among views have not been well utilized. To tackle these problems, in this paper we propose a multi-loop-view convolutional neural network (MLVCNN) framework for 3D shape retrieval. In this method, multiple groups of views are first extracted from different loop directions. Given these multiple loop views, the proposed MLVCNN framework introduces a hierarchical view-loop-shape architecture, i.e., the view level, the loop level, and the shape level, to conduct 3D shape representation at different scales. At the view level, a convolutional neural network is first trained to extract view features. Then, the proposed Loop Normalization and an LSTM are applied to each loop of views to generate the loop-level features, which consider the intrinsic associations among the different views in the same loop. Finally, all the loop-level descriptors are combined into a shape-level descriptor for 3D shape representation, which is used for 3D shape retrieval. Our proposed method has been evaluated on the public 3D shape benchmark ModelNet40. Experiments and comparisons with the state-of-the-art methods show that the proposed MLVCNN method achieves significant performance improvement on 3D shape retrieval tasks, outperforming the state-of-the-art methods by 4.84% mAP. We have also evaluated the proposed method on the 3D shape classification task, where MLVCNN also achieves superior performance compared with recent methods.
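The view-loop-shape aggregation described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: mean pooling stands in for the LSTM, and a simple per-loop standardization stands in for the paper's Loop Normalization (both are assumptions).

```python
import numpy as np

def loop_normalize(view_feats):
    # Hypothetical stand-in for Loop Normalization: zero-mean,
    # unit-variance normalization across the views of one loop.
    mu = view_feats.mean(axis=0, keepdims=True)
    sigma = view_feats.std(axis=0, keepdims=True) + 1e-8
    return (view_feats - mu) / sigma

def shape_descriptor(loops):
    # loops: list of (num_views, feat_dim) arrays, one per loop direction.
    # Each loop is normalized and pooled into a loop-level descriptor;
    # all loop descriptors are concatenated into the shape descriptor.
    loop_descs = [loop_normalize(v).mean(axis=0) for v in loops]
    return np.concatenate(loop_descs)

rng = np.random.default_rng(0)
loops = [rng.normal(size=(12, 64)) for _ in range(3)]  # 3 loops x 12 views
desc = shape_descriptor(loops)
print(desc.shape)  # (192,)
```

The key point of the hierarchy survives even in this toy version: views are only ever aggregated within their own loop before the loop descriptors are fused.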

2018 ◽  
Vol 4 (9) ◽  
pp. 107 ◽  
Author(s):  
Mohib Ullah ◽  
Ahmed Mohammed ◽  
Faouzi Alaya Cheikh

Articulation modeling, feature extraction, and classification are the important components of pedestrian segmentation. Usually, these components are modeled independently from each other and then combined in a sequential way. However, this approach is prone to poor segmentation if any individual component is weakly designed. To cope with this problem, we propose a spatio-temporal convolutional neural network named PedNet which exploits temporal information for spatial segmentation. The backbone of PedNet consists of an encoder–decoder network for downsampling and upsampling the feature maps, respectively. The input to the network is a set of three frames and the output is a binary mask of the segmented regions in the middle frame. Unlike classical deep models where the convolution layers are followed by a fully connected layer for classification, PedNet is a Fully Convolutional Network (FCN). It is trained end-to-end, and segmentation is achieved without the need for any pre- or post-processing. The main characteristic of PedNet is its unique design: it performs segmentation on a frame-by-frame basis but uses the temporal information from the previous and the next frame to segment the pedestrians in the current frame. Moreover, to combine the low-level features with the high-level semantic information learned by the deeper layers, we use long skip connections from the encoder to the decoder network and concatenate the output of the low-level layers with the higher-level layers. This approach helps to obtain segmentation maps with sharp boundaries. To show the potential benefits of temporal information, we also visualized different layers of the network. The visualization showed that the network learned different information from the consecutive frames and then combined this information optimally to segment the middle frame.
We evaluated our approach on eight challenging datasets where humans are involved in different activities with severe articulation (football, road crossing, surveillance). On the widely used CamVid segmentation benchmark, the approach is compared against seven state-of-the-art methods. Performance is reported in terms of precision/recall, F1, F2, and mIoU. The qualitative and quantitative results show that PedNet achieves promising results against state-of-the-art methods, with substantial improvement in terms of all the performance metrics.
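The two design ingredients described above, the three-frame input and the long-skip concatenation, can be sketched with plain arrays. The function names are hypothetical and the real PedNet layers are convolutional; this only shows the tensor plumbing.

```python
import numpy as np

def stack_frames(prev_f, cur_f, next_f):
    # PedNet-style input: three consecutive frames stacked along the channel
    # axis, so the network sees temporal context for the middle frame.
    return np.concatenate([prev_f, cur_f, next_f], axis=-1)

def long_skip_concat(encoder_feat, decoder_feat):
    # Long-skip connection: low-level encoder features are concatenated with
    # upsampled decoder features to sharpen segmentation boundaries.
    return np.concatenate([encoder_feat, decoder_feat], axis=-1)

h, w = 64, 64
frames = [np.zeros((h, w, 3)) for _ in range(3)]
x = stack_frames(*frames)
print(x.shape)  # (64, 64, 9)

enc, dec = np.zeros((h, w, 16)), np.zeros((h, w, 32))
print(long_skip_concat(enc, dec).shape)  # (64, 64, 48)
```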


2021 ◽  
Author(s):  
Muhammad Shahroz Nadeem ◽  
Sibt Hussain ◽  
Fatih Kurugollu

This paper addresses the textual deblurring problem. We propose a new loss function and provide an empirical evaluation of the design choices, on the basis of which a memory-friendly CNN model is proposed that performs better than the state-of-the-art CNN method.


Author(s):  
Yutong Feng ◽  
Yifan Feng ◽  
Haoxuan You ◽  
Xibin Zhao ◽  
Yue Gao

Mesh is an important and powerful type of data for 3D shapes and is widely studied in the fields of computer vision and computer graphics. Regarding the task of 3D shape representation, there have been extensive research efforts concentrating on how to represent 3D shapes well using volumetric grids, multi-view images and point clouds. However, there has been little effort on using mesh data in recent years, due to the complexity and irregularity of mesh data. In this paper, we propose a mesh neural network, named MeshNet, to learn 3D shape representation from mesh data. In this method, face-unit and feature splitting are introduced, and a general architecture with effective blocks is proposed. In this way, MeshNet is able to handle the complexity and irregularity of mesh data and represent 3D shapes well. We have applied the proposed MeshNet method to the applications of 3D shape classification and retrieval. Experimental results and comparisons with the state-of-the-art methods demonstrate that the proposed MeshNet can achieve satisfactory 3D shape classification and retrieval performance, which indicates the effectiveness of the proposed method for 3D shape representation.
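A face-unit style feature split can be illustrated for a single triangular face. The specific features below (center, corner offsets, unit normal) are an assumption based on the abstract's description, not MeshNet's precise definition.

```python
import numpy as np

def face_unit_features(verts):
    # verts: (3, 3) array holding the three vertices of one triangular face.
    # Split the face description into a spatial feature (center) and
    # structural features (corner offsets, unit normal).
    center = verts.mean(axis=0)
    corners = (verts - center).ravel()  # structural: offsets from the center
    normal = np.cross(verts[1] - verts[0], verts[2] - verts[0])
    normal = normal / (np.linalg.norm(normal) + 1e-12)
    return {"center": center, "corner": corners, "normal": normal}

face = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
f = face_unit_features(face)
print(f["normal"])  # [0. 0. 1.]
```

Treating each face as such a self-contained unit is what lets a network consume an irregular mesh without a fixed grid or ordering.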


2020 ◽  
Vol 10 (2) ◽  
pp. 84 ◽  
Author(s):  
Atif Mehmood ◽  
Muazzam Maqsood ◽  
Muzaffar Bashir ◽  
Yang Shuyuan

Alzheimer’s disease (AD) may cause permanent damage to memory cells, which results in dementia. The diagnosis of Alzheimer’s disease at an early stage is a problematic task for researchers. For this, machine learning and deep convolutional neural network (CNN) based approaches are readily available to solve various problems related to brain image data analysis. In clinical research, magnetic resonance imaging (MRI) is used to diagnose AD. For accurate classification of dementia stages, we need highly discriminative features obtained from MRI images. Recently, advanced deep CNN-based models have successfully proved their accuracy. However, due to the small number of image samples available in the datasets, over-fitting hinders the performance of deep learning approaches. In this research, we developed a Siamese convolutional neural network (SCNN) model inspired by VGG-16 (also called OxfordNet) to classify dementia stages. In our approach, we extend the insufficient and imbalanced data by using augmentation approaches. Experiments are performed on the publicly available Open Access Series of Imaging Studies (OASIS) dataset; using the proposed approach, a test accuracy of 99.05% is achieved for the classification of dementia stages. We compared our model with the state-of-the-art models and found that the proposed model outperformed them in terms of performance, efficiency, and accuracy.
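The augmentation step for extending an under-represented class might look like the following sketch. The specific transformations (horizontal flip, mild intensity jitter) are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def augment(images, target_count, rng):
    # Grow a minority class by sampling existing images and applying
    # light transformations until the class reaches target_count.
    out = list(images)
    while len(out) < target_count:
        img = images[rng.integers(len(images))]
        img = img[:, ::-1]                           # horizontal flip
        img = img + rng.normal(0, 0.01, img.shape)   # mild intensity jitter
        out.append(img)
    return out

rng = np.random.default_rng(0)
minority = [np.zeros((32, 32)) for _ in range(5)]
balanced = augment(minority, 20, rng)
print(len(balanced))  # 20
```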


Author(s):  
AprilPyone Maungmaung ◽  
Hitoshi Kiya

In this paper, we propose a novel method for protecting convolutional neural network models with a secret key set so that unauthorized users without the correct key set cannot access trained models. The method enables us to protect not only the copyright but also the functionality of a model from unauthorized access, without any noticeable overhead. We introduce three block-wise transformations with a secret key set to generate learnable transformed images: pixel shuffling, negative/positive transformation, and format-preserving Feistel-based encryption. Protected models are trained using transformed images. The results of experiments with the CIFAR and ImageNet datasets show that the performance of a protected model was close to that of non-protected models when the key set was correct, while the accuracy dropped severely when an incorrect key set was given. The protected model was also demonstrated to be robust against various attacks. Compared with the state-of-the-art model protection with passports, the proposed method does not add any layers to the network, and therefore there is no overhead during the training and inference processes.
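Two of the three keyed block-wise transformations, pixel shuffling and the negative/positive transform, can be sketched as follows (the format-preserving Feistel-based encryption is omitted). Modeling the secret key as a seeded RNG is an assumption of this sketch.

```python
import numpy as np

def transform_block(block, perm, flip_mask):
    # Keyed block-wise transformation: pixels are shuffled by a secret
    # permutation, then selected pixels get the negative/positive
    # transform p -> 255 - p.
    flat = block.ravel()[perm]
    flat = np.where(flip_mask, 255 - flat, flat)
    return flat.reshape(block.shape)

def inverse_block(block, perm, flip_mask):
    # Holders of the key can invert the transformation exactly.
    flat = block.ravel()
    flat = np.where(flip_mask, 255 - flat, flat)
    inv = np.empty_like(flat)
    inv[perm] = flat
    return inv.reshape(block.shape)

rng = np.random.default_rng(42)                  # the "secret key"
block = rng.integers(0, 256, (4, 4))
perm = rng.permutation(block.size)
flip = rng.integers(0, 2, block.size).astype(bool)

protected = transform_block(block, perm, flip)
restored = inverse_block(protected, perm, flip)
print(np.array_equal(restored, block))  # True
```

Because the transformation is invertible only with the key, a model trained on transformed images behaves normally for key holders and degrades for anyone else.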


2020 ◽  
Author(s):  
Fábia Isabella Pires Enembreck ◽  
Erikson Freitas de Morais ◽  
Marcella Scoczynski Ribeiro Martins

Abstract: The person re-identification problem addresses the task of identifying whether a person being watched by security cameras in surveillance environments has ever been in the scene. This problem is considered challenging, since the images obtained by cameras are subject to many variations, such as lighting, perspective and occlusions. This work aims to develop two robust approaches based on deep learning techniques for person re-identification, considering these variations. The first approach uses a Siamese neural network composed of two identical subnets. This model receives two input images that may or may not be of the same person. The second approach consists of a triplet neural network, with three identical subnets, which receives a reference image of a certain person, a second image of the same person and another image of a different person. Both approaches have identical subnets, composed of a convolutional neural network which extracts general characteristics from each image and an autoencoder model, responsible for addressing the high variations that input images may undergo. To compare the developed networks, three datasets were used, and the accuracy and CMC curve metrics were applied for the analysis. The experiments showed an improvement in the results with the use of the autoencoder in the subnets. Moreover, the triplet neural network presented promising results in comparison with the Siamese neural network and state-of-the-art methods.
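The objective underlying the second approach can be sketched as a standard triplet loss on embedding distances: the anchor should lie closer to the positive (same person) than to the negative (different person) by a margin. The margin value and Euclidean distance here are illustrative assumptions.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Penalize triplets where the anchor-positive distance is not
    # smaller than the anchor-negative distance by at least `margin`.
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([1.0, 0.0])   # anchor embedding
p = np.array([0.9, 0.1])   # same person, nearby
n = np.array([-1.0, 0.0])  # different person, far away
print(triplet_loss(a, p, n))  # 0.0 (constraint already satisfied)
```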


Author(s):  
K. Rahmani ◽  
H. Mayer

In this paper we present a pipeline for high-quality semantic segmentation of building facades using a Structured Random Forest (SRF), a Region Proposal Network (RPN) based on a Convolutional Neural Network (CNN), as well as rectangular fitting optimization. Our main contribution is that we employ features created by the RPN as channels in the SRF. We empirically show that this is very effective, especially for doors and windows. Our pipeline is evaluated on two datasets where we outperform current state-of-the-art methods. Additionally, we quantify the contribution of the RPN and the rectangular fitting optimization to the accuracy of the result.


Author(s):  
Zhaoqun Li ◽  
Cheng Xu ◽  
Biao Leng

How to obtain a desirable representation of a 3D shape, which is discriminative across categories and polymerized within classes, is a significant challenge in 3D shape retrieval. Most existing 3D shape retrieval methods focus on capturing strong discriminative shape representation with softmax loss for the classification task, while shape feature learning with a metric loss is neglected for 3D shape retrieval. In this paper, we address this problem based on the intuition that the cosine distances of shape embeddings should be close within the same class and far apart across categories. Since most 3D shape retrieval tasks use the cosine distance of shape features for measuring shape similarity, we propose a novel metric loss named angular triplet-center loss, which directly optimizes the cosine distances between the features. It inherits the triplet-center loss property of achieving a larger inter-class distance and a smaller intra-class distance simultaneously. Unlike previous metric losses utilized in 3D shape retrieval methods, where Euclidean distance is adopted and the margin design is difficult, the proposed method is more convenient for training feature embeddings and more suitable for 3D shape retrieval. Moreover, an angle margin is adopted to replace the cosine margin in order to provide more explicit discriminative constraints on the embedding space. Extensive experimental results on two popular 3D object retrieval benchmarks, ModelNet40 and ShapeNetCore55, demonstrate the effectiveness of our proposed loss, and our method has achieved state-of-the-art results on various 3D shape datasets.
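An angular triplet-center style loss can be sketched as below. This is a simplified single-sample version in which the negative is the nearest other-class center; the margin value and this negative-selection rule are assumptions, not the paper's exact formulation.

```python
import numpy as np

def angular_triplet_center_loss(feat, centers, label, margin=0.5):
    # The angle between an embedding and its own class center should be
    # smaller, by `margin`, than the angle to the nearest other center.
    cos = centers @ feat / (np.linalg.norm(centers, axis=1)
                            * np.linalg.norm(feat))
    angles = np.arccos(np.clip(cos, -1.0, 1.0))
    pos = angles[label]                      # angle to own class center
    neg = np.min(np.delete(angles, label))   # nearest other-class center
    return max(0.0, pos - neg + margin)

feat = np.array([1.0, 0.1])
centers = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
loss = angular_triplet_center_loss(feat, centers, label=0)
print(loss)  # 0.0: the embedding is well inside its class's angular margin
```

Working directly on angles sidesteps the margin-tuning difficulty of Euclidean metric losses, since angles live in the bounded range [0, π].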


2020 ◽  
pp. 147592172091837 ◽  
Author(s):  
Ruhua Wang ◽  
Chencho ◽  
Senjian An ◽  
Jun Li ◽  
Ling Li ◽  
...  

Convolutional neural networks have been widely employed for structural health monitoring and damage identification. The convolutional neural network is currently considered the state-of-the-art method for structural damage identification due to its capability for efficient and robust feature learning in a hierarchical manner. There is a tendency to develop convolutional neural networks with deeper architectures to gain better performance. However, when the depth of the network increases to a certain level, the performance will degrade due to the vanishing gradient issue. Residual neural networks can avoid the problem of vanishing gradients by utilizing skip connections, which allow information to flow to the next layer through identity mappings. In this article, a deep residual network framework is proposed for structural health monitoring of civil engineering structures. This framework is composed purely of residual blocks, which operate as feature extractors, and a fully connected layer as a regressor. It learns the damage-related features from vibration characteristics such as mode shapes and maps them to the damage index labels, for example, stiffness reductions of structures. To evaluate the efficacy and robustness of the proposed framework, an intensive evaluation is conducted with both numerical and experimental studies. A comparison is conducted between the proposed approach and the state-of-the-art models, including a sparse autoencoder neural network, a shallow convolutional neural network and a convolutional neural network with the same structure but without skip connections. In the numerical studies, a 7-storey steel frame is investigated. Four scenarios considering measurement noise and finite element modelling errors in the data sets are studied.
The proposed framework consistently outperforms the state-of-the-art models in all the scenarios, especially the most challenging scenario, which includes both measurement noise and uncertainties. Experimental studies on a prestressed concrete bridge in the laboratory are also conducted. On this structure, the proposed framework demonstrates damage prediction results consistent with those of the state-of-the-art models.
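The identity-mapping skip connection that the framework relies on can be illustrated with a minimal residual block. Dense layers stand in for the convolutional operators here; this is a sketch of the mechanism, not the proposed network.

```python
import numpy as np

def residual_block(x, w1, w2):
    # Residual block: the skip connection adds the input (identity mapping)
    # to the learned transformation, letting gradients bypass the block.
    h = np.maximum(0.0, x @ w1)   # linear layer + ReLU stands in for conv
    return x + h @ w2             # identity shortcut

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 8))
w1, w2 = rng.normal(size=(8, 8)), np.zeros((8, 8))

# With a zero-initialized second weight the block reduces to the identity
# mapping, which is exactly why deep residual stacks remain trainable.
print(np.allclose(residual_block(x, w1, w2), x))  # True
```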

