Convolutional-neural-network-based feature extraction for liver segmentation from CT images

Author(s):  
Mubashir Ahmad ◽  
Yuan Ding ◽  
Syed Furqan Qadri ◽  
Jian Yang
2021 ◽  
pp. 115406
Author(s):  
Amirhossein Aghamohammadi ◽  
Ramin Ranjbarzadeh ◽  
Fatemeh Naiemi ◽  
Marzieh Mogharrebi ◽  
Shadi Dorosti ◽  
...  

2021 ◽  
pp. 1-10
Author(s):  
Chien-Cheng Lee ◽  
Zhongjian Gao ◽  
Xiu-Chi Huang

This paper proposes a Wi-Fi-based indoor human detection system using a deep convolutional neural network. The system detects different human states in various situations, including different environments and propagation paths. The main improvement of the proposed system is that no overhead cameras and no mounted sensors are required. The system captures useful amplitude information from the channel state information and converts it into an image-like two-dimensional matrix. Next, the two-dimensional matrix is used as the input to a deep convolutional neural network (CNN) to distinguish human states. In this work, a deep residual network (ResNet) architecture is used to perform human state classification with hierarchical topological feature extraction. Several combinations of datasets for different environments and propagation paths are used in this study. ResNet’s powerful inference simplifies feature extraction and improves the accuracy of human state classification. The experimental results show that the fine-tuned ResNet-18 model performs well in indoor human detection, covering three states: no person present, a person remaining still, and a person moving. Compared with traditional machine learning using handcrafted features, this method is simple and effective.
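A minimal, illustrative sketch of the described pipeline (not the authors' code) is given below: an assumed (time x subcarrier) CSI amplitude matrix is min-max normalized into an image-like tensor and classified with a fine-tuned ResNet-18 in PyTorch. The input size, class names, and preprocessing details are assumptions.

import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # assumed classes: no person present, person still, person moving

def csi_to_image(csi_amplitude: torch.Tensor) -> torch.Tensor:
    # Min-max normalize a (time x subcarrier) CSI amplitude matrix and
    # replicate it to three channels so it matches ResNet's input layout.
    x = csi_amplitude.float()
    x = (x - x.min()) / (x.max() - x.min() + 1e-8)
    return x.unsqueeze(0).repeat(3, 1, 1)  # (3, H, W)

def build_model() -> nn.Module:
    # ImageNet-pretrained ResNet-18 with its final layer replaced for the
    # three human-state classes (the usual fine-tuning setup).
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
    return model

model = build_model()
dummy_csi = torch.rand(224, 224)              # placeholder CSI amplitude matrix
batch = csi_to_image(dummy_csi).unsqueeze(0)  # shape (1, 3, 224, 224)
logits = model(batch)                         # one score per human state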


2021 ◽  
Vol 68 ◽  
pp. 102652
Author(s):  
Vahid Asadpour ◽  
Rex A. Parker ◽  
Patrick R. Mayock ◽  
Samuel E. Sampson ◽  
Wansu Chen ◽  
...  

2021 ◽  
Vol 21 (01) ◽  
pp. 2150005
Author(s):  
Arun T. Nair ◽  
K. Muthuvel

Nowadays, retinal image analysis remains one of the most challenging areas of study. Numerous retinal diseases can be recognized by analyzing the variations taking place in the retina. However, the main drawback of existing studies is their limited recognition accuracy. The proposed framework includes four phases, namely (i) Blood Vessel Segmentation, (ii) Feature Extraction, (iii) Optimal Feature Selection and (iv) Classification. Initially, the input fundus image is subjected to blood vessel segmentation, from which two binary thresholded images (one from a High-Pass Filter (HPF) and the other from top-hat reconstruction) are acquired. These two images are compared: the regions common to both are taken as the major vessels, and the leftover regions are fused to form a vessel sub-image. The vessel sub-image is classified with a Gaussian Mixture Model (GMM) classifier, and the result is combined with the major vessels to form the segmented blood vessels. The segmented images are then subjected to feature extraction, where features such as the proposed Local Binary Pattern (LBP), Gray-Level Co-Occurrence Matrix (GLCM) and Gray-Level Run-Length Matrix (GLRM) features are extracted. As the curse of dimensionality is the greatest issue, it is important to select the appropriate features from the extracted set for classification. In this paper, a new improved optimization algorithm, Moth Flame with New Distance Formulation (MF-NDF), is introduced for selecting the optimal features. Finally, the selected optimal features are fed to a Deep Convolutional Neural Network (DCNN) model for classification. Further, to enable precise diagnosis, the weights of the DCNN are optimally tuned by the same optimization algorithm. The performance of the proposed algorithm is compared against conventional algorithms in terms of positive and negative measures.
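As a rough illustration only (not the authors' implementation), the sketch below reproduces the dual-route vessel step described above (high-pass filtering versus top-hat, keeping the regions common to both) and standard LBP/GLCM feature extraction with scikit-image. Filter sizes, thresholds, and the omission of the GMM sub-image step and GLRM features are simplified assumptions.

import numpy as np
from skimage import filters, morphology
from skimage.util import img_as_float
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops

def segment_major_vessels(green_channel: np.ndarray) -> np.ndarray:
    # Major vessels = pixels flagged by BOTH routes: a high-pass filter
    # (vessels are darker than the background, so blur minus image makes
    # them positive) and a black top-hat that highlights thin dark structures.
    g = img_as_float(green_channel)
    hpf = filters.gaussian(g, sigma=5) - g
    tophat = morphology.black_tophat(g, morphology.disk(8))
    hpf_bin = hpf > filters.threshold_otsu(hpf)
    top_bin = tophat > filters.threshold_otsu(tophat)
    return hpf_bin & top_bin

def extract_features(vessel_mask: np.ndarray) -> np.ndarray:
    # Concatenate a uniform-LBP histogram with a few GLCM statistics
    # computed on the binary segmentation result.
    img = vessel_mask.astype(np.uint8) * 255
    lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    glcm_feats = [graycoprops(glcm, p)[0, 0]
                  for p in ("contrast", "homogeneity", "energy")]
    return np.concatenate([lbp_hist, np.array(glcm_feats)])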

