Multi-level feature fusion for fruit bearing branch keypoint detection

2021 ◽ Vol 191 ◽ pp. 106479
Author(s): Qixin Sun, Xiujuan Chai, Zhikang Zeng, Guomin Zhou, Tan Sun
2019 ◽ Vol 55 (13) ◽ pp. 742-745
Author(s): Kang Yang, Huihui Song, Kaihua Zhang, Jiaqing Fan

Author(s): Ying-Xiang Hu, Rui-Sheng Jia, Yong-Chao Li, Qi Zhang, Hong-Mei Sun

Author(s): Arjun Benagatte Channegowda, H N Prakash

Providing security in biometrics is a major challenge, and it is an active area of research. Security can be tightened by using more complex systems, such as recognition based on more than one biometric trait. In this paper, multimodal biometric models are developed to improve the recognition rate of a person. This work combines physiological and behavioral biometric characteristics: fingerprint and signature traits are used to build a multimodal recognition system. Histogram of oriented gradients (HOG) features are extracted from both traits, and feature fusion is applied at two levels. Fingerprint and signature features are fused using concatenation, sum, max, min, and product rules at multi-level stages, and the fused features are used to train a deep neural network model. In the proposed work, multi-level feature fusion for multimodal biometrics with a deep learning classifier is evaluated, and results are analyzed while varying the number of hidden neurons and hidden layers. Experiments were carried out on the SDUMLA-HMT fingerprint dataset (Machine Learning and Data Mining Lab, Shandong University) and the MCYT signature dataset (Biometric Recognition Group), and encouraging results were obtained.
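The fusion rules named in the abstract (concatenation, sum, max, min, product) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the two HOG feature vectors have equal length for the element-wise rules, and the toy inputs stand in for real fingerprint and signature descriptors.

```python
def fuse_features(f1, f2, rule="concat"):
    """Fuse two feature vectors (plain Python lists) with one of the
    rules from the abstract. Element-wise rules assume equal length."""
    if rule == "concat":
        return list(f1) + list(f2)          # lengths add up
    ops = {
        "sum": lambda a, b: a + b,
        "max": max,
        "min": min,
        "product": lambda a, b: a * b,
    }
    if rule not in ops:
        raise ValueError(f"unknown fusion rule: {rule}")
    return [ops[rule](a, b) for a, b in zip(f1, f2)]


# toy vectors standing in for fingerprint and signature HOG features
fp = [0.2, 0.8, 0.5]
sig = [0.6, 0.1, 0.5]
print(fuse_features(fp, sig, "max"))     # element-wise maximum
print(len(fuse_features(fp, sig, "concat")))  # concatenation doubles the length
```

Each fused vector would then be fed to the deep network; in a multi-level scheme, different rules can be applied at different stages before classification.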


2021 ◽ Vol 11
Author(s): Haimei Li, Bing Liu, Yongtao Zhang, Chao Fu, Xiaowei Han, ...

Automatic segmentation of gastric tumors not only provides image-guided clinical diagnosis but also helps radiologists read images and improve diagnostic accuracy. However, the inhomogeneous intensity distribution of gastric tumors in CT scans, their ambiguous or missing boundaries, and their highly variable shapes make an automatic solution quite challenging to develop. This study designs a novel 3D improved feature pyramid network (3D IFPN) to automatically segment gastric tumors in computed tomography (CT) images. To meet the challenges of this extremely difficult task, the proposed 3D IFPN makes full use of the complementary information in the low and high layers of deep convolutional neural networks and is equipped with three types of feature enhancement modules: a 3D adaptive spatial feature fusion (ASFF) module, a single-level feature refinement (SLFR) module, and a multi-level feature refinement (MLFR) module. The 3D ASFF module adaptively suppresses feature inconsistency across levels and hence obtains multi-level features with high feature invariance. The SLFR module then combines the adaptive features with the previous multi-level features at each level via skip connections and an attention mechanism to generate multi-level refined features. The MLFR module adaptively recalibrates the channel-wise and spatial-wise responses through an attention operation, which improves the prediction capability of the network. Furthermore, a stage-wise deep supervision (SDS) mechanism and a hybrid loss function are embedded to enhance the feature learning ability of the network. A dataset of CT volumes collected from three Chinese medical centers was used to evaluate the segmentation performance of the proposed 3D IFPN model. Experimental results indicate that our method outperforms state-of-the-art segmentation networks on gastric tumor segmentation. Moreover, to explore generalization to other segmentation tasks, we also extend the proposed network to liver tumor segmentation on CT images from the MICCAI 2017 Liver Tumor Segmentation Challenge.
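The core idea of adaptive spatial feature fusion, as described in the abstract, is to learn per-position weights that blend feature maps from different pyramid levels so that their inconsistencies are suppressed. The sketch below is a minimal NumPy illustration under simplifying assumptions, not the authors' 3D implementation: the feature maps are assumed to be already resampled to a common shape, and the per-position logits are assumed to come from a learned upstream layer.

```python
import numpy as np

def softmax(x, axis=0):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def asff(features, logits):
    """Adaptive spatial feature fusion sketch.

    features : list of N feature maps, each of shape (H, W),
               already resampled to a common resolution.
    logits   : array of shape (N, H, W) of learned per-position scores.

    Softmax over the level axis yields weights that sum to 1 at every
    spatial position; the fused map is the weighted sum of the inputs.
    """
    w = softmax(np.asarray(logits, dtype=float), axis=0)       # (N, H, W)
    stacked = np.stack([np.asarray(f, dtype=float) for f in features])
    return (w * stacked).sum(axis=0)                            # (H, W)


# toy 2x2 maps standing in for two pyramid levels
low_level = np.ones((2, 2)) * 1.0
high_level = np.ones((2, 2)) * 3.0
# equal logits give weights 0.5/0.5 at every position
fused = asff([low_level, high_level], np.zeros((2, 2, 2)))
print(fused)  # 2.0 everywhere: the even blend of 1.0 and 3.0
```

Because the weights are computed per spatial position, the fusion can favor the fine-grained level near tumor boundaries and the coarse level in homogeneous regions, which is the intuition behind suppressing cross-level inconsistency.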

