Pedestrian Detection: Performance Comparison Using Multiple Convolutional Neural Networks

Author(s):  
Meenu Ajith ◽  
Aswathy Rajendra Kurup

Electronics ◽
2021 ◽  
Vol 11 (1) ◽  
pp. 1
Author(s):  
Xu Chen ◽  
Lei Liu ◽  
Xin Tan

With the progression of technology, pedestrian detection is now widely used in fields such as driving assistance and video surveillance. However, although research on single-modal visible-light pedestrian detection is mature, it still cannot meet the demand for reliable pedestrian detection around the clock. Thus, a multi-spectral pedestrian detection method based on image fusion and convolutional neural networks is proposed in this paper. The infrared intensity distribution and visible appearance features are retained using a total variation model based on local structure transfer, and pedestrian detection is performed on the multi-spectral fusion results with the YOLOv3 target detection network. The detection performance of the proposed method is evaluated and compared with detection methods based on four other pixel-level fusion algorithms and two fusion network architectures. The results show that our method has superior detection performance, detecting pedestrian targets robustly even under harsh illumination conditions and against cluttered backgrounds.
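A minimal Python sketch of this two-stage pipeline, assuming OpenCV's DNN module and standard Darknet YOLOv3 files (the yolov3.cfg and yolov3.weights paths are placeholders); the paper's TV-based local-structure-transfer fusion is replaced here by a simple weighted average as a stand-in, not the authors' method:

    import cv2
    import numpy as np

    def fuse_ir_visible(ir, visible, alpha=0.5):
        # Pixel-level fusion stand-in: weighted average of registered
        # infrared and visible frames (the paper uses a TV-based model).
        if ir.ndim == 2:
            ir = cv2.cvtColor(ir, cv2.COLOR_GRAY2BGR)
        return cv2.addWeighted(ir.astype(np.float32), alpha,
                               visible.astype(np.float32), 1 - alpha,
                               0).astype(np.uint8)

    def detect_pedestrians(fused, cfg="yolov3.cfg", weights="yolov3.weights",
                           conf_thresh=0.5, nms_thresh=0.4):
        # Run a COCO-trained YOLOv3 on the fused image and keep only
        # "person" detections (class index 0 in COCO).
        net = cv2.dnn.readNetFromDarknet(cfg, weights)
        blob = cv2.dnn.blobFromImage(fused, 1 / 255.0, (416, 416),
                                     swapRB=True, crop=False)
        net.setInput(blob)
        outs = net.forward(net.getUnconnectedOutLayersNames())
        h, w = fused.shape[:2]
        boxes, scores = [], []
        for out in outs:
            for det in out:
                cls_scores = det[5:]
                cls = int(np.argmax(cls_scores))
                score = float(cls_scores[cls])
                if cls == 0 and score > conf_thresh:
                    cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                    boxes.append([int(cx - bw / 2), int(cy - bh / 2),
                                  int(bw), int(bh)])
                    scores.append(score)
        keep = cv2.dnn.NMSBoxes(boxes, scores, conf_thresh, nms_thresh)
        return [boxes[i] for i in np.array(keep).flatten()]

Any pixel-level fusion algorithm can be slotted in for fuse_ir_visible, which is how the paper's comparison against four other fusion algorithms would be reproduced.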


2020 ◽  
Vol 14 (10) ◽  
pp. 1319-1327 ◽  
Author(s):  
Pedro Augusto Pinho Ferraz ◽  
Bernardo Augusto Godinho de Oliveira ◽  
Flávia Magalhães Freitas Ferreira ◽  
Carlos Augusto Paiva da Silva Martins

2020 ◽  
Vol 10 (19) ◽  
pp. 6940 ◽  
Author(s):  
Vincenzo Taormina ◽  
Donato Cascio ◽  
Leonardo Abbene ◽  
Giuseppe Raso

The search for antinuclear antibodies (ANA) represents a fundamental step in the diagnosis of autoimmune diseases. The test considered the gold standard for ANA detection is indirect immunofluorescence (IIF), and the best substrate for ANA detection is provided by Human Epithelial type 2 (HEp-2) cells. The first phase of HEp-2 image analysis involves the classification of fluorescence intensity into the positive/negative classes. However, the analysis of IIF images is difficult to perform and depends heavily on the experience of the immunologist. For this reason, the scientific community has shown great interest in finding technological solutions to this problem. Deep learning, and in particular Convolutional Neural Networks (CNNs), has demonstrated its effectiveness in the classification of biomedical images. In this work, the efficacy of CNN fine-tuning applied to the classification of fluorescence intensity in HEp-2 images was investigated. For this purpose, four well-known pre-trained networks were analyzed (AlexNet, SqueezeNet, ResNet18, GoogLeNet). The classification power of the CNNs was investigated under different training modalities: three levels of weight freezing, plus training from scratch. Performance was analyzed, in terms of area under the ROC (Receiver Operating Characteristic) curve (AUC) and accuracy, on a public database. The best result achieved an AUC of 98.6% and an accuracy of 93.9%, demonstrating an excellent ability to discriminate between the positive and negative fluorescence classes. For an effective performance comparison, the fine-tuning mode was compared with configurations in which the CNNs are used as fixed feature extractors, and the best configuration found was compared with other state-of-the-art works.
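A minimal sketch of the fine-tuning setup, assuming PyTorch with a recent torchvision; ResNet18 stands in for the four networks compared, and the set of layer groups frozen below (conv1 through layer2) is one illustrative "freezing level", not the paper's exact configuration:

    import torch
    import torch.nn as nn
    from torchvision import models

    def build_hep2_classifier(frozen_groups=("conv1", "bn1", "layer1", "layer2")):
        # ImageNet-pretrained ResNet18, adapted to the binary
        # positive/negative fluorescence-intensity task.
        model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
        # Freeze the named early blocks; varying this set reproduces the
        # different "levels of freezing" training modalities.
        for name, child in model.named_children():
            if name in frozen_groups:
                for p in child.parameters():
                    p.requires_grad = False
        model.fc = nn.Linear(model.fc.in_features, 2)  # new 2-class head
        return model

    model = build_hep2_classifier()
    # Optimize only the parameters left trainable by the freezing scheme.
    optimizer = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

Training from scratch corresponds to weights=None with an empty frozen_groups, and the feature-extractor comparison corresponds to freezing every group except the final classification head.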


Author(s):  
René Hosch ◽  
Lennard Kroll ◽  
Felix Nensa ◽  
Sven Koitka

Purpose: Detection and validation of the chest X-ray view position with convolutional neural networks to improve meta-information for data cleaning within a hospital data infrastructure. Materials and Methods: We developed convolutional neural networks that automatically detect the anteroposterior (AP) and posteroanterior (PA) view position of a chest radiograph. We trained two different network architectures (a VGG variant and ResNet-34) on data published by the RSNA (26 684 radiographs, class distribution 46 % AP, 54 % PA) and validated them on a self-compiled dataset from the University Hospital Essen (4507 radiographs, class distribution 55 % PA, 45 % AP) labeled by a human reader. For visualization and better understanding of the network predictions, a Grad-CAM was generated for each network decision. The network results were evaluated against the human reader labels in terms of accuracy, area under the curve (AUC), and F1-score. Finally, the model predictions were compared with the DICOM labels. Results: The ensemble models reached accuracies and F1-scores greater than 95 %, and their AUC exceeded 0.99. The Grad-CAMs provide insight into which anatomical structures contributed to a network decision; these are comparable to the structures a radiologist would use. Furthermore, the trained models were able to generalize over mislabeled examples, which was found by comparing the human reader labels to both the predicted labels and the DICOM labels. Conclusion: The results show that certain incorrectly entered meta-information in radiological images can be effectively corrected by deep learning, increasing data quality in clinical application as well as in research.
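A minimal sketch of the meta-information check described here, assuming pydicom and an already-trained PyTorch view-position classifier; the model, the image preprocessing, and the AP/PA class ordering are placeholders:

    import pydicom
    import torch

    def dicom_view_label(path):
        # Read the stored view position (AP/PA) from the DICOM header,
        # keyword ViewPosition, tag (0018,5101).
        ds = pydicom.dcmread(path, stop_before_pixels=True)
        return str(getattr(ds, "ViewPosition", "")).upper()

    @torch.no_grad()
    def predicted_view_label(model, pixel_tensor, classes=("AP", "PA")):
        # Predict AP/PA from the image; pixel_tensor is a preprocessed
        # 1x3xHxW float tensor (preprocessing omitted in this sketch).
        model.eval()
        logits = model(pixel_tensor)
        return classes[int(logits.argmax(dim=1))]

    def flag_mismatch(model, path, pixel_tensor):
        # Flag studies whose DICOM meta-information disagrees with the
        # network prediction, as candidates for data cleaning.
        header = dicom_view_label(path)
        predicted = predicted_view_label(model, pixel_tensor)
        return {"file": path, "header": header,
                "predicted": predicted, "mismatch": header != predicted}

Flagged mismatches would then be reviewed (or, given the reported >95 % accuracy, corrected automatically) rather than trusting the header label blindly.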


2019 ◽  
Vol 1 (12) ◽  
Author(s):  
Md. Moklesur Rahman ◽  
Md. Shafiqul Islam ◽  
Roberto Sassi ◽  
Md. Aktaruzzaman

2019 ◽  
Vol 34 (3) ◽  
pp. 207-215 ◽  
Author(s):  
Cheol-Hee Lee ◽  
Yoon-Ju Jeong ◽  
Taeho Kim ◽  
Jae-Hyeon Park ◽  
Seongbin Bak ◽  
...  

2020 ◽  
Vol 32 (10) ◽  
pp. 3157
Author(s):  
Yung-Yao Chen ◽  
Guan-Yi Li ◽  
Sin-Ye Jhong ◽  
Ping-Han Chen ◽  
Chiung-Cheng Tsai ◽  
...  

IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 23027-23037 ◽  
Author(s):  
Inyong Yun ◽  
Cheolkon Jung ◽  
Xinran Wang ◽  
Alfred O. Hero ◽  
Joong Kyu Kim
