late fusion
Recently Published Documents


TOTAL DOCUMENTS

171
(FIVE YEARS 44)

H-INDEX

12
(FIVE YEARS 1)

2021 ◽  
Vol 7 (12) ◽  
pp. 273
Author(s):  
Keisuke Maeda ◽  
Naoki Ogawa ◽  
Takahiro Ogawa ◽  
Miki Haseyama

This paper presents reliable estimation of deterioration levels via late fusion of multi-view distress images for practical infrastructure inspection. The proposed method simultaneously solves two problems that arise in practical inspection. First, since infrastructure maintenance requires a high level of safety and reliability, this paper proposes a neural network that generates an attention map from distress images and the text data acquired during inspection, so that deterioration levels can be estimated with high interpretability. Second, because multiple images of a single distress are taken from different viewpoints during an actual inspection, the final result must be estimated from all of these images. The proposed method therefore integrates the estimation results obtained from the multi-view images via late fusion and derives an appropriate result that takes all the images into account. To the best of our knowledge, no previous method solves these two problems simultaneously, and this is the main contribution of this paper. Experiments using data acquired during actual inspections confirm the effectiveness of the proposed method.
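The late-fusion step described above can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: it assumes each view's network emits a softmax probability vector over deterioration levels and that the views are fused by equal-weight averaging.

```python
# Hypothetical late fusion over multi-view distress images: each view
# contributes a probability vector over deterioration levels, and the
# vectors are averaged before taking the final decision.

def late_fuse(view_probs):
    """Average per-class probabilities across views, then renormalize."""
    n_views = len(view_probs)
    n_classes = len(view_probs[0])
    fused = [sum(p[c] for p in view_probs) / n_views for c in range(n_classes)]
    total = sum(fused)
    return [v / total for v in fused]

def predict_level(view_probs):
    """Index of the deterioration level with highest fused probability."""
    fused = late_fuse(view_probs)
    return max(range(len(fused)), key=fused.__getitem__)

# Three views of the same distress, each a softmax over 3 levels
views = [
    [0.6, 0.3, 0.1],
    [0.5, 0.4, 0.1],
    [0.7, 0.2, 0.1],
]
print(predict_level(views))  # -> 0 (all views agree on the lowest level)
```

Averaging is only one possible fusion rule; product or attention-weighted combinations are common alternatives.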





2021 ◽  
Author(s):  
Yi Zhang ◽  
Xinwang Liu ◽  
Siwei Wang ◽  
Jiyuan Liu ◽  
Sisi Dai ◽  
...  
Keyword(s):  


2021 ◽  
Vol 11 (5) ◽  
pp. 7678-7683
Author(s):  
S. Nuanmeesri

Analysis of the symptoms on rose leaves can identify up to 15 different diseases. This research aims to develop Convolutional Neural Network (CNN) models for classifying diseases on rose leaves using hybrid deep learning techniques with a Support Vector Machine (SVM). The developed models were based on the VGG16 architecture, and early or late fusion techniques were applied to concatenate the outputs of the fully connected layers. The results showed that the models based on early fusion performed better than the models based on either late fusion or VGG16 alone. In addition, the models using the SVM classifier were more effective at classifying the diseases appearing on rose leaves than the models using the softmax classifier. In particular, a hybrid deep learning model based on early fusion and SVM, which applied the categorical hinge loss function, yielded a validation accuracy of 88.33% and a validation loss of 0.0679, outperforming the other models. Moreover, this model was evaluated by 10-fold cross-validation, achieving 90.26% accuracy, 90.59% precision, 92.44% recall, and a 91.50% F1-score for disease classification on rose leaves.
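The early-fusion idea in this abstract, concatenating fully-connected-layer outputs before a single SVM classifier, can be sketched as below. The feature sizes, weights, and the linear decision function are illustrative assumptions; the paper uses full VGG16 branches and a multi-class hinge-loss SVM.

```python
# Illustrative sketch of "early fusion": feature vectors from two
# VGG16-based branches are concatenated into one vector, which is then
# scored by a (pre-trained) linear SVM decision function.

def early_fuse(features_a, features_b):
    """Concatenate two branch feature vectors into one fused vector."""
    return list(features_a) + list(features_b)

def linear_svm_decision(weights, bias, features):
    """Decision value of a linear SVM on the fused features."""
    return sum(w * x for w, x in zip(weights, features)) + bias

fused = early_fuse([0.2, 0.8], [0.5, 0.1])  # 2 + 2 -> 4-dim fused vector
score = linear_svm_decision([1.0, -0.5, 0.3, 0.2], -0.1, fused)
print(len(fused), round(score, 2))  # -> 4 -0.13
```

By contrast, late fusion would keep the two branches' classifiers separate and combine only their output scores.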



2021 ◽  
Author(s):  
Lam Pham ◽  
Hieu Tang ◽  
Anahid Jalal ◽  
Alexander Schindler ◽  
Ross King

In this paper, we present a low-complexity deep learning framework for acoustic scene classification (ASC). The proposed framework can be separated into three main steps: front-end spectrogram extraction, back-end classification, and late fusion of predicted probabilities. First, we use the Mel filter, the Gammatone filter, and the Constant Q Transform (CQT) to transform the raw audio signal into spectrograms, in which both frequency and temporal features are represented. The three spectrograms are then fed into three individual back-end convolutional neural networks (CNNs), each classifying into ten urban scenes. Finally, a late fusion of the three predicted probabilities obtained from the three CNNs is conducted to produce the final classification result. To reduce the complexity of the proposed CNN network, we apply two model compression techniques: model restriction and decomposed convolution. Our extensive experiments, conducted on the DCASE 2021 (IEEE AASP Challenge on Detection and Classification of Acoustic Scenes and Events) Task 1A development dataset, achieve a low-complexity CNN-based framework with 128 KB of trainable parameters and a best classification accuracy of 66.7%, improving on the DCASE baseline by 19.0%.
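The final fusion step, combining the ten-class probability vectors from the Mel, Gammatone, and CQT branches, can be sketched as follows. The equal-weight average is an assumption for illustration; the paper may weight or otherwise combine the branch outputs differently.

```python
# Minimal sketch of fusing three back-end CNN outputs: each branch emits
# a 10-class probability vector for the same audio clip, and the vectors
# are averaged to obtain the final scene prediction.

SCENES = 10  # ten urban scene classes in DCASE Task 1A

def fuse_predictions(mel_p, gamma_p, cqt_p):
    """Element-wise mean of the three branch probability vectors."""
    return [(m + g + c) / 3.0 for m, g, c in zip(mel_p, gamma_p, cqt_p)]

mel   = [0.55] + [0.05] * 9           # Mel branch: confident on class 0
gamma = [0.10, 0.46] + [0.055] * 8    # Gammatone branch: prefers class 1
cqt   = [0.28, 0.28] + [0.055] * 8    # CQT branch: split between 0 and 1

fused = fuse_predictions(mel, gamma, cqt)
scene = max(range(SCENES), key=fused.__getitem__)
print(scene)  # -> 0: the Mel branch's confident vote dominates
```

Late fusion of this kind lets each spectrogram type keep its own specialized network while still producing a single decision.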



2021 ◽  
Vol 32 (6) ◽  
Author(s):  
Said Yacine Boulahia ◽  
Abdenour Amamra ◽  
Mohamed Ridha Madi ◽  
Said Daikh


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Francisco Carrillo-Perez ◽  
Juan Carlos Morales ◽  
Daniel Castillo-Secilla ◽  
Yésica Molina-Castro ◽  
Alberto Guillén ◽  
...  

Abstract
Background: Adenocarcinoma and squamous cell carcinoma are the two most prevalent lung cancer types, and distinguishing them requires different screenings, such as the visual inspection of histology slides by an expert pathologist, the analysis of gene expression, or computed tomography scans, among others. In recent years, increasing amounts of biological data have been gathered for decision support systems in diagnosis (e.g. histology imaging, next-generation sequencing data, clinical information, etc.). Using all these sources to design integrative classification approaches may improve the final diagnosis of a patient, in the same way that doctors use multiple types of screenings to reach a final decision. In this work, we present a late fusion classification model using histology and RNA-Seq data for adenocarcinoma, squamous cell carcinoma and healthy lung tissue.
Results: The classification model improves on the results obtained from each source of information separately, reducing the diagnosis error rate by up to 64% over the standalone histology classifier and by 24% over the standalone gene expression classifier, and reaching a mean F1-Score of 95.19% and a mean AUC of 0.991.
Conclusions: These findings suggest that a classification model using a late fusion methodology can considerably help clinicians in diagnosing the aforementioned lung cancer subtypes, compared with using each source of information separately. This approach can also be applied to any cancer type or disease with heterogeneous sources of information.
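A two-modality late fusion of the kind described can be sketched as below. The three-class setup matches the abstract, but the per-modality weights and the weighted-average rule are illustrative assumptions, not values or methods reported in the paper.

```python
# Hedged sketch of two-modality late fusion: a histology-image classifier
# and an RNA-Seq classifier each output probabilities over three classes,
# and the two vectors are combined with per-modality weights.

CLASSES = ["adenocarcinoma", "squamous cell carcinoma", "healthy"]

def weighted_late_fusion(hist_p, rna_p, w_hist=0.4, w_rna=0.6):
    """Weighted average of the two modality outputs, renormalized."""
    fused = [w_hist * h + w_rna * r for h, r in zip(hist_p, rna_p)]
    total = sum(fused)
    return [f / total for f in fused]

hist = [0.50, 0.30, 0.20]  # histology branch output
rna  = [0.70, 0.20, 0.10]  # gene-expression branch output

fused = weighted_late_fusion(hist, rna)
diagnosis = CLASSES[max(range(3), key=fused.__getitem__)]
print(diagnosis)  # -> adenocarcinoma (both modalities lean that way)
```

Because each modality keeps its own classifier, a patient missing one data source can still be scored from the other, which is one practical appeal of late over early fusion.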



2021 ◽  
Author(s):  
Roberto Leyva ◽  
Victor Sanchez
Keyword(s):  

