Learning Network
Recently Published Documents


TOTAL DOCUMENTS

1332
(FIVE YEARS 977)

H-INDEX

35
(FIVE YEARS 22)

2021 ◽  
Vol 13 (20) ◽  
pp. 4123
Author(s):  
Hanqi Wang ◽  
Zhiling Wang ◽  
Linglong Lin ◽  
Fengyu Xu ◽  
Jie Yu ◽  
...  

Vehicle pose estimation is essential to autonomous vehicle (AV) perception. However, because the density distribution of a LiDAR point cloud varies, existing pose estimation methods struggle to extract the heading direction reliably from 3D LiDAR data. In this paper, an optimal vehicle pose estimation network based on time series and spatial tightness (TS-OVPE) is proposed. The network treats five proposed pose estimation algorithms as candidate solutions and selects the optimal estimate for each obstacle vehicle. Among these algorithms, we first propose the Basic Line algorithm, which uses the road direction as prior knowledge. Second, we improve principal component analysis according to the point cloud distribution, yielding the rotating principal component analysis (RPCA) and diagonal principal component analysis (DPCA) algorithms. Finally, we propose two global algorithms that are independent of the prior direction. Four evaluation indexes transform each algorithm's output into a unified dimension; their results are fed into an ensemble learning network that selects the optimal pose estimate among the five algorithms. The spatial evaluation indexes reflect the tightness of the bounding box, while the temporal index reflects the coherence of the direction estimate. Because the network is trained indirectly through these evaluation indexes, it can be applied directly to untrained LiDAR sensors and still shows good pose estimation performance. Our approach was verified on the SemanticKITTI dataset and on our own urban environment dataset. Compared with two mainstream algorithms, the average polygon intersection over union (P-IoU) increased by about 5.25% and 9.67%, the average heading error decreased by about 29.49% and 44.11%, and the average speed direction error decreased by about 3.85% and 46.70%. The experimental results show that the ensemble learning network effectively selects the optimal pose estimate from the five algorithms, making pose estimation more accurate.
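The RPCA/DPCA family above rests on extracting a dominant axis from a vehicle's point cloud. As a minimal illustration (not the paper's algorithm), the sketch below estimates a heading angle from a synthetic 2-D point cloud by taking the direction of the first principal component; the box dimensions, rotation, and tolerance are all invented for the example.

```python
import numpy as np

def pca_heading(points):
    """Estimate a heading from a 2-D (x, y) point cloud as the direction
    of the first principal component (the axis of largest variance)."""
    centered = points - points.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
    principal = eigvecs[:, np.argmax(eigvals)]  # longest-variance axis
    return np.arctan2(principal[1], principal[0])

# Synthetic "vehicle": an elongated 4.0 m x 1.6 m box of points rotated by 30 deg.
rng = np.random.default_rng(0)
box = rng.uniform([-2.0, -0.8], [2.0, 0.8], size=(500, 2))
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
est = pca_heading(box @ R.T)
```

Note that PCA alone recovers the heading only up to a 180-degree ambiguity (an eigenvector's sign is arbitrary), which is one reason the paper combines several candidate algorithms and temporal coherence.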


2021 ◽  
Author(s):  
Yunxiang Liu ◽  
Zhe Yang ◽  
Y. T. Jade Morton ◽  
Ruoyu Li

Symmetry ◽  
2021 ◽  
Vol 13 (10) ◽  
pp. 1914
Author(s):  
Mehmet Ali Kobat ◽  
Ozkan Karaca ◽  
Prabal Datta Barua ◽  
Sengul Dogan

Background and objective: Arrhythmia is a common cardiac disorder worldwide and is diagnosed using electrocardiogram (ECG) signals. ECG signals can be interpreted manually by human experts, but they can also be analyzed automatically. To ease the diagnosis of arrhythmia, machine learning-based automatic detection models have been proposed as intelligent assistants. Materials and methods: In this work, we used an ECG dataset containing 1000 ECG signals from 17 categories. A new hand-modeled learning network is developed on this dataset; the model uses a 3D shape (a prismatoid) to create textural features. Moreover, a tunable Q-factor wavelet transform with low oscillatory parameters and a statistical feature extractor are applied to extract features at both low and high levels. The suggested prismatoid pattern and the statistical feature extractor generate features from 53 sub-bands. Neighborhood component analysis is used to choose the most discriminative features. Two classifiers, k-nearest neighbor (kNN) and support vector machine (SVM), classify the selected top features with 10-fold cross-validation. Results: The best accuracy of the proposed model is 97.30%, obtained with the SVM classifier. Conclusion: The computed results clearly indicate the success of the proposed prismatoid pattern-based model.
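The final classification stage described above (kNN and SVM over selected features, with 10-fold cross-validation) can be sketched with scikit-learn. The synthetic feature matrix below merely mimics the dataset's shape (1000 signals, 17 classes); it is not the ECG data, and the feature count and classifier hyperparameters are placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in feature matrix: 1000 samples, 17 classes, as in the abstract.
X, y = make_classification(n_samples=1000, n_features=60, n_informative=30,
                           n_classes=17, random_state=42)

results = {}
for name, clf in [("kNN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf"))]:
    pipe = make_pipeline(StandardScaler(), clf)   # scale, then classify
    scores = cross_val_score(pipe, X, y, cv=10)   # 10-fold cross-validation
    results[name] = scores.mean()
```

Wrapping the scaler and classifier in one pipeline keeps the scaling statistics inside each cross-validation fold, avoiding leakage from the held-out fold into training.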


2021 ◽  
Author(s):  
Sang-Heon Lim ◽  
Young Jae Kim ◽  
Yeon-Ho Park ◽  
Doojin Kim ◽  
Kwang Gi Kim ◽  
...  

Abstract: Pancreas segmentation is necessary for observing lesions, analyzing anatomical structures, and predicting patient prognosis. Various studies have therefore designed convolutional-neural-network-based segmentation models for the pancreas. However, the deep learning approach is limited by a lack of data, and studies conducted on large computed tomography datasets are scarce. This study therefore performs deep-learning-based semantic segmentation on scans from 1,006 participants and evaluates the automatic pancreas segmentation performance of four individual three-dimensional segmentation networks. We performed internal validation with the 1,006 patients and external validation using The Cancer Imaging Archive (TCIA) pancreas dataset. The best-performing of the four deep learning networks obtained mean precision, recall, and Dice similarity coefficients of 0.869, 0.842, and 0.842, respectively, in internal validation. On the external dataset, the network achieved mean precision, recall, and Dice similarity coefficients of 0.779, 0.749, and 0.735, respectively. We expect that generalized deep-learning-based systems can assist clinical decisions by providing accurate pancreatic segmentation and quantitative information about the pancreas for abdominal computed tomography.
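The three reported metrics are simple overlap statistics between a predicted mask and the ground truth. A minimal sketch, with toy 8x8 masks invented for illustration:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Precision, recall, and Dice similarity coefficient for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()        # true-positive voxels
    precision = tp / pred.sum() if pred.sum() else 0.0
    recall = tp / truth.sum() if truth.sum() else 0.0
    dice = 2 * tp / (pred.sum() + truth.sum())    # harmonic-mean-style overlap
    return precision, recall, dice

truth = np.zeros((8, 8), dtype=int); truth[2:6, 2:6] = 1  # 16 voxels
pred = np.zeros((8, 8), dtype=int); pred[3:7, 2:6] = 1    # overlaps 12 of them
p, r, d = segmentation_metrics(pred, truth)               # each 12/16 = 0.75
```

For 3-D CT volumes the same code applies unchanged, since the sums run over all voxels regardless of array rank.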


Plants are vital because they provide mankind with energy. Plant diseases can harm the leaves at any time between planting and harvesting, resulting in enormous losses in crop output and market value, so a leaf disease detection system plays a significant role in agricultural production. Manual detection requires a large amount of labour as well as an in-depth understanding of plant diseases. Determining the presence of disease in plant leaves can instead use deep learning and machine learning methods, which classify the data against a specified label set. In this paper, disease detection for apple and tomato leaves is carried out with the Chaotic Salp Swarm Algorithm (CSSA), followed by a bidirectional long short-term memory (Bi-LSTM) network for classification. We used the Bi-LSTM architecture to detect disease in tomato and apple leaves in our studies. To determine the type of leaf, we trained the deep learning network on the PlantVillage dataset of damaged and healthy plant leaves. The trained model achieves an estimated test accuracy of 96%.
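The bidirectional recurrence at the heart of a Bi-LSTM can be illustrated with a much-simplified, gate-free tanh RNN run over the sequence in both directions and concatenated. This is a stand-in for the real Bi-LSTM cell (which adds input, forget, and output gates and a cell state); the weights and dimensions below are arbitrary.

```python
import numpy as np

def bidirectional_rnn(seq, Wx, Wh, b):
    """Run a plain tanh RNN over the sequence forwards and backwards,
    then concatenate the two final hidden states (gate-free Bi-LSTM sketch)."""
    def run(xs):
        h = np.zeros(Wh.shape[0])
        for x in xs:
            h = np.tanh(Wx @ x + Wh @ h + b)
        return h
    return np.concatenate([run(seq), run(seq[::-1])])

rng = np.random.default_rng(1)
hidden, feat, steps = 16, 8, 10
Wx = rng.normal(scale=0.1, size=(hidden, feat))
Wh = rng.normal(scale=0.1, size=(hidden, hidden))
b = np.zeros(hidden)
seq = rng.normal(size=(steps, feat))      # e.g. one feature sequence per leaf image
out = bidirectional_rnn(seq, Wx, Wh, b)   # (2 * hidden,) context vector
```

The backward pass lets the final representation depend on both ends of the sequence, which is why bidirectional variants are preferred when the whole input is available at once, as it is here.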


Author(s):  
Peter Evanschitzky ◽  
Nicole Auth ◽  
Tilmann Heil ◽  
Christian Felix Hermanns ◽  
Andreas Erdmann

2021 ◽  
Vol 11 (19) ◽  
pp. 9204
Author(s):  
Xinyi Ma ◽  
Zhifeng Xiao ◽  
Hong-sik Yun ◽  
Seung-Jun Lee

High-resolution remote sensing image scene classification is a challenging visual task due to the large intraclass variance and small interclass variance between categories. To accurately recognize the scene categories, it is essential to learn discriminative features from both global and local critical regions. Recent efforts focus on encouraging the network to learn multigranularity features by destroying the spatial information of the input image at different scales, which introduces meaningless edges that are harmful to training. In this study, we propose a novel method named the Semantic Multigranularity Feature Learning Network (SMGFL-Net) for remote sensing image scene classification. The core idea is to learn both global and multigranularity local features from rearranged intermediate feature maps, thus eliminating the meaningless edges. These features are then fused for the final prediction. Our proposed framework is compared with a collection of state-of-the-art (SOTA) methods on two fine-grained remote sensing image scene datasets, NWPU-RESISC45 and the Aerial Image Dataset (AID). We justify several design choices, including the branch granularities, fusion strategies, pooling operations, and the necessity of feature map rearrangement, through a comparative study. Moreover, the overall performance results show that SMGFL-Net consistently outperforms peer methods in classification accuracy, and its superiority is more apparent with less training data, demonstrating the efficacy of our approach's feature learning.
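The feature-map rearrangement step can be sketched as shuffling non-overlapping patches of an intermediate feature map rather than the input image; because feature maps are already abstract, the shuffle does not create the artificial pixel edges that destroying the input would. The 8x8 map and 2x2 grid below are toy values.

```python
import numpy as np

def rearrange_patches(fmap, grid, seed=0):
    """Split a 2-D feature map into a grid x grid set of non-overlapping
    patches, shuffle them, and reassemble a map of the same shape."""
    h, w = fmap.shape
    ph, pw = h // grid, w // grid
    patches = [fmap[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
               for i in range(grid) for j in range(grid)]
    order = np.random.default_rng(seed).permutation(len(patches))
    rows = [np.hstack([patches[k] for k in order[r * grid:(r + 1) * grid]])
            for r in range(grid)]
    return np.vstack(rows)

fmap = np.arange(64, dtype=float).reshape(8, 8)  # toy intermediate feature map
shuffled = rearrange_patches(fmap, grid=2)
```

Varying `grid` per branch gives the multiple granularities: coarse branches see large intact regions, fine branches see many small ones.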


2021 ◽  
Author(s):  
Anh Nguyen ◽  
Khoa Pham ◽  
Dat Ngo ◽  
Thanh Ngo ◽  
Lam Pham

This paper provides an analysis of state-of-the-art activation functions with respect to supervised classification with deep neural networks. The activation functions comprise the Rectified Linear Unit (ReLU), Exponential Linear Unit (ELU), Scaled Exponential Linear Unit (SELU), Gaussian Error Linear Unit (GELU), and Inverse Square Root Linear Unit (ISRLU). For evaluation, experiments are conducted over two deep learning network architectures that integrate these activation functions. The first model, based on a multilayer perceptron (MLP), is evaluated on the MNIST dataset. Meanwhile, the second model, a VGGish-like architecture, is applied to Acoustic Scene Classification (ASC) Task 1A of the DCASE 2018 challenge, evaluating whether these activation functions work well across different datasets as well as different network architectures.
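All five activation functions have closed forms that can be written directly in NumPy. GELU is given here in its exact form via the Gaussian CDF (the tanh approximation is also common), and the SELU constants are the standard published fixed-point values:

```python
import numpy as np
from math import erf

_erf = np.vectorize(erf)  # elementwise error function

def relu(x):
    return np.maximum(0.0, x)

def elu(x, alpha=1.0):
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def selu(x):
    # Fixed-point constants from the SELU paper (Klambauer et al., 2017).
    alpha, scale = 1.6732632423543772, 1.0507009873554805
    return scale * np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def gelu(x):
    # Exact form: x * Phi(x), with Phi the standard normal CDF via erf.
    return x * 0.5 * (1.0 + _erf(x / np.sqrt(2.0)))

def isrlu(x, alpha=1.0):
    return np.where(x >= 0, x, x / np.sqrt(1.0 + alpha * x * x))
```

The negative-side behavior is what distinguishes them: ReLU clamps to zero, ELU and SELU saturate exponentially, ISRLU saturates via an inverse square root (cheaper than exp), and GELU weights the input by its CDF value.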


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jakob Weiss ◽  
Jana Taron ◽  
Zexi Jin ◽  
Thomas Mayrhofer ◽  
Hugo J. W. L. Aerts ◽  
...  

Abstract: A deep learning convolutional neural network (CNN) can predict mortality from chest radiographs, yet it is unknown whether radiologists can perform the same task. Here, we investigate whether radiologists can visually assess the image gestalt of a chest radiograph (defined as its deviation from an unremarkable chest radiograph, associated with the likelihood of 6-year mortality) to predict 6-year mortality. The assessment was validated in an independent testing dataset and compared with the performance of a CNN developed for mortality prediction. Results are reported for the testing dataset only (n = 100; age 62.5 ± 5.2; male 55%; event rate 50%). The probability of 6-year mortality based on image gestalt had high accuracy (AUC: 0.68, 95% CI 0.58–0.78), similar to that of the CNN (AUC: 0.67, 95% CI 0.57–0.77; p = 0.90). Patients with high or very high image gestalt ratings were significantly more likely to die than those rated very low (p ≤ 0.04). Assignment to risk categories was not explained by patient characteristics, traditional risk factors, or imaging findings (p ≥ 0.2). In conclusion, radiologists' assessment of image gestalt on chest radiographs yields high prognostic accuracy for the probability of mortality, similar to that of a specifically trained CNN. Further studies are warranted to confirm this concept and to determine potential clinical benefits.
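The AUC values being compared have a direct probabilistic reading: the chance that a randomly chosen patient who died is scored higher than a randomly chosen survivor. This can be computed without tracing the ROC curve, via the Mann-Whitney U statistic; the six scores and labels below are invented for illustration.

```python
import numpy as np

def auc_mann_whitney(scores, labels):
    """AUC as the Mann-Whitney U statistic: the probability that a random
    positive case is scored above a random negative case (ties count 0.5)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()  # positive-over-negative pairs
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4]  # hypothetical risk scores
labels = [1, 1, 0, 1, 0, 0]               # 1 = died within 6 years
auc = auc_mann_whitney(scores, labels)    # 8 of 9 pairs ordered correctly
```

An AUC of 0.5 corresponds to random ordering, which is why 0.67-0.68 on a 50% event rate, while modest, is clearly above chance.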


Author(s):  
Khalid Mujasam Batoo ◽  
Saravanan Pandiaraj ◽  
Muthumareeswaran Muthuramamoorthy ◽  
Emad Raslan ◽  
Sujatha Krishnamoorthy
