Sampling Algorithms Combination with Machine Learning for Efficient Safe Trajectory Planning

2021 ◽  
Vol 11 (1) ◽  
pp. 1-11
Author(s):  
Amit Chaulwar ◽  

The planning of safe trajectories in critical traffic scenarios using model-based algorithms is a very computationally intensive task. Recently proposed algorithms, namely Hybrid Augmented CL-RRT, Hybrid Augmented CL-RRT+ and GATE-ARRT+, reduce the computation time for safe trajectory planning drastically by combining the deep learning algorithm 3D-ConvNet with a vehicle dynamic model. An efficient embedded implementation of these algorithms is required, as the resources of the vehicle's on-board microcontroller are limited. This work proposes methodologies for replacing the computationally intensive modules of these trajectory planning algorithms with different efficient machine learning and analytical methods. The required computational resources are measured by deploying and running the algorithms on various hardware platforms. The results show a significant reduction in computational resources and the potential of the proposed algorithms to run in real time. Alternative architectures for the 3D-ConvNet are also presented to further reduce the required computational resources.
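A minimal sketch of the kind of lightweight 3D-ConvNet discussed above, assuming a spatio-temporal occupancy grid as input and a vector of region scores as output; the layer sizes, input shape, and output head are placeholders and do not reproduce the architecture used in Hybrid Augmented CL-RRT or GATE-ARRT+.

```python
# Illustrative small 3D-ConvNet on a (time x height x width x channels) grid.
# Shapes and layer widths are assumptions, not the paper's architecture.
import tensorflow as tf

def build_3d_convnet(input_shape=(8, 64, 64, 1), n_outputs=16):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv3D(16, kernel_size=3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling3D(pool_size=2),
        tf.keras.layers.Conv3D(32, kernel_size=3, activation="relu", padding="same"),
        tf.keras.layers.GlobalAveragePooling3D(),
        tf.keras.layers.Dense(n_outputs, activation="sigmoid"),  # placeholder region scores
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

model = build_3d_convnet()
model.summary()  # inspect parameter count, relevant when targeting embedded hardware
```

Counting parameters with `model.summary()` is one simple way to compare such alternative architectures against the resource budget of an on-board microcontroller.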

2021 ◽  
Vol 35 (4) ◽  
pp. 349-357
Author(s):  
Shilpa P. Khedkar ◽  
Aroul Canessane Ramalingam

The Internet of Things (IoT) is a rising infrastructure of the 21st century. The classification of traffic over IoT networks has attained significant importance due to the rapid growth of users and devices. It is the need of the hour to isolate normal traffic from malicious traffic and to route the normal traffic to its proper destination to satisfy the QoS requirements of IoT users. Malicious traffic can be detected by continuously monitoring traffic for suspicious links, files, connections created and received, unrecognised protocol/port numbers, and suspicious destination/source IP combinations. A proficient classification mechanism in an IoT environment should be capable of classifying heavy traffic quickly, deflecting malevolent traffic on time, and transmitting benign traffic to the designated nodes to serve the needs of users. In this work, the AdaBoost and XGBoost machine learning algorithms and a Deep Neural Network approach are proposed to classify IoT traffic, which eventually enhances the throughput of IoT networks and reduces congestion over IoT channels. The experimental results indicate that the deep learning algorithm achieves higher accuracy than the machine learning algorithms.
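A hedged sketch of the comparison described above, using a boosting classifier and a small neural network on synthetic flow features; the feature set, labels, and scikit-learn MLP stand-in for the deep model are assumptions, not the study's pipeline or dataset.

```python
# Compare a boosting classifier and a small neural network on toy "traffic" features.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))                   # 8 illustrative flow features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)    # toy rule: 1 = malicious, 0 = benign

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = [
    ("AdaBoost", AdaBoostClassifier(n_estimators=100, random_state=0)),
    ("Neural net (MLP)", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)),
]
for name, clf in models:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

On a real IoT trace the same comparison would be run on engineered flow features (packet counts, ports, flag statistics) rather than random numbers.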


Author(s):  
Fawziya M. Rammo ◽  
Mohammed N. Al-Hamdani

Many language identification (LID) systems rely on language models that use machine learning (ML) approaches, and such LID systems utilize rather long recording periods to achieve satisfactory accuracy. This study aims to extract enough information from short recording intervals to successfully classify the spoken languages under test. The classification process is based on frames of 2-18 seconds, whereas most previous LID systems were based on much longer time frames (from 3 seconds to 2 minutes). This research defined and implemented many low-level features using MFCC (Mel-frequency cepstral coefficients). Speech files in five languages (English, French, German, Italian, Spanish) from voxforge.org, an open-source corpus consisting of user-submitted audio clips in various languages, are the source of the data used in this paper. A CNN (Convolutional Neural Network) algorithm was applied for classification, with near-perfect results: binary language classification had an accuracy of 100%, and multi-language classification over the five languages had an accuracy of 99.8%.
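A minimal sketch of an MFCC front-end feeding a small 2-D CNN classifier, in the spirit of the pipeline above; the file path, clip duration, MFCC count, and network depth are placeholders, and the exact features and architecture of the paper are not reproduced.

```python
# MFCC extraction (librosa) plus a small CNN language classifier (Keras).
import numpy as np
import librosa
import tensorflow as tf

def mfcc_features(path, sr=16000, n_mfcc=20, duration=2.0):
    # Load a short clip and compute an MFCC matrix of shape (n_mfcc, frames).
    y, sr = librosa.load(path, sr=sr, duration=duration)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)

def build_lid_cnn(input_shape, n_languages=5):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(n_languages, activation="softmax"),
    ])

# Example usage (assumes a locally downloaded VoxForge clip; path is hypothetical):
# feats = mfcc_features("voxforge_sample.wav")            # shape (20, T)
# model = build_lid_cnn(input_shape=feats.shape + (1,))   # add a channel axis
```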


Information ◽  
2020 ◽  
Vol 11 (5) ◽  
pp. 279 ◽  
Author(s):  
Bambang Susilo ◽  
Riri Fitri Sari

The internet has become an inseparable part of human life, and the number of devices connected to the internet is increasing sharply. In particular, Internet of Things (IoT) devices have become a part of everyday human life. However, some challenges are increasing, and their solutions are not well defined. More and more security challenges concerning the IoT are arising. Many methods have been developed to secure IoT networks, but many more can still be developed. One proposed way to improve IoT security is to use machine learning. This research discusses several machine-learning and deep-learning strategies, as well as standard datasets, for improving the security performance of the IoT. We developed an algorithm for detecting denial-of-service (DoS) attacks using a deep-learning algorithm. This research used the Python programming language with packages such as scikit-learn, TensorFlow, and Seaborn. We found that a deep-learning model could increase accuracy so that the mitigation of attacks occurring on an IoT network is as effective as possible.
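A hedged sketch of a deep-learning DoS detector of the kind described above, using the same stack (scikit-learn and TensorFlow); the synthetic tabular features and layer sizes are assumptions standing in for the study's dataset and model.

```python
# Binary DoS/benign classifier in Keras on synthetic tabular flow features.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 20)).astype("float32")        # 20 illustrative flow features
y = (X[:, :3].sum(axis=1) > 1.0).astype("float32")       # toy DoS (1) vs. benign (0) labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)
scaler = StandardScaler().fit(X_tr)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(scaler.transform(X_tr), y_tr, epochs=5, batch_size=64, verbose=0)
print("test accuracy:", model.evaluate(scaler.transform(X_te), y_te, verbose=0)[1])
```

In practice the synthetic arrays would be replaced by a standard intrusion-detection dataset loaded with scikit-learn or pandas, with the same scaling and training loop.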


Computation ◽  
2019 ◽  
Vol 7 (1) ◽  
pp. 13 ◽  
Author(s):  
Francesco Rundo ◽  
Sergio Rinella ◽  
Simona Massimino ◽  
Marinella Coco ◽  
Giorgio Fallica ◽  
...  

The development of detection methodologies for reliable drowsiness tracking is a challenging task requiring both appropriate signal inputs and accurate, robust algorithms of analysis. The aim of this research is to develop an advanced method to detect the drowsiness stage in the electroencephalogram (EEG), the most reliable physiological measurement, using promising Machine Learning methodologies. The methods used in this paper are based on Machine Learning methodologies such as a stacked autoencoder with softmax layers. Results obtained from 62 volunteers indicate 100% accuracy in drowsy/wakeful discrimination, proving that this approach can be very promising for use in the next generation of medical devices. This methodology can be extended to other uses in everyday life in which maintaining the level of vigilance is critical. Future work aims to perform an extended validation of the proposed pipeline with a wide-ranging training set, in which we integrate the photoplethysmogram (PPG) signal and visual information with EEG analysis in order to improve the robustness of the overall approach.
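A minimal sketch of an autoencoder with a softmax head for drowsy/wakeful discrimination, in the spirit of the stacked-autoencoder approach above; the EEG feature dimension and layer sizes are placeholders, the data is synthetic, and the autoencoder is trained jointly here for brevity rather than layer-wise as in a classical stacked autoencoder.

```python
# Autoencoder pretraining on unlabeled feature vectors, then a softmax
# classifier stacked on the frozen encoder for drowsy/wakeful labels.
import numpy as np
import tensorflow as tf

n_features = 128                                           # placeholder EEG feature dimension
X = np.random.rand(1000, n_features).astype("float32")     # synthetic feature vectors
y = np.random.randint(0, 2, size=1000)                     # 0 = wakeful, 1 = drowsy (toy labels)

# Autoencoder: learn a compressed representation without labels.
inputs = tf.keras.Input(shape=(n_features,))
encoded = tf.keras.layers.Dense(64, activation="relu")(inputs)
encoded = tf.keras.layers.Dense(32, activation="relu")(encoded)
decoded = tf.keras.layers.Dense(64, activation="relu")(encoded)
decoded = tf.keras.layers.Dense(n_features, activation="sigmoid")(decoded)

autoencoder = tf.keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=5, batch_size=32, verbose=0)

# Stack a softmax classification layer on the frozen encoder.
encoder = tf.keras.Model(inputs, encoded)
encoder.trainable = False
clf = tf.keras.Sequential([encoder, tf.keras.layers.Dense(2, activation="softmax")])
clf.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
clf.fit(X, y, epochs=5, batch_size=32, verbose=0)
```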


2021 ◽  
Author(s):  
Sidhant Idgunji ◽  
Madison Ho ◽  
Jonathan L. Payne ◽  
Daniel Lehrmann ◽  
Michele Morsilli ◽  
...  

The growing digitization of fossil images has vastly improved and broadened the potential application of big data and machine learning, particularly computer vision, in paleontology. Recent studies show that machine learning is capable of approaching human abilities of classifying images, and with the increase in computational power and visual data, it stands to reason that it can match human ability but at much greater efficiency in the near future. Here we demonstrate the potential of using deep learning to identify skeletal grains at different levels of the Linnaean taxonomic hierarchy. Our approach was two-pronged. First, we built a database of skeletal grain images spanning a wide range of animal phyla and classes and used this database to train the model. We used a Python-based method to automate image recognition and extraction from published sources. Second, we developed a deep learning algorithm that can attach multiple labels to a single image. Conventionally, deep learning is used to predict a single class from an image; here, we adopted a Branch Convolutional Neural Network (B-CNN) technique to classify multiple taxonomic levels for a single skeletal grain image. Using this method, we achieved over 90% accuracy for both the coarse, phylum-level recognition and the fine, class-level recognition across diverse skeletal grains (6 phyla and 15 classes). Furthermore, we found that image augmentation improves the overall accuracy. This tool has potential applications in geology ranging from biostratigraphy to paleo-bathymetry, paleoecology, and microfacies analysis. Further improvement of the algorithm and expansion of the training dataset will continue to narrow the efficiency gap between human expertise and machine learning.
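A hedged sketch of a Branch Convolutional Neural Network with a coarse (phylum) head attached to an early block and a fine (class) head at the end, matching the two-level setup described above; the class counts follow the abstract, but the image size, layer widths, and loss weights are placeholders rather than the authors' architecture.

```python
# B-CNN-style model with two classification heads at different depths.
import tensorflow as tf

def build_bcnn(input_shape=(128, 128, 3), n_phyla=6, n_classes=15):
    inputs = tf.keras.Input(shape=input_shape)
    x = tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
    x = tf.keras.layers.MaxPooling2D()(x)

    # Coarse branch: predict the phylum from early features.
    coarse = tf.keras.layers.GlobalAveragePooling2D()(x)
    phylum_out = tf.keras.layers.Dense(n_phyla, activation="softmax", name="phylum")(coarse)

    # Fine branch: deeper features predict the class.
    x = tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same")(x)
    x = tf.keras.layers.MaxPooling2D()(x)
    fine = tf.keras.layers.GlobalAveragePooling2D()(x)
    class_out = tf.keras.layers.Dense(n_classes, activation="softmax", name="class_level")(fine)

    model = tf.keras.Model(inputs, [phylum_out, class_out])
    model.compile(optimizer="adam",
                  loss={"phylum": "sparse_categorical_crossentropy",
                        "class_level": "sparse_categorical_crossentropy"},
                  loss_weights={"phylum": 0.3, "class_level": 0.7})
    return model

model = build_bcnn()
model.summary()
```

Training such a model uses a pair of label arrays per image (phylum index and class index), so each skeletal grain contributes to both heads simultaneously.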


2021 ◽  
Author(s):  
Donghwan Yun ◽  
Semin Cho ◽  
Yong Chul Kim ◽  
Dong Ki Kim ◽  
Kook-Hwan Oh ◽  
...  

BACKGROUND: Precise prediction of contrast media-induced acute kidney injury (CIAKI) is an important issue because of its relationship with worse outcomes.
OBJECTIVE: Herein, we examined whether a deep learning algorithm could predict the risk of intravenous CIAKI better than other machine learning and logistic regression models in patients undergoing computed tomography.
METHODS: A total of 14,185 cases that received intravenous contrast media for computed tomography under the preventive and monitoring facility at Seoul National University Hospital were reviewed. CIAKI was defined as an increase in serum creatinine of ≥0.3 mg/dl within 2 days and/or ≥50% within 7 days. Using both time-varying and time-invariant features, machine learning models, such as a recurrent neural network (RNN), light gradient boosting machine, extreme gradient boosting machine, random forest, decision tree, support vector machine, k-nearest neighbors, and logistic regression, were developed on a training set, and their performance was compared using the area under the receiver operating characteristic curve (AUROC) on a test set.
RESULTS: CIAKI developed in 261 cases (1.8%). The RNN model had the highest AUROC, 0.755 (0.708-0.802), for predicting CIAKI, which was superior to those obtained from the other machine learning models. When CIAKI was instead defined as an increase in serum creatinine of ≥0.5 mg/dl and/or ≥25% within 3 days, the highest performance was again achieved by the RNN model, with an AUROC of 0.716 (0.664-0.768). In the feature ranking analysis, albumin level was the factor contributing most to RNN performance, followed by time-varying kidney function.
CONCLUSIONS: Application of a deep learning algorithm improves the predictability of intravenous CIAKI after computed tomography, providing a basis for future clinical alarming and preventive systems.
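A minimal sketch of a recurrent model that combines time-varying inputs (e.g., serial laboratory values) with time-invariant inputs and is evaluated by AUROC, mirroring the setup above; the input shapes, GRU cell, and synthetic data are assumptions and do not reproduce the study's variables or model.

```python
# RNN over a sequence of time-varying features, concatenated with static
# features, producing a single risk probability evaluated with AUROC.
import numpy as np
import tensorflow as tf
from sklearn.metrics import roc_auc_score

n_steps, n_seq_feats, n_static_feats = 5, 4, 10
X_seq = np.random.rand(1000, n_steps, n_seq_feats).astype("float32")   # e.g., serial creatinine
X_static = np.random.rand(1000, n_static_feats).astype("float32")      # e.g., age, albumin
y = np.random.randint(0, 2, size=1000)                                  # toy CIAKI labels

seq_in = tf.keras.Input(shape=(n_steps, n_seq_feats))
static_in = tf.keras.Input(shape=(n_static_feats,))
h = tf.keras.layers.GRU(32)(seq_in)                      # recurrent encoder of the time series
merged = tf.keras.layers.concatenate([h, static_in])
out = tf.keras.layers.Dense(1, activation="sigmoid")(merged)

model = tf.keras.Model([seq_in, static_in], out)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit([X_seq, X_static], y, epochs=3, batch_size=64, verbose=0)

pred = model.predict([X_seq, X_static], verbose=0).ravel()
print("AUROC:", roc_auc_score(y, pred))   # ~0.5 on random data; meaningful only on real cohorts
```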


2019 ◽  
Vol 2019 ◽  
pp. 1-9
Author(s):  
Sheng Huang ◽  
Xiaofei Fan ◽  
Lei Sun ◽  
Yanlu Shen ◽  
Xuesong Suo

Traditionally, the classification of seed defects mainly relies on color, shape, and texture characteristics. This approach requires repeated extraction of a large amount of feature information, which is not used efficiently in detection. In recent years, deep learning has performed well in the field of image recognition. We introduced convolutional neural networks (CNNs) and transfer learning into the quality classification of seeds and compared them with traditional machine learning algorithms. Experiments showed that the deep learning algorithm was significantly better than the traditional machine learning algorithm, with an accuracy of 95% (GoogLeNet) vs. 79.2% (SURF+SVM). We used three classifiers in GoogLeNet to demonstrate that network accuracy increases as the depth of the network increases. We used visualization techniques to obtain the feature maps of each layer of the network in the CNNs and used heat maps to represent the probability distribution of the inference results. As an end-to-end network, CNNs can be easily applied to automated seed manufacturing.
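A hedged sketch of CNN transfer learning for seed-quality classification as described above. Keras does not ship GoogLeNet (Inception v1), so InceptionV3 stands in here; the class count, image size, and directory layout are placeholders, and downloading the ImageNet weights requires network access.

```python
# Transfer learning: freeze a pretrained backbone and retrain only the head.
import tensorflow as tf

n_seed_classes = 4  # placeholder number of seed quality classes

base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet",
                                          input_shape=(299, 299, 3), pooling="avg")
base.trainable = False  # keep pretrained features fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(n_seed_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Typical usage (hypothetical folder of labelled seed images):
# ds = tf.keras.utils.image_dataset_from_directory("seeds/", image_size=(299, 299))
# model.fit(ds, epochs=5)
```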

