Partial caging: a clearance-based definition, datasets, and deep learning

2021
Author(s):
Michael C. Welle
Anastasiia Varava
Jeffrey Mahler
Ken Goldberg
Danica Kragic
...

Abstract Caging grasps limit the mobility of an object to a bounded component of configuration space. We introduce a notion of partial cage quality based on maximal clearance of an escaping path. As computing this is a computationally demanding task even in a two-dimensional scenario, we propose a deep learning approach. We design two convolutional neural networks and construct a pipeline for real-time planar partial cage quality estimation directly from 2D images of object models and planar caging tools. One neural network, CageMaskNN, is used to identify caging tool locations that can support partial cages, while a second network that we call CageClearanceNN is trained to predict the quality of those configurations. A partial caging dataset of 3811 images of objects and more than 19 million caging tool configurations is used to train and evaluate these networks on previously unseen objects and caging tool configurations. Experiments show that evaluation of a given configuration on a GeForce GTX 1080 GPU takes less than 6 ms. Furthermore, an additional dataset focused on grasp-relevant configurations is curated and consists of 772 objects with 3.7 million configurations. We also use this dataset for 2D cage acquisition on novel objects. We study how network performance depends on the datasets, as well as how to efficiently deal with unevenly distributed training data. In further analysis, we show that the evaluation pipeline can approximately identify connected regions of successful caging tool placements, and we evaluate the continuity of the cage quality score along caging tool trajectories. The influence of disturbances is investigated and quantitative results are provided.
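To make the two-stage pipeline concrete, here is a minimal PyTorch-style sketch; the layer sizes, the two-channel (object mask, tool mask) input encoding, and the 0.5 feasibility threshold are illustrative assumptions, not the CageMaskNN/CageClearanceNN architectures reported in the paper.

```python
# Hypothetical sketch of a two-stage partial-cage evaluation pipeline.
import torch
import torch.nn as nn

class SmallConvNet(nn.Module):
    """Shared backbone shape: (object mask, tool mask) image -> scalar output."""
    def __init__(self, out_dim):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, out_dim)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Stage 1: does this caging tool placement support a partial cage at all?
cage_mask_net = SmallConvNet(out_dim=1)        # logit -> sigmoid
# Stage 2: predicted clearance-based quality of the partial cage.
cage_clearance_net = SmallConvNet(out_dim=1)   # regression output

x = torch.randn(8, 2, 64, 64)                  # batch of (object, tool) images
feasible = torch.sigmoid(cage_mask_net(x)) > 0.5
quality = cage_clearance_net(x)                # only meaningful where feasible
```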

2021
Vol 13 (19)
pp. 3859
Author(s):
Joby M. Prince Czarnecki
Sathishkumar Samiappan
Meilun Zhou
Cary Daniel McCraine
Louis L. Wasson

The radiometric quality of remotely sensed imagery is crucial for precision agriculture applications because estimations of plant health rely on the underlying quality. Sky conditions, and specifically shadowing from clouds, are critical determinants in the quality of images that can be obtained from low-altitude sensing platforms. In this work, we first compare common deep learning approaches to classify sky conditions with regard to cloud shadows in agricultural fields using a visible spectrum camera. We then develop an artificial-intelligence-based edge computing system to fully automate the classification process. Training data consisting of 100 oblique angle images of the sky were provided to a convolutional neural network and two deep residual neural networks (ResNet18 and ResNet34) to facilitate learning two classes, namely (1) good image quality expected, and (2) degraded image quality expected. The expectation of quality stemmed from the sky condition (i.e., density, coverage, and thickness of clouds) present at the time of the image capture. These networks were tested using a set of 13,000 images. Our results demonstrated that ResNet18 and ResNet34 classifiers produced better classification accuracy when compared to a convolutional neural network classifier. The best overall accuracy was obtained by ResNet34, which was 92% accurate, with a Kappa statistic of 0.77. These results demonstrate a low-cost solution to quality control for future autonomous farming systems that will operate without human intervention and supervision.
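As a rough illustration of the ResNet18 baseline used here, the following sketch fine-tunes a two-class head on top of torchvision's ResNet18; the optimizer, learning rate, and 224-pixel input size are assumptions rather than the study's actual training setup.

```python
import torch
import torch.nn as nn
from torchvision import models

# ResNet18 backbone with a two-class head for the sky-condition labels
# ("good image quality expected" vs. "degraded image quality expected").
model = models.resnet18(weights=None)          # pretrained weights could also be loaded
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed optimizer/LR

images = torch.randn(4, 3, 224, 224)           # placeholder oblique sky images
labels = torch.tensor([0, 1, 0, 1])
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```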


2020
Vol 8
Author(s):
Adil Khadidos
Alaa O. Khadidos
Srihari Kannan
Yuvaraj Natarajan
Sachi Nandan Mohanty
...

In this paper, a data mining model on a hybrid deep learning framework is designed to diagnose the medical conditions of patients infected with coronavirus disease 2019 (COVID-19). The hybrid deep learning model is designed as a combination of a convolutional neural network (CNN) and a recurrent neural network (RNN) and named the DeepSense method. It is designed as a series of layers to extract and classify the related features of COVID-19 infections from the lungs. Computed tomography images are used as input data, and the classifier is designed to ease the process of classification by learning the multidimensional input data using the Expert Hidden layers. The model is validated against medical image datasets to predict infections using deep learning classifiers. The results show that the DeepSense classifier offers improved accuracy over conventional deep learning and machine learning classifiers. The proposed method is validated against three different datasets using 70%, 80%, and 90% training splits, which specifically characterizes the quality of the diagnostic method adopted for the prediction of COVID-19 infections in a patient.
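A hedged sketch of a CNN+RNN hybrid in the spirit of the described DeepSense design is shown below: a small CNN encodes each CT slice and a GRU aggregates the slice sequence into a two-class prediction. All dimensions, the GRU choice, and the slice-sequence input format are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class CnnRnnClassifier(nn.Module):
    def __init__(self, feat_dim=64, hidden=128, num_classes=2):
        super().__init__()
        # CNN encoder applied to each CT slice independently.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # RNN aggregates the sequence of slice features.
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, x):                # x: (batch, slices, 1, H, W)
        b, s = x.shape[:2]
        f = self.cnn(x.flatten(0, 1)).flatten(1).view(b, s, -1)
        _, h = self.rnn(f)               # final hidden state summarizes the scan
        return self.fc(h[-1])

scan = torch.randn(2, 10, 1, 64, 64)     # two placeholder CT scans, 10 slices each
logits = CnnRnnClassifier()(scan)
```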


2021
Vol 182 (2)
pp. 95-110
Author(s):
Linh Le
Ying Xie
Vijay V. Raghavan

The k Nearest Neighbor (KNN) algorithm has been widely applied in various supervised learning tasks due to its simplicity and effectiveness. However, the quality of KNN decision making is directly affected by the quality of the neighborhoods in the modeling space. Efforts have been made to map data to a better feature space either implicitly with kernel functions, or explicitly through learning linear or nonlinear transformations. However, all these methods use pre-determined distance or similarity functions, which may limit their learning capacity. In this paper, we present two loss functions, namely KNN Loss and Fuzzy KNN Loss, to quantify the quality of neighborhoods formed by KNN with respect to supervised learning, such that minimizing the loss function on the training data leads to maximizing KNN decision accuracy on the training data. We further present a deep learning strategy that is able to learn, by minimizing KNN loss, pairwise similarities of data that implicitly map the data to a feature space where the quality of KNN neighborhoods is optimized. Experimental results show that this deep learning strategy (denoted as Deep KNN) outperforms state-of-the-art supervised learning methods on multiple benchmark data sets.
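The exact KNN Loss and Fuzzy KNN Loss are defined in the paper; as a loose illustration of the idea of a differentiable "neighborhood quality" objective, the sketch below uses a soft-neighbor formulation (essentially the neighborhood components analysis objective) on top of a small embedding network.

```python
import torch
import torch.nn as nn

def soft_knn_loss(embeddings, labels):
    # Pairwise squared distances in the learned feature space.
    n = embeddings.size(0)
    d = torch.cdist(embeddings, embeddings).pow(2)
    d = d + torch.eye(n, device=embeddings.device) * 1e9   # exclude self-neighbors
    p = torch.softmax(-d, dim=1)                           # soft neighbor assignment
    same = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    # Probability mass that each point's soft neighbors share its label.
    correct = (p * same).sum(dim=1).clamp_min(1e-12)
    return -correct.log().mean()

net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 16))
x, y = torch.randn(32, 20), torch.randint(0, 3, (32,))
loss = soft_knn_loss(net(x), y)
loss.backward()
```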


2021
Vol 12 (1)
Author(s):
Simon Müller
Christina Sauter
Ramesh Shunmugasundaram
Nils Wenzler
Vincent De Andrade
...

Abstract Accurate 3D representations of lithium-ion battery electrodes, in which the active particles, binder and pore phases are distinguished and labeled, can assist in understanding and ultimately improving battery performance. Here, we demonstrate a methodology for using deep-learning tools to achieve reliable segmentations of volumetric images of electrodes on which standard segmentation approaches fail due to insufficient contrast. We implement the 3D U-Net architecture for segmentation, and, to overcome the limitations of training data obtained experimentally through imaging, we show how synthetic learning data, consisting of realistic artificial electrode structures and their tomographic reconstructions, can be generated and used to enhance network performance. We apply our method to segment x-ray tomographic microscopy images of graphite-silicon composite electrodes and show it is accurate across standard metrics. We then apply it to obtain a statistically meaningful analysis of the microstructural evolution of the carbon-black and binder domain during battery operation.
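The following shape-level sketch shows the segmentation setup only: a greyscale volume is mapped to per-voxel labels for the three phases (active particle, binder, pore). The tiny encoder is a stand-in for the 3D U-Net, and the random synthetic volume is a placeholder for the generated artificial electrode structures and their tomographic reconstructions.

```python
import torch
import torch.nn as nn

# Stand-in volumetric segmentation network (not the paper's 3D U-Net):
# greyscale volume in, 3-channel per-voxel logits out.
seg_net = nn.Sequential(
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 3, 1),                         # 3 phases: particle, binder, pore
)

synthetic_volume = torch.randn(1, 1, 32, 32, 32)       # placeholder synthetic tomogram
synthetic_labels = torch.randint(0, 3, (1, 32, 32, 32))  # known labels of the artificial structure

logits = seg_net(synthetic_volume)                     # (1, 3, 32, 32, 32)
loss = nn.CrossEntropyLoss()(logits, synthetic_labels)
loss.backward()
```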


Author(s):
Mr Almelu
Dr. S. Veenadhari
Kamini Maheshwar

Internet of Things (IoT) systems create a large amount of sensing information. The consistency of this information is an essential problem for ensuring the quality of IoT services. IoT data, however, generally suffer from a variety of factors such as collisions, unstable network communication, noise, manual system closure, incomplete values and equipment failure. Due to excessive latency, bandwidth limitations, and high communication costs, transferring all IoT data to the cloud to solve the missing data problem may have a detrimental impact on network performance and service quality. As a result, the issue of missing data should be addressed as soon as feasible by offloading tasks such as prediction or estimation to the network’s edge devices, closer to the data source. In this work, we show how deep learning may be used to offload such tasks in IoT applications.
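As a loose illustration (not the authors' model), the sketch below shows the kind of lightweight recurrent estimator that could run on an edge device to fill in a missing sensor reading locally instead of forwarding raw data to the cloud; the window length and model size are assumptions.

```python
import torch
import torch.nn as nn

class EdgeImputer(nn.Module):
    """Tiny GRU that estimates a missing reading from the recent valid samples."""
    def __init__(self, hidden=16):
        super().__init__()
        self.rnn = nn.GRU(1, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, window):              # window: (batch, steps, 1)
        _, h = self.rnn(window)
        return self.out(h[-1])              # estimate for the missing reading

recent_readings = torch.randn(1, 24, 1)     # last 24 valid sensor samples (placeholder)
estimated_value = EdgeImputer()(recent_readings)
```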


2021
Vol 905 (1)
pp. 012018
Author(s):
I Y Prayogi
Sandra
Y Hendrawan

Abstract The objective of this study is to classify the quality of dried clove flowers using a deep learning method with the Convolutional Neural Network (CNN) algorithm, and to perform a sensitivity analysis of the CNN hyperparameters to obtain the best model for the clove quality classification process. The quality of clove as raw material in this study was determined according to SNI 3392-1994 by PT. Perkebunan Nusantara XII Pancusari Plantation, Malang, East Java, Indonesia. In total, 1,600 images of dried clove flowers were divided into 4 quality grades, each with 225 training images, 75 validation images, and 100 test images. The first step of this study is to build a CNN model architecture as the first model, which achieves 65.25% classification accuracy. The second step is to analyze the sensitivity of the CNN hyperparameters on the first model; the best hyperparameter value found in each step is then used in the next stage. Finally, after the hyperparameter tuning is carried out, the classification accuracy on the test data improves to 87.75%.
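A minimal sketch of a four-class clove-quality CNN is given below, with the kinds of hyperparameters a sensitivity analysis would vary (filter count, dropout) exposed as arguments; the actual architecture and hyperparameter ranges from the study are not reproduced here.

```python
import torch
import torch.nn as nn

def build_clove_cnn(num_filters=32, dropout=0.25, num_classes=4):
    # Simple two-block CNN; num_filters and dropout are the kind of
    # hyperparameters that a sensitivity analysis would sweep.
    return nn.Sequential(
        nn.Conv2d(3, num_filters, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(num_filters, num_filters * 2, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Dropout(dropout),
        nn.LazyLinear(num_classes),          # infers the flattened size at first call
    )

model = build_clove_cnn()
images = torch.randn(8, 3, 128, 128)         # placeholder dried clove flower images
logits = model(images)                       # one score per quality grade
```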


2018
Author(s):
Shuntaro Watanabe
Kazuaki Sumi
Takeshi Ise

Abstract Classifying and mapping vegetation are very important tasks in environmental science and natural resource management. However, these tasks are not easy because conventional methods such as field surveys are highly labor intensive. Automatic identification of target objects from visual data is one of the most promising ways to reduce the costs of vegetation mapping. Although deep learning has recently become a new solution for image recognition and classification, detection of ambiguous objects such as vegetation is in general still considered difficult. In this paper, we investigated the potential for adapting the chopped picture method, a recently described protocol for deep learning, to detect plant communities in Google Earth images. We selected bamboo forests as the target. We obtained Google Earth images from three regions in Japan. By applying a deep convolutional neural network, the model successfully learned the features of bamboo forests in Google Earth images, and the best trained model correctly detected 97% of the targets. Our results show that identification accuracy strongly depends on the image resolution and the quality of the training data. Our results also highlight that deep learning and the chopped picture method can potentially become a powerful tool for high-accuracy automated detection and mapping of vegetation.
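The sketch below illustrates the chopping idea only: a large aerial image is sliced into small tiles, each tile is classified (bamboo / not bamboo), and the grid of predictions forms a coarse vegetation map. The tile size and the stand-in classifier are assumptions, not the exact chopped picture protocol.

```python
import torch
import torch.nn as nn

def chop_and_classify(image, classifier, tile=56):
    # image: (3, H, W) tensor; returns a (H//tile, W//tile) grid of class ids.
    _, H, W = image.shape
    rows = []
    for top in range(0, H - tile + 1, tile):
        row = []
        for left in range(0, W - tile + 1, tile):
            patch = image[:, top:top + tile, left:left + tile].unsqueeze(0)
            row.append(classifier(patch).argmax(dim=1))
        rows.append(torch.cat(row))
    return torch.stack(rows)

classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 56 * 56, 2))  # stand-in model
aerial_image = torch.randn(3, 560, 560)      # placeholder for a Google Earth image
bamboo_map = chop_and_classify(aerial_image, classifier)
```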


Author(s):
Qingsong Wen
Liang Sun
Fan Yang
Xiaomin Song
Jingkun Gao
...

Deep learning has recently performed remarkably well on many time series analysis tasks. The superior performance of deep neural networks relies heavily on a large amount of training data to avoid overfitting. However, labeled data may be limited in many real-world time series applications, such as classification in medical time series and anomaly detection in AIOps. As an effective way to enhance the size and quality of the training data, data augmentation is crucial to the successful application of deep learning models on time series data. In this paper, we systematically review different data augmentation methods for time series. We propose a taxonomy for the reviewed methods, and then provide a structured review of these methods by highlighting their strengths and limitations. We also empirically compare different data augmentation methods for different tasks including time series classification, anomaly detection, and forecasting. Finally, we discuss and highlight five future directions to provide useful research guidance.
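Two of the simplest augmentation families covered by such surveys, jittering and magnitude scaling, can be written in a few lines; the parameter values below are arbitrary and many other transforms (warping, slicing, mixing, generative methods) exist.

```python
import numpy as np

def jitter(x, sigma=0.03):
    """Add Gaussian noise to each time step."""
    return x + np.random.normal(0.0, sigma, size=x.shape)

def magnitude_scale(x, sigma=0.1):
    """Scale the whole series by a random factor drawn around 1."""
    return x * np.random.normal(1.0, sigma)

series = np.sin(np.linspace(0, 10, 200))     # toy univariate series
augmented = [jitter(series), magnitude_scale(series)]
```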


2021
Author(s):
Saniya Zafar
Sobia Jangsher
Arafat Al-Dweik

The deployment of mobile small cells (mScs) is widely adopted to improve the quality of service (QoS) for high-mobility vehicles. However, the rapidly varying interference patterns among densely deployed mScs make resource allocation (RA) highly challenging. In such scenarios, the RA problem needs to be solved nearly in real time, which is a drawback for most existing RA algorithms. To overcome this constraint and solve the RA problem efficiently, we use deep learning (DL) in this work due to its ability to leverage historical data in the RA problem and to handle computationally expensive tasks offline. More specifically, this paper considers the RA problem in a vehicular environment comprising city buses, where DL is explored for optimization of network performance. Simulation results reveal that RA in a network using the Long Short-Term Memory (LSTM) algorithm outperforms other machine learning (ML)- and DL-based RA mechanisms. Moreover, RA using LSTM provides less accurate results than the existing Time Interval Dependent Interference Graph (TIDIG)-based and Threshold Percentage Dependent Interference Graph (TPDIG)-based RA, but shows improved results compared to RA using the Global Positioning System Dependent Interference Graph (GPSDIG). However, the proposed scheme is computationally less expensive than the TIDIG- and TPDIG-based algorithms.
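Everything in the sketch below (features, label structure, number of resource blocks) is an assumption made for illustration; it only shows the general pattern of feeding a history of network measurements to an LSTM and reading off a per-cell allocation decision.

```python
import torch
import torch.nn as nn

num_cells, num_resource_blocks, history = 8, 10, 20
lstm = nn.LSTM(input_size=num_cells, hidden_size=64, batch_first=True)
head = nn.Linear(64, num_cells * num_resource_blocks)

interference_history = torch.randn(1, history, num_cells)   # past measurements (placeholder)
_, (h, _) = lstm(interference_history)
allocation_scores = head(h[-1]).view(num_cells, num_resource_blocks)
assignment = allocation_scores.argmax(dim=1)   # one resource block per cell
```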


Author(s):
C. Ko
J. Kang
G. Sohn

The goal of our paper is to classify tree genera using airborne Light Detection and Ranging (LiDAR) data with a Convolutional Neural Network (CNN) – Multi-task Network (MTN) implementation. Unlike a Single-task Network (STN), where only one task is assigned to the learning outcome, MTN is a deep learning architecture for learning a main task (classification of tree genera) together with other tasks (in our study, classification of coniferous and deciduous) simultaneously, with shared classification features. The main contribution of this paper is to improve classification accuracy from CNN-STN to CNN-MTN. This is achieved by introducing a concurrence loss (L_cd) to the designed MTN. This term regulates the overall network performance by minimizing the inconsistencies between the two tasks. Results show that we can increase the classification accuracy from 88.7% to 91.0% (from STN to MTN). The second goal of this paper is to solve the problem of small training sample size by multiple-view data generation. The motivation of this goal is to address one of the most common problems in implementing deep learning architectures: an insufficient amount of training data. We address this problem by simulating the training dataset with a multiple-view approach. The promising results from this paper provide a basis for classifying larger datasets and more classes in the future.
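As a rough sketch of a multi-task objective with a consistency term between the main head (genus) and the auxiliary head (coniferous vs. deciduous), the code below adds a penalty when the leaf type implied by the predicted genus disagrees with the leaf-type head; the genus-to-leaf mapping, the loss weights, and this particular form of the term are assumptions, and the paper defines the actual L_cd.

```python
import torch
import torch.nn.functional as F

def mtn_loss(genus_logits, leaf_logits, genus_labels, leaf_labels,
             genus_to_leaf, alpha=0.5, beta=0.1):
    loss_main = F.cross_entropy(genus_logits, genus_labels)   # main task: genus
    loss_aux = F.cross_entropy(leaf_logits, leaf_labels)      # auxiliary task: leaf type
    # Concurrence-style term: the leaf type implied by the predicted genus
    # should agree with the leaf-type head's own prediction.
    implied_leaf = genus_to_leaf[genus_logits.argmax(dim=1)]
    loss_cd = F.cross_entropy(leaf_logits, implied_leaf)
    return loss_main + alpha * loss_aux + beta * loss_cd

genus_to_leaf = torch.tensor([0, 0, 1, 1, 1])   # hypothetical: 5 genera -> 2 leaf types
genus_logits = torch.randn(4, 5, requires_grad=True)
leaf_logits = torch.randn(4, 2, requires_grad=True)
loss = mtn_loss(genus_logits, leaf_logits,
                torch.randint(0, 5, (4,)), torch.randint(0, 2, (4,)), genus_to_leaf)
loss.backward()
```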

