SAR object classification implementation for embedded platforms

Author(s): Chris Capraro ◽ Uttam Kumar Majumder ◽ Josh Siddall ◽ Eric K. Davis ◽ Dan Brown ◽ ...

1999
Author(s): Kimberly Coombs ◽ Debra Freel ◽ Douglas Lampert ◽ Steven Brahm

2021 ◽ Vol 3 (5)
Author(s): João Gaspar Ramôa ◽ Vasco Lopes ◽ Luís A. Alexandre ◽ S. Mogo

Abstract: In this paper, we propose three methods for door state classification with the goal of improving robot navigation in indoor spaces. These methods were also developed to be usable in other areas and applications, since they are not limited to door detection as other related works are. Our methods work offline and in real time on low-powered computers such as the Jetson Nano, with the ability to differentiate between open, closed and semi-open doors. We use the 3D object classification network PointNet; real-time semantic segmentation algorithms such as FastFCN, FC-HarDNet, SegNet and BiSeNet; the object detection algorithm DetectNet; and the 2D object classification networks AlexNet and GoogLeNet. We built a 3D and RGB door dataset with images from several indoor environments using an Intel RealSense D435 camera. This dataset is freely available online. All methods are analysed in terms of their accuracy and their speed on a low-powered computer. We conclude that it is possible to run a door classification algorithm in real time on a low-power device.
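
As a rough illustration of the 2D classification branch mentioned in this abstract, the sketch below fine-tunes a pretrained AlexNet for the three door states in PyTorch. The dataset path, folder layout, and training hyperparameters are assumptions for illustration, not the authors' actual pipeline; the 3D (PointNet) and segmentation branches are omitted.

```python
# Hypothetical sketch: fine-tuning a pretrained AlexNet for three door
# states (open / semi-open / closed). Dataset path, image size, and
# training settings are assumptions, not the authors' setup.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 3  # open, semi-open, closed

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Assumed folder layout: door_dataset/train/<class_name>/*.jpg
train_set = datasets.ImageFolder("door_dataset/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
model.classifier[6] = nn.Linear(4096, NUM_CLASSES)  # replace the 1000-way head

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

A model fine-tuned this way can then be exported (e.g. via TorchScript) and run on a Jetson Nano-class device for real-time inference.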


Sensors ◽ 2021 ◽ Vol 21 (4) ◽ pp. 1461
Author(s): Shun-Hsin Yu ◽ Jen-Shuo Chang ◽ Chia-Hung Dylan Tsai

This paper proposes an object classification method using a flexion glove and machine learning. The classification is performed based on the information obtained from a single grasp on a target object. The flexion glove is developed with five flex sensors mounted on five finger sleeves, and is used for measuring the flexion of individual fingers while grasping an object. Flexion signals are divided into three phases: picking, holding and releasing. Grasping features are extracted from the holding phase for training the support vector machine. Two sets of objects are prepared for the classification test: a printed-object set and a daily-life object set. The printed-object set is for investigating grasping patterns for objects of specified shape and size, while the daily-life object set includes nine objects randomly chosen from daily life to demonstrate that the proposed method can identify a wide range of objects. According to the results, classification accuracies of 95.56% and 88.89% are achieved for the printed-object and daily-life object sets, respectively. A flexion glove that can perform object classification is successfully developed in this work and is aimed at potential grasp-to-see applications, such as visual impairment aids and recognition in dark spaces.
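
The classification step described here (holding-phase features from five flex-sensor channels fed to an SVM) can be sketched as below. The choice of features (mean and standard deviation of flexion per finger), the array shapes, and the toy data are assumptions for illustration, not the authors' exact pipeline.

```python
# Hypothetical sketch of the grasp-classification step: features are taken
# from the holding phase of five flex-sensor channels and fed to an SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def holding_phase_features(signals, start, end):
    """signals: (n_time_steps, 5) flexion readings for one grasp;
    start/end: indices bounding the holding phase (assumed known)."""
    hold = signals[start:end]                      # keep only the holding phase
    return np.concatenate([hold.mean(axis=0),      # mean flexion per finger
                           hold.std(axis=0)])      # variation per finger

# Toy data: 90 grasps, 200 time steps, 5 fingers, 9 object classes.
rng = np.random.default_rng(0)
grasps = rng.random((90, 200, 5))
labels = rng.integers(0, 9, size=90)

X = np.stack([holding_phase_features(g, 60, 140) for g in grasps])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.2)

clf = SVC(kernel="rbf")            # support vector machine, as in the paper
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```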


2021 ◽ Vol 11 (15) ◽ pp. 7148
Author(s): Bedada Endale ◽ Abera Tullu ◽ Hayoung Shi ◽ Beom-Soo Kang

Unmanned aerial vehicles (UAVs) are being widely utilized for various missions in both civilian and military sectors. Many of these missions require UAVs to perceive and understand the environments they are navigating. This perception can be realized by training a computing machine to classify objects in the environment. One of the well-known machine training approaches is supervised deep learning, which enables a machine to classify objects. However, supervised deep learning comes at a considerable cost in time and computational resources: collecting large amounts of input data, pre-training processes such as labeling the training data, and the need for a high-performance computer for training are some of the challenges it poses. To address these setbacks, this study proposes mission-specific input data augmentation techniques and a light-weight deep neural network architecture capable of real-time object classification. Semi-direct visual odometry (SVO) data of augmented images are used to train the network for object classification. Ten classes with 10,000 different images per class were used as input data, of which 80% were used for training the network and the remaining 20% for validation. For the optimization of the designed deep neural network, a sequential gradient descent algorithm was implemented, which has the advantage of handling redundancy in the data more efficiently than other algorithms.
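
As a sketch of the kind of setup this abstract describes, the snippet below defines a small CNN for 10-class classification and updates its weights sequentially, one example at a time, which is one reading of "sequential gradient descent". The layer sizes, input resolution, learning rate, and dummy data are assumptions; the paper's actual architecture and SVO-based data pipeline are not reproduced here.

```python
# Hypothetical sketch: a light-weight CNN for 10-class object classification
# trained with sequential (sample-by-sample) gradient descent updates.
import torch
import torch.nn as nn

class LightweightCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = LightweightCNN()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

# Sequential updates: weights are adjusted after every single example,
# which tends to handle redundant data more efficiently than full-batch steps.
dummy_stream = [(torch.randn(1, 3, 64, 64), torch.randint(0, 10, (1,)))
                for _ in range(8)]
for image, label in dummy_stream:
    optimizer.zero_grad()
    loss = criterion(model(image), label)
    loss.backward()
    optimizer.step()
```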


Sensors ◽ 2021 ◽ Vol 21 (9) ◽ pp. 3240
Author(s): Tehreem Syed ◽ Vijay Kakani ◽ Xuenan Cui ◽ Hakil Kim

In recent times, the usage of modern neuromorphic hardware for brain-inspired SNNs has grown exponentially. For sparse input data, they offer low power consumption on event-based neuromorphic hardware, specifically in the deeper layers. However, training spiking models through deep ANNs is still considered a tedious task. Various ANN-to-SNN conversion methods have been proposed in the literature to train deep SNN models; nevertheless, these methods require hundreds to thousands of time-steps for training and still cannot attain good SNN performance. This work proposes customized model (VGG, ResNet) architectures to train deep convolutional spiking neural networks. In this study, the training is carried out using deep convolutional spiking neural networks with surrogate gradient descent backpropagation in a customized layer architecture similar to deep artificial neural networks. Moreover, this work also proposes using fewer time-steps for training SNNs with surrogate gradient descent. Since overfitting problems were encountered during training with surrogate gradient descent backpropagation, this work refines the SNN-based dropout technique used with surrogate gradient descent. The proposed customized SNN models achieve good classification results on both private and public datasets. Several experiments have been carried out on an embedded platform (NVIDIA Jetson TX2 board), where the deployment of the customized SNN models has been extensively conducted. Performance validations have been carried out in terms of processing time and inference accuracy between PC and embedded platforms, showing that the proposed customized models and training techniques are feasible for achieving better performance on various datasets such as CIFAR-10, MNIST, SVHN, KITTI, and a private Korean license plate dataset.
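
The core idea named in this abstract, surrogate gradient descent for spiking layers, can be sketched as follows: the forward pass uses a hard spike threshold, while the backward pass substitutes a smooth surrogate derivative, and dropout is applied to the spike trains to counter overfitting. The threshold value, surrogate shape, leak factor, and time-step count below are illustrative assumptions, not the paper's exact configuration.

```python
# Hypothetical sketch of surrogate-gradient training for a spiking conv layer.
import torch
import torch.nn as nn

THRESHOLD = 1.0  # assumed firing threshold

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, membrane):
        ctx.save_for_backward(membrane)
        return (membrane >= THRESHOLD).float()        # binary spike output

    @staticmethod
    def backward(ctx, grad_output):
        membrane, = ctx.saved_tensors
        # Fast-sigmoid-style surrogate derivative around the threshold.
        surrogate = 1.0 / (1.0 + 10.0 * (membrane - THRESHOLD).abs()) ** 2
        return grad_output * surrogate

spike = SurrogateSpike.apply

class SpikingConvLayer(nn.Module):
    """Leaky integration over time-steps with dropout on the spike trains."""
    def __init__(self, in_ch, out_ch, leak=0.9, p_drop=0.2):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.drop = nn.Dropout(p_drop)
        self.leak = leak

    def forward(self, x_steps):                        # x_steps: (T, N, C, H, W)
        mem, outputs = 0.0, []
        for x in x_steps:                              # iterate over time-steps
            mem = self.leak * mem + self.conv(x)       # leaky membrane update
            s = self.drop(spike(mem))                  # spike + SNN-style dropout
            mem = mem * (1.0 - s.detach())             # reset where it spiked
            outputs.append(s)
        return torch.stack(outputs)

layer = SpikingConvLayer(3, 8)
out = layer(torch.randn(4, 2, 3, 32, 32))              # 4 time-steps, batch of 2
print(out.shape)
```

Stacking such layers into VGG- or ResNet-like blocks and backpropagating through the unrolled time-steps is the general pattern the abstract refers to; the exported model can then be benchmarked on a Jetson TX2 against a PC for processing time and accuracy.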

