Ridon Vehicle: Drive-by-Wire System for Scaled Vehicle Platform and Its Application on Behavior Cloning

Energies ◽  
2021 ◽  
Vol 14 (23) ◽  
pp. 8039
Author(s):  
Aws Khalil ◽  
Ahmed Abdelhamed ◽  
Girma Tewolde ◽  
Jaerock Kwon

For autonomous driving research, a scaled vehicle platform is a viable alternative to a full-scale vehicle. However, embedded solutions such as small differential-drive robotic platforms or radio-controlled (RC) car-based platforms can be limiting, for example in the sensor packages they can carry or the computing power they offer. Furthermore, such controllers often demand specialized expertise and skills. To address these problems, this paper proposes a feasible solution, the Ridon vehicle, which is a spacious ride-on automobile with a high-power electric drive and a custom-designed drive-by-wire system powered by a full-scale machine-learning-ready computer. The major objective of this paper is to provide a thorough and appropriate method for constructing a cost-effective platform with a drive-by-wire system and sensor packages so that machine-learning-based algorithms can be tested and deployed on a scaled vehicle. The proposed platform employs a modular and hierarchical software architecture, with microcontroller programs handling the low-level motor controls and a graphics processing unit (GPU)-powered laptop computer processing the higher-level, more sophisticated algorithms. The Ridon vehicle platform is validated by employing it in a deep-learning-based behavioral cloning study. The platform's affordability and adaptability would benefit the broader research and education community.
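To make the low-level/high-level split concrete, below is a minimal sketch of how a GPU-powered laptop might stream drive commands to the motor-control microcontroller over a serial link. The frame format, port name, and value ranges are illustrative assumptions (using the pyserial package), not the paper's published protocol.

```python
# Minimal sketch of a laptop-to-microcontroller drive-by-wire link.
# The ASCII frame format, serial port, and scaling are assumptions
# for illustration; the paper does not publish its wire protocol.
import serial  # pyserial

def send_drive_command(port: serial.Serial, steering: float, throttle: float) -> None:
    """Send one drive command; inputs are normalized to [-1.0, 1.0].

    The microcontroller is assumed to map these values to PWM duty
    cycles for the steering servo and drive motor.
    """
    steering = max(-1.0, min(1.0, steering))
    throttle = max(-1.0, min(1.0, throttle))
    frame = f"S{steering:+.3f} T{throttle:+.3f}\n"  # hypothetical frame layout
    port.write(frame.encode("ascii"))

if __name__ == "__main__":
    # Port name is hypothetical; adjust for the actual microcontroller.
    with serial.Serial("/dev/ttyACM0", 115200, timeout=0.1) as mcu:
        send_drive_command(mcu, steering=0.15, throttle=0.40)  # gentle right, 40% power
```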

Information ◽  
2020 ◽  
Vol 11 (4) ◽  
pp. 193 ◽  
Author(s):  
Sebastian Raschka ◽  
Joshua Patterson ◽  
Corey Nolet

Smarter applications are making better use of the insights gleaned from data, having an impact on every industry and research discipline. At the core of this revolution lie the tools and methods that are driving it, from processing the massive volumes of data generated each day to learning from them and taking useful action. Deep neural networks, along with advancements in classical machine learning and scalable general-purpose graphics processing unit (GPU) computing, have become critical components of artificial intelligence, enabling many of these astounding breakthroughs and lowering the barrier to adoption. Python continues to be the most preferred language for scientific computing, data science, and machine learning, boosting both performance and productivity by combining low-level libraries with clean high-level APIs. This survey offers insight into the field of machine learning with Python, taking a tour through important topics to identify some of the core hardware and software paradigms that have enabled it. We cover widely used libraries and concepts, collected together for holistic comparison, with the goal of educating the reader and driving the field of Python machine learning forward.
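The pattern the survey highlights, a clean high-level Python API delegating numeric work to low-level compiled libraries, can be seen in a few lines; the dataset and model here are illustrative stand-ins.

```python
# A minimal sketch of the high-level-API-over-low-level-library pattern:
# scikit-learn's compact model interface, with NumPy (and its compiled
# BLAS backend) doing the heavy numeric work underneath.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                 # vectorized, runs in compiled code
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic labels for illustration

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)      # clean, high-level API
print(f"test accuracy: {clf.score(X_te, y_te):.3f}")
```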


2020 ◽  
Author(s):  
Vui Huang Tea

The 3rd Generation Partnership Project (3GPP) standard for 5G telecommunications specifies privacy protection schemes that cryptographically encrypt and conceal the permanent identifiers of subscribers, to prevent them from being exposed and tracked by over-the-air eavesdroppers. However, conventional privacy-preserving protocols and architectures alone are insufficient to protect subscriber privacy, as they are vulnerable to new types of attacks enabled by emerging technologies such as artificial intelligence (AI). A conventional brute-force attack to unmask a concealed 5G identity using a CPU would require ~877 million years. This paper presents an apparatus using machine learning (ML) and a graphics processing unit (GPU) that is able to unmask a concealed 5G identity in ~12 minutes with an untrained neural network, or ~0.015 milliseconds with a pre-trained neural network. The concealed 5G identities are effectively identified without requiring decryption, severely diminishing the level of privacy preservation. Finally, several ML defence countermeasures are proposed to re-establish privacy protection for 5G identities.
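The core idea, identification without decryption, amounts to training a classifier to map ciphertext-derived feature vectors back to subscriber labels, so no key is ever recovered. The sketch below illustrates that idea on synthetic data with an invented statistical bias; the paper's actual features, model, and data are not reproduced here.

```python
# Illustrative sketch only: a classifier that re-identifies subscribers
# from concealed-identifier feature vectors without any decryption.
# The data, bias, and model are invented stand-ins for the paper's setup.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)
n_subscribers, samples_each, n_features = 50, 200, 32

# Synthetic stand-in for concealed identifiers: each subscriber's
# concealments share an (assumed) exploitable statistical bias.
bias = rng.normal(size=(n_subscribers, n_features))
X = np.vstack([b + rng.normal(scale=2.0, size=(samples_each, n_features)) for b in bias])
y = np.repeat(np.arange(n_subscribers), samples_each)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300).fit(X_tr, y_tr)
print(f"re-identification accuracy: {clf.score(X_te, y_te):.2f}")
```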


Sensors ◽  
2020 ◽  
Vol 20 (14) ◽  
pp. 3969
Author(s):  
Hongzhi Huang ◽  
Yakun Wu ◽  
Mengqi Yu ◽  
Xuesong Shi ◽  
Fei Qiao ◽  
...  

Visual semantic segmentation, represented by the semantic segmentation network, has been widely used in many fields, such as intelligent robots, security, and autonomous driving. However, these Convolutional Neural Network (CNN)-based networks place high demands on computing resources and hardware programmability. For embedded platforms and terminal devices in particular, Graphics Processing Unit (GPU)-based computing platforms cannot meet these requirements in terms of size and power consumption. In contrast, a Field Programmable Gate Array (FPGA)-based hardware system not only offers flexible programmability and high embeddability, but can also meet lower power-consumption requirements, making it an appropriate solution for semantic segmentation on terminal devices. In this paper, we demonstrate EDSSA—an Encoder-Decoder semantic segmentation network accelerator architecture that can be implemented with flexible parameter configurations and hardware resources on FPGA platforms that support Open Computing Language (OpenCL) development. We introduce the related technologies, architecture design, algorithm optimization, and hardware implementation, using the Encoder-Decoder semantic segmentation network SegNet as an example, and undertake a performance evaluation. On an Intel Arria-10 GX1150 platform, our work achieves a throughput higher than 432.8 GOP/s with a power consumption of about 20 W, a 1.2× improvement in energy-efficiency ratio compared to a high-performance GPU.
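As a quick sanity check on the headline numbers, the energy-efficiency ratio follows directly from the reported throughput and power; the GPU figure below is merely what the 1.2× claim implies, not a measurement quoted in the text.

```python
# Back-of-envelope check of the energy-efficiency claim: only the FPGA
# figures (432.8 GOP/s at ~20 W) and the 1.2x ratio come from the text;
# the GPU efficiency is derived from them, not measured here.
fpga_gops, fpga_watts = 432.8, 20.0
fpga_eff = fpga_gops / fpga_watts          # ~21.6 GOP/s per watt
implied_gpu_eff = fpga_eff / 1.2           # ~18.0 GOP/s per watt
print(f"FPGA energy efficiency:        {fpga_eff:.1f} GOP/s/W")
print(f"implied GPU energy efficiency: {implied_gpu_eff:.1f} GOP/s/W")
```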


Author(s):  
Ram C. Sharma ◽  
Keitarou Hara

This research introduces Genus-Physiognomy-Ecosystem (GPE) mapping at the prefecture level through machine learning of multi-spectral, multi-temporal satellite images at 10 m spatial resolution, with the prefecture-wise maps later integrated to country scale; this staged approach allowed the 88 GPE types to be classified effectively from the large volume of training data involved in the research. The research was made possible by harnessing the entire archive of the Level-2A product: Bottom-of-Atmosphere reflectance images collected by the MultiSpectral Instruments onboard a constellation of two polar-orbiting Sentinel-2 mission satellites. The satellite images were pre-processed for cloud masking, and monthly median composite images consisting of 10 multi-spectral bands and 7 spectral indices were generated. The ground-truth labels were extracted from extant vegetation survey maps by implementing a systematic stratified sampling approach, and noisy labels were dropped to prepare a reliable ground-truth database. A Graphics Processing Unit (GPU) implementation of a Gradient Boosting Decision Trees (GBDT) classifier was employed for classification of the 88 GPE types from 204 satellite features. The classification accuracy, computed with 25% test data, varied from 65% to 81% in terms of F1-score across the 48 prefectural regions. This research produced seamless maps of 88 GPE types for the first time at a country scale, with an average F1-score of 72%.
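A minimal sketch of the classification stage is given below; the paper specifies a GPU implementation of GBDT but not a particular library, so the XGBoost API (and its CUDA device setting) is an assumption, and the data here are synthetic.

```python
# Sketch of the GBDT classification stage: 88 GPE classes from 204
# satellite features, with 25% held out for testing as in the text.
# XGBoost with device="cuda" (XGBoost >= 2.0) is an assumed stand-in
# for the paper's unnamed GPU-based GBDT implementation.
import numpy as np
import xgboost as xgb
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 204))        # 204 satellite features per sample
y = rng.integers(0, 88, size=5000)      # 88 GPE class labels (synthetic here)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = xgb.XGBClassifier(n_estimators=200, tree_method="hist", device="cuda")
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(f"weighted F1-score: {f1_score(y_te, pred, average='weighted'):.3f}")
```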


2019 ◽  
Vol 16 (12) ◽  
pp. 5111-5117
Author(s):  
Anil Kumar Rawat ◽  
Kamal Kumar Sharma ◽  
Sahil Verma

Computer-aided diagnosis enabled by machine learning has revolutionized the way the medical industry operates. Medical imaging provides convenient and hassle-free diagnosis methods for medical treatment and has its roots in all spheres of healthcare. In recent times, the availability of quality digital data in the medical field, along with the convergence of various technological tools, has resulted in exponential growth in many areas, including the medical industry. Deep learning has emerged as a subset of machine learning with automated feature-extraction abilities, ensuring accuracy at par with or higher than that of medical experts. Accuracies of 99.3% and 100% have been achieved in classifying individuals suffering from Alzheimer's disease against normal individuals and against those with mild cognitive impairment, respectively, reinforcing the potential of deep learning tools. With the increasing availability of multi-modal imaging data, churning through it and extracting the key information becomes the priority for automation, and big data techniques perform precisely this task, enabling interpretation-based personalized imaging and the discovery of imaging biomarkers. Finally, these techniques can only be efficient given high-end computing power, and Graphics Processing Unit (GPU)-enabled parallel processing has provided the required platform. However, challenges remain, such as the lack of annotated data, the variety of modalities and their varied sources, variation in class labels, and the uncertainty of the deep learning black box. To address these issues, this paper explores the breadth and depth of medical imaging as an assisting tool, ranging from classification, segmentation, abnormality detection, motion detection, and image reconstruction to pharmacological imaging, including the challenges and the future scope.
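The automated feature extraction credited to deep learning above is commonly realized by reusing a pretrained CNN backbone; the sketch below shows that pattern for a two-class task. The model, input shape, and classes are illustrative, not the specific Alzheimer's models behind the accuracies cited in the text.

```python
# A minimal transfer-learning sketch: a pretrained CNN reused as a fixed
# feature extractor with a new two-class head (e.g., disease vs. normal).
# Model choice and input size are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False                           # freeze learned features
backbone.fc = nn.Linear(backbone.fc.in_features, 2)   # new trainable 2-class head

x = torch.randn(4, 3, 224, 224)    # a batch of preprocessed scans (random here)
logits = backbone(x)
print(logits.shape)                # torch.Size([4, 2])
```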


2008 ◽  
Vol 46 (4) ◽  
pp. 152-159 ◽  
Author(s):  
Byeong-Gyu Nam ◽  
Jeabin Lee ◽  
Kwanho Kim ◽  
Seungjin Lee ◽  
Hoi-Jun Yoo

Author(s):  
Shweta Sharma ◽  
Rama Krishna ◽  
Rakesh Kumar

With the latest developments in technology, the use of smartphones to fulfill day-to-day requirements has increased. Android-based smartphones occupy the largest market share among mobile operating systems. Hackers continuously keep an eye on Android-based smartphones, creating malicious apps housed with ransomware functionality for monetary purposes. After performing a ransomware attack, hackers lock the screen and/or encrypt the documents on the victim's Android-based smartphone. Thus, in this paper, a framework is proposed in which we (1) utilize novel features of Android ransomware, (2) reduce the dimensionality of the features, (3) employ an ensemble learning model to detect Android ransomware, and (4) perform a comparative analysis of the computational time required by machine learning models to detect Android ransomware. The proposed framework can efficiently detect both locker and crypto ransomware. The experimental results reveal that the framework detects Android ransomware with an accuracy of 99.67% using a Random Forest ensemble model. After reducing the dimensionality of the features with the principal component analysis technique, the Logistic Regression model took the least time to execute: 41 milliseconds on the Graphics Processing Unit (GPU) and 50 milliseconds on the Central Processing Unit (CPU).
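The pipeline described, novel features followed by dimensionality reduction and a classifier with a timing comparison, can be sketched as follows; the feature vectors are synthetic stand-ins, and the timing shown is CPU-only (scikit-learn does not run on a GPU), so it illustrates the methodology rather than reproducing the reported numbers.

```python
# Sketch of the detection pipeline: features -> PCA -> classifier,
# plus a prediction-time comparison. Data are synthetic stand-ins for
# the paper's Android ransomware features, which are not reproduced here.
import time
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.random((2000, 300))             # per-app feature vectors (synthetic)
y = rng.integers(0, 2, size=2000)       # 1 = ransomware, 0 = benign (synthetic)

for name, clf in [("RandomForest", RandomForestClassifier(n_estimators=100)),
                  ("LogisticRegression", LogisticRegression(max_iter=1000))]:
    pipe = make_pipeline(PCA(n_components=30), clf).fit(X, y)
    t0 = time.perf_counter()
    pipe.predict(X)
    print(f"{name}: {1000 * (time.perf_counter() - t0):.1f} ms for {len(X)} apps (CPU)")
```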


2020 ◽  
Vol 7 (1) ◽  
pp. 2-3
Author(s):  
Shadi Saleh

Deep learning and machine learning innovations are at the core of the ongoing revolution in artificial intelligence for the interpretation and analysis of multimedia data. The convergence of large-scale datasets and more affordable Graphics Processing Unit (GPU) hardware has enabled the development of neural networks for data analysis problems that were previously handled by traditional handcrafted features. Several deep learning architectures, such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM)/Gated Recurrent Unit (GRU) networks, Deep Belief Networks (DBNs), and Deep Stacking Networks (DSNs), have been used with new open-source software and library options to shape an entirely new scenario in computer vision processing.


2014 ◽  
Vol 490-491 ◽  
pp. 1177-1189 ◽  
Author(s):  
Muralindran Mariappan ◽  
Vigneswaran Ramu ◽  
Brendan Khoo Teng Thiam ◽  
Thayabaren Ganesan ◽  
Manimehala Nadarajan

The Medical Tele-diagnosis Robot (MTR) is a cost-effective telemedicine mobile robot that provides tele-presence capability, allowing a specialist at a remote location to virtually meet the patient, perform diagnostics, and consult the resident doctor about the patient via the internet. This paper highlights the development of a doctor-robot interface through which the doctor or user can control the robot reliably over a regular internet connection from a different location; a distributed, secured network for MTR communication; an audiovisual communication system for tele-diagnosis; and a navigation safety system called the Danger Monitoring System (DMS), all part of the MTR's assistive internet-based remote navigation control system. The overall setup and maintenance cost of the MTR is reduced by adopting a decentralized network via hybrid P2P technology, which distributes the network load among the users. As for the audiovisual system, timely video transmission from the robot to the operator is attained through CUDA H.264 video encoding, which reduces the size of the video stream by taking advantage of the highly parallel processors in the graphics processing unit. A combination of sensors is placed around the robot to provide data on the robot's surroundings during operation. The sensor data are fed into the DMS algorithm, which is equipped with a fuzzy-logic-based artificial intelligence system that processes the data from all the sensors together with the user input to decide on preventative measures, avoiding danger to humans and the robot in terms of obstacle avoidance and robot tilt-angle safety. The overall system was tested in a set of experiments and found to demonstrate acceptable performance, proving it suitable for use in the MTR.
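The DMS's fuzzy-logic decision step can be pictured with a toy rule base; the membership functions, rule, and thresholds below are invented for illustration and are not taken from the MTR implementation.

```python
# Toy sketch of a fuzzy-logic danger assessment like the DMS: fuzzify
# two sensor readings, combine them with a fuzzy OR, and act on the
# result. All shapes and thresholds are illustrative assumptions.
def near(distance_m: float) -> float:
    """Membership of 'obstacle is near' on a 0-2 m ramp."""
    return max(0.0, min(1.0, (2.0 - distance_m) / 2.0))

def steep(tilt_deg: float) -> float:
    """Membership of 'tilt is unsafe' on a 0-15 degree ramp."""
    return max(0.0, min(1.0, tilt_deg / 15.0))

def danger(distance_m: float, tilt_deg: float) -> float:
    """Rule: danger = max(near, steep), i.e., a fuzzy OR of the hazards."""
    return max(near(distance_m), steep(tilt_deg))

level = danger(0.8, 6.0)   # 0.8 m to an obstacle, 6 degrees of tilt
print(f"danger level: {level:.2f} ->", "stop" if level > 0.9 else "slow down")
```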

