Vision-Based Multirotor Following Using Synthetic Learning Techniques

Sensors ◽  
2019 ◽  
Vol 19 (21) ◽  
pp. 4794
Author(s):  
Alejandro Rodriguez-Ramos ◽  
Adrian Alvarez-Fernandez ◽  
Hriday Bavle ◽  
Pascual Campoy ◽  
Jonathan P. How

Deep- and reinforcement-learning techniques increasingly require large sets of real data to achieve stable convergence and generalization in the context of image recognition, object detection, or motion-control strategies. On this subject, the research community lacks robust approaches to overcome the unavailability of extensive real-world data by means of realistic synthetic information and domain-adaptation techniques. In this work, synthetic-learning strategies were used for the vision-based autonomous following of a noncooperative multirotor. The complete maneuver was learned from synthetic images and high-dimensional, low-level continuous robot states, using deep- and reinforcement-learning techniques for object detection and motion control, respectively. A novel motion-control strategy for object following is introduced in which the camera gimbal movement is coupled with the multirotor motion during following. The results confirm that the presented framework can be used to deploy a vision-based task in real flight using synthetic data. It was extensively validated in both simulated and real-flight scenarios, yielding good results (following a multirotor at up to 1.3 m/s in simulation and 0.3 m/s in real flights).
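The gimbal/body coupling described above can be illustrated with a minimal proportional-control sketch. This is a hypothetical toy, not the paper's learned policy: the gains, image width, and state layout are all assumptions made for illustration.

```python
# Hypothetical sketch of a coupled gimbal/multirotor following controller:
# the gimbal centres the target in the image, and the accumulated gimbal
# pan is fed back as a yaw-rate command so the body follows the target.
def follow_step(target_px, img_w=640, gimbal_pan=0.0,
                k_gimbal=0.002, k_yaw=0.8):
    """One control step. Returns the new gimbal pan (rad, toy units)
    and the yaw-rate command coupled to it."""
    err_px = target_px - img_w / 2          # horizontal pixel error of target
    gimbal_pan += k_gimbal * err_px         # gimbal tracks the target
    yaw_rate = k_yaw * gimbal_pan           # body rotates to unwind the gimbal
    return gimbal_pan, yaw_rate
```

A target far to the right of the frame produces a positive pan and a positive yaw command; a centred target produces no motion, which is the qualitative behaviour the coupling aims for.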

Sensors ◽  
2021 ◽  
Vol 21 (24) ◽  
pp. 8477
Author(s):  
Roozbeh Mohammadi ◽  
Claudio Roncoli

Connected vehicles (CVs) have the potential to collect and share information that, if appropriately processed, can be employed for advanced traffic control strategies, rendering infrastructure-based sensing obsolete. However, before we reach a fully connected environment, where all vehicles are CVs, we have to deal with the challenge of incomplete data. In this paper, we develop data-driven methods for the estimation of vehicles approaching a signalised intersection, based on the availability of partial information stemming from an unknown penetration rate of CVs. In particular, we build machine learning models with the aim of capturing the nonlinear relations between the inputs (CV data) and the output (number of non-connected vehicles), which are characterised by highly complex interactions and may be affected by a large number of factors. We show that, in order to train these models, we may use data that can be easily collected with modern technologies. Moreover, we demonstrate that, if the available real data is not deemed sufficient, training can be performed using synthetic data, produced via microscopic simulations calibrated with real data, without a significant loss of performance. Numerical experiments, where the estimation methods are tested using real vehicle data simulating the presence of various penetration rates of CVs, show very good performance of the estimators, making them promising candidates for applications in the near future.
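The core estimation idea, mapping observed CV counts to the number of non-connected vehicles using a model trained on (possibly synthetic) samples, can be sketched with a deliberately simple stand-in. The linear model and the synthetic generator below are illustrative assumptions, not the paper's machine-learning models.

```python
# Illustrative sketch: fit a least-squares estimator mapping observed CV
# counts to non-connected vehicle counts, trained on synthetic samples
# generated at a known penetration rate (stand-in for microsimulation data).
import random

def make_synthetic(n=200, penetration=0.3, seed=0):
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        total = rng.randint(5, 40)                       # vehicles on approach
        cv = sum(rng.random() < penetration for _ in range(total))
        data.append((cv, total - cv))                    # (observed, target)
    return data

def fit_linear(data):
    # closed-form simple linear regression: y = a*x + b
    n = len(data)
    sx = sum(x for x, _ in data); sy = sum(y for _, y in data)
    sxx = sum(x * x for x, _ in data); sxy = sum(x * y for x, y in data)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

a, b = fit_linear(make_synthetic())
```

In the paper a nonlinear model with richer CV features (positions, speeds, arrival times) plays this role; the point here is only that the estimator can be trained entirely on simulated samples.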


Sensors ◽  
2021 ◽  
Vol 21 (22) ◽  
pp. 7539
Author(s):  
Jungchan Cho

Universal domain adaptation (UDA) is a crucial research topic for efficiently training deep learning models on data from various imaging sensors. However, its development is hindered by the target data being unlabeled. Moreover, the absence of prior knowledge about the source and target domains makes training models under UDA more challenging. I hypothesize that the degradation of trained models in the target domain is caused by the lack of a direct training loss that improves the discriminative power of the target-domain data. As a result, the target data adapted to the source representations are biased toward the source domain. I found that this degradation was more pronounced when I used synthetic data for the source domain and real data for the target domain. In this paper, I propose a UDA method with target-domain contrastive learning. The proposed method enables models to leverage synthetic data for the source domain and to learn discriminative target features in an unsupervised manner. In addition, the target-domain feature extraction network is shared with the source-domain classification task, preventing unnecessary computational growth. Extensive experimental results on VisDA-2017 and MNIST-to-SVHN demonstrate that the proposed method significantly outperforms the baseline, by 2.7% and 5.1%, respectively.
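Unsupervised contrastive learning on target features typically pulls two augmented views of the same image together and pushes other images apart. A minimal NT-Xent-style loss, written here as a numeric sketch rather than the paper's exact formulation, looks like this:

```python
# Minimal sketch of an unsupervised contrastive (NT-Xent style) loss on
# target-domain features. The exact loss, temperature, and networks used
# in the paper are assumptions here.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nt_xent(anchor, positive, negatives, tau=0.5):
    """Pull two views of the same target image together, push others away."""
    pos = math.exp(cosine(anchor, positive) / tau)
    neg = sum(math.exp(cosine(anchor, n) / tau) for n in negatives)
    return -math.log(pos / (pos + neg))
```

The loss is low when anchor and positive align and the negatives do not, which is exactly the discriminative pressure on target features that the abstract argues is missing from plain UDA training.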


Photography used to be a hobby that required equipment such as a professional camera. Today, photography has evolved into a daily activity conducted on an unprecedented scale due to the integration of cameras into smartphones. Mobile phone cameras are on their way to completely replacing other forms of camera thanks to their portability and quality. Millions of clear, crisp images are captured on mobile devices across the globe, but almost all of them are captured in daylight. Images taken in low illumination typically turn out too dark to be comprehensible. Research shows that current solutions to this problem work at dim to moderate light levels but fail in extremely low light. These techniques have certain inherent problems. First, image denoising relies on image priors, limiting the situations in which it works. Other deep learning techniques are trained on synthetic data and do not generalize well to real data. Second, low-light image enhancement assumes that images already contain a good representation of the scene content. This paper proposes capturing low-illumination images and transforming them into high-quality images using an end-to-end fully convolutional neural network trained on our dataset of raw images shot at low aperture paired with their corresponding high-aperture raw images. As an outcome, we will be able to transform images to high quality and identify objects in them.
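A preprocessing step often used in raw low-light pipelines of this kind is packing the Bayer mosaic into four half-resolution colour planes and applying an amplification ratio before the network. This sketch is a hedged illustration of that common step; the black/white levels, ratio, and plane order are assumptions, not this paper's exact pipeline.

```python
# Hedged sketch: pack a 2x2 Bayer mosaic into 4 half-resolution planes,
# normalise by black/white levels, and amplify dark raw values before
# feeding a fully convolutional network (all constants illustrative).
def pack_bayer(raw, ratio=100.0, black_level=512, white_level=16383):
    """raw: 2D list (H x W) of sensor values, H and W even.
    Returns 4 planes of shape (H/2 x W/2), amplified and clipped to [0, 1]."""
    h, w = len(raw), len(raw[0])
    norm = lambda v: max(v - black_level, 0) / (white_level - black_level)
    planes = [[[0.0] * (w // 2) for _ in range(h // 2)] for _ in range(4)]
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            block = (raw[i][j], raw[i][j + 1],
                     raw[i + 1][j], raw[i + 1][j + 1])
            for c, v in enumerate(block):
                planes[c][i // 2][j // 2] = min(norm(v) * ratio, 1.0)
    return planes
```

The amplification ratio is what lifts a severely underexposed raw frame into a range the network can learn from, which is the key difference from enhancing an already-developed dark JPEG.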


Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 7785
Author(s):  
Jun Mao ◽  
Change Zheng ◽  
Jiyan Yin ◽  
Ye Tian ◽  
Wenbin Cui

Training a deep learning-based classification model for early wildfire smoke images requires a large amount of rich data. However, due to the episodic nature of fire events, it is difficult to obtain wildfire smoke image data, and most of the samples in public datasets suffer from a lack of diversity. To address these issues, this paper proposes a method that uses synthetic images to train a deep learning classification model for real wildfire smoke. Firstly, we constructed a synthetic dataset by simulating a large amount of morphologically rich smoke in 3D modeling software and rendering the virtual smoke against many virtual wildland background images with rich environmental diversity. Secondly, to better use the synthetic data to train a wildfire smoke image classifier, we applied both pixel-level and feature-level domain adaptation. A CycleGAN-based pixel-level domain adaptation method was employed for image translation. On top of this, a feature-level domain adaptation method combining ADDA with DeepCORAL was adopted to further reduce the domain shift between the synthetic and real data. The proposed method was evaluated and compared on a test set of real wildfire smoke and achieved an accuracy of 97.39%. The method is applicable to wildfire smoke classification tasks based on single-frame RGB images and would also contribute to training image classification models without sufficient data.
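The DeepCORAL component mentioned above aligns feature distributions by penalising the distance between source and target feature covariances. A small self-contained sketch of that loss (batch shapes and feature dimensions here are illustrative, not the paper's):

```python
# Sketch of a DeepCORAL-style alignment term: squared Frobenius distance
# between the covariance matrices of synthetic-smoke (source) and real-smoke
# (target) feature batches, scaled as in the CORAL formulation.
def covariance(feats):
    n, d = len(feats), len(feats[0])
    mean = [sum(f[k] for f in feats) / n for k in range(d)]
    cov = [[0.0] * d for _ in range(d)]
    for f in feats:
        for a in range(d):
            for b in range(d):
                cov[a][b] += (f[a] - mean[a]) * (f[b] - mean[b]) / (n - 1)
    return cov

def coral_loss(source_feats, target_feats):
    cs, ct = covariance(source_feats), covariance(target_feats)
    d = len(cs)
    return sum((cs[a][b] - ct[a][b]) ** 2
               for a in range(d) for b in range(d)) / (4 * d * d)
```

Minimising this term alongside the classification loss pushes the feature extractor to produce statistics that look the same for synthetic and real smoke, which is the feature-level half of the adaptation pipeline.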


Author(s):  
Natália Souza Soares ◽  
João Marcelo Xavier Natário Teixeira ◽  
Veronica Teichrieb

In this work, we propose a framework to train a robot in a virtual environment using Reinforcement Learning (RL) techniques, thus facilitating the use of this type of approach in robotics. With our integrated solution for virtual training, it is possible to programmatically change the environment parameters, making it easy to implement domain randomization techniques on-the-fly. We conducted experiments with a TurtleBot 2i in an indoor navigation task with static obstacle avoidance using an RL algorithm called Proximal Policy Optimization (PPO). Our results show that even though the training did not use any real data, the trained model was able to generalize to different virtual environments and real-world scenes.
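Programmatic domain randomization boils down to resampling environment parameters before each training episode. The parameter names and ranges below are hypothetical stand-ins for whatever the authors' simulator exposes, not their actual API:

```python
# Illustrative domain-randomization sketch: resample environment parameters
# before every episode so the policy never overfits one rendering/physics
# configuration. All names and ranges are assumptions for illustration.
import random

def randomize_environment(rng):
    return {
        "obstacle_count": rng.randint(2, 10),
        "floor_friction": rng.uniform(0.4, 1.0),
        "light_intensity": rng.uniform(0.2, 1.5),
        "goal_distance_m": rng.uniform(1.0, 8.0),
    }

rng = random.Random(42)
episodes = [randomize_environment(rng) for _ in range(100)]
```

Training PPO across such varied episodes is what lets a policy learned entirely in simulation transfer to unseen virtual environments, and with luck to the real robot.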


Machine learning enables the combination of deep learning algorithms and deep neural networks (DNNs) with reinforcement learning. Reinforcement learning (RL) and deep learning are both areas of AI and efficient tools for building artificially intelligent systems and solving sequential decision-making problems. RL deals with the history of moves; RL problems are often solved by an agent, often denoted A, that makes decisions in a situation to optimize a given objective through accumulated rewards. The ability to structure a large number of attributes makes deep learning an efficient tool for unstructured data. Comparing multiple deep learning algorithms is a major challenge due to the nature of the training process and the narrow scope of the datasets tested in prior algorithmic studies. Our research proposes a framework demonstrating that reinforcement learning techniques, in combination with deep learning techniques, can learn functional representations for sorting problems with high-dimensional unprocessed data. The Faster R-CNN model typically detects objects quickly, saving resources such as computation, processing, and storage. However, object detection techniques still typically require high computational power and large memory and processors, making them hard to run on resource-constrained devices (RCDs) for real-time object detection without an efficient, high-performance machine.


Author(s):  
Farzaneh Shoeleh ◽  
Mohammad Mehdi Yadollahi ◽  
Masoud Asadpour

Abstract: There is an implicit assumption in machine learning techniques that each new task has no relation to the tasks previously learned. Therefore, tasks are often addressed independently. However, in some domains, particularly reinforcement learning (RL), this assumption is often incorrect because tasks in the same or a similar domain tend to be related. In other words, even though tasks differ in their specifics, they may have general similarities, such as shared skills, that make them related. In this paper, a novel domain adaptation-based method using adversarial networks is proposed to perform transfer learning in RL problems. Our proposed method incorporates skills previously learned from a source task to speed up learning on a new target task by providing generalization not only within a task but also across different but related tasks. The experimental results indicate the effectiveness of our method in dealing with RL problems.
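Adversarial domain adaptation of this kind typically trains a discriminator to tell source-task features from target-task features, while the feature (skill) mapping is rewarded for confusing it. A numeric sketch of the discriminator side, with binary cross-entropy as an assumed objective (not necessarily the paper's exact loss):

```python
# Minimal numeric sketch of the adversarial-alignment idea: a domain
# discriminator scores whether a skill feature came from the source (1)
# or target (0) task; its average binary cross-entropy is low when it
# separates the domains and high when the feature mapping has confused it.
import math

def bce(pred, label):
    eps = 1e-7
    pred = min(max(pred, eps), 1 - eps)
    return -(label * math.log(pred) + (1 - label) * math.log(1 - pred))

def discriminator_loss(src_preds, tgt_preds):
    return (sum(bce(p, 1.0) for p in src_preds) +
            sum(bce(p, 0.0) for p in tgt_preds)) / (len(src_preds) + len(tgt_preds))
```

The feature extractor is trained with the opposite objective, so skills from the source task become indistinguishable from, and hence reusable in, the target task.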


2019 ◽  
Author(s):  
Leor M Hackel ◽  
Jeffrey Jordan Berg ◽  
Björn Lindström ◽  
David Amodio

Do habits play a role in our social impressions? To investigate the contribution of habits to the formation of social attitudes, we examined the roles of model-free and model-based reinforcement learning in social interactions—computations linked in past work to habit and planning, respectively. Participants in this study learned about novel individuals in a sequential reinforcement learning paradigm, choosing financial advisors who led them to high- or low-paying stocks. Results indicated that participants relied on both model-based and model-free learning, such that each independently predicted choice during the learning task and self-reported liking in a post-task assessment. Specifically, participants liked advisors who could provide large future rewards as well as advisors who had provided them with large rewards in the past. Moreover, participants varied in their use of model-based and model-free learning strategies, and this individual difference influenced the way in which learning related to self-reported attitudes: among participants who relied more on model-free learning, model-free social learning related more to post-task attitudes. We discuss implications for attitudes, trait impressions, and social behavior, as well as the role of habits in a memory systems model of social cognition.
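The model-based/model-free mixture underlying these analyses is often formalized as a weighted combination of a planned value and a cached value. The toy functions below sketch that idea with illustrative numbers; they are not the authors' fitted computational model.

```python
# Toy sketch of hybrid valuation: an agent's value for an advisor mixes a
# model-free cached value (habit-like, updated from experienced reward) with
# a model-based planned value, weighted by an individual parameter w.
def hybrid_value(q_mf, q_mb, w):
    """w = 1 -> purely model-based (planning); w = 0 -> purely model-free (habit)."""
    return w * q_mb + (1 - w) * q_mf

def td_update(q_mf, reward, alpha=0.1):
    """Model-free cached value updated from an experienced reward."""
    return q_mf + alpha * (reward - q_mf)
```

Fitting w per participant is what lets studies like this ask whether individuals who lean model-free also show more habit-like (model-free) influence on their post-task attitudes.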


Author(s):  
P.L. Nikolaev

This article deals with a method for binary classification of images containing small text. Classification is based on the fact that the text can have two orientations: it can be positioned horizontally and read from left to right, or it can be turned 180 degrees, so that the image must be rotated to read it. This type of text can be found on the covers of a variety of books, so when recognizing covers, it is necessary to first determine the orientation of the text before recognizing it directly. The article proposes the development of a deep neural network for determining the text orientation in the context of book cover recognition. The results of training and testing a convolutional neural network on synthetic data, as well as examples of the network's performance on real data, are presented.
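Synthetic training data for this task can be generated cheaply: take an upright text image and rotate it 180 degrees to produce the second class. A minimal sketch of that labelling step (the rendering and network details of the article are not reproduced here):

```python
# Hedged sketch of synthetic-data generation for text-orientation
# classification: rotating an upright text image by 180 degrees yields
# the second class, so labelled pairs come for free.
def rotate_180(img):
    """img: 2D list of pixel values; returns the image turned upside down."""
    return [row[::-1] for row in img[::-1]]

def make_pair(img):
    # label 0: readable left-to-right; label 1: rotated 180 degrees
    return [(img, 0), (rotate_180(img), 1)]
```

Rotating twice recovers the original image, so the two classes are exact mirrors of one another and the synthetic dataset stays perfectly balanced.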


Author(s):  
Ivan Herreros

This chapter discusses basic concepts from control theory and machine learning to facilitate a formal understanding of animal learning and motor control. It first distinguishes between feedback and feed-forward control strategies, and then introduces the classification of machine learning applications into supervised, unsupervised, and reinforcement learning problems. Next, it links these concepts with their counterparts in the psychology of animal learning, highlighting the analogies between supervised learning and classical conditioning, between reinforcement learning and operant conditioning, and between unsupervised and perceptual learning. Additionally, it interprets innate and acquired actions from the standpoint of feedback versus anticipatory and adaptive control. Finally, it argues that this framework for translating knowledge between formal and biological disciplines can serve not only to structure and advance our understanding of brain function but also to enrich engineering solutions, at the level of robot learning and control, with insights from biology.

