A Monocular Vision Obstacle Avoidance Method Applied to Indoor Tracking Robot

Drones ◽  
2021 ◽  
Vol 5 (4) ◽  
pp. 105
Author(s):  
Shubo Wang ◽  
Ling Wang ◽  
Xiongkui He ◽  
Yi Cao

The overall safety of a building can be effectively evaluated through regular inspection of its indoor walls by unmanned ground vehicles (UGVs). However, when a UGV performs line patrol inspections along a specified path, it can easily be affected by obstacles. This paper presents a monocular-vision-based obstacle avoidance strategy for unmanned ground vehicles in indoor environments. From the environmental information acquired in front of the unmanned vehicle, the obstacle orientation is determined, and the moving direction and speed of the mobile robot are set according to the neural network output and its confidence. The paper also adopts a novel camera-array method for collecting indoor environment images, realizing automatic classification of the data sets by arranging cameras with different orientations and focal lengths. In training the transfer neural network, because it is difficult to set the learning-rate factor of the new layer, an improved bat algorithm is used to find the optimal learning-rate factor on a small-sample data set. Simulation results show that the accuracy reaches 94.84%. Single-frame evaluation and continuous obstacle avoidance evaluation verify the effectiveness of the obstacle avoidance algorithm. Experimental results show that an unmanned wheeled robot controlled by the bionic transfer-convolutional neural network can realize autonomous obstacle avoidance in complex indoor scenes.
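As a rough illustration of the hyperparameter-search idea (not the authors' improved bat algorithm), the following is a minimal bat-style search over a single learning-rate factor. The function names and the toy quadratic "validation loss" are our own stand-ins; a real run would evaluate each candidate factor by briefly training the new layer on the small sample set.

```python
import random

def bat_search(loss_fn, bounds, n_bats=10, n_iter=50, seed=0):
    """Simplified bat algorithm: each bat holds a candidate learning-rate
    factor; movement mixes attraction toward the global best with small
    random local walks, and candidates are accepted greedily."""
    rng = random.Random(seed)
    lo, hi = bounds
    bats = [rng.uniform(lo, hi) for _ in range(n_bats)]
    vels = [0.0] * n_bats
    best = min(bats, key=loss_fn)
    for _ in range(n_iter):
        for i in range(n_bats):
            freq = rng.uniform(0.0, 1.0)                 # random pulse frequency
            vels[i] += (bats[i] - best) * freq           # pull toward global best
            cand = bats[i] - vels[i]
            if rng.random() < 0.5:                       # local random walk near best
                cand = best + 0.01 * (hi - lo) * rng.gauss(0, 1)
            cand = min(max(cand, lo), hi)                # keep inside search bounds
            if loss_fn(cand) < loss_fn(bats[i]):         # greedy acceptance
                bats[i] = cand
                if loss_fn(cand) < loss_fn(best):
                    best = cand
    return best

# Toy surrogate loss with its minimum at factor 0.1
loss = lambda f: (f - 0.1) ** 2
best = bat_search(loss, (0.001, 1.0))      # converges close to 0.1
```

The greedy acceptance rule stands in for the loudness/pulse-rate bookkeeping of the full bat algorithm, which the paper presumably refines further.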

Author(s):  
Yimin Chen ◽  
Chuan Hu ◽  
Yechen Qin ◽  
Mingjun Li ◽  
Xiaolin Song

An obstacle avoidance strategy is important to ensure the driving safety of unmanned ground vehicles. In the presence of static and moving obstacles, it is challenging for unmanned ground vehicles to plan and track collision-free paths. This paper proposes an obstacle avoidance strategy consisting of path planning and robust fuzzy output-feedback control. A path planner generates collision-free paths that avoid static and moving obstacles. Quintic polynomial curves are employed for path generation in consideration of computational efficiency and ride comfort. A robust fuzzy output-feedback controller is then designed to track the planned paths. The Takagi–Sugeno (T–S) fuzzy modeling technique is utilized to handle the system variables when forming the vehicle dynamic model, and the robust output-feedback control approach tracks the planned paths without using the lateral velocity signal. The proposed obstacle avoidance strategy is validated in CarSim® simulations. The simulation results show that the unmanned ground vehicle can avoid static and moving obstacles by applying the designed path planning and robust fuzzy output-feedback control approaches.
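To make the quintic-polynomial idea concrete, here is a minimal sketch (not the paper's planner) of a lateral lane-shift path with the usual comfort-friendly boundary conditions: zero slope and zero curvature at both ends. The function name and the 3.5 m / 30 m numbers are illustrative assumptions.

```python
def quintic_lane_shift(y0, yf, sf):
    """Quintic lateral-offset profile y(s) from y0 to yf over arc length sf,
    with zero first and second derivatives at both endpoints, so the path
    joins the straight segments smoothly (continuous heading and curvature)."""
    def y(s):
        t = s / sf
        blend = 10 * t**3 - 15 * t**4 + 6 * t**5   # quintic "smoothstep" blend
        return y0 + (yf - y0) * blend
    return y

path = quintic_lane_shift(0.0, 3.5, 30.0)   # 3.5 m lateral shift over 30 m
print(path(0.0), path(15.0), path(30.0))    # 0.0 1.75 3.5
```

Only six polynomial coefficients need to be computed per candidate path, which is why the quintic form is attractive when the planner must re-evaluate paths online around moving obstacles.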


Machines ◽  
2018 ◽  
Vol 6 (2) ◽  
pp. 18 ◽  
Author(s):  
Marco De Simone ◽  
Zandra Rivera ◽  
Domenico Guida

Author(s):  
Jonathan Lwowski ◽  
Liang Sun ◽  
Roberto Mexquitic-Saavedra ◽  
Rajnikant Sharma ◽  
Daniel Pack

2020 ◽  
Vol 8 (6) ◽  
pp. 2466-2472

Autonomous ground vehicles (AGVs) have started to occupy our day-to-day life. With current technological advancements, AGVs can be programmed to be smart and applied to assist humans in many ways: reducing road accidents, enabling people without driving knowledge to use cars, autonomous patrolling of dangerous zones, and autonomous farming. For an AGV to operate at this level of automation, it must be equipped with sensory perception devices to be aware of its surroundings, and a way to perceive these data is crucial. As a first step, researchers have developed a vast number of efficient camera-vision-based neural network algorithms for detecting and avoiding obstacles. Unfortunately, an AGV cannot rely on computer vision alone, as it suffers from problems such as night driving and erroneous estimation of distance. Camera vision and lidar vision together allow AGVs to operate in all conditions: day, night, and fog. We propose a novel neural network model that transforms lidar sensor data into obstacle avoidance decisions and integrates into the hybrid vision of any AGV. Existing lidar-based obstacle detection and avoidance systems, such as 2D collision cone approaches, are not suitable for real-time applications, as they lag in providing accurate and quick responses, which leads to collisions. The proposed intelligent field-of-view (FOV) mechanism replaces classical mathematical approaches and accurately mimics the behavior of human drivers. The model quickly makes decisions with a high level of accuracy to command the AGV when obstacles obstruct its trajectory, enabling the AGV to drive autonomously in obstacle-rich environments without manual maneuvering.
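To illustrate the kind of input the FOV mechanism consumes, here is a deliberately simplified sketch that splits a forward lidar scan into left/center/right sectors and issues a steering command from sector clearances. The rule-based decision is our stand-in for the paper's learned network; the sector mins are the sort of features such a model would act on.

```python
def fov_decision(scan, safe_dist=1.5):
    """Split a forward 1-D lidar scan (distances in metres, left to right)
    into three FOV sectors and pick a steering command from the minimum
    clearance in each sector. A stand-in for the learned FOV model."""
    n = len(scan)
    left, center, right = scan[:n // 3], scan[n // 3:2 * n // 3], scan[2 * n // 3:]
    if min(center) > safe_dist:
        return "forward"                     # path ahead is clear
    # obstacle ahead: steer toward the sector with more clearance
    return "left" if min(left) > min(right) else "right"

scan = [3.0] * 20 + [0.8] * 20 + [4.0] * 20   # obstacle dead ahead, right is open
print(fov_decision(scan))                     # right
```

The attraction of replacing this kind of hand-tuned rule with a network is that thresholds like `safe_dist` and the sector boundaries are learned from driving behavior rather than fixed.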


Author(s):  
Na Lyu ◽  
Jiaxin Zhou ◽  
Zhuo Chen ◽  
Wu Chen

Because traffic data sets are costly and difficult to acquire and traffic distributions are highly time-sensitive, machine learning-based traffic identification methods are difficult to apply in an airborne network environment. To address this problem, a method for airborne network traffic identification based on a convolutional neural network under small traffic samples is proposed. First, an initial convolutional neural network model is pre-trained on the complete data set in the source domain. The network is then retrained on the incomplete data set in the target domain through a layer-frozen fine-tuning learning algorithm, and a convolutional neural network model based on feature-representation transfer (FRT-CNN) is constructed to realize online traffic identification. Experimental results on an actual airborne network traffic data set show that the proposed method maintains the accuracy of traffic identification under limited traffic samples, and its classification performance is significantly improved compared with existing small-sample learning methods.
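The layer-frozen fine-tuning idea can be shown with a deliberately tiny model (not the paper's CNN): a pretrained first-layer weight is held fixed while only the new output weight is updated by gradient descent on the small target-domain sample. All names and numbers here are illustrative.

```python
def finetune(frozen_w, data, lr=0.1, epochs=100):
    """Layer-frozen fine-tuning sketch: the pretrained feature weight
    `frozen_w` never receives gradient updates; only the new output
    weight w2 is trained on the small target-domain sample."""
    w2 = 0.0
    for _ in range(epochs):
        for x, y in data:
            h = frozen_w * x          # frozen pretrained "feature extractor"
            err = w2 * h - y          # prediction error of the new layer
            w2 -= lr * err * h        # gradient step on the new layer only
    return w2

# Target task y = 6x; with pretrained feature h = 2x the ideal new weight is 3
data = [(x, 6.0 * x) for x in (0.5, 1.0, 1.5)]
print(round(finetune(2.0, data), 3))  # 3.0
```

Freezing the early layers both preserves the source-domain features and shrinks the number of trainable parameters, which is what makes training feasible on the incomplete target-domain set.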


2011 ◽  
Author(s):  
R. Cortland Tompkins ◽  
Yakov Diskin ◽  
Menatoallah M. Youssef ◽  
Vijayan K. Asari

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Bin Zheng ◽  
Tao Huang

To achieve accurate mango grading, a mango grading system was designed using a deep learning method. The system mainly includes CCD camera image acquisition, image preprocessing, model training, and model evaluation. Whereas traditional deep learning requires a large number of samples for neural network training, the proposed convolutional neural network realizes efficient grading of mangoes through continuous adjustment and optimization of the hyperparameters and batch size. The ultra-lightweight SqueezeNet-related algorithm is introduced; compared with AlexNet and other algorithms at the same accuracy level, it has the advantages of small model scale and fast operation speed. Experimental results show that the convolutional neural network model, after hyperparameter optimization and adjustment, performs excellently in deep learning image processing on a small-sample data set. Two hundred thirty-four Jinhuang mangoes from Panzhihua were picked in the natural environment and tested. The analysis results meet the requirements of the agricultural industry standard of the People's Republic of China for mango and mango grade specification. The average accuracy rate was 97.37%, the average error rate was 2.63%, and the average loss value of the model was 0.44. The processing time of an original image with a resolution of 500 × 374 was only 2.57 milliseconds. This method has important theoretical and application value and can provide a powerful means for automatic mango grading.
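The "small model scale" advantage of SqueezeNet comes from its fire modules, which squeeze the channel count with 1×1 convolutions before expanding. A quick parameter count (biases ignored; the channel sizes follow SqueezeNet's published fire2 configuration, not the paper's exact variant) shows the saving against a plain 3×3 convolution:

```python
def fire_params(c_in, s1x1, e1x1, e3x3):
    """Weight count of a SqueezeNet fire module: a 1x1 squeeze layer
    followed by parallel 1x1 and 3x3 expand layers (biases ignored)."""
    squeeze = c_in * s1x1 * 1 * 1
    expand = s1x1 * e1x1 * 1 * 1 + s1x1 * e3x3 * 3 * 3
    return squeeze + expand

def plain_conv_params(c_in, c_out):
    """Weight count of an ordinary 3x3 convolution with the same output width."""
    return c_in * c_out * 3 * 3

# fire2: 96 input channels -> squeeze to 16 -> expand to 64 + 64 channels
print(fire_params(96, 16, 64, 64))   # 11776
print(plain_conv_params(96, 128))    # 110592
```

Roughly a 9× reduction for the same 128-channel output, which is why the grading model stays small and fast enough for millisecond-scale per-image inference.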


2020 ◽  
Vol 8 (6) ◽  
pp. 1766-1771

This paper presents a hardware and software architecture for indoor navigation of unmanned ground vehicles. It discusses the complete process, from taking input from the camera to steering the vehicle in the desired direction. Images taken from a single front-facing camera serve as input. We prepared our own data set of the indoor environment to generate data for training the network. For training, the images are mapped to steering directions: left, right, forward, or reverse. The pre-trained convolutional neural network (CNN) model then predicts the direction to steer in and passes this output to the microprocessor, which in turn controls the motors to traverse in that direction. With a minimal amount of training data and training time, very accurate results were obtained in both simulation and actual hardware testing. After training, the model learned to stay within the boundary of the corridor and identify any immediate obstruction that might come up. The system operates at 2 fps. A MacBook Air was used for training as well as for making real-time predictions.
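The camera-to-motor pipeline described above amounts to an argmax over four direction classes per frame. A minimal sketch, with a mock predictor standing in for the trained CNN (the class order and frame labels are assumptions, not from the paper):

```python
DIRECTIONS = ("left", "right", "forward", "reverse")

def steer(logits):
    """Pick the steering command as the argmax over the CNN's four
    output scores, one per direction class."""
    best = max(range(len(logits)), key=logits.__getitem__)
    return DIRECTIONS[best]

def drive(frames, predict):
    """Per-frame control loop: run the model on each camera frame and
    emit one motor command per frame (the described system runs at 2 fps)."""
    return [steer(predict(frame)) for frame in frames]

# Mock predictor standing in for the trained CNN
mock = lambda frame: {"open_corridor": [0.1, 0.1, 0.9, 0.0],
                      "wall_on_left": [0.0, 0.8, 0.1, 0.1]}[frame]
print(drive(["open_corridor", "wall_on_left"], mock))   # ['forward', 'right']
```

In the real system the commands go to the microprocessor driving the motors rather than being collected in a list.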

