Automating Aircraft Scanning For Inspection With A UAV And Reinforcement Learning Technique

Author(s):  
Yufeng Sun ◽  
Ou Ma

Abstract Visual inspections of the aircraft exterior surface are usually required in routine aircraft maintenance. It has become a trend to use mobile robots equipped with sensors to perform automatic inspections in place of manual inspections, which are time-consuming and error-prone. The sensed data, such as images and point clouds, can be used for further defect characterization, leveraging the power of machine learning and data science. Such a robotic inspection procedure requires a precise digital model of the aircraft for planning the inspection path; however, the original CAD model of the aircraft is often inaccessible to aircraft maintenance shops. Thus, sensors such as 3D laser scanners and RGB-D (Red, Green, Blue, and Depth) cameras are used because they can generate a 3D model of an object of interest efficiently. This paper presents a two-stage approach to automating aircraft scanning with a UAV (Unmanned Aerial Vehicle) equipped with an RGB-D camera, for reconstructing a digital replica of the aircraft when its original CAD model is not available. In the first stage, the UAV-camera system follows a predefined path to quickly scan the aircraft and generate a coarse model of the aircraft. A full-coverage scanning path is then computed based on the coarse model. In the second stage, the UAV-camera system follows the computed path to closely scan the aircraft and generate a dense and precise model of the aircraft. We solved the Coverage Path Planning (CPP) problem for the aircraft scanning using Monte Carlo Tree Search (MCTS), a reinforcement learning technique. We also implemented the Max-Min Ant System (MMAS) strategy, a population-based optimization algorithm, to solve the CPP problem and demonstrate the effectiveness of our approach.

Author(s):  
Jun Long ◽  
Yueyi Luo ◽  
Xiaoyu Zhu ◽  
Entao Luo ◽  
Mingfeng Huang

Abstract With the development of the Internet of Things (IoT) and mobile edge computing (MEC), more and more sensing devices are being deployed throughout the smart city. These sensing devices generate various kinds of tasks, which need to be sent to the cloud for processing. Usually, the sensing devices are not equipped with wireless modules, because that is neither economical nor energy-efficient. Thus, finding a way to offload tasks for sensing devices is a challenging problem. However, many vehicles move around the city, and they can communicate with sensing devices in an effective and low-cost way. In this paper, we propose a computation offloading scheme that uses mobile vehicles in an IoT-edge-cloud network. The sensing devices generate tasks and transmit them to vehicles; each vehicle then decides whether to compute a task locally, on an MEC server, or in the cloud center. The offloading decision is made based on a utility function of the energy consumption and transmission delay, and a deep reinforcement learning technique is adopted to make the decisions. Our proposed method makes full use of existing infrastructure to implement task offloading for sensing devices, and the experimental results show that our solution achieves the maximum reward and decreases delay.
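The abstract describes the decision only as a utility function of energy consumption and transmission delay. A minimal sketch of such a utility comparison is below; all compute rates, link rates, powers, and weights are illustrative assumptions, not values from the paper, and the paper's DRL agent would learn this policy rather than evaluate it directly.

```python
# Hypothetical offloading targets: (compute rate in cycles/s, uplink rate in
# bits/s or None for local execution, transmit power W, local CPU power W).
# All numbers are invented for illustration.
OPTIONS = {
    "vehicle": dict(f=1e9,  r=None, p_tx=0.0, p_cpu=0.9),
    "mec":     dict(f=5e9,  r=5e6,  p_tx=0.1, p_cpu=0.0),
    "cloud":   dict(f=2e10, r=1e6,  p_tx=0.1, p_cpu=0.0),
}

def utility(task_bits, task_cycles, opt, w_energy=0.5, w_delay=0.5):
    """Negative weighted cost of energy consumption and total delay."""
    tx_delay = task_bits / opt["r"] if opt["r"] else 0.0
    cmp_delay = task_cycles / opt["f"]
    energy = opt["p_tx"] * tx_delay + opt["p_cpu"] * cmp_delay
    return -(w_energy * energy + w_delay * (tx_delay + cmp_delay))

def offload(task_bits, task_cycles):
    """Pick the target (local vehicle, MEC, or cloud) with the best utility."""
    return max(OPTIONS, key=lambda k: utility(task_bits, task_cycles, OPTIONS[k]))
```

Under these assumed parameters a light task stays on the vehicle (no transmission cost), while a compute-heavy task is worth shipping to the MEC server despite the uplink delay.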


2015 ◽  
Vol 12 (1) ◽  
pp. 23-28 ◽  
Author(s):  
Adik Yadao ◽  
R. S. Hingole

Today, the car is one of the most important things in everyone's life. Every person wants to own a car, but the question that arises in each buyer's mind is whether the vehicle is safe enough to justify spending so much money; it is therefore the responsibility of the mechanical engineer to make the vehicle comfortable and, at the same time, safer. Nowadays, automakers offer various energy-absorbing devices, such as crush boxes and door beams. These energy-absorbing devices prove very useful in reducing the amount of force transmitted to the occupant. In this work, we use an impact energy absorber more efficiently than in earlier designs. The steps involved in this project start with developing the CAD model of the inner impact energy absorber using the CAD software CATIA V5 R19. Pre-processing is then carried out in HYPERMESH 11.0, which includes assigning the material, properties, and boundary conditions such as contacts and constraints. LS-DYNA971 is used as the solver, LS-POST is used for post-processing, and the results obtained are compared with the standards. It has been observed that a considerable amount of energy is absorbed by this energy-absorbing device. Along with this energy absorption, the intrusion into the passenger compartment is also reduced by a considerable amount. A car with an inner impact energy absorber is therefore one of the best options available for a safer and more comfortable car, and this research work aims to implement it.


Author(s):  
Ali Fakhry

The applications of Deep Q-Networks are seen throughout the field of reinforcement learning, a large subset of machine learning. Using a classic environment from OpenAI, CarRacing-v0, a 2D car-racing environment, alongside a custom modification of the environment, a DQN (Deep Q-Network) was created to solve both the classic and custom environments. The environments are tested using custom-made CNN architectures and by applying transfer learning from ResNet18. While DQNs were state of the art years ago, using one for CarRacing-v0 appears somewhat unappealing and less effective than other reinforcement learning techniques. Overall, while the model did train and the agent learned various parts of the environment, reaching the reward threshold of the environment with this reinforcement learning technique proved problematic and difficult, as other techniques would be more useful.
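The core DQN machinery the abstract relies on (epsilon-greedy exploration, an experience-replay buffer, and a periodically synced target network) can be sketched independently of CarRacing-v0. The toy two-state environment and linear Q-function below are invented stand-ins for the image observations and CNN in the paper; only the training loop structure carries over.

```python
import random
import numpy as np

# Toy deterministic environment standing in for CarRacing-v0: two states,
# two actions, reward 1 for picking the action that differs from the state.
def step(state, action):
    reward = 1.0 if action != state else 0.0
    return (state + 1) % 2, reward

def features(state):
    v = np.zeros(2)
    v[state] = 1.0
    return v

random.seed(0)
rng = np.random.default_rng(0)

# Linear Q-network Q(s, a) = W[a] @ features(s); a CNN replaces this
# for image observations such as CarRacing frames.
W = rng.normal(scale=0.1, size=(2, 2))     # online network
W_target = W.copy()                        # target network
replay, gamma, lr, eps = [], 0.9, 0.1, 0.2

state = 0
for t in range(2000):
    # Epsilon-greedy action selection from the online network.
    q = W @ features(state)
    action = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(q))
    next_state, reward = step(state, action)
    replay.append((state, action, reward, next_state))
    # Minibatch TD update against the frozen target network.
    for s, a, r, s2 in random.sample(replay, min(len(replay), 8)):
        target = r + gamma * np.max(W_target @ features(s2))
        td_err = target - W[a] @ features(s)
        W[a] += lr * td_err * features(s)
    if t % 50 == 0:
        W_target = W.copy()                # periodic target-network sync
    state = next_state

greedy = [int(np.argmax(W @ features(s))) for s in (0, 1)]
```

On this toy problem the greedy policy converges to the rewarding action in each state; the abstract's point is that scaling the same loop to pixel inputs and a sparse racing reward is where DQNs struggle relative to newer techniques.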


IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 2782-2798 ◽  
Author(s):  
Lucileide M. D. Da Silva ◽  
Matheus F. Torquato ◽  
Marcelo A. C. Fernandes

Computers ◽  
2019 ◽  
Vol 8 (1) ◽  
pp. 8 ◽  
Author(s):  
Marcus Lim ◽  
Azween Abdullah ◽  
NZ Jhanjhi ◽  
Mahadevan Supramaniam

Criminal network activities, which are usually secret and stealthy, present certain difficulties in conducting criminal network analysis (CNA) because of the lack of complete datasets. The collection of criminal activity data in these networks tends to be incomplete and inconsistent, which is reflected structurally in the criminal network in the form of missing nodes (actors) and links (relationships). Criminal networks are commonly analyzed using social network analysis (SNA) models. Most machine learning techniques that rely on the metrics of SNA models to develop hidden- or missing-link prediction models utilize supervised learning. However, supervised learning usually requires a large dataset to train the link prediction model in order to achieve an optimum performance level. Therefore, this research explores the application of deep reinforcement learning (DRL) in developing a model for predicting hidden links in a criminal network from the reconstruction of a corrupted criminal network dataset. The experiment conducted on the model indicates that the dataset generated by the DRL model through self-play or self-simulation can be used to train the link prediction model. The DRL link prediction model exhibits better performance than a conventional supervised machine learning technique, such as a gradient boosting machine (GBM) trained with a relatively smaller domain dataset.
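The structural SNA metrics the abstract mentions can be made concrete with a baseline heuristic: score non-adjacent node pairs by their number of common neighbours and nominate the top pair as a hidden link. The toy graph and the "hidden" edge below are invented for illustration; this is the kind of structural baseline, not the paper's DRL model or its GBM comparison.

```python
# Toy "criminal network" with one deliberately removed link (0, 3); edges
# and the hidden link are invented for illustration.
EDGES = {(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4)}
HIDDEN = (0, 3)

def neighbours(edges):
    """Build an undirected adjacency map from an edge set."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return adj

def predict(edges):
    """Rank non-adjacent pairs by common-neighbour count (an SNA metric)."""
    adj = neighbours(edges)
    nodes = sorted(adj)
    candidates = [(u, v) for i, u in enumerate(nodes) for v in nodes[i + 1:]
                  if v not in adj[u]]
    return max(candidates, key=lambda p: len(adj[p[0]] & adj[p[1]]))
```

Here `predict(EDGES)` recovers the removed pair because nodes 0 and 3 share two neighbours; supervised models such as a GBM would consume several such structural scores as features, and the DRL approach in the paper instead generates its own training data by self-simulation.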

