A Highly Effective Deep Learning Based Escape Route Recognition Module for Autonomous Robots in Crisis and Emergency Situations

Author(s):  
Ricardo Buettner ◽  
Hermann Baumgartl


Author(s):  
Riichi Kudo ◽  
Kahoko Takahashi ◽  
Takeru Inoue ◽  
Kohei Mizuno

Various smart connected devices are emerging, such as automated driving cars, autonomous robots, and remote-controlled construction vehicles. These devices have vision systems that let them conduct their operations without collision. Thanks to the great advances in deep learning technologies, machine vision is becoming increasingly capable of perceiving self-position and/or the surrounding environment. The accurate perception information of these smart connected devices makes it possible to predict wireless link quality (LQ). This paper proposes an LQ prediction scheme that applies machine learning to HD camera output to forecast the influence of surrounding mobile objects on LQ. The proposed scheme uses deep-learning-based object detection and learns the relationship between the detected object positions and the LQ. Outdoor experiments show that the proposed scheme accurately predicts throughput roughly 1 s into the future in a 5.6-GHz wireless LAN channel.
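A minimal sketch of the idea, assuming a generic regressor in place of the authors' model: bounding boxes from a per-frame object detector are flattened into a fixed-size feature vector and mapped to the throughput measured about 1 s later. The feature layout, model choice, and toy data below are hypothetical, not the paper's implementation.

```python
# Hypothetical sketch: predict near-future link quality (throughput)
# from detected object positions. Illustrative only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

MAX_OBJECTS = 5  # keep a fixed-size feature vector per frame

def frame_features(detections):
    """Flatten up to MAX_OBJECTS (x, y, w, h) boxes into one vector,
    zero-padding when fewer objects are detected."""
    feats = np.zeros(MAX_OBJECTS * 4)
    for i, (x, y, w, h) in enumerate(detections[:MAX_OBJECTS]):
        feats[i * 4:(i + 1) * 4] = (x, y, w, h)
    return feats

# Toy training data: random "detections" paired with the throughput
# measured roughly 1 s after each frame.
rng = np.random.default_rng(0)
X = np.stack([frame_features(rng.uniform(0, 1, (3, 4))) for _ in range(200)])
y = rng.uniform(0, 100, 200)  # Mbit/s, measured ~1 s later

model = GradientBoostingRegressor().fit(X, y)
print(model.predict(X[:1]))  # predicted near-future throughput
```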


2021 ◽  
Vol 7 ◽  
pp. e551
Author(s):  
Nihad Karim Chowdhury ◽  
Muhammad Ashad Kabir ◽  
Md. Muhtadir Rahman ◽  
Noortaz Rezoana

The goal of this research is to develop and implement a highly effective deep learning model for detecting COVID-19. To achieve this goal, we propose an ensemble of EfficientNet-based Convolutional Neural Networks (CNNs), named ECOVNet, to detect COVID-19 from chest X-rays. To make the proposed model more robust, we use one of the largest open-access chest X-ray data sets, COVIDx, containing three classes: COVID-19, normal, and pneumonia. For feature extraction, we apply an effective CNN structure, EfficientNet, with ImageNet pre-trained weights. The generated features are passed into custom fine-tuned top layers, followed by a set of model snapshots. The predictions of the model snapshots (created during a single training run) are consolidated through two ensemble strategies, hard ensemble and soft ensemble, to enhance classification performance. In addition, a visualization technique is incorporated to highlight the image regions that distinguish the classes, thereby improving the understanding of the features most relevant to COVID-19. Our empirical evaluations show that the proposed ECOVNet model outperforms state-of-the-art approaches and significantly improves detection performance, with 100% recall for COVID-19 and an overall accuracy of 96.07%. We believe that ECOVNet can enhance the detection of COVID-19 and thus underpin a fully automated and efficacious COVID-19 detection system.
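The two ensemble strategies are straightforward to express in code. The sketch below, with random stand-ins for the snapshots' softmax outputs, shows a plausible reading of them: the soft ensemble averages snapshot probabilities before taking the argmax, while the hard ensemble takes a majority vote over each snapshot's predicted class.

```python
# Illustrative sketch of the two ensemble strategies; the random
# probabilities stand in for real EfficientNet snapshot outputs.
import numpy as np

def soft_ensemble(snapshot_probs):
    """snapshot_probs: (n_snapshots, n_samples, n_classes) softmax outputs.
    Average the probabilities, then take the most likely class."""
    return np.mean(snapshot_probs, axis=0).argmax(axis=1)

def hard_ensemble(snapshot_probs):
    """Majority vote over each snapshot's argmax predictions."""
    votes = snapshot_probs.argmax(axis=2)  # (n_snapshots, n_samples)
    return np.apply_along_axis(
        lambda v: np.bincount(v, minlength=snapshot_probs.shape[2]).argmax(),
        axis=0, arr=votes)

# 5 snapshots, 4 X-rays, 3 classes (COVID-19, normal, pneumonia)
probs = np.random.default_rng(1).dirichlet(np.ones(3), size=(5, 4))
print(soft_ensemble(probs), hard_ensemble(probs))
```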


2021 ◽  
pp. 215-243
Author(s):  
M. G. Sarwar Murshed ◽  
James J. Carroll ◽  
Nazar Khan ◽  
Faraz Hussain

2021 ◽  
Vol 11 (14) ◽  
pp. 6340
Author(s):  
Michal Ptaszynski ◽  
Fumito Masui ◽  
Yuuto Fukushima ◽  
Yuuto Oikawa ◽  
Hiroshi Hayakawa ◽  
...  

In this paper, we present a Deep Learning-based system for supporting information triage on Twitter during emergency situations, such as disasters, or other influential events, such as political elections. The system is based on the assumption that different types of information are required right after an event and some time after it occurs. In a preliminary study, we analyze the language behavior of Twitter users during two kinds of influential events, namely natural disasters and political elections. In the study, we analyze the credibility of the information users include in tweets in the above-mentioned situations by classifying it into two kinds: Primary Information (first-hand reports) and Secondary Information (second-hand reports, retweets, etc.). We also perform sentiment analysis of the data to check user attitudes toward the occurring events. Next, we present the structure of the system and compare a number of classifiers, including the proposed one based on Convolutional Neural Networks. Finally, we validate the system by performing an in-depth analysis of information obtained after a number of additional events, including the eruption of the Japanese volcano Ontake on 27 September 2014, as well as heavy rains and typhoons that occurred in 2020. We confirm that the method works sufficiently well even when trained on data from nearly 10 years ago, which strongly suggests that the model generalizes well and grasps the important aspects of each type of classified information.
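As an illustration of the kind of CNN classifier compared in the paper, here is a minimal embedding/convolution/pooling text classifier; the vocabulary size, layer widths, and the two labels (primary vs. secondary information) are placeholders, not the authors' configuration.

```python
# Minimal sketch of a CNN text classifier: token embedding -> 1D
# convolution -> global max-pool -> linear head. Sizes are placeholders.
import torch
import torch.nn as nn

class TweetCNN(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=64, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, 128, kernel_size=3, padding=1)
        self.head = nn.Linear(128, n_classes)

    def forward(self, token_ids):                  # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)  # (batch, embed, seq)
        x = torch.relu(self.conv(x)).max(dim=2).values
        return self.head(x)                        # (batch, n_classes)

model = TweetCNN()
dummy = torch.randint(0, 5000, (8, 40))  # batch of 8 tokenized tweets
print(model(dummy).shape)                # torch.Size([8, 2])
```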


Author(s):  
Komala K. V. ◽  
Deepa V. P.

With the advance of technology and the implementation of the Internet of Things (IoT), the realization of smart cities has become a pressing need. One of the key parts of a cyber-physical system for urban life is transportation. This mission-critical application has spurred researchers in academia and industry to develop autonomous robots. In the domain of autonomous robots, intelligent video analytics is crucial, and with the advent of deep learning, many neural-network-based learning approaches have been considered. Here, the Single Shot Multibox Detector (SSD) method is exploited for real-time video/image analysis on an IoT device, and avoidance of vehicles or other barriers on the road is handled using image processing. The proposed work uses the SSD algorithm for object detection and image processing to control the car based on its current input. Thus, this work aims to develop real-time barrier detection and avoidance for autonomous robots using a camera and a barrier-avoidance sensor in an unstructured environment.
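A hedged sketch of such a pipeline, using torchvision's pretrained SSD as a stand-in for the authors' detector; the corridor-based steering rule is an illustrative placeholder, not the paper's controller.

```python
# Sketch: run a pretrained SSD on a camera frame and steer away from
# obstacles whose boxes fall in the robot's forward corridor.
import torch
from torchvision.models.detection import ssd300_vgg16, SSD300_VGG16_Weights

detector = ssd300_vgg16(weights=SSD300_VGG16_Weights.DEFAULT).eval()

def steer(frame, conf_thresh=0.5):
    """frame: (3, H, W) float tensor in [0, 1]. Returns a steering hint."""
    with torch.no_grad():
        out = detector([frame])[0]
    _, width = frame.shape[1:]
    for box, score in zip(out["boxes"], out["scores"]):
        if score < conf_thresh:
            continue
        cx = (box[0] + box[2]).item() / 2  # box center, x axis
        if width * 0.3 < cx < width * 0.7:  # obstacle dead ahead
            return "left" if cx > width / 2 else "right"
    return "straight"

print(steer(torch.rand(3, 300, 300)))
```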


IEEE Access ◽  
2022 ◽  
pp. 1-1
Author(s):  
Qazi Mohammad Areeb ◽  
Ms. Maryam ◽  
Mohammad Nadeem ◽  
Roobaea Alroobaea ◽  
Faisal Anwer

Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7120
Author(s):  
Sumaira Manzoor ◽  
Sung-Hyeon Joo ◽  
Eun-Jin Kim ◽  
Sang-Hyeon Bae ◽  
Gun-Gyo In ◽  
...  

3D visual recognition is a prerequisite for most autonomous robotic systems operating in the real world. It empowers robots to perform a variety of tasks, such as tracking, understanding the environment, and human–robot interaction. Autonomous robots equipped with 3D recognition capability can better perform their social roles through supportive task assistance in professional jobs and effective domestic services. For active assistance, social robots must recognize their surroundings, including objects and places, to perform their tasks more efficiently. This article first highlights the value-centric role of social robots in society by presenting recently developed robots and describing their main features. Motivated by the recognition capability of social robots, we analyze data representation methods based on sensor modalities for 3D object and place recognition using deep learning models. In this direction, we delineate the research gaps that need to be addressed, summarize 3D recognition datasets, and present performance comparisons. Finally, a discussion of future research directions concludes the article. This survey is intended to show how recent developments in 3D visual recognition based on sensor modalities and deep-learning approaches can lay the groundwork for further research, and to serve as a guide for those interested in vision-based robotics applications.


2021 ◽  
Vol 13 (10) ◽  
pp. 1995
Author(s):  
Pan Xu ◽  
Qingyang Li ◽  
Bo Zhang ◽  
Fan Wu ◽  
Ke Zhao ◽  
...  

Synthetic aperture radar (SAR) satellites produce large quantities of remote sensing images that are unaffected by weather conditions and are, therefore, widely used in marine surveillance. However, because of the latency of satellite-ground communication and the massive quantity of remote sensing images, rapid analysis is not possible, and real-time information for emergency situations is restricted. To solve this problem, this paper proposes an on-board ship detection scheme based on the traditional constant false alarm rate (CFAR) method and lightweight deep learning. This scheme can be used by the SAR satellite's on-board computing platform to achieve near real-time image processing and data transmission. First, we use CFAR to conduct the initial ship detection and then apply the You Only Look Once version 4 (YOLOv4) method to obtain more accurate final results. We built a ground verification system to assess the feasibility of our scheme. With the help of a highly integrated embedded Graphics Processing Unit (GPU), our method achieved 85.9% precision on the experimental data, and the processing time was nearly half that required by traditional methods.
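The two-stage idea can be sketched as follows: a simple cell-averaging CFAR flags candidate pixels, whose surrounding image chips would then be handed to the deep detector (YOLOv4 in the paper; omitted here). The window sizes and threshold factor below are placeholders.

```python
# Illustrative first stage: cell-averaging CFAR on a SAR intensity
# image. Flagged pixels mark candidates for the deep-learning stage.
import numpy as np
from scipy.ndimage import uniform_filter

def ca_cfar(img, guard=2, train=8, scale=3.0):
    """Return a boolean mask of pixels exceeding scale * local clutter mean."""
    total = 2 * (guard + train) + 1
    inner = 2 * guard + 1
    # Training-band mean = (big-window sum - guard-window sum) / cell count
    big = uniform_filter(img, total) * total**2
    small = uniform_filter(img, inner) * inner**2
    clutter = (big - small) / (total**2 - inner**2)
    return img > scale * clutter

sar = np.random.rayleigh(1.0, (256, 256))  # synthetic clutter
sar[100:104, 50:54] += 20                  # synthetic bright "ship"
candidates = np.argwhere(ca_cfar(sar))
print(len(candidates), "candidate pixels for the deep-learning stage")
```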


2001 ◽  
Vol 120 (5) ◽  
pp. A40-A40 ◽  
Author(s):  
S. Miehlke ◽  
P. Heymer ◽  
T. Ochsenkuehn ◽  
E. Baestlein ◽  
G. Yarian ◽  
...  
