sTetro-Deep Learning Powered Staircase Cleaning and Maintenance Reconfigurable Robot

Sensors ◽  
2021 ◽  
Vol 21 (18) ◽  
pp. 6279
Author(s):  
Balakrishnan Ramalingam ◽  
Rajesh Elara Mohan ◽  
Selvasundari Balakrishnan ◽  
Karthikeyan Elangovan ◽  
Braulio Félix Gómez ◽  
...  

Staircase cleaning is a crucial and time-consuming task in the maintenance of multistory apartments and commercial buildings. Many autonomous cleaning robots are commercially available for building maintenance, but few are designed for staircase cleaning. A key challenge in automating staircase-cleaning robots is the design of an Environmental Perception System (EPS), which helps the robot detect and navigate staircases. The system also recognizes obstacles and debris for safe navigation and efficient cleaning while climbing the staircase. This work proposes an operational framework leveraging a vision-based EPS for the modular reconfigurable maintenance robot sTetro. The proposed system uses an SSD MobileNet real-time object detection model to recognize staircases, obstacles, and debris. Furthermore, the model filters out false staircase detections by fusing depth information through a MobileNet and an SVM. The system uses a contour detection algorithm to localize the first step of the staircase and a depth clustering scheme for obstacle and debris localization. The framework has been deployed on the sTetro robot using NVIDIA's Jetson Nano hardware and tested on multistory staircases. The experimental results show that the entire framework takes an average of 310 ms to run and achieves 94.32% accuracy for staircase recognition and 93.81% accuracy for obstacle and debris detection during real operation of the robot.
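The abstract does not specify how the depth clustering scheme groups depth readings into obstacle and debris candidates. A minimal one-dimensional sketch is shown below; the function name and the 0.15 m gap threshold are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

def cluster_by_depth(depths, gap=0.15):
    """Group depth readings (metres) into clusters separated by gaps
    larger than `gap`; each cluster is a candidate obstacle/debris blob."""
    sorted_d = np.sort(np.asarray(depths, dtype=float))
    clusters, current = [], [sorted_d[0]]
    for d in sorted_d[1:]:
        if d - current[-1] <= gap:
            current.append(d)      # same cluster: small depth gap
        else:
            clusters.append(current)
            current = [d]          # large gap: start a new cluster
    clusters.append(current)
    return clusters
```

In a real system the clustering would run over per-pixel depth within each detected bounding box, but the gap-based grouping idea is the same.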

2019 ◽  
Vol 2019 ◽  
pp. 1-9 ◽  
Author(s):  
Hai Wang ◽  
Xinyu Lou ◽  
Yingfeng Cai ◽  
Yicheng Li ◽  
Long Chen

Vehicle detection is one of the most important environment perception tasks for autonomous vehicles. Traditional vision-based vehicle detection methods are not accurate enough, especially for small and occluded targets, while light detection and ranging (lidar) based methods detect obstacles well but are time-consuming and have a low classification rate for different target types. To address these shortcomings and make full use of the depth information from lidar and the obstacle classification ability of vision, this work proposes a real-time vehicle detection algorithm that fuses vision and lidar point cloud information. Firstly, obstacles are detected by the grid projection method using the lidar point cloud. Then, the obstacles are mapped onto the image to obtain several separate regions of interest (ROIs). After that, the ROIs are expanded based on a dynamic threshold and merged to generate the final ROI. Finally, a deep learning method named You Only Look Once (YOLO) is applied to the ROI to detect vehicles. Experimental results on the KITTI dataset demonstrate that the proposed algorithm has high detection accuracy and good real-time performance. Compared with detection based on YOLO alone, the mean average precision (mAP) is increased by 17%.
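The ROI expansion and merging steps can be sketched as plain bounding-box operations. This is a simplified illustration (a fixed margin stands in for the paper's dynamic threshold, and the greedy merge is an assumption about the merging rule):

```python
def expand_roi(box, margin, img_w, img_h):
    """Grow a (x1, y1, x2, y2) box by `margin` pixels, clipped to the image."""
    x1, y1, x2, y2 = box
    return (max(0, x1 - margin), max(0, y1 - margin),
            min(img_w, x2 + margin), min(img_h, y2 + margin))

def overlaps(a, b):
    """True if two (x1, y1, x2, y2) boxes intersect."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def merge_rois(boxes):
    """Greedily merge overlapping boxes into enclosing ROIs."""
    merged = []
    for box in boxes:
        for i, m in enumerate(merged):
            if overlaps(box, m):
                merged[i] = (min(box[0], m[0]), min(box[1], m[1]),
                             max(box[2], m[2]), max(box[3], m[3]))
                break
        else:
            merged.append(box)
    return merged
```

A single-pass greedy merge is not transitively closed; a production version would iterate until no boxes overlap before handing the final ROI to the YOLO detector.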


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Yiran Feng ◽  
Xueheng Tao ◽  
Eung-Joo Lee

In view of the current absence of any deep learning algorithm for shellfish identification in real contexts, an improved Faster R-CNN-based detection algorithm is proposed in this paper. It achieves multi-object recognition and localization through a second-order detection network and replaces the original feature extraction module with DenseNet, which can fuse multilevel feature information, increase network depth, and avoid vanishing gradients. Meanwhile, the proposal merging strategy is improved with Soft-NMS, in which an attenuation function replaces the conventional NMS algorithm, thereby avoiding missed detections of adjacent or overlapping objects and enhancing detection accuracy in multi-object scenes. By constructing a real-context shellfish dataset and conducting experimental tests on a vision-based seafood sorting robot production line, we were able to detect shellfish in different scenarios, and the detection accuracy was improved by nearly 4% compared to the original detection model. This provides favorable technical support for future quality sorting of seafood using the improved Faster R-CNN-based approach.
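The attenuation idea behind Soft-NMS can be shown concretely: instead of deleting proposals that overlap a higher-scoring box, their scores are decayed by a Gaussian of the overlap. The sketch below uses the standard Gaussian decay (the paper's exact attenuation function and parameters are not given in the abstract, so `sigma` and `thresh` are illustrative):

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def soft_nms(boxes, scores, sigma=0.5, thresh=0.001):
    """Gaussian Soft-NMS: decay overlapping scores instead of discarding."""
    boxes = [list(b) for b in boxes]
    scores = list(scores)
    keep = []
    while boxes:
        best = max(range(len(scores)), key=scores.__getitem__)
        b, s = boxes.pop(best), scores.pop(best)
        keep.append((tuple(b), s))
        # decay remaining scores by exp(-iou^2 / sigma)
        scores = [sc * np.exp(-iou(b, bx) ** 2 / sigma)
                  for sc, bx in zip(scores, boxes)]
        kept = [(bx, sc) for bx, sc in zip(boxes, scores) if sc >= thresh]
        boxes = [bx for bx, _ in kept]
        scores = [sc for _, sc in kept]
    return keep
```

An adjacent shellfish whose box heavily overlaps a detected one keeps a reduced score rather than being suppressed outright, which is what prevents the missed detections of overlapping objects mentioned above.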


2021 ◽  
Vol 23 (06) ◽  
pp. 1546-1553
Author(s):  
Impana N ◽  
K J Bhoomika ◽  
Suraksha S S ◽  
Karan Sawhney ◽  
...  

Keratoconus is a non-inflammatory corneal disease characterized by progressive thinning, scarring, and deformation of the cornea. In India, there has been a significant increase in the number of keratoconus cases, and several research centers have focused on this disease in recent years. In this situation, there is an immediate need for tools that simplify both diagnosis and treatment[1]. The developed algorithm can decide whether an eye is normal or keratoconic, including the disease stage. The K-net model analyzes Pentacam images of the eye using a convolutional neural network (CNN), a deep learning model, and a comparative analysis of its accuracy is performed against the pre-trained ResNet-50 and InceptionV3 models. The results show that the keratoconus detection algorithm performs well, with 93.75 percent accuracy on the test dataset. The keratoconus detection model is a program that can help ophthalmologists test their patients faster, thereby reducing diagnostic errors and facilitating treatment.


Drones ◽  
2021 ◽  
Vol 5 (2) ◽  
pp. 28
Author(s):  
Joan Y. Q. Li ◽  
Stephanie Duce ◽  
Karen E. Joyce ◽  
Wei Xiang

Sea cucumbers (Holothuroidea or holothurians) are a valuable fishery and are also crucial nutrient recyclers, bioturbation agents, and hosts for many biotic associates. Their ecological impacts could be substantial given their high abundance in some reef locations, and thus monitoring their populations and spatial distribution is of research interest. Traditional in situ surveys are laborious and only cover small areas, but drones offer an opportunity to scale observations more broadly, especially if the holothurians can be automatically detected in drone imagery using deep learning algorithms. We adapted the object detection algorithm YOLOv3 to detect holothurians from drone imagery at Hideaway Bay, Queensland, Australia. We successfully detected 11,462 of 12,956 individuals over 2.7 ha, with an average density of 0.5 individuals/m2. We tested a range of hyperparameters to determine the optimal detector performance and achieved 0.855 mAP, 0.82 precision, 0.83 recall, and 0.82 F1 score. We found that as few as ten labelled drone images were sufficient to train an acceptable detection model (0.799 mAP). Our results illustrate the potential of using small, affordable drones with direct implementation of open-source object detection models to survey holothurians and other shallow water sessile species.


2021 ◽  
Vol 12 ◽  
Author(s):  
Yiding Wang ◽  
Yuxin Qin ◽  
Jiali Cui

Counting the number of wheat ears in images taken under natural light is an important way to evaluate crop yield and is therefore of great significance to modern intelligent agriculture. However, wheat ears are densely distributed, so occlusion and overlap appear in almost every wheat image. Traditional image processing methods struggle with occlusion because they lack high-level semantic features, while existing deep-learning-based counting methods do not handle occlusion efficiently. This article proposes an improved EfficientDet-D0 object detection model for wheat ear counting, with a focus on occlusion. First, transfer learning is employed in pre-training the model's backbone network to extract high-level semantic features of wheat ears. Secondly, an image augmentation method, Random-Cutout, is proposed, in which rectangles are selected and erased according to the number and size of the wheat ears in the images to simulate occlusion in real wheat images. Finally, a convolutional block attention module (CBAM) is added to the EfficientDet-D0 model after the backbone, which makes the model refine its features, pay more attention to the wheat ears, and suppress useless background information. Extensive experiments in which these features are fed to the detection layer show that the counting accuracy of the improved EfficientDet-D0 model reaches 94%, about 2% higher than the original model, with a false detection rate of 5.8%, the lowest among the compared methods.
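The Random-Cutout idea described above (erasing rectangles sized according to the labelled wheat ears to simulate occlusion) can be sketched as a simple array operation. This is an illustrative reimplementation, not the paper's code; the default of three cutouts and the zero fill value are assumptions:

```python
import numpy as np

def random_cutout(image, boxes, n_cuts=3, rng=None):
    """Erase n_cuts rectangles sized like the average labelled box
    to simulate occlusion between wheat ears."""
    rng = rng or np.random.default_rng(0)
    out = image.copy()
    h, w = image.shape[:2]
    # the average labelled ear size drives the cutout size
    avg_w = int(np.mean([b[2] - b[0] for b in boxes]))
    avg_h = int(np.mean([b[3] - b[1] for b in boxes]))
    for _ in range(n_cuts):
        x = rng.integers(0, max(1, w - avg_w))
        y = rng.integers(0, max(1, h - avg_h))
        out[y:y + avg_h, x:x + avg_w] = 0  # erase the patch
    return out
```

Sizing the erased patches from the ear annotations is what distinguishes this from generic Cutout: the occluders match the scale of the objects being counted.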


2021 ◽  
Vol 11 (17) ◽  
pp. 8210
Author(s):  
Chaeyoung Lee ◽  
Hyomin Kim ◽  
Sejong Oh ◽  
Illchul Doo

This research produced a deep-learning-based model that detects abnormal phenomena on the road and proposes a service that can prevent accidents caused by other cars and traffic congestion. After extracting accident images from traffic accident video data using FFmpeg, car collision types are classified, and only head-on collision types are processed using the deep learning object-detection algorithm YOLO (You Only Look Once). Using the car accident detection model that we built and the provided road obstacle-detection model, we programmed the system so that, when the model detects abnormalities on the road, a warning notification and photos capturing the accidents or obstacles are transferred to the application. The proposed service was verified through application notification simulations and virtual experiments using CCTVs in Daegu, Busan, and Gwangju. By providing these services, the goal is to improve traffic safety and support the development of the self-driving vehicle sector. As a future research direction, introducing an efficient CCTV control system for the transportation environment is suggested.
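The FFmpeg frame-extraction step can be wrapped in a short helper. The file names, output pattern, and sampling rate below are hypothetical; only the `-i` and `-vf fps=` options are standard FFmpeg usage:

```python
import subprocess

def ffmpeg_extract_frames_cmd(video_path, out_pattern, fps=1):
    """Build an FFmpeg command that samples `fps` frames per second
    from an accident video into numbered image files."""
    return ["ffmpeg", "-i", video_path, "-vf", f"fps={fps}", out_pattern]

def extract_frames(video_path, out_pattern, fps=1):
    """Run the extraction; the frames then feed the YOLO training set."""
    subprocess.run(ffmpeg_extract_frames_cmd(video_path, out_pattern, fps),
                   check=True)
```

For example, `extract_frames("crash.mp4", "frames/%04d.jpg", fps=2)` would write two frames per second of video to the `frames/` directory.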


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Limin Qi ◽  
Yong Han

To address the serious loss of detail and low detection definition of traditional human motion posture detection algorithms, a human motion posture detection algorithm using deep reinforcement learning is proposed. Firstly, the perception ability of deep learning is used to match human motion feature points and obtain human motion posture features. Secondly, the human motion image is normalized, the color histogram distribution of the motion posture is taken as the antigen, regions close to the motion posture are searched for in the image, and the candidate regions are taken as antibodies. By calculating the affinity between the antigen and the antibodies, feature extraction of the human motion posture is realized. Finally, using the training characteristics of the deep learning and reinforcement learning networks, the change information of the human motion posture is obtained, and the detection algorithm is realized. The results show that at an image resolution of 384 × 256 px, the motion pose contour detection accuracy of this algorithm is 87%. For a 30 MB image, the recognition time is only 0.8 s. At 500 iterations, the capture rate of human motion posture details reaches 98.5%. This shows that the proposed algorithm improves the definition of human motion posture contours, improves the detail capture rate, reduces the loss of detail, and has better effect and performance.
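The antigen/antibody affinity between color histograms can be computed in several ways; histogram intersection is a common choice, and the sketch below uses it purely for illustration (the abstract does not state which affinity measure the authors use, and the 8-bin setting is an assumption):

```python
import numpy as np

def color_histogram(region, bins=8):
    """Per-channel colour histogram of an HxWx3 region, normalised to sum to 1."""
    hist = np.concatenate([
        np.histogram(region[..., c], bins=bins, range=(0, 256))[0]
        for c in range(region.shape[-1])
    ]).astype(float)
    return hist / hist.sum()

def affinity(antigen, antibody, bins=8):
    """Histogram-intersection affinity between the antigen (posture
    histogram) and a candidate antibody region: 1.0 means identical."""
    return float(np.minimum(color_histogram(antigen, bins),
                            color_histogram(antibody, bins)).sum())
```

Candidate regions with high affinity to the posture histogram would be retained as matches during feature extraction.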


2020 ◽  
pp. 1-12
Author(s):  
Hu Jingchao ◽  
Haiying Zhang

The difficulty in classroom student state recognition lies in making feature judgments based on students' facial expressions and movement states. At present, some intelligent models are not accurate enough at this task. To improve recognition, this study builds a two-level state detection framework based on deep learning and an HMM feature recognition algorithm, and expands it into a multi-level detection model through a reasonable state classification method. In addition, this study selects a continuous HMM or deep learning to reflect the dynamic generation characteristics of fatigue, and designs randomized human fatigue recognition experiments to collect and preprocess EEG data, facial video data, and subjective evaluation data of classroom students. The study then discretizes the feature indicators and builds a student state recognition model. Finally, the performance of the proposed algorithm is analyzed through experiments. The results show that the proposed algorithm has certain advantages over traditional algorithms in recognizing classroom student state features.
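The HMM stage of such a framework decodes a sequence of hidden states (for example, alert vs. fatigued) from discretized observations. A minimal Viterbi decoder is sketched below; the two-state setup and all probabilities are illustrative, not taken from the study:

```python
import numpy as np

def viterbi(obs, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence for discrete observations `obs`,
    given start, transition, and emission probability arrays."""
    v = start_p * emit_p[:, obs[0]]       # initial state scores
    back = []
    for o in obs[1:]:
        scores = v[:, None] * trans_p     # scores[from, to]
        back.append(scores.argmax(axis=0))
        v = scores.max(axis=0) * emit_p[:, o]
    path = [int(v.argmax())]
    for ptr in reversed(back):            # trace the best path backwards
        path.append(int(ptr[path[-1]]))
    return path[::-1]
```

With state 0 = alert and state 1 = fatigued, feeding in a run of "attentive" observations followed by a "drowsy" one yields a state sequence that switches only when the evidence does.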

