Analysis and proposal of a novel approach to collision detection and avoidance between moving objects using artificial intelligence

Author(s):  
Seema Rawat ◽  
Zohaib A. Faridi ◽  
Praveen Kumar
2021 ◽  
Vol 13 (2) ◽  
pp. 690
Author(s):  
Tao Wu ◽  
Huiqing Shen ◽  
Jianxin Qin ◽  
Longgang Xiang

Identifying stops from GPS trajectories is one of the main concerns in the study of moving objects and has a major effect on a wide variety of location-based services and applications. Although the spatial and non-spatial characteristics of trajectories have been widely investigated for the identification of stops, few studies have concentrated on the impact of contextual features, which are also connected to the road network and nearby Points of Interest (POIs). In order to obtain more precise stop information from moving objects, this paper proposes and implements a novel approach that represents the spatio-temporal dynamic relationship between stopping behaviors and geospatial elements in order to detect stops. Candidate stops obtained with the standard time–distance threshold approach are integrated with the surrounding environmental elements into a mobility context cube, from which stop features are extracted and stops are precisely derived by classification. The presented methodology is designed to reduce the error rate of stop detection in trajectory data mining. It turns out that 26 features can contribute to recognizing stop behaviors from trajectory data. Additionally, experiments on a real-world trajectory dataset further demonstrate the effectiveness of the proposed approach in improving the accuracy of identifying stops from trajectories.
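As a point of reference for the candidate-stop step mentioned above, the following is a minimal sketch of the standard time–distance threshold (stay-point) rule. It is not the authors' implementation; the function names and threshold values are assumptions for illustration only.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Fix:
    lat: float  # latitude in degrees
    lon: float  # longitude in degrees
    t: float    # timestamp in seconds

def haversine_m(a: Fix, b: Fix) -> float:
    """Great-circle distance between two GPS fixes, in meters."""
    dlat, dlon = radians(b.lat - a.lat), radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(h))

def candidate_stops(traj, dist_m=100.0, dur_s=300.0):
    """Return (start, end) index pairs where the object stays within dist_m
    of an anchor fix for at least dur_s seconds (classic stay-point rule)."""
    stops, i, n = [], 0, len(traj)
    while i < n:
        j = i + 1
        while j < n and haversine_m(traj[i], traj[j]) <= dist_m:
            j += 1
        if traj[j - 1].t - traj[i].t >= dur_s:
            stops.append((i, j - 1))
            i = j          # continue scanning after the detected stop
        else:
            i += 1
    return stops
```

In the proposed approach, spans found this way would then be enriched with road-network and POI context before the final classification step.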


Diagnosis ◽  
2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Taro Shimizu

Abstract Diagnostic errors are an internationally recognized patient safety concern, and their leading causes are faulty data gathering and faulty information processing. Obtaining a full and accurate history from the patient is the foundation for timely and accurate diagnosis. A key concept underlying ideal history acquisition is “history clarification,” meaning that the history is clarified until it can be depicted as clearly as a video, with the chronology accurately reproduced. A novel approach to improving history-taking is presented, involving six dimensions: Courtesy, Control, Compassion, Curiosity, Clear mind, and Concentration, the ‘6 C’s’. We report a case that illustrates how the 6 C’s approach can improve diagnosis, especially in relation to artificial intelligence tools that assist with differential diagnosis.


Sensors ◽  
2021 ◽  
Vol 21 (1) ◽  
pp. 230
Author(s):  
Xiangwei Dang ◽  
Zheng Rong ◽  
Xingdong Liang

Accurate localization and reliable mapping are essential for the autonomous navigation of robots. As one of the core technologies for autonomous navigation, Simultaneous Localization and Mapping (SLAM) has attracted widespread attention in recent decades. Based on vision or LiDAR sensors, great efforts have been devoted to achieving real-time SLAM that can support a robot’s state estimation. However, most mature SLAM methods generally work under the assumption that the environment is static, and in dynamic environments they yield degraded performance or even fail. In this paper, we first quantitatively evaluate the performance of state-of-the-art LiDAR-based SLAM methods, taking into account different patterns of moving objects in the environment. Through semi-physical simulation, we observed that the shape, size, and distribution of moving objects can all significantly impact SLAM performance, and we obtained instructive results from a quantitative comparison between LOAM and LeGO-LOAM. Second, based on this investigation, we propose EMO, a novel approach that eliminates moving objects for SLAM by fusing LiDAR and mmW-radar, aiming to improve the accuracy and robustness of state estimation. The method exploits the complementary characteristics of the two sensors to fuse information at two different resolutions. Moving objects are efficiently detected by the radar via the Doppler effect, accurately segmented and localized by the LiDAR, and then filtered out of the point clouds through data association and accurate synchronization in time and space. Finally, the point clouds representing the static environment are used as the input of SLAM. The proposed approach is evaluated through experiments using both semi-physical simulation and real-world datasets. The results demonstrate the effectiveness of the method in improving SLAM accuracy (at least a 30% reduction in absolute position error) and robustness in dynamic environments.
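To make the fusion idea concrete, below is a minimal sketch of how radar Doppler detections could be used to drop LiDAR points belonging to moving objects before the "static" cloud is passed to SLAM. It deliberately omits the paper's segmentation and time/space synchronization stages; all thresholds, array layouts, and names are assumptions.

```python
import numpy as np

def remove_moving_points(lidar_xyz, radar_xyz, radar_doppler,
                         doppler_thresh=0.5, assoc_radius=1.5):
    """Drop LiDAR points near radar detections with significant radial velocity.

    lidar_xyz     : (N, 3) LiDAR points in a common frame with the radar
    radar_xyz     : (M, 3) radar detection positions
    radar_doppler : (M,)   radial velocities reported by the radar, in m/s
    Returns the LiDAR points assumed to belong to the static environment.
    """
    moving = radar_xyz[np.abs(radar_doppler) > doppler_thresh]
    if moving.shape[0] == 0:
        return lidar_xyz
    # Distance from every LiDAR point to its nearest "moving" radar detection.
    d = np.linalg.norm(lidar_xyz[:, None, :] - moving[None, :, :], axis=2)
    return lidar_xyz[d.min(axis=1) > assoc_radius]
```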


Author(s):  
Heming Yang ◽  
Xinfang Zhang ◽  
Ji Zhou ◽  
Jun Yu

Abstract Collision and interference detection among 3-D moving objects is an important issue in the simulation of their behavior. This paper presents a new model for representing 3-D objects and a corresponding effective algorithm for detecting collisions and interferences among moving objects. Objects are represented for efficient collision and interference detection by a hierarchy of oct-sphere models (HOSM). Algorithms are given for building the HOSM and for detecting collisions and interferences between moving objects. Based on the HOSM, the algorithm checks intersections only between model nodes that lie on the surfaces of the objects. Furthermore, because each HOSM node represents a spherical region, a collision between two nodes can be detected simply by computing the distance between the centers of the corresponding spheres, regardless of how the objects move. Finally, we discuss the efficiency of the algorithm through an example.
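The sphere test the abstract refers to reduces to comparing a center-to-center distance with the sum of radii. Below is a minimal sketch of such a test over a generic bounding-sphere hierarchy; the node attributes (center, radius, children) are assumptions, and the paper's restriction of the test to surface nodes is not reproduced here.

```python
def spheres_intersect(c1, r1, c2, r2) -> bool:
    """Two spheres overlap iff the squared distance between their centers
    does not exceed the squared sum of their radii (avoids a sqrt)."""
    d2 = sum((a - b) ** 2 for a, b in zip(c1, c2))
    return d2 <= (r1 + r2) ** 2

def collide(a, b) -> bool:
    """Recursive test between two sphere-hierarchy nodes; each node is
    assumed to expose .center, .radius and .children (empty for a leaf)."""
    if not spheres_intersect(a.center, a.radius, b.center, b.radius):
        return False
    if not a.children and not b.children:
        return True  # two overlapping leaf spheres => reported collision
    # Descend into the node with the larger sphere to tighten the bound first.
    if a.children and (not b.children or a.radius >= b.radius):
        return any(collide(child, b) for child in a.children)
    return any(collide(a, child) for child in b.children)
```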


Author(s):  
Banu Çalış Uslu ◽  
Seniye Ümit Oktay Fırat

Under uncertainty, understanding and controlling complex environments is only possible with distributed computing, in which devices exchange information so that the system's response to a particular problem can be understood. Transforming raw data from a widely distributed network into meaningful information, and using that knowledge to make rapid decisions, requires a network composed of smart devices. The Internet of Things (IoT) is a novel approach in which these smart devices communicate with each other using key artificial intelligence (AI) technologies in order to make timely autonomous decisions. This emerging technical advancement, together with the realization of horizontal and vertical integration, has led to the fourth stage of industrialization (Industry 4.0). The objective of this chapter is to give detailed information on both IoT based on key AI technologies and Industry 4.0. It is expected to shed light on future work by explaining the new areas that will emerge with this technology.


Author(s):  
Zhaohao Sun ◽  
Andrew Stranieri

Intelligent analytics is an emerging paradigm in the age of big data, analytics, and artificial intelligence (AI). This chapter explores the nature of intelligent analytics. More specifically, it identifies the foundations, cores, and applications of intelligent big data analytics based on an investigation of state-of-the-art scholarly publications and market analyses of advanced analytics. It then presents a workflow-based approach to big data analytics and the technological foundations of intelligent big data analytics, examining the latter as an integration of AI and big data analytics. The chapter also presents a novel approach to extending intelligent big data analytics to intelligent analytics. The proposed approach might facilitate research and development in intelligent analytics, big data analytics, business analytics, business intelligence, AI, and data science.


2018 ◽  
Vol 110 (4) ◽  
pp. e430 ◽  
Author(s):  
A. Tran ◽  
S. Cooke ◽  
P.J. Illingworth ◽  
D.K. Gardner

Author(s):  
Vincent Casser ◽  
Soeren Pirk ◽  
Reza Mahjourian ◽  
Anelia Angelova

Learning to predict scene depth from RGB inputs is a challenging task for both indoor and outdoor robot navigation. In this work we address unsupervised learning of scene depth and robot ego-motion where supervision is provided by monocular videos, as cameras are the cheapest, least restrictive, and most ubiquitous sensors for robotics. Previous work in unsupervised image-to-depth learning has established strong baselines in the domain. We propose a novel approach which produces higher quality results, is able to model moving objects, and is shown to transfer across data domains, e.g. from outdoor to indoor scenes. The main idea is to introduce geometric structure into the learning process by modeling the scene and the individual objects; camera ego-motion and object motions are learned from monocular videos as input. Furthermore, an online refinement method is introduced to adapt learning on the fly to unknown domains. The proposed approach outperforms all state-of-the-art approaches, including those that handle motion, e.g. through learned flow. Our results are comparable in quality to those that used stereo as supervision and significantly improve depth prediction on scenes and datasets that contain significant object motion. The approach is of practical relevance, as it allows transfer across environments: models trained on data collected for robot navigation in urban scenes can be transferred to indoor navigation settings. The code associated with this paper can be found at https://sites.google.com/view/struct2depth.
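The geometric core of this family of methods is view synthesis: a target pixel is back-projected with its predicted depth, moved by the predicted camera (or per-object) motion, and re-projected into the source frame, and the photometric difference at the warped location supervises training. A minimal sketch of that projection for a single pixel is shown below (pinhole model; variable names are assumptions, and this is not the struct2depth code).

```python
import numpy as np

def warp_pixel(u, v, depth, K, T_target_to_source):
    """Back-project target pixel (u, v) with predicted depth, apply the
    predicted relative pose, and re-project into the source image.
    K is the 3x3 camera intrinsics; T_target_to_source is a 4x4 rigid
    transform. Returns the (u', v') sampling location in the source frame."""
    p_cam = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))   # back-project
    p_src = (T_target_to_source @ np.append(p_cam, 1.0))[:3]     # rigid motion
    uvw = K @ p_src                                              # re-project
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

# Example: with identity motion, a pixel maps back onto itself.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
print(warp_pixel(100, 80, depth=5.0, K=K, T_target_to_source=np.eye(4)))
```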

