M3C: Multimodel-and-Multicue-Based Tracking by Detection of Surrounding Vessels in Maritime Environment for USV

Electronics, 2019, Vol. 8(7), pp. 723
Author(s): Qiao, Liu, Zhang, Zhang, Wu, et al.

It is crucial for unmanned surface vessels (USVs) to detect and track surrounding vessels in real time to avoid collisions at sea. However, the harsh maritime environment poses great challenges to multitarget tracking (MTT). In this paper, a novel tracking-by-detection framework that integrates the multimodel and multicue (M3C) pipeline is proposed, which aims at improving detection and tracking performance. Regarding the multimodel, we predicted the maneuver probability of a target vessel via a gated recurrent unit (GRU) model with an attention mechanism, and fused the respective model outputs as the output of a kinematic filter. In the data association stage, we developed a hybrid affinity model based on multiple cues, such as the motion, appearance, and attitude of the ego vessel. By using the proposed ship re-identification approach, the tracker gained the capability of appearance matching via metric learning. Experimental evaluation on two public maritime datasets showed that our method achieved state-of-the-art performance, not only in identity switches (IDS) but also in frame rate.
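The hybrid affinity idea in this abstract, fusing a motion cue with an appearance cue before data association, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the `iou` and `cosine` helpers, the dictionary layout, and the equal cue weights are all assumptions of ours.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes (motion cue)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def cosine(u, v):
    """Cosine similarity of two embedding vectors (appearance cue)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def hybrid_affinity(track, det, w_motion=0.5, w_app=0.5):
    """Weighted fusion of motion and appearance affinities."""
    return (w_motion * iou(track["box"], det["box"])
            + w_app * cosine(track["emb"], det["emb"]))
```

An affinity matrix built from this score over all track/detection pairs would then feed the assignment step of the data association stage.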


2011, Vol. 45(3), pp. 14-24
Author(s): Hugh J. Roarty, Erick Rivera Lemus, Ethan Handel, Scott M. Glenn, Donald E. Barrick, et al.

Abstract: High-frequency (HF) surface wave radar has been identified as a gap-filling technology for Maritime Domain Awareness. Present SeaSonde HF radars were designed to map surface currents but are also able to track surface vessels in a dual-use mode. Rutgers and CODAR Ocean Sensors, Ltd., have collaborated on the development of vessel detection and tracking capabilities for compact HF radars, demonstrating that ships can be detected and tracked by multistatic HF radar in a multiship environment while ocean currents are simultaneously mapped. Furthermore, the same vessel is seen simultaneously by the radar under different processing parameters, mitigating the need to preselect a fixed parameter set and thereby improving detection performance.



Electronics, 2019, Vol. 8(9), pp. 984
Author(s): Dalei Qiao, Guangzhong Liu, Jun Zhang, Qiangyong Zhang, Gongxing Wu, et al.

The authors wish to make the following corrections to our published paper [...]



2021, Vol. 15
Author(s): Djalal Djarah, Abdallah Meraoumia, Mohamed Lakhdar Louazene

Background: Pedestrian detection and tracking is an important area of study in real-world applications such as mobile robots, human-computer interaction, video surveillance, and pedestrian protection systems. As a result, it has attracted the interest of the scientific community.
Objective: Tracking people is critical for numerous application areas that involve unusual-situation detection, such as vicinity evaluation, changes of direction in human gait, and partial occlusions. Researchers' primary focus is to develop surveillance systems that can work in a dynamic environment, but designing such systems involves major issues and challenges. To this end, this paper presents a comparative evaluation of the tracking-by-detection system on publicly available pedestrian benchmark databases.
Method: Unlike recent works, where person detection and tracking are usually treated separately, our work explores the joint use of the popular Simple Online and Real-time Tracking (SORT) method and relevant visual detectors. Consequently, the choice of the detector is an important factor in the evaluation of system performance.
Results: Experimental results demonstrate that the performance of the tracking-by-detection system is closely related to the optimal selection of the detector, which should therefore be made prior to any rigorous evaluation.
Conclusion: The study demonstrates how sensitive the system performance as a whole is to the difficulty of the dataset. Furthermore, the efficiency of the detector and of the detector-tracker combination also depends on the dataset.
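To make the detector-tracker coupling discussed above concrete, here is a greedy IoU-based association step in the spirit of SORT. This is only a sketch: the real SORT predicts track boxes with a Kalman filter and solves the assignment with the Hungarian algorithm, and the 0.3 threshold below is an assumption of ours.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(a[0], b[0]); y1 = max(a[1], b[1])
    x2 = min(a[2], b[2]); y2 = min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if inter > 0 else 0.0

def associate(tracks, detections, iou_threshold=0.3):
    """Greedily match track boxes to detection boxes by descending IoU.

    Returns (matches, unmatched_track_indices, unmatched_detection_indices).
    """
    pairs = sorted(((iou(t, d), ti, di)
                    for ti, t in enumerate(tracks)
                    for di, d in enumerate(detections)),
                   reverse=True)
    used_t, used_d, matches = set(), set(), []
    for score, ti, di in pairs:
        if score < iou_threshold:
            break  # remaining pairs overlap too little to be the same object
        if ti in used_t or di in used_d:
            continue
        matches.append((ti, di))
        used_t.add(ti); used_d.add(di)
    unmatched_t = [i for i in range(len(tracks)) if i not in used_t]
    unmatched_d = [i for i in range(len(detections)) if i not in used_d]
    return matches, unmatched_t, unmatched_d
```

Unmatched detections would spawn new tracks, and unmatched tracks would age out, which is why the detector's precision and recall dominate overall system performance.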





Author(s): Guilherme Amaral, Hugo Silva, Flavio Lopes, Joao Pedro Ribeiro, Sara Freitas, et al.


2021, Vol. 2021, pp. 1-9
Author(s): Qingfeng Huang, Yage Huang, Zhiwei Zhang, Yujie Zhang, Weijian Mi, et al.

Truck-lifting accidents are common in container-lifting operations. Previously, operation sites needed to assign workers for observation and guidance. However, with the development of automated equipment in container terminals, an automated accident detection method is required to replace manual workers. Considering the development of vision detection and tracking algorithms, this study designed a vision-based truck-lifting prevention system. The system uses a camera to detect and track the movement of the truck wheel hub during the operation to determine whether the truck chassis is being lifted. The hardware is easy to install and has good versatility for most container-lifting equipment. The accident detection algorithm combines convolutional neural network detection, traditional image processing, and a multitarget tracking algorithm to calculate the displacement and posture of the truck during the operation. Experiments show that the measurement accuracy of the system reaches 52 mm and that it can effectively distinguish the trajectories of different wheel hubs, meeting the requirements for detecting lifting accidents.
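The final lifting check described above, deciding from a tracked wheel hub's trajectory whether the chassis is being lifted, could look like the following. This is a hypothetical sketch, not the paper's algorithm: the trajectory format (calibrated (x, y) hub centers in millimetres), the image-coordinate convention, and the threshold are our assumptions; the 52 mm value merely echoes the reported measurement accuracy.

```python
def vertical_displacement(trajectory):
    """Maximum upward movement of a hub center relative to its start (mm).

    trajectory: list of (x, y) positions; image y grows downward, so an
    upward move corresponds to a decreasing y value.
    """
    y0 = trajectory[0][1]
    return max(y0 - y for _, y in trajectory)

def lifting_detected(trajectory, threshold_mm=52.0):
    """Flag a lifting accident when the hub rises beyond the threshold."""
    return vertical_displacement(trajectory) > threshold_mm
```

In practice the threshold would be set above the system's measurement error so that camera noise alone cannot trigger a false alarm.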



2021, Vol. 2
Author(s): Lisette E. van der Zande, Oleksiy Guzhva, T. Bas Rodenburg

Modern welfare definitions not only require that the Five Freedoms are met; animals should also be able to adapt to changes (i.e., resilience) and reach a state that they experience as positive. Measuring resilience is challenging, since relatively subtle changes in animal behavior need to be observed 24/7. Changes in individual activity showed potential in previous studies to reflect resilience. A computer vision (CV) based tracking algorithm for pigs could potentially measure individual activity, which would be more objective and less time-consuming than human observation. The aim of this study was to investigate the potential of state-of-the-art CV algorithms for pig detection and tracking for individual activity monitoring. This study used a tracking-by-detection method, where pigs were first detected using You Only Look Once v3 (YOLOv3), and detections were then linked using the Simple Online Real-time Tracking (SORT) algorithm. Two videos of 7 h each, recorded in barren and enriched environments, were used to test the tracking. Three detection models were proposed using different annotation datasets: a young model, where annotated pigs were younger than in the test video; an older model, where annotated pigs were older than in the test video; and a combined model, where annotations from younger and older pigs were combined. The combined detection model performed best, with a mean average precision (mAP) of over 99.9% in the enriched environment and 99.7% in the barren environment. Intersection over Union (IoU) exceeded 85% in both environments, indicating good accuracy of the detection algorithm. The tracking algorithm performed better in the enriched environment than in the barren environment. When false-positive tracks (i.e., tracks not associated with a pig) were removed, individual pigs were tracked on average for 22.3 min in the barren environment and 57.8 min in the enriched environment.
Thus, based on the proposed tracking-by-detection algorithm, pigs can be tracked automatically in different environments, but manual corrections may be needed to keep track of an individual throughout the video and estimate activity. The individual activity measured with the proposed algorithm could be used as an estimate for measuring resilience.
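The post-processing step mentioned above, removing false-positive tracks before computing average tracking duration, can be sketched as follows. The representation of a track as a list of per-frame entries, the minimum-length cutoff, and the frame rate are our assumptions, not values from the study.

```python
def filter_short_tracks(tracks, min_frames=25):
    """Drop tracks shorter than min_frames (e.g., 1 s at 25 fps),
    treating very short tracks as likely false positives."""
    return [t for t in tracks if len(t) >= min_frames]

def mean_duration_minutes(tracks, fps=25):
    """Average track length in minutes, given per-frame track entries."""
    if not tracks:
        return 0.0
    return sum(len(t) for t in tracks) / len(tracks) / fps / 60.0
```

Applied to the SORT output of each video, such a filter yields the per-environment average tracking durations the study reports.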



2019, Vol. 11(18), pp. 2155
Author(s): Jie Wang, Sandra Simeonova, Mozhdeh Shahbazi

Along with the advancement of lightweight sensing and processing technologies, unmanned aerial vehicles (UAVs) have recently become popular platforms for intelligent traffic monitoring and control. UAV-mounted cameras can capture traffic-flow videos from various perspectives, providing comprehensive insight into road conditions. To analyze traffic flow from remotely captured videos, a reliable and accurate vehicle detection-and-tracking approach is required. In this paper, we propose a deep-learning framework for vehicle detection and tracking from UAV videos for monitoring traffic flow in complex road structures. The approach is designed to be invariant to significant orientation and scale variations in the videos. Detection is performed by fine-tuning a state-of-the-art object detector, You Only Look Once (YOLOv3), using several custom-labeled traffic datasets. Vehicle tracking follows a tracking-by-detection paradigm, where deep appearance features are used for vehicle re-identification and Kalman filtering is used for motion estimation. The proposed methodology is tested on a variety of real videos collected by UAVs under various conditions, e.g., in late afternoons with long vehicle shadows, at dawn with vehicle lights on, over roundabouts and interchange roads where vehicle directions change considerably, and from viewpoints where vehicles' appearance undergoes substantial perspective distortion. The proposed tracking-by-detection approach runs at 11 frames per second on color videos of 2720p resolution. Experiments demonstrated that high detection accuracy could be achieved, with an average F1-score of 92.1%. In addition, the tracking technique performs accurately, with an average multiple-object tracking accuracy (MOTA) of 81.3%. The proposed approach also addresses a shortcoming of the state of the art in multi-object tracking, frequent identity switching, yielding only one identity switch per 305 tracked vehicles.
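The Kalman-filter motion estimation named in this abstract is the standard constant-velocity predict/update cycle. Below is a minimal one-dimensional sketch (a box tracker runs an analogous filter over the full box state); the process and measurement noise values are illustrative assumptions, not the paper's settings.

```python
class Kalman1D:
    """Constant-velocity Kalman filter for one coordinate."""

    def __init__(self, x0, q=1e-2, r=1.0):
        self.x, self.v = x0, 0.0            # position and velocity estimates
        self.p = [[1.0, 0.0], [0.0, 1.0]]   # state covariance
        self.q, self.r = q, r               # process / measurement noise

    def predict(self, dt=1.0):
        """Propagate state one step under the constant-velocity model."""
        self.x += self.v * dt
        p = self.p
        self.p = [[p[0][0] + dt * (p[1][0] + p[0][1]) + dt * dt * p[1][1] + self.q,
                   p[0][1] + dt * p[1][1]],
                  [p[1][0] + dt * p[1][1],
                   p[1][1] + self.q]]
        return self.x

    def update(self, z):
        """Correct the state with a position measurement z."""
        s = self.p[0][0] + self.r                      # innovation covariance
        kx, kv = self.p[0][0] / s, self.p[1][0] / s    # Kalman gain
        resid = z - self.x
        self.x += kx * resid
        self.v += kv * resid
        p = self.p
        self.p = [[(1 - kx) * p[0][0], (1 - kx) * p[0][1]],
                  [p[1][0] - kv * p[0][0], p[1][1] - kv * p[0][1]]]
        return self.x
```

Feeding the filter one detection per frame yields a smoothed position and an implicit velocity estimate, which is what lets the tracker bridge frames where a vehicle is briefly missed by the detector.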


