Camera Sensor
Recently Published Documents


TOTAL DOCUMENTS: 320 (last five years: 81)

H-INDEX: 23 (last five years: 3)

Navigation, 2021, Vol. 68 (4), pp. 709-726
Author(s): Adyasha Mohanty, Shubh Gupta, Grace Xingxin Gao

2021
Author(s): Yasser Khalil, Hussein T. Mouftah

The safety and reliability of autonomous driving pivot on the accuracy of the perception and motion prediction pipelines, which in turn depend primarily on the sensors deployed onboard. Small errors in perception and motion prediction can have catastrophic consequences when misinterpreted by downstream pipelines. Researchers have therefore devoted considerable effort to developing accurate perception and motion prediction models. To that end, we propose the LiDAR Camera network (LiCaNet), which leverages multi-modal fusion to further improve the joint perception and motion prediction performance achieved in our earlier work. LiCaNet expands on our previous fusion network by adding a camera image to the fusion of the range-view (RV) image with historical bird's-eye-view (BEV) data, both sourced from a LiDAR sensor. We present a comprehensive evaluation showing that LiCaNet outperforms the state-of-the-art. Experiments reveal that using a camera sensor yields a substantial perception gain over our previous fusion network and a steep reduction in displacement errors. Moreover, most of the improvement falls within camera range, with the largest gains registered for small and distant objects, confirming the value of incorporating a camera sensor into a fusion network.
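The abstract gives only a high-level view of the architecture. As a rough, hypothetical sketch of the kind of multi-modal fusion it describes (not the authors' actual LiCaNet design), a minimal PyTorch model might encode each modality separately and concatenate the features over a shared BEV grid before joint perception and motion heads:

```python
# Minimal sketch of camera/LiDAR feature fusion, loosely in the spirit of
# the abstract above. Module names, channel counts, and the simple
# concatenation-based fusion are illustrative assumptions, not LiCaNet.
import torch
import torch.nn as nn

class SimpleFusionNet(nn.Module):
    def __init__(self, bev_channels: int, rv_channels: int, cam_channels: int):
        super().__init__()
        # One small encoder per modality (hypothetical; the abstract does
        # not specify the encoders).
        self.bev_enc = nn.Conv2d(bev_channels, 64, kernel_size=3, padding=1)
        self.rv_enc = nn.Conv2d(rv_channels, 64, kernel_size=3, padding=1)
        self.cam_enc = nn.Conv2d(cam_channels, 64, kernel_size=3, padding=1)
        # Joint head producing per-cell class scores and 2D motion offsets.
        self.head = nn.Conv2d(3 * 64, 64, kernel_size=3, padding=1)
        self.cls = nn.Conv2d(64, 4, kernel_size=1)     # e.g. 4 object classes
        self.motion = nn.Conv2d(64, 2, kernel_size=1)  # (dx, dy) displacement

    def forward(self, bev, rv_proj, cam_proj):
        # Assumes the RV image and camera features were already projected
        # into the BEV grid, so all three tensors share spatial dimensions.
        feats = torch.cat(
            [self.bev_enc(bev), self.rv_enc(rv_proj), self.cam_enc(cam_proj)],
            dim=1,
        )
        feats = torch.relu(self.head(feats))
        return self.cls(feats), self.motion(feats)

# Usage: 5 historical BEV sweeps plus projected RV and camera features.
net = SimpleFusionNet(bev_channels=5, rv_channels=3, cam_channels=3)
cls_map, motion_map = net(torch.rand(1, 5, 128, 128),
                          torch.rand(1, 3, 128, 128),
                          torch.rand(1, 3, 128, 128))
```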



Forests, 2021, Vol. 12 (9), pp. 1142
Author(s): Songyu Li, Håkan Lideskog

Research highlights: An automatic localization system for ground obstacles on harvested forest land, built on existing mature hardware and software architecture, has been successfully implemented. In the tested area, 98% of objects were detected and could on average be positioned within 0.33 m of their true positions across the full 1–10 m range from the camera sensor. Background and objectives: Forestry operations take place in challenging terrain; detecting and localizing objects in complex forest environments demands considerable patience and energy from operators. Reliable automatic real-time detection and localization of terrain objects not only reduces the operator's burden but is essential for automating harvesting and logging tasks. We set out to implement a system prototype that can automatically locate ground obstacles on harvested forest land using accessible hardware and common software infrastructure. Materials and methods: An automatic object detection and localization system based on stereo camera sensing is described and evaluated in this paper. The system detects objects of interest with the YOLO (You Only Look Once) object detection algorithm and derives their positions in 3D space from the stereo data. System performance is evaluated by comparing the automatic detection results against manual labeling and positioning. Results: The system detects and locates stumps and large stones with high reliability and shows good potential for practical application. Overall, object detection on the test tracks was 98% successful, and localization errors averaged 0.33 m across the full 1–10 m range from the camera sensor. Conclusions: The results indicate that object detection and localization can support better operator assessment of the surroundings, as well as serve as input for controlling machines and equipment for object avoidance or targeting.
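As a rough illustration of the localization step described above, the sketch below back-projects a detection's bounding-box center into 3D camera coordinates using the standard pinhole stereo model; the calibration values and pixel coordinates are hypothetical, not taken from the paper:

```python
# Minimal sketch of recovering a 3D object position from a stereo pair.
# The focal length, baseline, and example coordinates are illustrative
# assumptions, not the system's actual calibration.
import numpy as np

def box_center_to_3d(u, v, disparity, fx, fy, cx, cy, baseline):
    """Back-project a detection's pixel center into camera coordinates.

    u, v      : pixel coordinates of the bounding-box center
    disparity : stereo disparity at that point (pixels)
    fx, fy    : focal lengths in pixels; cx, cy: principal point
    baseline  : distance between the stereo cameras (meters)
    """
    z = fx * baseline / disparity   # depth from disparity
    x = (u - cx) * z / fx           # lateral offset
    y = (v - cy) * z / fy           # vertical offset
    return np.array([x, y, z])

# Example: a stump detected at pixel (700, 420) with 64 px disparity,
# using hypothetical calibration values.
pos = box_center_to_3d(u=700, v=420, disparity=64.0,
                       fx=800.0, fy=800.0, cx=640.0, cy=360.0, baseline=0.12)
print(pos)  # position in meters, here [0.1125, 0.1125, 1.5]
```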

