Relative Pose Based Redundancy Removal: Collaborative RGB-D Data Transmission in Mobile Visual Sensor Networks

Sensors ◽  
2018 ◽  
Vol 18 (8) ◽  
pp. 2430
Author(s):  
Xiaoqin Wang ◽  
Y. Şekercioğlu ◽  
Tom Drummond ◽  
Vincent Frémont ◽  
Enrico Natalizio ◽  
...  

In this paper, the Relative Pose based Redundancy Removal (RPRR) scheme is presented, designed for mobile RGB-D sensor networks operating in bandwidth-constrained scenarios. The scheme considers a multiview setting in which pairs of sensors observe the same scene from different viewpoints, and detects redundant visual and depth information to prevent its transmission, leading to a significant improvement in wireless channel usage efficiency and to power savings. We envisage applications in which the environment is static and rapid 3D mapping of an enclosed area of interest is required, such as disaster recovery and support operations after earthquakes or industrial accidents. Experimental results show that wireless channel utilization improves by 250% and battery consumption is halved when the RPRR scheme is used instead of sending the sensor images independently.
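The core geometric test behind this kind of redundancy detection can be sketched as follows: pixels of one RGB-D camera are back-projected to 3D, transformed by the relative pose into the second camera's frame, and re-projected; pixels that land inside the second view are candidates for removal. This is a minimal illustration under simplifying assumptions (identical pinhole intrinsics `K`, no occlusion handling, a near-plane tolerance of our choosing), not the paper's RPRR implementation.

```python
import numpy as np

def redundant_mask(depth_a, K, R, t, shape_b, z_min=0.05):
    """Flag pixels of camera A that also fall inside camera B's view.

    depth_a : (H, W) depth map of camera A, in metres
    K       : (3, 3) pinhole intrinsics, assumed shared by both cameras
    R, t    : relative pose of camera B with respect to camera A
    shape_b : (H, W) image size of camera B
    z_min   : hypothetical near-plane; points behind it are kept
    """
    H, W = depth_a.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N

    # Back-project A's pixels to 3-D points, then move them into B's frame.
    pts_a = np.linalg.inv(K) @ pix * depth_a.reshape(1, -1)
    pts_b = R @ pts_a + t.reshape(3, 1)

    # Re-project into B; points behind B's camera are never redundant.
    z_b = pts_b[2]
    valid = z_b > z_min
    proj = K @ pts_b
    u_b = np.where(valid, proj[0] / np.maximum(z_b, 1e-9), -1.0)
    v_b = np.where(valid, proj[1] / np.maximum(z_b, 1e-9), -1.0)

    inside = (u_b >= 0) & (u_b < shape_b[1]) & (v_b >= 0) & (v_b < shape_b[0])
    return (valid & inside).reshape(H, W)
```

For two sensors observing the same static scene from nearby viewpoints the mask is largely `True`, so only the non-redundant pixels would need to be transmitted.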

Sensors ◽  
2020 ◽  
Vol 20 (23) ◽  
pp. 6754
Author(s):  
Ruyan Wang ◽  
Liuwei Tang ◽  
Tong Tang

Visual sensor networks (VSNs) can be widely used in multimedia, security monitoring, network cameras, industrial detection, and other fields. However, with the development of new communication technology and the growing number of camera nodes in VSNs, compressing and transmitting the huge volumes of video and image data these sensors generate has become a major challenge. The next-generation video coding standard, Versatile Video Coding (VVC), can effectively compress visual data, but its higher compression rate comes at the cost of heavy computational complexity. It is therefore vital to reduce the coding complexity for the VVC encoder to be usable in VSNs. In this paper, we propose a sample adaptive offset (SAO) acceleration method for VVC that jointly considers histogram of oriented gradient (HOG) features and depth information, reducing the computational complexity in VSNs. Specifically, the offset mode selection (choosing between band offset (BO) mode and edge offset (EO) mode) is first simplified by utilizing the partition depth of the coding tree unit (CTU). Then, for EO mode, the directional pattern selection is simplified using HOG features and a support vector machine (SVM). Finally, experimental results show that the proposed method saves an average of 67.79% of SAO encoding time with only a 0.52% BD-rate degradation compared to the state-of-the-art method in the VVC reference software (VTM 5.0) for VSNs.
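As an illustration of the two-stage shortcut the abstract describes, the sketch below uses the CTU partition depth to decide between BO and EO, and a magnitude-weighted 4-bin orientation histogram (a minimal HOG) as a stand-in for the paper's HOG + SVM directional classifier. The depth threshold and the bin-to-pattern mapping are hypothetical, not taken from the paper or from VTM.

```python
import numpy as np

DEPTH_THRESHOLD = 1  # hypothetical: shallow partitioning suggests a smooth CTU

def fast_sao_mode(ctu_pixels, partition_depth):
    """Return ('BO', None) or ('EO', dominant_orientation_bin in 0..3).

    Shallow CTU partitioning is taken as evidence of a smooth region,
    where band offset is the likely winner, so the EO search is skipped.
    Otherwise the dominant gradient-orientation bin stands in for the
    SVM-predicted EO directional pattern.
    """
    if partition_depth <= DEPTH_THRESHOLD:
        return ("BO", None)

    gy, gx = np.gradient(ctu_pixels.astype(float))
    angle = np.degrees(np.arctan2(gy, gx)) % 180.0  # orientation, 0..180 deg
    mag = np.hypot(gx, gy)

    # Magnitude-weighted 4-bin orientation histogram (a minimal HOG).
    hist, _ = np.histogram(angle, bins=[0, 45, 90, 135, 180], weights=mag)
    return ("EO", int(np.argmax(hist)))  # dominant bin picks the EO pattern
```

The point of the shortcut is that both tests are far cheaper than rate-distortion-optimized search over all SAO modes and directions, which is where the reported encoding-time saving comes from.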


Author(s):  
Afaf Mosaif, et al.

In recent years, wireless sensor networks have been used in a wide range of applications such as smart cities, military operations, and environmental monitoring. Target tracking is one of the most interesting applications in this area of research; it mainly consists of detecting targets that move in the area of interest and monitoring their motion. However, tracking a target with visual sensors is different from, and more difficult than, tracking with scalar sensors, due to the special characteristics of visual sensors, such as their directional, limited field of view and the nature and volume of the sensed data. In this paper, we first present the challenges of detection and target tracking in wireless visual sensor networks. We then propose a scheme describing the basic steps of target tracking in these networks, and finally focus on tracking across camera nodes by presenting metrics that can be considered when designing and evaluating this type of tracking approach.
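The directional, limited field of view the paper highlights is commonly modeled as a 2D sector: a target is covered only if it is both within sensing range and within the camera's angular field of view. The sketch below illustrates that visibility test; the parameter names are our own, not the paper's.

```python
import math

def in_fov(cam_xy, cam_heading_deg, fov_deg, sensing_range, target_xy):
    """Check whether a target lies in a directional camera node's coverage.

    The node at cam_xy faces cam_heading_deg and covers a sector of
    angular width fov_deg out to sensing_range (illustrative 2-D model).
    """
    dx = target_xy[0] - cam_xy[0]
    dy = target_xy[1] - cam_xy[1]
    if math.hypot(dx, dy) > sensing_range:
        return False  # beyond the node's sensing range

    bearing = math.degrees(math.atan2(dy, dx))
    # Signed angular difference wrapped into (-180, 180].
    diff = (bearing - cam_heading_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= fov_deg / 2.0
```

Tracking across camera nodes builds on this test: when a moving target is about to leave one node's sector, a neighboring node whose sector will contain the predicted position is woken up and takes over.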

