video sequences
Recently Published Documents


TOTAL DOCUMENTS: 2187 (FIVE YEARS: 313)

H-INDEX: 53 (FIVE YEARS: 6)

PLoS ONE ◽  
2022 ◽  
Vol 17 (1) ◽  
pp. e0261248
Author(s):  
Aurelia Schütz ◽  
Katharina Kurz ◽  
Gesa Busch

Apart from improving husbandry conditions and animal welfare, there is a clear public demand to increase transparency in agricultural activities. Personal farm tours have been shown to be appreciated by citizens but are limited in their impact by hygiene requirements and accessibility. Virtual farm tours are a promising approach to overcome these limitations, but evidence on how they are perceived is missing. This study analyzes how a virtual farm tour is perceived by showing participants (n = 17) a 360-degree video of a conventional pig fattening pen on a tablet and via virtual reality (VR) glasses. Semi-structured in-depth interviews were conducted to analyze perceptions and level of immersion and to elicit differences between media devices. Participants' perception of the pig fattening pen was rather poor and depended on the recording perspective as well as on the media device. However, housing conditions were perceived more positively than the image participants had in mind prior to the study, and the stable was thus considered a rather positive example. Participants described virtual farm tours as a suitable tool to improve transparency and information transfer and to gain insights into husbandry conditions. They appreciated the comfortable and entertaining character of both media devices and named various possibilities for implementation. The VR glasses were favored for their greater realism and entertainment value, while the tablet was considered beneficial in terms of usability. The presentation of video sequences without additional explanations about the farm or the housing conditions was considered insufficient to convey an adequate understanding of the content shown.


2022 ◽  
Vol 11 (1) ◽  
pp. 177-186
Author(s):  
Ashwith A ◽  
Azra Nasreen ◽  
Shobha G ◽  
Sitharama Iyengar ◽  
Anurag Sethuram

2021 ◽  
Vol 14 (1) ◽  
pp. 87
Author(s):  
Yeping Peng ◽  
Zhen Tang ◽  
Genping Zhao ◽  
Guangzhong Cao ◽  
Chao Wu

Unmanned air vehicle (UAV) based imaging has become an attractive technology for monitoring wind turbine blades (WTBs). In such applications, image motion blur is a challenging problem, which makes motion deblurring of great significance in the monitoring of running WTBs. A major obstacle for these applications, however, is the lack of sufficient WTB images, in particular matched pairs of sharp and blurred images captured under the same conditions for network model training. To overcome this challenge of image pair acquisition, a training sample synthesis method is proposed. Sharp images of static WTBs were first captured, and video sequences were then recorded of WTBs running at different speeds. Blurred images were identified in the video sequences and matched to the sharp images using image difference. To expand the sample dataset, rotational motion blurs were simulated on different WTBs, and synthetic image pairs were produced by fusing sharp images with images of simulated blurs. In total, 4000 image pairs were obtained. For motion deblurring, a hybrid deblurring network integrating DeblurGAN and DeblurGANv2 was deployed. The results show that the integration of DeblurGANv2 and Inception-ResNet-v2 provides better deblurred images, in terms of both signal-to-noise ratio (80.138) and structural similarity (0.950), than the comparable DeblurGAN and MobileNet-DeblurGANv2 networks.
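The blurred-sample synthesis step described above can be sketched as follows. This is a minimal illustration assuming a simple linear motion-blur kernel applied to a sharp image, not the authors' rotational-blur pipeline; the function names are hypothetical.

```python
import numpy as np

def motion_blur_kernel(length, angle_deg):
    """Build a normalized linear motion-blur kernel of a given length and angle."""
    k = np.zeros((length, length), dtype=np.float64)
    c = (length - 1) / 2.0
    theta = np.deg2rad(angle_deg)
    dx, dy = np.cos(theta), np.sin(theta)
    # rasterize a line segment through the kernel center
    for t in np.linspace(-c, c, length * 4):
        x = int(round(c + t * dx))
        y = int(round(c + t * dy))
        if 0 <= x < length and 0 <= y < length:
            k[y, x] = 1.0
    return k / k.sum()

def synthesize_blurred(sharp, length=9, angle_deg=0.0):
    """Convolve a sharp grayscale image with the kernel to fake motion blur."""
    k = motion_blur_kernel(length, angle_deg)
    pad = length // 2
    padded = np.pad(sharp, pad, mode="edge")
    out = np.zeros_like(sharp, dtype=np.float64)
    for i in range(sharp.shape[0]):
        for j in range(sharp.shape[1]):
            out[i, j] = np.sum(padded[i:i + length, j:j + length] * k)
    return out
```

Each synthetic pair is then simply `(sharp, synthesize_blurred(sharp))`; because the kernel is normalized, flat regions keep their intensity while edges are smeared along the motion direction.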


2021 ◽  
pp. 1-15
Author(s):  
V. Muhammed Anees ◽  
G. Santhosh Kumar

Crowd behaviour analysis and management has become a significant research problem over the last few years because of the substantial growth in the world population and the resulting security requirements. Numerous open problems in this area, such as crowd flow modelling and crowd behaviour detection, continue to attract attention from the research community. Crowd flow modelling is one such problem and an integral part of an intelligent surveillance system: real-time analysis of crowd behaviour needs accurate models that represent crowded scenarios. An intelligent surveillance system supported by a good crowd flow model will help identify risks in a wide range of emergencies and facilitate human safety. Mathematical models of crowd flow developed from real-time video sequences enable further analysis and decision making. This paper presents a novel method for identifying eight crowd flow behaviours commonly seen in crowd video sequences. The proposed method localises crowd flow using the Gunnar Farnebäck optical flow method. Jacobian and Hessian matrix analysis, together with the corresponding eigenvalues, identifies stability points that characterise the flow patterns. This work is evaluated on 80 videos taken from the UCF crowd and CUHK video datasets. Comparison with existing works from the literature shows that our method yields better results.
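The eigenvalue analysis mentioned above can be sketched in a few lines: the eigenvalues of the flow field's Jacobian at a critical point distinguish the classic stability types (node, saddle, focus, center) that underlie flow-pattern labels such as converging, diverging, or circulating crowds. This is a minimal illustration of the standard phase-plane classification, not the paper's full eight-behaviour scheme; the function name is hypothetical.

```python
import numpy as np

def classify_flow_point(J):
    """Classify a critical point of a 2-D flow from the eigenvalues of its
    2x2 Jacobian J: node, saddle, focus (spiral), or center."""
    eig = np.linalg.eigvals(np.asarray(J, dtype=np.float64))
    re, im = eig.real, eig.imag
    if np.allclose(im, 0.0):
        # real eigenvalues: node or saddle
        if re[0] * re[1] < 0:
            return "saddle"
        return "unstable node" if re.max() > 0 else "stable node"
    if np.allclose(re, 0.0):
        return "center"  # pure rotation, e.g. a circulating crowd
    return "unstable focus" if re.max() > 0 else "stable focus"
```

For example, a Jacobian of `[[1, 0], [0, 1]]` (flow pointing away from the point) classifies as an unstable node, matching a diverging crowd, while `[[0, -1], [1, 0]]` classifies as a center, matching circular motion.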


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Ran Li ◽  
Peinan Hao ◽  
Fengyuan Sun ◽  
Yanling Li ◽  
Lei You

With the increasing demand for internet of things (IoT) applications, machine-type video communications have become an indispensable means of communication, changing the way we live and work. In machine-type video communications, the quality and delay of video transmission must be guaranteed to satisfy the requirements of communication devices under limited resources. It is therefore necessary to reduce the transmission burden by dropping frames at the video sender and then to restore the frame rate of the transmitted video at the receiver. In this paper, based on a pretrained network, we propose a frame rate up-conversion (FRUC) algorithm to guarantee low-latency video transmission in machine-type video communications. At the IoT node, the video sequences are significantly compressed by periodically discarding video frames. At the IoT cloud, a pretrained network is used to extract feature layers of the transmitted video frames, which are fused into the bidirectional matching to produce the motion vectors (MVs) of the dropped frames; according to the output MVs, motion-compensated interpolation is implemented to recover the original frame rate of the video sequence. Experimental results show that the proposed FRUC algorithm effectively improves both the objective and subjective quality of the transmitted video sequences.
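The matching-plus-interpolation idea above can be sketched with a deliberately simplified model: a single global motion vector found by exhaustive search, and a middle frame reconstructed by averaging the two half-shifted neighbours. This is an assumption-laden toy version (global integer motion, wraparound shifts via `np.roll`), not the paper's feature-fused bidirectional matching; the function names are hypothetical.

```python
import numpy as np

def global_motion_vector(prev, nxt, search=4):
    """Exhaustive search for the integer shift that best maps prev onto nxt."""
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(prev, dy, axis=0), dx, axis=1)
            err = np.mean((shifted - nxt) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def interpolate_middle(prev, nxt):
    """Motion-compensated interpolation of the frame halfway between prev and nxt."""
    dy, dx = global_motion_vector(prev, nxt)
    # shift each frame halfway along the motion vector, then average
    half_fwd = np.roll(np.roll(prev, dy // 2, axis=0), dx // 2, axis=1)
    half_bwd = np.roll(np.roll(nxt, -(dy - dy // 2), axis=0), -(dx - dx // 2), axis=1)
    return 0.5 * (half_fwd + half_bwd)
```

For an object moving two pixels per frame, the interpolated middle frame places it one pixel along the motion path, which is exactly what dropping every other frame at the sender requires the receiver to reconstruct.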


2021 ◽  
Author(s):  
Nastazja D. Pilonis ◽  
Maria O’Donovan ◽  
Susan Richardson ◽  
Rebecca C. Fitzgerald ◽  
Massimiliano Pietro

Abstract
Background: Recognition of early signet-ring cell carcinoma (SRCC) in patients with hereditary diffuse gastric cancer (HDGC) undergoing endoscopic surveillance is challenging. We hypothesized that probe-based confocal laser endomicroscopy (pCLE) might help diagnose early cancerous lesions in the context of HDGC. The aim of this study was to identify pCLE diagnostic criteria for early SRCC.
Methods: Patients with HDGC were prospectively recruited, and pCLE assessment was performed on areas suspicious for early SRCC and on control regions. Targeted biopsies were taken for gold-standard histologic assessment. In Phase I, two investigators assessed video sequences off-line to identify pCLE features related to SRCC. In Phase II, the pCLE diagnostic criteria were evaluated in an independent video set by investigators blinded to the histologic diagnosis. Sensitivity, specificity, accuracy, and interobserver agreement were calculated.
Results: 42 video sequences from 16 HDGC patients were included in Phase I. Four pCLE patterns associated with SRCC histologic features were identified: (A) glands with attenuated margins, (B) glands with a spiculated or irregular shape, (C) heterogeneous granular stroma with sparse glands, and (D) enlarged vessels with a tortuous shape. In Phase II, 38 video sequences from 15 patients were assessed. Criteria A, B, and C had the highest diagnostic accuracy, with a κ for interobserver agreement ranging from 0.153 to 0.565. A panel comprising these three criteria, with a cut-off of at least one positive criterion, had a sensitivity of 80.9% (95% CI: 58.1-94.5%) and a specificity of 70.6% (95% CI: 44.0-89.7%) for a diagnosis of SRCC.
Conclusions: We have generated and validated off-line pCLE criteria for early SRCC. Future real-time validation of these criteria is required.
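The evaluation metrics reported above (sensitivity, specificity, accuracy, and interobserver κ) all derive from simple counts. A minimal sketch, using illustrative numbers rather than the study's data, with hypothetical function names:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and accuracy from a 2x2 confusion table."""
    sens = tp / (tp + fn)          # true positives among all diseased
    spec = tn / (tn + fp)          # true negatives among all healthy
    acc = (tp + tn) / (tp + fp + fn + tn)
    return sens, spec, acc

def cohens_kappa(a, b):
    """Cohen's kappa for two raters' binary labels (lists of 0/1):
    observed agreement corrected for chance agreement."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n
    pa1, pb1 = sum(a) / n, sum(b) / n
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)
    return (po - pe) / (1 - pe)
```

A κ near 0 indicates chance-level agreement and a κ of 1 perfect agreement, which puts the reported 0.153-0.565 range (slight to moderate agreement) in context.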


Author(s):  
Mazouzi Amine ◽  
Kerfa Djoudi ◽  
Ismail Rakip Karas

<span lang="EN-US">In this article, a new method for detecting and tracking vehicles is presented. Thresholding followed by mathematical morphology is used for detection, and the tracking phase uses information about each vehicle. An original labeling scheme is proposed that helps reduce artefacts arising at the detection level. The main contribution of this article lies in the possibility of merging low-level (detection) and high-level (tracking) information: it is shown that many artefacts resulting from low-level image processing can be detected and eliminated thanks to the information contained in the labeling. The proposed method has been tested on many video sequences, and examples are given illustrating the merits of our approach.</span>
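The threshold-plus-morphology front end described above can be sketched with plain NumPy: threshold the image, then apply a morphological opening (erosion followed by dilation) so that isolated noise pixels vanish while vehicle-sized blobs survive. This is a minimal sketch of the detection stage only, under the assumption of a 3x3 structuring element; the labeling/tracking fusion that is the article's main contribution is not reproduced, and the function names are hypothetical.

```python
import numpy as np

def erode(mask):
    """3x3 binary erosion: a pixel survives only if its full neighbourhood is set."""
    p = np.pad(mask, 1, constant_values=False)
    h, w = mask.shape
    out = np.ones_like(mask, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

def dilate(mask):
    """3x3 binary dilation: a pixel is set if any neighbour is set."""
    p = np.pad(mask, 1, constant_values=False)
    h, w = mask.shape
    out = np.zeros_like(mask, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

def detect_vehicles(img, t):
    """Threshold, then morphological opening to suppress small artefacts."""
    return dilate(erode(img > t))
```

Opening removes any foreground component smaller than the structuring element (a single bright pixel disappears) while restoring the outline of larger blobs, which is why it is a common pre-filter before labeling.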

