Highlight Detection: Recently Published Documents

TOTAL DOCUMENTS: 58 (last five years: 18)
H-INDEX: 10 (last five years: 1)

Author(s): Yingwei Pan, Yue Chen, Qian Bao, Ning Zhang, Ting Yao, ...

Live video broadcasting normally requires a multitude of skills and domain expertise to enable multi-camera productions. As the number of cameras keeps increasing, directing a live sports broadcast has become more complicated and challenging than ever before: broadcast directors must remain highly focused, responsive, and knowledgeable throughout the production. To relieve directors of this intensive effort, we develop an innovative automated sports broadcast directing system, called Smart Director, which mimics the typical human-in-the-loop broadcasting process to automatically create near-professional broadcast programs in real time using a set of advanced multi-view video analysis algorithms. Inspired by the so-called "three-event" construction of sports broadcasts [14], we build our system as an event-driven pipeline of three consecutive novel components: (1) Multi-View Event Localization, which detects events by modeling multi-view correlations; (2) Multi-View Highlight Detection, which ranks camera views by visual importance for view selection; and (3) the Auto-Broadcasting Scheduler, which controls the production of the broadcast video. To the best of our knowledge, ours is the first end-to-end automated directing system for multi-camera sports broadcasting that is driven entirely by semantic understanding of sports events, and the first to address the novel problem of multi-view joint event detection through cross-view relation modeling. We conduct both objective and subjective evaluations on a real-world multi-camera soccer dataset, which demonstrate that the quality of our auto-generated videos is comparable to that of human-directed videos. Thanks to its faster response, our system captures more fast-passing, short-duration events that human directors usually miss.
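The three-stage pipeline described in this abstract can be sketched as a simple event-driven loop. This is a minimal illustration under assumed interfaces, not the authors' implementation: the names `Event`, `localize_events`, and `rank_views` are hypothetical stand-ins for Multi-View Event Localization and Multi-View Highlight Detection, and the loop itself plays the role of the Auto-Broadcasting Scheduler.

```python
from dataclasses import dataclass

@dataclass
class Event:
    name: str
    start: float  # seconds into the match
    end: float

def direct_broadcast(multiview_frames, localize_events, rank_views):
    """Minimal sketch of an event-driven directing loop.

    `localize_events` detects events from the synchronized multi-view
    streams; `rank_views` scores each camera view for a given event.
    Both are hypothetical callables standing in for the paper's
    learned components.
    """
    program = []
    for event in localize_events(multiview_frames):
        # Score every camera view for this event and cut to the best one.
        scores = rank_views(multiview_frames, event)
        best_view = max(scores, key=scores.get)
        program.append((event.name, event.start, event.end, best_view))
    return program
```

The scheduler's real-time constraints (buffering, cut timing) are omitted; the point is only the event-driven control flow of detect, rank, then cut.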


2021
Author(s): Gang Fu, Qing Zhang, Lei Zhu, Ping Li, Chunxia Xiao

2021, Vol 8
Author(s): Baoxian Yu, Wanbing Chen, Qinghua Zhong, Han Zhang

Endoscopic imaging systems are widely used in disease diagnosis and minimally invasive surgery. In practice, specular reflection (a.k.a. highlight) is ubiquitous in endoscopic images and significantly affects surgeons' observation and judgment. Motivated by the fact that the red-channel values in non-highlight areas of endoscopic images are higher than those of the green and blue channels, this paper proposes an adaptive specular highlight detection method for endoscopic images. Specifically, for each pixel we design a detection criterion based on the ratio of the red channel to the green and blue channels. Building on this criterion, we exploit image segmentation and develop an adaptive threshold based on the differences between the red channel and the other two channels of neighboring pixels. To validate the proposed method, we conduct experiments on clinical data and the open CVC-ClinicSpec database. The experimental results show that the proposed method achieves an average precision, accuracy, and F1-score of 88.76%, 99.60%, and 72.56%, respectively, and outperforms the state-of-the-art color-distribution-based approaches reported for endoscopic highlight detection.
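The red-to-green and red-to-blue ratio criterion can be illustrated with a few lines of NumPy. This is a sketch under stated assumptions, not the paper's method: the fixed `ratio_thresh` and `min_red` values are placeholders, whereas the paper derives its threshold adaptively from neighboring-pixel channel differences after segmentation.

```python
import numpy as np

def highlight_mask(img, ratio_thresh=1.1, min_red=0.7):
    """Ratio-based specular highlight criterion (illustrative sketch).

    Assumes `img` is an H x W x 3 float RGB array in [0, 1]. In
    non-highlight endoscopic regions the red channel dominates green
    and blue, so a bright pixel whose red value no longer clearly
    exceeds its green and blue values is flagged as a highlight
    candidate.
    """
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    eps = 1e-6  # avoid division by zero on dark pixels
    # Highlights are near-white: red loses its usual dominance over G and B.
    near_white = (r / (g + eps) < ratio_thresh) & (r / (b + eps) < ratio_thresh)
    # Require high brightness so dark, red-poor pixels are not flagged.
    return near_white & (r > min_red)
```

For example, a tissue-like pixel (0.8, 0.4, 0.3) has red-to-green ratio 2.0 and is kept, while a near-white pixel (0.95, 0.95, 0.95) has ratios near 1.0 and is flagged.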


2021, pp. 1-1
Author(s): Zhaoyu Guo, Zhou Zhao, Weike Jin, Wang Dazhou, Liu Ruitao, ...

Author(s): Jiawei Chen, Jian Wang, Xinchao Wang, Xingen Wang, Zunlei Feng, ...

2020, Vol 64 (5), pp. 50408-1-50408-9
Author(s): Shoji Tominaga, Keita Hirai, Takahiko Horiuchi

Abstract The authors discuss the spectral estimation of multiple light sources from image data in a complex illumination environment. An approach is proposed to effectively estimate illuminant spectra and the corresponding light sources based on highlight areas that appear on dielectric object surfaces. First, the authors develop a highlight detection method using two types of Gaussian convolution filters: a center-surround filter and a low-pass filter. The method works even on white surfaces and is independent of object color and of viewing and incidence angles. Second, they present an algorithm for estimating illuminant spectra from the extracted highlight areas. Each specular highlight area has a spectral composition corresponding to exactly one of the multiple light sources. The spectral image data are projected onto a two-dimensional subspace, where a linear cluster in the pixel distribution is detected for each highlight area. Third, the relative positional relationships between highlight areas on different object surfaces are used to identify the light sources illuminating each surface; for this, the authors develop an algorithm based on probabilistic relaxation labeling. The light source for each highlight and the corresponding spectral power distribution are determined through the iterative labeling process. Finally, the feasibility of the proposed approach is examined in an experiment in a real complex environment, where dielectric objects are illuminated by multiple LED, fluorescent, and incandescent light sources.
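The center-surround filtering step can be sketched as a difference of Gaussians on the intensity image: a small bright blob such as a specular highlight gives a strong positive response regardless of the underlying object color. The filter widths and the mean-plus-k-sigma threshold rule below are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def detect_highlight_blobs(intensity, sigma_center=2.0, sigma_surround=8.0,
                           sigma_lowpass=1.0, k=3.0):
    """Center-surround highlight detection sketch on a 2-D intensity image.

    A low-pass Gaussian first suppresses sensor noise; the
    center-surround (difference-of-Gaussians) response then picks out
    compact bright regions. All sigmas and the threshold rule are
    placeholder choices for illustration.
    """
    smoothed = gaussian_filter(intensity, sigma_lowpass)
    center = gaussian_filter(smoothed, sigma_center)
    surround = gaussian_filter(smoothed, sigma_surround)
    response = center - surround  # positive on small bright blobs
    # Keep pixels whose response stands well above the image statistics.
    thresh = response.mean() + k * response.std()
    return response > thresh
```

Because the response depends on local contrast rather than hue, the same filter fires on highlights sitting on white surfaces, matching the color-independence the abstract emphasizes.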


2020, Vol 34 (07), pp. 12902-12909
Author(s): Yingying Zhang, Junyu Gao, Xiaoshan Yang, Chang Liu, Yan Li, ...

With the increasing prevalence of portable computing devices, browsing unedited videos is time-consuming and tedious. Video highlight detection, which discovers the moments of major or special interest to a user in a video, has the potential to significantly ease this situation. Existing methods suffer from two problems. First, most existing approaches focus only on learning holistic visual representations of videos and ignore object semantics when inferring video highlights. Second, current state-of-the-art approaches often adopt a pairwise ranking strategy, which cannot exploit global information to infer highlights. We therefore propose a novel video highlight framework, named VH-GNN, which constructs an object-aware graph and models the relationships between objects from a global view. To reduce computational cost, we decompose the whole graph into two types of graphs: a spatial graph that captures the complex interactions among objects within each frame, and a temporal graph that obtains an object-aware representation of each frame and captures global information. In addition, we optimize the framework with a proposed multi-stage loss, where the first stage determines the highlight probability and the second stage leverages the relationships between frames, focusing on hard examples from the first stage. Extensive experiments on two standard datasets strongly evidence that VH-GNN achieves significant performance gains over state-of-the-art methods.
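The spatial-graph component can be illustrated with one message-passing step over the objects of a single frame. This is a deliberately simplified sketch: the mean-aggregation and the fixed 50/50 self/neighbor mixing below stand in for the learned graph layers of VH-GNN, which this abstract does not specify.

```python
import numpy as np

def spatial_message_pass(obj_feats, adj):
    """One illustrative message-passing step over a per-frame object graph.

    `obj_feats` is an N x D array of object features in one frame and
    `adj` an N x N adjacency matrix. Each object averages its
    neighbors' features and mixes them with its own, so the updated
    feature reflects object-object interactions within the frame.
    """
    # Row-normalize so each object averages over its neighbors.
    deg = adj.sum(axis=1, keepdims=True)
    norm_adj = adj / np.maximum(deg, 1)
    # Mix the object's own feature with the aggregated neighbor features.
    return 0.5 * obj_feats + 0.5 * (norm_adj @ obj_feats)
```

Stacking such steps per frame, and then linking the per-frame summaries along time, mirrors the spatial-then-temporal decomposition the abstract describes.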


2020, Vol 79 (21-22), pp. 15015-15024
Author(s): Han Wang, Kexin Wang, Yuqing Wu, Zhongzhi Wang, Ling Zou
