video segmentation
Recently Published Documents


TOTAL DOCUMENTS: 820 (FIVE YEARS: 96)

H-INDEX: 46 (FIVE YEARS: 4)

Author(s):  
Haonan Luo ◽  
Guosheng Lin ◽  
Yazhou Yao ◽  
Fayao Liu ◽  
Zichuan Liu ◽  
...  

2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Ying Wang ◽  
Jianbo Wu ◽  
Hui Deng ◽  
Xianghui Zeng

Deep learning, as a branch of machine learning, has been applied in many fields such as image recognition, image segmentation, and video segmentation, and in recent years it has gradually been applied to food recognition as well. However, food recognition scenes are highly complex, and both recognition accuracy and speed remain unsatisfactory. This paper addresses these problems and proposes a food image recognition method based on neural networks. Combining Tiny-YOLO with a Siamese network, the method adopts a two-stage learning scheme, YOLO-SIMM, and designs two versions, YOLO-SiamV1 and YOLO-SiamV2. Experiments show that the method achieves moderate recognition accuracy; however, it requires no manual labeling and has good prospects for practical deployment. In addition, a method for detecting and recognizing foreign bodies in food is proposed, which separates foreign bodies from food using threshold segmentation. Experimental results show that this method can effectively distinguish desiccant from foreign matter and achieves the desired effect.
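As a rough illustration of the threshold-segmentation idea described above (a minimal sketch, not the authors' implementation), the following Python/OpenCV snippet separates candidate foreign bodies from a food image; the Otsu/fallback threshold, morphology kernel, and minimum-area filter are assumed values chosen for demonstration.

```python
# Minimal sketch of threshold-based foreign-body separation. Illustrative only:
# the threshold, kernel size, and area filter are assumptions, not the paper's.
import cv2
import numpy as np

def segment_foreign_bodies(image_bgr, fallback_thresh=120, min_area=50):
    """Separate candidate foreign bodies from food by grayscale thresholding."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Otsu's method picks a global threshold; fallback_thresh is ignored when Otsu is used.
    _, mask = cv2.threshold(gray, fallback_thresh, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Morphological opening removes small speckle noise from the binary mask.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    # Connected components give one region (bounding box) per candidate object.
    num_labels, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    boxes = [tuple(stats[i, :4]) for i in range(1, num_labels)
             if stats[i, cv2.CC_STAT_AREA] >= min_area]
    return mask, boxes

# Usage: mask, boxes = segment_foreign_bodies(cv2.imread("tray.jpg"))
```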


2021 ◽  
pp. 4181-4194
Author(s):  
Eman Hato

Shot boundary detection is the process of segmenting a video into basic units, known as shots, by discovering the transition frames between shots. Much research has been conducted on detecting shot boundaries accurately, but accelerating the detection process while preserving high accuracy still needs improvement. This paper introduces a new method for locating abrupt shot boundaries in video with high accuracy and lower computational cost. The proposed method consists of two stages. In the first stage, projection features are used to distinguish non-boundary transitions from candidate transitions that may contain an abrupt boundary; only the candidate transitions are retained for the next stage, which narrows the detection scope and speeds up shot detection. In the second stage, the candidate segments are refined using a motion feature derived from optical flow to remove non-boundary frames. The results show that the proposed method achieves excellent detection accuracy (an F-score of 0.98) and effectively speeds up the detection process. Comparative analysis further confirms its superior performance over other methods.
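The abstract does not spell out the exact motion feature, so the sketch below shows only one plausible reading under stated assumptions: compute dense optical flow between the two frames of a candidate transition and flag the pair as an abrupt cut when the mean flow magnitude is implausibly large. The Farneback parameters and the decision threshold are illustrative, not the paper's.

```python
# Hypothetical second-stage refinement: a candidate frame pair is kept as an
# abrupt boundary only if its optical-flow motion is implausibly large.
import cv2
import numpy as np

def mean_flow_magnitude(prev_gray, curr_gray):
    """Mean magnitude of Farneback dense optical flow between two gray frames."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return float(np.mean(mag))

def is_abrupt_boundary(prev_frame, curr_frame, threshold=8.0):
    """Assumed decision rule: chaotic, large flow across a cut exceeds the threshold."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    return mean_flow_magnitude(prev_gray, curr_gray) > threshold
```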


Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7156
Author(s):  
Guansheng Xing ◽  
Ziming Zhu

Lane and road marker segmentation is crucial in autonomous driving, and many related methods have been proposed. However, most of them are based on single-frame prediction, which causes unstable results between frames, while existing multi-frame semantic segmentation methods suffer from error accumulation and are not fast enough. We therefore propose a deep learning algorithm that exploits the continuity between adjacent image frames, comprising an image sequence processing step and an end-to-end trainable multi-input, single-output network that jointly segments lanes and road markers. To emphasize target locations that appear with high probability in adjacent frames and to refine the segmentation of the current frame, we explicitly enforce temporal consistency: the segmentation region of the previous frame is expanded, the optical flow between adjacent frames is used to warp the past prediction onto the current frame, and the warped prediction is fed to the network as an additional input during training and inference, strengthening the network's attention to the target regions of the previous frame. We segment lanes and road markers on the Baidu Apolloscape lanemark segmentation dataset and the CULane dataset, and present benchmarks for different networks. The experimental results show that the method accelerates video lane and road marker segmentation by 2.5 times and increases accuracy by 1.4%, while reducing temporal consistency by at most 2.2%.
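A minimal sketch of the flow-based propagation step, assuming a standard backward-warping scheme with OpenCV's Farneback flow (the paper's own warping and network interface are not given here, so the function names and parameters below are illustrative):

```python
# Illustrative only: warp the previous frame's lane/road-marker mask to the
# current frame with dense optical flow, then stack it as an extra input channel.
import cv2
import numpy as np

def warp_mask_to_current(prev_mask, prev_gray, curr_gray):
    """Backward-warp prev_mask into the current frame using Farneback flow."""
    h, w = curr_gray.shape
    # Flow from the current frame back to the previous frame: pixel (x, y) in the
    # current frame corresponds to (x + fx, y + fy) in the previous frame.
    flow = cv2.calcOpticalFlowFarneback(curr_gray, prev_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(prev_mask, map_x, map_y, interpolation=cv2.INTER_NEAREST)

def build_network_input(curr_bgr, warped_mask):
    """Stack the warped previous-frame mask as a fourth channel (H, W, 4)."""
    return np.dstack([curr_bgr, warped_mask])
```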


2021 ◽  
Author(s):  
Liangru Xiang ◽  
Zhijia Yu ◽  
Jianming Hu ◽  
Yi Zhang
Keyword(s):  

2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Zhang Min-qing ◽  
Li Wen-ping

There are many different types of sports training videos, and categorizing them can be difficult. This research therefore introduces an automatic video content classification system that makes managing large amounts of video data easier. It presents a video feature extraction approach that combines a support vector machine (SVM) classification algorithm with dual-mode video and audio features, automating the classification of cartoon, advertisement, music, news, and sports videos as well as the detection of terrorist and violent scenes in films. First, based on an analysis of existing video classification algorithms and of the visual differences among the five video categories, a new feature expression scheme, an MPEG-7 visual descriptor subcombination, is proposed to address the shortcomings of those algorithms. The model extracts nine descriptors covering four kinds of characteristics (color, texture, shape, and motion) and combines them into a new overall visual feature that performs well. The results suggest that the algorithm improves video segmentation by highlighting differences in feature selection between different categories of videos. Second, the SVM's multi-class video classification performance is improved by an enhanced secondary prediction method. Finally, a comparison experiment with current related algorithms shows that the proposed method outperforms them in classification accuracy across the five video types and in the recognition of terrorist and violent scenes.
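The descriptor extraction itself is MPEG-7-specific, but the classification stage can be sketched generically. The snippet below (a sketch under stated assumptions, not the paper's pipeline) trains a multi-class RBF-kernel SVM on fixed-length per-video feature vectors; the random features and the class list are placeholders standing in for the combined MPEG-7 visual and audio descriptors.

```python
# Classification stage only: multi-class SVM over per-video descriptor vectors.
# The random features below are placeholders for the MPEG-7 visual/audio descriptors.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

CLASSES = ["cartoon", "advertisement", "music", "news", "sports"]

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))                 # placeholder descriptor vectors
y = rng.integers(0, len(CLASSES), size=500)    # placeholder labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Standardize features, then fit an RBF SVM (scikit-learn handles multi-class one-vs-one).
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```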


2021 ◽  
Author(s):  
Christopher B. Kuhn ◽  
Markus Hofbauer ◽  
Ziqin Xu ◽  
Goran Petrovic ◽  
Eckehard Steinbach

Author(s):  
Juan León Alcázar ◽  
María A. Bravo ◽  
Guillaume Jeanneret ◽  
Ali K. Thabet ◽  
Thomas Brox ◽  
...  
Keyword(s):  
