Feature Extraction Model with Group-Based Classifier for Content Extraction from Video Data

2021 · Vol. 35 (4) · pp. 325-330
Author(s): Gowrisankar Kalakoti, Prabakaran G

In modern computer vision, locating the many objects that appear in a video is a demanding task. Recognising and distinguishing the different aspects of a video quickly and reliably is essential for interacting with the surrounding scene. The core difficulty is that, in principle, every region of the video content must be scanned at many different scales to ensure that no significant aspect is missed. Actually classifying the content of a given region, however, takes time and effort, and both the time and the computational budget available for classification are limited. The proposed method introduces two approximation procedures that accelerate the standard detector and demonstrates their benefit in both detection accuracy and speed. The first improvement speeds up the grouping of sub-features by casting the problem as a sequential feature-selection procedure. The second provides better multiscale features, so that objects of all sizes can be detected without rescaling the input frames. A video is a sequence of successive images captured at a fixed time interval, so it conveys more information about its content as the scene changes over time; annotating that content with features manually is therefore impractical. The proposed Group-based Video Content Extraction Classifier (GbCCE) extracts content from a video by selecting relevant features with a group-based classifier. The method differs from conventional approaches, and the findings indicate that it delivers better results.
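The GbCCE pipeline itself is not specified in this abstract, but the two ideas it names, multiscale per-frame features and sequential selection over grouped sub-features, can be sketched roughly as below. The feature choice (grayscale histograms at several scales), scikit-learn's SequentialFeatureSelector, and the RandomForestClassifier are illustrative assumptions, not the authors' method; note also that this sketch resizes frames, whereas the paper claims multiscale features without rescaling.

```python
# Hedged sketch: multiscale frame features + sequential feature selection
# feeding a classifier. All function names and parameters are illustrative
# assumptions standing in for the (unpublished) GbCCE pipeline.
import cv2                      # OpenCV for video I/O and frame processing
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector

def frame_features(frame, scales=(1.0, 0.5, 0.25), bins=16):
    """Grayscale histogram features of one frame at several scales."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    feats = []
    for s in scales:
        small = cv2.resize(gray, None, fx=s, fy=s)
        hist = cv2.calcHist([small], [0], None, [bins], [0, 256]).ravel()
        feats.append(hist / (hist.sum() + 1e-9))   # normalise each scale
    return np.concatenate(feats)

def video_features(path, step=10):
    """Sample every `step`-th frame of a video and stack its features."""
    cap, rows, i = cv2.VideoCapture(path), [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % step == 0:
            rows.append(frame_features(frame))
        i += 1
    cap.release()
    return np.array(rows)

# X: per-frame multiscale features, y: content labels (assumed available)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
selector = SequentialFeatureSelector(clf, n_features_to_select=12,
                                     direction="forward")
# selector.fit(X, y); clf.fit(selector.transform(X), y)
```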

2014 · Vol. 16 (3) · pp. 106-109
Author(s): S. Rajarajeshwari, G. Kalaimathi Priya, S. Grace Mary, Dr. V. Sambath Kumar

Human action recognition plays a significant role in video-based applications and has drawn researchers' attention to recognising human motion; other video applications include content extraction, summarization, and human-computer interaction. Existing methods require manual annotation of the relevant portions of the actions of interest, yet human actions can be recognised reliably without such manual annotation. In this paper we update previous reviews of the many ways of recognising human activities in video, covering techniques such as Hidden Markov models, feature extraction, and segmentation, and applications such as mobile visual surveillance, human fall detection, video conferencing, and robotics.
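One of the techniques named above, the Hidden Markov model, can be illustrated with a minimal sketch: per-frame motion-energy features are modelled by one GaussianHMM per action class, and a test clip is assigned to the class whose model scores it highest. The frame-differencing features and the hmmlearn-based models are assumptions for illustration, not the pipelines of the reviewed papers.

```python
# Hedged sketch of HMM-based action recognition (illustrative assumptions).
import cv2
import numpy as np
from hmmlearn import hmm   # pip install hmmlearn

def motion_sequence(path, grid=(4, 4)):
    """Per-frame motion energy on a coarse grid, from frame differencing."""
    cap, prev, seq = cv2.VideoCapture(path), None, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            diff = cv2.absdiff(gray, prev).astype(np.float32)
            h, w = diff.shape
            gh, gw = h // grid[0], w // grid[1]
            seq.append([diff[r*gh:(r+1)*gh, c*gw:(c+1)*gw].mean()
                        for r in range(grid[0]) for c in range(grid[1])])
        prev = gray
    cap.release()
    return np.array(seq)

def train_action_models(videos_by_label, states=4):
    """Fit one GaussianHMM per action label from its training videos."""
    models = {}
    for label, paths in videos_by_label.items():
        seqs = [motion_sequence(p) for p in paths]
        X, lengths = np.vstack(seqs), [len(s) for s in seqs]
        m = hmm.GaussianHMM(n_components=states, covariance_type="diag",
                            n_iter=50)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify(path, models):
    """Pick the action whose HMM assigns the test clip the highest likelihood."""
    seq = motion_sequence(path)
    return max(models, key=lambda label: models[label].score(seq))
```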


2020 · Vol. 2020 (4) · pp. 116-1-116-7
Author(s): Raphael Antonius Frick, Sascha Zmudzinski, Martin Steinebach

In recent years, the number of forged videos circulating on the Internet has increased immensely, and software and services to create such forgeries have become more and more accessible to the public. In this regard, the risk of malicious use of forged videos has risen. This work proposes an approach based on the ghost effect known from image forensics for detecting video forgeries that replace faces in video sequences or alter a face's expression. The experimental results show that the proposed approach is able to identify forgery in high-quality encoded video content.
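For context, the ghost effect referenced here is commonly demonstrated in image forensics by re-compressing a frame at a range of JPEG qualities and looking for regions whose re-compression error dips at a quality unlike the rest of the frame. The sketch below follows that generic idea only; the quality range, block size, and use of OpenCV are illustrative assumptions, not the authors' detector.

```python
# Hedged sketch of a JPEG "ghost" map (illustrative assumptions throughout).
import cv2
import numpy as np

def jpeg_ghost_map(frame, qualities=range(30, 96, 5), block=16):
    """Block-averaged error between a frame and its JPEG re-compression,
    one map per quality level; anomalous minima hint at spliced regions."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    h, w = gray.shape
    maps = []
    for q in qualities:
        ok, buf = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, q])
        rec = cv2.cvtColor(cv2.imdecode(buf, cv2.IMREAD_COLOR),
                           cv2.COLOR_BGR2GRAY).astype(np.float32)
        diff = (gray - rec) ** 2
        # average the squared error over blocks for a smoother ghost map
        bh, bw = h // block, w // block
        blocks = diff[:bh*block, :bw*block].reshape(bh, block, bw, block)
        maps.append(blocks.mean(axis=(1, 3)))
    return np.stack(maps)   # shape: (len(qualities), bh, bw)

# Usage: a region whose error minimum occurs at a different quality index
# than the background is a candidate forgery.
# ghost = jpeg_ghost_map(cv2.imread("frame.png"))
# suspect = ghost.argmin(axis=0)   # per-block quality index of minimal error
```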

