video classification
Recently Published Documents

TOTAL DOCUMENTS: 413 (FIVE YEARS: 163)
H-INDEX: 26 (FIVE YEARS: 5)

2022 ◽  
Vol 39 (2) ◽  
pp. 183-184
Author(s):  
Rajinder Singh Chaggar ◽  
Sneh Vinu Shah ◽  
Michael Berry ◽  
Rajan Saini ◽  
Sanooj Soni ◽  
...  

2022 ◽  
Author(s):  
Emmie Swize ◽  
Lance Champagne ◽  
Bruce Cox ◽  
Trevor Bihl

2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Yiping Gao

News videos carry a large amount of useful information, and how to classify that information has become an important research topic in multimedia technology. Because news videos are so informative, manual classification is too time-consuming and vulnerable to subjective judgment; developing automated news video analysis and retrieval methods has therefore become one of the central problems in current multimedia information systems. This paper proposes a news video classification model based on Inception-ResNet-v2 and transfer learning. First, a model-based transfer method transfers the commonality knowledge of the Inception-ResNet-v2 network pretrained on ImageNet, on top of which a news video classification model is constructed. Then, a momentum update rule is introduced into the Adam algorithm, yielding an improved gradient descent method intended to reach better local minima of the loss function during training. The experimental results show that the improved Adam algorithm iteratively updates the network weights through an adaptive learning rate and achieves the fastest convergence. Compared with other convolutional neural network models, the modified Inception-ResNet-v2 model achieves 91.47% classification accuracy on common news video datasets.
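The abstract does not give the exact update rule, but the general idea of adding a classical momentum term to Adam's adaptive step can be sketched in plain Python. This is a minimal illustration of the technique, not the authors' algorithm; all names and hyperparameter values here are assumptions.

```python
# Sketch of an Adam-style update augmented with a momentum term.
# Illustrative only: the paper's exact update rule is not given
# in the abstract, so this follows the standard Adam recurrences
# plus classical momentum on the parameter step.

def adam_momentum_step(w, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
                       mu=0.9, eps=1e-8, prev_step=0.0):
    """One update of a single weight w given its gradient g.

    m, v: Adam's first- and second-moment estimates
    prev_step: previous parameter step, reused as a momentum term
    """
    m = beta1 * m + (1 - beta1) * g          # first-moment estimate
    v = beta2 * v + (1 - beta2) * g * g      # second-moment estimate
    m_hat = m / (1 - beta1 ** t)             # bias corrections
    v_hat = v / (1 - beta2 ** t)
    step = -lr * m_hat / (v_hat ** 0.5 + eps) + mu * prev_step
    return w + step, m, v, step

# Toy use: minimize f(w) = (w - 3)^2 from w = 10.
w, m, v, step = 10.0, 0.0, 0.0, 0.0
for t in range(1, 2001):
    g = 2 * (w - 3)                          # gradient of (w - 3)^2
    w, m, v, step = adam_momentum_step(w, g, m, v, t, prev_step=step)
```

In this toy run the momentum term lets the iterate cover ground faster while the gradient direction is consistent, at the cost of some overshoot near the minimum.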


2021 ◽  
Author(s):  
Jaime Sancho ◽  
Manuel Villa ◽  
Gemma Urbanos ◽  
Marta Villanueva ◽  
Pallab Sutradhar ◽  
...  

Healthcare ◽  
2021 ◽  
Vol 9 (11) ◽  
pp. 1579
Author(s):  
Wansuk Choi ◽  
Seoyoon Heo

The purpose of this study was to classify ULTT videos through transfer learning with pre-trained deep learning models and to compare the models' performance. We conducted transfer learning by combining a pre-trained convolutional neural network (CNN) model into a deep learning pipeline implemented in Python. Videos were sourced from YouTube, and 103,116 frames converted from the video clips were analyzed. The modeling implementation applied, in sequence: importing the required modules, preprocessing the data for training, defining the model, compiling it, creating the model, and fitting it. The compared models were Xception, InceptionV3, DenseNet201, NASNetMobile, DenseNet121, VGG16, VGG19, and ResNet101, each with fine-tuning. They were trained in a high-performance computing environment, with validation accuracy and validation loss measured as comparative performance indicators. Xception, InceptionV3, and DenseNet201 obtained relatively low validation loss and high validation accuracy, making them the strongest of the compared models, whereas VGG16, VGG19, and ResNet101 obtained relatively high validation loss and low validation accuracy. The differences among the validation accuracies and validation losses of the Xception, InceptionV3, and DenseNet201 models were narrow. This study suggests that training with transfer learning can classify ULTT videos, and that performance differs between models.
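The comparison step, ranking fine-tuned models by validation loss and validation accuracy, can be sketched as follows. The metric values below are made up for illustration; the abstract reports only the relative outcome (Xception/InceptionV3/DenseNet201 ahead of VGG16/VGG19/ResNet101), not these numbers.

```python
# Sketch of ranking fine-tuned models by validation metrics.
# All numeric values are hypothetical placeholders; the study
# reports only which models performed relatively better or worse.

results = {
    "Xception":    {"val_acc": 0.97, "val_loss": 0.10},
    "InceptionV3": {"val_acc": 0.96, "val_loss": 0.12},
    "DenseNet201": {"val_acc": 0.96, "val_loss": 0.13},
    "VGG16":       {"val_acc": 0.80, "val_loss": 0.55},
    "VGG19":       {"val_acc": 0.78, "val_loss": 0.60},
    "ResNet101":   {"val_acc": 0.75, "val_loss": 0.70},
}

# Rank by validation loss ascending; break ties by accuracy descending.
ranking = sorted(results, key=lambda k: (results[k]["val_loss"],
                                         -results[k]["val_acc"]))
best = ranking[0]
```

Sorting on loss first and accuracy second is one reasonable convention; the study itself weighs both indicators without stating a tie-break rule.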


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Xu Han ◽  
Mingyang Pan ◽  
Haipeng Ge ◽  
Shaoxi Li ◽  
Jingfeng Hu ◽  
...  

At night, buoys and other navigation marks disappear, replaced by fixed or flashing lights: a navigation mark is seen as a set of lights in various colors rather than its familiar outline. Deciphering the meaning of these lights is a burden to navigators, and it is also a new, challenging research direction in intelligent sensing of the navigation environment. This study is among the first to investigate intelligent recognition of navigation mark lights at night using multilabel video classification. To capture the characteristics of the lights effectively, including both color and flashing phase, three multilabel classification models, based on binary relevance, label power set, and an adapted algorithm, were investigated and compared. In experiments on a dataset of 8000 minutes of video, the binary-relevance model, named NMLNet, achieved the highest accuracy, about 99.23%, in classifying 9 types of navigation mark lights, with the fastest computation and the fewest network parameters. NMLNet has two branches, for color classification and flashing classification respectively; for the flashing branch, an improved MobileNet-v2 captures the brightness characteristics of the lights in each video frame, and an LSTM captures their temporal dynamics. To run on mobile devices aboard vessels, MobileNet-v2 was used as the backbone; with the addition of a spatial attention mechanism, it achieved accuracy near ResNet-50 while keeping its high speed.
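Binary relevance, the decomposition NMLNet's top-level design follows, simply trains one independent binary classifier per label and unions the positive predictions. A minimal sketch with stand-in classifiers (a sketch under stated assumptions, not the paper's code; the feature layout and thresholds are hypothetical):

```python
# Minimal sketch of binary relevance for multilabel classification:
# one independent binary classifier per label. The threshold scorers
# below stand in for real per-label models (e.g. a color branch and
# a flashing branch, as in NMLNet).

def make_threshold_classifier(index, threshold):
    """Binary classifier that fires when features[index] > threshold."""
    return lambda features: features[index] > threshold

# Hypothetical per-frame feature vector: [redness, greenness, flash_rate]
label_classifiers = {
    "red":      make_threshold_classifier(0, 0.5),
    "green":    make_threshold_classifier(1, 0.5),
    "flashing": make_threshold_classifier(2, 0.5),
}

def predict_labels(features):
    """Binary relevance: query each label's classifier independently."""
    return {label for label, clf in label_classifiers.items()
            if clf(features)}

labels = predict_labels([0.9, 0.1, 0.8])   # a red, flashing light
```

Because each label is decided independently, binary relevance ignores label correlations; the label power set approach the paper also compares treats each label combination as one class instead.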

