Transfer learning in deep neural network-based receiver coil sensitivity map estimation

Author(s):  
Madiha Arshad ◽  
Mahmood Qureshi ◽  
Omair Inam ◽  
Hammad Omer
2020 ◽  
Vol 152 ◽  
pp. S146-S147
Author(s):  
J. Perez-Alija ◽  
P. Gallego ◽  
M. Lizondo ◽  
J. Nuria ◽  
A. Latorre-Musoll ◽  
...  

2021 ◽  
Vol 10 (9) ◽  
pp. 25394-25398
Author(s):  
Chitra Desai

Deep learning models have demonstrated improved efficacy in image classification since the ImageNet Large Scale Visual Recognition Challenge began in 2010. Image classification in computer vision has advanced further with the advent of transfer learning. Training a model on a huge dataset demands substantial computational resources and adds considerable cost to learning. Transfer learning reduces this cost and helps avoid reinventing the wheel. Several pretrained models, such as VGG16, VGG19, ResNet50, InceptionV3, and EfficientNet, are widely used. This paper demonstrates image classification using the pretrained deep neural network model VGG16, which is trained on images from the ImageNet dataset. After obtaining the convolutional base model, a new deep neural network is built on top of it for image classification using a fully connected network. This classifier uses features extracted from the convolutional base.
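
The sketch below is a minimal illustration of the workflow this abstract describes: loading the VGG16 convolutional base pretrained on ImageNet and attaching a new fully connected classifier. It assumes TensorFlow/Keras; the input shape, number of classes, and training call are illustrative placeholders rather than values from the paper.

```python
# Minimal transfer-learning sketch, assuming TensorFlow/Keras.
# Input shape and class count are assumptions, not taken from the paper.
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

# Load the VGG16 convolutional base pretrained on ImageNet,
# dropping its original fully connected classifier head.
conv_base = VGG16(weights="imagenet", include_top=False,
                  input_shape=(224, 224, 3))

# Freeze the convolutional base so only the new classifier is trained.
conv_base.trainable = False

# Build a new fully connected classifier on top of the extracted features.
model = models.Sequential([
    conv_base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(10, activation="softmax"),  # assumed 10 target classes
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training would then proceed on the target dataset, e.g.:
# model.fit(train_images, train_labels, epochs=5, validation_split=0.1)
```

Freezing the base means only the small dense head is optimized, which is what keeps the computational cost low relative to training the full network from scratch.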


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Xiaoping Guo

Traditional text-annotation-based video retrieval relies on manually labeling videos with text, which is inefficient, highly subjective, and generally cannot accurately describe the meaning of videos. Traditional content-based video retrieval uses convolutional neural networks to extract the underlying feature information of images to build indexes and achieves similarity retrieval of video feature vectors according to certain similarity measures. In this paper, by studying the characteristics of sports videos, we propose a histogram difference method based on transfer learning and a four-step method based on block matching for mutation (cut) detection and fading detection of video shots, respectively. With adaptive thresholding, regions with large frame-difference changes are marked as candidate shot regions, and the shot boundaries are then determined by the mutation detection algorithm. Combined with the characteristics of sports video, this paper also proposes a key frame extraction algorithm based on clustering and optical flow analysis and compares it experimentally with the traditional clustering method; the algorithm effectively removes redundant frames, and the extracted key frames are more representative. Extensive experiments show that the keyword fuzzy-finding algorithm based on an improved deep neural network and ontology semantic expansion proposed in this paper achieves more desirable retrieval performance, and it is feasible to use this method for video underlying feature extraction, annotation, and keyword finding. One of the outstanding features of the algorithm is that it can quickly and effectively retrieve the desired video from a large number of Internet video resources, reducing the false detection rate and the miss rate while improving fidelity, which basically meets people's daily needs.
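
As a rough illustration of the cut-detection idea mentioned above (frame-to-frame histogram differences with an adaptive threshold), the sketch below shows a generic reconstruction in Python with OpenCV. It is not the authors' exact algorithm: the bin count, the mean-plus-k-standard-deviations threshold, and the function name detect_cuts are all assumptions made for the example.

```python
# Illustrative histogram-difference shot-cut detection with an adaptive
# threshold (a generic sketch of the idea, not the paper's exact method).
import cv2
import numpy as np

def detect_cuts(video_path, k=3.0):
    cap = cv2.VideoCapture(video_path)
    prev_hist, diffs = None, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # 64-bin grayscale histogram, normalized so frames are comparable.
        hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
        hist = cv2.normalize(hist, hist).flatten()
        if prev_hist is not None:
            # Histogram difference between consecutive frames.
            diffs.append(np.abs(hist - prev_hist).sum())
        prev_hist = hist
    cap.release()

    diffs = np.array(diffs)
    # Adaptive threshold: mean plus k standard deviations of the differences.
    threshold = diffs.mean() + k * diffs.std()
    # Frame indices whose difference exceeds the threshold are candidate cuts.
    return [i + 1 for i, d in enumerate(diffs) if d > threshold]
```

Gradual transitions such as fades change the histogram slowly and usually escape a single-frame threshold, which is why the abstract pairs this kind of cut detector with a separate fading-detection step.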


2019 ◽  
Vol 158 ◽  
pp. 20-29 ◽  
Author(s):  
Aydin Kaya ◽  
Ali Seydi Keceli ◽  
Cagatay Catal ◽  
Hamdi Yalin Yalic ◽  
Huseyin Temucin ◽  
...  

Author(s):  
Telmo Amaral ◽  
Luís M. Silva ◽  
Luís A. Alexandre ◽  
Chetak Kandaswamy ◽  
Joaquim Marques de Sá ◽  
...  
