Image Quality Evaluation of Sanda Sports Video Based on BP Neural Network Perception

2021, Vol. 2021, pp. 1-8
Author(s): Kai Fan, Xiaoye Gu

Specialized sports cameras record footage as sub-frames: a single shot is composed of many frames, and any one frame cut out of a shot may be blurry. The clarity of a video screenshot therefore depends on the quality of the video itself; to obtain clear screenshots, we first need clear video. The purpose of this paper is to analyze and evaluate the quality of sports video images. Through semantic analysis of the video and program design in a computer language, the video images are matched against the data model constructed in this study, enabling real-time analysis of sports video images and thus real-time analysis of sports techniques and tactics. To address the coarse image segmentation and high spatial distortion rate of current sports video image evaluation methods, this paper proposes a sports video image evaluation method based on BP neural network perception. The results show that the optimized algorithm overcomes the slow weight convergence of the traditional algorithm and the oscillation in error convergence of the variable-step-size algorithm. The optimized algorithm significantly reduces the learning error of the neural network and the overall error of network quality classification, and greatly improves the evaluation accuracy. The Sanda sports video image quality evaluation method based on BP (back-propagation) neural network perception has high spatial accuracy, good noise-iteration performance, and a low spatial distortion rate, so it can accurately evaluate the image quality of Sanda sports video.
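
As a minimal sketch only (not the paper's optimized algorithm, whose details are not given in the abstract), the snippet below shows a plain NumPy BP network with a momentum term, one common way to damp the weight-update oscillation and slow convergence described above. The perceptual features fed to the network are assumed to be extracted elsewhere.

```python
# Hedged sketch: two-layer BP network with momentum for quality classification.
import numpy as np

class BPQualityNet:
    def __init__(self, n_in, n_hidden, n_out, lr=0.05, momentum=0.9):
        rng = np.random.default_rng(0)
        self.W1 = rng.normal(0, 0.1, (n_in, n_hidden))
        self.W2 = rng.normal(0, 0.1, (n_hidden, n_out))
        self.lr, self.momentum = lr, momentum
        self.vW1 = np.zeros_like(self.W1)
        self.vW2 = np.zeros_like(self.W2)

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def forward(self, X):
        self.h = self._sigmoid(X @ self.W1)       # hidden activations
        self.o = self._sigmoid(self.h @ self.W2)  # quality-class scores
        return self.o

    def train_step(self, X, y):
        o = self.forward(X)
        # Backpropagate the squared error through both layers.
        delta_o = (o - y) * o * (1 - o)
        delta_h = (delta_o @ self.W2.T) * self.h * (1 - self.h)
        gW2 = self.h.T @ delta_o / len(X)
        gW1 = X.T @ delta_h / len(X)
        # Momentum smooths the updates and reduces oscillation in error convergence.
        self.vW2 = self.momentum * self.vW2 - self.lr * gW2
        self.vW1 = self.momentum * self.vW1 - self.lr * gW1
        self.W2 += self.vW2
        self.W1 += self.vW1
        return float(np.mean((o - y) ** 2))
```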

Entropy, 2019, Vol. 21 (11), pp. 1070
Author(s): Jinhua Liu, Mulian Xu, Xinye Xu, Yuanyuan Huang

Image quality evaluation methods based on convolutional neural networks (CNNs) have achieved good performance. However, after an image has undergone various distortions, the visual quality of its sub-blocks can vary with spatial position, so methods that score sub-blocks independently struggle to reflect the visual quality of the entire image objectively. On this basis, this study combines the wavelet transform with a CNN to propose an image quality evaluation method based on a wavelet CNN. The low-frequency, horizontal, vertical, and diagonal sub-band images produced by the wavelet decomposition are used as inputs to the convolutional neural network, which extracts feature information in multiple directions. The information entropy of each sub-band image is then computed and used as the weight of that sub-band's quality score. Finally, the quality scores of the four sub-band images are weighted and fused to obtain the visual quality value of the entire image. Experimental results show that the proposed method benefits from both the global and the local information of the image, thereby further improving its effectiveness and generalization.
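
A hedged sketch of the decomposition and entropy-weighted fusion follows, assuming PyWavelets for the one-level DWT and a hypothetical `cnn_quality_score` callable standing in for the per-band CNN scorer; it is not the authors' implementation.

```python
# Sketch: wavelet split into four sub-bands, entropy-weighted fusion of per-band scores.
import numpy as np
import pywt

def subband_entropy(band, bins=256):
    """Shannon entropy of a sub-band image, used as its fusion weight."""
    hist, _ = np.histogram(band, bins=bins, density=True)
    p = hist[hist > 0]
    p = p / p.sum()
    return float(-(p * np.log2(p)).sum())

def wavelet_cnn_quality(image, cnn_quality_score, wavelet="haar"):
    # One-level 2D DWT: approximation (LL) plus horizontal, vertical, diagonal details.
    cA, (cH, cV, cD) = pywt.dwt2(image, wavelet)
    bands = [cA, cH, cV, cD]
    scores = np.array([cnn_quality_score(b) for b in bands])  # per-band quality
    weights = np.array([subband_entropy(b) for b in bands])   # information entropy
    weights = weights / weights.sum()
    return float((weights * scores).sum())  # entropy-weighted fusion
```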


2021, Vol. 13 (19), pp. 3859
Author(s): Joby M. Prince Czarnecki, Sathishkumar Samiappan, Meilun Zhou, Cary Daniel McCraine, Louis L. Wasson

The radiometric quality of remotely sensed imagery is crucial for precision agriculture applications because estimations of plant health rely on the underlying quality. Sky conditions, and specifically shadowing from clouds, are critical determinants in the quality of images that can be obtained from low-altitude sensing platforms. In this work, we first compare common deep learning approaches to classify sky conditions with regard to cloud shadows in agricultural fields using a visible spectrum camera. We then develop an artificial-intelligence-based edge computing system to fully automate the classification process. Training data consisting of 100 oblique angle images of the sky were provided to a convolutional neural network and two deep residual neural networks (ResNet18 and ResNet34) to facilitate learning two classes, namely (1) good image quality expected, and (2) degraded image quality expected. The expectation of quality stemmed from the sky condition (i.e., density, coverage, and thickness of clouds) present at the time of the image capture. These networks were tested using a set of 13,000 images. Our results demonstrated that ResNet18 and ResNet34 classifiers produced better classification accuracy when compared to a convolutional neural network classifier. The best overall accuracy was obtained by ResNet34, which was 92% accurate, with a Kappa statistic of 0.77. These results demonstrate a low-cost solution to quality control for future autonomous farming systems that will operate without human intervention and supervision.
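
As an illustrative sketch only (assumed hyperparameters, not the authors' training code), the following fine-tunes a torchvision ResNet18 as the two-class sky-condition classifier described above.

```python
# Sketch: ResNet18 fine-tuned for "good" vs. "degraded" expected image quality.
import torch
import torch.nn as nn
from torchvision import models

def build_sky_classifier(num_classes=2):
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # two sky-condition classes
    return model

model = build_sky_classifier()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One optimization step on a batch of sky images shaped (N, 3, 224, 224)."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```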


Author(s): Zhiying Leng, Zhentao Wang

As an essential method for security inspection in nuclear facilities, digital radiography can efficiently reveal hidden contraband. However, the images obtained by current scanning digital radiography systems can be degraded by several factors, such as statistical noise and the response time of the detectors. At high scanning speed, the statistical noise and vibration of the system deteriorate image quality, and the reduced image quality in turn lowers the accuracy of image observation and recognition. To meet the demands of both detection efficiency and detection quality, image quality must be maintained at high scanning speed. Thus, to improve the image quality of vehicle digital radiography at a given scanning speed, we propose VDR-CNN, a convolutional neural network (CNN) with residual learning that reduces or eliminates image noise. High-quality images obtained at low scanning speed serve as the ground truth for VDR-CNN, while the low-quality counterparts corresponding to high scanning speed serve as the input; each such pair constitutes a training pair. By training the network on a set of these pairs, the mapping that improves image quality is learned automatically, so that a restored image can be obtained from a low-quality input through the trained VDR-CNN. Moreover, this method avoids the difficulty of deriving and analyzing a complicated image degradation model. A series of experiments was carried out on the 60Co inspection system developed by the Institute of Nuclear and New Energy Technology, Tsinghua University. The experimental results show that the method achieves satisfying denoising while preserving image details, and outperforms the BM3D algorithm in both image quality improvement and processing speed. In conclusion, the proposed method improves the image quality of vehicle digital radiography and is shown to be better than traditional methods.
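
The abstract does not give the network layout, so the sketch below is only an assumed residual-learning denoising CNN in the spirit of VDR-CNN (layer count, feature width, and kernel sizes are placeholders), trained on low-speed/high-speed image pairs as described.

```python
# Hedged sketch: residual-learning denoiser that predicts the noise map.
import torch
import torch.nn as nn

class ResidualDenoiser(nn.Module):
    """Restored image = noisy input minus the predicted noise (residual learning)."""
    def __init__(self, channels=1, features=64, depth=8):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, noisy):
        return noisy - self.body(noisy)  # subtract the estimated noise

# Training pairs: low-quality (high scanning speed) inputs vs. high-quality
# (low scanning speed) ground truth, optimized with an L2 reconstruction loss.
model = ResidualDenoiser()
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```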


2013, Vol. 373-375, pp. 1220-1223
Author(s): Hong Yi Li, Ze Xi Li, Chao Jie Wang, Yuan Feng Han, Di Zhao, ...

In recent years, people have paid increasing attention to monitoring the quality of drinking water, which becomes especially necessary after natural disasters such as the Beijing 7.21 rainstorm, since drinking water is one of the main media for the spread of epidemics. Most existing evaluation methods are based on concise mathematical models, which often fail to describe the complex, essentially nonlinear relations between water quality and the chemical substances in the water. In this paper, we propose an evaluation method using the SOM neural network, an unsupervised method that can classify, and therefore evaluate, given water samples. To improve the convergence rate and precision of the SOM neural network when dealing with high-dimensional and highly correlated samples, we add a PCA preprocessing step. Experimental results demonstrate that the improved SOM neural network can evaluate water quality with high precision.
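
A compact sketch of the PCA-preprocessed SOM pipeline follows; the MiniSom and scikit-learn libraries, the grid size, and the iteration count are assumptions standing in for the authors' implementation.

```python
# Sketch: decorrelate chemical indicators with PCA, then cluster with a SOM.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from minisom import MiniSom

def fit_water_quality_som(samples, n_components=3, grid=(6, 6), n_iter=5000):
    # PCA compresses the high-dimensional, highly correlated measurements first,
    # which is the preprocessing step added to speed up SOM convergence.
    X = StandardScaler().fit_transform(samples)
    X = PCA(n_components=n_components).fit_transform(X)

    som = MiniSom(grid[0], grid[1], n_components, sigma=1.0, learning_rate=0.5)
    som.random_weights_init(X)
    som.train_random(X, n_iter)
    return som, X

# Each sample maps to its best-matching unit; groups of units are then
# interpreted as water-quality grades, e.g.:
# som, X = fit_water_quality_som(measurements)
# bmu = som.winner(X[0])
```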


Author(s): Yasin Bakış, Xiaojun Wang, Hank Bart

Over 1 billion biodiversity collection specimens ranging from fungi to fish to fossils are housed in more than 1,600 natural history collections across the United States. The digitization of these specimens has risen significantly within the last few decades and is only likely to increase, as the use of digitized data gains more importance every day. Numerous experiments with automated image analysis have proven the practicality and usefulness of digitized biodiversity images for computational techniques such as neural networks and image processing. However, most computational techniques for analyzing images of biodiversity collection specimens require well-curated data. One of the challenges in curating multimedia data of biodiversity collection specimens is the quality of the multimedia objects, in our case two-dimensional images. To tackle the image quality problem, multimedia needs to be captured in a specific format and presented with appropriate descriptors. In this study we present an analysis of two image repositories, each consisting of 2D images of fish specimens from several institutions: the Integrated Digitized Biocollections (iDigBio) and the Great Lakes Invasives Network (GLIN). Approximately 70 thousand images from the GLIN repository and 450 thousand images from the iDigBio repository were processed and their suitability assessed for use in neural network-based species identification and trait extraction applications. Our findings showed that images from the GLIN dataset were more successful for image processing and machine learning purposes. Almost 40% of the species were represented by fewer than 10 images, while only 20% had more than 100 images per species. We identified and captured 20 metadata descriptors that define the quality and usability of an image. According to the captured metadata information, 70% of the GLIN dataset images were found to be useful for further analysis based on the overall image quality score. Quality issues with the remaining images included curved specimens; non-fish objects such as tags, labels, and rocks obstructing the view of the specimen; color, focus, and brightness problems; and folded, overlapping, or missing parts. We used both the web interface and the API (Application Programming Interface) for downloading images from iDigBio. We searched for all fish genera, families, and classes in three separate searches with the images-only option selected, then combined all of the search results and removed duplicates. Our search on the iDigBio database for fish taxa returned approximately 450 thousand records with images. We narrowed this down to 90 thousand fish images with the aid of the multimedia metadata included in the downloaded search results, excluding non-fish images, fossil samples, X-ray and CT (computed tomography) scans, and several other categories. Only 44% of these 90 thousand images were found to be suitable for further analysis. In this study, we discovered some of the limitations of biodiversity image datasets and built an infrastructure for assessing the quality of biodiversity images for neural network analysis. Our experience with fish images gathered from two different image repositories allowed us to describe image quality metadata features. With the help of these metadata descriptors, one can readily assemble a dataset of the desired image quality for analysis. Likewise, the availability of the metadata descriptors will help advance our understanding of quality issues, while helping data technicians, curators, and other digitization staff be more aware of multimedia quality.
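
The 20 descriptors and the scoring rule are not listed in the abstract, so the field names and the 0.8 threshold below are purely hypothetical; the sketch only illustrates turning per-image quality metadata into an overall usability score.

```python
# Hypothetical sketch: an overall quality score derived from image-quality metadata.
from dataclasses import dataclass

@dataclass
class ImageQualityMetadata:
    specimen_straight: bool     # not curved
    view_unobstructed: bool     # no tags, labels, or rocks covering the specimen
    in_focus: bool
    brightness_ok: bool
    all_parts_visible: bool     # nothing folded, overlapping, or missing

def overall_quality_score(m: ImageQualityMetadata) -> float:
    """Fraction of quality checks passed; images above a chosen threshold
    would be kept for neural-network analysis."""
    checks = [m.specimen_straight, m.view_unobstructed, m.in_focus,
              m.brightness_ok, m.all_parts_visible]
    return sum(checks) / len(checks)

usable = overall_quality_score(
    ImageQualityMetadata(True, True, True, False, True)) >= 0.8
```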


Author(s): Pavel Tšukrejev, Kaarel Kruuser, Georgy Gorbachev, Kristo Karjust, Jüri Majak

One of the most important steps in the manufacturing of solar modules is lamination. This paper focuses on monitoring the behavior of the Ethylene/Vinyl-Acetate (EVA) encapsulant and its impact on the overall quality of the module during lamination. Monitoring is performed by placing an external thermocouple sensor inside the lamination chamber. Real-time analysis of the results helps predict the quality of the final product by ensuring lamination quality in real time, and makes it possible to tune the process during the manufacturing cycle to achieve the best encapsulant cross-linking result.


Author(s): Abi Soliga, Godlin Jasil

Blind Image Quality Assessment (BIQA) methods are for the most part opinion-aware: they learn regression models from training images annotated with human subjective scores in order to predict the perceptual quality of test images. The overall quality of an image and the quality of its patches are measured by average pooling. By integrating features of natural image statistics derived from multiple cues, we fit a multivariate Gaussian model to image patches drawn from a collection of pristine natural images. The proposed radial basis function neural network method is then used to evaluate image quality, and it represents the structure of image distortions flexibly.
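
As a minimal sketch of the pristine-model step only (the feature extractor and the RBF regression stage are omitted, and the distance shown is a common NIQE-style choice rather than necessarily the authors'), the snippet fits a multivariate Gaussian to patch features of pristine natural images and compares a test image against it.

```python
# Sketch: multivariate Gaussian (MVG) model of pristine patch features and a
# Mahalanobis-style quality distance for a test image.
import numpy as np

def fit_mvg(features):
    """features: (n_patches, n_features) extracted from pristine natural images."""
    mu = features.mean(axis=0)
    cov = np.cov(features, rowvar=False)
    return mu, cov

def mvg_quality_distance(pristine, test):
    """Larger distance from the pristine MVG indicates lower perceptual quality."""
    mu1, cov1 = pristine
    mu2, cov2 = test
    diff = mu1 - mu2
    pooled = (cov1 + cov2) / 2.0
    return float(np.sqrt(diff @ np.linalg.pinv(pooled) @ diff))
```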


Author(s): Lu Chen, He Being

To address the low accuracy of current English interpretation teaching quality evaluation, a teaching quality evaluation method based on an RBF neural network optimized by a genetic algorithm (GA) is proposed. First, principal component analysis is used to select the teaching quality evaluation indices; then an RBF neural network teaching evaluation model is designed, and the GA is used to optimize the initial weights of the RBF neural network. Experimental results show that this method can effectively evaluate the quality of English interpretation teaching, with high accuracy and real-time performance.
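
The sketch below is an assumption-laden illustration, not the authors' method: a simple real-coded GA (selection and mutation only) searches RBF centers and a shared width, while the output weights are fit by least squares on the PCA-selected evaluation indices; population size, generations, and fitness are placeholders.

```python
# Sketch: GA search over RBF parameters, output weights by least squares.
import numpy as np

rng = np.random.default_rng(0)

def rbf_design(X, centers, width):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def fitness(genome, X, y, n_centers):
    centers = genome[:-1].reshape(n_centers, X.shape[1])
    width = abs(genome[-1]) + 1e-3
    Phi = rbf_design(X, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # output weights via least squares
    return -np.mean((Phi @ w - y) ** 2)          # negative MSE: higher is better

def ga_optimize_rbf(X, y, n_centers=5, pop=30, gens=50):
    dim = n_centers * X.shape[1] + 1
    population = rng.normal(0, 1, (pop, dim))
    for _ in range(gens):
        scores = np.array([fitness(g, X, y, n_centers) for g in population])
        parents = population[np.argsort(scores)][-pop // 2:]          # keep top half
        children = parents[rng.integers(0, len(parents), pop - len(parents))]
        children = children + rng.normal(0, 0.1, children.shape)      # mutation
        population = np.vstack([parents, children])
    scores = np.array([fitness(g, X, y, n_centers) for g in population])
    return population[np.argmax(scores)]  # best initial RBF parameters found
```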

