A No-Reference Image Quality Model for Object Detection on Embedded Cameras

Author(s):  
Lingchao Kong ◽  
Ademola Ikusan ◽  
Rui Dai ◽  
Jingyi Zhu ◽  
Dara Ros

Automatic video analysis tools are an indispensable component of imaging applications. Object detection, the first and most important step in automatic video analysis, is implemented in many embedded cameras. The accuracy of object detection depends on the quality of the images being processed. This paper proposes a new image quality model for predicting the performance of object detection on embedded cameras. A video data set is constructed that covers different sources of quality degradation in the imaging process, such as reduced resolution, noise, and blur. The performance of commonly used low-complexity object detection algorithms is measured on this data set. A no-reference regression model based on a bagging ensemble of regression trees is built to predict the accuracy of object detection from observable features of an image. Experimental results show that the proposed model predicts image quality for object detection more accurately than commonly used image quality measures.
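
The core modeling idea is standard enough to sketch: a bagging ensemble of regression trees that maps observable image features to a detection-accuracy score. The sketch below uses scikit-learn; the feature names and the synthetic data are illustrative assumptions, not the paper's actual data set.

```python
# Minimal sketch of a bagging ensemble of regression trees for predicting
# object-detection accuracy from observable image features.
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-in data: per-image features such as [noise level, blur metric,
# width, height], and a measured detection accuracy (e.g., F1) per image.
X = rng.random((500, 4))
y = rng.random(500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Bagging: each tree is fit on a bootstrap resample; predictions are averaged.
model = BaggingRegressor(DecisionTreeRegressor(max_depth=8),
                         n_estimators=50, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out images:", model.score(X_te, y_te))
```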

2020 ◽  
Vol 6 (8) ◽  
pp. 75
Author(s):  
Domonkos Varga

The goal of no-reference image quality assessment (NR-IQA) is to predict the quality of an image as perceived by human observers without using any pristine reference images. In this study, an NR-IQA algorithm is proposed that is driven by a novel feature vector containing statistical and perceptual features. Unlike other methods, normalized local fractal dimension distributions and normalized first-digit distributions in the wavelet and spatial domains are incorporated into the statistical features. Moreover, powerful perceptual features, such as colorfulness, the dark channel feature, entropy, and the mean of the phase congruency image, are also incorporated into the proposed model. Experimental results on five large publicly available databases (KADID-10k, ESPL-LIVE HDR, CSIQ, TID2013, and TID2008) show that the proposed method outperforms other state-of-the-art methods.
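
Two of the named features are easy to illustrate. Below is a rough sketch of a wavelet-domain first-digit distribution and the Hasler-Süsstrunk colorfulness metric; the paper's exact normalization may differ, so treat this as a schematic of the idea rather than the authors' implementation.

```python
# Sketch of two hand-crafted NR-IQA features: first-digit (Benford-style)
# statistics of wavelet coefficients, and colorfulness.
import numpy as np
import pywt  # PyWavelets, for the wavelet-domain feature

def first_digit_distribution(coeffs):
    """Distribution of leading digits 1-9 of the coefficient magnitudes."""
    mags = np.abs(coeffs).ravel()
    mags = mags[mags > 0]
    digits = (mags / 10 ** np.floor(np.log10(mags))).astype(int)
    digits = np.clip(digits, 1, 9)  # guard against float edge cases
    hist = np.bincount(digits, minlength=10)[1:10].astype(float)
    return hist / hist.sum()

def colorfulness(rgb):
    """Hasler-Suesstrunk colorfulness on an RGB float image."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    rg, yb = r - g, 0.5 * (r + g) - b
    return np.hypot(rg.std(), yb.std()) + 0.3 * np.hypot(rg.mean(), yb.mean())

gray = np.random.rand(256, 256)           # stand-in luminance image
_, (cH, cV, cD) = pywt.dwt2(gray, "db1")  # one-level wavelet decomposition
feat = first_digit_distribution(
    np.concatenate([cH.ravel(), cV.ravel(), cD.ravel()]))
```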


2020 ◽  
Vol 17 (3) ◽  
pp. 172988142093271
Author(s):  
Xiali Li ◽  
Manjun Tian ◽  
Shihan Kong ◽  
Licheng Wu ◽  
Junzhi Yu

To tackle the water surface pollution problem, a vision-based water surface garbage capture robot has been developed in our lab. In this article, we present a garbage detection method based on a modified You Only Look Once v3 (YOLOv3), allowing real-time, high-precision object detection in dynamic aquatic environments. More specifically, to improve real-time detection performance, the detection scales of YOLOv3 are reduced from three to two. In addition, to preserve detection accuracy, the anchor boxes are re-clustered on our training data set, replacing those original YOLOv3 prior anchor boxes that do not fit our data. By virtue of the proposed detection method, the capture robot is able to clean floating garbage in the field. Experimental results demonstrate that both the detection speed and the accuracy of the modified YOLOv3 surpass those of other object detection algorithms. The obtained results provide valuable insight into the autonomous, intelligent high-speed detection and grasping of dynamic objects in complex aquatic environments.
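
Anchor re-clustering is typically done with k-means under an IoU-based distance on the ground-truth box shapes. The abstract does not spell out the clustering setup, so the following is a hedged sketch of the common recipe, with six anchors matching the two remaining detection scales (three anchors each).

```python
# k-means anchor clustering with distance = 1 - IoU on (width, height) pairs.
import numpy as np

def iou_wh(boxes, centroids):
    """IoU between boxes and centroids given as (w, h), anchored at the origin."""
    inter = (np.minimum(boxes[:, None, 0], centroids[None, :, 0])
             * np.minimum(boxes[:, None, 1], centroids[None, :, 1]))
    union = (boxes[:, 0] * boxes[:, 1])[:, None] \
        + (centroids[:, 0] * centroids[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k=6, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centroids = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        assign = iou_wh(boxes, centroids).argmax(axis=1)  # max IoU = min distance
        new = np.array([boxes[assign == i].mean(axis=0) if np.any(assign == i)
                        else centroids[i] for i in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids

# Placeholder (w, h) pairs standing in for annotated garbage boxes.
boxes = np.abs(np.random.randn(500, 2)) * 50 + 20
print(kmeans_anchors(boxes, k=6))  # 6 anchors for the 2 detection scales
```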


2021 ◽  
Vol 15 ◽  
Author(s):  
Tianshu Song ◽  
Leida Li ◽  
Hancheng Zhu ◽  
Jiansheng Qian

Image quality assessment (IQA) for authentic distortions in the wild is challenging. Although current IQA metrics achieve decent performance on synthetic distortions, they still cannot be satisfactorily applied to realistic distortions because of the generalization problem. Improving generalization ability is an urgent task for making IQA algorithms serviceable in real-world applications, yet relevant research is still rare. Fundamentally, image quality is determined by both distortion degree and intelligibility. However, current IQA metrics mostly focus on the distortion aspect and do not fully investigate intelligibility, which is crucial for robust quality estimation. Motivated by this, this paper presents a new framework for building a highly generalizable image quality model by integrating intelligibility. We first analyze the relation between intelligibility and image quality. We then propose a bilateral network to integrate these two aspects of image quality. During the fusion process, a feature selection strategy is further devised to avoid negative transfer. The framework not only captures conventional distortion features but also properly integrates intelligibility features, yielding a highly generalizable no-reference image quality model. Extensive experiments are conducted on five intelligibility tasks, and the results demonstrate that the proposed approach outperforms state-of-the-art metrics, and that the intelligibility task consistently improves metric performance and generalization ability.
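
To make the two-branch idea concrete, here is a minimal PyTorch sketch of a bilateral model: one branch for distortion features, one for intelligibility features, fused through a learned gate standing in for the feature-selection step. The layer sizes and the gating form are assumptions, not the paper's exact architecture.

```python
# Illustrative bilateral (two-branch) no-reference IQA model.
import torch
import torch.nn as nn

class BilateralIQA(nn.Module):
    def __init__(self, dist_dim=512, intell_dim=512, hidden=256):
        super().__init__()
        self.dist_proj = nn.Sequential(nn.Linear(dist_dim, hidden), nn.ReLU())
        self.intell_proj = nn.Sequential(nn.Linear(intell_dim, hidden), nn.ReLU())
        # The gate softly selects which intelligibility features to pass on,
        # approximating feature selection that avoids negative transfer.
        self.gate = nn.Sequential(nn.Linear(hidden * 2, hidden), nn.Sigmoid())
        self.head = nn.Linear(hidden * 2, 1)  # scalar quality score

    def forward(self, f_dist, f_intell):
        d = self.dist_proj(f_dist)
        i = self.intell_proj(f_intell)
        g = self.gate(torch.cat([d, i], dim=-1))
        return self.head(torch.cat([d, g * i], dim=-1)).squeeze(-1)

model = BilateralIQA()
scores = model(torch.randn(4, 512), torch.randn(4, 512))  # batch of 4 images
```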


Author(s):  
Shuqiang Jiang ◽  
Yonghong Tian ◽  
Qingming Huang ◽  
Tiejun Huang ◽  
Wen Gao

With the explosive growth in the amount of video data and the rapid advance of computing power, extensive research efforts have been devoted to content-based video analysis. In this chapter, the authors give a broad overview of this research area, covering topics such as video structure analysis, object detection and tracking, event detection, and visual attention analysis. In addition, different video representation and indexing models are presented.


2020 ◽  
Vol 12 (19) ◽  
pp. 3118
Author(s):  
Danqing Xu ◽  
Yiquan Wu

High-altitude remote sensing target detection suffers from low precision and low detection rates. To enhance the performance of detecting remote sensing targets, a new YOLO (You Only Look Once)-V3-based algorithm is proposed. In our improved YOLO-V3, we introduce the concept of multi-receptive fields to enhance feature extraction; the proposed model is therefore termed Multi-Receptive Fields Fusion YOLO (MRFF-YOLO). In addition, to address the flaws of YOLO-V3 in detecting small targets, we increase the number of detection layers from three to four. Moreover, to avoid gradient fading, an improved DenseNet structure is adopted in the detection layers. We compared our approach (MRFF-YOLO) with YOLO-V3 and other state-of-the-art target detection algorithms on the Remote Sensing Object Detection (RSOD) dataset and the UCS-AOD dataset of object detection in aerial images. With this series of improvements, the mAP (mean average precision) of MRFF-YOLO increased from 77.10% to 88.33% on the RSOD dataset and from 75.67% to 90.76% on the UCS-AOD dataset. The missed-detection rates are also greatly reduced, especially for small targets. The experimental results show that our approach outperforms traditional YOLO-V3 and other state-of-the-art models for remote sensing target detection.
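
A common way to realize multi-receptive-field fusion is a block of parallel dilated convolutions whose outputs are concatenated and fused by a 1x1 convolution. The abstract does not give the exact branch configuration, so the sketch below is an illustrative stand-in for the kind of block MRFF-YOLO builds on.

```python
# Multi-receptive-field fusion block: parallel dilated 3x3 convs, concat, 1x1 fuse.
import torch
import torch.nn as nn

class MultiReceptiveFieldFusion(nn.Module):
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4)):
        super().__init__()
        # padding=d with dilation=d keeps the spatial size unchanged
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        ])
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x):
        # each branch sees a different effective receptive field
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return self.fuse(feats)

block = MultiReceptiveFieldFusion(256, 256)
y = block(torch.randn(1, 256, 52, 52))  # a typical YOLO-V3 feature map size
print(y.shape)  # torch.Size([1, 256, 52, 52])
```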


Author(s):  
Long Ngo Hoang Truong ◽  
Omar E. Mora ◽  
Wen Cheng ◽  
Hairui Tang ◽  
Mankirat Singh

Surface distress is an indication of poor or unfavorable pavement performance or a sign of impending failure, and can be classified as fracture, distortion, or disintegration. To mitigate the risk of failing roadways, effective methods to detect road distress are needed. Recent studies on detecting road distress with object detection algorithms are encouraging. Although current methodologies are favorable, some of them are inefficient, time-consuming, and costly. For these reasons, the present study presents a methodology based on the Mask Region-based Convolutional Neural Network (Mask R-CNN) model, coupled with the new object detection framework Detectron2, trained on roadway imagery acquired from an unmanned aerial system (UAS). For a comprehensive understanding of the proposed model's performance, different settings are tested. First, the deep learning models are trained on both high- and low-resolution datasets. Second, three different backbone models are explored. Finally, a set of threshold values is tested. The experimental results suggest that the proposed methodology and UAS imagery can be used as efficient tools to detect road distress, with an average precision score of up to 95%.
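
Detectron2 makes this kind of setup compact. The sketch below fine-tunes a Mask R-CNN from the Detectron2 model zoo in the spirit of the methodology; the dataset name, file paths, class count, and schedule are placeholders, not the study's actual settings.

```python
# Minimal Detectron2 training sketch: Mask R-CNN on UAS pavement imagery.
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer
from detectron2.data.datasets import register_coco_instances

# Hypothetical COCO-format annotations exported from the UAS imagery.
register_coco_instances("distress_train", {}, "train.json", "images/train")

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))  # one possible backbone
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.DATASETS.TRAIN = ("distress_train",)
cfg.DATASETS.TEST = ()
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 3          # e.g., fracture/distortion/disintegration
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5  # one candidate threshold value
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.MAX_ITER = 3000

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```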


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Chang Liu ◽  
Samad M.E. Sepasgozar ◽  
Sara Shirowzhan ◽  
Gelareh Mohammadi

Purpose
The practice of artificial intelligence (AI) is increasingly being promoted by technology developers. However, its adoption rate is still reported as low in the construction industry, due to a lack of expertise and the limited number of reliable applications of AI technology. Hence, this paper aims to present the detailed outcomes of experiments evaluating the applicability and performance of AI object detection algorithms for detecting modular construction objects.

Design/methodology/approach
This paper provides a thorough evaluation of two deep learning algorithms for object detection: the faster region-based convolutional neural network (Faster R-CNN) and the single shot multi-box detector (SSD). Two types of metrics are presented: first, average recall and mean average precision by image pixels; second, recall and precision by counting. To conduct the experiments with the selected algorithms, four infrastructure and building construction sites are chosen for data collection, yielding a total of 990 images of three different but common modular objects: modular panels, safety barricades, and site fences.

Findings
The results of the comprehensive evaluation show that the performance of Faster R-CNN and SSD depends on the context in which detection occurs. Indeed, surrounding objects and image backgrounds affect the accuracy of the AI analysis and may particularly affect precision and recall. The analysis of loss lines shows that the loss lines for the selected objects depend on both their geometry and the image background. The results for the selected objects show that Faster R-CNN offers higher accuracy than SSD.

Research limitations/implications
The results show that modular object detection is crucial in construction for obtaining the information required for project quality and safety objectives. The detection process can significantly improve monitoring of object installation progress in an accurate, machine-based manner that avoids human error. The results of this paper are limited to three construction sites, but future investigations can cover more tasks or objects from different construction sites in a fully automated manner.

Originality/value
This paper's originality lies in offering new AI applications in modular construction, using a large first-hand data set collected from three construction sites. Furthermore, the paper presents the scientific evaluation results of implementing recent object detection algorithms across a set of extended metrics, using the original training and validation data sets to improve the generalisability of the experimentation. This paper also provides practitioners and scholars with a workflow for AI applications in the modular context and first-hand referencing data.
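
The "recall and precision by counting" metric is straightforward to state in code: a predicted box counts as a true positive when it overlaps an unmatched ground-truth box with IoU above a threshold. The matching rule below is greedy, which is an assumption, since the paper does not specify one.

```python
# Counting-based precision/recall from IoU-matched detections.
def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def count_precision_recall(preds, gts, thresh=0.5):
    matched, tp = set(), 0
    for p in preds:
        for j, g in enumerate(gts):
            if j not in matched and iou(p, g) >= thresh:
                matched.add(j)
                tp += 1
                break
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(gts) if gts else 0.0
    return precision, recall

preds = [(10, 10, 50, 50), (60, 60, 90, 90)]  # detector output
gts = [(12, 8, 52, 48)]                        # annotated modular panel
print(count_precision_recall(preds, gts))      # (0.5, 1.0)
```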


Author(s):  
Shaikh Shakil Abdul Rajjak ◽  
A. K. Kureshi

As technology advances, imaging sensors with higher resolutions and higher frame rates are becoming more popular for wide-area video surveillance (VS) and other applications. We propose a deep-learning approach to multiple-object detection and segmentation in high-resolution video based on Mask R-CNN, with ResNet-50 and ResNet-101 used as backbones in the proposed Mask R-CNN FPN model. The deep residual network's design overcomes the reduced learning efficiency that comes with deepening a network: to minimize the overall error, the residual network splits training into blocks and minimizes the error of each block. The backbone is roughly divided into five convolutional stages, with the output scale halved at each stage. We train the model with mixed FP16/FP32 precision, which substantially reduces both training time and inference time. The COCO 2014 data set is used to train and validate the proposed model. Experimental results show that the model runs at 30–48 frames per second with 85% accuracy.
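
Mixed FP16/FP32 training is commonly done with PyTorch automatic mixed precision. The sketch below shows the pattern with a torchvision Mask R-CNN standing in for the paper's model; the optimizer settings and data pipeline are assumptions.

```python
# Mixed-precision (FP16/FP32) training step using torch.cuda.amp.
import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights=None).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
scaler = torch.cuda.amp.GradScaler()  # scales FP16 gradients to avoid underflow

def train_step(images, targets):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():          # forward pass in FP16 where safe
        loss_dict = model(images, targets)   # Mask R-CNN returns a loss dict
        loss = sum(loss_dict.values())
    scaler.scale(loss).backward()            # scaled FP16 backward pass
    scaler.step(optimizer)                   # unscales, then FP32 weight update
    scaler.update()
    return loss.item()
```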


Author(s):  
Michael Schatz ◽  
Joachim Jäger ◽  
Marin van Heel

Lumbricus terrestris erythrocruorin is a giant oxygen-transporting macromolecule in the blood of the common earthworm (worm "hemoglobin"). In our current study, we use specimens (kindly provided by Drs W.E. Royer and W.A. Hendrickson) embedded in vitreous ice (1) to avoid artefacts encountered with the negative-stain preparation technique used in previous studies (2-4). Although the molecular structure is well preserved in vitreous ice, the low contrast and high noise level in the micrographs pose a serious problem for image interpretation. Moreover, in this type of preparation the molecules can exhibit many different orientations relative to the object plane of the microscope. Existing techniques of analysis, which require aligning the molecular views to one or more reference images, therefore often yield unsatisfactory results. We use a new method in which rotation-, translation- and mirror-invariant functions (5) are first derived from the large set of input images; these functions are then classified automatically using multivariate statistical techniques (6). The different molecular views in the data set can thus be found without reference bias (5). Within each class, all images are aligned relative to the member that contributes least to the class's internal variance (6); this reference image is the most typical member of the class. Finally, the aligned images from each class are averaged, yielding molecular views with enhanced statistical resolution.
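
The flow of that pipeline can be sketched in a few lines: build an invariant representation per particle image, classify the invariants with multivariate statistics, then average each class. The invariant below (a rotationally averaged Fourier amplitude profile, which is translation-, rotation-, and mirror-invariant) is a deliberately simple stand-in for the more sophisticated invariant functions of the study, and PCA plus k-means stands in for its multivariate statistical classification.

```python
# Rough sketch of reference-free classification and class averaging.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def invariant_profile(img, n_bins=32):
    """Translation invariance via the Fourier amplitude; rotation/mirror
    invariance via radial averaging of that amplitude."""
    amp = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h / 2, xx - w / 2)
    bins = np.minimum((r / r.max() * n_bins).astype(int), n_bins - 1)
    return (np.bincount(bins.ravel(), weights=amp.ravel(), minlength=n_bins)
            / np.bincount(bins.ravel(), minlength=n_bins))

images = np.random.rand(200, 64, 64)  # stand-in for a particle image stack
invariants = np.array([invariant_profile(im) for im in images])
feats = PCA(n_components=8).fit_transform(invariants)       # multivariate reduction
labels = KMeans(n_clusters=10, n_init=10).fit_predict(feats)
class_averages = [images[labels == k].mean(axis=0) for k in range(10)]
```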

