Intelligent Image Saliency Detection Method Based on Convolution Neural Network Combining Global and Local Information

2022 ◽  
Vol 2022 ◽  
pp. 1-9
Author(s):  
Songshang Zou ◽  
Wenshu Chen ◽  
Hao Chen

Image saliency object detection can rapidly extract useful information from image scenes for further analysis. At present, traditional saliency target detection techniques still struggle to preserve the edges of salient targets well. A convolutional neural network (CNN) can extract highly general deep features from images and effectively express their essential feature information. This paper designs a model that applies CNNs to deep saliency object detection tasks. It can efficiently optimize the edges of foreground objects and realize highly efficient image saliency detection through continuous multilayer feature extraction, refinement of layered boundaries, and fusion of initial saliency features. The experimental results show that the proposed method achieves more robust saliency detection and adapts itself to complex background environments.
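The multilayer-extraction-then-fusion idea in this abstract can be illustrated with a toy center-surround sketch: compute a contrast map at several scales and average them. This is only a minimal stand-in for intuition, not the authors' CNN model; the scales and box-filter surround are assumptions.

```python
import numpy as np

def multiscale_saliency(image, scales=(1, 2, 4)):
    """Toy sketch: fuse center-surround contrast maps computed at several
    scales, illustrating multilayer feature extraction followed by fusion.
    Not the paper's network -- purely an intuition aid."""
    h, w = image.shape
    fused = np.zeros((h, w), dtype=float)
    for s in scales:
        # surround = local mean over a (2s+1) x (2s+1) box, via shifted sums
        padded = np.pad(image.astype(float), s, mode="edge")
        surround = np.zeros((h, w), dtype=float)
        for dy in range(-s, s + 1):
            for dx in range(-s, s + 1):
                surround += padded[s + dy : s + dy + h, s + dx : s + dx + w]
        surround /= (2 * s + 1) ** 2
        fused += np.abs(image - surround)   # center-surround contrast
    fused /= len(scales)
    return fused / (fused.max() + 1e-8)     # normalise to [0, 1]
```

On a bright square over a dark background, the fused map peaks along the square's boundary, which is exactly the edge information the abstract says must be preserved.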

Author(s):  
Bo Li ◽  
Zhengxing Sun ◽  
Yuqi Guo

Image saliency detection has recently witnessed rapid progress due to deep neural networks. However, many important problems remain in existing deep-learning-based methods. Pixel-wise convolutional neural network (CNN) methods suffer from blurry boundaries due to the convolutional and pooling operations, while region-based deep learning methods lack spatial consistency since they deal with each region independently. In this paper, we propose a novel salient object detection framework using a superpixel-wise variational autoencoder (SuperVAE) network. We first use the VAE to model the image background and then separate salient objects from the background through the reconstruction residuals. To better capture semantic and spatial context information, we also propose a perceptual loss that takes advantage of deep pre-trained CNNs to train our SuperVAE network. Without the supervision of mask-level annotated data, our method generates high-quality saliency results which better preserve object boundaries and maintain spatial consistency. Extensive experiments on five widely used benchmark datasets show that the proposed method achieves superior or competitive performance compared to other algorithms, including very recent state-of-the-art supervised methods.
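The reconstruction-residual principle behind SuperVAE can be sketched with a much simpler background model: fit a low-rank subspace to all patches (PCA standing in for the VAE, purely to keep the sketch dependency-free), reconstruct each patch from it, and score saliency by the residual. Patches that the background model cannot reconstruct score high.

```python
import numpy as np

def residual_saliency(patches, n_components=2):
    """Illustrative stand-in for reconstruction-residual saliency:
    model the dominant background subspace with PCA and score each patch
    by its reconstruction error. The actual SuperVAE uses a variational
    autoencoder over superpixels; PCA is an assumption made here only
    for a self-contained sketch."""
    X = patches - patches.mean(axis=0)
    # background subspace = top principal directions of all patches
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    B = Vt[:n_components]                      # (k, d) basis
    recon = X @ B.T @ B                        # project onto background
    return np.linalg.norm(X - recon, axis=1)   # residual = saliency score
```

Because most patches belong to the background, the subspace fits them well and their residual is near zero, while rare (salient) patches stick out of the subspace.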


Sensors ◽  
2020 ◽  
Vol 20 (23) ◽  
pp. 6779
Author(s):  
Byung-Gil Han ◽  
Joon-Goo Lee ◽  
Kil-Taek Lim ◽  
Doo-Hyun Choi

With the increase in research on convolutional neural network (CNN)-based object detection, studies on lightweight CNN models that can run in real time on edge-computing devices are also increasing. This paper proposes scalable convolutional blocks with which CNN networks for the You Only Look Once (YOLO) detector can be easily designed: by simply exchanging the proposed blocks, processing speed and accuracy can be balanced for target edge-computing devices with different performance levels. The maximum number of kernels in a convolutional layer was determined through simple but intuitive speed-comparison tests on the three edge-computing devices considered, and the scalable convolutional blocks were designed within this kernel limit so that objects can be detected in real time on these devices. Three scalable and fast YOLO detectors (SF-YOLO) designed with the proposed blocks were compared, in processing speed and accuracy, against several conventional lightweight YOLO detectors on the edge-computing devices. Compared with YOLOv3-tiny, SF-YOLO was twice as fast at the same accuracy, and it was 48% faster than YOLOv3-tiny-PRN, a model aimed at processing-speed improvement. Even the large SF-YOLO model, which focuses on accuracy, achieved a 10% faster processing speed with a better accuracy of 40.4% mAP@0.5 on the MS COCO dataset than the YOLOv4-tiny model.
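The core design constraint described here, scaling a block's channel count while clamping it to the kernel budget measured on the target device, can be sketched as a small configuration generator. The layer names, the 3x3/1x1 pattern, and the width-multiplier scheme below are assumptions for illustration, not the authors' exact SF-YOLO block.

```python
def scalable_block(in_ch, width_mult, max_kernels=128):
    """Hypothetical sketch of a scalable convolutional block: the output
    channel count grows with a width multiplier but is clamped to the
    maximum kernel count found fastest on the target edge device."""
    out_ch = min(int(in_ch * width_mult), max_kernels)
    return [
        {"op": "conv3x3", "in": in_ch, "out": out_ch},
        {"op": "bn_leaky", "ch": out_ch},
        {"op": "conv1x1", "in": out_ch, "out": out_ch},
    ]
```

Exchanging blocks for a slower device then amounts to lowering `max_kernels` (or the multiplier) without touching the rest of the network definition.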


2021 ◽  
pp. 104243
Author(s):  
Zhenyu Wang ◽  
Yunzhou Zhang ◽  
Yan Liu ◽  
Shichang Liu ◽  
Sonya Coleman ◽  
...  

2019 ◽  
Vol 9 (23) ◽  
pp. 5220
Author(s):  
Sun ◽  
Deng ◽  
Liu ◽  
Deng

In order to address the problems of various interference factors and small-sample acquisition in surface floating object detection, an object detection algorithm combining the spatial and frequency domains is proposed. First, rough texture detection is performed in the spatial domain: a Fused Histogram of Oriented Gradients (FHOG) is combined with a Gray Level Co-occurrence Matrix (GLCM) to describe the global and local information of floating objects, and sliding windows are classified by a Support Vector Machine (SVM) using the new texture features. Then, a novel frequency-based saliency detection method for complex scenes is proposed. It adopts global and local low-rank decompositions to remove redundant regions caused by multiple interferences while retaining floating objects. The final detection result is obtained by a strategy that combines bounding boxes from the two processing domains. Experimental results show that the overall performance of the proposed method is superior to other popular methods, including traditional image segmentation, saliency detection, hand-crafted texture detection, and Convolutional Neural Network-based (CNN-based) object detection. The proposed method is characterized by small-sample training and strong anti-interference ability in complex water scenes with ripples, reflections, and uneven illumination. Its average precision is 97.2%, with a time consumption of only 0.504 s.
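The GLCM texture cue used in the spatial-domain stage can be shown in miniature: quantize the image, count co-occurring gray-level pairs at a fixed offset, and derive classic Haralick statistics. A single (0, 1) offset and two statistics are assumed here; the paper's FHOG+GLCM descriptor is richer.

```python
import numpy as np

def glcm_features(img, levels=8):
    """Minimal Gray Level Co-occurrence Matrix at offset (0, 1), with two
    classic texture statistics: contrast (high for busy textures) and
    energy (high for uniform regions). A toy version of the GLCM cue
    the method fuses with FHOG before SVM classification."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1                    # count horizontal neighbour pairs
    p = glcm / glcm.sum()                  # normalise to a joint distribution
    i, j = np.indices(p.shape)
    contrast = ((i - j) ** 2 * p).sum()
    energy = (p ** 2).sum()
    return contrast, energy
```

Calm water produces low contrast and high energy, while ripple-textured windows invert both, which is why such statistics help an SVM separate floating objects from water interference.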


2021 ◽  
Vol 33 (11) ◽  
pp. 1688-1697
Author(s):  
Zheng Chen ◽  
Xiaoli Zhao ◽  
Jiaying Zhang ◽  
Mingchen Yin ◽  
Hanchen Ye ◽  
...  

2016 ◽  
Vol 4 (6) ◽  
pp. 170-182 ◽  
Author(s):  
Devrat Arya ◽  
Jaimala Jha

Research in content-based image retrieval (CBIR) is ongoing and the field is growing in popularity. Here, retrieval of an image is done using a technique that searches for the necessary features of the image; the main goal of CBIR is to return efficient, accurate, and fast results. The proposed algorithm fuses multiple features covering color, texture, and shape. A global and local descriptor (GLD) is proposed in this paper, comprising a Global Correlation Descriptor (GCD) and the Discrete Wavelet Transform (DWT) to extract color and texture features, respectively, so that these features carry equal weight in CBIR. In addition, a Global Correlation Vector (GCV) and a Directional Global Correlation Vector (DGCV) are proposed, which integrate the advantages of histogram statistics and the Color Structure Descriptor (CSD) to characterize color and texture features, respectively. The shape feature is implemented with Hu moments (HM), extracting eight moments per image. For the classification process, a kernel Support Vector Machine (SVM) is applied. The experiments report precision, recall, F-measure, and execution time on two datasets: Corel-1000 and Soccer-280.
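One practical step the abstract glosses over is how heterogeneous descriptors are combined "so that these features carry equal weight". A common approach, assumed here as a sketch rather than the paper's exact GCD/DGCV construction, is to L2-normalise each descriptor before concatenation:

```python
import numpy as np

def fuse_features(color_hist, texture_vec, shape_vec):
    """Toy multi-feature fusion: L2-normalise each descriptor so colour,
    texture (e.g. DWT statistics), and shape (e.g. Hu moments) contribute
    on the same scale, then concatenate into one vector for a kernel SVM.
    Descriptor names are placeholders, not the paper's exact descriptors."""
    parts = []
    for v in (color_hist, texture_vec, shape_vec):
        v = np.asarray(v, dtype=float)
        parts.append(v / (np.linalg.norm(v) + 1e-12))  # unit-length block
    return np.concatenate(parts)
```

Without such per-descriptor normalisation, a high-magnitude histogram would dominate the SVM kernel and drown out the shape moments.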


2021 ◽  
Vol 9 (7) ◽  
pp. 753
Author(s):  
Tao Liu ◽  
Bo Pang ◽  
Lei Zhang ◽  
Wei Yang ◽  
Xiaoqiang Sun

Unmanned surface vehicles (USVs) have been extensively used in various dangerous maritime tasks. Vision-based sea surface object detection algorithms can improve the environment-perception abilities of USVs. In recent years, object detection algorithms based on neural networks have greatly enhanced the accuracy and speed of object detection, but balancing speed against accuracy remains a difficulty when applying them to USVs, and most existing object detection algorithms show limited performance in this setting. Therefore, a sea surface object detection algorithm based on You Only Look Once v4 (YOLO v4) is proposed. Reverse Depthwise Separable Convolution (RDSC) was developed and applied to the backbone and feature-fusion networks of YOLO v4, reducing the number of weights of the improved YOLO v4 by more than 40% compared with the original. A large number of ablation experiments were conducted on the improved YOLO v4 using the ship dataset SeaShips and a buoy dataset, SeaBuoys. The experimental results showed that the detection speed of the improved YOLO v4 increased by more than 20%, and mAP increased by 1.78% and 0.95% on the two datasets, respectively. The improved YOLO v4 thus effectively raises both the speed and the accuracy of sea surface object detection. Fused with RDSC, it has a smaller network size and better real-time performance, can easily be deployed on hardware platforms with weak computing power, and has shown great application potential in sea surface object detection.
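RDSC builds on the standard depthwise-separable factorisation, which is worth seeing concretely: a per-channel spatial pass followed by a 1x1 pointwise mix needs far fewer weights than a full convolution. The sketch below shows only that plain factorisation; the "reverse" ordering and other RDSC specifics from the paper are not reproduced.

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernels, pw_weights):
    """Plain depthwise-separable convolution on a (C, H, W) tensor:
    each channel is filtered by its own k x k kernel (depthwise), then a
    (out_C, C) matrix mixes channels at every pixel (pointwise, i.e. 1x1).
    Naive loops, for clarity rather than speed."""
    c, h, w = x.shape
    k = dw_kernels.shape[-1]
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    dw = np.zeros((c, h, w))
    for ch in range(c):                    # depthwise: one kernel per channel
        for i in range(h):
            for j in range(w):
                dw[ch, i, j] = (xp[ch, i:i + k, j:j + k] * dw_kernels[ch]).sum()
    # pointwise: 1x1 conv mixing channels
    return np.tensordot(pw_weights, dw, axes=([1], [0]))
```

For C input and O output channels with k x k kernels, this needs C*k*k + O*C weights instead of O*C*k*k, which is the kind of reduction behind the reported 40%+ weight savings.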

