MSF-Net: Multi-Scale Feature Learning Network for Classification of Surface Defects of Multifarious Sizes

Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5125
Author(s):  
Pengcheng Xu ◽  
Zhongyuan Guo ◽  
Lei Liang ◽  
Xiaohang Xu

In surface defect detection, the scale of product surface defects often varies enormously. Existing defect detection methods based on Convolutional Neural Networks (CNNs) tend to express macroscopic, abstract features, while their ability to express local, small defects is insufficient, resulting in an imbalance of feature expression capability. In this paper, a Multi-Scale Feature Learning Network (MSF-Net) based on a Dual Module Feature (DMF) extractor is proposed. The DMF extractor is mainly composed of optimized Concatenated Rectified Linear Units (CReLUs) and optimized Inception feature extraction modules, which increase the diversity of feature receptive fields while reducing the amount of computation; intermediate-layer feature maps with different receptive-field sizes are merged to enrich the receptive fields of the last layer of feature maps; and residual shortcut connections, batch normalization layers, and an average pooling layer that replaces the fully connected layer are used to improve training efficiency while keeping the multi-scale feature learning ability balanced. Two representative multi-scale defect data sets are used for experiments, and the experimental results verify the effectiveness and advantages of the proposed MSF-Net in detecting surface defects with multi-scale features.
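Since the abstract does not give the exact layer configuration, the following PyTorch sketch only illustrates the two ingredients it names: a CReLU unit and an Inception-style multi-branch block joined by a residual shortcut, with global average pooling in place of a fully connected head. All channel splits and kernel sizes are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

class CReLU(nn.Module):
    """Concatenated ReLU: stacks ReLU(x) and ReLU(-x) along the channel axis,
    doubling the channel count and feature diversity at no parameter cost."""
    def forward(self, x):
        return torch.cat([torch.relu(x), torch.relu(-x)], dim=1)

class DMFBlock(nn.Module):
    """Hypothetical DMF-style block: a CReLU stage followed by Inception-style
    parallel convolutions with different receptive fields, merged through a
    residual shortcut (out_ch is assumed divisible by 4)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(in_ch, out_ch // 2, 3, padding=1),
            nn.BatchNorm2d(out_ch // 2),
            CReLU(),                                  # -> out_ch channels
        )
        self.branch1 = nn.Conv2d(out_ch, out_ch // 2, 1)
        self.branch3 = nn.Conv2d(out_ch, out_ch // 4, 3, padding=1)
        self.branch5 = nn.Conv2d(out_ch, out_ch // 4, 5, padding=2)
        self.bn = nn.BatchNorm2d(out_ch)
        self.shortcut = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        y = self.stem(x)
        y = torch.cat([self.branch1(y), self.branch3(y), self.branch5(y)], dim=1)
        return torch.relu(self.bn(y) + self.shortcut(x))

def make_head(channels, num_classes):
    """Global average pooling in place of a fully connected classifier head."""
    return nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                         nn.Linear(channels, num_classes))
```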

Symmetry ◽  
2018 ◽  
Vol 10 (10) ◽  
pp. 457 ◽  
Author(s):  
Dandan Zhu ◽  
Lei Dai ◽  
Ye Luo ◽  
Guokai Zhang ◽  
Xuan Shao ◽  
...  

Previous saliency detection methods usually focused on extracting powerful discriminative features to describe images with complex backgrounds. Recently, the generative adversarial network (GAN) has shown a great ability in feature learning for synthesizing high-quality natural images. Building on this superior feature learning ability, we present a new multi-scale adversarial feature learning (MAFL) model for image saliency detection. The model is composed of two convolutional neural network (CNN) modules: the multi-scale G-network takes natural images as inputs and generates the corresponding synthetic saliency maps, while the D-network, equipped with a novel correlation layer, determines whether an input is a synthetic saliency map or a ground-truth saliency map. Quantitative and qualitative comparisons on several public datasets show the superiority of our approach.
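The abstract does not define the correlation layer precisely, so the PyTorch sketch below illustrates only one plausible reading: the D-network encodes the image and a (real or synthetic) saliency map separately and correlates the two feature maps per location before scoring real versus fake. Layer sizes and the encoder design are assumptions.

```python
import torch
import torch.nn as nn

class CorrelationLayer(nn.Module):
    """Per-location similarity between image features and saliency-map features:
    a channel-wise dot product producing a single-channel correlation map."""
    def forward(self, img_feat, sal_feat):
        return (img_feat * sal_feat).sum(dim=1, keepdim=True)   # (N, 1, H, W)

class Discriminator(nn.Module):
    """Toy D-network: separate encoders for the image and the saliency map,
    a correlation layer, and a small head scoring real vs. synthetic maps."""
    def __init__(self):
        super().__init__()
        self.img_enc = nn.Sequential(nn.Conv2d(3, 32, 3, 2, 1), nn.LeakyReLU(0.2))
        self.sal_enc = nn.Sequential(nn.Conv2d(1, 32, 3, 2, 1), nn.LeakyReLU(0.2))
        self.corr = CorrelationLayer()
        self.head = nn.Sequential(
            nn.Conv2d(1, 16, 3, 2, 1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 1), nn.Sigmoid(),
        )

    def forward(self, image, saliency):
        return self.head(self.corr(self.img_enc(image), self.sal_enc(saliency)))
```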


2019 ◽  
Vol 11 (5) ◽  
pp. 594 ◽  
Author(s):  
Shuo Zhuang ◽  
Ping Wang ◽  
Boran Jiang ◽  
Gang Wang ◽  
Cong Wang

With the rapid advances in remote-sensing technologies and the growing number of satellite images, fast and effective object detection plays an important role in understanding and analyzing image information, which can be further applied in civilian and military fields. Recently, object detection methods based on region-based convolutional neural networks have shown excellent performance. However, these two-stage methods contain region proposal generation and object detection procedures, resulting in low computation speed. Because of expensive manual annotation costs, well-annotated aerial images are scarce, which also limits the progress of geospatial object detection in remote sensing. In this paper, on the one hand, we construct and release a large-scale remote-sensing dataset for geospatial object detection (RSD-GOD) that consists of 5 different categories with 18,187 annotated images and 40,990 instances. On the other hand, we design a single-shot detection framework with multi-scale feature fusion. The feature maps from different layers are fused through up-sampling and concatenation blocks to predict the detection results. High-level features with semantic information and low-level features with fine details are fully exploited for detection, especially for small objects. Meanwhile, a soft non-maximum suppression strategy is applied to select the final detection results. Extensive experiments have been conducted on two datasets to evaluate the designed network. Results show that the proposed approach achieves good detection performance, obtaining a mean average precision of 89.0% on the newly constructed RSD-GOD dataset and 83.8% on the Northwestern Polytechnical University very high spatial resolution-10 (NWPU VHR-10) dataset at 18 frames per second (FPS) on an NVIDIA GTX-1080Ti GPU.
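As an illustration of the up-sampling-and-concatenation fusion and the soft non-maximum suppression mentioned above, the PyTorch sketch below shows a generic fusion block and the Gaussian soft-NMS score decay; channel counts and the sigma value are assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UpsampleConcatFusion(nn.Module):
    """Generic fusion block: up-sample the coarse, semantically rich map to the
    resolution of the fine, detail-rich map, concatenate, and reduce channels."""
    def __init__(self, high_ch, low_ch, out_ch):
        super().__init__()
        self.reduce = nn.Conv2d(high_ch + low_ch, out_ch, 1)

    def forward(self, high, low):
        high_up = F.interpolate(high, size=low.shape[-2:], mode="bilinear",
                                align_corners=False)
        return torch.relu(self.reduce(torch.cat([high_up, low], dim=1)))

def soft_nms_decay(scores, ious, sigma=0.5):
    """Gaussian soft-NMS: decay the scores of overlapping boxes instead of
    discarding them outright (sigma is an assumed value)."""
    return scores * torch.exp(-(ious ** 2) / sigma)
```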


2019 ◽  
Vol 11 (2) ◽  
pp. 142 ◽  
Author(s):  
Wenping Ma ◽  
Hui Yang ◽  
Yue Wu ◽  
Yunta Xiong ◽  
Tao Hu ◽  
...  

In this paper, a novel change detection approach based on multi-grained cascade forest (gcForest) and multi-scale fusion for synthetic aperture radar (SAR) images is proposed. It detects the changed and unchanged areas of the images using a well-trained gcForest. Most existing change detection methods need to select an appropriate image-block size; however, a single-size image block provides only part of the local information, and gcForest cannot achieve good image representation learning from it. Therefore, the proposed approach feeds image blocks of different sizes into gcForest, which allows it to learn more image characteristics and reduces the influence of purely local information on the classification result. In addition, to improve the detection accuracy for pixels whose gray values change abruptly, the proposed approach combines the gradient information of the difference image with the probability map obtained from the well-trained gcForest. By extracting the image gradient information, the image edge information is enhanced and the accuracy of edge detection is improved. Experiments on four data sets indicate that the proposed approach outperforms other state-of-the-art algorithms.
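A minimal NumPy sketch of the edge-aware fusion step described above is given below: it blends the classifier's change-probability map with the normalized gradient magnitude of the difference image. The weighting and threshold are assumed values for illustration only.

```python
import numpy as np

def fuse_probability_with_gradient(prob_map, diff_image, alpha=0.7, thr=0.5):
    """Blend the gcForest change-probability map with the normalized gradient
    magnitude of the difference image to sharpen edges (alpha and thr are
    assumed values, not taken from the paper)."""
    gy, gx = np.gradient(diff_image.astype(np.float64))
    grad = np.hypot(gx, gy)
    grad = (grad - grad.min()) / (grad.max() - grad.min() + 1e-12)
    fused = alpha * prob_map + (1.0 - alpha) * grad
    return (fused > thr).astype(np.uint8)        # 1 = changed, 0 = unchanged
```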


2020 ◽  
Vol 10 (23) ◽  
pp. 8434
Author(s):  
Peiran Peng ◽  
Ying Wang ◽  
Can Hao ◽  
Zhizhong Zhu ◽  
Tong Liu ◽  
...  

Fabric defect detection is very important in the textile quality-control process. Current deep learning algorithms are not effective at detecting tiny fabric defects or defects with extreme aspect ratios. In this paper, we propose a strong detection method, the Priori Anchor Convolutional Neural Network (PRAN-Net), for fabric defect detection, to improve the detection and localization accuracy of fabric defects and decrease the inspection time. First, we use a Feature Pyramid Network (FPN) with selected multi-scale feature maps to preserve more detailed information about tiny defects. Secondly, we propose a trick that generates sparse priori anchors from the ground-truth boxes of fabric defects, instead of using fixed anchors, to locate extreme defects more accurately and efficiently. Finally, a classification network classifies the fabric defects and refines their positions. The method was validated on two self-made fabric datasets. Experimental results indicate that our method significantly improves the accuracy and efficiency of fabric defect detection and is better suited to automatic fabric defect detection.
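The abstract does not spell out the anchor-generation trick, so the sketch below substitutes a common data-driven alternative, k-means clustering of ground-truth box widths and heights, purely to illustrate how sparse priori anchors could be derived from the annotations; it is not PRAN-Net's actual procedure.

```python
import numpy as np

def priori_anchors_from_gt(gt_wh, k=9, iters=50, seed=0):
    """Cluster ground-truth (width, height) pairs into k anchor shapes with a
    plain k-means loop; an illustrative stand-in for data-driven priori anchors."""
    gt_wh = np.asarray(gt_wh, dtype=np.float64)
    rng = np.random.default_rng(seed)
    centers = gt_wh[rng.choice(len(gt_wh), size=k, replace=False)].copy()
    for _ in range(iters):
        dists = np.linalg.norm(gt_wh[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = gt_wh[labels == j].mean(axis=0)
    return centers                                # (k, 2) anchor widths/heights
```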


2020 ◽  
Vol 194 ◽  
pp. 102881
Author(s):  
Michael Edwards ◽  
Xianghua Xie ◽  
Robert I. Palmer ◽  
Gary K.L. Tam ◽  
Rob Alcock ◽  
...  

2020 ◽  
Vol 16 (3) ◽  
pp. 132-145
Author(s):  
Gang Liu ◽  
Chuyi Wang

Neural network models have been widely used in the field of object detection. Region proposal methods are widely used in current object detection networks and have achieved good performance. Common region proposal methods search for objects by generating thousands of candidate boxes. Compared to other region proposal methods, the region proposal network (RPN) improves accuracy and detection speed with only several hundred candidate boxes. However, since its feature maps contain insufficient information, the ability of the RPN to detect and locate small-sized objects is poor. A novel multi-scale feature fusion method for the region proposal network is proposed in this article to solve the above problems. The proposed method, called the multi-scale region proposal network (MS-RPN), generates feature maps suitable for the region proposal network. In MS-RPN, the selected feature maps at multiple scales are fine-tuned respectively and compressed into a uniform space. The generated fusion feature maps are called refined fusion features (RFFs). RFFs incorporate abundant detail information and context information, and are sent to the RPN to generate better region proposals. The proposed approach is evaluated on the PASCAL VOC 2007 and MS COCO benchmark tasks. MS-RPN obtains significant improvements over comparable state-of-the-art detection models.
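A PyTorch sketch of the general idea behind the refined fusion features is shown below: per-scale adapters, resizing to a common resolution, and compression into a uniform space before the RPN head. The backbone channel counts and output width are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RefinedFusionFeatures(nn.Module):
    """Adapt feature maps from several backbone stages, resize them to one
    spatial resolution, and compress the concatenation into a uniform space
    that is then fed to the RPN (channel sizes are assumptions)."""
    def __init__(self, in_channels=(256, 512, 1024), mid_ch=256, out_ch=256):
        super().__init__()
        self.adapters = nn.ModuleList(nn.Conv2d(c, mid_ch, 1) for c in in_channels)
        self.compress = nn.Sequential(
            nn.Conv2d(mid_ch * len(in_channels), out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )

    def forward(self, feats):                    # list of (N, C_i, H_i, W_i)
        target = feats[0].shape[-2:]             # fuse at the finest resolution
        resized = [F.interpolate(adapt(f), size=target, mode="bilinear",
                                 align_corners=False)
                   for adapt, f in zip(self.adapters, feats)]
        return self.compress(torch.cat(resized, dim=1))
```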


2021 ◽  
Author(s):  
Jesús García Fernández ◽  
Siamak Mehrkanoon

2021 ◽  
Author(s):  
Hung-Hao Chen ◽  
Chia-Hung Wang ◽  
Hsueh-Wei Chen ◽  
Pei-Yung Hsiao ◽  
Li-Chen Fu ◽  
...  

Current fusion-based methods transform LiDAR data into bird's-eye-view (BEV) representations or 3D voxels, leading to information loss and the heavy computation cost of 3D convolution. In contrast, we directly consume raw point clouds and perform fusion between the two modalities. We employ the concept of a region proposal network to generate proposals from the two streams, respectively. To make the two sensors compensate for each other's weaknesses, we use the calibration parameters to project proposals from one stream onto the other. With the proposed multi-scale feature aggregation module, we are able to combine the extracted region-of-interest-level (RoI-level) features of the RGB stream from different receptive fields, enriching the resulting features. Experiments on the KITTI dataset show that our proposed network outperforms other fusion-based methods, with meaningful improvements over 3D object detection methods under the challenging setting.
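The sketch below illustrates, with torchvision's RoI Align, how RoI-level features of the RGB stream could be pooled from feature maps with different receptive fields and concatenated, together with a standard pinhole projection of 3D points using a calibration matrix; strides, pooling size, and tensor layouts are assumptions rather than the paper's configuration.

```python
import torch
from torchvision.ops import roi_align

def project_to_image(points_cam, P):
    """Project 3D points already expressed in the camera frame onto the image
    plane using a 3x4 projection matrix P (e.g., from KITTI calibration files)."""
    ones = torch.ones_like(points_cam[:, :1])
    uvw = torch.cat([points_cam, ones], dim=1) @ P.T     # (N, 3) homogeneous
    return uvw[:, :2] / uvw[:, 2:3]                      # (N, 2) pixel coords

def aggregate_multiscale_roi(feature_maps, boxes, strides, out_size=7):
    """Pool the same image-plane proposals from several feature maps with
    different receptive fields and concatenate them along channels."""
    pooled = []
    for feat, stride in zip(feature_maps, strides):
        # boxes: (K, 5) rows of [batch_index, x1, y1, x2, y2] in image pixels
        pooled.append(roi_align(feat, boxes, output_size=out_size,
                                spatial_scale=1.0 / stride, aligned=True))
    return torch.cat(pooled, dim=1)      # (K, sum of channels, out_size, out_size)
```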


2020 ◽  
Vol 100 ◽  
pp. 107149 ◽  
Author(s):  
Wenchi Ma ◽  
Yuanwei Wu ◽  
Feng Cen ◽  
Guanghui Wang
