An Image Enhancement Method for Few-shot Classification

Author(s):  
Yirui Wu ◽  
Benze Wu ◽  
Yunfei Zhang ◽  
Shaohua Wan

Abstract With the development of 5G/6G, IoT, and cloud systems, the amount of data generated, transmitted, and processed keeps increasing, and fast, effective few-shot image classification becomes more and more important. However, many methods require a large number of samples to achieve adequate performance, which forces the whole network to scale up to extract enough effective features and reduces the efficiency of few-shot classification to some extent. To address these problems, we propose an image enhancement method for few-shot classification: a dilated convolutional network with a built-in data enhancement structure. The network can supply the features required for image classification without increasing the number of samples, and it can exploit a large number of effective features without sacrificing efficiency. The cutout structure enhances the input image matrix by adding a zero mask over a fixed region. The FAU structure uses dilated convolution and exploits sequence characteristics to improve the efficiency of the network. Comparative experiments on the miniImageNet and CUB datasets show that the proposed method outperforms the compared methods in both effectiveness and efficiency under the 1-shot and 5-shot settings.
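The cutout step described in this abstract can be sketched as follows. The mask size and placement policy are illustrative assumptions; the abstract only specifies a fixed-area zero mask applied to the input matrix:

```python
import numpy as np

def cutout(image, mask_size=8, x=None, y=None, rng=None):
    """Zero out a fixed-size square region of an image (cutout augmentation).

    `mask_size` and the random placement via `rng` are illustrative choices;
    the abstract fixes the masked area but gives no exact settings.
    """
    rng = rng or np.random.default_rng(0)
    h, w = image.shape[:2]
    if x is None:
        x = int(rng.integers(0, w - mask_size + 1))
    if y is None:
        y = int(rng.integers(0, h - mask_size + 1))
    out = image.copy()
    out[y:y + mask_size, x:x + mask_size] = 0  # fixed-area zero mask
    return out
```

Because the mask is applied at input time, the augmentation adds no new samples or parameters, matching the abstract's claim of enhancement without enlarging the dataset.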

Agriculture ◽  
2021 ◽  
Vol 11 (10) ◽  
pp. 997
Author(s):  
Yun Peng ◽  
Aichen Wang ◽  
Jizhan Liu ◽  
Muhammad Faheem

Accurate fruit segmentation in images is a prerequisite and key step for precision agriculture. In this article, aiming at the segmentation of grape clusters of different varieties, three state-of-the-art semantic segmentation networks, i.e., the Fully Convolutional Network (FCN), U-Net, and DeepLabv3+, were studied on six different datasets. We investigated: (1) the difference in segmentation performance among the three networks; (2) the impact of different input representations on segmentation performance; (3) the effect of an image enhancement method that compensates for poor illumination and further improves segmentation performance; (4) the impact of the distance between grape clusters and the camera on segmentation performance. The experimental results show that, compared with FCN and U-Net, DeepLabv3+ combined with transfer learning is more suitable for the task, with an intersection over union (IoU) of 84.26%. Five input representations, namely RGB, HSV, L*a*b, HHH, and YCrCb, obtained different IoU values, ranging from 81.5% to 88.44%; among them, L*a*b achieved the highest IoU. In addition, the adopted Histogram Equalization (HE) image enhancement method improved the model's robustness against poor illumination conditions: with HE preprocessing, the IoU on the enhanced dataset increased by 3.88 percentage points, from 84.26% to 88.14%. The distance between the target and the camera also affected segmentation performance; in every dataset, the closer the distance, the better the segmentation. In summary, the conclusions of this research provide meaningful suggestions for the study of grape and other fruit segmentation.
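The HE preprocessing step used here is standard histogram equalization. A minimal single-channel formulation (not the authors' exact code) looks like this:

```python
import numpy as np

def histogram_equalization(channel):
    """Classic histogram equalization of one 8-bit channel, as used in the
    HE preprocessing step (a standard formulation, not the paper's code)."""
    hist = np.bincount(channel.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map intensities so the cumulative distribution becomes (near) uniform,
    # stretching a low-contrast (e.g. poorly lit) image to the full range.
    lut = np.round((cdf - cdf_min) / (channel.size - cdf_min) * 255)
    return lut.astype(np.uint8)[channel]
```

For color inputs the same mapping is typically applied to a luminance channel (or each channel) before the image is fed to the segmentation network.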


2020 ◽  
Vol 12 (22) ◽  
pp. 3839
Author(s):  
Xiaomin Tian ◽  
Long Chen ◽  
Xiaoli Zhang ◽  
Erxue Chen

Deep learning has become an effective method for hyperspectral image classification. However, the high band correlation and data volume associated with airborne hyperspectral images, together with the insufficiency of training samples, present challenges to the application of deep learning in airborne image classification. Prototypical networks are practical deep learning networks that have demonstrated effectiveness in handling small-sample classification. In this study, an improved prototypical network is proposed (by adding L2 regularization to the convolutional layers and dropout to the maximum pooling layers) to address the problem of overfitting in small-sample classification. The proposed network has an optimal sample window for classification, and the window size is related to the area and distribution of the study area. After dimensionality reduction using principal component analysis, the time required to train on the hyperspectral images was shortened significantly and the test accuracy increased markedly. Furthermore, with a sample window of 27 × 27 after dimensionality reduction, the overall accuracy of forest species classification was 98.53%, and the Kappa coefficient was 0.9838. Therefore, by using an improved prototypical network with a sample window of appropriate size, the network yielded desirable classification results, demonstrating its suitability for the fine classification and mapping of tree species.
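The PCA dimensionality-reduction step applied before training can be sketched in a few lines; the number of retained components is an illustrative assumption, since the abstract does not state it:

```python
import numpy as np

def pca_reduce(pixels, n_components):
    """Reduce hyperspectral band dimensionality with PCA (a standard SVD
    sketch; band and component counts here are illustrative).

    pixels: (n_samples, n_bands) matrix of per-pixel spectra.
    """
    mean = pixels.mean(axis=0)
    centered = pixels - mean
    # Rows of vt are the principal axes, ordered by explained variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T
```

Reducing strongly correlated bands this way shrinks the input volume the prototypical network must process, which is what shortens training time in the study.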


2020 ◽  
Vol 2020 ◽  
pp. 1-10 ◽  
Author(s):  
Chaohui Tang ◽  
Qingxin Zhu ◽  
Wenjun Wu ◽  
Wenlin Huang ◽  
Chaoqun Hong ◽  
...  

In the past few years, deep learning has become a research hotspot and has had a profound impact on computer vision. Deep CNNs have been proven to be the most important and effective models for image processing, but due to the lack of training samples and the huge number of learnable parameters, they tend to overfit easily. In this work, we propose a new two-stage CNN image classification network, named "Improved Convolutional Neural Networks with Image Enhancement for Image Classification" (PLANET for short), which uses a new image data enhancement method called InnerMove to enhance images and augment the number of training samples. InnerMove is inspired by the "object movement" scene in computer vision and can improve the generalization ability of deep CNN models for image classification tasks. Extensive experimental results show that PLANET utilizing InnerMove outperforms the comparative algorithms, and that InnerMove has a more significant effect than the comparative data enhancement methods for image classification tasks.
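The abstract does not give InnerMove's algorithm, so the sketch below is explicitly not the authors' method; it only illustrates the general "object movement" idea of synthesizing extra training samples by translating image content:

```python
import numpy as np

def shift_augment(image, dx, dy, fill=0):
    """Generic translation ("object movement"-style) augmentation.

    NOT the paper's InnerMove algorithm, whose details are not given in the
    abstract; this only illustrates generating new samples by moving content.
    """
    h, w = image.shape[:2]
    out = np.full_like(image, fill)
    # Destination and source slices for a shift of (dx, dy) pixels.
    xd = slice(max(dx, 0), min(w + dx, w))
    yd = slice(max(dy, 0), min(h + dy, h))
    xs = slice(max(-dx, 0), min(w - dx, w))
    ys = slice(max(-dy, 0), min(h - dy, h))
    out[yd, xd] = image[ys, xs]
    return out
```

Any augmentation of this family leaves labels unchanged while varying object position, which is the mechanism by which such methods improve generalization.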


Author(s):  
Ashish Dwivedi ◽  
Nirupma Tiwari

Image enhancement (IE) is very important in fields where the visual appearance of an image is paramount. Image enhancement is the process of improving an image so that the resulting output is more suitable than the original for a specific task. Through image enhancement, the quality of an image can be improved to obtain images that are clearer for human perception or for further analysis by machines. Image enhancement methods improve quality, visual appearance, and clarity; remove blurring and noise; increase contrast; and reveal details. The aim of this paper is to study and determine the limitations of existing IE techniques, and to provide an overview of the IE techniques commonly used. We apply the DWT to the original RGB image and also apply FHE (Fuzzy Histogram Equalization); after the DWT, we perform wavelet shrinkage on the three detail bands (LH, HL, HH). We then fuse the shrunk image and the FHE image together to obtain the enhanced image.
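The DWT-plus-shrinkage stage can be sketched with a one-level Haar transform; the wavelet family and thresholding rule here are illustrative assumptions, since the paper does not specify them in this abstract:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT: returns the LL, LH, HL, HH subbands.

    A minimal stand-in for the paper's DWT step (the exact wavelet is
    an assumption). `img` must have even height and width.
    """
    a = (img[0::2] + img[1::2]) / 2.0   # row averages
    d = (img[0::2] - img[1::2]) / 2.0   # row details
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def soft_shrink(band, t):
    """Soft-threshold wavelet shrinkage, applied to the LH/HL/HH bands."""
    return np.sign(band) * np.maximum(np.abs(band) - t, 0.0)
```

Shrinking only the detail bands suppresses noise while the LL band preserves the coarse image structure that is later fused with the FHE result.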


Author(s):  
ZHAO Baiting ◽  
WANG Feng ◽  
JIA Xiaofen ◽  
GUO Yongcun ◽  
WANG Chengjun

Background: Aiming at the color distortion, low clarity, and poor visibility of underwater images caused by the complex underwater environment, a wavelet fusion method, UIPWF, for underwater image enhancement is proposed. Methods: First, an improved NCB color balance method is designed to identify and clip abnormal pixels and to balance the R, G, and B channels by affine transformation. Then, the color-corrected map is converted to the CIELab color space, and the L component is equalized with contrast-limited adaptive histogram equalization to obtain a brightness-enhanced map. Finally, different fusion rules are designed for the low-frequency and high-frequency components, and pixel-level wavelet fusion of the color-balanced image and the brightness-enhanced image is performed to improve edge and detail contrast while preserving the underwater image contours. Results: The experiments demonstrate that, compared with existing underwater image processing methods, UIPWF is highly effective for underwater image enhancement, greatly improves the objective indicators, and produces visually pleasing enhanced images with clear edges and reasonable color information. Conclusion: The UIPWF method effectively mitigates color distortion and improves clarity and contrast, making it applicable to underwater image enhancement in different environments.
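The NCB color-balance step is only summarized in the abstract, so the sketch below is a generic per-channel affine stretch with percentile clipping of abnormal pixels; the percentile values are illustrative assumptions:

```python
import numpy as np

def affine_channel_balance(img, low_pct=1.0, high_pct=99.0):
    """Per-channel affine color balance with percentile clipping of
    abnormal pixels (a generic sketch in the spirit of the described NCB
    step; the clipping percentiles are assumptions).

    img: float array in [0, 1], shape (H, W, 3).
    """
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        lo = np.percentile(img[..., c], low_pct)
        hi = np.percentile(img[..., c], high_pct)
        # Affine map [lo, hi] -> [0, 1]; outliers beyond the percentiles clip.
        out[..., c] = np.clip((img[..., c] - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
    return out
```

Balancing each channel to the same range counteracts the wavelength-dependent attenuation that makes raw underwater images look blue-green.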


2021 ◽  
Vol 9 (2) ◽  
pp. 225
Author(s):  
Farong Gao ◽  
Kai Wang ◽  
Zhangyi Yang ◽  
Yejian Wang ◽  
Qizhong Zhang

In this study, an underwater image enhancement method based on local contrast correction (LCC) and multi-scale fusion is proposed to resolve the low contrast and color distortion of underwater images. First, the original image is compensated using the red channel, and the compensated image is processed with a white balance. Second, LCC and image sharpening are carried out to generate two different image versions. Finally, the local-contrast-corrected image is fused with the sharpened image by the multi-scale fusion method. The results show that the proposed method can be applied to degraded underwater images in different environments without resorting to an image formation model, and that it effectively addresses the color distortion, low contrast, and indistinct details of underwater images.
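The red-channel compensation step can be sketched using a common formulation from the underwater-fusion literature, in which the attenuated red channel borrows from the better-preserved green channel; the formula and its `alpha` weight are illustrative assumptions, since the abstract states only that the red channel is compensated before white balancing:

```python
import numpy as np

def compensate_red(img, alpha=1.0):
    """Red-channel compensation prior to white balance (a common sketch,
    not necessarily the paper's exact formula).

    img: float array in [0, 1], shape (H, W, 3), RGB order.
    """
    r, g = img[..., 0], img[..., 1]
    # Boost red where it is weak, in proportion to the local green signal
    # and the global green/red mean gap.
    r_comp = r + alpha * (g.mean() - r.mean()) * (1.0 - r) * g
    out = img.copy()
    out[..., 0] = np.clip(r_comp, 0.0, 1.0)
    return out
```

Without this step, a plain white balance tends to over-amplify the nearly empty red channel and introduce reddish artifacts.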


Mathematics ◽  
2021 ◽  
Vol 9 (6) ◽  
pp. 595
Author(s):  
Huajun Song ◽  
Rui Wang

Aiming at the two problems of color deviation and poor visibility in underwater images, this paper proposes an underwater image enhancement method based on multi-scale fusion and global stretching of dual models (MFGS), which does not rely on an underwater optical imaging model. The proposed method consists of three stages. First, because white balancing can effectively eliminate the undesirable color deviation caused by medium attenuation, it is selected over other color correction algorithms to correct the color deviation. Second, to address the poor performance of the saliency weight map in traditional fusion processing, this paper proposes an updated saliency weight coefficient strategy combining contrast and spatial cues to achieve high-quality fusion. Finally, since analysis of the results of the above steps shows that brightness and clarity still need improvement, global stretching of all channels in the red, green, blue (RGB) model is applied to enhance color contrast, and selective stretching of the L channel in the Commission Internationale de l'Eclairage Lab (CIE-Lab) model is implemented to achieve a better de-hazing effect. Quantitative and qualitative assessments on the underwater image enhancement benchmark dataset (UIEBD) show that the enhanced images of the proposed approach achieve significant improvements in color and visibility.
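The full-channel global stretching stage can be sketched as a per-channel min-max stretch to the full range; the exact stretching function used by the paper is not given in the abstract, so this is a generic formulation:

```python
import numpy as np

def global_stretch(img):
    """Global contrast stretch of each RGB channel to the full [0, 1] range
    (a generic sketch of the full-channel stretching stage).

    img: float array in [0, 1], shape (H, W, 3).
    """
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        ch = img[..., c]
        lo, hi = ch.min(), ch.max()
        # Expand the occupied intensity range of the channel to [0, 1].
        out[..., c] = (ch - lo) / max(hi - lo, 1e-6)
    return out
```

The paper's selective stretch of the CIE-Lab L channel follows the same idea but is applied to luminance only, leaving chromatic components untouched.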

