Maize Disease Recognition Based On Image Enhancement And OSCRNet

Author(s):  
Hongji Zhang ◽  
Zhou Guoxiong ◽  
Aibin Chen ◽  
Jiayong Li ◽  
Mingxuan Li ◽  
...  

Abstract Background: Under natural light, the identification of maize leaf diseases is significantly challenging because of the difficulty of extracting lesion features from a constantly changing environment, uneven illumination and reflection of the incident light source, and many other factors. Results: In the present paper, a novel maize image recognition method was proposed. Firstly, an image enhancement framework for the maize leaf was designed, and a multi-scale image enhancement algorithm with color restoration was established to enhance the characteristics of the maize leaf in a complex environment and to address the high noise and blur of maize images. Subsequently, an OSCRNet maize leaf recognition network model based on the traditional ResNet backbone architecture was designed. In the OSCRNet model, an octave convolution, which accelerates network training, was adopted to reduce unnecessary redundant spatial information in the maize leaf images. Additionally, a self-calibrated convolution with multi-scale features was employed to enable interactions between different feature maps of the maize leaf images, enhance feature extraction, and mitigate the confusion caused by the similarity of maize disease features during learning. Concurrently, batch normalization was employed to prevent network overfitting and enhance the robustness of the model. The experiment was conducted on a maize leaf image data set. The highest identification accuracies for rust, grey leaf disease, northern fusarium wilt and healthy maize were 94.67%, 92.34%, 89.31% and 96.63%, respectively. Conclusions: The aforementioned methods were beneficial in addressing slow training and low accuracy in image recognition, and the model also outperformed other comparison models. The present method demonstrates strong robustness for maize disease images collected in the natural environment, providing a reference for the intelligent diagnosis of other plant leaf diseases.
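
A minimal PyTorch sketch of the self-calibrated convolution idea mentioned in the abstract is given below; the layer names, channel split and pooling size are illustrative assumptions in the spirit of the SCConv operation, not the authors' exact OSCRNet configuration, and batch normalization is included as the abstract describes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfCalibratedConv(nn.Module):
    """Illustrative self-calibrated convolution block (assumed layout, not the exact OSCRNet block)."""
    def __init__(self, channels, pool_size=4):
        super().__init__()
        half = channels // 2
        self.conv_plain = nn.Conv2d(half, half, 3, padding=1)  # untouched branch
        self.conv_down = nn.Conv2d(half, half, 3, padding=1)   # low-resolution context
        self.conv_gate = nn.Conv2d(half, half, 3, padding=1)   # full-resolution response to be gated
        self.conv_out = nn.Conv2d(half, half, 3, padding=1)    # calibrated output
        self.pool = nn.AvgPool2d(pool_size)
        self.bn = nn.BatchNorm2d(channels)                     # batch normalization, as the abstract notes

    def forward(self, x):
        a, b = torch.chunk(x, 2, dim=1)                        # split channels into two halves
        # Calibration branch: downsampled context gates the full-resolution response.
        ctx = F.interpolate(self.conv_down(self.pool(b)), size=b.shape[2:],
                            mode='bilinear', align_corners=False)
        gate = torch.sigmoid(b + ctx)
        b = self.conv_out(self.conv_gate(b) * gate)
        a = self.conv_plain(a)
        return F.relu(self.bn(torch.cat([a, b], dim=1)))

# x = torch.randn(1, 64, 56, 56); y = SelfCalibratedConv(64)(x)  # y has the same shape as x
```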

2021 ◽  
Vol 13 (23) ◽  
pp. 4743
Author(s):  
Wei Yuan ◽  
Wenbo Xu

The segmentation of remote sensing images by deep learning is the main method for remote sensing image interpretation. However, segmentation models based on convolutional neural networks cannot capture global features well. A transformer, whose self-attention mechanism supplies each pixel with a global feature, makes up for this deficiency of the convolutional neural network. Therefore, a multi-scale adaptive segmentation network model (MSST-Net) based on a Swin Transformer is proposed in this paper. First, a Swin Transformer is used as the backbone to encode the input image. Then, the feature maps of the different levels are decoded separately. Next, convolution is used for fusion, so that the network can automatically learn the weight of the decoding result of each level. Finally, the channels are adjusted with a 1 × 1 convolution to obtain the final prediction map. Compared with other segmentation network models on the WHU building data set, the evaluation metrics mIoU, F1-score and accuracy are all improved. The network model proposed in this paper is a multi-scale adaptive network model that pays more attention to global features for remote sensing segmentation.
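
A minimal PyTorch sketch of the multi-level decode-and-fuse idea follows: per-level decoder outputs are upsampled to a common size, fused by a learned convolution and reduced to the prediction map with a 1 × 1 convolution. The channel sizes, number of levels and decoder layers are illustrative assumptions, not the exact MSST-Net configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiLevelFusionHead(nn.Module):
    """Illustrative decode-and-fuse head; channel sizes are assumptions, not the MSST-Net values."""
    def __init__(self, level_channels=(96, 192, 384, 768), num_classes=2):
        super().__init__()
        # One lightweight decoder per encoder level.
        self.decoders = nn.ModuleList(
            [nn.Conv2d(c, 64, kernel_size=3, padding=1) for c in level_channels])
        self.fuse = nn.Conv2d(64 * len(level_channels), 64, kernel_size=3, padding=1)
        self.classify = nn.Conv2d(64, num_classes, kernel_size=1)  # 1x1 conv adjusts the channels

    def forward(self, features, out_size):
        decoded = [F.interpolate(dec(f), size=out_size, mode='bilinear', align_corners=False)
                   for dec, f in zip(self.decoders, features)]
        fused = F.relu(self.fuse(torch.cat(decoded, dim=1)))       # learned fusion of all levels
        return self.classify(fused)

# feats = [torch.randn(1, c, s, s) for c, s in zip((96, 192, 384, 768), (128, 64, 32, 16))]
# pred = MultiLevelFusionHead()(feats, out_size=(512, 512))
```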


2020 ◽  
Author(s):  
Fengli Lu ◽  
Chengcai Fu ◽  
Guoying Zhang ◽  
Jie Shi

Abstract Accurate segmentation of fractures in coal rock CT images is important for safe production and the development of coalbed methane. However, coal rock fractures are formed through natural geological evolution and are complex, low in contrast and of different scales. Furthermore, there is no published coal rock data set. In this paper, we propose an adaptive multi-scale feature fusion based residual U-net (AMSFFR-U-net) for fracture segmentation in coal rock CT images. Dilated residual blocks (DResBlock) with dilation ratios (1, 2, 3) are embedded into the encoding branch of the U-net structure, which improves the feature extraction ability of the network and captures fractures at different scales. Furthermore, feature maps of different sizes in the encoding branch are concatenated by an adaptive multi-scale feature fusion (AMSFF) module, which not only captures fractures at different scales but also improves the restoration of spatial information. To alleviate the lack of coal rock fracture training data, we applied a set of comprehensive data augmentation operations to increase the diversity of training samples. Our network, U-net and Res-U-net were tested on our test set of coal rock CT images containing coal rock samples from five different regions. The experimental results show that our proposed approach improves the average Dice coefficient by 2.9%, the average precision by 7.2% and the average recall by 9.1%, respectively. Therefore, AMSFFR-U-net achieves better segmentation results for coal rock fractures and has stronger generalization ability and robustness.
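
A minimal PyTorch sketch of a dilated residual block with dilation ratios (1, 2, 3), in the spirit of the DResBlock described above, is given below; the channel layout, normalization and merge convolution are illustrative assumptions rather than the authors' exact design.

```python
import torch
import torch.nn as nn

class DilatedResBlock(nn.Module):
    """Illustrative dilated residual block with dilation rates 1, 2, 3 (assumed layout)."""
    def __init__(self, channels):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=d, dilation=d),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True))
            for d in (1, 2, 3)])
        self.merge = nn.Conv2d(3 * channels, channels, kernel_size=1)

    def forward(self, x):
        multi_scale = torch.cat([branch(x) for branch in self.branches], dim=1)
        return torch.relu(self.merge(multi_scale) + x)  # residual connection preserves the input

# x = torch.randn(1, 32, 128, 128); y = DilatedResBlock(32)(x)  # same spatial size and channels
```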


2021 ◽  
Vol 9 (2) ◽  
pp. 225
Author(s):  
Farong Gao ◽  
Kai Wang ◽  
Zhangyi Yang ◽  
Yejian Wang ◽  
Qizhong Zhang

In this study, an underwater image enhancement method based on local contrast correction (LCC) and multi-scale fusion is proposed to resolve the low contrast and color distortion of underwater images. First, the original image is compensated using the red channel, and the compensated image is white-balanced. Second, LCC and image sharpening are carried out to generate two different image versions. Finally, the locally contrast-corrected image is fused with the sharpened image by the multi-scale fusion method. The results show that the proposed method can be applied to degraded underwater images in different environments without resorting to an image formation model. It effectively addresses the color distortion, low contrast and indistinct details of underwater images.
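
The pipeline can be illustrated with a short NumPy/OpenCV sketch: red-channel compensation, gray-world white balance, a contrast-corrected version and a sharpened version, blended by a simple weighted average. The compensation formula, the CLAHE contrast step and the equal-weight blend are common illustrative choices standing in for the paper's LCC and multi-scale fusion, not the authors' exact algorithm.

```python
import cv2
import numpy as np

def enhance_underwater(bgr, alpha=1.0):
    """Illustrative chain: red compensation, white balance, contrast/sharpen versions, blend."""
    img = bgr.astype(np.float32) / 255.0
    b, g, r = cv2.split(img)
    # Compensate the attenuated red channel using the green channel (a common formulation).
    r = r + alpha * (g.mean() - r.mean()) * (1.0 - r) * g
    img = cv2.merge([b, g, r])
    # Gray-world white balance.
    img *= img.mean() / (img.mean(axis=(0, 1), keepdims=True) + 1e-6)
    img = np.clip(img, 0.0, 1.0)
    # Version 1: contrast correction via CLAHE on the L channel (standing in for LCC).
    lab = cv2.cvtColor((img * 255).astype(np.uint8), cv2.COLOR_BGR2LAB)
    lab[..., 0] = cv2.createCLAHE(clipLimit=2.0).apply(lab[..., 0])
    contrast = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR).astype(np.float32) / 255.0
    # Version 2: unsharp-mask sharpening.
    blurred = cv2.GaussianBlur(img, (0, 0), sigmaX=3)
    sharpened = np.clip(img + (img - blurred), 0.0, 1.0)
    # Equal-weight blend standing in for the paper's multi-scale fusion step.
    fused = 0.5 * contrast + 0.5 * sharpened
    return (np.clip(fused, 0.0, 1.0) * 255).astype(np.uint8)
```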


Mathematics ◽  
2021 ◽  
Vol 9 (6) ◽  
pp. 595
Author(s):  
Huajun Song ◽  
Rui Wang

To address the two problems of color deviation and poor visibility in underwater images, this paper proposes an underwater image enhancement method based on multi-scale fusion and dual-model global stretching (MFGS), which does not rely on an underwater optical imaging model. The proposed method consists of three stages. In the first stage, white balancing is selected to correct the color deviation because, compared with other color correction algorithms, it effectively eliminates the undesirable color cast caused by medium attenuation. Then, to address the poor performance of the saliency weight map in traditional fusion processing, an updated saliency weight coefficient strategy combining contrast and spatial cues is proposed to achieve high-quality fusion. Finally, analysis of the results of the above steps shows that brightness and clarity need further improvement: global stretching of all channels in the red, green, blue (RGB) model is applied to enhance color contrast, and selective stretching of the L channel in the Commission Internationale de l'Eclairage Lab (CIE-Lab) model is implemented to achieve a better de-hazing effect. Quantitative and qualitative assessments on the underwater image enhancement benchmark dataset (UIEBD) show that the enhanced images of the proposed approach achieve significant improvements in color and visibility.
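
The final stretching stage can be illustrated with a short NumPy/OpenCV sketch: full-channel global stretching in RGB followed by selective stretching of the L channel in CIE-Lab. The percentile clipping values are illustrative assumptions, not the authors' exact parameters.

```python
import cv2
import numpy as np

def stretch_channel(ch, low_pct=1, high_pct=99):
    """Stretch a single channel to [0, 1] between the given percentiles (assumed clipping values)."""
    lo, hi = np.percentile(ch, (low_pct, high_pct))
    return np.clip((ch - lo) / max(hi - lo, 1e-6), 0.0, 1.0)

def global_stretch(bgr):
    img = bgr.astype(np.float32) / 255.0
    # Stage 1: stretch every RGB (here BGR) channel to enhance color contrast.
    img = np.dstack([stretch_channel(img[..., c]) for c in range(3)])
    # Stage 2: stretch only the L channel in CIE-Lab for a de-hazing-like effect.
    lab = cv2.cvtColor((img * 255).astype(np.uint8), cv2.COLOR_BGR2LAB).astype(np.float32)
    lab[..., 0] = stretch_channel(lab[..., 0]) * 255.0
    return cv2.cvtColor(lab.astype(np.uint8), cv2.COLOR_LAB2BGR)
```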


Author(s):  
Lingyu Yan ◽  
Jiarun Fu ◽  
Chunzhi Wang ◽  
Zhiwei Ye ◽  
Hongwei Chen ◽  
...  

Abstract With the development of image recognition technology, the face, body shape and other factors have been widely used as identification labels, providing much convenience for our daily life. However, image recognition places much higher requirements on image conditions than traditional identification methods such as passwords. Therefore, image enhancement plays an important role in the analysis of noisy images, among which low-light images are the focus of our research. In this paper, a low-light image enhancement method based on a Generative Adversarial Network (GAN) optimized with an enhancement network module is proposed. The proposed method first applies the enhancement network to feed the input image into the generator, which generates a similar image in a new space; a loss function is then constructed and minimized to train the discriminator, which compares the image generated by the generator with the real image. We implemented the proposed method on two image datasets (DPED, LOL) and compared it with both traditional image enhancement methods and deep learning approaches. Experiments showed that the images enhanced by our proposed network have higher PSNR and SSIM and relatively good overall perceptual quality, demonstrating the effectiveness of the method for low-illumination image enhancement.
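
A minimal PyTorch sketch of the adversarial training step described above follows: a generator enhances a low-light image, and a discriminator is trained to separate enhanced images from well-lit references. The tiny placeholder networks and the added L1 term are illustrative assumptions, not the enhanced network module used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder networks; the paper's enhanced generator and discriminator are more elaborate.
generator = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())
discriminator = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                              nn.Conv2d(32, 1, 3, stride=2, padding=1),
                              nn.AdaptiveAvgPool2d(1), nn.Flatten())
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

def train_step(low_light, well_lit):
    # Discriminator: real references labelled 1, generated enhancements labelled 0.
    fake = generator(low_light)
    d_loss = bce(discriminator(well_lit), torch.ones(well_lit.size(0), 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(low_light.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: fool the discriminator while staying close to the reference (assumed L1 term).
    fake = generator(low_light)
    g_loss = bce(discriminator(fake), torch.ones(low_light.size(0), 1)) + F.l1_loss(fake, well_lit)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# d, g = train_step(torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64))
```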


Author(s):  
Xiongzhi Ai ◽  
Jiawei Zhuang ◽  
Yonghua Wang ◽  
Pin Wan ◽  
Yu Fu

Abstract Ultrasonic image examination is the first choice for the diagnosis of thyroid papillary carcinoma. However, ultrasonic images of thyroid papillary carcinoma suffer from problems such as poor definition, tissue overlap and low resolution, which make them difficult to diagnose. The capsule network (CapsNet) can effectively address tissue overlap and other problems. This paper investigates a new network model based on the capsule network, named the ResCaps network. The ResCaps network uses residual modules to enhance the abstract representation capability of the model. The experimental results reveal that the classification accuracy of the ResCaps3 network model on a self-made data set of thyroid papillary carcinoma was 81.06%. Furthermore, the Fashion-MNIST data set was also tested to show the reliability and validity of the ResCaps network model. Notably, the ResCaps network model not only significantly improves the accuracy of CapsNet, but also provides an effective method for classifying the lesion characteristics of thyroid papillary carcinoma ultrasonic images.
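
A minimal PyTorch sketch of the idea behind ResCaps follows: a residual convolution block feeding a primary capsule layer with the usual squash non-linearity. The capsule dimensions are illustrative, and dynamic routing and the classification capsules are omitted, so this is a simplification rather than the authors' full architecture.

```python
import torch
import torch.nn as nn

def squash(s, dim=-1, eps=1e-8):
    """Standard capsule squash non-linearity: shrinks short vectors, preserves direction."""
    norm_sq = (s ** 2).sum(dim=dim, keepdim=True)
    return (norm_sq / (1.0 + norm_sq)) * s / torch.sqrt(norm_sq + eps)

class ResidualPrimaryCaps(nn.Module):
    """Residual conv block feeding primary capsules (dimensions assumed, routing omitted)."""
    def __init__(self, in_ch=1, caps_dim=8, num_maps=32):
        super().__init__()
        self.stem = nn.Conv2d(in_ch, 256, kernel_size=9)
        self.res = nn.Sequential(nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(inplace=True),
                                 nn.Conv2d(256, 256, 3, padding=1))
        self.primary = nn.Conv2d(256, num_maps * caps_dim, kernel_size=9, stride=2)
        self.caps_dim = caps_dim

    def forward(self, x):
        x = torch.relu(self.stem(x))
        x = torch.relu(x + self.res(x))                          # residual module
        u = self.primary(x)                                      # primary capsule responses
        b, _, h, w = u.shape
        u = u.view(b, -1, self.caps_dim, h, w).permute(0, 1, 3, 4, 2)
        return squash(u.reshape(b, -1, self.caps_dim))           # (batch, num_capsules, caps_dim)

# caps = ResidualPrimaryCaps()(torch.randn(2, 1, 28, 28))  # e.g. a Fashion-MNIST sized input
```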


2019 ◽  
Vol 22 (13) ◽  
pp. 2907-2921 ◽  
Author(s):  
Xinwen Gao ◽  
Ming Jian ◽  
Min Hu ◽  
Mohan Tanniru ◽  
Shuaiqing Li

With the large-scale construction of urban subways, the detection of tunnel defects becomes particularly important. Due to the complexity of the tunnel environment, it is difficult for traditional tunnel defect detection algorithms to detect such defects quickly and accurately. This article presents a deep learning FCN-RCNN model that can detect multiple tunnel defects quickly and accurately. The algorithm combines the Faster R-CNN algorithm, an Adaptive Border ROI boundary layer and a three-layer FCN structure. The Adaptive Border ROI boundary layer is used to reduce data set redundancy and the difficulty of identifying interference during data set creation. The algorithm is compared with a single FCN algorithm without the Adaptive Border ROI layer for different defect types. The results show that our defect detection algorithm not only addresses interference due to segment patching, pipeline smears and obstruction, but also reduces the false detection rates from 0.371, 0.285 and 0.307, respectively, to 0.0502. Finally, after correction with a cylindrical projection model, the false detection rate is further reduced from 0.0502 to 0.0190, and the identification accuracy of water leakage defects is improved.
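
A minimal PyTorch/torchvision sketch of a two-stage pipeline in the spirit of the description above follows: a Faster R-CNN detector proposes defect regions, and each cropped region is passed to a small fully convolutional head for a coarse mask. The plain box cropping, the class list and the tiny FCN head are illustrative placeholders, not the paper's Adaptive Border ROI layer or three-layer FCN.

```python
import torch
import torch.nn as nn
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Detector with hypothetical defect classes (e.g. leakage, smear, obstruction + background).
detector = fasterrcnn_resnet50_fpn(num_classes=4)
detector.eval()

fcn_head = nn.Sequential(                      # tiny per-ROI segmentation head (placeholder)
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=1))

def detect_and_segment(image, score_thresh=0.5):
    """image: float tensor of shape (3, H, W) scaled to [0, 1]."""
    with torch.no_grad():
        det = detector([image])[0]             # boxes, labels, scores for one image
    masks = []
    for box, score in zip(det["boxes"], det["scores"]):
        if score < score_thresh:
            continue
        x1, y1, x2, y2 = box.int().tolist()
        roi = image[:, y1:y2, x1:x2].unsqueeze(0)          # crop the proposed defect region
        masks.append(torch.sigmoid(fcn_head(roi))[0, 0])   # coarse pixel-level mask for the ROI
    return det, masks
```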

