pixel classification
Recently Published Documents

TOTAL DOCUMENTS: 317 (five years: 15)
H-INDEX: 35 (five years: 0)

2021, pp. 773-782
Author(s): T. Hitendra Sarma, Syam Kakarla


Agriculture, 2021, Vol 11 (10), pp. 916
Author(s): Christian Nansen, Gabriel Del Villar, Alexander Recalde, Elvis Alvarado, Krishna Chennapragada

It has been recognized for decades that low and inconsistent spray coverage of pesticide applications represents a major challenge to successful and sustainable crop protection. Deployment of water-sensitive spray cards combined with image analysis can provide valuable, quantitative insight into spray coverage. Herein we provide a description of a novel and freely available smartphone app, "Smart Spray", for both iOS and Android devices (available in the iOS and Google app stores). More specifically, we provide a theoretical description of spray coverage, and we describe how Smart Spray and similar image-processing software packages can be used as decision support and quality control tools for pesticide spray applications. A performance assessment of the underlying pixel classification algorithm is presented, and we detail practical recommendations on how to use Smart Spray to maximize the accuracy and consistency of spray coverage predictions. Smart Spray was developed as part of ongoing efforts to: (1) maximize the performance of pesticide sprays, (2) minimize pest-induced yield loss and potentially reduce the amount of pesticide used, (3) reduce the risk of target pests developing pesticide resistance, (4) reduce the risk of spray drift, and (5) optimize spray application costs by introducing quality control.
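The core idea behind spray-card analysis is that coverage is the fraction of card pixels classified as "stained". As a minimal sketch of that pixel classification step, assuming a simple blue-ratio threshold (a hypothetical illustration, not Smart Spray's actual classifier):

```python
import numpy as np

def spray_coverage(card_rgb, blue_threshold=1.2):
    """Estimate percent spray coverage on a water-sensitive card image.

    Water-sensitive cards turn from yellow to blue where droplets land,
    so a pixel is counted as 'sprayed' when its blue channel dominates.
    The threshold value is an illustrative assumption.
    """
    card = card_rgb.astype(float) + 1e-6          # avoid division by zero
    blue_ratio = card[..., 2] / card[..., 0]      # blue relative to red
    sprayed = blue_ratio > blue_threshold         # boolean mask of stained pixels
    return 100.0 * sprayed.mean()                 # coverage as a percentage

# Tiny synthetic card: left half yellow (unsprayed), right half blue (sprayed)
card = np.zeros((10, 10, 3), dtype=np.uint8)
card[:, :5] = (230, 220, 40)   # yellow background
card[:, 5:] = (30, 60, 200)    # blue droplet stain
print(spray_coverage(card))    # -> 50.0
```

Real droplet detection also has to cope with uneven lighting and touching droplets, which is where adaptive classification of the kind assessed in the paper earns its keep.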



2021, Vol 2033 (1), pp. 012163
Author(s): Taisong Xiong, Haicong Li, Zhu Li, Yuanyuan Huang


Author(s): E. O. Rodrigues, L. O. Rodrigues, J. J. Lima, D. Casanova, F. Favarim, ...


2021, Vol 45 (6), pp. 317-323
Author(s): Ji Hong Chung, San Kim, Dong Kee Sohn, Han Seo Ko


Author(s): Mohammed El Amine Bechar, Nesma Settouti, Inês Domingues


2021, Vol 13 (7), pp. 1292
Author(s): Mingqiang Guo, Zhongyang Yu, Yongyang Xu, Ying Huang, Chunfeng Li

Mangroves play an important role in many ecosystem services, so they should be accurately extracted from remote sensing imagery to dynamically map and monitor their distribution. However, popular mangrove extraction methods, such as the object-oriented method, still have drawbacks: they require extensive manual intervention and are time-consuming and laborious. A pixel classification model inspired by deep learning was proposed to solve these problems. Three modules were designed to improve its performance: a multiscale context embedding module extracts multiscale context information, a global attention module restores location information, and a boundary fitting unit optimizes the boundary of the feature map. Remote sensing imagery and mangrove distribution ground-truth labels, obtained through visual interpretation, were used to build a dataset, which was then used to train a deep convolutional neural network (CNN) for mangrove extraction. Finally, comparative experiments were conducted to demonstrate the model's potential. We selected Sentinel-2A data acquired on 13 April 2018 over the Hainan Dongzhaigang National Nature Reserve in China for a group of experiments. After processing, the data covered 2093 × 2214 pixels, from which a mangrove extraction dataset of 6400 images was generated. Each sample combines five original Sentinel-2A bands, namely R, G, B, NIR, and SWIR-1, with six multispectral indices: the normalized difference vegetation index (NDVI), modified normalized difference water index (MNDWI), forest discrimination index (FDI), wetland forest index (WFI), mangrove discrimination index (MDI), and the first principal component (PCA1).
Experimental results show that the overall accuracy of the trained mangrove extraction network reaches 97.48%. Benefiting from the CNN, our method achieves a higher intersection-over-union (IoU) in our analysis than other machine learning and pixel classification methods. The global attention module, multiscale context embedding, and boundary fitting unit all contribute to mangrove extraction quality.
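The multi-band input described above can be sketched as follows: raw bands are stacked with derived spectral indices into one per-pixel feature volume. NDVI and MNDWI use their standard formulas; FDI, WFI, MDI, and PCA1 are omitted here, so this is a partial, illustrative stack rather than the authors' exact preprocessing.

```python
import numpy as np

def build_input_stack(r, g, b, nir, swir1):
    """Stack raw Sentinel-2 reflectance bands with two derived indices.

    NDVI highlights vegetation; MNDWI highlights open water. Each input
    is a 2-D array of the same shape; the result adds a channel axis.
    """
    eps = 1e-6                                   # guard against zero denominators
    ndvi = (nir - r) / (nir + r + eps)           # (NIR - R) / (NIR + R)
    mndwi = (g - swir1) / (g + swir1 + eps)      # (G - SWIR1) / (G + SWIR1)
    return np.stack([r, g, b, nir, swir1, ndvi, mndwi], axis=-1)

# Toy 2 x 2 reflectance patches with vegetation-like values
r = np.full((2, 2), 0.1); g = np.full((2, 2), 0.2)
b = np.full((2, 2), 0.1); nir = np.full((2, 2), 0.5)
swir1 = np.full((2, 2), 0.1)
stack = build_input_stack(r, g, b, nir, swir1)
print(stack.shape)  # (2, 2, 7)
```

Feeding indices alongside raw bands is a common trick: it hands the CNN domain knowledge it would otherwise have to learn from scratch.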



2021, pp. 107196
Author(s): Putu Desiana Wulaning Ayu, Sri Hartati, Aina Musdholifah, Detty S. Nurdiati


2021, Vol 14 (1), pp. 420-432
Author(s): Putu Ayu, Sri Hartati
This study analyses the use of a pixel classification model to segment amniotic fluid areas in ultrasound (US) images, which are characterized by noise, blurry edges, artifacts, and low contrast. In contrast with previous methods, this study builds a training set of pixels from neighbourhood information, using a rectangular sampling window to characterize each pixel within its local environment. Feature extraction is no longer based on the global characteristics of the object; instead, the value of each pixel in the object area is taken through the sampling window. This research also combines local first-order statistics with gray-level information inside the window to obtain the characteristics of each pixel. Random Forest and Decision Tree (C4.5) classifiers were then used to assign each pixel to one of four classes: amniotic fluid, placenta, uterus, and fetal body. Classification performance testing on the pixel sampling data showed that the Random Forest with a 5 × 7 window size achieved the highest performance, at 99.5% accuracy, precision, and recall. The proposed model was then evaluated on 50 new US test images for segmenting the amniotic fluid area. According to the experimental results, the proposed model produces a better segmentation area than the previous state-of-the-art method, increasing the IoU by 18.3% (a Jaccard coefficient gain of 0.183 on a 0-1 scale). Furthermore, the proposed model reduces the error rate and improves accuracy by 6.61% and 84.77%, respectively.
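The window-sampling pipeline above can be sketched end to end: extract first-order statistics from a 5 × 7 window around each pixel, then classify pixels with a Random Forest. The feature set (mean, std, min, max) and the synthetic image are assumptions for illustration, not the authors' code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(img, y, x, h=5, w=7):
    """First-order statistics of the h x w window centred on (y, x),
    echoing the 5 x 7 sampling window from the paper."""
    padded = np.pad(img, ((h // 2, h // 2), (w // 2, w // 2)), mode="edge")
    win = padded[y:y + h, x:x + w]          # window centred on (y, x)
    return [win.mean(), win.std(), win.min(), win.max()]

rng = np.random.default_rng(0)
# Synthetic ultrasound-like image: dark "fluid" region in a brighter field
img = rng.normal(0.7, 0.05, (32, 32))
img[8:24, 8:24] = rng.normal(0.2, 0.05, (16, 16))
labels = np.zeros((32, 32), dtype=int)
labels[8:24, 8:24] = 1                      # 1 = fluid, 0 = tissue

coords = [(y, x) for y in range(32) for x in range(32)]
X = np.array([window_features(img, y, x) for y, x in coords])
y_true = np.array([labels[y, x] for y, x in coords])

clf = RandomForestClassifier(n_estimators=25, random_state=0).fit(X, y_true)
print(clf.score(X, y_true))                 # training accuracy on the toy image
```

Window statistics make the classifier robust to the speckle noise of individual US pixels, which is the motivation for sampling neighbourhoods rather than single intensities.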


