Classification of Daily Crop Phenology in PhenoCams Using Deep Learning and Hidden Markov Models

2022 ◽  
Vol 14 (2) ◽  
pp. 286
Author(s):  
Shawn D. Taylor ◽  
Dawn M. Browning

Near-surface cameras, such as those in the PhenoCam network, are a common source of ground truth data in modelling and remote sensing studies. Despite having locations across numerous agricultural sites, few studies have used near-surface cameras to track the unique phenology of croplands. Due to management activities, crops do not follow the natural vegetation cycle that many phenological extraction methods assume. For example, a field may experience abrupt changes due to harvesting and tillage throughout the year. A single camera can also record several different plants due to crop rotations, fallow fields, and cover crops. Current methods to estimate phenology metrics from image time series compress all image information into a relative greenness metric, which discards a large amount of contextual information: the type of crop present, whether snow or water is present on the field, the crop's phenological stage, or whether a field lacking green plants consists of bare soil, fully senesced plants, or plant residue. Here, we developed a modelling workflow to create a daily time series of crop type and phenology, while also accounting for other factors such as obstructed images and snow-covered fields. We used a mainstream deep learning image classification model, VGG16. Deep learning classification models have no temporal component, so to account for temporal correlation among images, our workflow incorporates a hidden Markov model in the post-processing. The initial image classification model had out-of-sample F1 scores of 0.83–0.85, which improved to 0.86–0.91 after all post-processing steps. The resulting time series show the progression of crops from emergence to harvest, and can serve as a daily, local-scale dataset of field states and phenological stages for agricultural research.
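The abstract's key post-processing idea, smoothing a CNN's per-day class probabilities with a hidden Markov model, can be sketched with a small Viterbi decoder. This is a minimal illustration, not the authors' implementation; the sticky transition matrix and uniform prior below are assumptions for demonstration.

```python
import numpy as np

def viterbi_smooth(class_probs, transition, prior):
    """Most likely daily state sequence given per-image classifier probabilities.

    class_probs: (T, K) per-day class probabilities from the CNN
    transition:  (K, K) state transition probabilities (row: from, col: to)
    prior:       (K,)   initial state distribution
    """
    T, K = class_probs.shape
    log_p = np.log(class_probs + 1e-12)
    log_a = np.log(transition + 1e-12)
    score = np.log(prior + 1e-12) + log_p[0]
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_a      # (K, K): prev state -> next state
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_p[t]
    # Backtrack from the best final state
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

With "sticky" transitions (high self-transition probability), a one-day misclassification flicker in the CNN output is overruled by the temporal prior, which is exactly the effect the workflow relies on.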


Sensors ◽  
2019 ◽  
Vol 19 (15) ◽  
pp. 3434 ◽  
Author(s):  
Nattaya Mairittha ◽  
Tittaya Mairittha ◽  
Sozo Inoue

Labeling activity data is a central part of the design and evaluation of human activity recognition systems. System performance depends greatly on the quantity and quality of annotations; it is therefore necessary to rely on users and to keep them motivated to provide activity labels. As mobile and embedded devices increasingly use deep learning models to infer user context, we propose exploiting on-device deep learning inference with a long short-term memory (LSTM)-based method to reduce the labeling effort and ground truth data collection in activity recognition systems using smartphone sensors. The novel idea is that estimated activities are used as feedback to motivate users to provide accurate activity labels. To enable evaluation, we conducted experiments with two conditions, comparing the proposed method, which shows estimated activities from on-device deep learning inference, with a traditional method that shows notification sentences without estimated activities. Evaluation on the gathered dataset shows that the proposed method improves both data quality (i.e., the performance of a classification model) and data quantity (i.e., the number of data points collected), indicating that it can improve activity data collection and thereby enhance human activity recognition systems. We discuss the results, limitations, challenges, and implications for on-device deep learning inference in support of activity data collection. We also publish the preliminary dataset collected to the research community for activity recognition.
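The on-device inference step described above runs an LSTM over windows of sensor features. As a rough, framework-free sketch of what a single-layer LSTM forward pass computes (random weights here; the paper's architecture and parameters are not specified in the abstract):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(x_seq, W, U, b):
    """Single-layer LSTM forward pass over a sensor sequence.

    x_seq: (T, D) input features, e.g. per-window accelerometer statistics
    W: (4H, D), U: (4H, H), b: (4H,) stacked gate parameters [i, f, g, o]
    Returns the final hidden state (H,), which a classifier head would
    map to activity scores.
    """
    H = U.shape[1]
    h = np.zeros(H)
    c = np.zeros(H)
    for x in x_seq:
        z = W @ x + U @ h + b
        i = sigmoid(z[:H])          # input gate
        f = sigmoid(z[H:2*H])       # forget gate
        g = np.tanh(z[2*H:3*H])     # candidate cell state
        o = sigmoid(z[3*H:])        # output gate
        c = f * c + i * g
        h = o * np.tanh(c)
    return h
```

In the proposed system the resulting activity estimate is surfaced in a notification as feedback, nudging the user to confirm or correct the label.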


2021 ◽  
Author(s):  
Xikun Wei ◽  
Guojie Wang ◽  
Donghan Feng ◽  
Zheng Duan ◽  
Daniel Fiifi Tawia Hagan ◽  
...  

Abstract. Future global temperature change will have significant effects on society and ecosystems. Earth system models (ESMs) are the primary tools for exploring future climate change. However, ESMs still carry great uncertainty and often run at a coarse spatial resolution (most ESMs at about 2 degrees). Accurate temperature data at high spatial resolution are needed to improve our understanding of temperature variation and for many applications. We innovatively apply deep learning (DL) methods from super-resolution (SR) in computer vision to merge data from 31 ESMs; the proposed method performs data merging, bias correction and spatial downscaling simultaneously. SR algorithms are designed to enhance image quality and substantially outperform traditional methods. The CRU TS (Climate Research Unit gridded Time Series) dataset serves as the reference in model training. To find a suitable DL method for this task, we compared five SR methodologies with different architectures using multiple evaluation metrics (mean square error (MSE), mean absolute error (MAE) and the Pearson correlation coefficient (R)); the optimal model was then selected and used to merge the monthly historical data during 1850–1900 and the monthly future scenario data (SSP1-2.6, SSP2-4.5, SSP3-7.0, SSP5-8.5) during 2015–2100 at a high spatial resolution of 0.5 degrees. Results show that the merged data perform considerably better than any individual ESM and than the ensemble mean (EM) of all ESMs, in both spatial and temporal terms. The MAE shows a great improvement, and its spatial distribution increases with latitude in the Northern Hemisphere, presenting a stepped, echelon-like pattern. The merged product also performs excellently where the observational time series is smooth with few fluctuations.
Additionally, this work demonstrates that the DL model can be successfully transferred to data merging, bias correction and spatial downscaling when enough training data are available. Data can be accessed at https://doi.org/10.5281/zenodo.5746632 (Wei et al., 2021).
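The model-selection step above compares candidates on MSE, MAE, and Pearson's R against the CRU TS reference. A minimal sketch of that grid-wise evaluation (the function name and flattening convention are assumptions, not the authors' code):

```python
import numpy as np

def evaluate(pred, ref):
    """Evaluation metrics for comparing a merged/downscaled field
    against reference data, computed over all grid cells."""
    pred = np.asarray(pred, dtype=float).ravel()
    ref = np.asarray(ref, dtype=float).ravel()
    mse = np.mean((pred - ref) ** 2)           # mean square error
    mae = np.mean(np.abs(pred - ref))          # mean absolute error
    r = np.corrcoef(pred, ref)[0, 1]           # Pearson correlation
    return mse, mae, r
```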


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Yong Liang ◽  
Qi Cui ◽  
Xing Luo ◽  
Zhisong Xie

Rock classification is a significant branch of geology that can help in understanding the formation and evolution of the planet, searching for mineral resources, and so on. Traditionally, rock classification is done based on the experience of a professional; however, this approach suffers from low efficiency and susceptibility to subjective factors. It is therefore of great significance to establish a simple, fast, and accurate rock classification model. This paper proposes a fine-grained image classification network combining an image cutting method and the SBV algorithm to improve classification performance on a small number of fine-grained rock samples. The method uses image cutting to achieve data augmentation without adding additional datasets, and uses image block voting scoring to obtain richer complementary information, thereby improving the accuracy of image classification. On 32 test images, classification accuracy reached 75%, 68.75%, and 75% in the three experiments, improvements of 34.375%, 18.75%, and 43.75% over the original algorithm. These results verify the effectiveness of the proposed algorithm and demonstrate that deep learning has great application value in the field of geology.
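The two mechanisms named in the abstract, cutting an image into blocks and aggregating per-block scores by voting, can be sketched as follows. This is an illustrative reading of the approach, not the paper's code; the grid size and soft-voting rule are assumptions.

```python
import numpy as np

def cut_blocks(img, n=2):
    """Cut an image array into an n x n grid of blocks.
    The blocks serve both as extra training samples (augmentation)
    and as voting units at prediction time."""
    h, w = img.shape[:2]
    bh, bw = h // n, w // n
    return [img[r*bh:(r+1)*bh, c*bw:(c+1)*bw]
            for r in range(n) for c in range(n)]

def vote(block_scores):
    """Soft voting: average the per-block class score vectors
    (as produced by a classifier) and return the winning class index."""
    return int(np.mean(block_scores, axis=0).argmax())
```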


Water ◽  
2021 ◽  
Vol 13 (3) ◽  
pp. 298
Author(s):  
Jiwen Tang ◽  
Damien Arvor ◽  
Thomas Corpetti ◽  
Ping Tang

Irrigation systems play an important role in agriculture. Center pivot irrigation systems are popular in many countries as they are labor-saving and water-efficient. Monitoring the distribution of center pivot irrigation systems can provide important information on agricultural production, water consumption and land use. Deep learning has become an effective method for image classification and object detection. In this paper, a new method to detect the precise shape of center pivot irrigation systems is proposed. The method combines a lightweight real-time object detection network (PVANET) based on deep learning, an image classification model (GoogLeNet) and accurate shape detection (the Hough transform) to detect and delineate center pivot irrigation systems and their associated circular shape. PVANET is lightweight and fast; GoogLeNet reduces the false detections produced by PVANET; and the Hough transform accurately detects the shape of the irrigation systems. Experiments with Sentinel-2 images in Mato Grosso achieved a precision of 95% and a recall of 95.5%, demonstrating the effectiveness of the proposed method. Finally, with the shapes accurately detected, the irrigated area in the region was estimated.
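The shape-detection stage relies on the circle Hough transform: every edge pixel votes for all possible circle centres at a given radius, and true pivot centres accumulate the most votes. A minimal single-radius accumulator sketch (the paper would sweep radii and use a proper edge detector; both are omitted here):

```python
import numpy as np

def hough_circle(edge_points, radius, shape):
    """Accumulate votes for circle centres at a fixed radius.

    edge_points: iterable of (row, col) edge pixel coordinates
    radius:      candidate circle radius in pixels
    shape:       (rows, cols) of the accumulator / image
    Returns the vote accumulator; its peak is the most likely centre.
    """
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0, 2 * np.pi, 180, endpoint=False)
    for y, x in edge_points:
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)   # unbuffered accumulation
    return acc
```

Once the centre and radius of each pivot are known, the irrigated area follows directly as the sum of the circle areas.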


2021 ◽  
Vol 11 (13) ◽  
pp. 5832
Author(s):  
Wei Gou ◽  
Zheng Chen

Chinese Spelling Error Correction is a hot topic in natural language processing. Researchers have produced many strong solutions, from early rule-based approaches to current deep learning methods. At present, SpellGCN, proposed by Alibaba's team, achieves the best results, with character-level precision of 98.4% on SIGHAN2013. However, when we apply this algorithm to practical error correction tasks, it produces many false corrections. We believe this is because the corpus used for model training contains significantly more errors than the text the model corrects in practice. In response, we propose a post-processing operation for error correction tasks: we treat the initial model's output as a candidate character, extract various features of the character itself and its context, and then use a classification model to filter out the initial model's false corrections. The post-processing idea introduced in this paper can be applied to most Chinese Spelling Error Correction models to improve their performance on practical error correction tasks.
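The candidate-filtering pipeline can be sketched as feature extraction followed by an accept/reject decision. Everything below is hypothetical scaffolding: the feature names, the frequency table, and the threshold rule standing in for the paper's trained classifier are all assumptions for illustration.

```python
def candidate_features(orig_char, cand_char, context, freq):
    """Illustrative features for a proposed correction (names hypothetical).
    In the paper, richer character and context features feed a classifier."""
    return {
        "changed": orig_char != cand_char,        # did the model alter the char?
        "cand_freq": freq.get(cand_char, 0),      # corpus frequency of candidate
        "orig_freq": freq.get(orig_char, 0),      # corpus frequency of original
        "context_len": len(context),
    }

def keep_correction(feats, min_freq_gain=1):
    """Toy decision rule standing in for the trained filter model:
    accept a correction only if the candidate character is notably
    more frequent than the original. A real system would use the
    learned classifier's probability instead."""
    if not feats["changed"]:
        return False
    return feats["cand_freq"] - feats["orig_freq"] >= min_freq_gain
```

The point of the design is that the filter sees features the initial model did not weigh, so it can veto implausible edits without retraining the corrector.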


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Zhongguo Yang ◽  
Irshad Ahmed Abbasi ◽  
Fahad Algarni ◽  
Sikandar Ali ◽  
Mingzhu Zhang

Nowadays, an Internet of Things (IoT) device consists of algorithms, datasets, and models, and thanks to the good performance of deep learning methods, many devices integrate well-trained models. IoT empowers users to communicate with and control physical devices to obtain vital information. However, these models are vulnerable to adversarial attacks, which bring substantial risks to the normal application of deep learning methods: a very small change, even to a single point in IoT time series data, can lead to unreliable or wrong decisions, and such changes can be deliberately generated by following an adversarial attack strategy. We propose a robust IoT data classification model based on an encode-decode joint training model. Thermometer encoding is applied as a nonlinear transformation to the original training examples, which are then used to reconstruct the original time series examples through the encode-decode model. A ResNet model trained on the reconstructed examples is more robust to adversarial attack. Experiments show that the trained model can resist the fast gradient sign method (FGSM) attack to some extent and improves the security of the time series data classification model.
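Thermometer encoding, the nonlinear transformation named above, quantizes each scalar into a monotone unary code, which removes the smooth gradients that gradient-based attacks like FGSM exploit. A minimal sketch (the number of levels and value range are assumptions):

```python
import numpy as np

def thermometer_encode(x, levels, lo=0.0, hi=1.0):
    """Thermometer-encode scalars in [lo, hi] into `levels`-bit unary vectors.

    With 4 levels, values quantize to codes like
    [1,0,0,0], [1,1,0,0], [1,1,1,0], [1,1,1,1]:
    each additional bit 'fills up' like mercury in a thermometer,
    so small input perturbations flip at most the topmost bit.
    """
    x = np.clip((np.asarray(x, dtype=float) - lo) / (hi - lo), 0.0, 1.0)
    q = np.floor(x * levels).astype(int)      # quantization bin per value
    q = np.minimum(q, levels - 1)             # keep x == hi in the top bin
    return (np.arange(levels) <= q[..., None]).astype(float)
```

Because the encoding is piecewise constant, the gradient of the classifier with respect to the raw input is zero almost everywhere, which is what blunts FGSM-style perturbations.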

