Recognition of Bloom/Yield in Crop Images Using Deep Learning Models for Smart Agriculture: A Review

Agronomy ◽  
2021 ◽  
Vol 11 (4) ◽  
pp. 646
Author(s):  
Bini Darwin ◽  
Pamela Dharmaraj ◽  
Shajin Prince ◽  
Daniela Elena Popescu ◽  
Duraisamy Jude Hemanth

Precision agriculture is a crucial way to achieve greater yields by utilizing natural resources in a diverse environment. The yield of a crop may vary from year to year depending on variations in climate, soil parameters and the fertilizers used. Automation in the agricultural industry moderates the usage of resources and can increase the quality of food in the post-pandemic world. Agricultural robots have been developed for crop seeding, monitoring, weed control, pest management and harvesting. Physical counting of fruitlets, flowers or fruits at various phases of growth is a labour-intensive as well as expensive procedure for crop yield estimation. Remote sensing technologies offer accuracy and reliability in crop yield prediction and estimation. Automation of image analysis with computer vision and deep learning models provides precise field and yield maps. In this review, it has been observed that the application of deep learning techniques provides better accuracy for smart farming. The crops considered in the study are fruits such as grapes, apples, citrus and tomatoes, and field crops and vegetables such as sugarcane, corn, soybean, cucumber, maize and wheat. The works surveyed in this paper are available as products for applications such as robotic harvesting, weed detection and pest-infestation monitoring. Methods that made use of conventional deep learning techniques provided an average accuracy of 92.51%. This paper elucidates the diverse automation approaches for crop yield detection with virtual analysis and classifier approaches. Technical challenges and limitations of the deep learning techniques, together with directions for future investigation, are also surveyed. This work highlights the machine vision and deep learning models that need to be explored to improve automated precision farming, especially during this pandemic.
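Many of the yield-estimation pipelines surveyed in this review reduce to detecting and counting flowers or fruits in field images. The following minimal sketch, assuming a torchvision Faster R-CNN pretrained on COCO, a placeholder image path and an assumed fruit class index, illustrates that detect-and-count step; it is not a reproduction of any specific method from the review.

```python
# Hedged sketch: fruit counting with an off-the-shelf detector.
# The image path and the fruit class index are placeholders.
import torch
from torchvision import transforms
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from PIL import Image

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

image = Image.open("orchard_sample.jpg").convert("RGB")   # placeholder path
tensor = transforms.ToTensor()(image)

with torch.no_grad():
    detections = model([tensor])[0]

FRUIT_CLASS_ID = 53   # assumed COCO index for the target fruit (e.g., apple)
keep = (detections["scores"] > 0.5) & (detections["labels"] == FRUIT_CLASS_ID)
print(f"Estimated fruit count: {int(keep.sum())}")
```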

2021 ◽  
Vol 22 (15) ◽  
pp. 7911
Author(s):  
Eugene Lin ◽  
Chieh-Hsin Lin ◽  
Hsien-Yuan Lane

A growing body of evidence currently proposes that deep learning approaches can serve as an essential cornerstone for the diagnosis and prediction of Alzheimer’s disease (AD). In light of the latest advancements in neuroimaging and genomics, numerous deep learning models are being exploited in recent research studies to distinguish AD from normal controls and/or from mild cognitive impairment. In this review, we focus on the latest developments in AD prediction using deep learning techniques in cooperation with the principles of neuroimaging and genomics. First, we describe various investigations that make use of deep learning algorithms for AD prediction using genomics or neuroimaging data. In particular, we delineate relevant integrative neuroimaging-genomics investigations that leverage deep learning methods to forecast AD on the basis of both neuroimaging and genomics data. Moreover, we outline the limitations of the recent AD investigations of deep learning with neuroimaging and genomics. Finally, we present a discussion of challenges and directions for future research. The main novelty of this work is that we summarize the major points of these investigations and scrutinize their similarities and differences.
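To make the integrative neuroimaging-genomics idea concrete, here is a minimal late-fusion sketch, not the reviewed authors' architecture: two branches encode an imaging feature vector and a genomics (e.g., SNP) vector, and the concatenated representation feeds a binary AD-vs-control head. The input dimensions are assumptions.

```python
# Illustrative late-fusion network for multimodal AD prediction (hypothetical sizes).
import torch
import torch.nn as nn

class MultimodalADNet(nn.Module):
    def __init__(self, img_dim=256, gen_dim=1000):
        super().__init__()
        self.img_branch = nn.Sequential(nn.Linear(img_dim, 64), nn.ReLU())
        self.gen_branch = nn.Sequential(nn.Linear(gen_dim, 64), nn.ReLU())
        self.classifier = nn.Sequential(
            nn.Linear(128, 32), nn.ReLU(), nn.Linear(32, 1)
        )

    def forward(self, img_feat, gen_feat):
        # Concatenate the two modality embeddings, then classify.
        fused = torch.cat([self.img_branch(img_feat),
                           self.gen_branch(gen_feat)], dim=1)
        return self.classifier(fused)   # raw logit; apply sigmoid for probability

model = MultimodalADNet()
logit = model(torch.randn(8, 256), torch.randn(8, 1000))
```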


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5039
Author(s):  
Tae-Hyun Kim ◽  
Hye-Rin Kim ◽  
Yeong-Jun Cho

In this study, we present a framework for product quality inspection based on deep learning techniques. First, we categorize several deep learning models that can be applied to product inspection systems and explain in detail the steps for building a deep-learning-based inspection system. Second, we address connection schemes that efficiently link deep learning models to product inspection systems. Finally, we propose an effective method to maintain and enhance a product inspection system according to the improvement goals of the existing system. The proposed system exhibits good maintainability and stability owing to the proposed methods. All the proposed methods are integrated into a unified framework, and we provide detailed explanations of each of them. To verify the effectiveness of the proposed system, we compare and analyze the performance of the methods in various test scenarios. We expect that our study will provide useful guidelines to readers who wish to implement deep-learning-based systems for product inspection.
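As one possible illustration of how a trained model might be connected to an inspection line, the sketch below wraps a classifier behind a simple inspect() call. The TorchScript file name, the sigmoid output convention and the decision threshold are assumptions for illustration, not details taken from the paper.

```python
# Hedged sketch: serving a defect classifier to an inspection line.
import torch
from torchvision import transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

class InspectionClient:
    def __init__(self, model_path="defect_classifier.pt", threshold=0.5):
        # Assumes a TorchScript export of a single-logit defect classifier.
        self.model = torch.jit.load(model_path).eval()
        self.threshold = threshold

    def inspect(self, image_path):
        x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            prob_defect = torch.sigmoid(self.model(x)).item()
        return {"defect": prob_defect > self.threshold, "score": prob_defect}
```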


Author(s):  
Vu Tuan Hai ◽  
Dang Thanh Vu ◽  
Huynh Ho Thi Mong Trinh ◽  
Pham The Bao

Recent advances in deep learning models have shown promising potential for object removal, the task of replacing undesired objects with appropriate pixel values using the known context. Deep-learning-based object removal is commonly solved by modeling it as image-to-image (Img2Img) translation or inpainting. Instead of dealing with a large context, this paper aims at a specific application of object removal: erasing the traces of braces from an image of teeth with braces (the braces2teeth problem). We solved the problem with three methods corresponding to different datasets. First, we use the CycleGAN model to deal with the case where paired training data are not available. In the second case, we create pseudo-paired data to train the Pix2Pix model. In the last case, we combine GraphCut with a generative inpainting model to build a user-interactive tool that can improve the result when the user is not satisfied with previous results. To the best of our knowledge, this study is one of the first attempts to address the braces2teeth problem using deep learning techniques, and it can be applied in various fields, from health care to entertainment.
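A rough sketch of the pseudo-paired idea behind the Pix2Pix-style experiment, under the assumption of a toy generator and L1 supervision only (the adversarial term is omitted for brevity): a generator maps a "teeth with braces" image to a "braces erased" target. The tiny architecture and tensor sizes are illustrative, not the paper's model.

```python
# Hedged sketch: L1-supervised translation on pseudo-paired data.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.down = nn.Sequential(nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU())
        self.up = nn.Sequential(nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh())

    def forward(self, x):
        return self.up(self.down(x))

gen = TinyGenerator()
braces_img = torch.randn(1, 3, 256, 256)   # pseudo-paired input (with braces)
clean_img = torch.randn(1, 3, 256, 256)    # pseudo-paired target (braces erased)
loss = nn.L1Loss()(gen(braces_img), clean_img)
loss.backward()                            # one supervised update step
```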


2021 ◽  
Author(s):  
Ramy Abdallah ◽  
Clare E. Bond ◽  
Robert W.H. Butler

Machine learning is being presented as a new solution for a wide range of geoscience problems. Primarily, machine learning has been used for 3D seismic data processing, seismic facies analysis and well-log data correlation. The rapid development in technology, with open-source artificial intelligence libraries and the accessibility of affordable computer graphics processing units (GPUs), makes the application of machine learning in geosciences increasingly tractable. However, the application of artificial intelligence in structural interpretation workflows of subsurface datasets is still ambiguous. This study aims to use machine learning techniques to classify images of folds and fold-thrust structures. Here we show that convolutional neural networks (CNNs), as supervised deep learning techniques, provide excellent algorithms to discriminate between geological image datasets. Four different image datasets have been used to train and test the machine learning models: a seismic character dataset with five classes (faults, folds, salt, flat layers and basement), fold types with three classes (buckle, chevron and conjugate), fault types with three classes (normal, reverse and thrust) and fold-thrust geometries with three classes (fault-bend fold, fault-propagation fold and detachment fold). These image datasets are used to investigate three machine learning models: one feedforward linear neural network model and two convolutional neural network models (a sequential model with 2D convolution layers, and a residual-block ResNet model with 9, 34 and 50 layers). Validation and testing datasets form a critical part of assessing each model's performance accuracy. The ResNet model records the highest performance accuracy score of the machine learning models tested. Our CNN image classification model analysis provides a framework for applying machine learning to increase structural interpretation efficiency, and shows that CNN classification models can be applied effectively to geoscience problems. The study provides a starting point for applying unsupervised machine learning approaches to subsurface structural interpretation workflows.
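As a minimal sketch of the supervised CNN classification setup described above, assuming a torchvision ResNet-34 backbone and a three-class problem (e.g., buckle / chevron / conjugate folds), the snippet below replaces the final layer and runs one training step on a stand-in batch; dataset loading and the authors' exact hyperparameters are not reproduced.

```python
# Hedged sketch: fine-tuning a ResNet head for a three-class geological image task.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet34(weights="DEFAULT")
model.fc = nn.Linear(model.fc.in_features, 3)   # three fold classes (assumed)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

images = torch.randn(4, 3, 224, 224)            # stand-in image batch
labels = torch.tensor([0, 1, 2, 1])             # stand-in class labels

loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```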


2021 ◽  
Author(s):  
Amit Kumar Srivastava ◽  
Nima Safaei ◽  
Saeed Khaki ◽  
Gina Lopez ◽  
Wenzhi Zeng ◽  
...  

Crop yield forecasting depends on many interactive factors, including crop genotype, weather, soil, and management practices. This study analyzes the performance of machine learning and deep learning methods for winter wheat yield prediction using extensive datasets of weather, soil, and crop phenology. We propose a convolutional neural network (CNN) that uses 1-dimensional convolution operations to capture the time dependencies of environmental variables. The proposed CNN, evaluated along with other machine learning models for winter wheat yield prediction in Germany, outperformed all other models tested. To address seasonality, weekly features were used that explicitly take soil moisture and meteorological events into account. Our results indicate that nonlinear models such as deep learning models and XGBoost are more effective than linear models at finding the functional relationship between crop yield and the input data, and that the deep neural networks had a higher prediction accuracy than XGBoost. One of the main limitations of machine learning models is their black-box property. Therefore, we moved beyond prediction and performed feature selection, as it provides key results towards explaining yield prediction (variable importance by time). As such, our study indicates which variables have the most significant effect on winter wheat yield.
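An illustrative 1-D CNN regressor in the spirit of the model described above: weekly environmental variables (channels) over a growing season (time steps) are convolved along the time axis and pooled into a single yield estimate. The number of variables (10) and weeks (52) are assumptions, not the study's configuration.

```python
# Hedged sketch: 1-D CNN for yield regression from weekly environmental features.
import torch
import torch.nn as nn

class Yield1DCNN(nn.Module):
    def __init__(self, n_vars=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_vars, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # pool over the season
        )
        self.head = nn.Linear(64, 1)          # predicted yield

    def forward(self, x):                     # x: (batch, n_vars, n_weeks)
        return self.head(self.features(x).squeeze(-1))

model = Yield1DCNN()
pred = model(torch.randn(8, 10, 52))          # 8 fields, 10 variables, 52 weeks
```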


2019 ◽  
Author(s):  
Ismael Araujo ◽  
Juan Gamboa ◽  
Adenilton Silva

Recognizing patterns that are usually imperceptible to human beings has been one of the main advantages of using machine learning algorithms. The use of deep learning techniques has been promising for classification problems, especially those related to image classification. The classification of gases detected by an artificial nose is another area where deep learning techniques can be used to seek classification improvements. Succeeding in this classification task can bring many advantages to quality control, as well as to preventing accidents. In this work, we present some deep learning models specifically created for the task of gas classification.
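A minimal sketch of what a gas classifier for artificial-nose readings can look like: a small fully connected network mapping a fixed-length sensor response vector to one of several gas classes. The sensor count (16) and class count (4) are assumptions for illustration, not the paper's setup.

```python
# Hedged sketch: MLP over an electronic-nose sensor response vector.
import torch
import torch.nn as nn

classifier = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 4),             # logits over 4 hypothetical gas classes
)

readings = torch.randn(5, 16)     # 5 samples from a 16-sensor array
predicted_gas = classifier(readings).argmax(dim=1)
```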


2021 ◽  
Vol 40 ◽  
pp. 03030
Author(s):  
Mehdi Surani ◽  
Ramchandra Mangrulkar

Over the past years, the exponential growth of social media usage has given every individual the power to share their opinions freely. This has led to numerous threats when users exploit their freedom of speech by spreading hateful comments, using abusive language, carrying out personal attacks, and sometimes even engaging in cyberbullying. Determining abusive content is not a difficult task, and many social media platforms already have solutions available; at the same time, many are searching for more efficient ways and solutions to overcome this issue. Traditional approaches explore machine learning models to identify negative content posted on social media. Shaming categories are explored, and content is labelled accordingly. Such categorization is easy to detect because the contextual language used is direct. However, the use of irony to mock or convey contempt is also a part of public shaming and must be considered when categorizing the shaming labels. In this research paper, various shaming types, namely toxic, severe toxic, obscene, threat, insult, identity hate, and sarcasm, are predicted using deep learning approaches such as CNN and LSTM. These models have been studied along with traditional models to determine which model gives the most accurate results.
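A hedged sketch of the LSTM variant of such a classifier: token IDs are embedded, run through an LSTM, and the final hidden state is mapped to seven independent sigmoid outputs (toxic, severe toxic, obscene, threat, insult, identity hate, sarcasm). The vocabulary size and layer dimensions are placeholders, not the paper's configuration.

```python
# Hedged sketch: multi-label toxicity classifier with an LSTM encoder.
import torch
import torch.nn as nn

class ToxicLSTM(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=128, hidden_dim=64, n_labels=7):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, n_labels)

    def forward(self, token_ids):
        _, (h_n, _) = self.lstm(self.embed(token_ids))
        return torch.sigmoid(self.out(h_n[-1]))   # per-label probabilities

model = ToxicLSTM()
probs = model(torch.randint(0, 20000, (2, 50)))   # 2 comments, 50 tokens each
```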


2021 ◽  
Author(s):  
Aijing Feng

The world population is estimated to increase by 2 billion in the next 30 years, and global crop production needs to double by 2050 to meet the projected demands from a rising population, diet shifts, and increasing biofuel consumption. Improving the production of the major crops has become an increasing concern for the global research community. However, crop development and yield are complex and determined by many factors, such as crop genotypes (varieties), growing environments (e.g., weather, soil, microclimate and location), and agronomic management strategies (e.g., seed treatment and placement, planting, fertilizer and pest management). To develop next-generation, high-efficiency agricultural production systems, we will have to solve the complex equation consisting of the interactions of genotype, environment and management (GxExM) using emerging technologies. Precision agriculture is a promising practice to increase profitability and reduce environmental impact using site-specific and accurate measurement of crop, soil and environment. The success of precision agriculture technology heavily relies on access to accurate, high-resolution spatiotemporal data and reliable prediction models of crop development and yield. Soil texture and weather conditions are important factors related to crop growth and yield. The percentages of sand, clay and silt in the soil affect the movement of air and water, as well as the water-holding capacity. Weather conditions, including temperature, wind, humidity and solar irradiance, are determining factors for crop evapotranspiration and water requirements. Compared to crop yield, which is easy to measure and quantify, crop development effects due to soil texture and weather conditions within a season can be challenging to measure and quantify. Evaluation of crop development by visual observation at field scale is time-consuming and subjective. In recent years, sensor-based methods have provided a promising way to measure and quantify crop development. Unmanned aerial vehicles (UAVs) equipped with visual, multispectral and/or hyperspectral sensors have been used by many researchers as a high-throughput data collection tool to monitor crop development efficiently at the desired time and at field scale. In this study, UAV-based remote sensing technologies, combined with soil texture and weather conditions, were used to study crop emergence, crop development and yield under the effects of varying soil texture and weather conditions in a cotton research field. Soil texture, i.e., sand and clay content, calculated from apparent soil electrical conductivity (ECa) using a model from a previous study, was used to estimate soil characteristics, including field capacity, wilting point and total available water. Weather data were obtained from a weather station 400 m from the field. UAV imagery data were collected monthly using a high-resolution RGB camera, a multispectral camera and a thermal camera, from crop emergence until before harvesting. An automatic method to count emerged crop seedlings, based on image technologies and a deep learning model, was developed for near real-time cotton emergence evaluation. The soil and elevation effects on stand count and seedling size were explored. The effects of soil texture and weather conditions on cotton growth variation were examined using multispectral and thermal images during the crop development growth stages.
The cotton yield variations due to soil texture and weather conditions were estimated using multiple-year UAV imagery data, soil texture, weather conditions and deep learning techniques. The results showed that field elevation had a high impact on cotton emergence (stand count and seedling size) and that clay content had a negative impact on cotton emergence in this study. Monthly growth variations of cotton under different soil textures during crop development growth stages were significant in both 2018 and 2019. Soil clay content in shallow layers (0-40 cm) affected crop development in the early growth stages (June and July), while clay content in deep layers (40-70 cm) affected the mid-season growth stages (August and September). Thermal images were more efficient in identifying regions of water stress than the water stress coefficient Ks calculated from soil texture and weather data. Results showed that cotton yield for each of the three years (2017-2019) could be predicted using the model trained with data from the other two years, with prediction errors of MAE = 247 kg ha⁻¹ (8.9%) to 384 kg ha⁻¹ (13.7%), showing that quantifying yield variability for a future year based on soil texture, weather conditions and UAV imagery is feasible. Results from this research indicate that the integration of soil and weather information with UAV-based image data is a promising way to understand the effects of soil and weather on crop emergence, crop development and yield.
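An illustrative sketch (not the dissertation's model) of the kind of fusion such a yield study implies: a UAV image patch is encoded by a small CNN and concatenated with tabular soil/weather descriptors before regressing plot-level yield. The patch size and the number of tabular features (8) are assumptions.

```python
# Hedged sketch: fusing UAV image patches with soil/weather features for yield regression.
import torch
import torch.nn as nn

class UAVYieldNet(nn.Module):
    def __init__(self, n_tabular=8):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(nn.Linear(32 + n_tabular, 32), nn.ReLU(),
                                  nn.Linear(32, 1))

    def forward(self, patch, tabular):
        # Concatenate image embedding with soil/weather descriptors.
        return self.head(torch.cat([self.cnn(patch), tabular], dim=1))

model = UAVYieldNet()
yield_pred = model(torch.randn(4, 3, 64, 64), torch.randn(4, 8))
```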


2020 ◽  
Author(s):  
Vruddhi Shah ◽  
Rinkal Keniya ◽  
Akanksha Shridharani ◽  
Manav Punjabi ◽  
Jainam Shah ◽  
...  

Early diagnosis of coronavirus disease 2019 (COVID-19) is essential for controlling this pandemic. COVID-19 has been spreading rapidly all over the world, and no vaccine is available for this virus yet. Fast and accurate COVID-19 screening is possible using computed tomography (CT) scan images. The deep learning techniques used in the proposed method are based on convolutional neural networks (CNNs). Our manuscript focuses on differentiating COVID-19 from non-COVID-19 CT scan images using different deep learning techniques. A self-developed model named CTnet-10 was designed for COVID-19 diagnosis, achieving an accuracy of 82.1%. We also tested other models: DenseNet-169, VGG-16, ResNet-50, InceptionV3, and VGG-19. VGG-19 proved to be superior, with an accuracy of 94.52%, compared to all the other deep learning models. Automated diagnosis of COVID-19 from CT scan images can be used by doctors as a quick and efficient method for COVID-19 screening.
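A hedged sketch of the transfer-learning setup this kind of comparison relies on: a torchvision VGG-19 with its final layer replaced for a binary COVID / non-COVID output, run for one training step on a stand-in batch. Data loading, augmentation and the authors' training schedule are omitted.

```python
# Hedged sketch: fine-tuning VGG-19 for binary CT classification.
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg19(weights="DEFAULT")
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

ct_batch = torch.randn(4, 3, 224, 224)   # stand-in CT slices
labels = torch.tensor([0, 1, 1, 0])      # 0 = non-COVID, 1 = COVID (assumed encoding)

loss = criterion(model(ct_batch), labels)
loss.backward()
optimizer.step()
```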

