Ensemble Averaging of Transfer Learning Models for Identification of Nutritional Deficiency in Rice Plant

Electronics, 2022, Vol. 11(1), p. 148
Author(s): Mayuri Sharma, Keshab Nath, Rupam Kumar Sharma, Chandan Jyoti Kumar, Ankit Chaudhary

Computer vision-based automation has become popular in detecting and monitoring plants’ nutrient deficiencies in recent times. The predictive models developed by various researchers have typically been designed for use in embedded systems, keeping in mind the limited availability of computational resources. Nevertheless, the enormous popularity of smartphone technology has opened the door for ordinary farmers to access high-end computing resources. To facilitate smartphone users, this study proposes a framework in which high-end systems are hosted in the cloud, where the processing is performed and with which farmers can interact. With the availability of high computational power, many studies have focused on applying Convolutional Neural Network-based Deep Learning (CNN-based DL) architectures, including Transfer Learning (TL) models, to agricultural research. Ensembling of various TL architectures has the potential to improve the performance of predictive models considerably. In this work, six TL architectures, viz. InceptionV3, ResNet152V2, Xception, DenseNet201, InceptionResNetV2, and VGG19, are considered, and their various ensemble models are used to carry out the task of deficiency diagnosis in rice plants. Two publicly available datasets from Mendeley and Kaggle are used in this study. The ensemble-based architecture enhanced the highest classification accuracy from 99.17% to 100% on the Mendeley dataset, while for the Kaggle dataset it was enhanced from 90% to 92%.
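As a rough illustration of the ensemble-averaging idea described in this abstract, the sketch below averages the softmax outputs of three ImageNet-pretrained backbones in Keras. It is not the authors' code: the choice of backbones shown, image size, class count, and freezing strategy are assumptions, and per-backbone preprocessing is omitted for brevity.

```python
# Minimal sketch: ensemble averaging of transfer-learning branches (assumed setup).
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import InceptionV3, Xception, DenseNet201

NUM_CLASSES = 3            # e.g., N, P, K deficiency classes (assumed)
IMG_SHAPE = (224, 224, 3)  # assumed input size

def build_branch(backbone_cls, inputs):
    backbone = backbone_cls(include_top=False, weights="imagenet",
                            input_shape=IMG_SHAPE, pooling="avg")
    backbone.trainable = False            # freeze ImageNet features (assumed strategy)
    x = backbone(inputs)
    return layers.Dense(NUM_CLASSES, activation="softmax")(x)

inputs = layers.Input(shape=IMG_SHAPE)
branch_outputs = [build_branch(b, inputs) for b in (InceptionV3, Xception, DenseNet201)]
averaged = layers.Average()(branch_outputs)   # ensemble averaging of branch predictions
ensemble = Model(inputs, averaged)
ensemble.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```

In practice each branch would first be fine-tuned on the deficiency images before its predictions are averaged.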

2021, Vol. 11(15), p. 6811
Author(s): Emanuel Marques Queiroga, Carolina Rodríguez Enríquez, Cristian Cechinel, Alén Perez Casas, Virgínia Rodés Paragarino, ...

This paper describes the application of Data Science and Educational Data Mining techniques to data from 4529 students, seeking to identify behavior patterns and generate early predictive models at the Universidad de la República del Uruguay. The paper describes the use of data from different sources (a Virtual Learning Environment, a survey, and the academic system) to generate predictive models and discover the most impactful variables linked to student success. The combination of different data sources demonstrated high predictive power, achieving prediction rates with outstanding discrimination by the fourth week of a course. The analysis showed that students with more interactions inside the Virtual Learning Environment tended to have more success in their disciplines. The results also revealed some relevant attributes that influenced the students’ success, such as the number of subjects a student was enrolled in, the students’ mothers’ education, and the students’ neighborhood. Some institutional policies emerged from the results, such as the allocation of computational resources for the Virtual Learning Environment infrastructure and its widespread use, the development of tools for following the trajectory of students, and the detection of students at risk of failure. The construction of an interdisciplinary exchange bridge between sociology, education, and data science is also a significant contribution to the academic community that may help in constructing university educational policies.
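The abstract does not give the exact features or model family, so the following is only a hedged sketch of an early-warning classifier built from the three data sources at week four. All column names (vle_clicks_w1..w4, n_subjects, mother_education, neighborhood, passed) and the gradient-boosting choice are hypothetical stand-ins.

```python
# Illustrative early-prediction pipeline (assumed features and model, not the paper's).
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

df = pd.read_csv("students_week4.csv")   # hypothetical merged VLE + survey + academic data
numeric = ["vle_clicks_w1", "vle_clicks_w2", "vle_clicks_w3", "vle_clicks_w4", "n_subjects"]
categorical = ["mother_education", "neighborhood"]

model = Pipeline([
    ("prep", ColumnTransformer([
        ("num", "passthrough", numeric),
        ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
    ])),
    ("clf", GradientBoostingClassifier()),
])

scores = cross_val_score(model, df[numeric + categorical], df["passed"],
                         scoring="roc_auc", cv=5)
print("Week-4 AUC:", scores.mean())
```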


2013, Vol. 475-476, pp. 1150-1153
Author(s): Yan Zeng Gao, Ling Yan Wei

Smart homes can combine emerging Internet of Things concepts with cloud service technologies. This paper introduces a novel method for building a smart home system. The system is use-case driven and is composed of a home control center, ZigBee end devices, smartphone applications, and a cloud server. The home control center is based on an ARM-Linux embedded system and acts as the relay between the cloud server and the home devices. The wireless network of smart home devices was designed using ZigBee. A smartphone application was developed to serve as the user interface.
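The sketch below only illustrates the relay role played by the home control center described above; the paper does not specify the protocols, so the HTTP endpoints, serial port, and message format are placeholders rather than the authors' design.

```python
# Hypothetical relay loop on the ARM-Linux home control center.
import time
import requests                      # third-party: pip install requests
import serial                        # third-party: pip install pyserial

CLOUD = "https://cloud.example.com/home"                   # placeholder cloud service
zigbee = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)  # assumed ZigBee coordinator port

while True:
    # Upstream: forward sensor reports from ZigBee end devices to the cloud server.
    report = zigbee.readline()
    if report:
        requests.post(f"{CLOUD}/status", data=report, timeout=5)

    # Downstream: fetch commands issued from the smartphone app via the cloud.
    commands = requests.get(f"{CLOUD}/commands", timeout=5).json()
    for cmd in commands:
        zigbee.write((cmd + "\n").encode())

    time.sleep(1)
```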


Sensors, 2018, Vol. 18(10), p. 3472
Author(s): Yuan Wu, Xiangxu Chen, Jiajun Shi, Kejie Ni, Liping Qian, ...

Blockchain has emerged as a decentralized and trustable ledger for recording and storing digital transactions. The mining process of Blockchain, however, incurs a heavy computational workload for miners to solve the proof-of-work puzzle (i.e., a series of hashing computations), which is prohibitive from the perspective of the mobile terminals (MTs). Advanced multi-access mobile edge computing (MEC), which enables the MTs to offload part of the computational workload (for solving the proof-of-work) to nearby edge servers (ESs), provides a promising approach to address this issue. By offloading the computational workload via multi-access MEC, the MTs can effectively increase their probability of success when participating in the mining game and gain the consequent reward (i.e., winning the bitcoin). However, as compensation to the ESs that provide the computational resources, the MTs need to pay the corresponding resource-acquisition costs. Thus, to investigate the trade-off between obtaining the computational resources from the ESs (for solving the proof-of-work) and paying for the consequent cost, we formulate an optimization problem in which the MTs determine the computational resources they acquire from different ESs, with the objective of maximizing the MTs’ social net-reward in the mining process while keeping fairness among the MTs. In spite of the non-convexity of the formulated problem, we exploit its layered structure and propose efficient distributed algorithms for the MTs to individually determine the optimal computational resources acquired from different ESs. Numerical results are provided to validate the effectiveness of our proposed algorithms and the performance of our proposed multi-access MEC for Blockchain.
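To make the trade-off concrete, here is a deliberately simplified single-MT illustration: buying more edge hash power raises the mining-success probability but also the payment to the ESs. The reward, prices, per-ES efficiencies, and the share-of-hash-power success model are assumptions; this is not the paper's social-welfare formulation with fairness constraints.

```python
# Toy net-reward maximization for one mobile terminal buying resources from three ESs.
import numpy as np
from scipy.optimize import minimize

R = 12.5                               # reward for winning a block (assumed units)
H_others = 1000.0                      # aggregate hash power of competing miners (assumed)
a = np.array([0.8, 1.0, 1.2])          # hash power delivered per unit resource at each ES
p = np.array([0.004, 0.006, 0.009])    # price per unit resource at each ES

def neg_net_reward(x):
    own = np.dot(a, x)
    win_prob = own / (own + H_others)  # success probability ~ share of total hash power
    return -(R * win_prob - np.dot(p, x))

res = minimize(neg_net_reward, x0=np.ones(3),
               bounds=[(0, None)] * 3, method="L-BFGS-B")
print("resources acquired per ES:", res.x, "net reward:", -res.fun)
```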


Author(s): Rafael Nogueras, Carlos Cotta

Computational environments emerging from the pervasiveness of networked devices offer a plethora of opportunities and challenges. The latter arise from their dynamic, inherently volatile nature that tests the resilience of algorithms running on them. Here we consider the deployment of population-based optimization algorithms on such environments, using the island model of memetic algorithms for this purpose. These memetic algorithms are endowed with self-★ properties that give them the ability to work autonomously in order to optimize their performance and to react to the instability of computational resources. The main focus of this work is analyzing the performance of these memetic algorithms when the underlying computational substrate is not only volatile but also heterogeneous in terms of the computational power of each of its constituent nodes. To this end, we use a simulated environment that allows experimenting with different volatility rates and heterogeneity scenarios (that is, different distributions of computational power among computing nodes), and we study different strategies for distributing the search among nodes. We observe that the addition of self-scaling and self-healing properties makes the memetic algorithm very robust to both system instability and computational heterogeneity. Additionally, a strategy based on distributing single islands on each computational node is shown to perform globally better than placing many such islands on each of them (either proportionally to their computing power or subject to an intermediate compromise).
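The following toy sketch illustrates the island-model idea discussed above in a single process: one island per (simulated) node, heterogeneous node speeds, a simple local-search step for the memetic flavour, and ring migration of the best individual. The OneMax objective, operators, and parameters are illustrative only and do not reproduce the paper's self-scaling or self-healing mechanisms.

```python
# Simulated heterogeneous island-model memetic algorithm (illustrative stand-in).
import random

GENOME_LEN, POP, EPOCHS = 32, 20, 50
node_speeds = [1, 2, 4]                  # generations per epoch on each node (heterogeneity)

def fitness(ind):                        # OneMax as a stand-in objective
    return sum(ind)

def new_ind():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def evolve(pop, generations):
    for _ in range(generations):
        a, b = random.sample(pop, 2)
        cut = random.randrange(GENOME_LEN)
        child = a[:cut] + b[cut:]
        child[random.randrange(GENOME_LEN)] ^= 1          # mutation
        trial = child[:]                                   # memetic local search: keep a
        trial[random.randrange(GENOME_LEN)] ^= 1           # single improving bit flip
        if fitness(trial) > fitness(child):
            child = trial
        pop.remove(min(pop, key=fitness))                  # steady-state replacement
        pop.append(child)

islands = [[new_ind() for _ in range(POP)] for _ in node_speeds]
for epoch in range(EPOCHS):
    for isl, speed in zip(islands, node_speeds):
        evolve(isl, speed)
    bests = [max(isl, key=fitness) for isl in islands]     # ring migration of elites
    for k, isl in enumerate(islands):
        isl.append(bests[k - 1][:])
        isl.remove(min(isl, key=fitness))

print("best fitness:", max(fitness(max(isl, key=fitness)) for isl in islands))
```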


Author(s): Mamta Bisht, Richa Gupta

Script recognition is a necessary preliminary step for text recognition. In the deep learning era, two essential requirements for this task are the availability of a large labeled dataset for training and the computational resources to train models. When these requirements cannot be met, alternative methods must be considered. This provides an impetus to explore transfer learning, in which knowledge established by a model previously trained on a benchmark dataset can be reused on a smaller dataset for another task, saving computational power because only a fraction of the model's parameters need to be trained. Here we study two pre-trained models and fine-tune them for script classification tasks. First, the pre-trained VGG-16 model is fine-tuned on the publicly available CVSI-15 and MLe2e datasets for script recognition. Second, a model that performs well on a Devanagari handwritten character dataset is adopted and fine-tuned on the Kaggle Devanagari numeral dataset for numeral recognition. The performance of the proposed fine-tuned models depends on how similar the target dataset is to the original dataset, and it is analyzed with widely used optimizers.
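A minimal Keras sketch of the VGG-16 fine-tuning described here is shown below; the classification head, image size, class count, and optimizer settings are assumptions rather than the paper's exact configuration.

```python
# Hedged sketch: reuse frozen VGG-16 ImageNet features and train a small script-classification head.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16

NUM_SCRIPTS = 10                                   # number of script classes (assumed)
base = VGG16(include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False                             # train only the new head

x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dense(256, activation="relu")(x)
out = layers.Dense(NUM_SCRIPTS, activation="softmax")(x)

model = Model(base.input, out)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)   # datasets prepared elsewhere
```

Unfreezing the top convolutional block with a smaller learning rate is a common second stage when the target data differ more from ImageNet.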


2021, Vol. 12(1)
Author(s): Vishu Gupta, Kamal Choudhary, Francesca Tavazza, Carelyn Campbell, Wei-keng Liao, ...

Abstract Artificial intelligence (AI) and machine learning (ML) have been increasingly used in materials science to build predictive models and accelerate discovery. For selected properties, availability of large databases has also facilitated application of deep learning (DL) and transfer learning (TL). However, unavailability of large datasets for a majority of properties prohibits widespread application of DL/TL. We present a cross-property deep-transfer-learning framework that leverages models trained on large datasets to build models on small datasets of different properties. We test the proposed framework on 39 computational and two experimental datasets and find that the TL models with only elemental fractions as input outperform ML/DL models trained from scratch even when they are allowed to use physical attributes as input, for 27/39 (≈ 69%) computational and both the experimental datasets. We believe that the proposed framework can be widely useful to tackle the small data challenge in applying AI/ML in materials science.
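The sketch below conveys the cross-property transfer idea only in outline: pretrain a dense network on a large source-property dataset from elemental fractions, then reuse its hidden layers as the starting point for a small dataset of a different property. It is not the authors' code; the network widths, vector length, and training settings are assumptions.

```python
# Hedged sketch of cross-property transfer learning from elemental-fraction inputs.
import tensorflow as tf
from tensorflow.keras import layers, Model

N_ELEMENTS = 86                                  # length of elemental-fraction vector (assumed)

inp = layers.Input(shape=(N_ELEMENTS,))
x = inp
for width in (1024, 512, 256, 64):               # depth/widths are illustrative
    x = layers.Dense(width, activation="relu")(x)
feats = x                                        # shared body to be transferred

# 1) Pretrain on the large source property (e.g., formation energy).
source_model = Model(inp, layers.Dense(1, name="source_head")(feats))
source_model.compile(optimizer="adam", loss="mae")
# source_model.fit(X_large, y_source, epochs=50)

# 2) Transfer: keep the trained body, attach a fresh head, fine-tune on the small target set.
target_model = Model(inp, layers.Dense(1, name="target_head")(feats))
target_model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mae")
# target_model.fit(X_small, y_target, epochs=200)
```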


1996, Vol. 21(1), pp. 282-283
Author(s): M. O. Way, R. G. Wallace

Abstract The experiments were conducted at the TAMU Agricultural Research and Extension Center at Beaumont. Experiment I was water-seeded rice with continuous flood. The experiment was designed as an RCB with 6 treatments and 4 replications. Each plot was 15 ft × 8 ft and was surrounded by a metal barrier to prevent movement of insecticide. On 12 May plots were treated with Ordram 1 at 27 lb/acre and fertilized with urea at 110.5 lb N/acre, followed by a light incorporation into dry, cloddy soil (League) with a rake. Plots were then flooded and sown (12 May) with presprouted Gulfmont seed at 130 lb dry seed/acre. To prepare presprouted seed, dry seed was soaked in water for 24 h, then drained and allowed to air dry for 24 h before planting. Flood depth was about 4 inches, and rice emerged through water on 18 May, 6 d after planting. Karate treatments were applied with a 4-nozzle (tip size 800067, 50-mesh screen), hand-held, CO2-pressurized spray rig. Final spray volume was 30 gpa. Furadan was applied with a hand-held shaker jar at the rate and time shown in Table 1. On 12 Jun (25 d after emergence of rice through water), five 4-inch diam × 4-inch deep soil cores (each core containing at least 1 rice plant) were removed from each plot, washed, and immature RWW recovered. At maturity (24 Aug), plots were harvested with a small combine and yields adjusted to 12% moisture. Insect counts were transformed using √(x + 0.5), and all data were analyzed by 2-way ANOVA and, where appropriate, DMRT.
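For readers unfamiliar with the reported analysis pipeline, this is a hedged sketch of the square-root transform and two-way (treatment × block) ANOVA on the weevil counts. The data file and column names are hypothetical, and Tukey's HSD is shown only as a readily available substitute for the DMRT used in the paper.

```python
# Illustrative RCB analysis: sqrt(x + 0.5) transform, two-way ANOVA, mean separation.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("rww_cores.csv")             # hypothetical columns: treatment, block, count
df["t_count"] = np.sqrt(df["count"] + 0.5)    # sqrt(x + 0.5) transformation of insect counts

model = smf.ols("t_count ~ C(treatment) + C(block)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))        # randomized complete block ANOVA table

# Mean separation (Tukey HSD here; the paper used Duncan's multiple range test)
print(pairwise_tukeyhsd(df["t_count"], df["treatment"]))
```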


Author(s): Zainab Khyioon Abdalrdha

The mobile phone is an important environment for encrypting various multimedia (audio, image, and video), and performance depends on the type of algorithm used in the encryption process, since phones have limited memory and computational resources. The selected algorithm must therefore suit the mobile environment in terms of speed, safety, and flexibility, while keeping the image encryption process simple, safe, and computationally lightweight and efficient. In this paper, Hybrid Cube Encryption (HiSec) was used. When this algorithm was implemented in a smartphone environment, the results showed that images could easily be encrypted and the original images recovered, that only small computational resources were required, and that the algorithm was very effective for encrypting images on mobile phones. The proposed method was implemented in the mobile environment on the Android OS, was programmed in Java, and was tested on different types of mobile phones (such as Huawei Nova 2, Huawei Nova 7, HTC, NOT 8, Galaxy S 20, and HONOR).
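The internals of HiSec are not given in this abstract, so the sketch below uses AES-CTR from PyCryptodome purely as a stand-in to illustrate the round trip of encrypting an image's raw bytes and recovering the original; it is not the proposed Hybrid Cube Encryption algorithm, and the file name is a placeholder.

```python
# Stand-in demonstration of image encryption/decryption on limited hardware (AES-CTR, not HiSec).
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

key = get_random_bytes(16)                        # 128-bit key, a lightweight-friendly size
with open("photo.jpg", "rb") as f:                # hypothetical image captured on the phone
    plain = f.read()

enc = AES.new(key, AES.MODE_CTR)
ciphertext, nonce = enc.encrypt(plain), enc.nonce

dec = AES.new(key, AES.MODE_CTR, nonce=nonce)     # decryption recovers the original image bytes
assert dec.decrypt(ciphertext) == plain
```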


Webology, 2021, Vol. 18(2), pp. 439-448
Author(s): Parameswar Kanuparthi, Vaibhav Bejgam, V. Madhu Viswanatham

Agriculture is the primary sector of the Indian economy, contributing around 18 percent of overall GDP (Gross Domestic Product), and more than fifty percent of Indians come from an agricultural background. Agricultural production in India needs to increase rapidly because of the fast-growing population. Rice is the most important crop for most people in India, yet it is one of the crops most affected by disease, and the resulting reduced yields lead to losses for farmers. A major challenge in cultivating rice is infection by diseases driven by factors that include environmental conditions, the pesticides used, and natural disasters. Early detection of rice diseases can help farmers avoid such losses and achieve better yields. In this paper, we propose a new method of ensembling transfer learning models to detect rice plants and classify their diseases from images. Using this model, the three most common rice crop diseases are detected: Brown spot, Leaf smut, and Bacterial leaf blight. Transfer learning uses pre-trained models and generally gives better accuracy on image datasets, while ensembling (combining two or more models) reduces generalization error and makes the model more robust. The ensembling technique used in this paper is majority voting. We propose a novel model that ensembles three transfer learning models, InceptionV3, MobileNetV2, and DenseNet121, achieving an accuracy of 96.42%.
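A minimal sketch of the majority-voting ensemble described above follows. The three backbones match those named in the abstract, but the classification head, image size, fine-tuning details, and tie-breaking rule are assumptions, not the authors' implementation.

```python
# Hedged sketch: majority voting over three fine-tuned transfer-learning classifiers.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import InceptionV3, MobileNetV2, DenseNet121

NUM_CLASSES, IMG_SHAPE = 3, (224, 224, 3)     # Brown spot, Leaf smut, Bacterial leaf blight

def finetuned(backbone_cls):
    base = backbone_cls(include_top=False, weights="imagenet",
                        input_shape=IMG_SHAPE, pooling="avg")
    out = layers.Dense(NUM_CLASSES, activation="softmax")(base.output)
    return Model(base.input, out)             # each model would be fine-tuned separately

models = [finetuned(b) for b in (InceptionV3, MobileNetV2, DenseNet121)]

def majority_vote(batch):
    # Each row of `votes` holds one model's predicted class per image.
    votes = np.stack([m.predict(batch, verbose=0).argmax(axis=1) for m in models])
    # Per image, pick the class predicted by most models (ties fall to the lowest label).
    return np.apply_along_axis(
        lambda v: np.bincount(v, minlength=NUM_CLASSES).argmax(), 0, votes)
```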

