Advanced Agro Field & Crop Surveillance Systems

With agriculture being a major driver of the Indian economy, applying the latest digital innovations to solve critical agricultural challenges is vital to improving productivity and lowering the cost of operations. The primary productivity index of agriculture depends directly on how well crops escape attacks by pests or external intruders. Applying advanced machine learning techniques from computer vision and multiple-object-detection algorithms to agricultural surveillance has generated great interest among farming communities. In this paper, an approach to agro field surveillance is proposed that deploys sensors to monitor the whole cultivation area, fixes appropriate cameras, and detects motion in the field. An orchestrated deployment of sensing devices detects motion, captures video on demand, and passes it to deep learning algorithms for further analysis. The model is developed and trained with TensorFlow and Keras on Google Colab, a Jupyter notebook environment that runs entirely in the Google cloud and requires minimal setup. To evaluate the model, the authors create a test set of 200 captured events and more than 60,000 images that are relevant to this scope and publicly available for training CNN-based deep learning models.
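A minimal sketch of such an event-driven pipeline is given below, assuming OpenCV frame differencing as the motion sensor and a pre-trained Keras classifier; the model file name `crop_monitor.h5` and the thresholds are illustrative, not the authors' exact setup:

```python
# Sketch: motion-triggered capture feeding a Keras CNN classifier.
# "crop_monitor.h5" is a hypothetical pre-trained model, not the paper's.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

model = load_model("crop_monitor.h5")  # e.g., intruder/pest/normal classifier
cap = cv2.VideoCapture(0)              # field camera stream

_, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Simple motion detector: threshold the inter-frame difference.
    delta = cv2.absdiff(prev_gray, gray)
    prev_gray = gray
    moving = cv2.threshold(delta, 25, 255, cv2.THRESH_BINARY)[1]
    if cv2.countNonZero(moving) < 5000:
        continue  # no significant motion; keep the pipeline idle
    # Motion detected: pass the frame to the CNN for classification.
    x = cv2.resize(frame, (224, 224)).astype(np.float32) / 255.0
    probs = model.predict(x[np.newaxis, ...], verbose=0)[0]
    print("event class probabilities:", probs)
```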

Complexity, 2020, Vol. 2020, pp. 1-12
Author(s): Syed Atif Ali Shah, Irfan Uddin, Furqan Aziz, Shafiq Ahmad, Mahmoud Ahmad Al-Khasawneh, ...

Organizations can grow, succeed, and sustain themselves if their employees are committed. The main assets of an organization are the employees who give it the required number of hours per month, in other words, those who are punctual in their attendance. Absenteeism from work is a multibillion-dollar problem: it costs money and decreases revenue. At the time of hiring, organizations have no objective mechanism to predict whether an employee will be punctual or habitually absent. For some organizations it can be very difficult to deal with unpunctual employees, as firing may be impossible or may carry a huge cost. In this paper, we propose neural network and deep learning algorithms that predict employees' punctuality behavior at the workplace. The efficacy of the proposed method is compared with traditional machine learning techniques; the results indicate 90.6% performance for a Deep Neural Network, versus 73.3% for a single-layer Neural Network and 82% for Decision Tree, SVM, and Random Forest. The proposed model gives organizations a useful mechanism for assessing employee behavior at hiring time and can reduce the cost of paying inefficient or habitually absent employees. This paper is the first study of its kind to analyze patterns of employee absenteeism using deep learning algorithms, and it helps organizations further improve employees' quality of life and hence reduce absenteeism.
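As a sketch of the kind of deep neural network described, the following Keras model treats punctuality prediction as binary classification on tabular HR features; the layer sizes, feature count, and dummy data are assumptions, not the paper's reported architecture:

```python
# Sketch of a deep neural network for absenteeism prediction (Keras).
# Layer sizes and feature count are illustrative, not the paper's values.
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Dropout

n_features = 20  # hypothetical number of HR attributes per employee

model = Sequential([
    Dense(64, activation="relu", input_shape=(n_features,)),
    Dropout(0.3),
    Dense(32, activation="relu"),
    Dense(1, activation="sigmoid"),  # P(habitually absent)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# X: employee records, y: 1 = habitually absent, 0 = punctual (dummy data)
X = np.random.rand(500, n_features)
y = np.random.randint(0, 2, size=500)
model.fit(X, y, epochs=10, batch_size=32, validation_split=0.2, verbose=0)
```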


2022, Vol. 6 (1), p. 9
Author(s): Dweepna Garg, Priyanka Jain, Ketan Kotecha, Parth Goel, Vijayakumar Varadarajan

In recent years, face detection has received considerable attention in computer vision, using both traditional machine learning and deep learning techniques. Deep learning is used to build the most recent and powerful face detection algorithms. However, partial face detection has yet to achieve remarkable performance. Partial faces are occluded by hair, hats, glasses, hands, mobile phones, and side-angle capture, so fewer facial features can be identified in such images. In this paper, we present a deep convolutional neural network face detection method using an anchor box selection strategy. We limited the number of anchor boxes and scales, choosing only those relevant to face shapes. The proposed model was trained and tested on a popular and challenging face detection benchmark, the Face Detection Dataset and Benchmark (FDDB), and can also detect partially covered faces with better accuracy and precision. Extensive experiments were performed, with evaluation metrics including accuracy, precision, recall, F1 score, inference time, and FPS. The results show that the proposed model detects faces, including occluded ones, more precisely than other state-of-the-art approaches, achieving 94.8% accuracy and 98.7% precision on the FDDB dataset at 21 frames per second (FPS).
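A sketch of the anchor-reduction idea follows, with illustrative scales and near-square aspect ratios standing in for the paper's exact values:

```python
# Sketch: generating a reduced anchor set tuned to face shapes.
# Scales and ratios below are illustrative assumptions.
import numpy as np

def face_anchors(feature_size, stride, scales=(16, 32, 64),
                 ratios=(1.0, 1.3)):  # near-square boxes suit faces
    """Return (N, 4) anchors as (x1, y1, x2, y2) on the input image."""
    anchors = []
    for i in range(feature_size):
        for j in range(feature_size):
            cx, cy = (j + 0.5) * stride, (i + 0.5) * stride
            for s in scales:
                for r in ratios:
                    w, h = s * np.sqrt(1.0 / r), s * np.sqrt(r)
                    anchors.append([cx - w / 2, cy - h / 2,
                                    cx + w / 2, cy + h / 2])
    return np.asarray(anchors, dtype=np.float32)

# 3 scales x 2 ratios = 6 anchors per cell, versus the 9+ typical in
# generic detectors, pruning proposals that cannot match a face.
print(face_anchors(feature_size=7, stride=32).shape)  # (294, 4)
```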


Mathematics, 2020, Vol. 8 (12), p. 2258
Author(s): Madhab Raj Joshi, Lewis Nkenyereye, Gyanendra Prasad Joshi, S. M. Riazul Islam, Mohammad Abdullah-Al-Wadud, ...

Enhancement of cultural heritage such as historical images is crucial to safeguarding the diversity of cultures. Automated colorization of black-and-white images has been the subject of extensive research in computer vision and machine learning. Our research addresses the problem of generating a plausible colored photograph from ancient, historical black-and-white images of Nepal using deep learning techniques, without direct human intervention. Motivated by the recent success of deep learning in image processing, a feed-forward deep Convolutional Neural Network (CNN), combined with Inception-ResNetV2, is trained on sets of sample images using back-propagation to recognize patterns in RGB and grayscale values. The trained network then predicts the a* and b* chroma channels given the grayscale L channel of test images. The CNN vividly colorizes images with the help of a fusion layer that accounts for both local and global features. Two objective measures, Mean Squared Error (MSE) and Peak Signal-to-Noise Ratio (PSNR), are employed for objective quality assessment between the estimated color image and its ground truth. The model is trained on a dataset we created of 1.2K historical images, comprising old and ancient photographs of Nepal at 256 × 256 resolution. The loss (MSE), PSNR, and accuracy of the model are found to be 6.08%, 34.65 dB, and 75.23%, respectively. Beyond the training results, public acceptance, or subjective validation, of the generated images is assessed by means of a user study, in which the model shows 41.71% naturalness in its colorization results.
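The fusion architecture can be sketched as follows: a local encoder on the L channel, the Inception-ResNetV2 embedding tiled over the spatial grid as global features, and a decoder predicting a*/b*; the layer sizes are illustrative assumptions, not the paper's exact configuration:

```python
# Sketch of the fusion colorization network (Keras functional API).
# Layer sizes are illustrative; only the overall structure follows the text.
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import InceptionResNetV2

L_in = layers.Input(shape=(256, 256, 1))   # grayscale L channel
emb_in = layers.Input(shape=(1000,))       # Inception-ResNetV2 output

# Local-feature encoder: 256x256 -> 32x32
x = L_in
for filters in (64, 128, 256):
    x = layers.Conv2D(filters, 3, strides=2, padding="same",
                      activation="relu")(x)

# Fusion layer: tile the global embedding over the 32x32 grid, concatenate.
g = layers.Dense(256, activation="relu")(emb_in)
g = layers.RepeatVector(32 * 32)(g)
g = layers.Reshape((32, 32, 256))(g)
x = layers.Concatenate()([x, g])

# Decoder: upsample back to 256x256, predict 2 chroma channels in [-1, 1].
for filters in (128, 64, 32):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.UpSampling2D()(x)
ab_out = layers.Conv2D(2, 3, padding="same", activation="tanh")(x)

model = Model([L_in, emb_in], ab_out)
model.compile(optimizer="adam", loss="mse")  # MSE objective, as in the paper

# The global embedding would come from the feature network, e.g.:
feat_net = InceptionResNetV2(weights=None)  # 1000-way output feeds emb_in
# (grayscale replicated to 3 channels and resized to 299x299 beforehand)
```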


Vibration, 2021, Vol. 4 (2), pp. 341-356
Author(s): Jessada Sresakoolchai, Sakdirat Kaewunruen

Various techniques have been developed to detect railway defects, and one popular technique is machine learning. This unprecedented study applies deep learning, a branch of machine learning, to detect and evaluate the severity of combined rail defects, namely settlement and dipped joints. The features used to detect and evaluate the severity of the combined defects are axle box accelerations simulated with D-Track, a verified rolling stock dynamic behavior simulator. A total of 1650 simulations are run to generate numerical data. The deep learning techniques used are the deep neural network (DNN), convolutional neural network (CNN), and recurrent neural network (RNN). Simulated data are used in two ways: simplified data and raw data. Simplified data are used to develop the DNN model, while raw data are used to develop the CNN and RNN models. For the simplified data, features are extracted from the raw data: the weight of the rolling stock, its speed, and three peak and three bottom accelerations from each of two wheels, giving 14 features in total for the DNN model. For the raw data, time-domain accelerations are used directly to develop the CNN and RNN models without processing or data extraction. Hyperparameter tuning via grid search is performed to ensure that the performance of each model is optimized. To detect the combined defects, the study proposes two approaches: the first uses one model to detect settlement and dipped joints together, and the second uses two models to detect them separately. The results show that the CNN models of both approaches provide the same accuracy of 99%, so one model is sufficient to detect settlement and dipped joints. To evaluate the severity of the combined defects, the study applies both classification and regression: classification categorizes defects into light, medium, and severe classes, and regression estimates the size of defects. The CNN model is found suitable for evaluating dipped joint severity, with an accuracy of 84% and a mean absolute error (MAE) of 1.25 mm, while the RNN model is suitable for evaluating settlement severity, with an accuracy of 99% and an MAE of 1.58 mm.
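A sketch of the raw-data branch, assuming a 1-D CNN over windowed axle-box acceleration traces with one channel per wheel; the window length, layer sizes, and dummy data are illustrative, not the D-Track outputs:

```python
# Sketch: 1-D CNN over raw axle-box accelerations, multi-label output
# [settlement, dipped joint]. Shapes and layers are illustrative.
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, Flatten, Dense

window, channels = 2000, 2   # time steps x two wheels (assumed shape)

model = Sequential([
    Conv1D(32, 7, activation="relu", input_shape=(window, channels)),
    MaxPooling1D(4),
    Conv1D(64, 5, activation="relu"),
    MaxPooling1D(4),
    Flatten(),
    Dense(64, activation="relu"),
    Dense(2, activation="sigmoid"),  # P(settlement), P(dipped joint)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Dummy stand-in for the simulated acceleration traces:
X = np.random.randn(100, window, channels).astype("float32")
y = np.random.randint(0, 2, size=(100, 2))
model.fit(X, y, epochs=2, verbose=0)
```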


2021, pp. 1-55
Author(s): Emma A. H. Michie, Behzad Alaei, Alvar Braathen

Generating an accurate model of the subsurface is crucial for assessing the feasibility of a CO2 storage site. In particular, how faults are interpreted is likely to influence the predicted capacity and integrity of the reservoir, whether through identifying high-risk areas along the fault where fluid is likely to flow across it, or by assessing the fault's reactivation potential under increased pressure, which would cause fluid to flow up the fault. New technologies such as Deep Learning allow users to interpret faults effortlessly and far more quickly. These Deep Learning techniques use Neural Networks to compute areas where faults are likely to occur. Although these new technologies may be attractive for their reduced interpretation time, it is important to understand the inherent uncertainties in their ability to predict accurate fault geometries. Here, we compare Deep Learning fault interpretation with manual fault interpretation and see distinct differences for faults where significant ambiguity exists due to poor seismic resolution at the fault: Deep Learning methods produce increased irregularity compared with conventional manual interpretation. This can lead to significant differences in the resulting analyses, such as fault reactivation potential. Conversely, well-imaged faults show close similarity between the fault surfaces produced by Deep Learning and manual interpretation, and hence close similarity in any derived attributes and fault analyses.
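As a sketch of how such a Neural-Network fault predictor might be applied patch-wise to a seismic section; the model file `fault_net.h5`, the input file, the patch size, and the overlap-averaging scheme are hypothetical, not the workflow evaluated in this study:

```python
# Sketch: sliding-window fault-probability prediction on a 2-D seismic
# section with a pre-trained CNN ("fault_net.h5" is hypothetical).
import numpy as np
from tensorflow.keras.models import load_model

net = load_model("fault_net.h5")           # patch -> P(fault)
section = np.load("seismic_section.npy")   # (depth, trace) amplitudes
patch, stride = 64, 32

prob = np.zeros_like(section, dtype=np.float32)
count = np.zeros_like(section, dtype=np.float32)
for i in range(0, section.shape[0] - patch + 1, stride):
    for j in range(0, section.shape[1] - patch + 1, stride):
        p = section[i:i + patch, j:j + patch]
        score = net.predict(p[np.newaxis, ..., np.newaxis], verbose=0)[0, 0]
        prob[i:i + patch, j:j + patch] += score
        count[i:i + patch, j:j + patch] += 1

# Average overlapping patch scores into a fault-likelihood map.
fault_likelihood = prob / np.maximum(count, 1)
```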


Author(s): V Umarani, A Julian, J Deepa

Sentiment analysis has gained considerable attention from researchers in recent years because it is widely applied across application domains such as business, government, education, sports, tourism, biomedicine, and telecommunication services. Sentiment analysis is an automated computational method for studying and evaluating sentiments, feelings, and emotions expressed in comments, feedback, or critiques. The process can be automated using machine learning techniques, which analyze text patterns quickly; supervised machine learning is the most widely used mechanism. The proposed work discusses the flow of the sentiment analysis process and investigates common supervised machine learning techniques, such as Multinomial Naive Bayes, Bernoulli Naive Bayes, logistic regression, support vector machine, random forest, K-nearest neighbor, and decision tree, as well as deep learning techniques, such as Long Short-Term Memory and Convolutional Neural Network. The work examines these learning methods on a standard dataset; the experimental results report the performance of the various classifiers in terms of precision, recall, F1-score, ROC curve, accuracy, running time, and k-fold cross-validation, helping the reader appreciate the novelty of the several deep learning techniques and giving the user an overview of how to choose the right technique for their application.
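A sketch of such a classifier comparison with scikit-learn, using TF-IDF features and cross-validated F1; the toy corpus and settings are placeholders, not the paper's benchmark dataset or results:

```python
# Sketch: comparing supervised sentiment classifiers on TF-IDF features
# with k-fold cross-validation. The corpus here is a toy stand-in.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB, BernoulliNB
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

texts = ["great service", "terrible support", "loved it", "awful product"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

classifiers = {
    "MultinomialNB": MultinomialNB(),
    "BernoulliNB": BernoulliNB(),
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "LinearSVC": LinearSVC(),
    "RandomForest": RandomForestClassifier(),
    "KNN": KNeighborsClassifier(n_neighbors=1),
    "DecisionTree": DecisionTreeClassifier(),
}
for name, clf in classifiers.items():
    pipe = make_pipeline(TfidfVectorizer(), clf)
    scores = cross_val_score(pipe, texts, labels, cv=2, scoring="f1")
    print(f"{name}: mean F1 = {scores.mean():.2f}")
```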


Polymers, 2021, Vol. 13 (18), p. 3100
Author(s): Anusha Mairpady, Abdel-Hamid I. Mourad, Mohammad Sayem Mozumder

The selection of nanofillers and compatibilizing agents, and their size and concentration, are crucial in the design of durable nanobiocomposites with maximized mechanical properties (i.e., fracture strength (FS), yield strength (YS), and Young's modulus (YM)). Statistical optimization of the key design factors has therefore become extremely important to minimize the number of experimental runs and the cost involved. In this study, both statistical techniques (analysis of variance (ANOVA) and response surface methodology (RSM)) and artificial intelligence-based machine learning techniques (artificial neural network (ANN) and genetic algorithm (GA)) were used to optimize the concentrations of nanofillers and compatibilizing agents in injection-molded HDPE nanocomposites. Initially, through ANOVA, the concentrations of TiO2 and cellulose nanocrystals (CNCs), and their combinations, were found to be the major factors in improving the durability of the HDPE nanocomposites. The data were then modeled and predicted using RSM, ANN, and their combinations with a genetic algorithm (RSM-GA and ANN-GA). Finally, to minimize the risk of converging to local optima, an ANN-GA hybrid technique was implemented to optimize multiple responses, developing the nonlinear relationship between the factors (the concentrations of TiO2 and CNCs) and the responses (FS, YS, and YM) with minimum error and regression values above 95%.
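A sketch of the ANN-GA hybrid idea: fit an ANN surrogate from filler concentrations to mechanical properties, then search the surrogate's input space with a simple genetic algorithm; the training data, concentration bounds, and GA settings below are illustrative assumptions, not the study's values:

```python
# Sketch of an ANN-GA hybrid: ANN surrogate + simple genetic algorithm.
# Data, bounds, and GA parameters are illustrative placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Surrogate: [TiO2 %, CNC %] -> [FS, YS, YM] (dummy training data here).
X = rng.uniform([0, 0], [5, 5], size=(60, 2))
y = np.column_stack([X.sum(1), X[:, 0] * 2, X[:, 1] * 3])  # placeholders
ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000).fit(X, y)

def fitness(pop):
    # Maximize the (equally weighted) sum of predicted properties.
    return ann.predict(pop).sum(axis=1)

pop = rng.uniform([0, 0], [5, 5], size=(40, 2))  # initial population
for _ in range(50):                               # GA generations
    f = fitness(pop)
    parents = pop[np.argsort(f)[-20:]]            # selection: fittest half
    kids = (parents[rng.integers(0, 20, 20)]
            + parents[rng.integers(0, 20, 20)]) / 2   # crossover: blend
    kids += rng.normal(0, 0.1, kids.shape)             # mutation
    pop = np.clip(np.vstack([parents, kids]), 0, 5)    # respect bounds

best = pop[np.argmax(fitness(pop))]
print("optimal [TiO2 %, CNC %]:", best)
```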

