Identification of Plant Types by Leaf Textures Based on the Backpropagation Neural Network

Author(s):  
Taufik Hidayat ◽  
Asyaroh Ramadona Nilawati

The number of plant species in Indonesia is abundant; the richness of Indonesia's flora is beyond doubt. Almost every region in Indonesia has one or more distinctive plants that may not exist in other countries. Harnessing this potential diversity of tropical plant resources requires good management and utilization of biodiversity, and given such diversity, plant classification becomes a challenging task. The most common way to distinguish one plant from another is to identify the leaf of each plant. Leaf-based classification is an effective alternative because leaves are present year-round, whereas fruits and flowers may exist only at certain times. In this study, the researchers identify plants based on the textures of their leaves. Leaf feature extraction is done by calculating the area, the perimeter, and additional features of leaf images such as roundness and slenderness of shape. The extracted features are then used for training with a backpropagation neural network. The result of the training (the formation of the training set) is used to compute a recognition accuracy against which the feature values of the leaf-image dataset are matched. The identification of plant species based on leaf texture characteristics is expected to accelerate the process of plant classification based on leaf characteristics.
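The shape descriptors named above (area, perimeter, roundness, slenderness) can be sketched from a segmented binary leaf mask. The following is a minimal illustration in Python, assuming the leaf has already been segmented; the perimeter estimate is a simple boundary-pixel count, not necessarily the paper's exact method:

```python
import numpy as np

def shape_features(mask):
    """Simple shape descriptors from a binary leaf mask.

    roundness   = 4*pi*area / perimeter^2  (1.0 for a perfect circle)
    slenderness = ratio of the longer to the shorter bounding-box side
    """
    area = int(mask.sum())
    # Boundary pixels: foreground pixels with at least one background 4-neighbour.
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = int((mask & ~interior).sum())
    ys, xs = np.nonzero(mask)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    roundness = 4 * np.pi * area / perimeter ** 2
    slenderness = max(h, w) / min(h, w)
    return {"area": area, "perimeter": perimeter,
            "roundness": roundness, "slenderness": slenderness}

# A filled 20x20 square as a stand-in for a segmented leaf.
mask = np.zeros((32, 32), dtype=bool)
mask[6:26, 6:26] = True
feats = shape_features(mask)
```

A square gives a roundness of about 0.87, below the circle's 1.0, so the descriptor does discriminate between shapes of equal area.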

Author(s):  
Chih-Ta Yen ◽  
Jia-De Lin

This study employed wearable inertial sensors integrated with an activity-recognition algorithm to recognize six types of daily activities performed by humans, namely walking, ascending stairs, descending stairs, sitting, standing, and lying. The sensor system consisted of a microcontroller, a three-axis accelerometer, and a three-axis gyro; the algorithm involved collecting and normalizing the activity signals. To simplify the calculation process and to maximize the recognition accuracy, the data were preprocessed through linear discriminant analysis; this reduced their dimensionality and captured their features, thereby reducing the feature space of the accelerometer and gyro signals; they were then verified through the use of six classification algorithms. The new contribution is that, after feature extraction, the data classification results indicated that an artificial neural network was the most stable and effective of the six algorithms. In the experiment, 20 participants wore the sensors on their waists to record the aforementioned six types of daily activities and to verify the effectiveness of the sensors. According to the cross-validation results, the combination of linear discriminant analysis and an artificial neural network was the most stable classification algorithm for data generalization; its activity-recognition accuracy was 87.37% on the training data and 80.96% on the test data.
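The reduce-then-classify pipeline can be illustrated on synthetic data. The sketch below uses plain NumPy, substitutes a nearest-class-mean classifier for the paper's artificial neural network stage, and invents the feature dimensions purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for windowed accelerometer + gyro features:
# 6 activities, 60 windows each, 24 raw features per window.
n_cls, n_per, d = 6, 60, 24
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per, d))
               for c in range(n_cls)])
y = np.repeat(np.arange(n_cls), n_per)

# Fisher LDA: project onto at most (classes - 1) discriminant axes.
mean_all = X.mean(axis=0)
Sw = np.zeros((d, d))  # within-class scatter
Sb = np.zeros((d, d))  # between-class scatter
for c in range(n_cls):
    Xc = X[y == c]
    mc = Xc.mean(axis=0)
    Sw += (Xc - mc).T @ (Xc - mc)
    diff = (mc - mean_all)[:, None]
    Sb += len(Xc) * (diff @ diff.T)

eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
order = np.argsort(eigvals.real)[::-1][:n_cls - 1]
W = eigvecs.real[:, order]   # 24-D -> 5-D projection
X_lda = X @ W

# Nearest-class-mean classifier in the reduced space (a simple stand-in
# for the artificial neural network used in the study).
means = np.array([X_lda[y == c].mean(axis=0) for c in range(n_cls)])
pred = np.argmin(((X_lda[:, None, :] - means[None]) ** 2).sum(-1), axis=1)
acc = (pred == y).mean()
```

The projection keeps at most classes − 1 = 5 axes, which is the dimensionality reduction the study relies on to simplify the downstream classifier.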


2021 ◽  
Vol 3 (1) ◽  
pp. 96-107
Author(s):  
Budiman Rabbani

Abstract The camera is one of the tools used to collect images. Cameras are often used for the safety of homes, highways, and other locations. In this study, camera captures are used to detect fire, because fire is a safety hazard that can be controlled if caught early. Using the captured frames, the best backpropagation neural network model is sought by combining the local binary pattern (LBP) and Gray Level Co-occurrence Matrix (GLCM) feature extraction methods to detect fire. The performance of the model is evaluated with three metrics: accuracy, recall, and precision. The data in this study consisted of videos with variations of fire and non-fire footage. Based on the final results of the study, the best accuracy, recall, and precision obtained were 96%, 97%, and 97%, respectively. The validation process was then carried out using 30 videos, 15 fire and 15 non-fire, and obtained an accuracy of 86.6% with an average processing time of 6.029 minutes.
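Of the two texture descriptors the study combines, the local binary pattern is easy to sketch; below is a minimal 8-neighbour LBP in NumPy (the GLCM half and the fire-detection model itself are not reproduced here):

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour LBP: each interior pixel becomes an 8-bit code,
    one bit per neighbour, set when the neighbour >= the centre pixel."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = gray.shape
    center = gray[1:-1, 1:-1]
    code = np.zeros((h - 2, w - 2), dtype=np.int64)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neigh >= center).astype(np.int64) << bit
    return code

# A simple intensity gradient; every pixel gets the same LBP code because
# the same neighbours are always brighter than the centre.
gray = np.arange(36, dtype=np.int64).reshape(6, 6)
codes = lbp_image(gray)
hist = np.bincount(codes.ravel(), minlength=256)  # 256-bin texture feature
```

The 256-bin histogram is what would typically be fed, together with GLCM statistics, into the backpropagation network.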


Sensors ◽  
2018 ◽  
Vol 18 (9) ◽  
pp. 3039 ◽  
Author(s):  
Jiaqi Shao ◽  
Changwen Qu ◽  
Jianwei Li ◽  
Shujuan Peng

With the continuous development of the convolutional neural network (CNN) concept and other deep learning technologies, target recognition in Synthetic Aperture Radar (SAR) images has entered a new stage. At present, shallow CNNs with simple structures are mostly applied in SAR image target recognition, which greatly limits their feature extraction ability. Moreover, research on improving SAR image target recognition efficiency and on imbalanced data processing is relatively scarce. Thus, a lightweight CNN model for target recognition in SAR images is designed in this paper. First, based on the visual attention mechanism, a channel attention by-pass and a spatial attention by-pass are introduced into the network to enhance the feature extraction ability. Then, depthwise separable convolution is used to replace standard convolution to reduce the computation cost and improve the recognition efficiency. Finally, a new weighted distance measure loss function is introduced to weaken the adverse effect of data imbalance on the recognition accuracy for minority classes. A series of recognition experiments based on two open data sets, MSTAR and OpenSARShip, is implemented. Experimental results show that, compared with four recently proposed advanced networks, our network greatly reduces the model size and iteration time while maintaining recognition accuracy, and it effectively alleviates the adverse effects of data imbalance on recognition results.
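The saving from replacing standard convolution with depthwise separable convolution can be checked with simple parameter-count arithmetic; the layer sizes below are illustrative, not taken from the paper:

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (bias terms ignored)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k conv (one filter per input channel)
    followed by a 1 x 1 pointwise conv."""
    return c_in * k * k + c_in * c_out

# Example layer: 64 input channels, 128 output channels, 3x3 kernels.
std = conv_params(64, 128, 3)                 # 73,728 weights
sep = depthwise_separable_params(64, 128, 3)  # 576 + 8,192 = 8,768 weights
ratio = sep / std                             # roughly 1/8 of the standard cost
```

In general the ratio is 1/c_out + 1/k², which is why depthwise separable convolutions shrink both model size and iteration time for lightweight networks.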


Author(s):  
Rajesh K. V. N. ◽  
Lalitha Bhaskari D.

Plants are very important for the existence of human life. The total number of plant species is nearing 400 thousand as of date. With such a huge number of plant species, there is a need for intelligent systems for plant species recognition. The leaf is one of the most important and prominent parts of a plant and is available throughout the year. The leaf plays a major role in the identification of plants. Plant leaf recognition (PLR) is the process of automatically recognizing the plant species based on an image of the plant's leaf. Many researchers have worked in this area of PLR using image processing, feature extraction, machine learning, and convolutional neural network techniques. As a part of this chapter, the authors review several such recent methods of PLR and present the work done by various authors in the past five years in this area. The authors propose a generalized architecture for PLR based on this study and describe the major steps in PLR in detail. The authors then present a brief summary of the work that they are doing in this area of PLR for Ayurvedic plants.
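The generalized PLR architecture the chapter describes reduces to a preprocessing, feature extraction, and classification chain. A minimal skeleton, in which every function name and stand-in value is an illustrative placeholder rather than the chapter's own design:

```python
def plant_leaf_recognition(image, preprocess, extract, classify):
    """Generic PLR pipeline: each stage is pluggable."""
    x = preprocess(image)      # e.g. resize, denoise, segment the leaf
    features = extract(x)      # e.g. shape/texture descriptors or CNN features
    return classify(features)  # e.g. a trained classifier -> species label

# Toy stand-ins, only to show the data flow through the stages:
label = plant_leaf_recognition(
    image=[[0, 1], [1, 0]],
    preprocess=lambda im: im,
    extract=lambda im: sum(sum(row) for row in im),
    classify=lambda f: "species-A" if f > 1 else "species-B",
)
```

The value of phrasing the pipeline this way is that classical descriptors and CNN feature extractors slot into the same `extract` stage, which matches how the surveyed methods differ from one another.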


Author(s):  
Juan Ran ◽  
Yu Shi ◽  
Jinhao Yu ◽  
Delong Li

This paper discusses how to efficiently recognize flowers with a convolutional neural network (CNN) using multiple features. Our proposed work consists of three phases: segmentation by Otsu thresholding with particle swarm optimization algorithms; feature extraction of color, shape, and texture; and recognition with the LeNet-5 neural network. In the feature extraction, an improved H component with the definition of the WGB value is applied to extract the color feature, and a new algorithm based on the local binary pattern (LBP) is proposed to enhance the accuracy of texture extraction. In addition, we replace ReLU with Mish as the activation function in the network design, which increases the accuracy by 8% according to our comparison. The Oxford-102 and Oxford-17 datasets are adopted for benchmarking. The experimental results show that the combination of color and texture features yields the highest recognition accuracy: 92.56% on Oxford-102 and 93% on Oxford-17.
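The Mish activation that the paper swaps in for ReLU is defined as x · tanh(softplus(x)). A minimal NumPy sketch, showing the key difference that Mish passes small negative values where ReLU clips them to zero:

```python
import numpy as np

def mish(x):
    # Mish(x) = x * tanh(softplus(x)),  softplus(x) = ln(1 + e^x)
    return x * np.tanh(np.log1p(np.exp(x)))

def relu(x):
    return np.maximum(0.0, x)

# Mish is smooth and non-monotonic near zero: negative inputs yield
# small negative outputs instead of a hard zero, which keeps gradients
# flowing where ReLU would be dead.
m_neg, r_neg = mish(-5.0), relu(-5.0)
```

For large positive inputs Mish approaches the identity, so it behaves like ReLU where ReLU already works well.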


GigaScience ◽  
2019 ◽  
Vol 8 (11) ◽  
Author(s):  
Robail Yasrab ◽  
Jonathan A Atkinson ◽  
Darren M Wells ◽  
Andrew P French ◽  
Tony P Pridmore ◽  
...  

Abstract Background In recent years quantitative analysis of root growth has become increasingly important as a way to explore the influence of abiotic stress such as high temperature and drought on a plant's ability to take up water and nutrients. Segmentation and feature extraction of plant roots from images presents a significant computer vision challenge. Root images contain complicated structures, variations in size, background, occlusion, clutter and variation in lighting conditions. We present a new image analysis approach that provides fully automatic extraction of complex root system architectures from a range of plant species in varied imaging set-ups. Driven by modern deep-learning approaches, RootNav 2.0 replaces previously manual and semi-automatic feature extraction with an extremely deep multi-task convolutional neural network architecture. The network also locates seeds, first order and second order root tips to drive a search algorithm seeking optimal paths throughout the image, extracting accurate architectures without user interaction. Results We develop and train a novel deep network architecture to explicitly combine local pixel information with global scene information in order to accurately segment small root features across high-resolution images. The proposed method was evaluated on images of wheat (Triticum aestivum L.) from a seedling assay. Compared with semi-automatic analysis via the original RootNav tool, the proposed method demonstrated comparable accuracy, with a 10-fold increase in speed. The network was able to adapt to different plant species via transfer learning, offering similar accuracy when transferred to an Arabidopsis thaliana plate assay. A final instance of transfer learning, to images of Brassica napus from a hydroponic assay, still demonstrated good accuracy despite many fewer training images. Conclusions We present RootNav 2.0, a new approach to root image analysis driven by a deep neural network. 
The tool can be adapted to new image domains with a reduced number of images, and offers substantial speed improvements over semi-automatic and manual approaches. The tool outputs root architectures in the widely accepted RSML standard, for which numerous analysis packages exist (http://rootsystemml.github.io/), as well as segmentation masks compatible with other automated measurement tools. The tool will provide researchers with the ability to analyse root systems at larger scales than ever before, at a time when large-scale genomic studies have made this more important than ever.
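Because the tool emits RSML, an XML-based format, downstream analysis can read root geometry with a standard XML parser. The snippet below parses a deliberately simplified RSML-like document (real RootNav 2.0 output carries more metadata and attributes) and sums the segment lengths of a root polyline:

```python
import xml.etree.ElementTree as ET

# Minimal, illustrative RSML-like document; element names follow the
# RSML convention of scene/plant/root/geometry/polyline/point.
doc = """<rsml><scene><plant>
  <root id="1"><geometry><polyline>
    <point x="10" y="5"/><point x="11" y="20"/><point x="12" y="40"/>
  </polyline></geometry></root>
</plant></scene></rsml>"""

rsml = ET.fromstring(doc)
points = [(float(p.get("x")), float(p.get("y")))
          for p in rsml.iter("point")]

# Root length as the sum of Euclidean segment lengths along the polyline.
length = sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
             for (x1, y1), (x2, y2) in zip(points, points[1:]))
```

This is the kind of trait extraction (total root length, here) that the RSML-aware analysis packages referenced above automate at scale.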


Author(s):  
Mrudula Nimbarte ◽  
Kishor Bhoyar

In recent years, face recognition across aging has become a popular and challenging task in the area of face recognition. Many researchers have contributed to this area, but there is still a significant gap to fill. The selection of feature extraction and classification algorithms plays an important role here. Deep learning with convolutional neural networks provides a combination of feature extraction and classification in a single structure. In this paper, we present a novel 7-layer CNN architecture for recognizing facial images across aging. We have done extensive experimentation to test the performance of the proposed system using two standard datasets, FGNET and MORPH (Album II). The Rank-1 recognition accuracy of our proposed system is 76.6% on FGNET and 92.5% on MORPH (Album II). Experimental results show a significant improvement over the available state of the art with the proposed CNN architecture and classifier.
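When stacking convolution and pooling layers as in a 7-layer CNN, the feature-map sizes follow a standard formula. The trace below uses hypothetical kernel sizes and channel counts (the paper's exact configuration is not reproduced here) to show how spatial size shrinks toward the fully connected layers:

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a convolution or pooling layer."""
    return (size + 2 * pad - kernel) // stride + 1

# Hypothetical stack: conv5 -> pool2 -> conv5 -> pool2 -> conv3 -> pool2 -> FC
s = 128                        # assumed 128 x 128 input face image
s = conv_out(s, 5)             # conv 5x5      -> 124
s = conv_out(s, 2, stride=2)   # max-pool 2x2  -> 62
s = conv_out(s, 5)             # conv 5x5      -> 58
s = conv_out(s, 2, stride=2)   # max-pool 2x2  -> 29
s = conv_out(s, 3)             # conv 3x3      -> 27
s = conv_out(s, 2, stride=2)   # max-pool 2x2  -> 13
flattened = s * s * 64         # assumed 64 channels before the FC layer
```

Tracing sizes like this is how one checks that a proposed architecture's convolutional trunk and fully connected head actually fit together.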


Author(s):  
Elviawaty Muisa Zamzami ◽  
Septi Hayanti ◽  
Erna Budhiarti Nababan

Handwritten character recognition is considered a complex problem since each person's handwriting has its own characteristics. The data used for this research were photos of handwriting or scanned handwriting. In this research, a Backpropagation Neural Network (BPNN) was used to recognize handwritten Batak Toba characters, where in the preprocessing stage feature extraction was done using Diagonal Based Feature Extraction (DBFE) to obtain feature values. These feature values were then used as inputs to the BPNN. The total number of data used was 190, of which 114 were used for the training process and the remaining 76 for testing. From the testing process carried out, the accuracy obtained was 87.19%.
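Diagonal Based Feature Extraction can be sketched as follows: the character image is split into fixed-size zones; within each zone, the pixel values along every diagonal are averaged; and those diagonal means are averaged into one feature per zone. A minimal NumPy illustration, where the zone size and test image are invented for the example:

```python
import numpy as np

def diagonal_features(img, zone=10):
    """DBFE sketch: one feature per zone, computed as the mean of the
    per-diagonal pixel averages inside that zone."""
    h, w = img.shape
    feats = []
    for i in range(0, h, zone):
        for j in range(0, w, zone):
            block = img[i:i + zone, j:j + zone]
            diag_means = [block.diagonal(k).mean()
                          for k in range(-block.shape[0] + 1, block.shape[1])]
            feats.append(float(np.mean(diag_means)))
    return np.array(feats)

# A crude vertical stroke standing in for a binarized character image.
img = np.zeros((40, 40))
img[:, 18:22] = 1.0
f = diagonal_features(img)   # 4 x 4 zones -> 16-dimensional feature vector
```

The resulting low-dimensional vector (16 values here) is what would be fed as input to the BPNN classifier.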

