Identification of Rubber Leaf Diseases Based on Neural Network Concept

Author(s):  
S.R. Shwetha ◽  
Shivaprakash Koliwad ◽  
C. Srikrishna Shastri ◽  
J. Jayanth
2001 ◽  
Vol 72 (1) ◽  
pp. 513-516 ◽  
Author(s):  
Young-Mu Jeon ◽  
Yong-Su Na ◽  
Myung-Rak Kim ◽  
Y. S. Hwang

10.29007/xwg4 ◽  
2018 ◽  
Author(s):  
Ripal Patel ◽  
Shubha Pandey ◽  
Chirag Patel ◽  
Robinson Paul

Flicker is one of the most common and most objectionable artifacts in video signal processing, distorting the transmitted frames of a video sequence. To avoid such degradation, this research presents a technique for detecting flickering frames in a video. Earlier methods removed flicker by thresholding the difference between consecutive frames and then locating the flickering frame. The method proposed here instead identifies flickering frames using a neural network. The advantage of this approach is that it eliminates the tedious threshold calculation, simplifying the computation while improving accuracy.
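A minimal sketch of that contrast in Python/PyTorch: consecutive-frame statistics are fed to a small network that flags flickering frames, replacing a hand-tuned threshold. The chosen features (mean absolute frame difference and global brightness jump), the network size, and the decision cutoff are illustrative assumptions rather than the authors' exact design, and the classifier would still need to be trained on labelled frame pairs.

```python
# Illustrative sketch: flag flickering frames with a small neural network
# instead of a hand-tuned threshold on consecutive-frame differences.
# Features, network size, and cutoff are assumptions for illustration.
import numpy as np
import torch
import torch.nn as nn

def frame_features(frames):
    """frames: (T, H, W) grayscale array -> (T-1, 2) features per frame pair:
    mean absolute difference and global brightness jump."""
    frames = frames.astype(np.float32)
    mad = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))
    jump = np.abs(np.diff(frames.mean(axis=(1, 2))))
    return np.stack([mad, jump], axis=1)

classifier = nn.Sequential(          # small MLP in place of a fixed threshold
    nn.Linear(2, 16), nn.ReLU(),
    nn.Linear(16, 1), nn.Sigmoid(),  # P(this frame pair shows flicker)
)

def detect_flicker(frames, cutoff=0.5):
    feats = torch.from_numpy(frame_features(frames))
    with torch.no_grad():
        probs = classifier(feats).squeeze(1)
    # +1 maps a pair index to the index of the later (flickering) frame
    return (probs > cutoff).nonzero(as_tuple=True)[0] + 1
```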


2020 ◽  
Vol 10 (4) ◽  
pp. 1479 ◽  
Author(s):  
Sandeli Priyanwada Kasthuri Arachchi ◽  
Timothy K. Shih ◽  
Noorkholis Luthfil Hakim

Video classification is an essential process for analyzing the pervasive semantic information of video content in computer vision. Traditional hand-crafted features are insufficient for classifying complex video information because visually similar content can appear under different illumination conditions. Prior studies of video classification focused on the relationships within the standalone streams themselves. In this paper, leveraging deep learning methodologies, we propose a two-stream neural network concept named state-exchanging long short-term memory (SE-LSTM). With its spatial-motion state-exchanging mechanism, the SE-LSTM can classify dynamic patterns in videos using appearance and motion features. The SE-LSTM extends the general-purpose LSTM by exchanging information with the previous cell states of both the appearance and motion streams. We propose a novel two-stream model, Dual-CNNSELSTM, that combines the SE-LSTM concept with a Convolutional Neural Network, and we use various video datasets to validate the proposed architecture. The experimental results demonstrate that the proposed two-stream Dual-CNNSELSTM architecture significantly outperforms the compared models, achieving accuracies of 81.62%, 79.87%, and 69.86% on the hand gestures, fireworks displays, and HMDB51 datasets, respectively. Furthermore, the overall results indicate that the proposed model is best suited to classifying dynamic patterns against static backgrounds.
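The state-exchanging idea can be sketched roughly as two LSTM cells, one per stream, whose previous cell states are blended before each update. The averaging rule used for the exchange, the feature dimensions, and the final concatenation are illustrative assumptions rather than the paper's exact SE-LSTM formulation; per-frame appearance and motion features would come from CNN backbones (e.g., over RGB frames and optical flow).

```python
# Rough sketch of a two-stream state-exchanging LSTM; the averaging
# exchange rule and the dimensions are assumptions, not the exact SE-LSTM.
import torch
import torch.nn as nn

class TwoStreamSELSTM(nn.Module):
    def __init__(self, feat_dim=512, hidden=256, num_classes=51):
        super().__init__()
        self.app_cell = nn.LSTMCell(feat_dim, hidden)  # appearance (RGB) stream
        self.mot_cell = nn.LSTMCell(feat_dim, hidden)  # motion (optical-flow) stream
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, app_seq, mot_seq):
        # app_seq, mot_seq: (T, B, feat_dim) per-frame CNN features
        T, B, _ = app_seq.shape
        h_a = app_seq.new_zeros(B, self.app_cell.hidden_size)
        c_a, h_m, c_m = h_a.clone(), h_a.clone(), h_a.clone()
        for t in range(T):
            mixed = 0.5 * (c_a + c_m)  # exchange: each stream sees both previous cell states
            h_a, c_a = self.app_cell(app_seq[t], (h_a, mixed))
            h_m, c_m = self.mot_cell(mot_seq[t], (h_m, mixed))
        return self.fc(torch.cat([h_a, h_m], dim=1))

# Example: 16 frames, batch of 4, 512-d features per stream, 51 classes (HMDB51)
model = TwoStreamSELSTM()
logits = model(torch.randn(16, 4, 512), torch.randn(16, 4, 512))
```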


1993 ◽  
Vol 5 (4) ◽  
pp. 648-664 ◽  
Author(s):  
John G. Elias

The electronic architecture and dynamic signal processing capabilities of an artificial dendritic tree that can be used to process and classify dynamic signals are described. The electrical circuit architecture is modeled after neurons that have spatially extensive dendritic trees. The artificial dendritic tree is a hybrid VLSI circuit and is sensitive to both temporal and spatial signal characteristics. It does not use the conventional neural network concept of weights, and as such it does not use multipliers, adders, look-up tables, microprocessors, or other complex computational units to process signals. The weights of conventional neural networks, which take the form of numerical, resistive, voltage, or current values but have no spatial or temporal content, are replaced with connections whose spatial locations carry both temporal and scaling significance.
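A rough software analogue of that location-instead-of-weight idea, under the simplifying assumption that a synapse's distance from the soma maps to a propagation delay and an exponential attenuation; the constants and the single-exponential response are illustrative, not the circuit's actual behaviour.

```python
# Conceptual sketch only: a connection's distance from the soma sets both
# its arrival delay and its attenuation, so location plays the role that
# numeric weights play in conventional networks. All constants are
# illustrative assumptions.
import numpy as np

def soma_potential(t, spike_times, positions,
                   length_constant=8.0, velocity=1000.0, tau=5e-3):
    """t: array of times (s); spike_times: input spike times (s);
    positions: each input's distance from the soma (arbitrary units)."""
    v = np.zeros_like(t)
    for ts, d in zip(spike_times, positions):
        arrival = ts + d / velocity              # farther out -> later arrival
        gain = np.exp(-d / length_constant)      # farther out -> smaller response
        dt_after = np.clip(t - arrival, 0.0, None)
        v += gain * np.exp(-dt_after / tau) * (t >= arrival)
    return v

t = np.arange(0.0, 0.05, 1e-4)
trace = soma_potential(t, spike_times=[0.005, 0.010], positions=[2.0, 12.0])
```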


Steganography is an expanding field in the area of data security that has attracted a vast number of researchers. The most common existing technique in steganography is Least Significant Bit (LSB) encoding. Recently, many new approaches have employed techniques such as deep learning to address the problems of steganography. Most existing algorithms are based on data-to-image or image-to-image steganography. In this paper, we hide secret audio in a digital image with the help of deep learning techniques. We use a joint deep neural network consisting of two sub-models: the first model is responsible for hiding the digital audio in a digital image, and the second model is responsible for recovering the digital audio from the stego image. Extensive experiments are conducted on a set of 24K images and on images of various sizes. The experiments show that the proposed method performs more effectively than existing methods. The proposed method also preserves the integrity of the digital image and audio files.
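A hedged sketch of that two sub-model structure: a hiding network embeds an audio waveform into a cover image, and a reveal network recovers it from the stego image, trained jointly so the stego image stays close to the cover and the recovered audio stays close to the secret. The layer sizes, the flat audio-to-plane encoding, and the equal loss weighting are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch of the two sub-models: a hiding network embeds audio in a
# cover image, a reveal network recovers it; layer sizes, the flat
# audio-to-plane encoding, and equal loss weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HideNet(nn.Module):
    def __init__(self, audio_len=16384, side=128):
        super().__init__()
        self.side = side
        self.audio_fc = nn.Linear(audio_len, side * side)  # audio -> image-sized plane
        self.mix = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),   # stego RGB image
        )

    def forward(self, cover, audio):
        plane = self.audio_fc(audio).view(-1, 1, self.side, self.side)
        return self.mix(torch.cat([cover, plane], dim=1))

class RevealNet(nn.Module):
    def __init__(self, audio_len=16384, side=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
        self.fc = nn.Linear(side * side, audio_len)

    def forward(self, stego):
        return self.fc(self.conv(stego).flatten(1))

# Joint objective: stego image stays close to the cover, recovered audio to the secret
hide, reveal = HideNet(), RevealNet()
cover, audio = torch.rand(2, 3, 128, 128), torch.rand(2, 16384) * 2 - 1
stego = hide(cover, audio)
loss = F.mse_loss(stego, cover) + F.mse_loss(reveal(stego), audio)
```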


2003 ◽  
Vol 1 (1) ◽  
pp. 10-28
Author(s):  
Benedito D. Baptista Filho ◽  
Eduardo L. L. Cabral
