Movie Genre Classification from RGB Movie Poster Image Using Deep Feed-forward Network

Author(s):  
Mushtaque Ahmed Korai ◽  
Altaf Hussain Bouk ◽  
Abdul Hameed Sindhi
2015 ◽  
Vol 793 ◽  
pp. 483-488
Author(s):  
N. Aminudin ◽  
Marayati Marsadek ◽  
N.M. Ramli ◽  
T.K.A. Rahman ◽  
N.M.M. Razali ◽  
...  

The computation of a security risk index to identify the system's condition is one of the major concerns in power system analysis. The traditional method of this assessment is highly time-consuming and infeasible for direct on-line implementation. Thus, this paper presents the application of a Multi-Layer Feed-Forward Network (MLFFN) to predict the voltage collapse risk index due to line outage occurrence. The proposed ANN model considers the load at the load buses as well as the weather conditions on the transmission lines as inputs. To demonstrate the effectiveness of the proposed method, the results are compared with the Generalized Regression Neural Network (GRNN) method. The results reveal that the MLFFN method shows a significant improvement over GRNN, producing the lowest prediction error.
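As a rough illustration of the comparison described above, the following Python sketch trains a small multi-layer feed-forward regressor against a Generalized Regression Neural Network (implemented directly as Gaussian-kernel, i.e. Nadaraya-Watson, regression) on synthetic stand-in data; the bus-load and weather features, their dimensions, and the target definition are all assumptions for illustration, not taken from the paper.

```python
# Minimal sketch: MLFFN vs. GRNN on a synthetic voltage-collapse-risk target.
# All data here is invented; only the modelling comparison mirrors the abstract.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Hypothetical inputs: load at each load bus plus a weather index per line.
n_samples, n_bus_loads, n_weather = 500, 6, 4
X = rng.uniform(0.0, 1.0, size=(n_samples, n_bus_loads + n_weather))
# Hypothetical target: voltage collapse risk index squashed into [0, 1].
y = 1.0 / (1.0 + np.exp(-(X[:, :n_bus_loads].sum(axis=1) - 3.0
                          + 0.5 * X[:, n_bus_loads:].sum(axis=1))))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Multi-Layer Feed-Forward Network: one hidden layer, sigmoid activations.
mlffn = MLPRegressor(hidden_layer_sizes=(20,), activation="logistic",
                     max_iter=5000, random_state=0).fit(X_tr, y_tr)

def grnn_predict(X_train, y_train, X_query, sigma=0.2):
    """GRNN = Gaussian-kernel weighted average of the training targets."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

mse_mlffn = mean_squared_error(y_te, mlffn.predict(X_te))
mse_grnn = mean_squared_error(y_te, grnn_predict(X_tr, y_tr, X_te))
print(f"MLFFN MSE: {mse_mlffn:.4f}  GRNN MSE: {mse_grnn:.4f}")
```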


2015 ◽  
Vol 2015 ◽  
pp. 1-12 ◽  
Author(s):  
Yasir Hassan Ali ◽  
Roslan Abd Rahman ◽  
Raja Ishak Raja Hamzah

The thickness of an oil film lubricant can contribute to less gear tooth wear and surface failure. The purpose of this research is to use artificial neural network (ANN) computational modelling to correlate spur gear data from acoustic emissions, lubricant temperature, and specific film thickness (λ). The approach uses an algorithm to monitor the oil film thickness and to detect which lubrication regime the gearbox is running in: hydrodynamic, elastohydrodynamic, or boundary. This monitoring can aid the identification of fault development. Feed-forward and recurrent Elman neural network algorithms were used to develop ANN models, which were subjected to a training, testing, and validation process. The Levenberg-Marquardt back-propagation algorithm was applied to reduce errors. Log-sigmoid and purelin were identified as suitable transfer functions for the hidden and output nodes, respectively. The methods used in this paper yield accurate predictions from the ANN, and the feed-forward network's performance is superior to that of the Elman neural network.
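The sketch below is a loose Python approximation of the feed-forward branch of this modelling pipeline, using scikit-learn's MLPRegressor with a log-sigmoid hidden layer and linear output; the L-BFGS solver stands in for Levenberg-Marquardt (which scikit-learn does not provide), the Elman comparison is omitted, and the acoustic-emission/temperature data and the regime thresholds are invented for illustration only.

```python
# Minimal sketch: predict specific film thickness (lambda) from acoustic
# emission RMS and lubricant temperature, then map lambda to a regime.
# All numbers below are synthetic assumptions, not the paper's data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
n = 400
ae_rms = rng.uniform(0.1, 2.0, n)        # acoustic emission RMS (V), assumed
oil_temp = rng.uniform(40.0, 90.0, n)    # lubricant temperature (deg C), assumed
# Hypothetical relationship: film thickness falls with temperature and AE level.
lam = 3.0 - 0.02 * (oil_temp - 40.0) - 0.5 * ae_rms + rng.normal(0, 0.05, n)

X = np.column_stack([ae_rms, oil_temp])
X_tr, X_te, y_tr, y_te = train_test_split(X, lam, random_state=1)

# Log-sigmoid hidden layer, linear (purelin-like) output; L-BFGS instead of
# Levenberg-Marquardt back-propagation.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), activation="logistic",
                 solver="lbfgs", max_iter=5000, random_state=1),
)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"MAE of predicted film thickness: {mean_absolute_error(y_te, pred):.3f}")

# Rule-of-thumb regime thresholds (assumed here, not taken from the paper).
regime = ["hydrodynamic" if v >= 3.0 else
          "elastohydrodynamic" if v >= 1.0 else "boundary" for v in pred]
print("predicted regimes (first 5):", regime[:5])
```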


Author(s):  
Tanujit Chakraborty

Decision tree algorithms have been among the most popular algorithms for interpretable (transparent) machine learning since the early 1980s. On the other hand, deep learning methods have boosted the capacity of machine learning algorithms and are now being used for non-trivial applications in various applied domains. But training a fully connected deep feed-forward network by gradient-descent backpropagation is slow and requires arbitrary choices regarding the number of hidden units and layers. In this paper, we propose near-optimal neural regression trees, intended to be much faster to train than deep feed-forward networks and to remove the need to specify the number of hidden units in the hidden layers of the neural network in advance. The key idea is to construct a decision tree and then simulate the decision tree with a neural network. This work aims to build a mathematical formulation of neural trees and to gain the complementary benefits of both sparse optimal decision trees and neural trees. We propose near-optimal sparse neural trees (NSNT), which are shown to be asymptotically consistent and robust in nature. Additionally, the proposed NSNT model obtains a fast rate of convergence, which is near-optimal up to a logarithmic factor. We comprehensively benchmark the proposed method on a sample of 80 datasets (40 classification datasets and 40 regression datasets) from the UCI machine learning repository. We establish that the proposed method is likely to outperform the current state-of-the-art methods (random forest, XGBoost, optimal classification tree, and near-optimal nonlinear trees) for the majority of the datasets.
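The key idea, building a decision tree and then simulating it with a feed-forward network, can be illustrated with a classical tree-to-network mapping. The sketch below is not the authors' NSNT construction; it is only a toy demonstration on the iris data, in which the first hidden layer encodes the split tests and the second encodes the root-to-leaf paths.

```python
# Minimal sketch: fit a decision tree, then build a two-hidden-layer network
# that reproduces its predictions (a generic tree-to-network mapping, not NSNT).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
t = tree.tree_

inner = [i for i in range(t.node_count) if t.children_left[i] != -1]
leaves = [i for i in range(t.node_count) if t.children_left[i] == -1]
node_to_split = {n: k for k, n in enumerate(inner)}

# Layer 1: one unit per internal node, firing +1 if x[feature] > threshold.
W1 = np.zeros((len(inner), X.shape[1]))
b1 = np.zeros(len(inner))
for n, k in node_to_split.items():
    W1[k, t.feature[n]] = 1.0
    b1[k] = -t.threshold[n]

# Layer 2: one unit per leaf, an AND over the split decisions on its path.
paths = {}  # leaf node id -> list of (split index, required sign)
def collect(node, path):
    if t.children_left[node] == -1:
        paths[node] = path
        return
    k = node_to_split[node]
    collect(t.children_left[node], path + [(k, -1.0)])   # left: x <= threshold
    collect(t.children_right[node], path + [(k, +1.0)])  # right: x > threshold
collect(0, [])

W2 = np.zeros((len(leaves), len(inner)))
b2 = np.zeros(len(leaves))
for j, leaf in enumerate(leaves):
    for k, sign in paths[leaf]:
        W2[j, k] = sign
    b2[j] = -(len(paths[leaf]) - 0.5)   # fires only if every condition matches

# Output layer: each leaf unit votes for its majority class.
W3 = np.zeros((t.n_classes[0], len(leaves)))
for j, leaf in enumerate(leaves):
    W3[np.argmax(t.value[leaf]), j] = 1.0

def step(z):  # hard-threshold activation with outputs in {-1, +1}
    return np.where(z > 0, 1.0, -1.0)

def net_predict(X):
    h1 = step(X @ W1.T + b1)
    h2 = np.where(h1 @ W2.T + b2 > 0, 1.0, 0.0)
    return tree.classes_[np.argmax(h2 @ W3.T, axis=1)]

print("tree/network agreement:", np.mean(net_predict(X) == tree.predict(X)))
```

In this construction the network reproduces the tree's decisions exactly, and the resulting weights can serve as an initialisation for gradient-based fine-tuning once the hard thresholds are relaxed to smooth activations.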

