Neural Network Models for the Estimation of Product Costs

Author(s):  
Sergio Cavalieri ◽  
Paolo Maccarrone ◽  
Roberto Pinto

The estimation of the production cost per unit of a product during its design phase can be extremely difficult, especially if information about previous similar products is missing. On the other hand, most of the costs that will be incurred during production are implicitly determined mainly in the design phase, depending on the choices made about the characteristics and performance of the new product. Hence, the earlier information about costs becomes available, the better the trade-off between cost and product performance can be managed. These considerations have led to the development of different design rules and techniques, such as Design to Cost, which

2018 ◽  
Vol 13 (1) ◽  
pp. 11-17 ◽  
Author(s):  
M. Mokarram ◽  
M. Najafi-Ghiri ◽  
A.R. Zarei

Soil fertility refers to the ability of a soil to supply plant nutrients. Naturally, micro- and macro-elements are made available to plants by the breakdown of mineral and organic materials in the soil. Artificial neural networks (ANNs) provide a deeper understanding of human cognitive capabilities. Among the various ANN methods and learning algorithms, the self-organizing map (SOM) is one of the most popular neural network models. The aim of this study was to classify the factors influencing soil fertility in the Shiraz plain, southern Iran. The relationships among soil features were studied using the SOM, in which the clustering tendency of soil fertility was investigated from qualitative data using seven parameters (N, P, K, Fe, Zn, Mn, and Cu). The results showed a close relationship between P and N, and also between P and Zn, with respect to soil fertility. The other parameters, such as K, Fe, Mn, and Cu, are not mutually related. The results also showed that there are six clusters for soil fertility and that group 1 soils are more fertile than the others.
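The abstract does not include code, but the kind of SOM-based clustering it describes can be sketched with the open-source MiniSom library. Everything below (the placeholder data array, the 10x10 map size, and the training settings) is an illustrative assumption rather than the authors' setup.

```python
import numpy as np
from minisom import MiniSom

# Placeholder for real measurements: rows = soil samples,
# columns = the seven fertility parameters (N, P, K, Fe, Zn, Mn, Cu)
soil_data = np.random.rand(200, 7)

# Standardize each parameter so no single nutrient dominates the map
soil_data = (soil_data - soil_data.mean(axis=0)) / soil_data.std(axis=0)

# 10x10 map; sigma and learning rate are arbitrary illustrative choices
som = MiniSom(10, 10, input_len=7, sigma=1.5, learning_rate=0.5, random_seed=42)
som.random_weights_init(soil_data)
som.train_random(soil_data, num_iteration=5000)

# Assign each sample to its best-matching unit; clusters emerge as groups
# of neighbouring units with similar codebook vectors
bmus = [som.winner(x) for x in soil_data]
print(bmus[:10])
```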


2011 ◽  
Vol 23 (4) ◽  
pp. 1047-1069 ◽  
Author(s):  
Fabiano Ribeiro ◽  
Manfred Opper

We discuss the expectation propagation (EP) algorithm for approximate Bayesian inference using a factorizing posterior approximation. For neural network models, we use a central limit theorem argument to make EP tractable when the number of parameters is large. For two types of models, we show that EP can achieve optimal generalization performance when data are drawn from a simple distribution.
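As a rough illustration of the EP cycle referred to above (compute a cavity distribution, moment-match the tilted distribution, update the site), here is a toy one-dimensional sketch with a factorizing Gaussian approximation. It is not the paper's neural-network construction, which relies on a central limit theorem for large parameter counts; the Student-t likelihood, the quadrature grid, and all numbers are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.standard_t(df=3, size=20) + 1.0   # observations with heavy-tailed noise
v0 = 10.0                                 # prior variance of the scalar parameter

grid = np.linspace(-10, 10, 4001)         # quadrature grid for tilted moments
dx = grid[1] - grid[0]
tau_s = np.zeros_like(y)                  # site precisions (natural parameters)
nu_s = np.zeros_like(y)                   # site precision-times-mean parameters

for sweep in range(10):
    tau, nu = 1.0 / v0 + tau_s.sum(), nu_s.sum()   # global Gaussian approximation
    for i in range(len(y)):
        # 1. Cavity: remove site i from the global approximation
        tau_c, nu_c = tau - tau_s[i], nu - nu_s[i]
        m_c, v_c = nu_c / tau_c, 1.0 / tau_c
        # 2. Tilted distribution: cavity Gaussian times the exact Student-t factor
        tilted = stats.norm.pdf(grid, m_c, np.sqrt(v_c)) * stats.t.pdf(y[i] - grid, df=3)
        Z = tilted.sum() * dx
        m_new = (grid * tilted).sum() * dx / Z
        v_new = ((grid - m_new) ** 2 * tilted).sum() * dx / Z
        # 3. Moment matching: set site i so the new global matches the tilted moments
        tau_s[i], nu_s[i] = 1.0 / v_new - tau_c, m_new / v_new - nu_c
        tau, nu = 1.0 / v0 + tau_s.sum(), nu_s.sum()

print("EP posterior mean:", nu / tau, "variance:", 1.0 / tau)
```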


2018 ◽  
Vol 7 (3.15) ◽  
pp. 141 ◽  
Author(s):  
Nurbaity Sabri ◽  
Zalilah Abdul Aziz ◽  
Zaidah Ibrahim ◽  
Muhammad Akmal Rasydan Bin Mohd Rosni ◽  
Abdul Hafiz bin Abd Ghapul

This research compares the recognition performance of the pre-trained models GoogLeNet and AlexNet with a basic Convolutional Neural Network (CNN) for leaf recognition. Lately, CNNs have gained a lot of interest in image processing applications. Numerous pre-trained models have been introduced, and the most popular are GoogLeNet and AlexNet. Each model has its own convolutional layers and computational complexity. These classification models have achieved great success in computer vision, and this research investigates their performance for leaf recognition using MalayaKew (MK), an open-access leaf dataset. GoogLeNet achieves a perfect 100% accuracy, outperforming both AlexNet and the basic CNN. On the other hand, the processing time for GoogLeNet is longer than for the other models due to the high number of layers in its architecture.
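For readers who want to reproduce this kind of comparison, here is a hedged transfer-learning sketch in PyTorch/torchvision, not the authors' code. The MalayaKew directory layout (one sub-folder per species), the class count, and the training settings are assumptions to adjust for the actual dataset; swapping models.googlenet for models.alexnet (and replacing model.classifier[6] instead of model.fc) gives the AlexNet variant.

```python
import torch
from torch import nn
from torchvision import datasets, models, transforms

num_classes = 44                                  # assumed number of MK leaf classes
tfm = transforms.Compose([
    transforms.Resize((224, 224)),                # GoogLeNet expects 224x224 RGB input
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("MalayaKew/train", transform=tfm)   # assumed layout
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, num_classes)   # replace the ImageNet head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
model.train()
for images, labels in loader:                     # one epoch shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```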


2021 ◽  
Author(s):  
Dipanwita Sinha Mukherjee ◽  
Naveen Yeri

Weight initialization is important for the fast convergence and performance of artificial neural network models. This study proposes a heuristic method to initialize the weights of a neural network with the Fibonacci sequence. Experiments have been carried out with different network structures and datasets, and the results have been compared with other initialization techniques such as Zero, Random, Xavier and He. It has been observed that, for small-sized datasets, the Fibonacci initialization technique reaches 94% test accuracy, which is better than Random (85%) and close to the Xavier (93%) and He (96%) initialization methods. For a medium-sized dataset, the performance of the Fibonacci weight initialization method is comparable with that of the Random, Xavier and He techniques.
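The abstract does not describe exactly how the Fibonacci sequence is mapped onto the weight tensors, so the sketch below is only one plausible, hypothetical reading: tile each layer with the first few Fibonacci numbers and rescale them to a small symmetric range with a Xavier-style fan-in factor. Treat it as a reconstruction for illustration, not the authors' method.

```python
import numpy as np

def fibonacci_init(shape, n_terms=16):
    """Fill a weight tensor by tiling the first n_terms Fibonacci numbers (a guess
    at the scheme), normalised and shrunk with a Xavier-style fan-in factor."""
    fib = [1.0, 1.0]
    while len(fib) < n_terms:
        fib.append(fib[-1] + fib[-2])
    fib = np.array(fib) / fib[-1]             # normalise the sequence into (0, 1]
    n = int(np.prod(shape))
    tiled = np.resize(fib, n)                 # repeat the sequence to fill the tensor
    fan_in = shape[0] if len(shape) > 1 else n
    scale = np.sqrt(1.0 / fan_in)             # Xavier-style shrink; an assumption
    return ((tiled - 0.5) * 2.0 * scale).reshape(shape)

W = fibonacci_init((64, 32))    # e.g. weights of a dense layer with 64 inputs, 32 units
print(W.shape, float(W.min()), float(W.max()))
```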


Electronics ◽  
2021 ◽  
Vol 10 (20) ◽  
pp. 2508
Author(s):  
Muhammad Zubair Rehman ◽  
Nazri Mohd. Nawi ◽  
Mohammad Arshad ◽  
Abdullah Khan

Pashto is one of the most ancient and historical languages in the world and is spoken in Pakistan and Afghanistan. Languages such as Urdu, English, Chinese, and Japanese have OCR applications, but very little work has been done on the Pashto language in this respect. Recognizing handwritten characters and digits is harder for OCR applications because handwriting is influenced by the writer's hand dynamics. Moreover, no publicly available dataset of handwritten Pashto digits existed before this study, and consequently no work had been performed on the combined recognition of Pashto handwritten digits and characters. To address this, a dataset of Pashto handwritten digits consisting of 60,000 images was created. Three deep learning models, namely a CNN, LeNet, and a Deep CNN, were trained and tested with both the Pashto handwritten character and digit datasets. In the simulations, the Deep CNN achieved 99.42 percent accuracy for Pashto handwritten digits, 99.17 percent for handwritten characters, and 70.65 percent for combined digits and characters. The LeNet and CNN models achieved slightly lower accuracies (LeNet: 98.82, 99.15, and 69.82 percent; CNN: 98.30, 98.74, and 66.53 percent) for the Pashto handwritten digit, character, and combined recognition datasets, respectively. Based on these results, the Deep CNN is the best of the three models in terms of accuracy and loss.
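As a concrete reference point, here is a minimal LeNet-style classifier in PyTorch. It is a generic sketch, not the exact architectures evaluated in the paper; the 32x32 grey-scale input size and the 10-class digit setting are assumptions, and the combined digits-plus-characters task would only change num_classes.

```python
import torch
from torch import nn

class LeNet(nn.Module):
    """Classic LeNet-5-style network for small grey-scale character images."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),   # 32 -> 28 -> 14
            nn.Conv2d(6, 16, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),  # 14 -> 10 -> 5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
            nn.Linear(120, 84), nn.Tanh(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = LeNet(num_classes=10)
logits = model(torch.randn(8, 1, 32, 32))   # a batch of 8 dummy 32x32 images
print(logits.shape)                         # torch.Size([8, 10])
```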


1996 ◽  
Vol 07 (02) ◽  
pp. 203-212 ◽  
Author(s):  
M. ZAKI ◽  
A. GHALWASH ◽  
A.A. ELKOUNY

The main emphasis of this paper is an approach that combines supervised and unsupervised neural network models for speaker recognition. To enhance the overall operation and recognition performance, the proposed strategy integrates the two techniques into one global model, called the cascaded model. We first present a simple conventional technique based on the distance measured between a test vector and a reference vector for each speaker in the population. This particular distance metric has the property of weighting down the components in those directions along which the intraspeaker variance is large. The reason for presenting this method is to clarify the discrepancy in performance between the conventional and neural network approaches. We then introduce the idea of using an unsupervised learning technique, represented by the winner-take-all model, as a means of recognition. Based on several tests, and to enhance the performance of this model when dealing with noisy patterns, we precede it with a supervised learning model, the pattern association model, which acts as a filtration stage. This work includes the design and implementation of both the conventional and neural network approaches to recognize the speakers' templates, which are introduced to the system via a voice master card and preprocessed before extracting the features used in recognition. The conclusion indicates that the neural network system performs better than the conventional one, degrading smoothly on noisy patterns and achieving higher performance on noise-free patterns.
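The conventional baseline described above (a distance that down-weights feature components with large intraspeaker variance) can be sketched as a diagonal, variance-weighted distance. The feature dimensionality, the synthetic enrollment data, and the helper names below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def enroll(frames):
    """frames: (n_frames, n_features) training vectors for one speaker.
    Returns the reference vector and the per-component intraspeaker variance."""
    return frames.mean(axis=0), frames.var(axis=0) + 1e-8

def weighted_distance(test_vec, reference, intra_var):
    # Components with large intraspeaker variance contribute less to the distance
    return np.sqrt(np.sum((test_vec - reference) ** 2 / intra_var))

rng = np.random.default_rng(1)
# Synthetic enrollment data: 3 speakers, 50 frames each, 12 features per frame
speakers = {f"spk{i}": rng.normal(loc=i, scale=1.0, size=(50, 12)) for i in range(3)}
references = {name: enroll(frames) for name, frames in speakers.items()}

test = rng.normal(loc=1, scale=1.0, size=12)      # a test vector, closest to spk1
scores = {name: weighted_distance(test, m, v) for name, (m, v) in references.items()}
print(min(scores, key=scores.get))                # recognized speaker = smallest distance
```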


2020 ◽  
Vol 5 ◽  
pp. 140-147 ◽  
Author(s):  
T.N. Aleksandrova ◽  
E.K. Ushakov ◽  
A.V. Orlova ◽  
...  

A series of neural network models used in the development of an aggregated digital twin of equipment as a cyber-physical system is presented. The twins of machining accuracy, chip formation and tool wear are examined in detail. On their basis, systems for stabilizing the chip formation process during cutting and for diagnosing cutting tool wear are developed.
Keywords: cyber-physical system; neural network model of equipment; big data; digital twin of chip formation; digital twin of tool wear; digital twin of nanostructured coating choice

