Socio-Economic Impact Research of the Foundry Industry Using a Neural Network

Global casting production reached 104.4 million metric tons in 2016, with the top ten producing nations accounting for 91.6 million metric tons of this total. Nearly 47.2 million metric tons of castings were produced by China. Growth in casting production ranged from 5.4% to 11.35%. The USA, Japan, Germany, Russia, Korea, Mexico, Brazil and Italy are among the top ten nations. Almost 6,500 foundry units operate in the country, of which about 90% can be categorized as small-scale units, 8% as medium-scale units and 2% as large-scale units. The foundry industry involves several critical social, economic and environmental aspects that need to be assessed. The complex relationships between the different socio-economic parameters of the foundry industry can be modeled using a neural network and a regression model, and the results obtained with the neural network models are compared with those of the regression model. Such a study can also show whether running programs of this kind leads to substantial improvements in the socio-economic circumstances of the targeted industry and makes it a sustainable industry.
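The comparison the abstract describes, a neural network against a regression model on socio-economic indicators, can be illustrated with a minimal pure-Python sketch. The data, network size and learning rate below are invented for illustration and are not taken from the study:

```python
import math
import random

random.seed(0)

# Invented toy data: a nonlinear link between one input index and an outcome.
xs = [i / 20.0 for i in range(40)]
ys = [math.sin(2.5 * x) + 0.5 * x for x in xs]

def mse(pred, target):
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(target)

# Linear regression, closed form for a single predictor.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b0 = my - b1 * mx
lin_pred = [b0 + b1 * x for x in xs]

# One-hidden-layer tanh network trained by stochastic gradient descent.
H = 6
w1 = [random.uniform(-1, 1) for _ in range(H)]
c1 = [random.uniform(-1, 1) for _ in range(H)]
w2 = [random.uniform(-1, 1) for _ in range(H)]
c2 = 0.0

def forward(x):
    h = [math.tanh(w1[j] * x + c1[j]) for j in range(H)]
    return sum(w2[j] * h[j] for j in range(H)) + c2, h

lr = 0.05
for _ in range(3000):
    for x, y in zip(xs, ys):
        out, h = forward(x)
        err = out - y                       # gradient of 0.5 * err**2
        for j in range(H):
            grad_h = err * w2[j] * (1.0 - h[j] ** 2)
            w2[j] -= lr * err * h[j]
            w1[j] -= lr * grad_h * x
            c1[j] -= lr * grad_h
        c2 -= lr * err

nn_pred = [forward(x)[0] for x in xs]
```

On such a nonlinear relationship, the network's mean squared error ends up below that of the straight-line fit, which is the kind of comparison the abstract refers to.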

2015 ◽  
Vol 4 (1) ◽  
Author(s):  
R. Vinayagasundaram ◽  
V. Kannan

It is essential that foundries produce cast components of high quality, reliably and consistently, and at the lowest cost. Hence foundry owners have to introduce Lean manufacturing to improve productivity. This study brought out how technically qualified entrepreneurs of selected foundries carried out technological innovations, mainly through their own motivation and effort. Introducing Lean manufacturing in the process and changing product designs, as desired or directed by customers, resulted in cost reduction and in quality and productivity improvement. These changes enabled the selected foundries to enhance their competitiveness, grow in the domestic market, penetrate the international market and grow in size over time. They achieved technological innovations successfully on the strength of their technological capability and customer needs, enabling them to sail through the competitive environment. There are about 35,000 foundries in the world, with an annual production of 90 million metric tons, providing employment to about 20 lakh (2 million) people. The Indian foundry industry is acknowledged as the world's second largest producer of castings (7.4 million tonnes per annum, MTPA) by tonnage during 2009, next to China (35.2 MTPA). There is a large gap between India and other nations, and the foundry industry has not been able to keep up with local and international demand or to catch up in terms of absolute production quantity and quality, and hence market share. Having reached this stage, what is needed is to utilize this potential for growth. The Indian foundry industry occupies a special place in shaping the country's economy. India has around 5,000 foundries, producing about 7.4 MT of castings worth Rs 20,500 crores, and ranks second in casting production, next to China. These units are mostly located in clusters, with numbers varying from 100 to 500 per cluster.
Some of the notable clusters are Agra, Howrah, Coimbatore, Kolhapur, Rajkot and Belgaum. Coimbatore foundries have more export opportunities to tap with growth in the end-user segment. The Coimbatore foundry cluster has about 620 units, most of them small-scale, producing 40,000 to 45,000 tonnes of castings a month. The product line of the Coimbatore cluster mainly caters to motor pumps and machinery, and is slowly emerging to cater to the valves and auto-components sectors from South India. In the last five years, the output of the Coimbatore foundries has grown at 15 to 20 percent, and it is estimated that Coimbatore contributes nearly 15 percent of the total casting production in the country. Total monthly casting production has gone up from about 25,000 tonnes in 2007 to 60,000 tonnes in 2010. Almost 20% of the total production goes to exports (direct and indirect), mostly to European countries.


Electronics ◽  
2021 ◽  
Vol 10 (22) ◽  
pp. 2868
Author(s):  
Wenxuan Zhao ◽  
Yaqin Zhao ◽  
Liqi Feng ◽  
Jiaxi Tang

The purpose of image dehazing is to reduce the image degradation caused by suspended particles, in support of high-level visual tasks. Besides the atmospheric scattering model, convolutional neural networks (CNNs) have been used for image dehazing. However, existing image dehazing algorithms are limited when faced with unevenly distributed haze and dense haze in real-world scenes. In this paper, we propose a novel end-to-end convolutional neural network called the attention-enhanced serial Unet++ dehazing network (AESUnet) for single image dehazing. We build a serial Unet++ structure that adopts a serial strategy of two pruned Unet++ blocks based on residual connection. Compared with the simple Encoder–Decoder structure, the serial Unet++ module makes better use of the features extracted by the encoders and promotes contextual information fusion at different resolutions. In addition, we apply several improvements to the Unet++ module, such as pruning, introducing a convolutional module with the ResNet structure, and a residual learning strategy. Thus, the serial Unet++ module can generate more realistic images with less color distortion. Furthermore, following the serial Unet++ blocks, an attention mechanism is introduced to pay different attention to haze regions of different concentrations by learning weights in the spatial and channel domains. Experiments are conducted on two representative datasets: the large-scale synthetic dataset RESIDE and the small-scale real-world datasets I-HAZY and O-HAZY. The experimental results show that the proposed dehazing network is not only comparable to state-of-the-art methods on the RESIDE synthetic dataset, but also surpasses them by a very large margin on the I-HAZY and O-HAZY real-world datasets.
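As a rough illustration of the channel- and spatial-domain gating the abstract describes, the following pure-Python toy applies sigmoid gates to a tiny invented feature map. This is not the authors' AESUnet (where the gates are learned); it only shows the gating arithmetic:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Tiny invented feature map: channels x height x width.
fmap = [
    [[0.2, 0.8], [0.4, 0.6]],   # channel 0
    [[0.9, 0.1], [0.7, 0.3]],   # channel 1
]

def channel_attention(fm):
    # Squeeze: global average pool per channel; excite: one sigmoid gate each.
    h, w = len(fm[0]), len(fm[0][0])
    gates = [sigmoid(sum(sum(row) for row in ch) / (h * w)) for ch in fm]
    return [[[v * g for v in row] for row in ch] for ch, g in zip(fm, gates)]

def spatial_attention(fm):
    # Average across channels at each location, then gate every channel there.
    h, w = len(fm[0]), len(fm[0][0])
    gate = [[sigmoid(sum(ch[i][j] for ch in fm) / len(fm)) for j in range(w)]
            for i in range(h)]
    return [[[ch[i][j] * gate[i][j] for j in range(w)] for i in range(h)]
            for ch in fm]

out = spatial_attention(channel_attention(fmap))
```

In a trained network, the gate values would come from learned weights, so regions with denser haze can receive stronger correction than lightly hazed regions.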


2019 ◽  
Vol 10 (15) ◽  
pp. 4129-4140 ◽  
Author(s):  
Kyle Mills ◽  
Kevin Ryczko ◽  
Iryna Luchak ◽  
Adam Domurad ◽  
Chris Beeler ◽  
...  

We present a physically-motivated topology of a deep neural network that can efficiently infer extensive parameters (such as energy, entropy, or number of particles) of arbitrarily large systems, doing so with O(N) scaling.


Entropy ◽  
2020 ◽  
Vol 22 (3) ◽  
pp. 256
Author(s):  
Todd Hylton

A thermodynamically motivated neural network model is described that self-organizes to transport charge associated with internal and external potentials while in contact with a thermal reservoir. The model integrates techniques for rapid, large-scale, reversible, conservative equilibration of node states and slow, small-scale, irreversible, dissipative adaptation of the edge states as a means to create multiscale order. All interactions in the network are local and the network structures can be generic and recurrent. Isolated networks show multiscale dynamics, and externally driven networks evolve to efficiently connect external positive and negative potentials. The model integrates concepts of conservation, potentiation, fluctuation, dissipation, adaptation, equilibration and causation to illustrate the thermodynamic evolution of organization in open systems. A key conclusion of the work is that the transport and dissipation of conserved physical quantities drives the self-organization of open thermodynamic systems.


Processes ◽  
2019 ◽  
Vol 7 (12) ◽  
pp. 893
Author(s):  
Xiaoli Wang ◽  
He Zhang ◽  
Yalin Wang ◽  
Shaoming Yang

Online prediction of key parameters (e.g., process indices) is essential in many industrial processes where online measurement is not available. Data-based modeling is widely used for parameter prediction. However, model mismatch usually occurs owing to variation in the feed properties, which changes the process dynamics. Current neural network online prediction models usually use fixed activation functions, which are not easy to modify dynamically. Therefore, a few methods are proposed here. Firstly, an extreme learning machine (ELM)-based single-layer feedforward neural network with activation-function learning (AFL–SLFN) is proposed. The activation functions of the ELM are adjusted to enhance the ELM network structure and accuracy. Then, a hybrid model with adaptive weights is established by using the AFL–SLFN as a sub-model, which improves the prediction accuracy. To track the process dynamics and maintain the generalization ability of the model, a multiscale model-modification strategy is proposed, in which small-, medium-, and large-scale modifications are performed in accordance with the degree and the causes of the decrease in model accuracy. In the small-scale modification, an improved just-in-time local modeling method is used to update the parameters of the hybrid model. In the medium-scale modification, an improved elementary effect (EE)-based Morris pruning method is proposed for optimizing the sub-model structure. Remodeling is adopted in the large-scale modification. Finally, a simulation using industrial process data for tailings grade prediction in a flotation process reveals that the proposed method performs better than some state-of-the-art methods. The proposed method achieves rapid online training and allows optimization of the model parameters and structure to improve model accuracy.
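The ELM idea underlying the AFL–SLFN, random untrained hidden weights with output weights obtained in a single least-squares step, can be sketched in plain Python. Everything below (the toy target, node count, and ridge term) is an invented illustration, not the authors' model:

```python
import math
import random

random.seed(1)

def solve(A, b):
    # Gaussian elimination with partial pivoting for a small dense system.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Toy regression target (invented): y = x^2 on [0, 1].
xs = [i / 19.0 for i in range(20)]
ys = [x * x for x in xs]

H = 8  # hidden nodes; input weights are random and never trained (ELM idea)
a = [random.uniform(-2, 2) for _ in range(H)]
b = [random.uniform(-1, 1) for _ in range(H)]
Phi = [[math.tanh(a[j] * x + b[j]) for j in range(H)] for x in xs]

# Output weights via the normal equations (least squares), with a small ridge
# term on the diagonal for numerical stability.
G = [[sum(Phi[i][p] * Phi[i][q] for i in range(len(xs)))
      + (1e-6 if p == q else 0.0) for q in range(H)] for p in range(H)]
rhs = [sum(Phi[i][p] * ys[i] for i in range(len(xs))) for p in range(H)]
beta = solve(G, rhs)

pred = [sum(beta[j] * Phi[i][j] for j in range(H)) for i in range(len(xs))]
err = sum((p - y) ** 2 for p, y in zip(pred, ys)) / len(ys)
```

Because only the output weights are fit, and in closed form, training is very fast; this is what makes ELM attractive for the rapid online retraining the abstract mentions.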


Tuberculosis (TB) is an airborne infectious disease that has claimed more lives than any other infectious disease. Chest X-rays (CXRs) are often used to recognize the sites of TB manifestation in the chest. Lately, CXRs have been taken in digital formats, which has had a huge impact on rapid diagnosis using automated systems in the medical field. In our current work, four Convolutional Neural Network (CNN) models, VGG-16, VGG-19, ResNet50, and GoogLeNet, are implemented for the identification of TB-manifested CXRs. Two public TB image datasets were utilized to conduct this research. This study was carried out to explore the limits of the accuracies and AUCs attained by simple, small-scale CNN models compared with complex, large-scale CNN models. The results achieved in this work are compared with the results of two previous studies, and indicate that our proposed VGG-16 model gained the highest overall score compared to the models from those studies.


2014 ◽  
Vol 9 (1) ◽  
pp. 147-156 ◽  
Author(s):  
Mico Apostolov

Purpose – This paper is a case study of the Republic of Macedonia focusing on the development of governance and enterprise restructuring. The country's effectiveness in corporate governance and corporate control, which affect enterprise restructuring, is essential to the analysis of market-driven restructuring through domestic financial institutions and markets. Two basic hypotheses are taken in the analysis: first, that governance and enterprise restructuring depend on a set of policies, such as large-scale privatization, small-scale privatization, price liberalization, competition policy, the trade and foreign-exchange system, banking reform and interest-rate liberalization, securities markets and non-bank financial institutions, and overall infrastructure reform; and second, that governance and enterprise restructuring improve over time owing to the imposed policies. The paper aims to discuss these issues. Design/methodology/approach – The data used in this article are analyzed with an econometric regression model which, as employed in this study, examines the interrelationships between governance and enterprise restructuring and the set of policies that influence governance patterns. Findings – There is still more to be done to bring these economies closer to the standards of developed ones. Considerable improvement of corporate governance is needed, including institution-building to control agency problems, enforcement of already adopted regulation, and new enterprise-restructuring policies within the existing policies of overall transition-economy restructuring.
Originality/value – This paper contributes to research on the business aspects of the Macedonian economy, as there is a constant lack of scientific papers dealing with the specific issues of corporate governance and enterprise restructuring.


2020 ◽  
Vol 12 (8) ◽  
pp. 137
Author(s):  
Bo Jiang ◽  
Yanbai He ◽  
Rui Chen ◽  
Chuanyan Hao ◽  
Sijiang Liu ◽  
...  

Learning data feedback and analysis have been widely investigated in all aspects of education, especially for large-scale remote learning scenarios such as Massive Open Online Courses (MOOCs). On-site teaching and learning still remains the mainstream form for most teachers and students, yet learning data analysis for such small-scale scenarios is rarely studied. In this work, we first develop a novel user interface to progressively collect students' feedback after each class of a course with a WeChat mini program, inspired by the evaluation mechanisms of popular shopping websites. The collected data are then visualized for teachers and pre-processed. We also propose a novel artificial neural network model to conduct progressive study performance prediction. The prediction results are reported to teachers for next-class and further teaching improvement. Experimental results show that the proposed neural network model outperforms other state-of-the-art machine learning methods and reaches a precision of 74.05% on a 3-class classification task at the end of the term.
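The 74.05% figure is a precision value on a 3-class task. As a small illustration of how per-class and macro-averaged precision are computed (with invented labels, not the paper's data):

```python
# Invented predictions for a 3-class task (classes 0, 1, 2).
y_true = [0, 0, 1, 1, 1, 2, 2, 0, 1, 2]
y_pred = [0, 1, 1, 1, 2, 2, 2, 0, 1, 0]

def precision(cls):
    # Of everything predicted as `cls`, what fraction truly is `cls`?
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == cls and t == cls)
    predicted = sum(1 for p in y_pred if p == cls)
    return tp / predicted if predicted else 0.0

per_class = [precision(c) for c in range(3)]
macro = sum(per_class) / len(per_class)
```

Whether a reported precision is per-class, macro-averaged, or micro-averaged changes its interpretation, which is worth keeping in mind when comparing such numbers across studies.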


SLEEP ◽  
2020 ◽  
Author(s):  
Alexander Neergaard Olesen ◽  
Poul Jørgen Jennum ◽  
Emmanuel Mignot ◽  
Helge Bjarup Dissing Sorensen

Abstract Study Objectives Sleep stage scoring is performed manually by sleep experts and is prone to subjective interpretation of scoring rules, with low intra- and inter-scorer reliability. Many automatic systems rely on a few small-scale databases for developing models, and generalizability to new datasets is thus unknown. We investigated a novel deep neural network to assess generalizability across several large-scale cohorts. Methods A deep neural network model was developed using 15,684 polysomnography studies from five different cohorts. We applied four different scenarios: (1) impact of varying timescales in the model; (2) performance of a single cohort on other cohorts of smaller, greater, or equal size relative to the performance of other cohorts on a single cohort; (3) varying the fraction of mixed-cohort training data compared with using single-origin data; and (4) comparing models trained on combinations of data from 2, 3, and 4 cohorts. Results Overall classification accuracy improved with increasing fractions of training data (0.25%: 0.782 ± 0.097, 95% CI [0.777–0.787]; 100%: 0.869 ± 0.064, 95% CI [0.864–0.872]) and with increasing numbers of data sources (2: 0.788 ± 0.102, 95% CI [0.787–0.790]; 3: 0.808 ± 0.092, 95% CI [0.807–0.810]; 4: 0.821 ± 0.085, 95% CI [0.819–0.823]). Different cohorts show varying levels of generalization to other cohorts. Conclusions Automatic sleep stage scoring systems based on deep learning algorithms should consider as much data as possible from as many sources as are available to ensure proper generalization. Public datasets for benchmarking should be made available for future research.
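The accuracy figures above are reported as mean ± SD with a 95% confidence interval. A minimal sketch of that style of summary, over invented per-study accuracies and using a normal-approximation interval rather than the authors' exact procedure:

```python
import math
import random
import statistics

random.seed(2)
# Invented per-record accuracies standing in for one scored cohort.
accs = [random.gauss(0.85, 0.06) for _ in range(500)]

mean = statistics.mean(accs)
sd = statistics.stdev(accs)
# 95% CI half-width under the normal approximation: 1.96 * SD / sqrt(n).
half = 1.96 * sd / math.sqrt(len(accs))
ci = (mean - half, mean + half)
```

Note how the interval tightens with sample size even when the SD stays fixed; this is why the reported CIs are much narrower than the ± SD spreads.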


2018 ◽  
Vol 22 (1) ◽  
pp. 265-286 ◽  
Author(s):  
Jérémy Chardon ◽  
Benoit Hingray ◽  
Anne-Catherine Favre

Abstract. Statistical downscaling models (SDMs) are often used to produce local weather scenarios from large-scale atmospheric information. SDMs include transfer functions which are based on a statistical link, identified from observations, between local weather and a set of large-scale predictors. As the physical processes driving surface weather vary in time, the most relevant predictors and the regression link are likely to vary in time too. This is well known for precipitation, for instance, and the link is thus often estimated after some seasonal stratification of the data. In this study, we present a two-stage analog/regression model where the regression link is estimated from atmospheric analogs of the current prediction day. Atmospheric analogs are identified from fields of geopotential heights at 1000 and 500 hPa. For the regression stage, two generalized linear models are further used to model the probability of precipitation occurrence and the distribution of non-zero precipitation amounts, respectively. The two-stage model is evaluated for the probabilistic prediction of small-scale precipitation over France. It noticeably improves the skill of the prediction for both precipitation occurrence and amount. As the analog days vary from one prediction day to another, the atmospheric predictors selected in the regression stage and the values of the corresponding regression coefficients can vary from one prediction day to another. The model thus allows for day-to-day adaptive and tailored downscaling. It can also reveal specific predictors for peculiar and non-frequent weather configurations.
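The analog stage, selecting past days whose large-scale fields resemble the prediction day and estimating precipitation from them, can be sketched as follows. The archive, the 4-value stand-in for geopotential fields, and the toy wet/dry rule are all invented, and the paper's regression stage uses generalized linear models rather than the simple analog averages used here:

```python
import math
import random

random.seed(3)

# Invented archive: each day has a flattened "geopotential field" (4 values)
# and an observed precipitation amount (0 = dry day).
archive = []
for _ in range(200):
    field = [random.uniform(-1, 1) for _ in range(4)]
    wet = field[0] + 0.5 * field[1] > 0          # toy synoptic link
    amount = random.expovariate(2.0) if wet else 0.0
    archive.append((field, amount))

def analogs(target, k=25):
    # The k nearest archive days by Euclidean distance between fields.
    dist = lambda f: math.sqrt(sum((a - b) ** 2 for a, b in zip(f, target)))
    return sorted(archive, key=lambda day: dist(day[0]))[:k]

def predict(target):
    days = analogs(target)
    wet_amounts = [amt for _, amt in days if amt > 0]
    p_occ = len(wet_amounts) / len(days)         # occurrence probability
    mean_amt = sum(wet_amounts) / len(wet_amounts) if wet_amounts else 0.0
    return p_occ, mean_amt

p, amt = predict([0.9, 0.8, 0.0, 0.0])           # a clearly "wet" configuration
```

Because the analog set changes with every prediction day, any model fitted on it, here a trivial average, inherits the day-to-day adaptivity the abstract describes.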

