Application of Machine Learning to Bending Processes and Material Identification

Metals ◽  
2021 ◽  
Vol 11 (9) ◽  
pp. 1418
Author(s):  
Daniel J. Cruz ◽  
Manuel R. Barbosa ◽  
Abel D. Santos ◽  
Sara S. Miranda ◽  
Rui L. Amaral

The increasing availability of data, a trend that continues to grow across many fields of application, has brought renewed interest in machine learning approaches in recent years. Manufacturing processes, and sheet metal forming in particular, follow this direction, with a view to the efficiency and control of the many parameters involved in processing and material characterization. In this article, two applications are considered to explore the capability of machine learning modeling through shallow artificial neural networks (ANN). The first consists of developing an ANN to identify the constitutive model parameters of a material from the force–displacement curves obtained with a standard bending test. The second concentrates on the springback problem in sheet metal press-brake air bending, with the objective of predicting the punch displacement required to attain a desired bending angle, including additional information on the springback angle. The data required to design the ANN solutions are collected from numerical simulation using the finite element method (FEM), which in turn is validated by experiments.
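To make the first application concrete, the sketch below shows how a shallow ANN of the kind described above could map sampled force–displacement curves to constitutive parameters. It is illustrative only: the random arrays stand in for FEM-generated training data, and the three-parameter Swift-type output is an assumption, not the parameterization used in the article.

```python
# Hypothetical sketch: a shallow ANN that maps resampled force-displacement
# curves (assumed to come from FEM simulations of a standard bending test)
# to constitutive model parameters. Feature layout and parameter names
# (Swift-law K, eps0, n) are illustrative assumptions, not the article's setup.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.random((500, 50))   # placeholder: force values at 50 displacement stations
y = rng.random((500, 3))    # placeholder: [K, eps0, n] used in each FEM run

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A single hidden layer keeps the network "shallow", as in the article.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0),
)
model.fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))
```

With real FEM-generated curves in place of the placeholders, the held-out R^2 would indicate how well the network recovers the material parameters.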

2020 ◽  
Author(s):  
Murad Megjhani ◽  
Kalijah Terilli ◽  
Ayham Alkhachroum ◽  
David J. Roh ◽  
Sachin Agarwal ◽  
...  

Abstract
Objective: To develop a machine learning based tool, using routine vital signs, to assess delayed cerebral ischemia (DCI) risk over time.
Methods: In this retrospective analysis, physiologic data for 540 consecutive acute subarachnoid hemorrhage patients were collected and annotated as part of a prospective observational cohort study between May 2006 and December 2014. Patients were excluded if (i) no physiologic data were available, (ii) they expired prior to the DCI onset window (< post-bleed day 3), or (iii) early angiographic vasospasm was detected on the admitting angiogram. DCI was prospectively labeled by consensus of treating physicians. Occurrence of DCI was classified using various machine learning approaches, including logistic regression, random forest, support vector machine (linear and kernel), and an ensemble classifier, trained on vital-sign and subject-characteristic features. Hourly risk scores were generated as the posterior probability at time t. We performed five-fold nested cross-validation to tune the model parameters and to report the accuracy. All classifiers were evaluated for discrimination using the area under the receiver operating characteristic curve (AU-ROC) and confusion matrices.
Results: Of the 310 patients included in our final analysis, 101 (32.6%) developed DCI. We achieved a maximal classification performance of 0.81 [0.75-0.82] AU-ROC. We also predicted 74.7% of all DCI events 12 hours before typical clinical detection, with a ratio of 3 true alerts for every 2 false alerts.
Conclusion: A data-driven machine learning based detection tool offered hourly assessments of DCI risk and incorporated new physiologic information over time.
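As an illustration of the evaluation scheme named in the Methods, the sketch below runs five-fold nested cross-validation over a few candidate classifiers and reports AU-ROC. The synthetic feature matrix, the candidate models, and the tuning grids are placeholders, not the study's actual vital-sign features or hyperparameters.

```python
# Illustrative nested cross-validation: an inner GridSearchCV tunes each
# candidate classifier, an outer 5-fold split reports unbiased AU-ROC.
# Data are synthetic stand-ins for hourly vital-sign features and DCI labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=310, n_features=12,
                           weights=[0.67, 0.33], random_state=0)

candidates = {
    "logistic": (make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
                 {"logisticregression__C": [0.1, 1, 10]}),
    "svm-rbf": (make_pipeline(StandardScaler(), SVC(probability=True)),
                {"svc__C": [0.1, 1, 10]}),
    "random-forest": (RandomForestClassifier(random_state=0),
                      {"n_estimators": [100, 300]}),
}

outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, (estimator, grid) in candidates.items():
    inner = GridSearchCV(estimator, grid, scoring="roc_auc", cv=5)  # tuning
    scores = cross_val_score(inner, X, y, scoring="roc_auc", cv=outer)  # reporting
    print(f"{name}: AU-ROC {scores.mean():.2f} +/- {scores.std():.2f}")
```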


2021 ◽  
Author(s):  
Andreas Christ Sølvsten Jørgensen ◽  
Atiyo Ghosh ◽  
Marc Sturrock ◽  
Vahid Shahrezaei

Abstract
The modelling of many real-world problems relies on computationally heavy simulations. Since statistical inference rests on repeated simulations to sample the parameter space, the high computational expense of these simulations can become a stumbling block. In this paper, we compare two ways to mitigate this issue based on machine learning methods. One approach is to construct lightweight surrogate models to substitute the simulations used in inference. Alternatively, one might altogether circumvent the need for Bayesian sampling schemes and directly estimate the posterior distribution. We focus on stochastic simulations that track autonomous agents and present two case studies of real-world applications: tumour growth and the spread of infectious diseases. We demonstrate that good accuracy in inference can be achieved with a relatively small number of simulations, making our machine learning approaches orders of magnitude faster than classical simulation-based methods that rely on sampling the parameter space. However, we find that while some methods generally produce more robust results than others, no algorithm offers a one-size-fits-all solution when attempting to infer model parameters from observations. Instead, one must choose the inference technique with the specific real-world application in mind. The stochastic nature of the considered real-world phenomena poses an additional challenge that can become insurmountable for some approaches. Overall, we find machine learning approaches that create direct inference machines to be promising for real-world applications. We present our findings as general guidelines for modelling practitioners.

Author summary
Computer simulations play a vital role in modern science as they are commonly used to compare theory with observations. One can thus infer the properties of an observed system by comparing the data to the predicted behaviour in different scenarios. Each of these scenarios corresponds to a simulation with slightly different settings. However, since real-world problems are highly complex, the simulations often require extensive computational resources, making direct comparisons with data challenging, if not insurmountable. It is, therefore, necessary to resort to inference methods that mitigate this issue, but it is not clear-cut what path to choose for any specific research problem. In this paper, we provide general guidelines for how to make this choice. We do so by studying examples from oncology and epidemiology and by taking advantage of developments in machine learning. More specifically, we focus on simulations that track the behaviour of autonomous agents, such as single cells or individuals. We show that the best way forward is problem-dependent and highlight the methods that yield the most robust results across the different case studies. We demonstrate that these methods are highly promising and produce reliable results in a small fraction of the time required by classic approaches that rely on comparisons between data and individual simulations. Rather than relying on a single inference technique, we recommend employing several methods and selecting the most reliable based on predetermined criteria.
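The "direct inference machine" idea can be illustrated with a toy example: train a regressor on (parameter, summary-statistic) pairs drawn from a cheap stochastic simulator, then apply it directly to observed summaries. The birth-process simulator, summary statistics, and small network below are stand-ins for the agent-based models and architectures studied in the paper, not the authors' implementation.

```python
# Toy amortized inference sketch: a stochastic birth process stands in for an
# expensive agent-based simulation, and a regressor learns the inverse map
# from simulation summaries back to the growth-rate parameter. All choices
# here (simulator, summaries, prior, network size) are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def simulate(growth_rate, n_steps=50, n0=10):
    """Cheap stochastic stand-in for a computationally heavy simulation."""
    pop = [n0]
    for _ in range(n_steps):
        pop.append(pop[-1] + rng.poisson(growth_rate * pop[-1]))
    return np.asarray(pop, dtype=float)

def summarise(trajectory):
    """Summary statistics fed to the inference network."""
    return [trajectory[-1], trajectory.mean(),
            np.log(trajectory[-1] / trajectory[0])]

# Draw parameters from a uniform prior, simulate once each, learn theta | summary.
theta = rng.uniform(0.01, 0.1, size=2000)
X = np.array([summarise(simulate(t)) for t in theta])

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
net.fit(X, theta)

# "Observed" data generated with a hidden true parameter of 0.05.
observed = summarise(simulate(0.05))
print("inferred growth rate:", net.predict([observed])[0])
```

Once trained, the network infers parameters from new observations without any further simulation, which is the source of the speed-up the abstract describes.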


2021 ◽  
Vol 11 (24) ◽  
pp. 11910
Author(s):  
Dalia Mahmoud ◽  
Marcin Magolon ◽  
Jan Boer ◽  
M.A Elbestawi ◽  
Mohammad Ghayoomi Mohammadi

One of the main issues hindering the adoption of parts produced by laser powder bed fusion (L-PBF) in safety-critical applications is the inconsistency in quality levels. Furthermore, the complicated nature of the L-PBF process makes optimizing process parameters to reduce these defects experimentally challenging and computationally expensive. To address this issue, sensor-based monitoring of the L-PBF process has gained increasing attention in recent years. Moreover, integrating machine learning (ML) techniques to analyze the collected sensor data has significantly improved defect detection, with the aim of enabling online control. This article provides a comprehensive review of the latest applications of ML for in situ monitoring and control of the L-PBF process. First, the main L-PBF process signatures are described, and the suitable sensors and specifications for monitoring each signature are reviewed. Next, the most common ML approaches and algorithms employed in L-PBF are summarized. Then, an extensive comparison of the different ML algorithms used for defect detection in the L-PBF process is presented. The article then describes the ultimate goal of applying ML algorithms to in situ sensor data: closing the loop and taking online corrective actions. Finally, current challenges and ideas for future work are described to provide a perspective on future research directions for ML-based defect detection and control in L-PBF processes.
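As a hedged example of the supervised defect-detection pipelines this review surveys, the sketch below trains a classifier on features that might be extracted from in situ sensor signals and reports a confusion matrix. The data are synthetic and the feature meanings hypothetical; no specific L-PBF dataset, sensor, or algorithm from the reviewed literature is implied.

```python
# Illustrative defect classifier for in situ monitoring data. Columns might
# stand for melt-pool area, peak intensity, spatter count, etc.; here they are
# synthetic placeholders with a realistic class imbalance (few defects).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=8,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
pred = clf.predict(X_test)

print(confusion_matrix(y_test, pred))
print(classification_report(y_test, pred, target_names=["nominal", "defect"]))
```

In a closed-loop setting, the predicted labels would feed a controller that adjusts process parameters layer by layer rather than merely flagging defects after the build.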


2020 ◽  
Author(s):  
Riya Tapwal ◽  
Nitin Gupta ◽  
Qin Xin

IoT devices (wireless sensors, actuators, computer devices) produce a large volume and variety of data, and the data produced by these devices are transient. To overcome the problem of the traditional IoT architecture, in which data are sent to the cloud for processing, an emerging technology known as fog computing has recently been proposed. Fog computing brings storage, computing, and control near the end devices; it complements the cloud and provides services to the IoT devices. Hence, data used by the IoT devices should be cached at the fog nodes in order to reduce bandwidth utilization and latency. This chapter discusses the utility of data caching at the fog nodes. Further, various machine learning techniques can be used to reduce latency by predicting the future demands of IoT devices and caching data near them. Therefore, this chapter also discusses various machine learning techniques that can be used to extract accurate data and predict future requests of IoT devices.
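A minimal sketch of the prediction-driven caching idea, assuming a fog node that observes per-item request counts in fixed time windows. The synthetic request traces, the ridge-regression predictor, and the cache size are illustrative choices, not a method from the chapter.

```python
# Sketch: predict each content item's demand in the next time window from its
# recent request history, then cache the top-k predicted items at the fog node.
# Request counts are synthetic Poisson traces.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_items, history, cache_size = 50, 8, 5

# requests[i, t] = number of requests for item i observed in window t.
requests = rng.poisson(lam=rng.uniform(1, 10, size=(n_items, 1)),
                       size=(n_items, history + 1))

# Train on sliding windows: previous `history` counts -> next-window count.
X, y = requests[:, :history], requests[:, history]
model = Ridge().fit(X, y)

# Predict next-window demand from the most recent windows and fill the cache.
predicted = model.predict(requests[:, 1:history + 1])
cached_items = np.argsort(predicted)[::-1][:cache_size]
print("items to cache at the fog node:", sorted(cached_items.tolist()))
```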


2010 ◽  
Vol 41 (2) ◽  
pp. 211-224 ◽  
Author(s):  
Fabio Tango ◽  
Luca Minin ◽  
Francesco Tesauri ◽  
Roberto Montanari


2017 ◽  
pp. 311-318
Author(s):  
Riitta Hari ◽  
Aina Puce

This chapter looks to the future of MEG and EEG. Advances are expected both in instrumentation and in data-analysis tools suitable for experiments carried out in naturalistic settings. Moreover, the person’s behavior should be analyzed in much finer detail than is done at present. Machine-learning approaches allow decoding of different brain states and distinctions between some patient and control groups; they also have implications for the development of brain–machine interfaces and brain-controlled prosthetic devices. Data governance will gain more emphasis as big brain-imaging datasets become more widely available to the research community. At the same time, new challenges emerge for ensuring data quality, replicability, statistical analysis, documentation, and visualization. The main contribution of MEG and EEG to neuroscience continues to be the accurate temporal information they provide.


2020 ◽  
Vol 46 (1) ◽  
pp. 176-190 ◽  
Author(s):  
Georgia Koppe ◽  
Andreas Meyer-Lindenberg ◽  
Daniel Durstewitz

Abstract
Psychiatry today must gain a better understanding of the common and distinct pathophysiological mechanisms underlying psychiatric disorders in order to deliver more effective, person-tailored treatments. To this end, it appears that the analysis of ‘small’ experimental samples using conventional statistical approaches has largely failed to capture the heterogeneity underlying psychiatric phenotypes. Modern algorithms and approaches from machine learning, particularly deep learning, provide new hope to address these issues given their outstanding prediction performance in other disciplines. The strength of deep learning algorithms is that they can implement very complicated, and in principle arbitrary, predictor-response mappings efficiently. This power comes at a cost: the need for large training (and test) samples to infer the (sometimes millions of) model parameters. This appears to be at odds with the rather ‘small’ samples available in psychiatric human research to date (n < 10,000), and with the ambition of predicting treatment at the single-subject level (n = 1). Here, we aim to give a comprehensive overview of how we can nevertheless use such models for prediction in psychiatry. We review how machine learning approaches compare to more traditional statistical hypothesis-driven approaches, how their complexity relates to the need for large sample sizes, and what we can do to optimally use these powerful techniques in psychiatric neuroscience.

