S1303 Predicting the Cost of Major Gastrointestinal Infection Admissions With Machine Learning

2021 ◽  
Vol 116 (1) ◽  
pp. S599-S599
Author(s):  
Tausif Syed ◽  
Shaan Kamal ◽  
Khwaja F. Haq ◽  
Shantanu Solanki ◽  
Dhanshree Solanki ◽  
...  
Polymers ◽  
2021 ◽  
Vol 13 (3) ◽  
pp. 353
Author(s):  
Kun-Cheng Ke ◽  
Ming-Shyan Huang

Conventional methods for assessing the quality of components mass-produced by injection molding are expensive and time-consuming or rely on imprecise statistical process control parameters. A suitable alternative is to employ machine learning to classify part quality using quality indices and quality grading. In this study, we used a multilayer perceptron (MLP) neural network, together with a small set of quality indices, to accurately classify the geometry of finished parts as “qualified” or “unqualified”. These quality indices, which correlate strongly with part quality, were extracted from pressure curves and fed into the MLP model for learning and prediction. By filtering outliers from the input data and converting the measured quality into quality grades used as output data, we increased the prediction accuracy of the MLP model and classified finished parts into several quality levels. The MLP model may misjudge data points in the “to-be-confirmed” area, which lies between the “qualified” and “unqualified” areas. We therefore isolated this “to-be-confirmed” area, and only the products falling within it were evaluated further, which reduced the cost of quality control considerably. An integrated circuit tray was manufactured to experimentally demonstrate the feasibility of the proposed method.
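As a hedged illustration of this pipeline (not the authors' code), the sketch below trains an MLP on synthetic stand-ins for the pressure-curve quality indices and routes only parts graded "to-be-confirmed" to further inspection; the index values, grade thresholds, and network size are all assumptions.

```python
# Illustrative sketch: grading part quality from pressure-curve indices
# with an MLP. All data here is synthetic; in the study the indices are
# extracted from measured injection-molding pressure curves.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical quality indices (e.g. peak pressure, pressure integral, ...)
X = rng.normal(size=(n, 4))
# Hypothetical grades: 0 = unqualified, 1 = to-be-confirmed, 2 = qualified
score = X @ np.array([0.8, 0.5, -0.3, 0.2]) + rng.normal(scale=0.2, size=n)
y = np.digitize(score, bins=[-0.5, 0.5])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
mlp.fit(X_tr, y_tr)
print(f"grade accuracy: {mlp.score(X_te, y_te):.3f}")

# Only parts graded "to-be-confirmed" would be inspected further,
# cutting quality-control cost relative to inspecting every part.
to_confirm = X_te[mlp.predict(X_te) == 1]
print(f"parts routed to inspection: {len(to_confirm)} of {len(X_te)}")
```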


2021 ◽  
Vol 3 (1) ◽  
Author(s):  
Zhikuan Zhao ◽  
Jack K. Fitzsimons ◽  
Patrick Rebentrost ◽  
Vedran Dunjko ◽  
Joseph F. Fitzsimons

Machine learning has recently emerged as a fruitful area for finding potential quantum computational advantage. Many of the quantum-enhanced machine learning algorithms critically hinge upon the ability to efficiently produce states proportional to high-dimensional data points stored in a quantum accessible memory. Even given query access to exponentially many entries stored in a database, the construction of which is considered a one-off overhead, it has been argued that the cost of preparing such amplitude-encoded states may offset any exponential quantum advantage. Here we prove using smoothed analysis that if the data analysis algorithm is robust against small entry-wise input perturbations, state preparation can always be achieved with a constant number of queries. This criterion is typically satisfied in realistic machine learning applications, where input data is subject to moderate noise. Our results are equally applicable to the recent seminal progress in quantum-inspired algorithms, where specially constructed databases suffice for polylogarithmic classical algorithms in low-rank cases. The consequence of our finding is that, for the purpose of practical machine learning, polylogarithmic processing time is possible under a general and flexible input model, with quantum algorithms or with quantum-inspired classical algorithms in the low-rank case.
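A toy classical illustration of the central object may help: an amplitude-encoded state is just the L2-normalised data vector, and the robustness criterion says that small entry-wise noise barely moves that state. The sketch below (our own, with assumed dimension and noise scale) checks this numerically.

```python
# Toy classical illustration (not the paper's quantum procedure): the
# amplitudes of |x> are the L2-normalised entries of x, and an entry-wise
# perturbation of moderate size leaves the encoded state nearly unchanged,
# which is the robustness property the smoothed-analysis argument uses.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=2**10)              # high-dimensional data point
state = x / np.linalg.norm(x)           # amplitudes of the encoded state

eps = 1e-3                              # assumed entry-wise noise scale
noisy = x + rng.normal(scale=eps, size=x.size)
noisy_state = noisy / np.linalg.norm(noisy)

# The overlap |<x|x'>| stays near 1 for moderate noise.
print(f"fidelity: {abs(state @ noisy_state):.6f}")
```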


2020 ◽  
Vol 7 (1) ◽  
Author(s):  
Miles L. Timpe ◽  
Maria Han Veiga ◽  
Mischa Knabenhans ◽  
Joachim Stadel ◽  
Stefano Marelli

In the late stages of terrestrial planet formation, pairwise collisions between planetary-sized bodies act as the fundamental agent of planet growth. These collisions can lead to either growth or disruption of the bodies involved and are largely responsible for shaping the final characteristics of the planets. Despite their critical role in planet formation, an accurate treatment of collisions has yet to be realized. While semi-analytic methods have been proposed, they remain limited to a narrow set of post-impact properties and have only achieved relatively low accuracies. However, the rise of machine learning and access to increased computing power have enabled novel data-driven approaches. In this work, we show that data-driven emulation techniques are capable of classifying and predicting the outcome of collisions with high accuracy and are generalizable to any quantifiable post-impact quantity. In particular, we focus on the dataset requirements, training pipeline, and classification and regression performance for four distinct data-driven techniques from machine learning (ensemble methods and neural networks) and uncertainty quantification (Gaussian processes and polynomial chaos expansion). We compare these methods to existing analytic and semi-analytic methods. Such data-driven emulators are poised to replace the methods currently used in N-body simulations, while avoiding the cost of direct simulation. This work is based on a new set of 14,856 SPH simulations of pairwise collisions between rotating, differentiated bodies at all possible mutual orientations.
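As a hedged sketch of the emulation idea (on synthetic data, not the 14,856 SPH simulations), the example below fits one ensemble method and one Gaussian process to a toy post-impact quantity; the input parameters and the target function are stand-ins.

```python
# Sketch of collision-outcome emulation: fit two of the technique
# families named in the abstract (an ensemble method and a Gaussian
# process) to predict a post-impact quantity such as the largest-remnant
# mass fraction from impact parameters. Data is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 500
# Hypothetical inputs: impact velocity, impact angle, mass ratio (scaled)
X = rng.uniform(size=(n, 3))
# Synthetic stand-in for the SPH-derived largest-remnant mass fraction
y = np.clip(1.0 - 0.8 * X[:, 0] * np.sin(np.pi * X[:, 1]) + 0.1 * X[:, 2], 0, 1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for name, model in [
    ("gradient boosting", GradientBoostingRegressor(random_state=0)),
    ("Gaussian process", GaussianProcessRegressor(kernel=RBF(), alpha=1e-4)),
]:
    model.fit(X_tr, y_tr)
    print(f"{name}: R^2 = {model.score(X_te, y_te):.3f}")
```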


2021 ◽  
Vol 6 (11) ◽  
pp. 157
Author(s):  
Gonçalo Pereira ◽  
Manuel Parente ◽  
João Moutinho ◽  
Manuel Sampaio

Decision support and optimization tools used in construction often require an accurate estimation of cost variables to maximize their benefit. Heavy machinery is traditionally one of the greatest costs to consider, mainly due to fuel consumption. These typically diesel-powered machines show great variability in fuel consumption depending on the scenario of utilization. This paper describes the creation of a framework for estimating the fuel consumption of construction trucks as a function of the carried load, the slope, the distance, and the pavement type. A more accurate estimation will increase the benefit of these optimization tools. The fuel consumption estimation model was developed using Machine Learning (ML) algorithms supported by data gathered through several sensors, in a specially designed datalogger with wireless communication and opportunistic synchronization, in a real-context experiment. The results demonstrated the viability of the method, providing important insight into the advantages of combining sensorization and machine learning models in a real-world construction setting. Ultimately, this study is a significant step towards IoT implementation from a Construction 4.0 viewpoint, especially considering its potential for real-time and digital twin applications.
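A minimal regression sketch of this setup, assuming synthetic data: the four predictors below (carried load, slope, distance, pavement type) come from the abstract, but the random-forest model and all numeric ranges are our assumptions, not necessarily the paper's choices.

```python
# Sketch: estimate truck fuel consumption from the four predictors named
# in the abstract. Data is synthetic; the model choice is ours.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 800
df = pd.DataFrame({
    "load_t": rng.uniform(0, 30, n),         # carried load [t]
    "slope_pct": rng.uniform(-8, 8, n),      # road slope [%]
    "distance_km": rng.uniform(0.5, 10, n),  # trip distance [km]
    "pavement": rng.choice(["paved", "gravel", "dirt"], n),
})
base_rate = df["pavement"].map({"paved": 0.35, "gravel": 0.45, "dirt": 0.55})
# Synthetic consumption: per-km rate grows with load and uphill slope
df["fuel_l"] = df["distance_km"] * (
    base_rate + 0.01 * df["load_t"] + 0.02 * df["slope_pct"].clip(lower=0)
) + rng.normal(scale=0.1, size=n)

X = pd.get_dummies(df.drop(columns="fuel_l"), columns=["pavement"])
scores = cross_val_score(RandomForestRegressor(random_state=0),
                         X, df["fuel_l"], cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.3f}")
```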


2021 ◽  
Vol 3 (2) ◽  
pp. 43-50
Author(s):  
Safa SEN ◽  
Sara Almeida de Figueiredo

Predicting bank failures has been an essential subject in the literature due to the significance of banks for the economic prosperity of a country. Acting as intermediaries in the economy, banks channel funds between creditors and debtors; in this sense, banks are considered the backbone of economies. It is therefore important to create early warning systems that distinguish insolvent banks from solvent ones, so that insolvent banks can apply for assistance and avoid bankruptcy in financially turbulent times. In this paper, we focus on two different machine learning disciplines for predicting bank failures: boosting and cost-sensitive methods. Boosting methods are widely used in the literature due to their strong predictive capability. Cost-Sensitive Forest, however, is relatively new to the literature and was originally developed to address class imbalance in software defect detection. Our results show that, compared with the boosting methods, Cost-Sensitive Forest classifies failed banks in particular more accurately. We therefore suggest using Cost-Sensitive Forest when predicting bank failures with imbalanced datasets.
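Cost-Sensitive Forest itself is not available in standard libraries, so the hedged sketch below approximates the idea with misclassification-cost weighting (class_weight) in a random forest and compares failed-class recall against an unweighted boosting baseline on imbalanced synthetic data; the 10:1 cost ratio is an assumption.

```python
# Sketch: cost-sensitive weighting vs. plain boosting on an imbalanced
# failure-prediction task. Synthetic data; not the paper's bank dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Imbalanced data: roughly 5% "failed" banks (class 1)
X, y = make_classification(n_samples=4000, weights=[0.95], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

boost = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
# Penalise missing a failed bank 10x more than a false alarm (assumed ratio)
cs_forest = RandomForestClassifier(class_weight={0: 1, 1: 10},
                                   random_state=0).fit(X_tr, y_tr)

for name, model in [("boosting", boost), ("cost-sensitive forest", cs_forest)]:
    recall = recall_score(y_te, model.predict(X_te))
    print(f"{name}: failed-bank recall = {recall:.3f}")
```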


Polymers ◽  
2021 ◽  
Vol 13 (18) ◽  
pp. 3100
Author(s):  
Anusha Mairpady ◽  
Abdel-Hamid I. Mourad ◽  
Mohammad Sayem Mozumder

The selection of nanofillers and compatibilizing agents, and their size and concentration, are crucial in the design of durable nanobiocomposites with maximized mechanical properties (e.g., fracture strength (FS), yield strength (YS), and Young’s modulus (YM)). Statistical optimization of the key design factors has therefore become extremely important to minimize the experimental runs and the cost involved. In this study, both statistical techniques (analysis of variance (ANOVA) and response surface methodology (RSM)) and artificial intelligence-based machine learning techniques (artificial neural network (ANN) and genetic algorithm (GA)) were used to optimize the concentrations of nanofillers and compatibilizing agents in the injection-molded HDPE nanocomposites. Initially, through ANOVA, the concentrations of TiO2 and cellulose nanocrystals (CNCs), and their combinations, were found to be the major factors in improving the durability of the HDPE nanocomposites. The data were then modeled and predicted using RSM, ANN, and their combinations with a genetic algorithm (i.e., RSM-GA and ANN-GA). Finally, to minimize the risk of local optimization, an ANN-GA hybrid technique was implemented to optimize multiple responses, capturing the nonlinear relationship between the factors (the concentrations of TiO2 and CNCs) and the responses (FS, YS, and YM) with minimal error and with regression values above 95%.
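A hedged sketch of the ANN-GA hybrid on synthetic data: an ANN surrogate is fitted to a response, then a small genetic algorithm (blend crossover plus Gaussian mutation) searches the surrogate for the concentration pair that maximises the predicted response. Concentration ranges, network size, and GA settings are all assumptions.

```python
# Sketch of the ANN-GA idea: ANN surrogate + genetic search over it.
# Synthetic data; not the paper's measured HDPE nanocomposite responses.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
# Hypothetical design data: columns are [TiO2 wt%, CNC wt%]
X = rng.uniform(0, 5, size=(300, 2))
# Synthetic response peaking near (3, 2), standing in for e.g. FS
y = -(X[:, 0] - 3) ** 2 - (X[:, 1] - 2) ** 2 + rng.normal(scale=0.1, size=300)
ann = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000,
                   random_state=0).fit(X, y)

pop = rng.uniform(0, 5, size=(50, 2))                # initial GA population
for _ in range(40):                                  # generations
    fitness = ann.predict(pop)                       # surrogate evaluation
    parents = pop[np.argsort(fitness)[-25:]]         # keep the fitter half
    idx = rng.integers(0, 25, size=(25, 2))
    children = parents[idx].mean(axis=1)             # blend crossover
    children += rng.normal(scale=0.2, size=(25, 2))  # Gaussian mutation
    pop = np.clip(np.vstack([parents, children]), 0, 5)

best = pop[np.argmax(ann.predict(pop))]
print(f"GA-optimised (TiO2, CNC) concentrations: {best.round(2)}")
```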


Author(s):  
Bartosz Firlik ◽  
Maciej Tabaszewski

This paper presents the concept of a simple system for identifying the technical condition of tracks based on a trained learning system in the form of three independent neural networks. The studies conducted showed that basic measurements based on the root mean square (RMS) of vibration acceleration allow the track condition to be monitored, provided that the rail type is included in the information system. It is also necessary to select data based on a threshold value of vehicle velocity. In higher velocity ranges (above 40 km/h), it is possible to distinguish technical conditions with a permissible error of 5%. Such selection also makes it possible to ignore the effect of rides through switches and crossings. Technical condition monitoring is also possible at lower ride velocities; however, this comes at the cost of reduced accuracy of the analysis.
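A minimal sketch of the feature pipeline described here, assuming synthetic signals: windowed RMS of vibration acceleration, gated on vehicle velocity above 40 km/h, with rail type as an extra input to a single small network (the paper uses three independent networks).

```python
# Sketch: RMS-of-acceleration features, velocity-threshold data selection,
# and a small neural network for track condition. Signals are synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)
n_windows, win = 600, 1024
# Windows of vibration acceleration with varying intensity per window
accel = rng.normal(scale=rng.uniform(0.5, 2.0, (n_windows, 1)),
                   size=(n_windows, win))
velocity = rng.uniform(10, 80, n_windows)   # vehicle velocity [km/h]
rail_type = rng.integers(0, 2, n_windows)   # e.g. two rail profiles
# Stand-in condition label tied to vibration intensity
condition = (accel.std(axis=1) > 1.2).astype(int)

rms = np.sqrt((accel ** 2).mean(axis=1))    # RMS acceleration per window
keep = velocity > 40                        # velocity-threshold selection
X = np.column_stack([rms, rail_type])[keep]
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X, condition[keep])
print(f"training accuracy above 40 km/h: {net.score(X, condition[keep]):.3f}")
```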


2020 ◽  
Vol 176 ◽  
pp. 04011
Author(s):  
Sergey Korchagin ◽  
Denis Serdechny ◽  
Roman Kim ◽  
Denis Terin ◽  
Mihail Bey

An approach to diagnosing and forecasting crop diseases using machine learning methods is described. To forecast crop diseases, the work proposes a genetic algorithm; the effectiveness of the proposed method is analyzed in terms of its convergence rate as a function of parameters such as the mutation coefficient and the population size. To diagnose crop diseases, a recurrent neural network is proposed. A software modelling suite has been developed that diagnoses plant diseases and produces forecasts. The results obtained can reduce the costs of agricultural enterprises by lowering the cost of diagnosing crop diseases.
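The sketch below mirrors the convergence analysis on a toy fitness function (not crop data): it sweeps the two GA parameters the abstract studies, the mutation coefficient and population size, and reports the best fitness reached within a fixed generation budget. All numeric settings are assumptions.

```python
# Sketch: how mutation coefficient and population size affect GA results
# within a fixed budget, on a toy quadratic fitness (optimum at 0).
import numpy as np

rng = np.random.default_rng(6)

def run_ga(pop_size, mutation, generations=60, dim=8):
    pop = rng.uniform(-5, 5, size=(pop_size, dim))
    for _ in range(generations):
        fitness = -np.sum(pop ** 2, axis=1)               # maximise
        parents = pop[np.argsort(fitness)[-pop_size // 2:]]
        n_children = pop_size - len(parents)
        idx = rng.integers(0, len(parents), size=(n_children, 2))
        children = parents[idx].mean(axis=1)              # blend crossover
        children += rng.normal(scale=mutation, size=children.shape)
        pop = np.vstack([parents, children])
    return -np.sum(pop ** 2, axis=1).max()

for pop_size in (20, 80):
    for mutation in (0.05, 0.5):
        best = run_ga(pop_size, mutation)
        print(f"pop={pop_size:3d} mutation={mutation}: best fitness {best:.4f}")
```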


2020 ◽  
Vol 28 (4) ◽  
pp. 532-551
Author(s):  
Blake Miller ◽  
Fridolin Linder ◽  
Walter R. Mebane

Supervised machine learning methods are increasingly employed in political science. Such models require costly manual labeling of documents. In this paper, we introduce active learning, a framework in which the data to be labeled by human coders are not chosen at random but rather targeted in such a way that the amount of data required to train a machine learning model is minimized. We study the benefits of active learning using text data examples. We perform simulation studies that illustrate conditions under which active learning can reduce the cost of labeling text data, using three corpora that vary in size, document length, and domain. We find that when the document class of interest is not balanced, researchers can achieve equally performing classifiers by labeling only a fraction of the documents that random sampling (or “passive” learning) would require. We further investigate how varying levels of intercoder reliability affect the active learning procedures and find that, even with low reliability, active learning performs more efficiently than random sampling.
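A hedged sketch of pool-based active learning with uncertainty sampling versus random ("passive") sampling, on an imbalanced synthetic task standing in for the paper's text corpora; the classifier, batch size, and number of query rounds are assumptions.

```python
# Sketch: uncertainty sampling vs. random sampling with the same labeling
# budget on an imbalanced task. Synthetic data, not the paper's corpora.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

rng = np.random.default_rng(7)
X, y = make_classification(n_samples=3000, weights=[0.9], random_state=0)
pool = np.arange(2000)
X_te, y_te = X[2000:], y[2000:]

for strategy in ("random", "uncertainty"):
    # Seed set with both classes represented
    labeled = list(np.concatenate(
        [rng.choice(pool[y[pool] == c], 10, replace=False) for c in (0, 1)]))
    for _ in range(10):  # query rounds of 20 labels each
        clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
        unlabeled = np.setdiff1d(pool, labeled)
        if strategy == "uncertainty":
            # Query the points the model is least sure about (prob near 0.5)
            margin = np.abs(clf.predict_proba(X[unlabeled])[:, 1] - 0.5)
            picks = unlabeled[np.argsort(margin)[:20]]
        else:
            picks = rng.choice(unlabeled, 20, replace=False)
        labeled.extend(picks)
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    print(f"{strategy}: F1 = {f1_score(y_te, clf.predict(X_te)):.3f} "
          f"with {len(labeled)} labels")
```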


2020 ◽  
Author(s):  
Velimir Ilić ◽  
Alessandro Bertolini ◽  
Fabio Bonsignorio ◽  
Dario Jozinović ◽  
Tomasz Bulik ◽  
...  

The analysis of low-frequency gravitational wave (GW) data is a crucial mission of GW science, and the performance of Earth-based GW detectors is largely determined by the ability to mitigate low-frequency ambient seismic noise and other seismic influences. These tasks require multidisciplinary research in the fields of seismic sensing, signal processing, robotics, machine learning, and mathematical modeling.

In practice, this kind of research is conducted by large teams of researchers with different expertise, so project management emerges as an important real-life challenge in projects for the acquisition, processing, and interpretation of seismic data from a GW detector site. A prominent example that successfully deals with this aspect is the COST Action G2Net (CA17137 - A network for Gravitational Waves, Geophysics and Machine Learning) and its seismic research group, which counts more than 30 members.

In this talk we will review the structure of the group, present its goals and recent activities, and present new methods for mitigating seismic influences at the GW detector site that will be developed and applied within this collaboration.

This publication is based upon work from CA17137 - A network for Gravitational Waves, Geophysics and Machine Learning, supported by COST (European Cooperation in Science and Technology).

