An Integrated Approach of Mechanistic-Modeling and Machine-Learning for Thickness Optimization of Frozen Microwaveable Foods

Foods ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 763
Author(s):  
Ran Yang ◽  
Zhenbo Wang ◽  
Jiajia Chen

Mechanistic modeling has been a useful tool for helping food scientists understand complicated microwave-food interactions, but it cannot be used directly by food developers for product design because it is resource-intensive. This study developed and validated an integrated approach that couples mechanistic modeling and machine learning to achieve efficient food product design (thickness optimization) with better heating uniformity. The mechanistic model, which incorporates electromagnetics and heat transfer, had been developed and extensively validated previously and was used directly in this study. A Bayesian optimization machine-learning algorithm was developed and integrated with the mechanistic model. The integrated approach was validated by comparing its optimization performance with that of a parametric sweep approach based solely on mechanistic modeling. The results showed that the integrated approach had the capability and robustness to optimize the thickness of products of different shapes using different initial training datasets, with higher efficiency (45.9% to 62.1% improvement) than the parametric sweep approach. Three rectangular trays with one optimized thickness (1.56 cm) and two non-optimized thicknesses (1.20 and 2.00 cm) were 3-D printed and used in microwave heating experiments, which confirmed the feasibility of the integrated approach for thickness optimization. The integrated approach can be further developed and extended as a platform for efficiently designing complicated microwavable foods with multiple-parameter optimization.
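The core loop of such a thickness optimization can be sketched with a small, self-contained Bayesian optimization routine (Gaussian-process surrogate plus expected-improvement acquisition). The `heating_nonuniformity` objective below is a hypothetical stand-in for the paper's mechanistic electromagnetics/heat-transfer simulation; the thickness grid, kernel length scale, and initial designs are likewise illustrative.

```python
import math
import numpy as np

def heating_nonuniformity(thickness):
    """Hypothetical stand-in for the mechanistic model's heating-uniformity
    metric; in the paper this value comes from the validated
    electromagnetics/heat-transfer simulation."""
    return (thickness - 1.56) ** 2 + 0.05 * math.sin(8 * thickness)

def rbf(a, b, ls=0.3):
    # squared-exponential kernel between two 1-D sets of thicknesses
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # Gaussian-process posterior mean and std at candidate points Xs
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = np.clip(np.diag(rbf(Xs, Xs) - Ks.T @ np.linalg.solve(K, Ks)),
                  1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    # minimization form: improvement over the current best observation
    z = (best - mu) / sigma
    Phi = 0.5 * (1 + np.vectorize(math.erf)(z / math.sqrt(2)))
    phi = np.exp(-0.5 * z ** 2) / math.sqrt(2 * math.pi)
    return (best - mu) * Phi + sigma * phi

# small initial training dataset, then the BO loop over a thickness grid (cm)
grid = np.linspace(1.0, 2.2, 121)
X = np.array([1.1, 1.8, 2.1])
y = np.array([heating_nonuniformity(x) for x in X])
for _ in range(10):
    mu, sigma = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.append(X, x_next)
    y = np.append(y, heating_nonuniformity(x_next))

best_thickness = X[np.argmin(y)]
```

Each iteration spends one (expensive) mechanistic-model evaluation where the acquisition function expects the most improvement, which is what gives the efficiency gain over a uniform parametric sweep.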

PLoS ONE ◽  
2021 ◽  
Vol 16 (8) ◽  
pp. e0255675
Author(s):  
László Zimányi ◽  
Áron Sipos ◽  
Ferenc Sarlós ◽  
Rita Nagypál ◽  
Géza I. Groma

Dealing with a system of first-order reactions is a recurrent issue in chemometrics, especially in the analysis of data obtained by spectroscopic methods applied to complex biological systems. We argue that global multiexponential fitting, still the most common way to solve such problems, has serious weaknesses compared to contemporary methods of sparse modeling. Combining the advantages of the group lasso and the elastic net, statistical methods proven to be very powerful in other areas, we created an optimization problem tunable from very sparse to very dense distributions over a large pre-defined grid of time constants, fitting both simulated and experimental multiwavelength spectroscopic data with high computational efficiency. We found that the optimal values of the tuning hyperparameters can be selected by a machine-learning algorithm based on a Bayesian optimization procedure, utilizing widely used as well as novel versions of cross-validation. The derived algorithm accurately recovered the true sparse kinetic parameters of an extremely complex simulated model of the bacteriorhodopsin photocycle, as well as the wide peak of hypothetical distributed kinetics, in the presence of different noise levels. It also performed well in the analysis of ultrafast experimental fluorescence kinetics data recorded on the coenzyme FAD over a very wide logarithmic time window. We conclude that the primary application of the presented algorithms, implemented in available software, covers a wide range of studies on light-induced physical, chemical, and biological processes carried out with different spectroscopic methods. The demand for this kind of analysis is expected to soar due to emerging ultrafast multidimensional infrared and electronic spectroscopic techniques that produce very large and complex datasets. In addition, simulations based on our methods could help in designing the technical parameters of future experiments for the verification of particular hypothetical models.
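The group-lasso/elastic-net combination over a grid of time constants can be sketched as follows. This uses scikit-learn's `MultiTaskElasticNet`, which penalizes each time constant's amplitudes jointly across all wavelengths (the group structure), as a stand-in for the authors' own implementation; the simulated two-component dataset and all hyperparameter values are illustrative.

```python
import numpy as np
from sklearn.linear_model import MultiTaskElasticNet

rng = np.random.default_rng(0)
t = np.logspace(-1, 3, 200)              # measurement times (a.u.)
taus = np.logspace(-1, 3, 60)            # pre-defined grid of time constants
D = np.exp(-t[:, None] / taus[None, :])  # design matrix of exponential decays

# simulate two-component multiwavelength data: each true time constant
# carries its own (random) spectrum across 8 "wavelengths"
true_idx = [15, 40]
spectra = rng.normal(size=(2, 8))
Y = D[:, true_idx] @ spectra + 0.01 * rng.normal(size=(len(t), 8))

# group-sparse fit: a grid time constant is kept or dropped jointly
# for all wavelengths; l1_ratio blends lasso-like and ridge-like penalties
model = MultiTaskElasticNet(alpha=0.005, l1_ratio=0.9, max_iter=5000)
model.fit(D, Y)

# per-time-constant amplitude across wavelengths; most entries shrink to ~0
amps = np.linalg.norm(model.coef_, axis=0)   # coef_ has shape (8, 60)
active = np.where(amps > 0.05 * amps.max())[0]
```

Sweeping `alpha` and `l1_ratio` tunes the solution from very sparse (a few isolated time constants) to very dense (a distribution over the grid), which is the tunability the abstract describes; the hyperparameter selection itself would then be automated by the Bayesian optimization procedure.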


2021 ◽  
Vol 1 (1) ◽  
Author(s):  
Pietro Mascheroni ◽  
Symeon Savvopoulos ◽  
Juan Carlos López Alfonso ◽  
Michael Meyer-Hermann ◽  
Haralampos Hatzikirou

Abstract
Background: In clinical practice, a plethora of medical examinations are conducted to assess the state of a patient's pathology, producing a variety of clinical data. Investigation of these data, however, faces two major challenges. First, we lack knowledge of the mechanisms regulating these data variables; second, data collection is sparse in time, since it relies on the patient's clinical presentation. The former limits the predictive accuracy of clinical outcomes for any mechanistic model, while the latter restrains any machine-learning algorithm from accurately inferring the corresponding disease dynamics.
Methods: Here, we propose a novel method, based on the Bayesian coupling of mathematical modeling and machine learning, aimed at improving individualized predictions by addressing the aforementioned challenges.
Results: We evaluate the proposed method on a synthetic dataset for brain tumor growth and analyze its performance in predicting two relevant clinical outputs. The method yields improved predictions in almost all simulated patients, especially those with a late clinical presentation (>95% of patients show improvements over standard mathematical modeling). In addition, we test the methodology in two settings involving real patient cohorts. In both cases, namely cancer growth in chronic lymphocytic leukemia and ovarian cancer, the predictions show excellent agreement with reported clinical outcomes (around a 60% reduction in mean squared error).
Conclusions: We show that combining machine-learning and mathematical-modeling approaches can lead to accurate predictions of clinical outputs in the context of data sparsity and limited knowledge of disease mechanisms.
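A minimal sketch of this kind of Bayesian coupling: a mechanistic growth law supplies the likelihood for a handful of sparse clinical visits, and a grid posterior over the model parameter yields individualized forecasts. The logistic growth law, all parameter values, and the visit schedule below are hypothetical, not the paper's actual tumor model.

```python
import numpy as np

def logistic_volume(t, rho, K=60.0, v0=1.0):
    """Mechanistic stand-in: logistic tumor growth (volume in cm^3),
    carrying capacity K, initial volume v0, growth rate rho (1/month)."""
    return K / (1 + (K / v0 - 1) * np.exp(-rho * t))

rng = np.random.default_rng(1)
true_rho = 0.25
t_obs = np.array([2.0, 10.0, 30.0])   # sparse clinical visits (months)
v_obs = logistic_volume(t_obs, true_rho) + rng.normal(0, 1.0, size=3)

# grid-based Bayesian update: prior over the growth rate times the
# Gaussian likelihood of the observed visits (sigma = 1 cm^3 assumed)
rho_grid = np.linspace(0.01, 1.0, 500)
prior = np.exp(-0.5 * ((rho_grid - 0.3) / 0.2) ** 2)  # weakly informative
pred = logistic_volume(t_obs[:, None], rho_grid[None, :])
loglik = -0.5 * np.sum((v_obs[:, None] - pred) ** 2, axis=0)
post = prior * np.exp(loglik - loglik.max())
post /= post.sum()

rho_mean = float(np.sum(rho_grid * post))             # individualized estimate
v_forecast = float(np.sum(logistic_volume(48.0, rho_grid) * post))
```

The point of the coupling is visible even in this toy: with only three visits, the mechanistic prior structure turns sparse data into a usable forecast, where a purely data-driven regressor would have almost nothing to learn from.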


2020 ◽  
Author(s):  
László Zimányi ◽  
Áron Sipos ◽  
Ferenc Sarlós ◽  
Rita Nagypál ◽  
Géza Groma

Dealing with a system of first-order reactions is a recurrent problem in chemometrics, especially in the analysis of data obtained by spectroscopic methods. Here we argue that global multiexponential fitting, still the most common way to solve this kind of problem, has serious weaknesses in contrast to the available contemporary methods of sparse modeling. Combining the advantages of the group lasso and the elastic net, statistical methods proven to be very powerful in other areas, we obtained an optimization problem tunable from very sparse to very dense distributions over a large pre-defined grid of time constants, fitting both simulated and experimental multiwavelength spectroscopic data with very high performance. Moreover, we found that the optimal values of the tuning hyperparameters can be selected by a machine-learning algorithm based on a Bayesian optimization procedure, utilizing a widely used and a novel version of cross-validation. The applied algorithm very accurately recovered the true sparse kinetic parameters of an extremely complex simulated model of the bacteriorhodopsin photocycle, as well as the wide peak of hypothetical distributed kinetics, in the presence of different levels of noise. It also performed well in the analysis of ultrafast experimental fluorescence kinetics data recorded on the coenzyme FAD over a very wide logarithmic time window.


2019 ◽  
Vol 58 (36) ◽  
pp. 16743-16752 ◽  
Author(s):  
Xiang Zhang ◽  
Teng Zhou ◽  
Lei Zhang ◽  
Ka Yip Fung ◽  
Ka Ming Ng


2015 ◽  
Vol 112 (38) ◽  
pp. E5351-E5360 ◽  
Author(s):  
Weizhe Hong ◽  
Ann Kennedy ◽  
Xavier P. Burgos-Artizzu ◽  
Moriel Zelikowsky ◽  
Santiago G. Navonne ◽  
...  

A lack of automated, quantitative, and accurate assessment of social behaviors in mammalian animal models has limited progress toward understanding mechanisms underlying social interactions and their disorders such as autism. Here we present a new integrated hardware and software system that combines video tracking, depth sensing, and machine learning for automatic detection and quantification of social behaviors involving close and dynamic interactions between two mice of different coat colors in their home cage. We designed a hardware setup that integrates traditional video cameras with a depth camera, developed computer vision tools to extract the body “pose” of individual animals in a social context, and used a supervised learning algorithm to classify several well-described social behaviors. We validated the robustness of the automated classifiers in various experimental settings and used them to examine how genetic background, such as that of Black and Tan Brachyury (BTBR) mice (a previously reported autism model), influences social behavior. Our integrated approach allows for rapid, automated measurement of social behaviors across diverse experimental designs and also affords the ability to develop new, objective behavioral metrics.
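The supervised-classification step can be illustrated with a generic sketch: per-frame pose-derived features feed a classifier trained on annotated frames. The features, the labelling rule, and the random-forest classifier below are illustrative stand-ins for the paper's actual feature set and learning algorithm.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 600
# hypothetical per-frame pose features: inter-animal distance (cm),
# relative speed (cm/s), relative orientation (rad) -- stand-ins for
# the features the tracking pipeline would extract
dist = rng.uniform(0, 20, n)
speed = rng.uniform(0, 5, n)
angle = rng.uniform(-np.pi, np.pi, n)
X = np.column_stack([dist, speed, angle])

# toy annotation rule standing in for human frame labels:
# "attack" = close and fast, "sniff" = close and slow, else "other"
labels = np.where(dist < 5, np.where(speed > 2, "attack", "sniff"), "other")

Xtr, Xte, ytr, yte = train_test_split(X, labels, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
accuracy = clf.score(Xte, yte)   # held-out per-frame accuracy
```

In the real system the labels come from human annotation of video, and validating the classifier across cages, lighting conditions, and coat colors is what makes the automated metrics trustworthy.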


2021 ◽  
Author(s):  
Daniel O'Leary ◽  
Deirdree Polak ◽  
Roman Popat ◽  
Oliver Eatough ◽  
Tom Brian

Abstract Optimising the Rate of Penetration (ROP) on development wells contributes heavily to delivering projects ahead of schedule and has long been a goal for drilling engineers. Selecting the best parameters to achieve this has often proved difficult due to the extensive quantities of data concerning formation types, bottom-hole assembly (BHA) design and bit specifications. Legacy drilling data can also be vast and poorly characterised, making it very difficult to analyse robustly by hand. Additionally, multiple stakeholders, including bit vendors, directional drilling companies, drilling engineers and offshore supervisors, can each have their own hypotheses on how to improve drilling performance, creating further confusion in this field. Together with its team of data scientists, TotalEnergies E&P UK (TEPUK) has utilised machine learning to analyse field and equipment data and produce guidelines for an optimised drilling rate. The machine-learning algorithm identifies parameters that have a statistical likelihood of improving ROP performance whilst drilling. The model was developed using offset well data from TotalEnergies' Realtime Support Centre (RTSC) together with bit design information. This represented the first use of machine learning in the 20+ years of drilling on Elgin Franklin. Adapting to this new data-based method forms part of a wider digital revolution within TEPUK and the offshore drilling industry. In this case, an integrated approach from the data scientists, drilling engineers and supervisors was required to transition to a new way of working. The first trial of the optimised parameters was on a recent Franklin well (F13) in the Cretaceous Chalk formations. The model generated statistically optimised parameter sheets which were strictly executed on site. The guideline sheets suggested ranges of Revolutions per Minute (RPM), flowrate, Weight on Bit (WOB) and torque, as well as recommendations for bit blades and cutters.
Heatmaps were generated to show which combination of WOB and RPM would likely achieve the best ROP in each sub-formation. The parameter ranges were kept deliberately narrow to reduce time spent varying parameters. In practice, the new digital approach was successfully adopted offshore and contributed to the delivery of the 12 ½" and 8 ½" sections in record time for the field, resulting in significant savings versus the AFE. Following the success of the guideline implementation, steps have been taken to integrate the machine-learning model with live incoming data on TotalEnergies' digital drilling online platform. Since the initial trial on Franklin, online ROP optimisation features have been deployed on the Elgin field and currently provide live parameter guidance, a forecast to section TD and data-driven bit-change scenario analyses whilst drilling.
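The heatmap step can be sketched as a response-surface fit over offset-well records: regress measured ROP on WOB and RPM, evaluate the fitted surface on a grid, and read off the best cell. All numbers below (the ranges, the synthetic response, and its peak at WOB = 35 klbs, RPM = 150) are hypothetical, not field data.

```python
import numpy as np

rng = np.random.default_rng(3)
# synthetic offset-well records: WOB (klbs), RPM, measured ROP (ft/hr);
# the underlying response is a made-up quadratic peaking at (35, 150)
wob = rng.uniform(10, 60, 300)
rpm = rng.uniform(60, 220, 300)
rop = (80 - 0.05 * (wob - 35) ** 2 - 0.002 * (rpm - 150) ** 2
       + rng.normal(0, 2, 300))

# quadratic response-surface fit by ordinary least squares
A = np.column_stack([np.ones_like(wob), wob, rpm, wob**2, rpm**2, wob * rpm])
coef, *_ = np.linalg.lstsq(A, rop, rcond=None)

# "heatmap": predicted ROP on a WOB x RPM grid, then its best cell
W, R = np.meshgrid(np.linspace(10, 60, 51), np.linspace(60, 220, 81))
G = np.column_stack([np.ones(W.size), W.ravel(), R.ravel(),
                     W.ravel()**2, R.ravel()**2, (W * R).ravel()])
heat = (G @ coef).reshape(W.shape)
i, j = np.unravel_index(np.argmax(heat), heat.shape)
best_wob, best_rpm = W[i, j], R[i, j]
```

A narrow guideline range can then be drawn around the best cell, which mirrors the deliberately tight parameter windows issued to the rig site.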


2018 ◽  
Author(s):  
C.H.B. van Niftrik ◽  
F. van der Wouden ◽  
V. Staartjes ◽  
J. Fierstra ◽  
M. Stienen ◽  
...  

2020 ◽  
pp. 1-12
Author(s):  
Li Dongmei

English text-to-speech (TTS) conversion is a key topic in modern computer technology research. Its main difficulty is that large errors occur during the recognition of text-to-speech features, which makes it hard to deploy an English TTS algorithm in a working system. To improve the efficiency of English text-to-speech conversion, this article builds on a machine-learning approach: after the original voice waveform is labeled with pitch marks, the prosody is modified through PSOLA (pitch-synchronous overlap-and-add), and the C4.5 algorithm is used to train a decision tree for judging the pronunciation of polyphones. To evaluate the performance of pronunciation discrimination based on part-of-speech rules and HMM-based prosody hierarchy prediction in speech synthesis systems, this study constructed a system model. In addition, waveform stitching and PSOLA are used to synthesize the sound. For words whose main stress cannot be determined from morphological structure, labels can be learned by machine-learning methods. Finally, this study evaluates and analyzes the performance of the algorithm through control experiments. The results show that the proposed algorithm performs well and has practical value.
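The polyphone-disambiguation step can be sketched with an entropy-criterion decision tree (the information-gain splitting that C4.5 popularized) over part-of-speech context features, here for the polyphone "read". The integer feature encoding and the tiny training set below are illustrative, not the paper's data.

```python
from sklearn.tree import DecisionTreeClassifier

# toy training set for the polyphone "read": features are
# [POS of previous word, POS of next word], encoded as integers:
# 0 = pronoun, 1 = modal ("will"), 2 = auxiliary ("have/had"),
# 3 = adverb, 4 = noun  -- an illustrative feature set only
X = [[0, 4], [1, 4], [2, 4], [2, 3], [0, 3], [1, 3], [2, 0], [1, 0]]
y = ["red", "reed", "red", "red", "red", "reed", "red", "reed"]

# entropy criterion gives a C4.5-style information-gain split;
# scikit-learn's tree is CART-based, used here as a close stand-in
tree = DecisionTreeClassifier(criterion="entropy").fit(X, y)

# after auxiliary "have" ("have read the book") -> past participle
pred = tree.predict([[2, 4]])[0]
```

In a full system this prediction would feed the synthesis stage, where the chosen pronunciation units are concatenated and their prosody adjusted with PSOLA.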

