Machine Learning Model of Dimensionless Numbers to Predict Flow Patterns and Droplet Characteristics for Two-Phase Digital Flows

2021 ◽  
Vol 11 (9) ◽  
pp. 4251
Author(s):  
Jinsong Zhang ◽  
Shuai Zhang ◽  
Jianhua Zhang ◽  
Zhiliang Wang

In digital microfluidic experiments, droplet characteristics and flow patterns are generally identified and predicted by empirical methods, which are ill-suited to mining large amounts of data. In addition, inevitable human intervention leads to inconsistent judgment standards, making comparisons between different experiments cumbersome and almost impossible. In this paper, we used machine learning to build algorithms that automatically identify, judge, and predict flow patterns and droplet characteristics, turning empirical judgment into an intelligent process. Unlike the usual machine learning setups, a generalized variable system was introduced to describe the different geometric configurations of the digital microfluidics. Specifically, Buckingham's π theorem was adopted to obtain multiple groups of dimensionless numbers as the input variables of the machine learning algorithms. In verification, the SVM and BPNN algorithms successfully classified and predicted the different flow patterns and droplet characteristics (length and frequency). Compared with the primitive parameter system, the dimensionless number system was superior in predictive capability. The dimensionless numbers selected for the machine learning algorithms should have strong physical rather than merely mathematical meanings. Applying dimensionless numbers reduced the dimensionality of the system and the amount of computation without losing the information of the primitive parameters.
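The idea of feeding dimensionless groups rather than primitive parameters to a classifier can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's pipeline: the specific groups (here the capillary number Ca and Weber number We), the parameter ranges, and the dripping/jetting labeling rule are all assumptions chosen for illustration.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical primitive parameters for a droplet-microfluidics run:
# viscosity mu (Pa*s), velocity v (m/s), interfacial tension sigma (N/m),
# density rho (kg/m^3), channel width L (m).
def dimensionless_features(mu, v, sigma, rho, L):
    Ca = mu * v / sigma          # capillary number: viscous vs. interfacial forces
    We = rho * v**2 * L / sigma  # Weber number: inertial vs. interfacial forces
    return np.column_stack([Ca, We])

rng = np.random.default_rng(0)
n = 200
mu = rng.uniform(1e-3, 1e-2, n)
v = rng.uniform(0.01, 0.5, n)
sigma = rng.uniform(5e-3, 5e-2, n)
rho = rng.uniform(900.0, 1100.0, n)
L = rng.uniform(50e-6, 200e-6, n)

X = dimensionless_features(mu, v, sigma, rho, L)
# Synthetic flow-pattern label: e.g. dripping (0) vs. jetting (1) split at a
# capillary-number threshold -- a stand-in for experimentally observed labels.
y = (X[:, 0] > np.median(X[:, 0])).astype(int)

# Two dimensionless inputs replace five primitive ones, with no extra
# information needed about the device geometry.
clf = SVC(kernel="rbf").fit(np.log(X), y)
print(clf.score(np.log(X), y))
```

Note the dimensionality drop: five primitive parameters collapse into two physically meaningful inputs.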

Author(s):  
Aaron Rodrigues

Abstract: Food sales forecasting is concerned with predicting future sales of food-related businesses such as supermarkets, grocery stores, restaurants, bakeries, and patisseries. With accurate short-term sales forecasts, companies can reduce stocked and expired products within stores while also avoiding missed revenues. This research examines current machine learning algorithms for predicting food purchases. It covers key design considerations for a data analyst working on food sales forecasting, such as the temporal granularity of sales data, the input variables to employ for forecasting sales, and the representation of the sales output variable. It also examines machine learning algorithms that have been used to forecast food sales and the proper metrics for assessing their performance. Finally, it discusses the major problems and prospects for applied machine learning in the field of food sales forecasting.

Keywords: Food, Demand forecasting, Machine learning, Regression, Timeseries forecasting, Sales prediction
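One common way to frame the design choices named above (temporal granularity, input variables, output representation) is to turn a daily sales series into lagged features for a regressor. The sketch below uses a made-up weekly-seasonal series and a plain linear model; the granularity (daily), lag count (7), and holdout window (14 days) are illustrative assumptions, not recommendations from the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical daily sales with a weekly cycle plus noise.
rng = np.random.default_rng(1)
days = np.arange(120)
sales = 100 + 10 * np.sin(2 * np.pi * days / 7) + rng.normal(0, 2, 120)

def make_lags(series, n_lags=7):
    # Each row holds the previous n_lags days; the target is the next day.
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    y = series[n_lags:]
    return X, y

X, y = make_lags(sales)
model = LinearRegression().fit(X[:-14], y[:-14])  # train on all but last 2 weeks
pred = model.predict(X[-14:])                     # forecast the held-out window
mae = np.mean(np.abs(pred - y[-14:]))
print(mae)
```

The same framing carries over directly to other output representations (e.g., classifying demand into low/medium/high instead of regressing the raw quantity).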


2019 ◽  
Vol 38 (7) ◽  
pp. 512-519 ◽  
Author(s):  
Brian Russell

As geophysicists, we are trained to conceptualize geophysical problems in detail. However, machine learning algorithms are more difficult to understand and are often thought of as simply “black boxes.” A numerical example is used here to illustrate the difference between geophysical inversion and inversion by machine learning. In doing so, an attempt is made to demystify machine learning algorithms and show that, like inverse problems, they have a definite mathematical structure that can be written down and understood. The example used is the extraction of the underlying reflection coefficients from a synthetic seismic response that was created by convolving a reflection coefficient dipole with a symmetric wavelet. Because the dipole is below the seismic tuning frequency, the overlapping wavelets create both an amplitude increase and extra nonphysical reflection coefficients in the synthetic seismic data. This is a common problem in real seismic data. In discussing the solution to this problem, the topics of deconvolution, recursive inversion, linear regression, and nonlinear regression using a feedforward neural network are covered. It is shown that if the inputs to the deconvolution problem are fully understood, this is the optimal way to extract the true reflection coefficients. However, if the geophysics is not fully understood and large amounts of data are available, machine learning can provide a viable alternative to geophysical inversion.
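The forward problem described above can be reproduced in a few lines: a reflection-coefficient dipole convolved with a symmetric wavelet, with the dipole spacing below tuning. The wavelet choice (a 30 Hz Ricker), sampling rate, and spike amplitudes are assumptions for illustration; the tuning effect they exhibit is the general one discussed in the text.

```python
import numpy as np

# Symmetric (Ricker) wavelet of peak frequency f, sampled at dt.
def ricker(f, dt, length=0.128):
    t = np.arange(-length / 2, length / 2, dt)
    return (1 - 2 * (np.pi * f * t) ** 2) * np.exp(-((np.pi * f * t) ** 2))

dt = 0.002                   # 2 ms sampling
r = np.zeros(100)
r[48], r[52] = 0.1, -0.1     # dipole: opposite-sign spikes 8 ms apart
trace = np.convolve(r, ricker(30.0, dt), mode="same")

# Below tuning, the overlapping wavelets interfere constructively: the
# peak trace amplitude exceeds either spike's true magnitude of 0.1,
# and sidelobes appear where no reflector exists.
print(trace.max())
```

Inverting `trace` back to the two true spikes is exactly the task the article tackles with deconvolution, recursive inversion, and regression.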


2020 ◽  
Vol 11 (3) ◽  
pp. 80-105 ◽  
Author(s):  
Vijay M. Khadse ◽  
Parikshit Narendra Mahalle ◽  
Gitanjali R. Shinde

The emerging area of the internet of things (IoT) generates a large amount of data from IoT applications such as health care, smart cities, etc. This data needs to be analyzed in order to derive useful inferences. Machine learning (ML) plays a significant role in analyzing such data. It becomes difficult to select the optimal algorithm from the available set of algorithms/classifiers to obtain the best results. The performance of algorithms differs when applied to datasets from different application domains. It is difficult to tell whether a difference in performance is real or due to random variation in the test data, the training data, or the internal randomness of the learning algorithms. This study takes these issues into account in a comparison of ML algorithms for binary and multivariate classification, and provides guidelines for statistical validation of results. The results obtained show that the accuracy of one algorithm differs from the others by more than the critical difference (CD) over binary and multivariate datasets obtained from different application domains.
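A critical-difference analysis is normally preceded by a Friedman test over per-dataset results, which checks whether the rank differences between algorithms are genuine rather than random variation. The sketch below uses made-up accuracies for three hypothetical classifiers over ten datasets; it is not the study's data.

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Rows: datasets. Columns: three hypothetical classifiers' accuracies.
acc = np.array([
    [0.91, 0.89, 0.85],
    [0.88, 0.87, 0.80],
    [0.93, 0.90, 0.86],
    [0.85, 0.86, 0.79],
    [0.90, 0.88, 0.83],
    [0.92, 0.91, 0.84],
    [0.87, 0.85, 0.81],
    [0.89, 0.88, 0.82],
    [0.94, 0.92, 0.88],
    [0.86, 0.84, 0.80],
])

# Friedman test on per-dataset ranks: a small p-value indicates the
# ranking differences are real, justifying a post-hoc CD comparison.
stat, p = friedmanchisquare(acc[:, 0], acc[:, 1], acc[:, 2])
print(stat, p)
```

When the Friedman test rejects, a post-hoc procedure (e.g., Nemenyi) then determines which pairs of algorithms differ by more than the critical difference.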


Author(s):  
Adrián G. Bruzón ◽  
Patricia Arrogante-Funes ◽  
Fátima Arrogante-Funes ◽  
Fidel Martín-González ◽  
Carlos J. Novillo ◽  
...  

The risks associated with landslides are increasing personal losses and material damage in more and more areas of the world. These natural disasters are related to geological and extreme meteorological phenomena (e.g., earthquakes, hurricanes) occurring in regions that have already suffered similar natural catastrophes. Therefore, to effectively mitigate landslide risks, new methodologies must better identify and understand all these landslide hazards through proper management. Among these methodologies, those based on assessing landslide susceptibility improve the predictability of the areas where one of these disasters is most likely to occur. In recent years, much research has used machine learning algorithms to assess susceptibility using different sources of information, such as remote sensing data, spatial databases, or geological catalogues. This study presents the first attempt to develop a methodology based on an automated machine learning (AutoML) framework. These frameworks are intended to facilitate the development of machine learning models, with the aim of enabling researchers to focus on data analysis. The area used to test and validate this study is the central and southern region of Guerrero (Mexico), where we compare the performance of 16 machine learning algorithms. The best result is achieved by extra trees, with an area under the curve (AUC) of 0.983. This methodology yields better results than other similar methods because using an AutoML framework allows researchers to focus on data treatment, better understand the input variables, and acquire greater knowledge about the processes involved in the landslides.
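The best-performing model family reported above, extra trees evaluated by AUC, can be sketched as follows. The Guerrero landslide inventory is not available here, so synthetic stand-in data is used; the feature counts and split are arbitrary illustration choices.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for landslide / no-landslide samples with
# terrain-like predictor columns (slope, lithology codes, rainfall, ...).
X, y = make_classification(n_samples=1000, n_features=12, n_informative=6,
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

clf = ExtraTreesClassifier(n_estimators=200, random_state=42).fit(X_tr, y_tr)
# Susceptibility maps are built from the predicted probabilities; AUC
# measures how well they separate landslide from non-landslide cells.
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(auc)
```

An AutoML framework automates the step shown here across many such estimators and their hyperparameters, which is how the 16-algorithm comparison was run.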


2021 ◽  
Author(s):  
Yew Kee Wong

In the information era, enormous amounts of data have become available to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle these datasets and extract value and knowledge from them. Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns, and make decisions with minimal human intervention. Such minimal human intervention can be provided using big data analytics, which is the application of advanced analytics techniques to big data. This paper aims to analyse some of the different machine learning algorithms and methods which can be applied to big data analysis, as well as the opportunities provided by the application of big data analytics in various decision-making domains.


2021 ◽  
Author(s):  
Yanji Wang ◽  
Hangyu Li ◽  
Jianchun Xu ◽  
Ling Fan ◽  
Xiaopu Wang ◽  
...  

Abstract Conventional flow-based two-phase upscaling for simulating the waterflooding process requires the calculation of upscaled two-phase parameters for each coarse interface or block. The whole procedure can be greatly time-consuming, especially for large-scale reservoir models. To address this problem, flow-based two-phase upscaling techniques are combined with machine learning algorithms: the flow-based two-phase upscaling is performed only for a small fraction of coarse interfaces (or blocks), while the upscaled two-phase parameters for the remaining coarse interfaces (or blocks) are provided directly by the machine learning algorithms instead of performing the upscaling computation on each of them. The new two-phase upscaling workflow was tested for generic (left-to-right) flow problems using a 2D large-scale model. We observed similar accuracy for results using the machine learning assisted workflow compared with the results using full flow-based upscaling, and a significant speedup (nearly 70×) was achieved. The workflow developed in this work is one of the pioneering efforts in combining machine learning algorithms with the time-consuming flow-based two-phase upscaling method. It is a valuable addition to the existing multiscale techniques for subsurface flow simulation.
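The core of the workflow, running the expensive upscaling on a small fraction of blocks and regressing the rest, can be sketched as below. The `expensive_upscale` function is a cheap stand-in for the actual flow-based two-phase computation, and the block features, sample fraction, and regressor choice are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
n_blocks = 2000
features = rng.uniform(size=(n_blocks, 4))  # e.g. per-block permeability statistics

def expensive_upscale(f):
    # Stand-in for the flow-based two-phase upscaling of one batch of blocks.
    return f[:, 0] * f[:, 1] + 0.1 * f[:, 2]

# Run the true (expensive) upscaling on ~10% of the coarse blocks only.
sample = rng.choice(n_blocks, size=200, replace=False)
y_sample = expensive_upscale(features[sample])

# Train a regressor on that fraction, then predict the remaining blocks
# cheaply instead of upscaling each one.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(features[sample], y_sample)
y_all = model.predict(features)

err = np.mean(np.abs(y_all - expensive_upscale(features)))
print(err)
```

The speedup comes from replacing roughly 90% of the upscaling runs with regression predictions; accuracy depends on how well the block features capture the upscaled parameters.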


2021 ◽  
Vol 5 (2(15)) ◽  
pp. 61-76
Author(s):  
Vasilii Konstantinovich Alekhin

The social network TikTok has a strong competitive differentiator compared with other platforms: ByteDance exploits machine learning algorithms to generate a recommendation feed (the "For You" page). The algorithm is based on two main mechanisms. The first mechanism clusters the content database by type, audio track, video captions, and hashtags. The second mechanism analyzes the user's behavioral patterns based on their actions in the application. The next step is the formation of user interaction scenarios. The difference between the predicted behavior and the real one is the object of analysis: if it equals zero, the recommendation feed is formed correctly, and the user keeps watching more and more interesting videos, simply scrolling from video to video.
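The prediction-versus-reality feedback described above can be sketched with made-up engagement scores. Nothing here reflects ByteDance's actual model; it only illustrates the idea that the gap between predicted and observed behavior is the signal being analyzed.

```python
import numpy as np

# Hypothetical per-video engagement: what the recommender expected vs.
# what the user actually did (watch fraction, like, share collapsed
# into one illustrative score).
predicted = np.array([0.9, 0.2, 0.7, 0.4])
observed = np.array([0.8, 0.1, 0.9, 0.4])

gap = observed - predicted
# Near-zero entries mean the feed matched the user's interests; large
# positive entries flag content clusters to surface more often.
print(np.round(gap, 2))
```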


Author(s):  
Hozan Khalid Hamarashid

The mean result of machine learning models is determined by utilizing k-fold cross-validation. The algorithm with the best average performance should surpass those with the poorest. But what if the difference in average outcomes is the consequence of a statistical anomaly? To determine whether or not the difference in mean results between two algorithms is genuine, a statistical hypothesis test is utilized. Using statistical hypothesis testing, this study demonstrates how to compare machine learning algorithms. The outputs of several machine learning algorithms or simulation pipelines are compared during model selection. The model that performs best on your performance measure becomes the final model, which can be utilized to make predictions on new data. With classification and regression prediction models, this can be conducted by utilizing traditional machine learning and deep learning methods. The difficulty is to identify whether or not the difference between two models is genuine.
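The comparison described above can be sketched as follows: collect k-fold scores for two models on the same folds, then run a paired test on the per-fold differences. The dataset and the two model choices are stand-ins for illustration; note also that overlapping training sets across folds violate the test's independence assumption, which is why corrected variants are often preferred in practice.

```python
from scipy.stats import ttest_rel
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Same 10 folds for both models (cv=10 uses a deterministic stratified split).
scores_a = cross_val_score(LogisticRegression(max_iter=5000), X, y, cv=10)
scores_b = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=10)

# Paired t-test on the per-fold score differences: a small p-value
# suggests the mean difference is genuine rather than fold-to-fold noise.
stat, p = ttest_rel(scores_a, scores_b)
print(scores_a.mean(), scores_b.mean(), p)
```

If the p-value is above the chosen threshold, the difference in mean scores should not be treated as evidence that one model is better.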

