observable data
Recently Published Documents

Total documents: 65 (last five years: 24)
H-index: 10 (last five years: 2)

2022 ◽  
pp. 109821402094330
Author(s):  
Wendy Chan

Over the past ten years, propensity score methods have made an important contribution to improving generalizations from studies that do not select samples randomly from a population of inference. However, these methods require assumptions, and recent work has considered the role of bounding approaches that provide a range of treatment impact estimates consistent with the observable data. An important limitation of bound estimates is that they can be uninformatively wide. This has motivated research on the use of propensity score stratification to narrow bounds. This article assesses how distributional overlap in propensity scores affects the ability of stratification to tighten bounds. Using the results of two simulation studies and two case studies, I evaluate the relationship between distributional overlap and precision gain and discuss the implications when propensity score stratification is used as a method to improve precision in the bounding framework.
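A minimal sketch of the stratification step this article builds on, under my own simulated-data assumptions (it is not the article's bounding estimator): estimate the sampling propensity score, form quintile strata, and inspect how well the experimental sample overlaps the inference population in each stratum.

```python
# Minimal sketch (assumed simulated data, not the article's code): sampling
# propensity scores, quintile strata, and a crude per-stratum overlap summary.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated population with one covariate; trial participation is non-random.
N = 10_000
x = rng.normal(size=N)
in_sample = rng.random(N) < 1 / (1 + np.exp(-(0.8 * x - 1.0)))

# Sampling propensity score: P(unit is in the experimental sample | x).
ps = (LogisticRegression()
      .fit(x.reshape(-1, 1), in_sample)
      .predict_proba(x.reshape(-1, 1))[:, 1])

# Quintile strata on the estimated propensity score.
edges = np.quantile(ps, [0.2, 0.4, 0.6, 0.8])
strata = np.digitize(ps, edges)

# Per-stratum summary: strata with very few sampled units signal poor
# distributional overlap, which limits how much stratification can tighten bounds.
for k in range(5):
    idx = strata == k
    print(f"stratum {k}: population share {idx.mean():.2f}, "
          f"sampled units {int(in_sample[idx].sum()):4d}, "
          f"mean PS {ps[idx].mean():.2f}")
```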


2021 ◽  
Author(s):  
Jafar Sadeghi ◽  
B Pourhassan ◽  
Saeed Noorigashti ◽  
Sudhaker Upadhyay

Abstract Over the past few decades, inflation models have been studied from different perspectives and under different conditions in order to describe the expanding universe. In this paper, we introduce a modified f(R) gravitational model of the form f(R) = R + γR^p in order to examine a new condition for inflation models. Because our study concerns a modified f(R) gravitational model on the brane, we encounter modified cosmological parameters. We therefore first introduce these modified cosmological parameters, such as the spectral index and the number of e-folds. Then, we apply the swampland criteria to our modified f(R) gravitational model. Finally, we determine the range of each of these parameters by plotting figures and comparing with observable data such as Planck 2018.
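For orientation, the standard single-field slow-roll expressions for the quantities named above are sketched below; the brane and f(R) corrections studied in the paper modify these, so this is only the unmodified baseline, with V the inflaton potential, φ the inflaton field, and M_P the reduced Planck mass (notation assumed here, not taken from the abstract).

```latex
% Standard single-field slow-roll baseline (not the brane-modified expressions):
\epsilon = \frac{M_P^2}{2}\left(\frac{V'}{V}\right)^2, \qquad
\eta = M_P^2\,\frac{V''}{V}, \qquad
n_s \simeq 1 - 6\epsilon + 2\eta, \qquad
r \simeq 16\epsilon, \qquad
N = \frac{1}{M_P^2}\int_{\phi_e}^{\phi_*} \frac{V}{V'}\, d\phi .
```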


2021 ◽  
Vol 2102 (1) ◽  
pp. 012011
Author(s):  
F Mesa ◽  
J R González Granada ◽  
G Correa Vélez

Abstract Being able to estimate the behavior of a system from observable data is one of the great difficulties that any system presents, and it is a challenge for researchers who perform scenario estimation and forecasting. In most problems the proposal is to perform data analysis, but in this article we propose to perform synthesis, constructing a diffeomorphic attractor that models the system. In analysis, we start from the inputs and assume some equations that describe the system; in synthesis, the most important thing is the data produced by the system, since these are real measurements with some associated noise. From those data, using Takens' theorem, we can build an attractor that models the system in a more realistic way.
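A minimal sketch of the delay-coordinate reconstruction that Takens' theorem justifies, using a simulated Lorenz observable as a stand-in for the system's measured data (the system, embedding dimension, and delay are my assumptions, not the authors' choices):

```python
# Minimal sketch (assumed example, not the authors' code): delay-coordinate
# embedding of a single noisy observable, in the spirit of Takens' theorem.
import numpy as np

def lorenz_series(n=20_000, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system with Euler steps and return the x observable."""
    x, y, z = 1.0, 1.0, 1.0
    xs = np.empty(n)
    for i in range(n):
        x, y, z = (x + dt * sigma * (y - x),
                   y + dt * (x * (rho - z) - y),
                   z + dt * (x * y - beta * z))
        xs[i] = x
    return xs

def delay_embed(series, dim=3, tau=10):
    """Stack delayed copies s(t), s(t + tau), ..., s(t + (dim-1)*tau) as columns."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau : i * tau + n] for i in range(dim)])

observable = lorenz_series() + np.random.default_rng(0).normal(0, 0.05, 20_000)
attractor = delay_embed(observable, dim=3, tau=10)   # (n_points, 3) reconstructed states
print(attractor.shape)
```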


Author(s):  
Yana Lyakhova ◽  
Evgeny Alexandrovich Polyakov ◽  
Alexey N Rubtsov

Abstract In recent years, there has been intensive research on how to exploit the quantum laws of nature in machine learning. Models have been put forward which employ spins, photons, and cold atoms. In this work we study the possibility of using lattice fermions to learn classical data. We propose an alternative to the quantum Boltzmann Machine, the so-called Spin-Fermion Machine (SFM), in which the spins represent the degrees of freedom of the observable data (to be learned), and the fermions represent the correlations between the data. The coupling is linear in spins and quadratic in fermions. The fermions are allowed to tunnel between the lattice sites. The training of the SFM can be efficiently implemented since there are closed expressions for the log-likelihood gradient. We find that the SFM is more powerful than the classical Restricted Boltzmann Machine (RBM) with the same number of physical degrees of freedom. The reason is that the SFM has additional freedom due to the rotation of the Fermi sea. We show examples for several data sets.
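For context, a minimal sketch of the classical RBM baseline the SFM is compared against, trained with one step of contrastive divergence (CD-1); the SFM's spin-fermion coupling and its closed-form gradient are not reproduced here, and the toy data and sizes are assumptions.

```python
# Illustrative sketch (baseline RBM only, not the paper's SFM): one CD-1
# gradient step approximating the log-likelihood gradient of an RBM.
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 16, 8, 0.05
W = rng.normal(0, 0.01, (n_visible, n_hidden))
b, c = np.zeros(n_visible), np.zeros(n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0):
    """One CD-1 update from a batch of binary visible vectors v0 of shape (batch, n_visible)."""
    global W, b, c
    ph0 = sigmoid(v0 @ W + c)                      # P(h = 1 | v0)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + b)                    # reconstruction P(v = 1 | h0)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)
    batch = v0.shape[0]
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / batch    # positive minus negative phase
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)

data = (rng.random((64, n_visible)) < 0.3).astype(float)   # toy binary data
for _ in range(100):
    cd1_step(data)
```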


2021 ◽  
Vol 18 ◽  
pp. 150-169
Author(s):  
Vladimir K. Mukhomorov

A model is proposed that allows one to interpret the carcinogenic properties of polycyclic chemical compounds. Electronic, informational, and structural molecular factors that characterize the molecule as a whole are proposed as explanatory variables. The factors limiting the carcinogenic activity of polycyclic compounds are analyzed. The model fully interprets all observable data that were used to support earlier models.


2021 ◽  
Author(s):  
Masaru Kondo

We propose a mathematical model for quantifying willpower and an application based on the model. Volitional Motion Theory (VMT) is a mathematical model that draws on classical mechanics, thermodynamics, statistical mechanics, information theory, and philosophy. The resulting numbers are statistical theoretical values deduced from observable data. VMT can be applied to a variety of fields, including behavioral science, behavioral economics, and computational neuroscience; for example, VMT offers one proposed answer to the question "What is animal spirit in economics?" In addition, a scheduling application has been created to validate VMT. This application is open to the public for anyone to use.


Author(s):  
Stef Kuypers ◽  
Thomas Goorden ◽  
Bruno Delepierre

“Money has always been something of an embarrassment to economic theory. Everyone agrees that it is important; indeed, much of macroeconomic policy discussion makes no sense without reference to money. Yet, for the most part theory fails to provide a good account for it.” (Banerjee and Maskin, 1996, p. 955)

The debate about whether or not a growth imperative exists in debt-based, interest-bearing monetary systems has not yet been settled. It is the goal of this paper to introduce a new perspective into this discussion. For that purpose an SFC (stock-flow consistent) computational model is constructed which simulates a post-Keynesian endogenous money system without including economic parameters such as production, wages, consumption and savings. A case is made that isolating the monetary system allows for better analysis of the inherent properties of such a system. Loan demands, which are assumed to happen, are the driving force of the model. Simulations can be run in two modes, each based on a different assumption: either the growth rate of the money stock is assumed to be constant, or the loan rate, expressed as a percentage of the money stock, is considered to be constant. Simulations with varying parameters are run in order to determine the conditions under which the model converges to stability, which is defined as converging to a bounded debt rate.

The analysis shows that stability of the model depends on net bank profit ratios, expressed relative to their debt assets, remaining below the growth rate of the money stock. Based on these findings it is argued that the question about the existence of a growth imperative in debt-based, interest-bearing monetary systems needs to be reframed. The question becomes whether a steady-state economy can support such a system without destabilizing it. It is concluded that there are indications that this might not be the case. However, for a definite answer more research is necessary. Real-world observable data should be analysed through the lens of the presented model to bring more clarity.
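A toy sketch of the stability condition summarized above, built entirely on my own simplifying assumptions rather than the authors' SFC equations: money grows at a constant rate g, banks retain a net profit of pi per unit of debt, and the retained part is re-borrowed, so the debt-to-money ratio follows d[t+1] = (d[t](1 + pi) + g) / (1 + g), which stays bounded only while pi < g.

```python
# Toy sketch (assumed dynamics, not the paper's SFC model): the debt-to-money
# ratio remains bounded only while the bank profit ratio pi stays below the
# money-stock growth rate g.
def debt_ratio_path(g, pi, periods=200, d0=1.0):
    """Debt-to-money-stock ratio over time for growth rate g and net profit ratio pi."""
    d = d0
    path = [d]
    for _ in range(periods):
        d = (d * (1 + pi) + g) / (1 + g)
        path.append(d)
    return path

for g, pi in [(0.03, 0.01), (0.03, 0.05)]:   # profit below vs. above money growth
    final = debt_ratio_path(g, pi)[-1]
    print(f"g={g:.2f}, pi={pi:.2f} -> debt ratio after 200 periods: {final:.2f}")
```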


Author(s):  
Tín Minh Ngô

Currently, intellectual property in general, and the organization and management of intellectual property activities (intellectual asset governance) in particular, receive special attention from most economic organizations because of their enormous contribution to the asset value of the organization. In recent years, along with the general development trend of the whole country and its industries, universities, as the origin of creativity and the starting point of most intellectual assets, have gradually recognized the value of the contribution of IP to sustainable development. Universities have initially built and operated their own intellectual asset governance models in search of an appropriate one, since intellectual asset governance is only effective with the most suitable model of governance. Following this trend, Vietnam National University Ho Chi Minh City (VNU-HCM) and its member universities soon tested and operated their own models. After nearly 10 years of operation, no official summary report on the efficiency obtained from each model has been produced. From observable data, however, the author found that each model has its advantages and certain limitations. In this article, the author analyzes the intellectual asset governance models in VNU-HCM and its member universities and then proposes a suitable governance model for each subject.


2021 ◽  
Author(s):  
Serge Dolgikh

Abstract Analysis of small datasets presents a number of essential challenges, not least due to insufficient sampling of characteristic patterns in the data, which makes confident conclusions about the unknown distribution elusive and results in lower statistical confidence and higher error. In this work, a novel approach to augmentation of small datasets is proposed based on an ensemble of neural network models of unsupervised generative self-learning. Applying generative learning with an ensemble of individual models made it possible to identify stable clusters of data points in the latent representations of the observable data. Several augmentation techniques based on the identified latent cluster structure were applied to produce new data points and enhance the dataset. The proposed method can be used with small and extremely small datasets to identify characteristic patterns, augment data and, in some cases, improve the accuracy of classification in scenarios with a strong deficit of labels.
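A stand-in sketch of the general idea, with my own substitutions clearly flagged: Gaussian mixture models replace the paper's ensemble of unsupervised generative neural networks, bootstrap resamples provide the ensemble, pairwise co-assignment measures cluster stability, and new points are drawn from the fitted generative model.

```python
# Stand-in sketch (assumptions throughout; not the paper's method): check cluster
# stability across an ensemble fitted to bootstrap resamples of a tiny dataset,
# then augment the data by sampling from the fitted generative model.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# A very small "observed" dataset with two latent modes.
X = np.vstack([rng.normal(0.0, 0.3, (15, 2)), rng.normal(2.0, 0.3, (15, 2))])

# Ensemble over bootstrap resamples: how often are two points placed in the
# same component? High agreement indicates stable cluster structure.
n_members = 10
coassign = np.zeros((len(X), len(X)))
for seed in range(n_members):
    boot = rng.integers(0, len(X), len(X))
    gm = GaussianMixture(n_components=2, random_state=seed).fit(X[boot])
    labels = gm.predict(X)
    coassign += labels[:, None] == labels[None, :]
coassign /= n_members
print("mean pairwise co-assignment agreement:", round(float(coassign.mean()), 2))

# Augmentation: draw synthetic points from a mixture fitted to the full dataset.
gm_full = GaussianMixture(n_components=2, random_state=0).fit(X)
X_new, _ = gm_full.sample(30)
X_augmented = np.vstack([X, X_new])
print("dataset size:", X.shape[0], "->", X_augmented.shape[0])
```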


2021 ◽  
Vol 27 (2) ◽  
pp. 146045822110082
Author(s):  
M Adela Grando ◽  
Vaishak Vellore ◽  
Benjamin J Duncan ◽  
David R Kaufman ◽  
Stephanie K Furniss ◽  
...  

Rapid ethnography and data mining approaches have been used individually to study clinical workflows, but have seldom been used together to overcome the limitations inherent in either type of method. For rapid ethnography, how reliable are the findings drawn from small samples? For data mining, how accurate are the discoveries drawn from automatic analysis of big data, when compared with observable data? This paper explores the combined use of rapid ethnography and process mining, aka ethno-mining, to study and compare metrics of a typical clinical documentation task, vital signs charting. The task was performed with different electronic health records (EHRs) used in three different hospital sites. The individual methods revealed substantial discrepancies in task duration between sites. Specifically, mean (SD) durations of 159.6 (78.55), 38.2 (34.9), and 431.3 (283.04) seconds were captured with rapid ethnography. When process mining was used, means of 518.6 (3,808), 345.5 (660.6), and 119.74 (210.3) seconds were found. When ethno-mining was applied instead, outliers could be identified, explained and removed. Without outliers, mean task duration was similar between sites (78.1 (66.7), 72.5 (78.5), and 71.7 (75) seconds). Results from this work suggest that integrating rapid ethnography and data mining into a single process may provide more meaningful results than a siloed approach when studying workflow.
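An illustrative sketch of the kind of cleanup step described above, using synthetic durations and a simple IQR rule as stand-ins (the study's actual event logs and outlier criteria are not reproduced here):

```python
# Illustrative sketch (assumed data and filter, not the study's pipeline):
# mean task durations per site before and after removing extreme outliers.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-site vital-signs charting durations (seconds), with a few
# extreme values standing in for mined events that ethnography would flag.
sites = {
    "site_A": np.append(rng.gamma(4, 20, 200), [3000, 4500]),
    "site_B": np.append(rng.gamma(4, 18, 200), [2500]),
    "site_C": np.append(rng.gamma(4, 19, 200), [3600, 5200]),
}

def iqr_filter(x):
    """Keep values within 1.5 * IQR of the quartiles."""
    q1, q3 = np.percentile(x, [25, 75])
    lo, hi = q1 - 1.5 * (q3 - q1), q3 + 1.5 * (q3 - q1)
    return x[(x >= lo) & (x <= hi)]

for name, durations in sites.items():
    kept = iqr_filter(durations)
    print(f"{name}: mean {durations.mean():6.1f}s (SD {durations.std():6.1f}) "
          f"-> without outliers {kept.mean():5.1f}s (SD {kept.std():5.1f})")
```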

