Forecasting Development of COVID-19 Epidemic in European Union Using Entropy-Randomized Approach

Author(s):  
Yuri Popkov ◽  
Yuri Dubnov ◽  
Alexey Popkov

The paper is devoted to forecasting the COVID-19 epidemic with a novel method of randomized machine learning. The method is based on estimating the probability distributions of model parameters and noises from real data. Entropy-optimal distributions correspond to the state of maximum uncertainty, which allows the resulting forecasts to be treated as forecasts of the most "negative" scenario of the process under study. The resulting estimates of parameters and noises, being probability distributions, must be sampled, yielding an ensemble of trajectories that is then analyzed by statistical methods. For the purposes of such an analysis, the mean and median trajectories over the ensemble are calculated, as well as the trajectory corresponding to the mean (over the distribution) values of the model parameters. The proposed approach is used to predict the total number of infected people with a three-parameter logistic growth model. The experiment is based on real COVID-19 epidemic data from several countries of the European Union. Its main goal is to demonstrate the entropy-randomized approach to predicting the epidemic process from real data near the peak. The significant uncertainty in the available real data is modeled by an additive noise within 30%, applied at both the training and prediction stages. The hyperparameters of the model are tuned on a testing dataset, with subsequent retraining of the model. It is shown that, on the same datasets, the proposed approach predicts the development of the epidemic more efficiently than the standard approach based on the least-squares method.
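The ensemble step described above can be sketched numerically. The following is a minimal illustration, not the authors' implementation: parameters of a three-parameter logistic curve are drawn from assumed (here uniform) distributions standing in for the entropy-optimal ones, additive noise within 30% is applied, and the mean and median trajectories are computed over the ensemble.

```python
import numpy as np

def logistic(t, K, r, t0):
    # Three-parameter logistic growth: carrying capacity K,
    # growth rate r, inflection time t0.
    return K / (1.0 + np.exp(-r * (t - t0)))

rng = np.random.default_rng(0)
t = np.arange(0, 100)

# Hypothetical parameter distributions standing in for the
# entropy-optimal distributions estimated in the paper.
K = rng.uniform(9e4, 1.1e5, size=1000)
r = rng.uniform(0.08, 0.12, size=1000)
t0 = rng.uniform(45, 55, size=1000)

# Ensemble of trajectories, one per sampled parameter triple,
# with additive noise within 30% of the signal.
traj = logistic(t[None, :], K[:, None], r[:, None], t0[:, None])
traj *= 1.0 + rng.uniform(-0.3, 0.3, size=traj.shape)

mean_traj = traj.mean(axis=0)            # mean over the ensemble
median_traj = np.median(traj, axis=0)    # median over the ensemble
# Trajectory at the distribution means of the parameters:
central_traj = logistic(t, K.mean(), r.mean(), t0.mean())
```

The spread of the ensemble around `median_traj` plays the role of the "most negative scenario" band discussed in the abstract.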

Author(s):  
Jovana Matic ◽  
Jasna Mastilovic ◽  
Ivana Cabarkapa ◽  
Anamarija Mandic

Mycotoxins are toxic secondary metabolites of fungi that contaminate a large variety of foods and have toxic effects on humans. The best protection against mycotoxins is to monitor their presence in food. This paper presents the screening results for mycotoxins in 76 samples of different groups of grain foods. The samples were analyzed for contamination with aflatoxins, ochratoxin A, zearalenone, fumonisins, and deoxynivalenol. Analyses were conducted using a competitive enzyme-linked immunosorbent assay (ELISA). None of the samples was contaminated with aflatoxins. The predominant mycotoxin was ochratoxin A, with a mean level of 4.84 ± 4.49 ppb in 19.7% of the examined samples. Zearalenone, fumonisins, and deoxynivalenol were found in 9.21, 14.5, and 3.9% of the samples, respectively. The mycotoxin content of the investigated samples was compared with the regulations of Serbia and those of the European Union.


2020 ◽  
Vol 14 (1) ◽  
Author(s):  
Fernanda Carini ◽  
Alberto Cargnelutti-Filho ◽  
Jéssica Maronez De Souza ◽  
Rafael Vieira Pezzini ◽  
Cassiane Ubessi ◽  
...  

The objective of this study was to fit a logistic model to the fresh and dry matter of leaves and shoots of four lettuce cultivars to describe their growth in summer. Cultivars Crocantela, Elisa, Rubinela, and Vera were evaluated in the summers of 2017 and 2018, in soil in a protected environment and in a soilless system. Starting seven days after transplantation, the fresh and dry leaf and shoot matter of 8 plants was weighed every 4 days. The model parameters were estimated in the software R by the least-squares method with the Gauss-Newton iterative process. We also estimated confidence intervals for the parameters, verified the assumptions of the models, calculated goodness-of-fit measures and critical points, and quantified the parametric and intrinsic nonlinearities. The logistic growth model fitted the fresh and dry leaf and shoot matter of cultivars Crocantela, Elisa, Rubinela, and Vera well and is indicated to describe the growth of lettuce.
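A comparable fit can be sketched in Python. The study itself used R with Gauss-Newton least squares, so scipy's `curve_fit` (a different least-squares algorithm, Levenberg-Marquardt by default) is only a stand-in, and the synthetic data below merely imitate growth measurements taken every 4 days:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, beta1, beta2, beta3):
    # beta1: asymptote, beta2: location, beta3: growth rate
    return beta1 / (1.0 + np.exp(beta2 - beta3 * t))

# Synthetic "fresh matter" observations every 4 days after transplant
# (invented values, not the lettuce data of the study).
t = np.arange(7, 47, 4, dtype=float)
rng = np.random.default_rng(1)
y = logistic(t, 250.0, 5.0, 0.25) + rng.normal(0, 5, t.size)

popt, pcov = curve_fit(logistic, t, y, p0=[200.0, 4.0, 0.2])
se = np.sqrt(np.diag(pcov))          # standard errors of the estimates
ci = np.column_stack([popt - 1.96 * se, popt + 1.96 * se])  # ~95% CIs
```

The `ci` array corresponds to the asymptotic confidence intervals the authors report for the model parameters.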


2018 ◽  
Vol 17 (4) ◽  
pp. 199-205
Author(s):  
Jadwiga Zaród

Following Poland's accession to the European Union, farmers were given new opportunities to make use of various forms of support from EU funds. The goal of this work is to show the utilization of EU funds by an agricultural farm and the optimization of its production. The task was accomplished by means of two multicriteria linear-dynamic optimization models. The first model accounted for the real production structure and EU subsidies; the subsidies were not included in the second model. The empirical material consisted of real data on an agricultural farm located in the commune of Nowogard (West Pomeranian Voivodship). The solutions indicated an over threefold increase in agricultural income, agricultural production, and the amount of organic matter returned to the soil for the farm accounting for EU grants.
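The structure of such an optimization can be illustrated with a deliberately small linear program. The gross margins, subsidy rate, and constraints below are invented for illustration and are unrelated to the models or data of the study:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical gross margins (EUR/ha) for three crops and a flat
# per-hectare EU area subsidy; all figures are illustrative.
margin = np.array([400.0, 550.0, 300.0])
subsidy = 190.0  # EUR/ha single area payment

# Maximize income => minimize its negative.
c = -(margin + subsidy)

# Constraints: total area <= 100 ha; each crop at most 60 ha
# (a crude stand-in for crop-rotation restrictions).
A_ub = np.vstack([np.ones(3), np.eye(3)])
b_ub = np.array([100.0, 60.0, 60.0, 60.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
income_with_subsidy = -res.fun
```

Dropping `subsidy` from `c` and re-solving gives the no-subsidy counterpart, mirroring the paper's comparison of its two models.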


2020 ◽  
Vol 18 (02) ◽  
pp. 2050014
Author(s):  
S. N. Fedotov

As a rule, receptor-ligand assay data are fitted by logistic functions (the 4PL model, the 5PL model, Feldman's model). Preparing initial estimates for the parameters of these functions is an important problem in processing receptor-ligand interaction data. This study presents a new mathematical approach for calculating initial estimates closer to the true parameter values. The main idea is to use a modified linear least-squares method to calculate the parameters of the 4PL model and Feldman's model. The convergence of the model parameters to the true values is verified on simulated data with different statistical scatter. The results of processing real data with the 4PL model and Feldman's model are also presented, and the parameter values calculated by the presented method are compared with those from a nonlinear method. The developed approach has demonstrated its efficiency in calculating the parameters of complex Feldman's models with up to 4 ligands and 4 sites.
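One common way to obtain such initial estimates for the 4PL model is to fix the asymptotes from the data range and linearize the remaining two parameters. The sketch below is a generic version of this idea, not the author's modified method:

```python
import numpy as np

def fourpl(x, a, b, c, d):
    # 4PL: response a at x -> 0, slope b, inflection c, asymptote d.
    return d + (a - d) / (1.0 + (x / c) ** b)

def initial_4pl_estimates(x, y):
    # Crude linearization: fix the asymptotes from the data range
    # (with a small pad so the logit stays finite), then fit a line
    # to log((a - y)/(y - d)) versus log(x) by linear least squares.
    pad = 0.05 * (y.max() - y.min())
    a, d = y.max() + pad, y.min() - pad
    z = np.log((a - y) / (y - d))
    b, intercept = np.polyfit(np.log(x), z, 1)
    c = np.exp(-intercept / b)
    return a, b, c, d

# Noise-free synthetic dose-response curve over four decades.
x = np.logspace(-2, 2, 25)
y_true = fourpl(x, 2.0, 1.2, 1.5, 0.1)
a0, b0, c0, d0 = initial_4pl_estimates(x, y_true)
```

The padding biases the slope and inflection estimates somewhat, but the values are close enough to serve as starting points for a nonlinear fit.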


2010 ◽  
Vol 3 (1) ◽  
pp. 35-44 ◽  
Author(s):  
T. Whitaker ◽  
A. Slate ◽  
J. Adams ◽  
T. Birmingham ◽  
F. Giesbrecht

The European Commission (EC) aflatoxin sampling plan for ready-to-eat tree nuts such as almonds requires that each of the three 10 kg laboratory samples test less than 2 ng/g aflatoxin B1 (AFB1) and 4 ng/g total aflatoxins (AFT) for the lot to be accepted. Exporters have observed that the AFB1/AFT ratio varied greatly from sample to sample, and the ratio appeared to average more than 50%. Because of the concern that the dual limits of the EC aflatoxin sampling plans may reject more lots than similar plans using a single limit based on total aflatoxins, studies were designed with the objectives to (a) measure the distribution of AFB1/AFT ratio values using sample test results from testing U.S. almond lots exported to the European Union; (b) use Monte Carlo methods to develop a model to compute the effects of dual limits based on AFB1 and AFT on the probability of accepting almond lots; and (c) compare the probability of accepting almond lots under the current Codex aflatoxin sampling plans for tree nuts with single limits versus dual limits. The mean and median among 3,257 AFB1/AFT ratio values were 87.6% and 91.9%, respectively, indicating that the distribution of ratio values was negatively skewed; only 31% of the ratio values were less than the mean of 87.6%. Codex aflatoxin sampling plans for tree nuts using a single limit based on total aflatoxins had the highest probability of accepting lots at all lot concentrations when compared with plans using dual limits. As the AFB1 limit decreased from 90% to 50% of the total limit, the probability of rejecting lots at all concentrations increased relative to the Codex plans with a single limit based on total aflatoxins.
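The Monte Carlo comparison of single versus dual limits can be illustrated as follows. The sampling distributions used here (lognormal sample test results, a Beta-distributed AFB1/AFT ratio negatively skewed toward the observed mean of about 0.88) are assumptions for illustration, not the fitted distributions of the study:

```python
import numpy as np

rng = np.random.default_rng(2)

def accept_prob(lot_aft, dual=True, n_lots=20_000):
    # For each simulated lot, draw three 10 kg sample test results
    # around the lot concentration (lognormal sampling variability
    # is an assumption, not the authors' fitted distribution).
    aft = lot_aft * rng.lognormal(0.0, 0.5, size=(n_lots, 3))
    # AFB1 share of total aflatoxins, negatively skewed near 0.88.
    ratio = rng.beta(7, 1, size=(n_lots, 3))
    afb1 = aft * ratio
    ok = aft < 4.0            # total-aflatoxin limit, ng/g
    if dual:
        ok &= afb1 < 2.0      # additional AFB1 limit, ng/g
    return ok.all(axis=1).mean()

p_single = accept_prob(3.0, dual=False)
p_dual = accept_prob(3.0, dual=True)
```

With a ratio this close to 1, the AFB1 limit binds for most samples, so `p_dual` falls well below `p_single`, which is the effect the study quantifies.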


2019 ◽  
Vol 83 (1) ◽  
pp. 101-107 ◽  
Author(s):  
ALI HESHMATI ◽  
FERESHTEH MEHRI ◽  
JAVAD KARAMI-MOMTAZ ◽  
AMIN MOUSAVI KHANEGHAH

The concentrations of cadmium (Cd) and lead (Pb) in vegetable (potatoes, onions, tomatoes, lettuce, leeks, and carrots) and cereal (wheat and rice) samples collected in Iran were determined with a graphite furnace atomic absorption spectrophotometer. In addition, we assessed the health risks of exposure to Cd and Pb through vegetable and cereal consumption by computing the estimated daily intake, the target hazard quotient (THQ), the total THQ, and the margin of exposure. The mean concentrations of Pb in potato, onion, tomato, lettuce, leek, carrot, wheat, and rice samples were 0.029 ± 0.011, 0.016 ± 0.012, 0.007 ± 0.005, 0.022 ± 0.020, 0.040 ± 0.048, 0.029 ± 0.025, 0.123 ± 0.120, and 0.097 ± 0.059 mg kg−1 wet weight, respectively, all below the maximum allowable concentrations set by the European Union. The mean concentrations of Cd in the same samples were 0.022 ± 0.013, 0.011 ± 0.009, 0.003 ± 0.003, 0.007 ± 0.005, 0.015 ± 0.024, 0.013 ± 0.011, 0.046 ± 0.043, and 0.049 ± 0.04 mg kg−1 wet weight, respectively, all below the permissible levels established by the European Union. The corresponding estimated daily intake values for Cd were acceptable and lower than the provisional tolerable daily intake. The THQ and total THQ values for Cd through consumption of all vegetables and cereals were lower than 1. The margin of exposure values for Pb were >1, showing no significant human health risk for either potentially toxic element. The findings indicate no risk associated with exposure to Pb and Cd through the intake of the selected vegetables and cereals in western Iran.
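The risk metrics named in the abstract follow standard formulas. In the sketch below the body weight, intake rate, and reference dose are illustrative assumptions rather than the paper's survey figures:

```python
# Standard exposure-assessment formulas; all numeric inputs here
# (70 kg body weight, 110 g/day rice intake, Cd reference dose)
# are illustrative assumptions, not the study's survey data.
def estimated_daily_intake(conc_mg_kg, intake_kg_day, body_weight_kg):
    # EDI (mg/kg bw/day) = concentration x daily intake / body weight
    return conc_mg_kg * intake_kg_day / body_weight_kg

def target_hazard_quotient(edi, rfd):
    # THQ < 1 is conventionally read as no appreciable risk.
    return edi / rfd

# Example using the reported mean Cd level in rice (0.049 mg/kg).
edi_cd_rice = estimated_daily_intake(0.049, 0.110, 70.0)
thq_cd_rice = target_hazard_quotient(edi_cd_rice, rfd=1e-3)
```

Summing THQ values over all sampled foods gives the total THQ the authors report.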


Symmetry ◽  
2020 ◽  
Vol 12 (9) ◽  
pp. 1462
Author(s):  
Mansour Shrahili ◽  
Naif Alotaibi

A new family of probability distributions is defined and applied to modeling symmetric real-life datasets. Some new bivariate type G families using the Farlie–Gumbel–Morgenstern copula, a modified Farlie–Gumbel–Morgenstern copula, the Clayton copula, and Renyi's entropy copula are derived, and some of their statistical properties are presented and studied. Next, the maximum likelihood estimation method is used. A graphical assessment based on biases and mean squared errors shows that the maximum likelihood method performs well and can be used for estimating the model parameters. Finally, two symmetric real-life applications illustrate the importance and flexibility of the new family. The symmetry of the real data is verified nonparametrically using kernel density estimation.
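The Farlie–Gumbel–Morgenstern copula underlying one of the bivariate constructions is simple enough to evaluate and sample directly. A minimal sketch, unrelated to the paper's specific type G families:

```python
import numpy as np

def fgm_copula(u, v, theta):
    # Farlie-Gumbel-Morgenstern copula, valid for |theta| <= 1.
    return u * v * (1.0 + theta * (1.0 - u) * (1.0 - v))

def sample_fgm(theta, n, rng):
    # Conditional-inverse sampling: draw u, then invert the
    # conditional CDF C(v | u), which is quadratic in v.
    u = rng.uniform(size=n)
    t = rng.uniform(size=n)
    a = theta * (1.0 - 2.0 * u)
    v = 2.0 * t / (1.0 + a + np.sqrt((1.0 + a) ** 2 - 4.0 * a * t))
    return u, v

rng = np.random.default_rng(3)
u, v = sample_fgm(0.9, 200_000, rng)
# For the FGM family, Spearman's rho is theta/3 (0.3 at theta = 0.9),
# which equals the Pearson correlation of (u, v) on uniform margins.
```

The FGM family only reaches mild dependence (Kendall's tau at most 2/9), which is why stronger copulas such as Clayton's are also considered.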


2016 ◽  
Vol 8 (2) ◽  
pp. 98-108
Author(s):  
Renáta Krajčírová ◽  
Alexandra Ferenczi Vaňová ◽  
Michal Munk

The article considers the relationship between agricultural land acreage and the amount of land tax in a selected sample of agricultural primary-production companies in the Slovak Republic in the period from 2010 to 2014. The data come from the departmental database of enterprises with primary agricultural production, drawn from the factsheets of the Ministry of Agriculture and Rural Development of the Slovak Republic, and are analyzed by selected statistical methods. In particular, the article presents agricultural land and the land tax from the accounting and tax perspectives of the Slovak Republic and the European Union. A slightly declining trend in the mean acreage of agricultural land was recorded for the evaluated group of agricultural enterprises within the reported period, while the mean land tax per hectare of agricultural land showed an increasing trend. The survey of the significance of differences in the dependent variables across combinations of the factors year and enterprise indicates that the acreage of agricultural land and the volume of the land tax are statistically dependent at the level of year, but not at the level of the combination of the factors year and enterprise within the surveyed period.


2005 ◽  
Vol 360 (1457) ◽  
pp. 1075-1091 ◽  
Author(s):  
L.M Harrison ◽  
O David ◽  
K.J Friston

Cortical activity is the product of interactions among neuronal populations. Macroscopic electrophysiological phenomena are generated by these interactions. In principle, the mechanisms of these interactions afford constraints on biologically plausible models of electrophysiological responses. In other words, the macroscopic features of cortical activity can be modelled in terms of the microscopic behaviour of neurons. An evoked response potential (ERP) is the mean electrical potential measured from an electrode on the scalp, in response to some event. The purpose of this paper is to outline a population density approach to modelling ERPs. We propose a biologically plausible model of neuronal activity that enables the estimation of physiologically meaningful parameters from electrophysiological data. The model encompasses four basic characteristics of neuronal activity and organization: (i) neurons are dynamic units, (ii) driven by stochastic forces, (iii) organized into populations with similar biophysical properties and response characteristics and (iv) multiple populations interact to form functional networks. This leads to a formulation of population dynamics in terms of the Fokker–Planck equation. The solution of this equation is the temporal evolution of a probability density over state-space, representing the distribution of an ensemble of trajectories. Each trajectory corresponds to the changing state of a neuron. Measurements can be modelled by taking expectations over this density, e.g. mean membrane potential, firing rate or energy consumption per neuron. The key motivation behind our approach is that ERPs represent an average response over many neurons. This means it is sufficient to model the probability density over neurons, because this implicitly models their average state. Although the dynamics of each neuron can be highly stochastic, the dynamics of the density is not. 
This means we can use Bayesian inference and estimation tools that have already been established for deterministic systems. The potential importance of modelling density dynamics (as opposed to more conventional neural mass models) is that they include interactions among the moments of neuronal states (e.g. the mean depolarization may depend on the variance of synaptic currents through nonlinear mechanisms). Here, we formulate a population model, based on biologically informed model-neurons with spike-rate adaptation and synaptic dynamics. Neuronal sub-populations are coupled to form an observation model, with the aim of estimating and making inferences about coupling among sub-populations using real data. We approximate the time-dependent solution of the system using a bi-orthogonal set and first-order perturbation expansion. For didactic purposes, the model is developed first in the context of deterministic input, and then extended to include stochastic effects. The approach is demonstrated using synthetic data, where model parameters are identified using a Bayesian estimation scheme we have described previously.
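The core density idea, namely that the ensemble average is governed by the evolving probability density, can be illustrated with the simplest linear case: an Ornstein-Uhlenbeck process, whose Fokker-Planck solution stays Gaussian with closed-form mean and variance. The parameter values below are illustrative only, not the biophysically informed model of the paper:

```python
import numpy as np

# Toy "leaky" neuron: dx = -lam*(x - mu) dt + sigma dW.
# The Fokker-Planck solution remains Gaussian, so the density's
# mean and variance have closed forms; an ensemble of stochastic
# trajectories should match them.
lam, mu, sigma = 1.0, -65.0, 3.0   # illustrative, not fitted values
dt, steps, n = 1e-3, 2000, 20_000

rng = np.random.default_rng(4)
x = np.full(n, -70.0)               # all neurons start at -70 "mV"
for _ in range(steps):
    x += -lam * (x - mu) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)

t = steps * dt
mean_density = mu + (-70.0 - mu) * np.exp(-lam * t)              # analytic mean
var_density = sigma**2 / (2 * lam) * (1 - np.exp(-2 * lam * t))  # analytic variance
```

Each simulated trajectory is highly stochastic, yet the ensemble mean and variance track the deterministic density dynamics, which is what licenses the use of estimation tools built for deterministic systems.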


2021 ◽  
Vol 62 ◽  
pp. 54-65
Author(s):  
A.D. Fofack ◽  
S.D. Temkeng
The aim of this paper is to assess and compare the link between labor productivity and compensation in four industries (air transport, electronics, finance, and telecommunications) of twenty-five member states of the European Union (EU) from 2000 to 2014. The long-run and short-run dynamics of productivity and compensation are analyzed using the pooled mean group (PMG), mean group (MG), and dynamic fixed effects (DFE) estimators. The results confirm the existence of a gap between productivity and compensation in each of these industries, as reported in previous studies. However, despite that gap, the link between the two variables is not broken: productivity and compensation are not only linked in the long run, but they also return to their long-run equilibrium after every short-run disturbance. The econometric analysis also reveals that the relation between productivity and compensation does not follow a significantly different pattern from one industry to another. These findings, robust to alternative models, estimation techniques, and across industries, suggest that other cross-sectoral factors prevent productivity gains from being fully reflected in paychecks.

