Framework for enhancing the estimation of model parameters for data with a high level of uncertainty

Author(s):  
Gustavo B Libotte ◽  
Lucas Anjos ◽  
Regina Almeida ◽  
Sandra Malta ◽  
Renato Silva

Abstract Reliable data is essential to obtain adequate simulations for forecasting the dynamics of epidemics. In this context, several political, economic, and social factors may cause inconsistencies in the reported data, which affect the capacity for realistic simulations and predictions. In the case of COVID-19, for example, such uncertainties are mainly caused by large-scale underreporting of cases due to reduced testing capacity in some locations. In order to mitigate the effects of noise in the data used to estimate model parameters, we propose strategies capable of improving the ability to predict the spread of the disease. Using a compartmental model in a COVID-19 case study, we show that regularizing the data by means of Gaussian process regression can reduce the variability of successive forecasts, improving predictive ability. We also present the advantages of adopting compartmental model parameters that vary over time, as opposed to the usual approach with constant values.

2020 ◽  
Author(s):  
Gustavo B. Libotte ◽  
Lucas dos Anjos ◽  
Regina C. Almeida ◽  
Sandra M. C. Malta ◽  
Renato S. Silva

Abstract Research on predictions related to the spread of the novel coronavirus is crucial for decision-making aimed at mitigating the disease. Computational simulations are often used as a basis for forecasting the dynamics of epidemics and, for this purpose, compartmental models have been widely used to assess the situation resulting from the spread of the disease in the population. Reliable data is essential to obtain adequate simulations. However, several political, economic, and social factors have caused inconsistencies in the reported data, which affect the capacity for realistic simulations and predictions. Such uncertainties are mainly caused by large-scale underreporting of cases due to reduced testing capacity in some locations. In order to mitigate the effects of noise in the data used to estimate the parameters of compartmental models, we propose strategies capable of improving the ability to predict the spread of the disease. We show that the regularization of data by means of Gaussian process regression can reduce the variability of successive forecasts, thus improving predictive ability. We also present the advantages of adopting compartmental model parameters that vary over time, as opposed to the usual approach with constant values.


Author(s):  
Georgi Derluguian

The author develops ideas about the origin of social inequality during the evolution of human societies and reflects on the possibilities of its overcoming. What makes human beings different from other primates is a high level of egalitarianism and altruism, which contributed to more successful adaptability of human collectives at early stages of the development of society. The transition to agriculture, coupled with substantially increasing population density, was marked by the emergence and institutionalisation of social inequality based on the inequality of tangible assets and symbolic wealth. Then, new institutions of warfare came into existence, and they were aimed at conquering and enslaving the neighbours engaged in productive labour. While exercising control over nature, people also established and strengthened their power over other people. Chiefdom as a new type of polity came into being. Elementary forms of power (political, economic and ideological) served as a basis for the formation of early states. The societies in those states were characterised by social inequality and cruelties, including slavery, mass violence and numerous victims. Nowadays, the old elementary forms of power that are inherent in personalistic chiefdom are still functioning along with modern institutions of public and private bureaucracy. This constitutes the key contradiction of our time, which is the juxtaposition of individual despotic power and public infrastructural power. However, society is evolving towards an ever more efficient combination of social initiatives with the sustainability and viability of large-scale organisations.




2021 ◽  
pp. 1-40
Author(s):  
Cecilia Romaro ◽  
Fernando Araujo Najman ◽  
William W. Lytton ◽  
Antonio C. Roque ◽  
Salvador Dura-Bernal

Abstract The Potjans-Diesmann cortical microcircuit model is a widely used model originally implemented in NEST. Here, we reimplemented the model using NetPyNE, a high-level Python interface to the NEURON simulator, and reproduced the findings of the original publication. We also implemented a method for scaling the network size that preserves first- and second-order statistics, building on existing work on network theory. Our new implementation enabled the use of more detailed neuron models with multicompartmental morphologies and multiple biophysically realistic ion channels. This opens the model to new research, including the study of dendritic processing, the influence of individual channel parameters, the relation to local field potentials, and other multiscale interactions. The scaling method we used provides flexibility to increase or decrease the network size as needed when running these CPU-intensive detailed simulations. Finally, NetPyNE facilitates modifying or extending the model using its declarative language; optimizing model parameters; running efficient, large-scale parallelized simulations; and analyzing the model through built-in methods, including local field potential calculation and information flow measures.
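The statistics-preserving scaling can be illustrated with a generic back-of-the-envelope sketch. This is the standard mean/variance compensation argument (reduce the in-degree K by a factor k, rescale weights by 1/√k to keep the input variance fixed, and add a DC drive to restore the mean), not necessarily the exact procedure used in the paper:

```python
import math

def downscale(K, J, nu, k):
    """Scale in-degree K by factor k (< 1) while preserving the mean and
    variance of the summed synaptic input (presynaptic rate nu, weight J).

    Returns (K_scaled, J_scaled, dc), where dc is the constant drive that
    restores the original mean input."""
    K_s = K * k
    J_s = J / math.sqrt(k)           # keeps the variance K * J**2 * nu unchanged
    mean_orig = K * J * nu
    mean_scaled = K_s * J_s * nu     # = sqrt(k) * mean_orig
    dc = mean_orig - mean_scaled     # compensating DC drive
    return K_s, J_s, dc
```

For example, reducing 1000 inputs of weight 0.1 at 8 Hz to one tenth of the connections yields a weight of about 0.316 plus a DC term, leaving the first two moments of the input intact.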


2020 ◽  
Author(s):  
Abraham Varghese ◽  
Shajidmon Kolamban ◽  
Vinu Sherimon ◽  
Eduardo M. Lacap ◽  
Saad Salman Ahmed ◽  
...  

Abstract The present novel coronavirus (COVID-19) infection has engendered a worldwide crisis on an enormous scale within a very short period. The effective solution for this pandemic is to recognize the nature and spread of the disease so that appropriate policies can be framed. Mathematical modelling is always at the forefront to understand and provide an adequate description of the transmission of any disease. In this research work, we have formulated a deterministic compartmental model (SEAMHCRD) including various stages of infection, such as Mild, Moderate, Severe and Critical, to study the spread of COVID-19, and estimated the model parameters by fitting the model to the reported data of the ongoing pandemic in Oman. The steady state, stability and final pandemic size of the model have been established mathematically. The various transmission and transition parameters are estimated for the period from June 8th to July 30th, 2020. Based on the currently estimated parameters, the pandemic size is also predicted for another 100 days. Sensitivity analysis is performed to identify the key model parameters, and the corresponding basic reproduction number has been computed using the Next Generation Matrix (NGM) method. As the value of the basic reproduction number (R0) is 0.9761 during the period from June 8th to July 30th, 2020, this is an indication for policy makers to adopt appropriate remedial measures, such as social distancing and contact tracing, to reduce the value of R0 and control the spread of the disease.


Author(s):  
Gustavo B. Libotte ◽  
Lucas dos Anjos ◽  
Regina C. C. Almeida ◽  
Sandra M. C. Malta ◽  
Renato S. Silva

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Abraham Varghese ◽  
Shajidmon Kolamban ◽  
Vinu Sherimon ◽  
Eduardo M. Lacap ◽  
Saad Salman Ahmed ◽  
...  

Abstract The present novel coronavirus (COVID-19) infection has engendered a worldwide crisis on an enormous scale within a very short period. The effective solution for this pandemic is to recognize the nature and spread of the disease so that appropriate policies can be framed. Mathematical modelling is always at the forefront to understand and provide an adequate description of the transmission of any disease. In this research work, we have formulated a deterministic compartmental model (SEAMHCRD) including various stages of infection, such as Mild, Moderate, Severe and Critical, to study the spread of COVID-19, and estimated the model parameters by fitting the model to the reported data of the ongoing pandemic in Oman. The steady state, stability and final pandemic size of the model have been established mathematically. The various transmission and transition parameters are estimated for the period from June 4th to July 30th, 2020. Based on the currently estimated parameters, the pandemic size is also predicted for another 100 days. Sensitivity analysis is performed to identify the key model parameters, and the parameter gamma, associated with contact with symptomatic, moderately infected individuals, is found to be the most significant in spreading the disease. Accordingly, the corresponding basic reproduction number has been computed using the Next Generation Matrix (NGM) method. As the value of the basic reproduction number (R0) is 0.9761 during the period from June 4th to July 30th, 2020, the disease-free equilibrium is stable. Isolation and contact tracing of infected individuals are recommended to control the spread of the disease.
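The next-generation matrix computation mentioned in this abstract can be sketched on a reduced SEIR model rather than the paper's full SEAMHCRD model. The rates below are illustrative assumptions (chosen so R0 falls just below 1), not the fitted Oman parameters:

```python
import numpy as np

# Next-generation matrix for a reduced SEIR model. Infected subsystem
# states: [E, I]. beta: transmission, sigma: incubation, gamma: recovery.
beta, sigma, gamma = 0.12, 0.25, 0.123

F = np.array([[0.0, beta],       # new infections enter E via contact with I
              [0.0, 0.0]])
V = np.array([[sigma, 0.0],      # outflow from E
              [-sigma, gamma]])  # inflow to I from E, outflow by recovery

ngm = F @ np.linalg.inv(V)       # next-generation matrix K = F V^{-1}
R0 = max(abs(np.linalg.eigvals(ngm)))
# For SEIR the spectral radius reduces to beta / gamma; with these rates
# R0 < 1, consistent with a shrinking outbreak.
```

The same recipe (assemble F and V over the infected compartments, take the spectral radius of F V⁻¹) carries over to larger compartmental models such as SEAMHCRD.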


2019 ◽  
Vol 12 (9) ◽  
pp. 5161-5181 ◽  
Author(s):  
Tongshu Zheng ◽  
Michael H. Bergin ◽  
Ronak Sutaria ◽  
Sachchida N. Tripathi ◽  
Robert Caldow ◽  
...  

Abstract. Wireless low-cost particulate matter sensor networks (WLPMSNs) are transforming air quality monitoring by providing particulate matter (PM) information at finer spatial and temporal resolutions. However, large-scale WLPMSN calibration and maintenance remain a challenge. The manual labor involved in initial calibration by collocation and routine recalibration is intensive. The transferability of the calibration models determined from initial collocation to new deployment sites is questionable, as calibration factors typically vary with the urban heterogeneity of operating conditions and aerosol optical properties. Furthermore, low-cost sensors can drift or degrade over time. This study presents a simultaneous Gaussian process regression (GPR) and simple linear regression pipeline to calibrate and monitor dense WLPMSNs on the fly by leveraging all available reference monitors across an area without resorting to pre-deployment collocation calibration. We evaluated our method for Delhi, where the PM2.5 measurements of all 22 regulatory reference and 10 low-cost nodes were available for 59 d from 1 January to 31 March 2018 (PM2.5 averaged 138±31 µg m−3 among 22 reference stations), using a leave-one-out cross-validation (CV) over the 22 reference nodes. We showed that our approach can achieve an overall 30 % prediction error (RMSE: 33 µg m−3) at a 24 h scale, and it is robust as it is underscored by the small variability in the GPR model parameters and in the model-produced calibration factors for the low-cost nodes among the 22-fold CV. Of the 22 reference stations, high-quality predictions were observed for those stations whose PM2.5 means were close to the Delhi-wide mean (i.e., 138±31 µg m−3), and relatively poor predictions were observed for those nodes whose means differed substantially from the Delhi-wide mean (particularly on the lower end). 
We also observed washed-out local variability in PM2.5 across the 10 low-cost sites after calibration using our approach, which stands in marked contrast to the true wide variability across the reference sites. These observations revealed that our proposed technique (and more generally the geostatistical technique) requires high spatial homogeneity in the pollutant concentrations to be fully effective. We further demonstrated that our algorithm performance is insensitive to training window size as the mean prediction error rate and the standard error of the mean (SEM) for the 22 reference stations remained consistent at ∼30 % and ∼3 %–4 %, respectively, when an increment of 2 d of data was included in the model training. The markedly low requirement of our algorithm for training data enables the models to always be nearly the most updated in the field, thus realizing the algorithm's full potential for dynamically surveilling large-scale WLPMSNs by detecting malfunctioning low-cost nodes and tracking the drift with little latency. Our algorithm presented similarly stable 26 %–34 % mean prediction errors and ∼3 %–7 % SEMs over the sampling period when pre-trained on the current week's data and predicting 1 week ahead, and therefore it is suitable for online calibration. Simulations conducted using our algorithm suggest that in addition to dynamic calibration, the algorithm can also be adapted for automated monitoring of large-scale WLPMSNs. In these simulations, the algorithm was able to differentiate malfunctioning low-cost nodes (due to either hardware failure or under the heavy influence of local sources) within a network by identifying aberrant model-generated calibration factors (i.e., slopes close to zero and intercepts close to the Delhi-wide mean of true PM2.5). The algorithm was also able to track the drift of low-cost nodes accurately within 4 % error for all the simulation scenarios. 
The simulation results showed that ∼20 reference stations are optimum for our solution in Delhi and confirmed that low-cost nodes can extend the spatial precision of a network by decreasing the extent of pure interpolation among only reference stations. Our solution has substantial implications in reducing the amount of manual labor for the calibration and surveillance of extensive WLPMSNs, improving the spatial comprehensiveness of PM evaluation, and enhancing the accuracy of WLPMSNs.
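The per-node linear-regression stage of the pipeline, and the slope-based malfunction check described above, can be sketched as follows. The GPR step that interpolates the reference field to each node's location is omitted, and all data below are synthetic assumptions:

```python
import numpy as np

def calibration_factors(reference, low_cost):
    """Fit low_cost ≈ slope * reference + intercept and return (slope, intercept).
    A slope near zero (readings uncorrelated with the reference field) flags a
    malfunctioning node, as in the simulations described above."""
    slope, intercept = np.polyfit(reference, low_cost, 1)
    return slope, intercept

rng = np.random.default_rng(1)
ref = rng.normal(138.0, 31.0, size=200)               # Delhi-like PM2.5 field
healthy = 0.8 * ref + 20.0 + rng.normal(0, 5, 200)    # biased but responsive node
broken = rng.normal(138.0, 5.0, size=200)             # stuck near the global mean

s_h, b_h = calibration_factors(ref, healthy)
s_b, b_b = calibration_factors(ref, broken)
calibrated = (healthy - b_h) / s_h                    # invert the fitted line
```

Here the healthy node recovers a slope near 0.8 and can be corrected by inverting the fit, while the broken node's near-zero slope (with an intercept near the network-wide mean) matches the aberrant-node signature the abstract describes.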


2015 ◽  
pp. 144-158
Author(s):  
Philippe W. Zgheib

Bribes are mainly directed at government officials, although they can also be directed at the employees and managers of business firms. However, bribery appears to be a self-defined crime. Bribery of low-level public sector employees is a white-collar crime. However, bribery also exists in high-level decision-making, whether in political, economic, or corporate situations. These are large-scale bribes, consisting of millions or billions of dollars, paid out to executives and public officials in return for construction contracts, oil contracts, telecommunication contracts, and so on. Although punishments exist and are implemented, it is up to the individual alone to make the final decision and choose between a personal moral value system and personal welfare, as opposed to serving the public welfare. This chapter explores bribery.


2019 ◽  
Author(s):  
Tongshu Zheng ◽  
Michael H. Bergin ◽  
Ronak Sutaria ◽  
Sachchida N. Tripathi ◽  
Robert Caldow ◽  
...  

Abstract. Wireless low-cost particulate matter sensor networks (WLPMSNs) are transforming air quality monitoring by providing PM information at finer spatial and temporal resolutions; however, large-scale WLPMSN calibration and maintenance remain a challenge. The manual labor involved in initial calibration by collocation and routine recalibration is intensive; the transferability of the calibration models determined from initial collocation to new deployment sites is questionable, as calibration factors typically vary with the urban heterogeneity of operating conditions and aerosol optical properties; and low-cost sensors can drift or degrade over time. This study presents a simultaneous Gaussian process regression (GPR) and simple linear regression pipeline to calibrate and monitor dense WLPMSNs on the fly by leveraging all available reference monitors across an area without resorting to pre-deployment collocation calibration. We evaluated our method for Delhi, where the PM2.5 measurements of all 22 regulatory reference and 10 low-cost nodes were available on 59 valid days from 1 January 2018 to 31 March 2018 (PM2.5 averaged 138 ± 31 μg m−3 among 22 reference stations), using a leave-one-out cross-validation (CV) over the 22 reference nodes. We showed that our approach can achieve an overall 30 % prediction error (RMSE: 33 μg m−3) at a 24 h scale and is robust, as underscored by the small variability in the GPR model parameters and in the model-produced calibration factors for the low-cost nodes among the 22-fold CV. We revealed that the accuracy of our calibrations depends on the degree of homogeneity of PM concentrations and decreases with increasing local source contributions. 
As a by-product of dynamic calibration, our algorithm can be adapted for automated large-scale WLPMSN monitoring. Simulations proved its capability of differentiating malfunctioning or singular low-cost nodes within a network via model-generated calibration factors, with the aberrant nodes having slopes close to 0 and intercepts close to the global mean of true PM2.5, and of tracking the drift of low-cost nodes accurately, within 4 % error, for all the simulation scenarios. The simulation results showed that ~20 reference stations are optimum for our solution in Delhi and confirmed that low-cost nodes can extend the spatial precision of a network by decreasing the extent of pure interpolation among only reference stations. Our solution has substantial implications in reducing the amount of manual labor for the calibration and surveillance of extensive WLPMSNs, improving the spatial comprehensiveness of PM evaluation, and enhancing the accuracy of WLPMSNs.

