A data-driven approach to measuring epidemiological susceptibility risk around the world

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Alessandro Bitetto ◽  
Paola Cerchiello ◽  
Charilaos Mertzanis

Abstract: Epidemic outbreaks are extreme events that are becoming more frequent and severe, and they carry large social and real costs. It is therefore important to assess whether countries are prepared to manage epidemiological risks. We use a fully data-driven approach to measure epidemiological susceptibility risk at the country level using time-varying information. We apply both principal component analysis (PCA) and a dynamic factor model (DFM) to deal with the presence of strong cross-section dependence in the data. We conduct extensive in-sample model evaluations of 168 countries covering 17 indicators for the 2010–2019 period. The results show that the robust PCA method accounts for about 90% of total variability, whilst the DFM accounts for about 76%. Our index could therefore provide the basis for developing assessments of epidemiological risk contagion. It could also be used by organizations to assess the likely real consequences of epidemics, with useful managerial implications.
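The share-of-variability figures above come from an eigendecomposition of the indicator panel. A minimal sketch in Python, using synthetic data in place of the paper's 168-country, 17-indicator panel and scikit-learn's standard PCA rather than the robust variant the authors use:

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in for the panel: 168 countries x 17 indicators,
# generated from a few latent drivers (the real indicators are not
# reproduced here).
rng = np.random.default_rng(0)
latent = rng.normal(size=(168, 3))
loadings = rng.normal(size=(3, 17))
X = latent @ loadings + 0.1 * rng.normal(size=(168, 17))

pca = PCA(n_components=5).fit(X)

# Cumulative share of total variability captured by the leading
# components, the quantity compared across PCA and DFM in the paper.
explained = pca.explained_variance_ratio_.cumsum()
print(round(100 * explained[2], 1))  # % after three components
```

The country-level index is then typically a weighted combination of the leading component scores.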

2018 ◽  
Vol 90 (2) ◽  
pp. 435-451 ◽  
Author(s):  
Xu Kang ◽  
Dechang Pi

Purpose: The purpose of this paper is to detect the occurrence of anomalies and faults in a spacecraft, investigate the tendencies of telemetry parameters, and evaluate the operating state of the spacecraft in order to monitor its health.
Design/methodology/approach: This paper proposes a data-driven method (empirical mode decomposition-sample entropy-principal component analysis [EMD-SE-PCA]) for monitoring the health of a spacecraft, where EMD is used to decompose telemetry data and obtain the trend items, SE is utilised to calculate the sample entropies of the trend items and extract the characteristic data, and the squared prediction error and statistic contribution rate are analysed using PCA.
Findings: Experimental results indicate that the EMD-SE-PCA method can detect characteristic parameters that behave abnormally before an anomaly or fault occurs, can provide early warning before the anomaly or fault appears, and summarises the contribution of each parameter more accurately than other fault-detection methods.
Practical implications: The proposed EMD-SE-PCA method has a high level of accuracy and efficiency. It can be used to monitor the health of a spacecraft and to detect and avoid anomalies and faults in a timely and efficient manner. The method could be further applied to monitoring the health of other equipment (e.g. the attitude control and orbit control systems) in spacecraft and satellites.
Originality/value: The paper provides a data-driven method, EMD-SE-PCA, for practical health monitoring, which can discover the occurrence of anomalies or faults in a timely and efficient manner and is very useful for spacecraft health diagnosis.
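The EMD-SE-PCA chain can be sketched with standard tools. In this illustration a moving-average filter stands in for the EMD trend extraction (the paper uses true empirical mode decomposition, available e.g. via the PyEMD package), SampEn is implemented directly, and the PCA squared prediction error (SPE) flags the drifting channel; the telemetry is synthetic:

```python
import numpy as np
from sklearn.decomposition import PCA

def sample_entropy(x, m=2, r=None):
    """Plain sample entropy (SampEn) of a 1-D series."""
    x = np.asarray(x, float)
    r = 0.2 * x.std() if r is None else r
    def matches(mm):
        tpl = np.lib.stride_tricks.sliding_window_view(x, mm)
        c = 0
        for i in range(len(tpl) - 1):
            d = np.max(np.abs(tpl[i + 1:] - tpl[i]), axis=1)  # Chebyshev
            c += int(np.sum(d <= r))
        return c
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

# Synthetic telemetry: five channels sharing a periodic trend, with one
# channel drifting after t = 300 (the "fault").
rng = np.random.default_rng(1)
t = np.arange(500)
channels = [np.sin(0.05 * t) + 0.1 * rng.normal(size=500) for _ in range(5)]
channels[0] = channels[0] + 0.004 * np.maximum(t - 300, 0)

# Stand-in for the EMD step: a moving-average trend item per channel.
def trend(x, w=50):
    return np.convolve(x, np.ones(w) / w, mode="valid")

trends = np.column_stack([trend(c) for c in channels])
entropies = [sample_entropy(col[::5]) for col in trends.T]  # SE per trend

# PCA is fitted on nominal data only; the squared prediction error (SPE)
# then flags samples that leave the nominal subspace.
pca = PCA(n_components=2).fit(trends[:150])
resid = trends - pca.inverse_transform(pca.transform(trends))
spe = (resid ** 2).sum(axis=1)   # rises sharply once the drift sets in
```

An alarm threshold on the SPE (often a chi-squared approximation) then yields the early-warning time the paper reports.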


Author(s):  
D. Y. Nokhrin ◽  
M. A. Derkho ◽  
L. G. Mukhamedyarova ◽  
A. V. Zhivetina

A qualitative and quantitative analysis of the hydrochemical parameters of water is given in order to identify the factors that determine their spatial and temporal changes in a lake-type reservoir. Water samples were taken in 2019 and 2020 from the average level in spring (April), summer (July) and autumn (September), in the first week of the month, in accordance with the requirements of GOST R 51592-2000, at three sections: the first target (1) is the shallow upper part (depth from 2 to 4 m); the second target (2) is the central part (depth from 5 to 7 m); and the third target (3) is the near-dam part (depth up to 12.2 m). Statistical analysis of the obtained data was performed using the unconstrained principal component analysis (PCA) technique and the constrained redundancy analysis (RDA) technique. Effects were considered statistically significant at P<0.05 and useful for discussion at P<0.10. It was found that, despite the flood-driven increase in the level of chemical components in the water of the reservoir, most of them meet the requirements for fishery waters, with the exception of iron, copper, manganese, zinc, nickel and lead, which exceed the MPCVR by 1.1 to 45.0 times. The total variability of the hydrochemical composition of water in the reservoir, estimated by the PCA method, is explained by the season of the year to the extent of 71.4%. A similar result was obtained by the RDA method in a model with a single regressor. When all factors are taken into account in the RDA model, the variability of the water's chemical composition is explained by the season of the year (74.3%), the year of research (11.1%) and the location of the target (1.9%). The indicators contributing most to the unexplained variability in both the PCA and RDA methods are manganese, bicarbonates, lead, aluminum and pH.
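The constrained share reported by RDA can be reproduced in spirit with standard tools: RDA reduces to a PCA of the fitted values from a multivariate regression of the response table on the explanatory variables. A minimal sketch on synthetic data (the indicator values and season effects below are invented for illustration, not the study's measurements):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic hydrochemistry table: rows are samples, columns are
# standardized water-quality indicators; 'season' is the single
# constraining factor.
rng = np.random.default_rng(2)
season = np.repeat([0, 1, 2], 12)              # spring / summer / autumn
effect = np.array([[1.0, -0.5, 0.3],
                   [-0.8, 0.9, 0.0],
                   [0.2, -0.4, 1.1]])          # per-season indicator means
Y = effect[season] + 0.3 * rng.normal(size=(36, 3))
Y = (Y - Y.mean(0)) / Y.std(0)

# RDA: regress the response table on the explanatory dummies; the
# variance of the fitted values is the "constrained" variance.
X = np.eye(3)[season][:, 1:]                   # season dummies, drop first
Y_hat = LinearRegression().fit(X, Y).predict(X)

# Share of total variance constrained by season (cf. 71.4% in the study).
explained = np.trace(np.cov(Y_hat.T)) / np.trace(np.cov(Y.T))
```

A PCA of `Y_hat` would then give the RDA ordination axes themselves.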


2020 ◽  
pp. 135676672095035
Author(s):  
Sunyoung Hlee ◽  
Hyunae Lee ◽  
Chulmo Koo ◽  
Namho Chung

Because tourism destinations are difficult to assess against standard criteria, the factors that contribute to the helpfulness of reviews remain largely unknown. Moreover, the helpfulness of online reviews has not been explored in terms of the interaction between language style (high- vs. low-cognitive) and attraction type (hedonic vs. utilitarian). Hence, this study examines the impact of language style on the helpfulness of an online review of an attraction, depending on the type of attraction and the meaning of the destination. The data comprised 8,032 reviews of four attractions (two hedonic × two utilitarian) drawn from TripAdvisor for two destinations with different meanings. Our findings indicate that when a reviewer posts a review of a utilitarian attraction, high-cognitive language is perceived to be more helpful. We first discuss the theoretical contribution of the study using cognitive fit theory and then provide its managerial implications.
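In regression terms, the style-by-attraction-type effect is an interaction coefficient. A minimal sketch on synthetic review data (the coefficients and variable names are invented for illustration; the study's actual model and controls are not reproduced):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic reviews: helpfulness as a function of language style
# (1 = high-cognitive), attraction type (1 = utilitarian), and their
# interaction -- the effect cognitive fit theory predicts.
rng = np.random.default_rng(7)
n = 2000
style = rng.integers(0, 2, n)
utilitarian = rng.integers(0, 2, n)
helpfulness = 1.0 + 0.2 * style + 0.1 * utilitarian \
              + 0.6 * style * utilitarian + rng.normal(scale=0.5, size=n)

X = np.column_stack([style, utilitarian, style * utilitarian])
ols = LinearRegression().fit(X, helpfulness)
interaction = ols.coef_[2]  # positive: high-cognitive language is more
                            # helpful for utilitarian attractions
```

A significance test on `interaction` (e.g. via statsmodels OLS) would complete the analysis.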


Sensors ◽  
2019 ◽  
Vol 19 (22) ◽  
pp. 5009 ◽  
Author(s):  
Stefania Tronci ◽  
Paul Van Neer ◽  
Erwin Giling ◽  
Uilke Stelwagen ◽  
Daniele Piras ◽  
...  

Continuous processing is replacing batch modes because of its ability to address issues of agility, flexibility, cost, and robustness. Continuous processes can be operated at more extreme conditions, resulting in higher speed and efficiency. The challenge when using a continuous process is to keep quality indices satisfied even in the presence of perturbations. For this reason, it is important to evaluate key performance indicators in-line. Rheology is a critical parameter when dealing with the production of complex fluids obtained by mixing and filling. In this work, a tomographic ultrasonic velocity meter is applied to obtain the rheological curve of a non-Newtonian fluid. Raw ultrasound signals are processed using a data-driven approach based on principal component analysis (PCA) and feedforward neural networks (FNN). The resulting sensor has been coupled with a data-driven decision support system for conducting the process.
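The PCA-plus-FNN chain can be sketched as a scikit-learn pipeline. The synthetic "waveforms" below encode the hidden property in their amplitude, which is far simpler than real ultrasound physics; all names and shapes are illustrative:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for raw ultrasound traces: each row is one recorded
# waveform whose amplitude depends on a hidden rheological property.
rng = np.random.default_rng(3)
t = np.linspace(0, 1, 200)
visc = rng.uniform(0.5, 2.0, size=300)          # hypothetical property
signals = visc[:, None] * np.sin(2 * np.pi * 5 * t)[None, :] \
          + 0.05 * rng.normal(size=(300, 200))

# PCA compresses each raw waveform into a few scores; the feedforward
# network (FNN) regresses the property from those scores.
model = make_pipeline(
    PCA(n_components=10),
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
)
model.fit(signals[:250], visc[:250])
score = model.score(signals[250:], visc[250:])  # R^2 on held-out traces
```

Sweeping such predictions across the pipe cross-section is what turns the point estimate into a rheological curve.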


Author(s):  
Chady Ghnatios ◽  
Anais Barasinski

Abstract: A nonparametric method for assessing the error and variability margins of solutions expressed in separated form, using experimental results, is illustrated in this work. The method assesses the total variability of the solution, including the modeling error and the truncation error, when experimental results are available. It is based on PGD separated-form solutions, enriched by transforming a part of the PGD basis vectors into probabilistic ones. The constructed probabilistic vectors are restricted to the Stiefel manifold of the physical solution. The result is a real-time parametric PGD solution enhanced with the solution variability and the associated confidence intervals.


2020 ◽  
Vol 6 (1) ◽  
pp. 10-26
Author(s):  
Ademir Abdić ◽  
Emina Resić ◽  
Adem Abdić

Abstract: In the most developed countries, the first estimates of Gross Domestic Product (GDP) are available 30 days after the end of the reference quarter. In this paper, the possibilities of creating an econometric model for making short-term forecasts of GDP in B&H are explored. The database consists of more than 100 daily, monthly and quarterly time series for the period 2006q1-2016q4. The aim of this study was to estimate and validate different factor models. Owing to the limited length of the series, the factor analysis included 12 time series whose correlation coefficient with quarterly GDP exceeded 0.8 in absolute value. Principal component analysis (PCA) and an orthogonal varimax rotation of the initial solution were applied. Three principal components were extracted from the set of series, together accounting for 73.34% of its total variability. The final choice of the model for forecasting quarterly B&H GDP was based on a comparative analysis of the predictive efficiency of the analysed models over the in-sample and out-of-sample periods. The unbiasedness and efficiency of individual forecasts were tested using the Mincer-Zarnowitz regression, while the accuracy of forecasts from two models was compared with the Diebold-Mariano test. We examined the justification for combining two forecasts using the Granger-Ramanathan regression. A factor model involving three factors proved to be the most efficient factor model for forecasting quarterly B&H GDP.
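The extract-factors-then-forecast step can be sketched as a bridge regression of GDP on principal component scores. The panel below is synthetic (the B&H dataset is not reproduced), and plain PCA is used without the varimax rotation step:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

# Synthetic quarterly panel: 12 indicator series driven by 3 common
# factors, plus a GDP series built from the same factors.
rng = np.random.default_rng(4)
T = 44                                    # 2006q1-2016q4
F = rng.normal(size=(T, 3))               # latent common factors
L = rng.normal(size=(3, 12))
X = F @ L + 0.2 * rng.normal(size=(T, 12))
gdp = F @ np.array([1.0, 0.5, -0.7]) + 0.1 * rng.normal(size=T)

# Three principal components from the standardized indicators
# (cf. the 73.34% of variability reported in the paper), then a
# bridge regression of GDP on the factor scores.
pca = PCA(n_components=3)
factors = pca.fit_transform((X - X.mean(0)) / X.std(0))

bridge = LinearRegression().fit(factors[:-4], gdp[:-4])  # hold out 1 year
forecast = bridge.predict(factors[-4:])
```

The Mincer-Zarnowitz and Diebold-Mariano steps would then be run on `forecast` against the held-out observations.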


2013 ◽  
Vol 16 (04) ◽  
pp. 1350020
Author(s):  
SERGIO M. FOCARDI ◽  
FRANK J. FABOZZI

In this paper, we analyze factor uniqueness in the S&P 500 universe. The current theory of approximate factor models applies to infinite markets. In the limit of infinite markets, factors are unique and can be represented with principal components. If this theory applied to realistic markets such as the S&P 500 universe, the quest for proprietary factors would be futile. We find that this is not the case: in finite markets of the size of the S&P 500 universe, different factor models can indeed coexist. We compare three dynamic factor models: a factor model based on principal component analysis, a classical factor model based on industry sectors, and a factor model based on cluster analysis. Dynamic behavior is represented by fitting vector autoregressive models to the factors and using them to make forecasts. We analyze the uniqueness of factors using Procrustes analysis and correlation analysis. The forecasting performance of the factor models is assessed by forming active portfolio strategies based on each model's forecasts, using sample data from the S&P 500 index over the 21-year period 1989–2010. We find that one or two factors, which we can identify with global factors, are common to all models, while the remaining factors of the models we analyzed are truly different. The models exhibit significant differences in performance, with principal component analysis-based factor models appearing to perform better than the sector-based factor models.
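The PCA-factors-plus-VAR construction can be sketched on a small synthetic panel. Here the one-lag VAR is fitted by ordinary least squares with NumPy (the paper's universe, estimation details, and portfolio construction are not reproduced):

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic return panel: 50 assets driven by two persistent common
# factors, a small stand-in for the S&P 500 universe.
rng = np.random.default_rng(5)
T, n = 300, 50
f = np.zeros((T, 2))
for t in range(1, T):
    f[t] = 0.6 * f[t - 1] + rng.normal(scale=0.5, size=2)  # AR(1) factors
B = rng.normal(size=(2, n))
R = f @ B + 0.5 * rng.normal(size=(T, n))

# Statistical factors via PCA, then a one-lag VAR fitted to the factor
# scores by least squares: scores[t] ~ c + scores[t-1] @ A
scores = PCA(n_components=2).fit_transform(R)
Y = scores[1:]
Z = np.column_stack([np.ones(T - 1), scores[:-1]])
coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)

# One-step-ahead forecast of both factors from the last observation.
one_step = np.array([1.0, *scores[-1]]) @ coef
```

Comparing factor loadings across models via Procrustes rotation (as the paper does) would then reveal which factors are shared and which are model-specific.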


Author(s):  
Frederik Seeup Hass ◽  
Jamal Jokar Arsanjani

The Covid-19 pandemic emerged and evolved so quickly that societies were not able to respond in time, mainly due to the Covid-19 virus' rate of spread and the largely open societies that we live in. While we have been willingly moving towards open societies and reducing barriers to movement, there is a need to be prepared to curtail the openness of society on occasions such as large pandemics, which are low-probability events with massive impacts. Like many phenomena, the Covid-19 pandemic has shown us its own geography, presenting its emergence and evolving patterns as well as taking advantage of our geographical settings to escalate its spread. Hence, this study presents a data-driven approach for exploring the spatio-temporal patterns of the pandemic at a regional scale (Europe) and a country scale (Denmark), and examines which geographical variables potentially contribute to expediting its spread. We used official regional infection rates, points of interest, temperature and air pollution data for monitoring the pandemic's spread across Europe, and applied geospatial methods such as spatial autocorrelation and space-time autocorrelation to extract relevant indicators that could explain the dynamics of the pandemic. Furthermore, we applied statistical methods, e.g., ordinary least squares and geographically weighted regression, as well as machine learning methods, e.g., random forest, for exploring the potential correlation between the chosen underlying factors and the pandemic's spread. Our findings indicate that population density, amenities such as cafes and bars, and pollution levels are the most influential explanatory variables, while pollution levels can be explicitly used to monitor lockdown measures and infection rates at the country level.
The choice of data and methods used in this study, along with the achieved results and the presented discussions, can empower health authorities and decision makers with an interactive decision support tool, which can be useful for imposing geographically varying lockdowns and protective measures using historical data.
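The random-forest step that ranks explanatory variables can be sketched with scikit-learn. The regional table below is synthetic and the variable names merely echo the kinds of features the study mentions:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic regional table: infection rate driven mainly by population
# density and an amenity count, with pollution as a weaker signal and
# temperature irrelevant by construction.
rng = np.random.default_rng(6)
n = 400
density = rng.lognormal(mean=5, sigma=1, size=n)
amenities = rng.poisson(lam=30, size=n).astype(float)
pollution = rng.normal(20, 5, size=n)
temperature = rng.normal(10, 4, size=n)
rate = 0.002 * density + 0.05 * amenities + 0.02 * pollution \
       + rng.normal(scale=0.5, size=n)

X = np.column_stack([density, amenities, pollution, temperature])
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, rate)

# Impurity-based importances rank the candidate drivers.
names = ["density", "amenities", "pollution", "temperature"]
ranking = sorted(zip(rf.feature_importances_, names), reverse=True)
```

On real regional data, spatially blocked cross-validation would be needed, since neighbouring regions are not independent observations.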

