Forecasting Trend of Coronavirus Disease 2019 using Multi-Task Weighted TSK Fuzzy System

2022 ◽  
Vol 22 (3) ◽  
pp. 1-24
Yizhang Jiang ◽  
Xiaoqing Gu ◽  
Lei Hua ◽  
Kang Li ◽  
Yuwen Tao

Artificial intelligence (AI)-based fog/edge computing has become a promising paradigm for infectious disease applications. Various AI algorithms are embedded in cooperative fog/edge devices to construct medical Internet of Things environments, infectious disease forecasting systems, smart health applications, and so on. However, these systems are usually built in isolation, in what is called single-task learning. They do not consider the correlations and relationships between multiple different tasks, so some common information in the model parameters or data characteristics is lost. In this study, each data center in fog/edge computing is treated as a task in a multi-task learning framework. Within this framework, a multi-task weighted Takagi-Sugeno-Kang (TSK) fuzzy system, called MW-TSKFS, is developed to forecast the trend of coronavirus disease 2019 (COVID-19). MW-TSKFS provides a multi-task learning strategy for both the antecedent and consequent parameters of fuzzy rules. First, a multi-task weighted fuzzy c-means clustering algorithm is developed for antecedent parameter learning; it extracts the public information shared by all tasks and the private information of each task. By sharing the public cluster centroid and public membership matrix, both the commonality across tasks and the individuality of each task can be further exploited. For consequent parameter learning, a multi-task collaborative learning mechanism is developed based on an ε-insensitive criterion and an L2-norm penalty term, which enhances the generalization and forecasting ability of the proposed fuzzy system. Experimental results on real COVID-19 time series show that the trend forecasting model based on the proposed multi-task weighted TSK fuzzy system has high application value.
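As a rough illustration of the antecedent-learning step, the sketch below implements plain single-task fuzzy c-means; the multi-task weighting, the public/private centroid split, and the ε-insensitive consequent learning of MW-TSKFS are not reproduced here, and all names are ours.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=50, seed=0):
    """Plain fuzzy c-means: the single-task building block of the
    antecedent-parameter learning described above (simplified sketch)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0)                                  # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)    # fuzzy cluster centroids
        D = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12
        Dinv = D ** (-2.0 / (m - 1.0))
        U = Dinv / Dinv.sum(axis=0)                     # standard FCM membership update
    return V, U
```

In the multi-task setting, each data center would run such an update on its own data while a shared (public) centroid term couples the tasks.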

2021 ◽  
Vol 11 (1) ◽  
Jie Zhu ◽  
Blanca Gallego

Epidemic models are being used by governments to inform public health strategies to reduce the spread of SARS-CoV-2. They simulate potential scenarios by manipulating model parameters that control the processes of disease transmission and recovery. However, the validity of these parameters is challenged by the uncertain impact of public health interventions on disease transmission, and the forecasting accuracy of these models is rarely investigated during an outbreak. We fitted a stochastic transmission model to reported cases, recoveries, and deaths associated with SARS-CoV-2 infection across 101 countries. The dynamics of disease transmission were represented in terms of the daily effective reproduction number ($$R_t$$). The relationship between public health interventions and $$R_t$$ was explored, first using a hierarchical clustering algorithm on initial $$R_t$$ patterns, and second by computing the time-lagged cross-correlation among the daily number of policies implemented, $$R_t$$, and daily incidence counts in subsequent months. The impact on the model's forecasting accuracy of updating $$R_t$$ every time a prediction is made was investigated. We identified 5 groups of countries with distinct transmission patterns during the first 6 months of the pandemic. Early adoption of social distancing measures and a shorter gap between interventions were associated with a reduction in the duration of outbreaks. The lagged correlation analysis revealed that increased policy volume was associated with lower future $$R_t$$ (75-day lag), while a lower $$R_t$$ was associated with lower future policy volume (102-day lag). Lastly, the outbreak prediction of the model using dynamically updated $$R_t$$ achieved an average AUROC of 0.72 (0.708, 0.723), compared to 0.56 (0.555, 0.568) when $$R_t$$ was kept constant.
Monitoring the evolution of $$R_t$$ during an epidemic is an important complement to reported daily counts, recoveries, and deaths, since it provides an early signal of the efficacy of containment measures. Using updated $$R_t$$ values produces significantly better predictions of future outbreaks. Our results found variation in the effect of early public health interventions on the evolution of $$R_t$$ over time and across countries, which could not be explained solely by the timing and number of the adopted interventions.
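The time-lagged cross-correlation used in the analysis can be computed along these lines (a generic Pearson correlation at a fixed lag; the function name and any preprocessing are ours, not the authors'):

```python
import numpy as np

def lagged_corr(x, y, lag):
    """Pearson correlation between x(t) and y(t + lag), for lag >= 0."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    if lag > 0:
        x, y = x[:-lag], y[lag:]       # align x(t) with y(t + lag)
    return float(np.corrcoef(x, y)[0, 1])
```

Scanning over a range of lags and picking the lag with the strongest correlation is how lead/lag estimates such as the 75- and 102-day figures above would be obtained.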

2011 ◽  
Vol 2011 ◽  
pp. 1-12 ◽  
Karim El-Laithy ◽  
Martin Bogdan

An integration of Hebbian-based and reinforcement learning (RL) rules is presented for dynamic synapses. The proposed framework permits the Hebbian rule to update the hidden synaptic model parameters that regulate the synaptic response, rather than the synaptic weights. This is performed using both the value and the sign of the temporal difference in the reward signal after each trial. Applying this framework, a spiking network with spike-timing-dependent synapses is tested on learning the exclusive-OR computation on a temporally coded basis. Reward values are calculated from the distance between the output spike train of the network and a reference target train. Results show that the network is able to capture the required dynamics and that the proposed framework indeed yields an integrated version of Hebbian and RL learning. The proposed framework is tractable and computationally inexpensive. It is applicable to a wide class of synaptic models and is not restricted to the neural representation used here. This generality, together with the reported results, supports adopting the introduced approach to benefit from biologically plausible synaptic models in a wide range of signal processing applications.
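The core update rule can be caricatured as follows: a hidden synaptic parameter (not the weight) moves in the Hebbian direction, gated by the sign and magnitude of the trial-to-trial reward difference. This is a heavily simplified, hypothetical sketch of the idea, not the paper's synapse model.

```python
def td_hebbian_update(theta, pre, post, reward, prev_reward, eta=0.1):
    """Update a hidden synaptic parameter theta (e.g., a time constant or
    release probability) using the Hebbian coincidence term pre * post,
    gated by the temporal difference of the reward across trials."""
    td = reward - prev_reward            # TD of the reward signal
    return theta + eta * td * pre * post # improved reward reinforces the change
```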

2021 ◽  
Vol 13 (5) ◽  
pp. 168781402110131
Junfeng Wu ◽  
Li Yao ◽  
Bin Liu ◽  
Zheyuan Ding ◽  
Lei Zhang

As more and more sensor data are collected, automated detection and diagnosis systems are urgently needed to lessen the growing monitoring burden and reduce the risk of system faults. A great deal of research has been done on anomaly detection, event detection, and anomaly diagnosis individually, but none of the current approaches explores all of these aspects in one unified framework. In this work, a Multi-Task Learning based Encoder-Decoder (MTLED), which can simultaneously detect anomalies, diagnose anomalies, and detect events, is proposed. In MTLED, a feature matrix is introduced so that features are extracted for each time point and point-wise anomaly detection can be realized in an end-to-end way. Anomaly diagnosis and event detection share the same feature matrix with anomaly detection in the multi-task learning framework and also provide important information for system monitoring. To train such a comprehensive detection and diagnosis system, a large-scale multivariate time series dataset containing anomalies of multiple types is generated with simulation tools. Extensive experiments on the synthetic dataset verify the effectiveness of MTLED and its multi-task learning framework, and the evaluation on a real-world dataset demonstrates that MTLED can be used in other application scenarios through transfer learning.
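The shared-feature-matrix idea can be sketched in a few lines of NumPy: one encoder produces a feature vector per time point, and three task heads read the same matrix. The layer sizes, weights, and names here are illustrative assumptions, not MTLED's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((8, 16, 4))                  # (batch, time, sensors)

# Shared "encoder": one feature vector per time point -> the feature matrix
W_enc = rng.normal(size=(4, 8))
H = np.tanh(X @ W_enc)                      # (8, 16, 8)

# Three task-specific heads all read the same feature matrix H
W_anom = rng.normal(size=(8, 1))            # point-wise anomaly score
W_diag = rng.normal(size=(8, 3))            # anomaly-type logits
W_event = rng.normal(size=(8, 1))           # event score
anomaly, diagnosis, event = H @ W_anom, H @ W_diag, H @ W_event
```

Because every head consumes the per-time-point features, anomaly detection stays point-wise and end-to-end while the other tasks reuse the representation.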

Gregory Gutin ◽  
Tomohiro Hirano ◽  
Sung-Ha Hwang ◽  
Philip R. Neary ◽  
Alexis Akira Toda

How does social distancing affect the reach of an epidemic in social networks? We present Monte Carlo simulation results of a susceptible-infected-removed (SIR) model with social distancing. The key feature of the model is that individuals are limited in the number of acquaintances they can interact with, thereby constraining disease transmission to an infectious subnetwork of the original social network. While increased social distancing typically reduces the spread of an infectious disease, the magnitude of the effect varies greatly depending on the topology of the network, indicating the need for policies that are network dependent. Our results also reveal the importance of coordinating policies at the 'global' level. In particular, the public health benefits of social distancing within a group (e.g. a country) may be completely undone if that group maintains connections with outside groups that are not following suit.
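A toy Monte Carlo version of this setup: an SIR process on a random acquaintance network where each node keeps only `cap` of its contacts active (the infectious subnetwork). Directed contacts and all parameter values are our simplifications, not the paper's model.

```python
import random

def sir_capped(n=200, k=8, cap=4, beta=0.3, gamma=0.2, steps=60, seed=1):
    """Fraction of the population ever infected when each node restricts
    interaction to `cap` of its k acquaintances (social distancing)."""
    rng = random.Random(seed)
    nbrs = [rng.sample([j for j in range(n) if j != i], k) for i in range(n)]
    active = [rng.sample(nb, min(cap, len(nb))) for nb in nbrs]  # infectious subnetwork
    state = ['S'] * n
    state[0] = 'I'                                    # one index case
    for _ in range(steps):
        nxt = list(state)
        for i in range(n):
            if state[i] == 'I':
                for j in active[i]:                   # only distanced contacts transmit
                    if state[j] == 'S' and rng.random() < beta:
                        nxt[j] = 'I'
                if rng.random() < gamma:
                    nxt[i] = 'R'
        state = nxt
    return sum(s != 'S' for s in state) / n
```

Sweeping `cap` on networks of different topology is the kind of experiment that exposes the network dependence discussed above.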

2012 ◽  
Vol 532-533 ◽  
pp. 1445-1449
Ting Ting Tong ◽  
Zhen Hua Wu

The EM algorithm is a common method for estimating mixture model parameters in the statistical classification of remote sensing images. An EM algorithm based on fuzzification is presented in this paper, in which a fuzzy set represents each training sample. Via weighted degrees of membership, different samples exert different influence during the iterations, which decreases the impact of noise on parameter learning and increases the convergence rate of the algorithm. Classification of image data can thus be performed with improved accuracy.
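A minimal sketch of the idea, assuming a 1-D two-component Gaussian mixture where each sample carries a fuzzy weight that scales its responsibility (noisy samples get smaller weights); the paper's fuzzification scheme is more elaborate than this.

```python
import numpy as np

def weighted_em_gmm(x, w, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture with per-sample
    fuzzy weights w. Illustrative sketch only."""
    x, w = np.asarray(x, float), np.asarray(w, float)
    mu = np.array([x.min(), x.max()])                 # deterministic initialization
    var = np.full(2, x.var() + 1e-6)
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        d2 = (x[None, :] - mu[:, None]) ** 2
        dens = pi[:, None] * np.exp(-d2 / (2 * var[:, None])) \
               / np.sqrt(2 * np.pi * var[:, None])
        r = dens / dens.sum(axis=0) * w[None, :]      # fuzzy-weighted responsibilities
        nk = r.sum(axis=1)
        mu = (r @ x) / nk                             # weighted M-step
        var = ((x[None, :] - mu[:, None]) ** 2 * r).sum(axis=1) / nk + 1e-6
        pi = nk / nk.sum()
    return mu, var, pi
```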

2014 ◽  
Vol 2014 ◽  
pp. 1-14 ◽  
Ebenezer Bonyah ◽  
Isaac Dontwi ◽  
Farai Nyabadza

The management of Buruli ulcer (BU) in Africa is often accompanied by limited resources, delays in treatment, and macilent capacity in medical facilities. These challenges limit the number of infected individuals that can access medical facilities. While most mathematical models with treatment assume a treatment function proportional to the number of infected individuals, in settings with such limitations this assumption may not be valid. To capture these challenges, a mathematical model of Buruli ulcer with a saturated treatment function is developed and studied. The model is a coupled system of two submodels, one for the human population and one for the environment. We examine the stability of the submodels and carry out numerical simulations. The model analysis is carried out in terms of the reproduction number of the submodel of environmental dynamics. The dynamics of the human population submodel are found to occur at the steady states of the submodel of environmental dynamics. Sensitivity analysis of the model parameters shows that the BU epidemic is driven by the dynamics of the environment. The model suggests that more effort should be focused on environmental management. The paper concludes by discussing the public health implications of the results.
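The effect of a saturated treatment function can be seen in a toy SIS-type simulation with T(I) = rI/(1 + αI), where per-capita treatment effort drops as infections grow, mimicking limited capacity. This is our illustrative sketch, not the paper's coupled human-environment BU model, and the parameter values are arbitrary.

```python
def sis_saturated_treatment(beta=0.3, gamma=0.1, r=0.5, alpha=2.0,
                            I0=0.05, dt=0.01, steps=10000):
    """Euler integration of dI/dt = beta*S*I - gamma*I - r*I/(1+alpha*I)
    with S = 1 - I (normalized population); returns final prevalence."""
    I = I0
    for _ in range(steps):
        S = 1.0 - I
        treat = r * I / (1.0 + alpha * I)   # treatment saturates as I grows
        I += dt * (beta * S * I - gamma * I - treat)
    return I
```

Near I = 0 the treatment term behaves like rI, so the disease dies out when beta < gamma + r; once I is large, treatment saturates and contributes far less per case.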

1980 ◽  
Vol 66 (3) ◽  
pp. 458-461
Thomas H. Weller

For this address at the opening session of the First Mexican National Congress of Infectious Diseases in Children (1er. Congreso Nacional de Infectologia Pediatrica), I have chosen as my title "Contemporary Plagues and Social Progress." While in medicine the term plague usually refers to diseases caused by Pasteurella pestis, the word has broader meanings and usages. It describes that which smites or troubles, can refer to an afflictive evil or anything troublesome or vexatious, or can be applied to any malignant disease, especially those that are contagious. It can be used as an expression of annoyance, as a mild oath, or with the implication of harassment. Thus, today we are concerned with the plague of plagues, the afflictive evils of the cumulative insults of infectious disease. Additionally, we might be tempted to cast a plague on the system of medical education and on the political process that neither conveys the continuing importance of infectious diseases nor funds the mechanisms for their containment. Or, should the shoe be on the other foot? Should not society cast a plague on us? As experts in the field of infectious disease, have we not failed to publicize that, on a global basis, the combination of diarrheal disease and malnutrition is the leading cause of death in infants and children? Has not our successful use of antibiotics induced unjustified public complacency regarding the problems of infectious disease? Why have our low-keyed reports of resistant typhoid bacilli, of pneumococci, or of gonococci failed to dispel the prevalent mystique that science has controlled infectious agents, leaving cancer and heart disease in the public eye as the major unconquered problems in the health field?

2021 ◽  
Vol 14 (2) ◽  
pp. 26
Na Li ◽  
Lianguan Huang ◽  
Yanling Li ◽  
Meng Sun

In recent years, with the development of the Internet, data on the network has been growing explosively. Big data mining aims at obtaining useful information through data processing such as clustering and classification. Clustering is an important branch of big data mining, popular because of its simplicity. A new trend for clients who lack storage and computational resources is to outsource the data and the clustering task to public cloud platforms. However, as datasets used for clustering may contain sensitive information (e.g., identity information, health information), simply outsourcing them to cloud platforms cannot protect privacy, so clients tend to encrypt their databases before uploading them to the cloud for clustering. In this paper, we focus on privacy protection and efficiency promotion for k-means clustering, and we propose a new privacy-preserving multi-user outsourced k-means clustering algorithm based on locality-sensitive hashing (LSH). In this algorithm, we use the Paillier cryptosystem to encrypt the databases and combine it with LSH to prune unnecessary computations during clustering; that is, we do not need to compute the Euclidean distance between every data record and every cluster center. Finally, theoretical and experimental results show that our algorithm is more efficient than most existing privacy-preserving k-means clustering schemes.
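A plaintext-only sketch of the LSH pruning idea: random-hyperplane signatures bucket the cluster centers, and each point is compared only against centers in its own bucket, with a full scan as fallback. The Paillier encryption and the multi-user protocol are omitted, and all names here are ours.

```python
import numpy as np

def lsh_kmeans_assign(X, centers, n_planes=6, seed=0):
    """Assign each row of X to the nearest center among those sharing
    its LSH bucket, falling back to a full scan when the bucket is empty.
    Demonstrates the distance-pruning idea only (no encryption)."""
    rng = np.random.default_rng(seed)
    planes = rng.normal(size=(n_planes, X.shape[1]))

    def sig(v):  # random-hyperplane signature
        return tuple((planes @ v > 0).astype(int))

    buckets = {}
    for ci, c in enumerate(centers):
        buckets.setdefault(sig(c), []).append(ci)
    labels = []
    for x in X:
        cand = buckets.get(sig(x), range(len(centers)))   # pruned candidate set
        labels.append(min(cand, key=lambda ci: np.linalg.norm(x - centers[ci])))
    return np.array(labels)
```

In the encrypted setting, the point is that the expensive (homomorphic) distance computations are only performed for the pruned candidate set rather than for every center.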
