Characterizing Disease Spreading via Visibility Graph Embedding

Author(s):  
Kangyu Ni ◽  
Jiejun Xu ◽  
Shane Roach ◽  
Tsai-Ching Lu ◽  
Alexei Kopylov

Coronaviruses ◽ 
2020 ◽  
Vol 01 ◽  
Author(s):  
Chandra Mohan ◽  
Vinod Kumar

The World Health Organization (WHO) office in China received information about pneumonia cases of unknown aetiology in Wuhan, central China, on 31 December 2019; the disease subsequently spread within China and to the rest of the world. By the end of March 2020, more than 200,000 confirmed cases and more than 70,000 deaths had been reported worldwide. Researchers soon identified the causative agent as a novel betacoronavirus (SARS-CoV-2), and the infection was named COVID-19. Health ministries of various countries and the WHO are fighting this health emergency together; it affects not only public health but has also begun to affect various economic sectors. The main aim of the current article is to explore previous pandemic situations (SARS, MERS), the life cycle of COVID-19, diagnostic procedures, prevention, and a comparative analysis of COVID-19 with other epidemic situations.


Author(s):  
A-Yeong Kim ◽  
Hee-Guen Yoon ◽  
Seong-Bae Park ◽  
Se-Young Park ◽  
...  

Author(s):  
Yun Peng ◽  
Byron Choi ◽  
Jianliang Xu

Graphs have been widely used to represent complex data in many applications, such as e-commerce, social networks, and bioinformatics. Efficient and effective analysis of graph data is important for graph-based applications. However, most graph analysis tasks are combinatorial optimization (CO) problems, which are NP-hard. Recent studies have focused extensively on the potential of machine learning (ML) to solve graph-based CO problems. Most recent methods follow a two-stage framework. The first stage is graph representation learning, which embeds the graphs into low-dimensional vectors. The second stage uses machine learning to solve the CO problems using the embeddings of the graphs learned in the first stage. The works for the first stage can be classified into two categories: graph embedding methods and end-to-end learning methods. For graph embedding methods, the learning of the embeddings of the graphs has its own objective, which may not rely on the CO problems to be solved; the CO problems are solved by independent downstream tasks. For end-to-end learning methods, the learning of the embeddings of the graphs does not have its own objective and is an intermediate step of the learning procedure for solving the CO problems. The works for the second stage can also be classified into two categories: non-autoregressive methods and autoregressive methods. Non-autoregressive methods predict a solution for a CO problem in one shot. A non-autoregressive method predicts a matrix that denotes the probability of each node/edge being part of a solution of the CO problem; the solution can then be computed from the matrix using search heuristics such as beam search. Autoregressive methods iteratively extend a partial solution step by step. At each step, an autoregressive method predicts a node/edge conditioned on the current partial solution, which is then used to extend it. In this survey, we provide a thorough overview of recent studies of graph learning-based CO methods. The survey ends with several remarks on future research directions.
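To make the contrast between the two decoding styles concrete, the sketch below is illustrative only: it assumes a hypothetical model has already produced an edge-probability matrix for a small routing-style instance, and it uses simple greedy and masking heuristics rather than the procedure of any specific method covered by the survey.

```python
# Minimal sketch (assumptions, not the survey's code): a hypothetical model
# has produced probs[i, j] ~ probability that edge (i, j) is in the solution.
import numpy as np

rng = np.random.default_rng(0)
n = 5
probs = rng.random((n, n))
np.fill_diagonal(probs, 0.0)

def decode_non_autoregressive(probs):
    """Greedy decoding of a tour from a one-shot probability matrix."""
    tour, visited = [0], {0}
    while len(tour) < len(probs):
        last = tour[-1]
        # Pick the most probable unvisited successor of the last node.
        candidates = [(p, j) for j, p in enumerate(probs[last]) if j not in visited]
        _, nxt = max(candidates)
        tour.append(nxt)
        visited.add(nxt)
    return tour

def decode_autoregressive(step_model, n):
    """Iteratively extend a partial solution; the model is queried at every
    step, conditioned on the partial tour built so far."""
    tour = [0]
    while len(tour) < n:
        scores = step_model(tour)      # scores over all nodes
        scores[tour] = -np.inf         # mask nodes already in the tour
        tour.append(int(np.argmax(scores)))
    return tour

def toy_step_model(partial_tour):
    # Stand-in for a learned autoregressive model.
    return probs[partial_tour[-1]].copy()

print(decode_non_autoregressive(probs))
print(decode_autoregressive(toy_step_model, n))
```

The greedy successor choice stands in for beam search and other decoding heuristics mentioned in the abstract; the essential difference is that the non-autoregressive matrix is predicted once, while the autoregressive model is re-invoked at every extension step.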


2021 ◽  
Vol 232 (3) ◽  
Author(s):  
Kamila Jessie Sammarro Silva ◽  
Larissa Lopes Lima ◽  
Gustavo Santos Nunes ◽  
Lyda Patricia Sabogal-Paz

Electronics ◽  
2021 ◽  
Vol 10 (12) ◽  
pp. 1407
Author(s):  
Peng Wang ◽  
Jing Zhou ◽  
Yuzhang Liu ◽  
Xingchen Zhou

Knowledge graph embedding aims to embed entities and relations into low-dimensional vector spaces. Most existing methods focus only on the triple facts in knowledge graphs. In addition, models based on translation or distance measurement cannot fully represent complex relations. As well-constructed prior knowledge, entity types can be employed to learn the representations of entities and relations. In this paper, we propose a novel knowledge graph embedding model named TransET, which takes advantage of entity types to learn richer semantic features. More specifically, circular convolution over the embeddings of entities and entity types is used to map the head and tail entities to type-specific representations, and a translation-based score function is then used to learn the representations of triples. We evaluated our model on real-world datasets using two benchmark tasks: link prediction and triple classification. Experimental results demonstrate that it outperforms state-of-the-art models in most cases.
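As an illustration of the scoring idea described above, the following is a minimal sketch and not the authors' implementation: it assumes the entity embedding is combined with its type embedding by circular convolution (computed here via FFT) and scored with an L1 translation-based function in the style of TransE; all names and dimensions are illustrative.

```python
# Hypothetical sketch of a type-aware translation score (assumptions labeled).
import numpy as np

def circular_convolution(a, b):
    """Circular convolution of two equal-length vectors via FFT."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def transet_score(h, h_type, r, t, t_type):
    """Lower score = more plausible triple, as in translation-based models."""
    h_proj = circular_convolution(h, h_type)   # type-specific head representation
    t_proj = circular_convolution(t, t_type)   # type-specific tail representation
    return np.linalg.norm(h_proj + r - t_proj, ord=1)

rng = np.random.default_rng(42)
d = 8  # embedding dimension (illustrative)
h, r, t = rng.normal(size=(3, d))
person_type, city_type = rng.normal(size=(2, d))
print(transet_score(h, person_type, r, t, city_type))
```

In training, such a score would typically be minimized for observed triples and maximized for corrupted ones via a margin-based loss; the sketch shows only the forward scoring step.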

