Similarity-Driven Edge Bundling: Data-Oriented Clutter Reduction in Graph Layouts

Algorithms ◽  
2020 ◽  
Vol 13 (11) ◽  
pp. 290
Author(s):  
Fabio Sikansi ◽  
Renato R. O. da Silva ◽  
Gabriel D. Cantareira ◽  
Elham Etemad ◽  
Fernando V. Paulovich

Graph visualization has been successfully applied in a wide range of problems and applications. Although different approaches are available to create visual representations, most of them suffer from clutter when faced with many nodes and/or edges. Among the techniques that address this problem, edge bundling has attained relative success in improving node-link layouts by bending and aggregating edges. Despite their success, most approaches perform the bundling based only on visual space information. There is no explicit connection between the produced bundled visual representation and the underlying data (edges or vertices attributes). In this paper, we present a novel edge bundling technique, called Similarity-Driven Edge Bundling (SDEB), to address this issue. Our method creates a similarity hierarchy based on a multilevel partition of the data, grouping edges considering the similarity between nodes to guide the bundling. The novel features introduced by SDEB are explored in different application scenarios, from dynamic graph visualization to multilevel exploration. Our results attest that SDEB produces layouts that consistently follow the similarity relationships found in the graph data, resulting in semantically richer presentations that are less cluttered than the state-of-the-art.
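The guiding idea, bundling edges whose endpoints are similar in data space rather than only in screen space, can be illustrated with a toy sketch. This is not the authors' SDEB algorithm (which builds a multilevel similarity hierarchy); it is a minimal stand-in that clusters nodes by attribute similarity and groups edges by cluster pair. The `features` dictionary and similarity threshold are hypothetical.

```python
import math

# Toy illustration of data-driven edge grouping: cluster nodes by attribute
# similarity, then treat edges joining the same cluster pair as one bundle.
# NOT the SDEB algorithm itself -- just the underlying grouping idea.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def cluster_nodes(features, threshold=0.9):
    """Greedy single-pass clustering: attach a node to the first cluster whose
    representative vector is similar enough, else open a new cluster."""
    reps, label = [], {}
    for node, vec in features.items():
        for cid, rep in enumerate(reps):
            if cosine(vec, rep) >= threshold:
                label[node] = cid
                break
        else:
            label[node] = len(reps)
            reps.append(vec)
    return label

def group_edges(edges, label):
    """Edges whose endpoints fall in the same cluster pair share a bundle."""
    bundles = {}
    for u, v in edges:
        key = tuple(sorted((label[u], label[v])))
        bundles.setdefault(key, []).append((u, v))
    return bundles
```

In this sketch, two edges end up in the same bundle exactly when their endpoints lie in the same pair of attribute-space clusters, which is the kind of data/layout connection the abstract argues purely visual bundling lacks.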

Author(s):  
Saeed Nosratabadi ◽  
Amir Mosavi ◽  
Puhong Duan ◽  
Pedram Ghamisi ◽  
Ferdinand Filip

Abstract This paper provides the state of the art of data science in economics. Through a novel taxonomy of applications and methods, advances in data science are investigated in four classes: deep learning models, hybrid deep learning models, hybrid machine learning, and ensemble models. Application domains span a wide range of economics research, from the stock market, marketing, and e-commerce to corporate banking and cryptocurrency. The PRISMA method, a systematic literature review methodology, is used to ensure the quality of the survey. The findings reveal that the trend is toward hybrid models; based on the accuracy metric, hybrid models are also reported to outperform other learning algorithms. The trend is further expected to move toward increasingly sophisticated hybrid deep learning models.


2020 ◽  
Vol 12 ◽  
Author(s):  
Francisco Basílio ◽  
Ricardo Jorge Dinis-Oliveira

Background: Pharmacobezoars are specific types of bezoars formed when medicines, such as tablets, suspensions, and/or drug delivery systems, aggregate; they may cause death by occluding airways with tenacious material or by eluting drugs, resulting in toxic or lethal blood concentrations. Objective: This work aims to fully review the state of the art regarding the pathophysiology, diagnosis, treatment, and other relevant clinical and forensic features of pharmacobezoars. Results: Patients of a wide range of ages and of both sexes present with signs and symptoms of intoxication or, more commonly, gastrointestinal obstruction. The exact mechanism of pharmacobezoar formation is unknown but is likely multifactorial. Diagnosis and treatment depend on the gastrointestinal segment affected and should be personalized to the medication and the underlying factor. A complete history, physical examination, imaging tests, upper endoscopy, and, for the lower tract, surgery through laparotomy are useful for diagnosis and treatment. Conclusion: Pharmacobezoars are rarely seen in clinical and forensic practice. They are associated with controlled- or immediate-release formulations, liquid or non-digestible substances, tracts with normal or altered digestive motility/anatomy, and both overdoses and therapeutic doses; they should be suspected in the presence of risk factors or in patients taking drugs that may form pharmacobezoars.


This volume vividly demonstrates the importance and increasing breadth of quantitative methods in the earth sciences. With contributions from an international cast of leading practitioners, chapters cover a wide range of state-of-the-art methods and applications, including computer modeling and mapping techniques. Many chapters also contain reviews and extensive bibliographies which serve to make this an invaluable introduction to the entire field. In addition to its detailed presentations, the book includes chapters on the history of geomathematics and on R.G.V. Eigen, the "father" of mathematical geology. Written to commemorate the 25th anniversary of the International Association for Mathematical Geology, the book will be sought after by both practitioners and researchers in all branches of geology.


This book explores the value for literary studies of relevance theory, an inferential approach to communication in which the expression and recognition of intentions plays a major role. Drawing on a wide range of examples from lyric poetry and the novel, nine of the ten chapters are written by literary specialists and use relevance theory both as an overall framework and as a resource for detailed analysis. The final chapter, written by the co-founder of relevance theory, reviews the issues addressed by the volume and explores their implications for cognitive theories of how communicative acts are interpreted in context. Originally designed to explain how people understand each other in everyday face-to-face exchanges, relevance theory—described in an early review by a literary scholar as ‘the makings of a radically new theory of communication, the first since Aristotle’s’—sheds light on the whole spectrum of human modes of communication, including literature in the broadest sense. Reading Beyond the Code is unique in using relevance theory as a prime resource for literary study, and is also the first to apply the model to a range of phenomena widely seen as supporting an ‘embodied’ conception of cognition and language where sensorimotor processes play a key role. This broadened perspective serves to enhance the value for literary studies of the central claim of relevance theory: that the ‘code model’ is fundamentally inadequate to account for human communication, and in particular for the modes of communication that are proper to literature.


Foods ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 316
Author(s):  
Marco Montemurro ◽  
Erica Pontonio ◽  
Rossana Coda ◽  
Carlo Giuseppe Rizzello

Due to the increasing demand for milk alternatives, driven by both health and ethical needs, plant-based yogurt-like products have been widely explored in recent years. With the main goal of obtaining products similar to conventional yogurt in terms of textural and sensory properties and the ability to host viable lactic acid bacteria during long-term storage, several plant-derived ingredients (e.g., cereals, pseudocereals, legumes, and fruits) as well as technological solutions (e.g., enzymatic and thermal treatments) have been investigated. The central role of fermentation in yogurt-like production has led to specific selections of lactic acid bacteria strains to be used as starters to guarantee optimal textural (e.g., through the synthesis of exopolysaccharides), nutritional (high protein digestibility and low content of anti-nutritional compounds), and functional (synthesis of bioactive compounds) features of the products. This review provides an overview of novel insights into fermented yogurt-like products. The state of the art on the use of unconventional ingredients, traditional and innovative biotechnological processes, and the effects of fermentation on the textural, nutritional, functional, and sensory features and the shelf life is described. The supplementation of prebiotics and probiotics and the related health effects are also reviewed.


2021 ◽  
Vol 15 (5) ◽  
pp. 1-32
Author(s):  
Quang-huy Duong ◽  
Heri Ramampiaro ◽  
Kjetil Nørvåg ◽  
Thu-lan Dam

Dense subregion (subgraph & subtensor) detection is a well-studied area with a wide range of applications, and numerous efficient approaches and algorithms have been proposed. Approximation approaches are commonly used for detecting dense subregions due to the complexity of the exact methods. Existing algorithms are generally efficient for dense subtensor and subgraph detection and perform well in many applications. However, most existing works rely on the state-of-the-art greedy 2-approximation algorithm, which provides solutions with only a loose theoretical density guarantee. The main drawback of most of these algorithms is that they can estimate only one subtensor, or subgraph, at a time, with a low guarantee on its density. While some methods can, on the other hand, estimate multiple subtensors, they can give a density guarantee with respect to the input tensor for the first estimated subtensor only. We address these drawbacks by providing both a theoretical and a practical solution for estimating multiple dense subtensors in tensor data with a higher lower bound on the density. In particular, we prove a higher lower bound on the density of the estimated subgraphs and subtensors. We also propose a novel approach showing that there are multiple dense subtensors with a density guarantee greater than the lower bound used in the state-of-the-art algorithms. We evaluate our approach with extensive experiments on several real-world datasets, which demonstrate its efficiency and feasibility.
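The greedy 2-approximation the abstract refers to is, in the graph case, the classic peeling algorithm: repeatedly delete a minimum-degree vertex and keep the densest intermediate subgraph, where density is average degree |E|/|V|. A minimal sketch (for undirected graphs only; the paper's contribution concerns tensors and multiple subregions, which this does not cover):

```python
# Greedy peeling for the densest subgraph under the |E|/|V| density measure.
# Deleting a minimum-degree vertex at each step and keeping the best prefix
# yields the well-known 2-approximation referenced in the abstract.

def densest_subgraph_peel(edges):
    """Return (best_density, best_vertex_set) for an undirected graph."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    m, n = len(edges), len(adj)
    best_density, best_size = m / n, n
    order = []  # vertices in removal order
    while adj:
        u = min(adj, key=lambda x: len(adj[x]))  # minimum-degree vertex
        m -= len(adj[u])                         # drop u's incident edges
        for w in adj[u]:
            adj[w].discard(u)
        del adj[u]
        order.append(u)
        if adj and m / len(adj) > best_density:
            best_density, best_size = m / len(adj), len(adj)
    # Best subgraph = all vertices minus the first (n - best_size) removed.
    removed = set(order[: n - best_size])
    vertices = {u for e in edges for u in e}
    return best_density, vertices - removed
```

For a 4-clique with one pendant vertex attached, peeling removes the pendant first and correctly reports the clique (density 1.5) as the densest subgraph.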


2021 ◽  
Vol 50 (1) ◽  
pp. 33-40
Author(s):  
Chenhao Ma ◽  
Yixiang Fang ◽  
Reynold Cheng ◽  
Laks V.S. Lakshmanan ◽  
Wenjie Zhang ◽  
...  

Given a directed graph G, the directed densest subgraph (DDS) problem refers to finding a subgraph of G whose density is the highest among all subgraphs of G. The DDS problem is fundamental to a wide range of applications, such as fraud detection, community mining, and graph compression. However, existing DDS solutions suffer from efficiency and scalability problems: on a three-thousand-edge graph, one of the best exact algorithms takes three days to complete. In this paper, we develop an efficient and scalable DDS solution. We introduce the notion of the [x, y]-core, which is a dense subgraph of G, and show that the densest subgraph can be accurately located through the [x, y]-core with theoretical guarantees. Based on the [x, y]-core, we develop both exact and approximation algorithms. We have performed an extensive evaluation of our approaches on eight large real-world datasets. The results show that our proposed solutions are up to six orders of magnitude faster than the state-of-the-art.
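The objective that the [x, y]-core is designed to locate is the standard directed density: for vertex subsets S and T, rho(S, T) = |E(S, T)| / sqrt(|S| * |T|), where E(S, T) are the edges from S to T. A brute-force sketch of this objective (feasible only on tiny graphs, and not the paper's algorithm) makes the definition concrete:

```python
import math
from itertools import combinations

# Directed density as used in the DDS literature:
#   rho(S, T) = |E(S, T)| / sqrt(|S| * |T|)
# Exhaustive search over all (S, T) pairs -- exponential, illustration only.

def directed_density(edges, S, T):
    cross = sum(1 for u, v in edges if u in S and v in T)
    return cross / math.sqrt(len(S) * len(T)) if S and T else 0.0

def dds_brute_force(edges):
    """Return (density, S, T) maximizing rho over all non-empty subsets."""
    nodes = sorted({u for e in edges for u in e})
    subsets = [set(c) for r in range(1, len(nodes) + 1)
               for c in combinations(nodes, r)]
    return max(((directed_density(edges, S, T), S, T)
                for S in subsets for T in subsets), key=lambda x: x[0])
```

On a complete bipartite pattern with all edges from {0, 1} to {2, 3}, the optimum is rho = 4 / sqrt(4) = 2.0 with S = {0, 1} and T = {2, 3}; this is exactly the kind of (S, T) pair the [x, y]-core machinery pins down without enumerating subsets.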


2021 ◽  
Author(s):  
Danila Piatov ◽  
Sven Helmer ◽  
Anton Dignös ◽  
Fabio Persia

Abstract We develop a family of efficient plane-sweeping interval join algorithms for evaluating a wide range of interval predicates such as Allen’s relationships and parameterized relationships. Our technique is based on a framework, components of which can be flexibly combined in different manners to support the required interval relation. In temporal databases, our algorithms can exploit a well-known and flexible access method, the Timeline Index, thus expanding the set of operations it supports even further. Additionally, employing a compact data structure, the gapless hash map, we utilize the CPU cache efficiently. In an experimental evaluation, we show that our approach is several times faster and scales better than state-of-the-art techniques, while being much better suited for real-time event processing.
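The core plane-sweep idea can be sketched for the simplest predicate, intersection of half-open intervals: sort all endpoints, sweep left to right, and keep a set of currently open intervals per input relation. This is a minimal sketch of the sweeping technique only; the paper's framework generalizes it to Allen's and parameterized relations and runs on a Timeline Index with a gapless hash map, none of which appears here.

```python
# Plane-sweep join reporting pairs of intersecting half-open intervals
# [start, end). End events sort before start events at equal timestamps,
# so touching intervals such as [1, 3) and [3, 5) are NOT reported.

def overlap_join(R, S):
    """R, S: lists of (id, start, end); return (r_id, s_id) pairs whose
    intervals intersect."""
    events = []
    for rid, s, e in R:
        events.append((s, 1, 'R', rid))  # 1 = start
        events.append((e, 0, 'R', rid))  # 0 = end, processed first on ties
    for sid, s, e in S:
        events.append((s, 1, 'S', sid))
        events.append((e, 0, 'S', sid))
    events.sort(key=lambda ev: (ev[0], ev[1]))
    active = {'R': set(), 'S': set()}
    out = []
    for _, kind, side, ident in events:
        if kind == 1:  # start: pair with every open interval on the other side
            other = 'S' if side == 'R' else 'R'
            for o in active[other]:
                out.append((ident, o) if side == 'R' else (o, ident))
            active[side].add(ident)
        else:          # end: close the interval
            active[side].discard(ident)
    return out
```

Each output pair is produced exactly once (when the later-starting interval opens while the other is still active), and the sweep runs in O((|R| + |S|) log(|R| + |S|)) time plus output size, which is what makes the approach attractive for streams of temporal events.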

