Understanding node-link and matrix visualizations of networks: A large-scale online experiment

2019 ◽  
Vol 7 (2) ◽  
pp. 242-264 ◽  
Author(s):  
Donghao Ren ◽  
Laura R. Marusich ◽  
John O’Donovan ◽  
Jonathan Z. Bakdash ◽  
James A. Schaffer ◽  
...  

Abstract We investigated human understanding of different network visualizations in a large-scale online experiment. Three types of network visualizations were examined: node-link and two different sorting variants of matrix representations on a representative social network of either 20 or 50 nodes. Understanding of the network was quantified using task time and accuracy metrics on questions that were derived from an established task taxonomy. The sample size in our experiment was more than an order of magnitude larger (N = 600) than in previous research, leading to high statistical power and thus more precise estimation of detailed effects. Specifically, high statistical power allowed us to consider modern interaction capabilities as part of the evaluated visualizations, and to evaluate overall learning rates as well as ambient (implicit) learning. Findings indicate that participant understanding was best for the node-link visualization, with higher accuracy and faster task times than the two matrix visualizations. Analysis of participant learning indicated a large initial difference in task time between the node-link and matrix visualizations, with matrix performance steadily approaching that of the node-link visualization over the course of the experiment. This research is reproducible, as the web-based module and results have been made available at https://osf.io/qct84/.
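The study compares representation families rather than any particular software, but the contrast is easy to reproduce. Below is a minimal Python sketch that renders the same small graph as a node-link diagram and as a degree-sorted adjacency matrix; networkx, matplotlib, and the example graph are illustrative assumptions, not the materials used in the experiment.

# Minimal sketch: the same graph as a node-link diagram and as an
# adjacency-matrix heatmap (illustrative only).
import networkx as nx
import matplotlib.pyplot as plt

G = nx.karate_club_graph()  # stand-in for a small social network

fig, (ax_link, ax_matrix) = plt.subplots(1, 2, figsize=(10, 4))

# Node-link view: positions from a force-directed layout.
nx.draw(G, pos=nx.spring_layout(G, seed=1), ax=ax_link, node_size=60)
ax_link.set_title("Node-link")

# Matrix view: adjacency matrix with rows/columns ordered by degree.
order = sorted(G.nodes, key=G.degree, reverse=True)
A = nx.to_numpy_array(G, nodelist=order)
ax_matrix.imshow(A, cmap="Greys")
ax_matrix.set_title("Adjacency matrix (degree-sorted)")

plt.tight_layout()
plt.show()

Reordering the matrix rows and columns is what distinguishes matrix sorting variants such as the two compared in the study.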

2013 ◽  
Author(s):  
Laura S. Hamilton ◽  
Stephen P. Klein ◽  
William Lorie

2020 ◽  
Vol 59 (04) ◽  
pp. 294-299 ◽  
Author(s):  
Lutz S. Freudenberg ◽  
Ulf Dittmer ◽  
Ken Herrmann

Abstract Introduction The preparation of health systems to accommodate large numbers of severely ill COVID-19 patients in March/April 2020 had a significant impact on nuclear medicine departments. Materials and Methods A web-based questionnaire was designed to differentiate the impact of the pandemic on inpatient and outpatient nuclear medicine operations and on public versus private health systems, respectively. Questions addressed the following issues: impact on nuclear medicine diagnostics and therapy, use of recommendations, personal protective equipment, and organizational adaptations. The survey was available for 6 days and closed on April 20, 2020. Results 113 complete responses were recorded. Nearly all participants (97 %) report a decline in nuclear medicine diagnostic procedures. The mean reductions in the last three weeks for PET/CT and for scintigraphies of bone, myocardium, lung, thyroid, and sentinel lymph node are –14.4 %, –47.2 %, –47.5 %, –40.7 %, –58.4 %, and –25.2 %, respectively. Furthermore, 76 % of the participants report a reduction in therapies, especially for benign thyroid disease (–41.8 %) and radiosynoviorthesis (–53.8 %), while tumor therapies remained largely stable. 48 % of the participants report a shortage of personal protective equipment. Conclusions Nuclear medicine services were notably reduced 3 weeks after the SARS-CoV-2 pandemic reached Germany, Austria, and Switzerland on a large scale. We must be aware that the current crisis will also have a significant economic impact on the healthcare system. As the survey cannot adapt to daily dynamic changes in priorities, it serves as a first snapshot requiring follow-up studies and comparisons with other countries and regions.


Cortex ◽  
2021 ◽  
Vol 137 ◽  
pp. 138-148
Author(s):  
Jeremie Güsten ◽  
Gabriel Ziegler ◽  
Emrah Düzel ◽  
David Berron

2021 ◽  
Author(s):  
Parsoa Khorsand ◽  
Fereydoun Hormozdiari

Abstract Large-scale catalogs of common genetic variants (including indels and structural variants) are being created using data from second- and third-generation whole-genome sequencing technologies. However, genotyping these variants in newly sequenced samples is a nontrivial task that requires extensive computational resources. Furthermore, current approaches are mostly limited to specific types of variants and are generally prone to various errors and ambiguities when genotyping complex events. We propose an ultra-efficient approach for genotyping any type of structural variation that is not limited by the shortcomings and complexities of current mapping-based approaches. Our method, Nebula, uses changes in k-mer counts to predict the genotype of structural variants. We show that Nebula is not only an order of magnitude faster than mapping-based approaches for genotyping structural variants, but also has accuracy comparable to state-of-the-art approaches. Furthermore, Nebula is a generic framework not limited to any specific type of event. Nebula is publicly available at https://github.com/Parsoa/Nebula.
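Nebula's actual implementation is available at the repository above; the toy sketch below only illustrates the underlying idea of k-mer-count genotyping: k-mers that span a variant's breakpoints occur in the read data roughly in proportion to how many copies of the variant allele are present. The function names, thresholds, and data layout here are illustrative assumptions, not part of Nebula.

# Toy sketch of k-mer-count genotyping (illustrative only).
from collections import Counter

def count_kmers(reads, k=31):
    # Count every k-mer occurring in the sequenced reads.
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def genotype(signature_kmers, read_kmer_counts, expected_coverage):
    # Call a genotype from the mean count of variant-specific k-mers:
    # the ratio to the expected coverage is roughly 0, 0.5, or 1 for
    # 0/0, 0/1, and 1/1 in the idealized case.
    mean = sum(read_kmer_counts.get(km, 0) for km in signature_kmers) / len(signature_kmers)
    ratio = mean / expected_coverage
    if ratio < 0.25:
        return "0/0"
    if ratio < 0.75:
        return "0/1"
    return "1/1"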


Data in Brief ◽  
2015 ◽  
Vol 5 ◽  
pp. 453-457 ◽  
Author(s):  
Jingwen Zhang ◽  
Devon Brackbill ◽  
Sijia Yang ◽  
Damon Centola

2017 ◽  
Vol 6 ◽  
Author(s):  
Saskia Meijboom ◽  
Martinette T. van Houts-Streppel ◽  
Corine Perenboom ◽  
Els Siebelink ◽  
Anne M. van de Wiel ◽  
...  

Abstract Self-administered web-based 24-h dietary recalls (24 hR) may save considerable time and money compared with interviewer-administered telephone-based 24 hR interviews and may therefore be useful in large-scale studies. Within the Nutrition Questionnaires plus (NQplus) study, the web-based 24 hR tool Compl-eat™ was developed to assess Dutch participants’ dietary intake. The aim of the present study was to evaluate the performance of this tool against the interviewer-administered telephone-based 24 hR method. A subgroup of participants of the NQplus study (20–70 years, n 514) completed three self-administered web-based 24 hR and three telephone 24 hR interviews administered by a dietitian over a 1-year period. Both Compl-eat™ and the dietitians guided the participants to report all foods consumed the previous day. Compl-eat™ on average underestimated the intake of energy by 8 %, of macronutrients by 10 % and of micronutrients by 13 % as compared with the telephone recalls. The agreement between the two methods, estimated using Lin's concordance coefficients (LCC), ranged from 0·15 for vitamin B1 to 0·70 for alcohol intake (mean LCC 0·38). The lower estimations by Compl-eat™ can be explained by a lower number of total reported foods and lower estimated intakes of the food groups fats, oils and savoury sauces; sugar and confectionery; and dairy and cheese. The performance of the tool may be improved by, for example, adding an option to automatically select frequently used foods and including more recall cues. We conclude that Compl-eat™ may be a useful tool in large-scale Dutch studies after the suggested improvements have been implemented and evaluated.
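The agreement statistic reported above, Lin's concordance correlation coefficient, is standard; the short Python sketch below shows how it is computed. The numeric values are invented placeholders, not data from the study.

# Lin's concordance correlation coefficient (CCC) for two measurement
# methods; the data below are invented placeholders.
import numpy as np

def lins_ccc(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # population variances
    cov = ((x - mx) * (y - my)).mean()   # population covariance
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Example: energy intake (kcal) from web-based vs telephone recalls.
web = [1850, 2100, 1600, 2400, 1950]
phone = [2000, 2300, 1700, 2500, 2150]
print(round(lins_ccc(web, phone), 2))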


2021 ◽  
Vol 15 (3) ◽  
pp. 1-31
Author(s):  
Haida Zhang ◽  
Zengfeng Huang ◽  
Xuemin Lin ◽  
Zhe Lin ◽  
Wenjie Zhang ◽  
...  

Driven by many real applications, we study the problem of seeded graph matching. Given two graphs G1 and G2 and a small set S of pre-matched node pairs (u, v) with u in G1 and v in G2, the problem is to identify a matching between G1 and G2, growing from S, such that each pair in the matching corresponds to the same underlying entity. Recent studies on efficient and effective seeded graph matching have drawn a great deal of attention, and many popular methods are largely based on exploring the similarity between local structures to identify matching pairs. While these recent techniques work provably well on random graphs, their accuracy is low over many real networks. In this work, we propose to utilize higher-order neighboring information to improve the matching accuracy and efficiency. As a result, we propose a new framework of seeded graph matching, which employs Personalized PageRank (PPR) to quantify the matching score of each node pair. To further boost the matching accuracy, we propose a novel postponing strategy, which postpones the selection of pairs that have competitors with similar matching scores. We show that the postponing strategy significantly improves the matching accuracy. To improve the scalability of matching large graphs, we also propose efficient approximation techniques based on algorithms for computing PPR heavy hitters. Our comprehensive experimental studies on large-scale real datasets demonstrate that, compared with state-of-the-art approaches, our framework not only increases precision and recall by a significant margin but also achieves speed-ups of more than an order of magnitude.
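The framework itself is not reproduced here, but its core ingredient, Personalized PageRank computed from the seed set, can be sketched compactly. The Python sketch below runs a PPR power iteration in each graph and scores a candidate pair by comparing its two PPR values; the graph representation, scoring rule, and parameter values are illustrative assumptions, not the authors' algorithm.

# Illustrative PPR-based matching score (not the paper's code).
# Each graph is a dict mapping node -> list of neighbours.
def personalized_pagerank(adj, seeds, alpha=0.15, iters=50):
    nodes = list(adj)
    p = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in nodes}  # restart vector
    r = dict(p)
    for _ in range(iters):
        nxt = {n: alpha * p[n] for n in nodes}                # restart mass
        for n in nodes:
            share = (1 - alpha) * r[n] / max(len(adj[n]), 1)
            for m in adj[n]:
                nxt[m] += share                               # propagate along edges
        r = nxt
    return r

def matching_score(u, v, ppr1, ppr2):
    # One simple similarity: pairs whose PPR values agree score higher.
    return -abs(ppr1.get(u, 0.0) - ppr2.get(v, 0.0))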


Author(s):  
F. Ma ◽  
J. H. Hwang

Abstract In analyzing a nonclassically damped linear system, one common procedure is to neglect those damping terms which are nonclassical and retain the classical ones. This approach is termed the method of approximate decoupling. For large-scale systems, the computational effort of approximate decoupling is at least an order of magnitude smaller than that of the method of complex modes. In this paper, the error introduced by approximate decoupling is evaluated. A tight error bound, which can be computed with relative ease, is given for this method of approximate solution. The role that modal coupling plays in the control of error is clarified. It is shown that, if the normalized damping matrix is strongly diagonally dominant, adequate frequency separation is not necessary to ensure small errors.
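A minimal numerical sketch of approximate decoupling is given below (it illustrates the procedure only, not the paper's error bound; the matrices are assumed example values): transform M q'' + C q' + K q = 0 to undamped modal coordinates and keep only the diagonal of the modal damping matrix.

# Approximate decoupling of a nonclassically damped system (example data).
import numpy as np
from scipy.linalg import eigh

M = np.diag([2.0, 1.0, 1.0])
K = np.array([[ 4.0, -2.0,  0.0],
              [-2.0,  4.0, -2.0],
              [ 0.0, -2.0,  2.0]])
C = np.array([[ 0.4, -0.1,  0.0],
              [-0.1,  0.3, -0.1],
              [ 0.0, -0.1,  0.2]])  # not diagonalized by the undamped modes

# Undamped modes: K * phi = w^2 * M * phi, with mass-normalized eigenvectors.
w2, Phi = eigh(K, M)

C_modal = Phi.T @ C @ Phi              # full modal damping matrix
C_approx = np.diag(np.diag(C_modal))   # keep classical (diagonal) terms only

print("neglected nonclassical coupling:\n", C_modal - C_approx)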

