Polygraph diagrams for holistic visualization of data sets using multiple units of analysis

Author(s):  
E.B. Klemm ◽  
M.K. Iding ◽  
M.E. Crosby
2021 ◽  
pp. 1-14
Author(s):  
T.G. Vargas ◽  
V.A. Mittal

Discrimination has been associated with adverse mental health outcomes, though it is unclear how early in life this association becomes apparent. Implicit emotion regulation, which develops during childhood, is a foundational skill tied to a range of outcomes, yet it has not been tested as a process associated with the mental illness symptoms that often emerge during this sensitive developmental period. Youth aged 9–11 were recruited for the Adolescent Brain Cognitive Development (ABCD) study. Associations between psychotic-like experiences, depressive symptoms, and total discrimination (due to race, ethnicity, nationality, weight, or sexual minority status) were tested, as were associations with implicit emotion regulation measures (emotional updating working memory and emotional inhibitory control). Analyses examined whether the associations with symptoms were mediated by implicit emotion regulation. Discrimination was related to decreased implicit emotion regulation performance and to increased endorsement of depressive symptoms and psychotic-like experiences. Emotional updating working memory performance partially mediated the association between discrimination and psychotic-like experiences, while emotional inhibitory control did not. Discrimination and implicit emotion regulation could serve as putative transdiagnostic markers of vulnerability. The results support the utility of using multiple units of analysis to improve understanding of complex emerging neurocognitive functions and developmentally sensitive periods.
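
The mediation test the abstract describes can be illustrated with a generic product-of-coefficients sketch. This is a minimal illustration on simulated data, assuming a single mediator and ordinary least squares paths; it is not the ABCD study's actual model, and all variable names are hypothetical stand-ins.

```python
# Generic product-of-coefficients mediation sketch (X -> M -> Y), illustrating
# the kind of test the abstract describes; simulated data, not ABCD data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
discrimination = rng.normal(size=n)                  # X: predictor
emo_wm = -0.4 * discrimination + rng.normal(size=n)  # M: emotional working memory
symptoms = 0.3 * discrimination - 0.5 * emo_wm + rng.normal(size=n)  # Y: outcome

# Path a: X -> M.
a = sm.OLS(emo_wm, sm.add_constant(discrimination)).fit().params[1]

# Paths b (M -> Y) and c' (direct X -> Y), from regressing Y on both X and M.
xm = sm.add_constant(np.column_stack([discrimination, emo_wm]))
fit_y = sm.OLS(symptoms, xm).fit()
c_prime, b = fit_y.params[1], fit_y.params[2]

print("indirect effect (a*b):", a * b, "direct effect (c'):", c_prime)
```

In practice the indirect effect a*b would be tested with bootstrapped confidence intervals rather than read off point estimates.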


Author(s):  
Anna Ursyn ◽  
Edoardo L'Astorina

This chapter discusses ways in which professionals, researchers, and users representing various knowledge domains collect and visualize big data sets. First, it describes communication through the senses as a basis for visualization techniques, computational solutions for enhancing the senses, and ways technology can extend them. The next part discusses ideas behind the visualization of data sets and considers what visualization is and what it is not. Further discussion relates to data visualization through art: as visual solutions to science- and mathematics-related problems, as documentation of objects and events, and as a testimony to thoughts, knowledge, and meaning. Learning and teaching through data visualization is the concluding theme of the chapter. Edoardo L'Astorina provides a visual analysis of best practices in visualization: a Google Maps overlay showing, in real time, the arrival times of all buses in the user's area based on their location, and a visual representation of all the Tweets in the world about TfL (Transport for London) tube lines, used to predict disruptions.


2018 ◽  
Vol 7 (3.12) ◽  
pp. 239
Author(s):  
Chitransh Rajesh ◽  
Yash Jain ◽  
J Jayapradha

Data analytics is the process of analyzing unprocessed data to draw conclusions by studying and inspecting patterns in the data. Several algorithms and conceptual methods are typically applied to derive legitimate and accurate results. Efficient data handling is important for interactive visualization of data sets. Building on recent research and analytical theories on column-oriented database management systems, we are developing a new data engine using R and Tableau to predict airport trends. The engine uses univariate datasets (for example, the Perth Airport Passenger Movement dataset and the Newark Airport Cargo Stats dataset) to analyze and predict trends. Analysis and prediction are performed using time series analysis, with an ARIMA model fitted for each module. The modules are developed in RStudio, while Tableau is used for interactive visualization and end-user report generation. The Airport Trends Analytics Engine integrates R with Tableau 10.4 and is optimized for use in desktop and server environments.
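
The paper's pipeline is built in R and Tableau; as a rough sketch of the same univariate ARIMA workflow, the following Python/statsmodels version shows the shape of the computation. The file name, column names, and the (1, 1, 1) order are assumptions for illustration, not values from the paper.

```python
# Minimal ARIMA forecasting sketch, illustrating the kind of univariate
# time-series workflow the abstract describes (the authors use R, not Python).
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Load a univariate monthly series, e.g. passenger counts indexed by month.
# The CSV path and column names are hypothetical stand-ins.
series = pd.read_csv(
    "perth_passengers.csv",
    index_col="month",
    parse_dates=["month"],
).squeeze("columns")

# Fit a simple ARIMA(p, d, q) model; the (1, 1, 1) order is illustrative,
# not the order selected in the paper.
fitted = ARIMA(series, order=(1, 1, 1)).fit()

# Forecast the next 12 months for the trend report.
print(fitted.forecast(steps=12))
```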


Author(s):  
Mark F. St. John ◽  
Woodrow Gustafson ◽  
April Martin ◽  
Ronald A. Moore ◽  
Christopher A. Korkos

Enterprises share a wide variety of data with different partners. Tracking the risks and benefits of this data sharing is important for avoiding unwarranted risks of data exploitation. Data sharing risk can be characterized as a combination of trust in data sharing partners not to exploit shared data and the sensitivity, or potential for harm, of the data. Data sharing benefits can be characterized as the value likely to accrue to the enterprise from sharing the data by making the enterprise's objectives more likely to succeed. We developed a risk visualization concept called a risk surface to support users monitoring for high risks and poor risk-benefit trade-offs. The risk surface design was evaluated in two successive focus group studies conducted with human factors professionals. Across the two studies, the design was improved and ultimately rated as highly useful. A risk surface needs to 1) convey which data, as joined data sets, are shared with which partners, 2) convey the degree of risk due to sharing that data, 3) convey the benefits of the data sharing and the trade-off between risk and benefits, and 4) be easy to scan at scale, since enterprises are likely to share many different types of data with many different partners.
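
The abstract characterizes risk as a combination of partner trust and data sensitivity but does not specify the combination function. Below is a minimal sketch of a risk-surface-style display, assuming the toy rule risk = sensitivity × (1 − trust) and invented partner and dataset names; the actual design in the paper is a richer interactive visualization.

```python
# Toy sketch of a "risk surface": a partner-by-dataset grid colored by risk.
# The rule risk = sensitivity * (1 - trust) is an assumption for illustration;
# the paper only says risk combines trust and sensitivity.
import numpy as np
import matplotlib.pyplot as plt

partners = ["Partner A", "Partner B", "Partner C"]   # hypothetical
datasets = ["HR records", "Sales data", "Sensor logs"]

trust = np.array([[0.9, 0.9, 0.9],                   # rows: partners
                  [0.6, 0.6, 0.6],
                  [0.3, 0.3, 0.3]])
sensitivity = np.array([[0.8, 0.4, 0.2]] * 3)        # columns: datasets

risk = sensitivity * (1.0 - trust)                   # assumed combination rule

fig, ax = plt.subplots()
im = ax.imshow(risk, cmap="Reds", vmin=0, vmax=1)
ax.set_xticks(range(len(datasets)))
ax.set_xticklabels(datasets)
ax.set_yticks(range(len(partners)))
ax.set_yticklabels(partners)
fig.colorbar(im, label="sharing risk")
plt.show()
```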


Author(s):  
Dian Pratiwi ◽  
Dwi Martani

The Audit Board of the Republic of Indonesia (BPK) has issued findings on tax receivables over the last seven years, indicating that the Directorate General of Taxes (DGT) has not managed tax receivables properly. This study aims to analyze problems in the administration of tax receivables at the DGT, to benchmark against other countries, and to suggest solutions to these problems. The study uses a qualitative method with case studies at the DGT and several other tax authorities as multiple units of analysis. Data collection was carried out through interviews and documentation. The results reveal several problems in the administration of tax receivables at the DGT relating to the system, the Taxpayer Account application, regulation, human resources, and the exchange of information. Suggested solutions include integrating the DGT's existing systems, developing the Taxpayer Account, improving the quality of human resources and conducting regular supervision, revising PER-08/PJ./2009 and affirming rules for the DGT's recurring business processes, and building a data exchange system between the DGT and the Tax Court as well as between the DGT and the Directorate General of the Treasury.


Author(s):  
Jeffrey J. Reuer ◽  
Sharon F. Matusik ◽  
Jessica Jones

The role of collaboration in entrepreneurship spans different contexts, varied theoretical perspectives, and multiple units of analysis. This chapter introduces The Oxford Handbook of Entrepreneurship and Collaboration with an overview of the important role that collaboration plays in value creation, resource acquisition, and the development of entrepreneurial ventures. It is organized in two ways. First, it summarizes each chapter to direct readers to the material of greatest relevance and interest to them. Second, it identifies important research questions that can further the connections between the fields of entrepreneurship and interorganizational collaboration.


Author(s):  
Alfredo Cuzzocrea ◽  
Svetlana Mansmann

The problem of efficiently visualizing multidimensional data sets produced by scientific and statistical tasks and processes is becoming increasingly challenging and is attracting the attention of a wide multidisciplinary community of researchers and practitioners. Essentially, the problem consists in visualizing multidimensional data sets while capturing the dimensionality of the data, which is the most difficult aspect to handle. Human analysts interacting with high-dimensional data often experience disorientation and cognitive overload. The analysis of high-dimensional data is a challenge encountered in a wide range of real-life applications, such as (i) biological databases storing massive gene and protein data sets, (ii) real-time monitoring systems accumulating data sets produced by multiple, multi-rate streaming sources, and (iii) advanced Business Intelligence (BI) systems collecting business data for decision-making purposes.

Traditional DBMS front-end tools, which are usually tuple-bag-oriented, are completely inadequate for the requirements of interactive exploration of high-dimensional data sets, for two major reasons: (i) DBMSs implement the OLTP paradigm, which is optimized for transaction processing and deliberately neglects the dimensionality of data; and (ii) DBMS operators are very limited, offering nothing beyond the capability of conventional SQL statements, which makes such tools very inefficient for visualizing and, above all, interacting with multidimensional data sets embedding a large number of dimensions.

Despite the practical relevance of the problem of visualizing multidimensional data sets, the literature in this field is rather scarce: for many years the problem was of relevance to life science research communities only, and their interaction with the computer science research community was insufficient. Following the enormous growth of scientific disciplines like bioinformatics, the problem has since become a fundamental topic of academic as well as industrial computer science research. At the same time, a number of proposals dealing with the multidimensional data visualization problem have appeared in the literature, stimulating novel and exciting application fields such as the visualization of data mining results generated by challenging techniques like clustering and association rule discovery. These issues underline the high relevance and attractiveness of the problem, at present and in the future, with challenging research findings accompanied by significant spin-offs in the Information Technology (IT) industry.

A promising way to tackle the problem is offered by well-known OLAP techniques (Codd et al., 1993; Chaudhuri & Dayal, 1997; Gray et al., 1997), which focus on obtaining very efficient representations of multidimensional data sets, called data cubes. This has led to the research field known in the literature as OLAP Visualization or Visual OLAP; in the remainder of the article the two terms are used interchangeably.
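
The data cube at the heart of OLAP (Gray et al., 1997) generalizes GROUP BY to every subset of the dimensions. A minimal sketch of the idea in Python/pandas, using an invented two-dimension fact table; real OLAP engines precompute and index these aggregates rather than recomputing them per query.

```python
# Minimal sketch of the "data cube" idea: aggregate a fact table over every
# subset of its dimensions. The toy sales data and column names are invented
# for illustration.
from itertools import combinations
import pandas as pd

facts = pd.DataFrame({
    "region":  ["EU", "EU", "US", "US"],
    "product": ["A",  "B",  "A",  "B"],
    "sales":   [10,   20,   30,   40],
})

dimensions = ["region", "product"]

# One aggregate per dimension subset:
# (), (region,), (product,), (region, product)
for r in range(len(dimensions) + 1):
    for dims in combinations(dimensions, r):
        if dims:
            cube_slice = facts.groupby(list(dims))["sales"].sum()
        else:
            cube_slice = facts["sales"].sum()  # grand total
        print(f"GROUP BY {dims or 'ALL'}:\n{cube_slice}\n")
```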


2000 ◽  
Author(s):  
Roman Y. Novoselov ◽  
Dale A. Lawrence ◽  
Lucy Y. Pao

Haptic rendering of data on irregular grids requires additional data storage and real-time computation compared to same-size data sets on regular grids. At the same time, it is important to keep computation time small to avoid noticeable artifacts in haptic rendering. When the rendering algorithm is implemented on DSP processors, memory is often much smaller than on contemporary general-purpose computers. By appropriately partitioning algorithms between preprocessing and real-time computation, and by quantizing and packing data, we show how scientific visualization of data sets on the order of one million elements can be accomplished through real-time haptic rendering.
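
The abstract does not give the exact quantization scheme; as a rough illustration of the quantize-and-pack step, the sketch below maps float samples onto 16-bit integers over their observed range, halving memory at the cost of a bounded reconstruction error. The uniform 16-bit encoding is an assumption, not the authors' actual format.

```python
# Rough illustration of quantizing and packing, in the spirit of the
# memory-saving step the abstract describes; the 16-bit uniform scheme
# here is an assumption, not the authors' actual encoding.
import numpy as np

def quantize_u16(values: np.ndarray) -> tuple[np.ndarray, float, float]:
    """Map float samples onto 16-bit integers over their observed range."""
    lo, hi = float(values.min()), float(values.max())
    scale = (hi - lo) / 65535.0 if hi > lo else 1.0
    packed = np.round((values - lo) / scale).astype(np.uint16)
    return packed, lo, scale

def dequantize_u16(packed: np.ndarray, lo: float, scale: float) -> np.ndarray:
    """Recover approximate float samples at render time."""
    return packed.astype(np.float32) * scale + lo

# One million float32 samples, as in the data sets the abstract mentions.
field = np.random.default_rng(0).normal(size=1_000_000).astype(np.float32)
packed, lo, scale = quantize_u16(field)

print(packed.nbytes / field.nbytes)  # 0.5: half the memory footprint
print(np.abs(dequantize_u16(packed, lo, scale) - field).max())  # small error
```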


2005 ◽  
Vol 13 (4) ◽  
pp. 345-364 ◽  
Author(s):  
Jeffrey B. Lewis ◽  
Drew A. Linzer

Researchers often use as dependent variables quantities estimated from auxiliary data sets. Estimated dependent variable (EDV) models arise, for example, in studies where counties or states are the units of analysis and the dependent variable is an estimated mean, proportion, or regression coefficient. Scholars fitting EDV models have generally recognized that variation in the sampling variance of the observations on the dependent variable will induce heteroscedasticity. We show that the most common approach to this problem, weighted least squares, will usually lead to inefficient estimates and underestimated standard errors. In many cases, OLS with White's or Efron's heteroscedasticity-consistent standard errors yields better results. We also suggest two simple alternative FGLS approaches that are more efficient and yield consistent standard error estimates. Finally, we apply the various alternative estimators to a replication of Cohen's (2004) cross-national study of presidential approval.
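
The estimators being compared can be sketched on simulated EDV data: OLS with White's robust standard errors versus WLS weighted by the inverse sampling variance. This is purely illustrative and does not reproduce the paper's analysis or its two FGLS estimators.

```python
# Toy comparison of estimators the abstract discusses: WLS weighted by the
# sampling variance of an estimated dependent variable versus OLS with
# heteroscedasticity-consistent (robust) standard errors. Simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 200
x = rng.normal(size=n)

# True dependent variable, observed only with unit-specific sampling error
# whose variance differs across units (the EDV setting).
sampling_var = rng.uniform(0.1, 2.0, size=n)
y_true = 1.0 + 0.5 * x + rng.normal(scale=0.3, size=n)
y_obs = y_true + rng.normal(scale=np.sqrt(sampling_var))

X = sm.add_constant(x)

# OLS with White's robust (HC1) standard errors.
ols = sm.OLS(y_obs, X).fit(cov_type="HC1")

# WLS weighting each observation by the inverse sampling variance.
wls = sm.WLS(y_obs, X, weights=1.0 / sampling_var).fit()

print("OLS (robust SEs):", ols.params, ols.bse)
print("WLS:             ", wls.params, wls.bse)
```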

