"The Complete Painter": Malangatana's Approach to Painting, 1959–75

Author(s):  
Allison Langley ◽  
Katrina Rush ◽  
Julie Simek

Based on conservation research associated with an exhibition of early work by pioneering Mozambican artist Malangatana Ngwenya, this essay describes his painting methodologies as revealed by material evidence and state-of-the-art visualization techniques.

2006 ◽  
Vol 12 (2) ◽  
pp. 189-192 ◽  
Author(s):  
Seth Bullock ◽  
Tom Smith ◽  
Jon Bird

Visualization has an increasingly important role to play in scientific research. Moreover, visualization has a special role to play within artificial life as a result of the informal status of its key explananda: life and complexity. Both are poorly defined but apparently identifiable via raw inspection. Here we concentrate on how visualization techniques might allow us to move beyond this situation by facilitating increased understanding of the relationships between an ALife system's (low-level) composition and organization and its (high-level) behavior. We briefly review the use of visualization within artificial life, and point to some future developments represented by the articles collected within this special issue.


2020 ◽  
Author(s):  
Lace Padilla ◽  
Matthew Kay ◽  
Jessica Hullman

While uncertainty is present in most data analysis pipelines, reasoning with uncertainty is challenging for novices and experts alike. Fortunately, researchers are making significant advancements in the communication of uncertainty. In this chapter, we detail new visualization methods and emerging cognitive theories that describe how we reason with visual representations of uncertainty. We describe the best practices in uncertainty visualization and the psychology behind how each approach supports viewers' judgments. This chapter begins with a brief overview of conventional and state-of-the-art uncertainty visualization techniques. Then we take an in-depth look at the pros and cons of each technique using cognitive theories that describe why and how the mind processes different types of uncertainty information.
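
One technique covered in this line of work is the quantile dotplot, which renders a predictive distribution as a small number of equally likely outcomes. A minimal sketch, assuming a hypothetical normal predictive distribution (the parameters and dot count are illustrative, not taken from the chapter):

```python
# Minimal quantile dotplot sketch: show a predictive distribution as 20 equally
# likely dots stacked into bins. Distribution parameters are hypothetical.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

n_dots = 20                                  # number of equally likely outcomes to show
dist = stats.norm(loc=12.0, scale=3.0)       # placeholder predictive distribution
quantiles = dist.ppf((np.arange(n_dots) + 0.5) / n_dots)

# Stack dots into integer bins so nearby quantiles pile up vertically.
bins = np.round(quantiles).astype(int)
x, y, counts = [], [], {}
for b in bins:
    counts[b] = counts.get(b, 0) + 1
    x.append(b)
    y.append(counts[b])

plt.scatter(x, y, s=120)
plt.yticks([])
plt.xlabel("Predicted value")
plt.title("Quantile dotplot (20 equally likely outcomes)")
plt.show()
```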


2021 ◽  
Author(s):  
Shaw-Hwa Lo ◽  
Yiqiao Yin

In the field of eXplainable AI (XAI), robust "black-box" algorithms such as convolutional neural networks (CNNs) are known for high prediction performance. However, explaining and interpreting these algorithms still requires innovation in understanding the influential and, more importantly, explainable features that directly or indirectly affect predictive performance. A number of methods in the literature focus on visualization techniques, but the concepts of explainability and interpretability still require rigorous definition. In view of these needs, this paper proposes an interaction-based methodology, the Influence Score (I-score), to screen out noisy and non-informative variables in images, thereby producing explainable and interpretable features that are directly associated with predictivity. We apply the proposed method to a real-world pneumonia chest X-ray image data set and produce state-of-the-art results. We demonstrate how to apply the approach to more general big data problems by improving explainability and interpretability without sacrificing prediction performance. This contribution opens a novel angle that moves the community closer to future XAI pipelines.
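
As a rough illustration of the partition-based idea behind the I-score, the sketch below computes an influence-score-style statistic for a candidate variable subset; the exact normalization and discretization used by the authors may differ, and the toy data is invented:

```python
# Influence-score-style screening sketch: partition observations by the joint
# values of a discrete variable subset and measure how strongly the partition
# separates the response. Normalization here is illustrative only.
import numpy as np

def influence_score(X_subset, y):
    X_subset = np.asarray(X_subset)
    y = np.asarray(y, dtype=float)
    n, y_bar = len(y), np.asarray(y, dtype=float).mean()

    # Group rows by their joint value pattern (assumes discretized features).
    _, group_ids = np.unique(X_subset, axis=0, return_inverse=True)

    score = 0.0
    for g in np.unique(group_ids):
        mask = group_ids == g
        n_j = mask.sum()
        score += n_j ** 2 * (y[mask].mean() - y_bar) ** 2
    return score / n

# Toy usage: one informative binary feature, one pure-noise feature.
rng = np.random.default_rng(0)
informative = rng.integers(0, 2, 500)
noise = rng.integers(0, 2, 500)
labels = (informative ^ (rng.random(500) < 0.1)).astype(float)  # noisy copy

print(influence_score(informative.reshape(-1, 1), labels))  # high score
print(influence_score(noise.reshape(-1, 1), labels))        # near zero
```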


2013 ◽  
Vol 12 (2) ◽  
pp. 133-162 ◽  
Author(s):  
Mathias Frisch ◽  
Raimund Dachselt

Visual representations of node-link diagrams are very important for the software development process. In many situations, large diagrams have to be explored, whereby diagram elements of interest are often clipped from the viewport and are therefore not visible. Thus, in state-of-the-art modeling tools, navigation is accompanied by time-consuming panning and zooming. One solution to this problem is offered by offscreen visualization techniques. Usually, they indicate the existence and direction of clipped elements by overlays at the border of the viewport. In this article, we contribute the application of offscreen visualization techniques to the domain of node-link diagrams in general and to Unified Modeling Language class diagrams in particular. The basic idea of our approach is to represent offscreen nodes by proxy elements located within an interactive border region around the viewport. The proxies show information about the associated offscreen nodes and can be used to quickly navigate to the respective node. In addition, we contribute techniques that preserve the routing of edges during panning and zooming and present strategies to make our approach scalable to large diagrams. We conducted a formative pilot study of our first prototype. Based on the observations made during the evaluation, we suggest how particular techniques should be combined. Finally, we ran a user evaluation to compare our technique with a traditional zoom+pan interface. The results showed that our approach is significantly faster for exploring relationships within diagrams than state-of-the-art interfaces. We also found that the offscreen visualization combined with an additional overview window did not improve orientation within an unknown diagram; however, an overview should still be offered as cognitive support.
CR categories: D.2.2 [Software Engineering]: Design Tools and Techniques—User Interface; H.5.2 [Information Interfaces and Presentation]: User Interfaces—Graphical User Interfaces. General terms: Design, Human Factors
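
The core geometric step behind proxy placement, clamping the ray from the viewport center toward an offscreen node against the viewport rectangle, can be sketched as follows; the coordinates and example viewport are illustrative, not taken from the prototype:

```python
# Project an offscreen node onto the viewport border so its proxy appears in the
# direction of the clipped element.
def proxy_position(node, viewport):
    """node: (x, y) in world coordinates; viewport: (xmin, ymin, xmax, ymax).
    Returns the point on the viewport border where the proxy should be drawn."""
    xmin, ymin, xmax, ymax = viewport
    cx, cy = (xmin + xmax) / 2.0, (ymin + ymax) / 2.0
    dx, dy = node[0] - cx, node[1] - cy
    if dx == 0 and dy == 0:
        return cx, cy  # node is at the center, nothing to project

    # Scale the direction vector so it just touches the nearer border.
    scale = float("inf")
    if dx != 0:
        scale = min(scale, (xmax - cx) / abs(dx))
    if dy != 0:
        scale = min(scale, (ymax - cy) / abs(dy))
    return cx + dx * scale, cy + dy * scale

# Example: a node far to the upper right of an 800x600 viewport.
print(proxy_position((2000, 1500), (0, 0, 800, 600)))   # -> (800.0, 600.0)
```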


2018 ◽  
Vol 24 (5) ◽  
pp. 649-676 ◽  
Author(s):  
XURI TANG

This paper reviews the state of the art of one emergent field in computational linguistics: semantic change computation. It summarizes the literature by proposing a framework that identifies five components in the field: diachronic corpus, diachronic word sense characterization, change modelling, evaluation, and data visualization. Despite its potential, the review shows that current studies mainly focus on testing hypotheses of semantic change from theoretical linguistics and that several core issues remain to be tackled: the need for diachronic corpora for languages other than English, the comparison and development of approaches to diachronic word sense characterization and change modelling, the need for comprehensive evaluation data, and further exploration of data visualization techniques for hypothesis justification.
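
For the change modelling component, one widely used recipe (not necessarily the one evaluated in the review) aligns embeddings from two time slices with orthogonal Procrustes and scores change as cosine distance; the sketch below uses placeholder vectors in place of embeddings trained on real diachronic corpora:

```python
# Diachronic change scoring sketch: align two embedding spaces, then measure how
# far each word's vector moved. Toy random vectors stand in for trained embeddings.
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(42)
vocab = ["gay", "broadcast", "cell", "mouse"]
emb_1900 = {w: rng.normal(size=50) for w in vocab}   # placeholder early vectors
emb_2000 = {w: rng.normal(size=50) for w in vocab}   # placeholder recent vectors

A = np.stack([emb_1900[w] for w in vocab])
B = np.stack([emb_2000[w] for w in vocab])

# Rotate the earlier space onto the later one using the shared vocabulary.
R, _ = orthogonal_procrustes(A, B)
A_aligned = A @ R

def cosine_distance(u, v):
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

for w, a, b in zip(vocab, A_aligned, B):
    print(f"{w:10s} change score: {cosine_distance(a, b):.3f}")
```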


Author(s):  
P. Pushpalatha

Abstract: Optical coherence tomography angiography (OCTA) is an imaging modality that can be applied in ophthalmology to provide detailed visualization of the perfusion of vascular networks in the eye. Compared to previous state-of-the-art dye-based imaging, such as fluorescein angiography, OCTA is non-invasive and time efficient, and it allows for the examination of the retinal vasculature in 3D. These advantages of the technique, combined with the good usability of commercial devices, led to quick adoption of the new modality in clinical routine. However, the interpretation of OCTA data is not without problems: commonly observed image artifacts and the rather involved algorithmic details of OCTA signal construction can make the clinical assessment of OCTA exams challenging. In this paper, we describe the technical background of OCTA and discuss the data acquisition process, common image visualization techniques, as well as 3D-to-2D projection using high-pass filtering, a ReLU activation function, and a convolutional neural network (CNN) for more accurate segmentation results.
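
The paper describes its projection pipeline only at a high level, so the following is merely an illustrative sketch of turning a 3D OCTA volume into a 2D en-face image with a depth-wise high-pass filter, a ReLU-style clamp, and a maximum intensity projection; the synthetic volume stands in for real OCTA data and the CNN segmentation stage is omitted:

```python
# 3D-to-2D en-face projection sketch for an OCTA-like volume.
import numpy as np
from scipy.ndimage import gaussian_filter1d

volume = np.random.rand(64, 128, 128)            # synthetic (depth, y, x) volume

# High-pass along depth: subtract a smoothed (low-frequency) version of the signal.
low_pass = gaussian_filter1d(volume, sigma=5, axis=0)
high_pass = volume - low_pass

# ReLU-style clamp keeps only positive (flow-like) deviations.
high_pass = np.maximum(high_pass, 0.0)

# Maximum intensity projection collapses depth into a 2D en-face image.
en_face = high_pass.max(axis=0)
print(en_face.shape)                              # (128, 128)
```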


Author(s):  
Gabriel Zaid ◽  
Lilian Bossuet ◽  
Amaury Habrard ◽  
Alexandre Venelli

The side-channel community recently investigated a new approach, based on deep learning, to significantly improve profiled attacks against embedded systems. Previous works have shown the benefit of using convolutional neural networks (CNNs) to limit the effect of some countermeasures such as desynchronization. Compared with template attacks, deep learning techniques can deal with trace misalignment and the high dimensionality of the data, and pre-processing is no longer mandatory. However, the performance of attacks depends to a great extent on the choice of each hyperparameter used to configure a CNN architecture. Hence, we cannot fully harness the potential of deep neural networks without a clear understanding of the network's inner workings. To reduce this gap, we propose to clearly explain the role of each hyperparameter during the feature selection phase using specific visualization techniques, including Weight Visualization, Gradient Visualization, and Heatmaps. By highlighting which features are retained by filters, heatmaps come in handy when a security evaluator tries to interpret and understand the efficiency of a CNN. We propose a methodology for building efficient CNN architectures in terms of attack efficiency and network complexity, even in the presence of desynchronization. We evaluate our methodology using public datasets with and without desynchronization. In each case, our methodology outperforms previous state-of-the-art CNN models while significantly reducing network complexity. Our networks are up to 25 times more efficient than the previous state of the art, while their complexity is up to 31810 times smaller. Our results show that CNNs do not need to be very complex to perform well in the side-channel context.
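
As an illustration of gradient visualization in this setting, the sketch below backpropagates the predicted class score of a tiny, hypothetical 1D CNN to obtain a per-sample saliency map over a power trace; the architecture and the random trace are placeholders, not the models evaluated in the paper:

```python
# Gradient-visualization (saliency) sketch for a 1D CNN over a side-channel trace.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=11, padding=5),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(8, 256),           # e.g. 256 hypotheses for one key byte
)

trace = torch.randn(1, 1, 700, requires_grad=True)   # one power trace, 700 samples
scores = model(trace)
target_class = scores.argmax(dim=1).item()

# Backpropagate the predicted class score to get a per-sample relevance map.
scores[0, target_class].backward()
saliency = trace.grad.abs().squeeze()
print(saliency.shape)            # torch.Size([700]): one relevance value per sample
```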


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Izabela Zgłobicka ◽  
Jürgen Gluch ◽  
Zhongquan Liao ◽  
Stephan Werner ◽  
Peter Guttmann ◽  
...  

The diatom shell is an example of a complex siliceous structure and a suitable model for demonstrating the process of digging into the third dimension with modern visualization techniques. This paper demonstrates the importance of a comprehensive multi-length-scale approach to bio-structures/materials using state-of-the-art imaging techniques. Imaging of diatoms with visible light, electron, and X-ray microscopy provides deeper insight into the morphology of their frustules.


Computers ◽  
2019 ◽  
Vol 8 (1) ◽  
pp. 17 ◽  
Author(s):  
Mohammad Alharbi ◽  
Robert Laramee

Text visualization is a rapidly growing sub-field of information visualization and visual analytics. Many approaches and techniques are introduced every year to address a wide range of challenges and analysis tasks, enabling researchers from different disciplines to obtain leading-edge knowledge from digitized collections of text. This can be challenging, particularly when the data is massive. Additionally, the sources of digital text have spread substantially in the last decades in various forms, such as web pages, blogs, Twitter, email, electronic publications, and digitized books. In response to the explosion of text visualization research literature, the first text visualization survey article was published in 2010. Furthermore, there is a growing number of surveys that review existing techniques and classify them based on text research methodology. In this work, we aim to present the first Survey of Surveys (SoS) that reviews all of the surveys and state-of-the-art papers on text visualization techniques and provides an SoS classification. We study and compare the 14 surveys and categorize them into five groups: (1) document-centered, (2) user task analysis, (3) cross-disciplinary, (4) multi-faceted, and (5) satellite-themed. We provide survey recommendations for researchers in the field of text visualization. The result is a unique, valuable starting point and overview of the current state of the art in text visualization research literature.


2011 ◽  
Vol 11 (2) ◽  
pp. 561-592 ◽  
Author(s):  
Benedikt Szmrecsanyi ◽  
Christoph Wolk

This paper is concerned with sketching future directions for corpus-based dialectology. We advocate a holistic approach to the study of geographically conditioned linguistic variability, and we present a suitable methodology, 'corpus-based dialectometry', in exactly this spirit. Specifically, we argue that in order to live up to the potential of the corpus-based method, practitioners need to (i) abandon their exclusive focus on individual linguistic features in favor of the study of feature aggregates, (ii) draw on computationally advanced multivariate analysis techniques (such as multidimensional scaling, cluster analysis, and principal component analysis), and (iii) aid the interpretation of empirical results by marshalling state-of-the-art data visualization techniques. To exemplify this line of analysis, we present a case study which explores joint frequency variability of 57 morphosyntax features in 34 dialects all over Great Britain.
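
A minimal sketch of the aggregate, multivariate step advocated here: project a dialect-by-feature frequency matrix into two dimensions with multidimensional scaling. The random matrix is a placeholder for real per-dialect feature frequencies:

```python
# Dialectometry-style aggregate analysis sketch: distances between dialect feature
# profiles, projected to 2D with MDS for visualization.
import numpy as np
from sklearn.manifold import MDS
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(1)
n_dialects, n_features = 34, 57
freq = rng.random((n_dialects, n_features))        # placeholder frequency matrix

# Euclidean distances between z-scored feature profiles.
profiles = (freq - freq.mean(axis=0)) / freq.std(axis=0)
distances = squareform(pdist(profiles, metric="euclidean"))

coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(distances)
print(coords.shape)                                 # (34, 2): one point per dialect
```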

