Towards an Uncertainty-Aware Visualization in the Digital Humanities

Informatics ◽  
2019 ◽  
Vol 6 (3) ◽  
pp. 31 ◽  
Author(s):  
Roberto Therón Sánchez ◽  
Alejandro Benito Santos ◽  
Rodrigo Santamaría Vicente ◽  
Antonio Losada Gómez

As visualization becomes widespread in a broad range of cross-disciplinary academic domains, such as the digital humanities (DH), critical voices have been raised on the perils of neglecting the uncertain character of data in the visualization design process. Visualizations that, purposely or not, obscure or remove uncertainty in its different forms from the scholars' vision may negatively affect the manner in which humanities scholars regard computational methods as useful tools in their daily work. In this paper, we address the issue of uncertainty representation in the context of the humanities from a theoretical perspective, in an attempt to provide the foundations of a framework that allows for the construction of ecological interface designs that expose the computational power of the algorithms at play while, at the same time, respecting the particularities and needs of humanistic research. To this end, we review past uncertainty taxonomies in other domains typically related to the humanities and visualization, such as cartography and GIScience. From this review, we select an uncertainty taxonomy related to the humanities that we link to recent research in visualization for the DH. Finally, we bring into the discussion a novel analytics method developed by other authors, Progressive Visual Analytics, which we argue is a good candidate for resolving the aforementioned difficulties in DH practice.
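To make the idea concrete, the following minimal Python sketch illustrates the progressive computation pattern that Progressive Visual Analytics builds on: a long-running analysis yields intermediate estimates that an interface can render, together with their provisional status, before the computation finishes. All names here are illustrative, not taken from the paper.

```python
# Minimal sketch of the progressive computation pattern behind Progressive
# Visual Analytics (PVA): a long-running computation yields intermediate,
# refinable results that a view can render before the final answer is ready.
# All names here are illustrative assumptions.
from dataclasses import dataclass
from typing import Iterator, Sequence


@dataclass
class PartialResult:
    mean: float   # current estimate
    n_seen: int   # how many items informed it
    done: bool    # whether the estimate is final


def progressive_mean(data: Sequence[float], chunk: int = 1000) -> Iterator[PartialResult]:
    """Yield successively refined estimates instead of one final value."""
    total, n = 0.0, 0
    for i in range(0, len(data), chunk):
        for x in data[i:i + chunk]:
            total += x
            n += 1
        yield PartialResult(mean=total / n, n_seen=n, done=n == len(data))


if __name__ == "__main__":
    data = [float(i % 7) for i in range(10_000)]
    for partial in progressive_mean(data):
        # A real PVA interface would redraw here, exposing the estimate's
        # provisional character (e.g., widening or narrowing error bands).
        print(f"estimate={partial.mean:.3f} after {partial.n_seen} items")
```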

Design Issues ◽  
2021 ◽  
Vol 37 (4) ◽  
pp. 9-22 ◽
Author(s):  
Joachim Knape

This article deals primarily with object design from a production-theoretical perspective. It focuses on the question of the rhetorical achievement of design, i.e., its persuasiveness, which Buchanan and Krippendorff already discussed in 1985. To this day, the relationship between aesthetic and rhetorical calculi in the design process remains controversial in theoretical discussion. The proposed solution: aesthetics and rhetoric combine in the appeal structure (1) at the moment a design is created and (2) at the moment the user decides for an object. In these processes, the design argument results from the combination of an aestheticized gestalt and the rhetorical appeal of an object.


Author(s):  
James E. Dobson

This book seeks to answer the major question arising from the adoption of sophisticated data-science approaches within humanities research: are existing humanities methods compatible with computational thinking? Data-based and algorithmically powered methods present both new opportunities and new complications for humanists. This book takes as its founding assumption that the exploration and investigation of texts and data with sophisticated computational tools can serve the interpretative goals of humanists. At the same time, it assumes that these approaches cannot and will not render obsolete other existing interpretive frameworks. Research involving computational methods, the book argues, should be subject to humanistic modes of critique that deal with questions of power and infrastructure and that are directed toward the field's assumptions and practices. Arguing for a methodologically and ideologically self-aware critical digital humanities, the author contextualizes the digital humanities within the larger neo-liberalizing shifts of the contemporary university in order to resituate the field within a theoretically informed tradition of humanistic inquiry. Bringing the resources of critical theory to bear on computational methods enables humanists to construct an array of compelling and possible humanistic interpretations along multiple dimensions: from the ideological biases informing many commonly used algorithms to the complications of a historicist text mining, from examining the range of feature selection for sentiment analysis to the fantasies of human subjectless analysis activated by machine learning and artificial intelligence.


Information ◽  
2021 ◽  
Vol 12 (11) ◽  
pp. 436 ◽
Author(s):  
Alejandro Benito-Santos ◽  
Michelle Doran ◽  
Aleyda Rocha ◽  
Eveline Wandl-Vogt ◽  
Jennifer Edmond ◽  
...  

The capture, modelling and visualisation of uncertainty has become a hot topic in many areas of science, such as the digital humanities (DH). Fuelled by critical voices within the DH community, DH scholars are becoming more aware of the intrinsic advantages that incorporating the notion of uncertainty into their workflows may bring. Additionally, the increasing availability of ubiquitous, web-based technologies has given rise to many collaborative tools that aim to support DH scholars in performing remote work alongside distant peers from other parts of the world. In this context, this paper describes two user studies that evaluate a taxonomy of textual uncertainty aimed at enabling remote collaboration on DH research objects in a digital medium. Our study focuses on the task of free annotation of uncertainty in texts in two different scenarios, seeking to establish the requirements of the underlying data and uncertainty models that would be needed to implement a hypothetical collaborative annotation system (CAS) that uses information visualisation and visual analytics techniques to ease the cognitive effort these tasks imply. To identify user needs and other requirements, we held two user-driven design experiences with DH experts and lay users, focusing on the annotation of uncertainty in historical recipes and literary texts. The lessons learned from these experiments are gathered in a series of insights and observations on how these different user groups collaborated to adapt an uncertainty taxonomy to solve the proposed exercises. Furthermore, we extract a series of recommendations and future lines of work that we share with the community in an attempt to establish a common agenda of DH research focused on collaboration around the idea of uncertainty.
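As a rough illustration of the kind of underlying data model such a hypothetical CAS might require, the following Python sketch encodes uncertainty-tagged text spans. The taxonomy categories and field names are our illustrative assumptions, not the ones evaluated in the study.

```python
# Illustrative sketch of an uncertainty-annotation record for a hypothetical
# collaborative annotation system (CAS). The taxonomy categories and field
# names below are assumptions for illustration, not the study's taxonomy.
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class UncertaintyKind(Enum):
    IMPRECISION = "imprecision"        # e.g., vague quantities in a recipe
    INCOMPLETENESS = "incompleteness"  # missing or illegible text
    CREDIBILITY = "credibility"        # doubtful source or transcription
    DISAGREEMENT = "disagreement"      # annotators read the span differently


@dataclass
class UncertaintyAnnotation:
    doc_id: str
    start: int               # character offset where the annotated span begins
    end: int                 # character offset where it ends
    kind: UncertaintyKind
    annotator: str
    confidence: float = 0.5  # annotator's own degree of belief, 0..1
    note: str = ""


@dataclass
class AnnotatedDocument:
    doc_id: str
    text: str
    annotations: List[UncertaintyAnnotation] = field(default_factory=list)

    def overlapping(self, start: int, end: int) -> List[UncertaintyAnnotation]:
        """Return annotations overlapping a span: the basic query a
        visualisation layer needs in order to surface competing readings."""
        return [a for a in self.annotations if a.start < end and a.end > start]
```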


2016 ◽  
Author(s):  
Amanda Visconti

This whitepaper offers an analytic discussion of the process and product for Amanda Visconti's dissertation "How can you love a work, if you don't know it?": Critical Code and Design toward Participatory Digital Editions (dr.AmandaVisconti.com). The introductory section proposes a speculative experiment to test digital edition design theories: "What if we build a digital edition and invite everyone? What if millions of scholars, first-time readers, book clubs, teachers and their students show up and annotate a text with their infinite interpretations, questions, and contextualizations?". Approaching digital editions as Morris Eaves' "problem-solving mechanisms", the project designed, built, and user-tested a digital edition of James Joyce's Ulysses with various experimental interface features: InfiniteUlysses.com. Three areas of research advanced through the project are presented: designing public and participatory edition projects, and whether critical participation is necessary to such projects; designing digital edition functionalities and appearance to serve a participatory audience, and what we learn about such an endeavor through Infinite Ulysses' user experience data; and separating the values of textual scholarship from their embodiments to imagine new types of edition. A review of theoretical and built precedents from textual scholarship, scholarly design and code projects, public and participatory humanities endeavors, and theories around a digital Ulysses grounds the report, followed by an overview of the features of the Infinite Ulysses participatory digital edition. Section 2 discusses existing examples of public participation in digital humanities (DH) projects, Section 3 focuses on digital editions and the design process, Section 4 reimagines the digital edition by separating textual scholarship values from the common embodiments of these values, and the conclusion sums up the interventions of this project and lists next steps for continuing this research. A bibliography and appendices (full texts of user surveys, explanation of the project's dissertational format, wireframes and screenshots from throughout the design process) conclude the report.


Author(s):  
Kim Ebensgaard Jensen

Corpus linguistics has been closely intertwined with digital technology since the introduction of university computer mainframes in the 1960s. Making use of both digitized data in the form of the language corpus and computational methods of analysis involving concordancers and statistics software, corpus linguistics arguably has a place in the digital humanities. Still, it remains obscure and figures only sporadically in the literature on the digital humanities. This article provides an overview of the main principles of corpus linguistics and the role of computer technology in relation to data and method and also offers a bird's-eye view of the history of corpus linguistics with a focus on its intimate relationship with digital technology and how digital technology has impacted the very core of corpus linguistics and shaped the identity of the corpus linguist. Ultimately, the article is oriented towards an acknowledgment of corpus linguistics' alignment with the digital humanities.
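For readers unfamiliar with concordancers, the following toy Python sketch shows the core keyword-in-context (KWIC) operation such software performs. It is an illustration only, not any particular tool's implementation.

```python
# Toy keyword-in-context (KWIC) concordance: the core operation of the
# concordancer software mentioned above. An illustrative sketch, not a tool.
import re
from typing import List


def kwic(text: str, keyword: str, width: int = 30) -> List[str]:
    """List every occurrence of `keyword` with `width` characters of context."""
    lines = []
    for m in re.finditer(rf"\b{re.escape(keyword)}\b", text, re.IGNORECASE):
        left = text[max(0, m.start() - width):m.start()]
        right = text[m.end():m.end() + width]
        lines.append(f"{left:>{width}} [{m.group()}] {right}")
    return lines


if __name__ == "__main__":
    sample = ("Corpus linguistics uses corpora. A corpus is a principled "
              "collection of texts; the corpus linguist queries it with software.")
    for line in kwic(sample, "corpus"):
        print(line)
```

Note that the word-boundary match finds "corpus" but not "corpora", which is exactly the kind of precise, pattern-based querying that distinguishes corpus methods from plain text search.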


2020 ◽  
Author(s):  
Inácio Gomes Medeiros ◽  
André Salim Khayat ◽  
Beatriz Stransky ◽  
Sidney Emanuel Batista dos Santos ◽  
Paulo Pimentel de Assumpção ◽  
...  

This protocol describes the building of a database of SARS-CoV-2 targets for siRNA approaches. Starting from the virus reference genome, we derive sequences 18 to 21 nt long and check their similarity against the human genome, the coding and non-coding transcriptome, and the genomes of related viruses. We also calculate a set of thermodynamic features for those sequences and infer their efficiencies using three different predictors. The protocol has two main phases: in the first, we align sequences against reference genomes; in the second, we extract the features. The duration of the first phase varies with the computational power of the machine and the number of reference genomes, whereas the second phase takes about thirty minutes, depending on the number of cores available. The resulting database aims to speed up the design process by providing a broad set of possible SARS-CoV-2 target sequences and siRNA sequences.
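As a rough sketch of the enumeration step described above, the following Python fragment slides 18-21 nt windows over a genome string and attaches one simple sequence feature (GC content) as a stand-in for the fuller thermodynamic feature set. The alignment and efficiency-prediction steps are out of scope here, and all names are illustrative assumptions.

```python
# Illustrative sketch of the candidate-enumeration step: slide 18-21 nt
# windows over a viral genome string and record one simple feature
# (GC content) per unique candidate. Not the protocol's actual pipeline.
from typing import Dict, Iterator, Tuple


def candidate_targets(genome: str, k_min: int = 18, k_max: int = 21) -> Iterator[Tuple[int, str]]:
    """Yield (position, subsequence) for every window of length k_min..k_max."""
    genome = genome.upper()
    for k in range(k_min, k_max + 1):
        for i in range(len(genome) - k + 1):
            yield i, genome[i:i + k]


def gc_content(seq: str) -> float:
    """Fraction of G/C bases: one of the simplest stability-related features."""
    return (seq.count("G") + seq.count("C")) / len(seq)


if __name__ == "__main__":
    toy_genome = "ATTAAAGGTTTATACCTTCCCAGGTAACAAACC"  # toy fragment for illustration
    features: Dict[str, float] = {}
    for pos, seq in candidate_targets(toy_genome):
        features[seq] = gc_content(seq)
    print(f"{len(features)} unique candidates from a {len(toy_genome)} nt fragment")
```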


Author(s):  
Salbiah Ashaari ◽  
Sherzod Turaev ◽  
M. Izzuddin M. Tamrin ◽  
Abdurahim Okhunov ◽  
Tamara Zhukabayeva

This study focuses on defining a new variant of regulated grammars called multiset controlled grammars and on investigating their computational power. We apply a constructive theoretical approach, the intent of which is to provide new theories based on computational methods, with results presented in the form of examples, lemmas and theorems. In the study, we have found that the multiset is a powerful yet simple control mechanism in regulated rewriting theory. We have proved that multiset controlled grammars are at least as powerful as additive valence grammars and at most as powerful as matrix grammars.
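To give an intuition for this kind of regulation, the following Python sketch implements a simplified valence-style control: each production carries a counter change, and a terminating derivation is accepted only if the accumulated counter is zero. This is an illustrative reconstruction of the general idea, not the paper's exact definition of multiset controlled grammars.

```python
# Illustrative sketch of valence-style regulated rewriting: each rule carries
# a counter change, and a derivation counts only if the counter ends at zero.
# The toy grammar generates {a^n b^n c^n}, a non-context-free language, which
# shows how the control mechanism adds power to context-free rules. This is a
# simplified reconstruction, not the paper's definition.
from typing import List, Set, Tuple

# Each rule: (lhs nonterminal, replacement string, counter change).
Rule = Tuple[str, str, int]

RULES: List[Rule] = [
    ("S", "AB", 0),
    ("A", "aAb", +1),  # each nested a..b pair adds +1
    ("A", "", 0),
    ("B", "cB", -1),   # each c subtracts 1, so #c must equal #(a,b) pairs
    ("B", "", 0),
]


def derivable(target: str, max_steps: int = 25) -> bool:
    """Breadth-first, step-bounded check: is `target` derivable from S with
    the accumulated counter equal to zero?"""
    frontier: Set[Tuple[str, int]] = {("S", 0)}
    for _ in range(max_steps):
        nxt: Set[Tuple[str, int]] = set()
        for sentential, value in frontier:
            if not any(ch.isupper() for ch in sentential):
                if sentential == target and value == 0:
                    return True
                continue  # terminal but wrong string or nonzero counter
            if len(sentential) > len(target) + 2:
                continue  # at most two erasable nonterminals remain: prune
            for lhs, rhs, delta in RULES:
                i = sentential.find(lhs)
                if i != -1:
                    nxt.add((sentential[:i] + rhs + sentential[i + 1:], value + delta))
        frontier = nxt
    return False


if __name__ == "__main__":
    print(derivable("aabbcc"))  # True: +2 from nesting, -2 from two c's
    print(derivable("aabbc"))   # False: counter ends at +1
```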


1989 ◽  
Vol 33 (4) ◽  
pp. 224-228 ◽  
Author(s):  
Robert A. Virzi

A case is made for using low-fidelity prototypes early in the design phase of new services. The rationale for this is based upon (1) a model of how user interface designs progress and (2) a call to expediency. The design process is viewed as the successive application of constraints that serve to prune the space of all user interfaces. Some constraints are external (i.e., placed on the service by limits of technology or cost). Other constraints are derived by application of heuristic design principles. Even after these constraints have been applied, the design is still not fully constrained and the designer must make high-level design decisions. At these choice points, I propose that low-fidelity prototyping is an appropriate means of gathering design information as it is an expedient solution and may serve as a method of testing the central tendency of entire classes of user interfaces.


Buildings ◽  
2021 ◽  
Vol 11 (4) ◽  
pp. 163 ◽
Author(s):  
Ju Hyun Lee ◽  
Michael J. Ostwald

The design of a building façade has a significant impact on the way people respond to it physiologically and behaviourally, yet few methods are available to help an architect understand such impacts during the design process. Thus, this paper examines the viability of using two computational methods to examine potential visual stimulus-sensation relationships in façade design. The first method, fractal analysis, is used to holistically measure the visual stimuli of a design. This paper describes both the box counting (density) and differential box counting (intensity) approaches to determining the fractal dimension (D) in architecture. The second method, visual attention simulation, is used to explore pre-attentive processing and sensation in vision. Four measures, D-density (Dd), D-intensity (Di), heat map and gaze sequence, are used to provide quantitative and qualitative indicators of the ways people read different design options. Applying these methods to two façade designs as examples reveals that the D values of a façade image are related to the pre-attentive processing shown in heat map and gaze sequence simulations. The findings are framed as a methodological contribution to the field, but also to the disciplinary knowledge gap about the stimulus-sensation relationship and visual reasoning in design.
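As an illustration of the first method, the following Python sketch (assuming NumPy is available) estimates the box-counting fractal dimension D of a binary image, such as a thresholded façade drawing. The differential box-counting (intensity) variant is omitted, and the box sizes chosen are illustrative.

```python
# Minimal box-counting sketch for estimating the fractal dimension (D) of a
# binary image. D is the slope of log N(s) versus log(1/s), where N(s) is the
# number of boxes of side s containing at least one foreground pixel.
# An illustrative sketch, not the paper's implementation.
import numpy as np


def box_counting_dimension(image: np.ndarray, sizes=(2, 4, 8, 16, 32)) -> float:
    """Estimate D by a linear fit of log N(s) against log(1/s)."""
    counts = []
    for s in sizes:
        n = 0
        for i in range(0, image.shape[0], s):
            for j in range(0, image.shape[1], s):
                if image[i:i + s, j:j + s].any():
                    n += 1
        counts.append(n)
    # Linear fit in log-log space: log N = D * log(1/s) + c
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return float(slope)


if __name__ == "__main__":
    # Toy test: the outline of a square is approximately one-dimensional,
    # so the estimate should come out close to 1.
    img = np.zeros((64, 64), dtype=bool)
    img[8, 8:56] = img[55, 8:56] = img[8:56, 8] = img[8:56, 55] = True
    print(f"estimated D = {box_counting_dimension(img):.2f}")
```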

