Digital Human Sciences: New Objects – New Approaches

Published by Stockholm University Press
ISBN: 9789176351475

Author(s):  
Cecilia Magnusson Sjöberg

A central starting point is that transparency is a condition for privacy in the context of personal data processing, especially when that processing is based on artificial intelligence (AI) methods. A key concept here is openness, which is not, however, equivalent to transparency: an organization may well be governed by principles of openness and yet fail to provide transparency because access rights are insufficient or poorly implemented. Given these hypotheses, the chapter investigates and illuminates ways forward in recognition of algorithms, machine learning, and big data as critical success factors of personal data processing based on AI—that is, if privacy is to be preserved. In these circumstances, the autonomy of technology calls for attention and needs to be challenged from a variety of perspectives. Not least, a legal approach to digital human sciences appears to be a resource worth examining further. This applies, for instance, when data subjects in the public as well as the private sphere are exposed to AI, for better or for worse. Providing what may be referred to as a legal shield between user and application might be one remedy to shortcomings in this context.


Author(s):  
Jonas Andersson Schwarz

Digital media infrastructures give rise to texts that are socially interconnected in various forms of complex networks. These mediated phenomena can be analyzed through methods that trace relational data. Social network analysis (SNA) traces interconnections between social nodes, while natural language processing (NLP) traces intralinguistic properties of the text. These methods can be grouped under the heading "social big data." Empirical and theoretical rigor demands a constructionist understanding of such data. Analysis is inherently perspective-bound; it is rarely a purely objective statistical exercise. Some kind of selection is always made, primarily out of practical necessity. Moreover, the agents observed (network participants producing the texts in question) all tend to make their own encodings, based on observational inferences, situated in the network topology. Recent developments in such methods have, for example, provided social scientific scholars with innovative means to address inconsistencies in comparative surveys in different languages, addressing issues of comparability and measurement equivalence. NLP provides novel, inductive ways of understanding word meanings as a function of their relational placement in syntagmatic and paradigmatic relations, thereby identifying biases in the relative meanings of words. Reflecting on current research projects, the chapter addresses key epistemological challenges in order to improve contextual understanding.
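The distributional idea referred to above—that a word's meaning can be approximated from its relational placement among other words—can be illustrated with a minimal sketch. The toy corpus, window size, and function names below are illustrative assumptions, not part of the chapter; real projects would use trained embedding models on large corpora, but the principle is the same: words used in similar contexts receive similar vectors.

```python
# Minimal sketch of distributional word similarity from co-occurrence counts.
from collections import Counter
from math import sqrt

# Hypothetical toy corpus; real studies would use large text collections.
corpus = [
    "the nurse treats the patient",
    "the doctor treats the patient",
    "the engineer builds the bridge",
]

def cooccurrence_vectors(sentences, window=2):
    """Map each word to a Counter of words appearing within `window` tokens."""
    vectors = {}
    for sentence in sentences:
        tokens = sentence.split()
        for i, word in enumerate(tokens):
            ctx = vectors.setdefault(word, Counter())
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if j != i:
                    ctx[tokens[j]] += 1
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[w] * v[w] for w in set(u) & set(v))
    norm = lambda c: sqrt(sum(n * n for n in c.values()))
    return dot / (norm(u) * norm(v)) if dot else 0.0

vecs = cooccurrence_vectors(corpus)
# "nurse" and "doctor" share contexts, so their similarity exceeds
# the similarity between "nurse" and "engineer".
print(cosine(vecs["nurse"], vecs["doctor"]))
print(cosine(vecs["nurse"], vecs["engineer"]))
```

Comparing such relative similarities across word pairs (e.g., occupations versus gendered terms) is one way biases in the relative meanings of words become measurable.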


Author(s):  
Julia Pennlert ◽  
Björn Ekström ◽  
David Gunnarsson Lorentzen

Computer-assisted tools have introduced new ways to conduct research in the social sciences and the humanities. Digital methods, as an umbrella term for this line of methodology, have introduced new vocabularies that affect research communities across disciplines. The aim of this chapter is to discuss how digital methods can be understood and scrutinized as procedures of collecting, analyzing, visualizing, and interpreting born-digital and digitized material. We problematize how the embrace of digital methods in the research process paves the way for certain knowledge claims. By adopting a teleoptical metaphor to scrutinize three case studies, we discuss the limitations and possibilities of digital methods as a way of conducting science and research. The contribution addresses how and to what extent digital methods direct the researcher's gaze toward particular focal points.


Author(s):  
Amanda Wasielewski ◽  
Anna Dahlgren

Text mining in art history scholarship can tell us about the discipline itself, as well as artistic concerns at any given moment. The aim of this study is to develop and test a strategy for text mining from PDFs of journal articles that have nonstandard formatting and/or use notes rather than full bibliographies for references. While articles in the natural and social sciences typically adhere to standard formats, art history journals employ a variety of formatting styles that make bulk capture of citation and other textual data from the articles challenging. This study outlines a method by which researchers can extract data from journal articles, using a sample set from art history. Once extracted, the data from PDFs can be used to compare frequently used terms across samples and to determine which scholars are most cited in either bibliographies or the main body text of articles. If the structure and layout of individual journals are carefully considered and the data is properly cleaned, a clear picture of the disciplinary influences and dependencies of the scholarship, through citations and key terms, can be obtained.
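The term-comparison step described above can be sketched as follows. This is an illustrative assumption of a downstream pipeline stage, not the study's actual method: the PDF extraction itself would require a library (such as pdfminer.six), and the sample strings, stopword list, and function name below are hypothetical stand-ins for extracted article text.

```python
# Illustrative sketch: comparing frequent terms across article samples,
# assuming article text has already been extracted from PDFs and cleaned.
import re
from collections import Counter

# Hypothetical minimal stopword list; real projects use fuller lists.
STOPWORDS = {"the", "of", "and", "in", "a", "to", "is", "as"}

def term_frequencies(text, top_n=5):
    """Return the `top_n` most frequent non-stopword terms in `text`."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS)
    return counts.most_common(top_n)

# Hypothetical snippets standing in for extracted article text.
sample_a = "The avant garde painting and the modernism of painting"
sample_b = "The digital image and the networked image in contemporary art"

print(term_frequencies(sample_a))
print(term_frequencies(sample_b))
```

Proper cleaning matters here: hyphenation across line breaks, running headers, and footnote markers in the extracted text would all distort the counts if left in place.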


Author(s):  
Johan Jarlbrink

Computers and mobile phones are piling up in archives, libraries, and museums. What kind of objects are they, what can they tell us, and how can we approach them? The aim of this chapter is to exemplify what an investigation of a hard drive entails, the methods needed to conduct it, and what kind of results we can get out of it. To focus the investigation, hard drives are approached as records of everyday media use. The chapter introduces a computer forensic method used as a media ethnographic tool. Computer forensics and media ethnography are rooted in different methodological traditions, but both take an interest in people's routines and the ways they do and organize things. The chapter argues that a hard drive represents a window into the history of new media: into time-specific software, formats, and media use.
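One concrete first step in such an investigation might be an inventory of the file system, grouping files by type and by modification year to get a rough timeline of media use. This is a hedged sketch, not the chapter's method: the function name is hypothetical, and a real forensic workflow would work from a write-blocked or read-only disk image and treat timestamps with caution, since they can be altered by copying or by the operating system.

```python
# Illustrative sketch: inventorying a mounted (read-only) drive as a record
# of media use, grouping files by extension and by modification year.
import os
import datetime
from collections import Counter

def inventory(root):
    """Walk `root` and count files per extension and per modification year."""
    by_ext = Counter()
    by_year = Counter()
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            ext = os.path.splitext(name)[1].lower() or "(none)"
            by_ext[ext] += 1
            # Note: mtimes are fragile evidence; copying can rewrite them.
            mtime = os.path.getmtime(path)
            by_year[datetime.datetime.fromtimestamp(mtime).year] += 1
    return by_ext, by_year
```

Counts of `.doc`, `.mp3`, or `.bmp` files clustered in particular years, for example, can point the ethnographic eye toward the time-specific software and routines the chapter is after.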


Author(s):  
Teresa Cerratto Pargman ◽  
Cormac McGrath

With the growing digitalization of the education sector, the availability of significant amounts of data, "big data," creates possibilities for the use of artificial intelligence technologies to gain valuable insight into how students learn in higher education. Learning analytics technologies are examples of how deep learning algorithms can identify patterns in data and incorporate this "knowledge" into a model that is eventually integrated into the digital platforms used for interacting with students. This chapter introduces learning analytics as an emerging sociotechnical phenomenon in higher education. We situate the promises and expectations associated with learning analytics technologies, map their ties to emerging data-driven practices, and unpack, via examples, the ethical concerns related to such practices. Following this, we discuss three insights that we hope will provoke discussions among educators, researchers, and practitioners in higher education: (1) educational data-driven practices are highly context-sensitive, (2) educational data-driven practices are not synonymous with evidence-based practices, and (3) innovative educational data-driven practices are not sustainable per se. This chapter calls for debating the role of emerging data-driven practices in higher education in relation to academic freedom and the educational values embedded in critical pedagogy.


Author(s):  
Stanley Greenstein

The widespread use of digital technologies within society incorporating elements of artificial intelligence (AI) is increasing at a phenomenal rate. This technology promises a multitude of advantages for the development of society. However, it also entails risks. A characteristic of AI technology is that, in addition to using knowledge from computer science, it is increasingly being combined with insights from within the cognitive sciences. This deeper insight into human behavior combined with technology increases the ability of those who control the technology to not only predict but also manipulate human behavior. A function of the law is to protect individuals and society from risks and vulnerabilities, including those associated with technology. The more complex the technologies applied in society, the more difficult it may be to identify the potential harms associated with the technologies. Consequently, it is worthwhile discussing the extent to which the law protects society from the risks associated with technology. In applying the law, the dominant method is the “traditional legal science method.” It incorporates a dogmatic approach and can be described as the default legal problem-solving mechanism utilized by legal students and practitioners. A question that arises is whether it is equipped to meet the new demands placed on law in an increasingly technocratic society. Attempting to frame the modern risks to society using a traditional legal science method alone is bound to provide an unsatisfactory result in that it fails to provide a complete picture of the problem. Simply put, modern societal problems that arise from the use of new digital technologies are multidimensional in nature and require a multidisciplinary solution. In other words, applying a restrictive legal approach may result in solutions that are logical from a purely legal perspective but that are out of step with reality, potentially resulting in unjust solutions. 
This chapter examines the increased digitalization of society from the legal perspective and also elevates the application of the legal informatics approach as a means of better aligning research within legal science with other disciplines.


Author(s):  
Karolina Uggla

Information visualization has become a prevailing part of our visual culture, a research field, and a line of work for designers. The literature on information visualization is diverse, dominated by handbooks aimed at designers and by illustrated surveys, sometimes with an emphasis on historical examples. This chapter surveys the field of information visualization in order to map out and assess its analytical vocabulary. First, there is a need to refer to some different definitions and general concepts, such as the naming of phases in the visualization process, the naming of different visualization types, and their components. In order for scholars of the human sciences to be able to identify, understand, and interpret information visualizations as visual objects, tools for suitable visual analysis of these objects would be useful. To this end, the chapter explores two particular interpretative frameworks. The first discusses social semiotics as an analytical tool; the second analyzes the ethics and emotional appeal of information visualization.


Author(s):  
Amanda Wasielewski

Digital methodologies that revolve around the study of text have become popular in humanities disciplines such as literature and history. The potential for studying large groups of texts automatically through techniques like text mining has meant that Franco Moretti's "distant reading" has found more and more proponents. Art history, however, presents some unique barriers to the uptake of computational techniques, not least the resistance of art historians, who have raised legitimate concerns about the relevance of such techniques. Many so-called "digital art history" projects focus only on formal characteristics while ignoring context, which does not reflect the nature of art historical study over the last 60 years. The technical challenges of using digital methodologies in the study of art and visual culture have limited the potential benefits of such techniques as well: the methodologies used for images are more complex than text recognition, and there is simply not enough preexisting data to be sorted in this way.


Author(s):  
Jonas Stier

This chapter starts from three assumptions: (1) that digital technologies (DTs) are products of humans and, conversely, that such technologies have effects on and consequences for humans; (2) that DTs have profound, long-term effects on culture and social interaction; and (3) that research on such effects often disregards inherent social and cultural biases in DTs and in discourses on digitalization and innovation. DTs tend to be depicted as "objective" and void of cultural contents and underpinnings. Therefore, and with an emphasis on the usefulness of combining different research methodologies, this chapter sheds light upon a number of discursive blind spots in these domains: technocentrism and normativism, homo- and heterocentrism, ego- and ethnocentrism, and what I call the reversed problem imperative. Drawing upon intercultural communication studies (ICCS), these blind spots are discussed in the light of DTs, scientific theories, and research methodologies. Moreover, the case is made that the digital human sciences (DHV) offer a valuable contribution to the scientific understanding of the manifestations and consequences of digitalization. In particular, this chapter argues for the usefulness of "intermethodological," interdisciplinary, intercultural, and integrative approaches in DHV.

