multimodal corpora
Recently Published Documents

TOTAL DOCUMENTS: 27 (FIVE YEARS: 7)
H-INDEX: 5 (FIVE YEARS: 0)

Author(s): Alena Velichko, Alexey Karpov

In recent years, interest in automatic depression detection has grown within the medical and scientific-technical communities. Depression is one of the most widespread mental illnesses affecting human life. In this review we present and analyze the latest research devoted to depression detection. Basic notions related to the definition of depression are specified, and the review covers both unimodal and multimodal corpora containing recordings of informants diagnosed with depression as well as control groups of non-depressed people. Theoretical and practical studies presenting automated systems for depression detection are reviewed, including both unimodal and multimodal systems. Some of the reviewed systems address the challenge of regression-based classification, predicting the degree of depression severity (non-depressed, mild, moderate and severe), while others solve a binary classification problem, predicting the presence or absence of depression. An original classification of methods for computing informative features for three communicative modalities (audio, video and text) is presented. New methods for depression detection in each modality, and across all modalities combined, are identified. The most popular methods for depression detection in the reviewed studies are neural networks. The survey has shown that the main features of depression are psychomotor retardation, which affects all communicative modalities, and a strong correlation with the affective values of valence, arousal and dominance; an inverse correlation between depression and aggression has also been observed. The discovered correlations confirm the interrelation of affective disorders and human emotional states. A trend observed in many of the reviewed papers is that combining modalities improves the results of depression detection systems.


2021, pp. 263497952110070
Author(s): Tuomo Hiippala

This article discusses the prospects and challenges of combining multimodality theory with distant viewing, a framework recently proposed in the field of digital humanities. This framework advocates the use of computational methods to enable large-scale analysis of visual and multimodal materials, which must nevertheless be supported by theories that explain how these materials are structured. Multimodality theory is well positioned to support this effort by providing descriptive schemas that impose structure on the materials under analysis. The field of multimodality research can also benefit from adopting computational methods, which help to achieve the long-term goal of building large multimodal corpora for empirical research. However, despite their immense potential for multimodality research, computational methods warrant caution, because they involve a number of potentially cascading risks arising from biases inherent in the underlying data and from differing approaches to the phenomenon of multimodality.


Author(s): И.Ю. Владимиров, И.Н. Макаров

There are two common approaches to researching insight: the study of the emotional response to a solution (the Aha! experience) and the study of the restructuring of representations. The relationship between them can be found by comparing the functions they perform relative to each other. The problems typically used for the experimental investigation of insight can be solved in a short amount of time and are highly similar in structure. We believe that such laboratory task designs often lead researchers to miss the moments of impasse and the initial restructuring of the search space. In the current study, using the method of multimodal corpora constructed from individual solutions, we obtained partial confirmation of the key statements of the model of emotional regulation of representational change. According to the model, an insight solution process is accompanied by emotions that regulate the process of representational change. A feeling of impasse is a response to the lack of progress towards the solution. An Aha! experience appears in response to solvers performing actions that bring them substantially closer to the solution of a problem. We believe that these emotional responses are experienced before the solution reaches consciousness and that they motivate the solver to adapt their search space accordingly. The model we propose develops the ideas of Ya.A. Ponomarev on the role of emotions in the regulation of insight problem solving and the model of M. Ollinger and colleagues describing the phases of insight problem solving.


Pragmatics, 2020, pp. 153-159
Author(s): Joan Cutting, Kenneth Fordyce
Keyword(s):

2020, Vol 68 (4), pp. 351-377
Author(s): Jakub Jehlička, Eva Lehečková

Abstract: Aspectuality of events has been shown to be construed through various means in typologically diverse languages, ranging from mainly grammatical devices to conventionalized lexical means. The rise of multimodal studies in linguistics allows incorporating yet another semiotic layer into the description. In this context, we present a cross-linguistic study of multimodal event construals in Czech and English spontaneous conversations, based on multimodal corpora. We follow Croft's (2012) cognitive model of aspectual types in order to take into account the multiple parameters (among which the features of (un)boundedness and directedness are the most prominent) that determine the particular aspectual contour of a verb in a given context. We investigate which feature combinations are associated with the (un)boundedness of corresponding co-speech gestures. The multivariate analysis revealed that in English, gesture boundedness is predicted by the predicate's general aspectual type, whereas in Czech, the more fine-grained features of directedness and incrementality are stronger predictors.


Author(s): Dawn Knight, Svenja Adolphs
Keyword(s):

2019
Author(s): Tian Linger Xu, Kaya de Barbaro, Drew Abney, Ralf Cox

The temporal structure of behavior contains a rich source of information about its dynamic organization, origins, and development. Today, advances in sensing and data storage allow researchers to collect multiple dimensions of behavioral data at a fine temporal scale both in and out of the laboratory, leading to the curation of massive multimodal corpora of behavior. However, along with these new opportunities come new challenges. Theories are often underspecified as to the exact nature of these unfolding interactions, and psychologists have limited ready-to-use methods and training for quantifying structures and patterns in behavioral time series. In this paper, we introduce four techniques to interpret and analyze high-density multimodal behavior data, namely, to: (1) visualize the raw time series; (2) describe the overall distributional structure of temporal events (burstiness calculation); (3) characterize the nonlinear dynamics over multiple timescales with Chromatic and Anisotropic Cross-Recurrence Quantification Analysis (CRQA); and (4) quantify the directional relations among a set of interdependent multimodal behavioral variables with Granger causality. Each technique is introduced in a module with conceptual background, sample data drawn from empirical studies, and ready-to-use Matlab scripts. The code modules showcase each technique's application with detailed documentation to allow more advanced users to adapt them to their own datasets. Additionally, to make our modules more accessible to beginner programmers, we provide a "Programming Basics" module that introduces common functions for working with behavioral time-series data in Matlab. Together, the materials provide a practical introduction to a range of analyses that psychologists can use to discover temporal structure in high-density behavioral data.
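The burstiness measure in point (2) has a compact standard formulation: the coefficient B = (σ − μ)/(σ + μ) computed over inter-event intervals, where μ and σ are the mean and standard deviation of those intervals (Goh and Barabási's measure). The paper's own modules are in Matlab; as a minimal illustrative sketch of the idea only, a Python version might look like this (the function name and input format are assumptions, not the authors' API):

```python
import statistics

def burstiness(event_times):
    """Burstiness coefficient B = (sigma - mu) / (sigma + mu) of the
    inter-event intervals. B approaches -1 for perfectly periodic
    events, is near 0 for Poisson-like events, and approaches 1 for
    highly bursty event trains."""
    # Inter-event intervals between consecutive event timestamps.
    intervals = [t2 - t1 for t1, t2 in zip(event_times, event_times[1:])]
    mu = statistics.mean(intervals)
    sigma = statistics.pstdev(intervals)  # population standard deviation
    return (sigma - mu) / (sigma + mu)

# Perfectly regular events (all intervals equal) give B = -1.
print(burstiness([0, 1, 2, 3, 4, 5]))  # -1.0
```

A highly irregular event train (e.g. clusters of events separated by long gaps) yields a value closer to 1, which is the kind of distributional summary the paper's burstiness module provides for behavioral event streams.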


2017, Vol 3 (3), pp. 306-326
Author(s): Tamás Péter Szabó, Robert A. Troyer

Abstract: In ethnographically oriented linguistic landscape (LL) studies, social spaces are studied in co-operation with research participants, often through mobile encounters such as walking. Talking, walking, photographing and video recording, as well as writing the fieldwork diary, are activities that result in the accumulation of heterogeneous, multimodal corpora. We analyze data from a Hungarian school ethnography project to reconstruct fieldwork encounters and to examine embodiment, the handling of devices (e.g. the photo camera) and verbal interaction in exploratory, participant-led walking tours. Our analysis shows that situated practices of embodied conduct and verbal interaction blur the boundaries between observation and observers; thus LL research is not only about space- and place-making and sense-making routines, but the fieldwork encounters are themselves transformative and contribute to space- and place-making. Our findings provide insight for ethnographic researchers and enrich the already robust qualitative and quantitative strategies employed in the field.

