Multimodal Communication
Recently Published Documents

TOTAL DOCUMENTS: 282 (last five years: 77)
H-INDEX: 23 (last five years: 2)

2022, Vol. 4
Author(s): Neil Cohn, Joost Schilperoord

Language is typically embedded in multimodal communication, yet models of linguistic competence do not often incorporate this complexity. Meanwhile, speech, gesture, and/or pictures are each considered as indivisible components of multimodal messages. Here, we argue that multimodality should not be characterized by whole interacting behaviors, but by interactions of similar substructures which permeate across expressive behaviors. These structures comprise a unified architecture and align within Jackendoff's Parallel Architecture: a modality, meaning, and grammar. Because this tripartite architecture persists across modalities, interactions can manifest within each of these substructures. Interactions between modalities alone create correspondences of the sensory signals in time (e.g., speech with gesture) or space (e.g., writing with pictures), while multimodal meaning-making balances how modalities carry "semantic weight" for the gist of the whole expression. Here we focus primarily on interactions between grammars, which contrast across two variables: symmetry, related to the complexity of the grammars, and allocation, related to the relative independence of interacting grammars. While independent allocations keep grammars separate, substitutive allocation inserts expressions from one grammar into those of another. We show that substitution operates in interactions between all three natural modalities (vocal, bodily, graphic), and also in unimodal contexts within and between languages, as in codeswitching. Altogether, we argue that unimodal and multimodal expressions arise as emergent interactive states from a unified cognitive architecture, heralding a reconsideration of the "language faculty" itself.


2021, Vol. LXXVII (77), pp. 139-152
Author(s): Celina Heliasz-Nowosielska

Multimodality in reports on communication activities. Summary: The article presents selected results of the pilot part of an experimental study on how adult speakers of Polish report multimodal communication activities. The starting point of the experiment was theoretical and empirical research on the involvement of various modalities in communication activities: in addition to words, also intonation, prosody, facial expressions, gestures, and movements of the whole body and its parts (Austin 1962; Kendon 1994, 2000, 2004; Poggi 2007; Hellbernd & Sammler 2016). To obtain data for the analysis, 23 fragments of documentary films from the collection of the Archives of the Film School in Łódź were presented to 101 adults, who differed in age, sex and education. The participants were asked to report what the characters in the films were doing. The result was a corpus in which each communication activity recorded in the films is paired with the participants' utterances about it. The pilot part of the study covered a qualitative and quantitative analysis of the reports on 10 fragments: the meaning and reference of the vocabulary the participants used to report the multimodal communication activities observed in the films, the frequency of the expressions used in the reports, and the discrepancies between individual reports.
The analysis showed that if a given turn consisted of a composition or a sequence of activities of different modalities, the reporters either reported on each of these activities separately or used a single qualifier to describe the whole group of activities. Such qualifiers can be both the names of performative acts and the names of actions of a specific modality. If a conversation varies in modality over its course, individual observers may perceive it through the lens of the different episodes that make it up. Certain actions are more often taken into account, or omitted, than others, which points to differences in the importance that reporters attach to different types of activities.


2021, Vol. 2 (2), pp. 31-45
Author(s): Monsurat Aramide Nurudeen, Ebenezer Oluseun Ogungbe, Moshood Zakariyah

Film posters are complex forms of visual communication employed to promote films and attract patronage from prospective viewers. Nollywood film poster designers and marketers employ a complex system of modes of multimodal communication to achieve their intended objectives. This study therefore investigates how these semiotic resources reveal the intentions of film poster designers and how other contextual variables influence viewers' ability to comprehend the messages embedded in film posters. The objectives of the study are to uncover the visual and linguistic semiotic resources in the film advertisement posters and their interaction. The study adopts a qualitative approach to the analysis of six randomly selected Nollywood film advertisement posters from three genres: drama, thriller and comedy. Yuen's Generic Structure Potential and Royce's Ideational Intersemiotic Complementarity serve as the basis for the analysis of the selected texts. The study reveals that visual modes are more salient and more frequently employed in the advertisement posters than linguistic modes. However, the visual and linguistic modes stand in a complementary relationship that supports effective meaning-making in the selected Nollywood advertisement posters. The meanings derived are often contextual, appealing to the audience's reasoning and sustaining their interest. The study concludes by emphasizing the importance of the synergy of linguistic and visual multimodal resources, or modes of signification, for successful meaning-making and meaning-comprehension in the study of visual communication.


2021
Author(s): Jordan Natan Hochenbaum

Multimodal communication is an essential aspect of human perception, facilitating the ability to reason, deduce, and understand meaning. Using multimodal senses, humans are able to relate to the world in many different contexts. This dissertation looks at issues surrounding multimodal communication as it pertains to human-computer interaction. If humans rely on multimodality to interact with the world, how can multimodality benefit the ways in which humans interface with computers? Can multimodality be used to help the machine understand more about the person operating it, and what associations derive from this type of communication? This research places multimodality within the domain of musical performance, a creative field rich with nuanced physical and emotive aspects. This dissertation asks: what kinds of new sonic collaborations between musicians and computers are possible through the use of multimodal techniques? Are there specific performance areas where multimodal analysis and machine learning can benefit training musicians? Can multimodal interaction or analysis similarly support new forms of creative processes? Applying multimodal techniques to music-computer interaction is a burgeoning effort, so the scope of the research is to lay a foundation of multimodal techniques for the future. The first work presented is therefore a software system for capturing synchronous multimodal data streams from nearly any musical instrument, interface, or sensor system. The dissertation also presents a variety of multimodal analysis scenarios for machine learning. These include automatic performer recognition for both string and drum instrument players, demonstrating the significance of multimodal musical analysis: training the computer to recognize who is playing an instrument suggests that important information is contained not only in the acoustic output of a performance but also in the physical domain. Machine learning is also used to perform automatic drum-stroke identification, training the computer to recognize which hand a drummer uses to strike a drum. Drum-stroke identification has many applications, including more detailed automatic transcription, interactive training (e.g. computer-assisted rudiment practice), and efficient analysis of drum performance for metrics tracking. Furthermore, the research presents the use of multimodal techniques in the context of everyday practice. A practicing musician played a sensor-augmented instrument and recorded his practice over an extended period of time, yielding a corpus of metrics and visualizations from his performance. Additional multimodal metrics are discussed in the research and demonstrate new types of performance statistics obtainable from a multimodal approach. The primary contributions of this work are: (1) a new software tool enabling musicians, researchers, and educators to easily capture multimodal information from nearly any musical instrument or sensor system; (2) an investigation of multimodal machine learning for automatic performer recognition of both string players and percussionists; (3) multimodal machine learning for automatic drum-stroke identification; (4a) the application of multimodal techniques to musical pedagogy and training scenarios; (4b) an investigation of novel multimodal metrics; and (5) an investigation of the possibilities, affordances, and design considerations of multimodal musicianship, both in the acoustic domain and in other musical interface scenarios. This work provides a foundation from which engaging musical-computer interactions can occur in the future, benefiting from the unique nuances of multimodal techniques.
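As an illustration of the drum-stroke identification task described above, here is a minimal sketch, not the dissertation's actual system: it fuses hypothetical acoustic onset features with hypothetical stick-sensor features and trains a standard classifier to label each stroke as left- or right-hand. All feature names, dimensions, and data below are synthetic assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_strokes = 400

# Hypothetical per-stroke features: spectral descriptors around each audio
# onset (acoustic modality) and accelerometer statistics from a sensor on the
# drumstick or hand (physical modality). Dimensions are arbitrary here.
audio_feats = rng.normal(size=(n_strokes, 8))
sensor_feats = rng.normal(size=(n_strokes, 4))
labels = rng.integers(0, 2, size=n_strokes)  # 0 = left hand, 1 = right hand

# Give the synthetic sensor features a weak dependence on the label so the toy
# task is learnable; real recordings would carry this structure naturally.
sensor_feats[:, 0] += labels * 1.5

# Early fusion: concatenate both modalities into one feature vector per stroke.
fused = np.hstack([audio_feats, sensor_feats])

X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("hand-identification accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

Early fusion by concatenation is only one design choice; a system of the kind described could equally train per-modality models and combine their outputs.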


2021, Vol. 11 (11), p. 723
Author(s): Jose Belda-Medina

The number of publications on live online teaching and distance learning has increased significantly over the past two years, since the outbreak and worldwide spread of the COVID-19 pandemic, but more research is needed on effective methodologies and their impact on the learning process. This research aimed to analyze student interaction and multimodal communication through Task-Based Language Teaching (TBLT) in a Synchronous Computer-Mediated Communication (SCMC) environment. For this purpose, 90 teacher candidates enrolled in the subject Applied Linguistics at a university were randomly assigned to teams to collaboratively create digital infographics based on different language teaching methods. Each team then explained its project online, and classmates completed two multimedia activities based on each method. Finally, the participants discussed the self-perceived benefits (relevance, enjoyment, interest) and limitations (connectivity, distraction) of SCMC in language learning. Quantitative and qualitative data were gathered through pre- and post-tests, class observation and online discussion. The statistical data and research findings revealed a positive attitude towards the integration of TBLT in an SCMC environment and a high level of satisfaction with multimodal communication (written, verbal, visual) and student interaction. However, the language teacher candidates complained about the low quality of the digital materials, the use of technology merely for substitution, and the lack of peer-to-peer interaction in their live online classes during the pandemic.
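The abstract mentions quantitative pre- and post-tests but does not specify the statistical procedure; as a hedged illustration only, a paired pre-/post comparison of the kind such a design typically involves might look like the following sketch, with all scores invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 90  # the study had 90 teacher candidates; everything else here is invented

# Hypothetical pre- and post-test scores on a 0-100 scale, with an assumed gain.
pre = rng.normal(loc=62.0, scale=10.0, size=n)
post = pre + rng.normal(loc=5.0, scale=8.0, size=n)

# Paired-samples t-test: each participant serves as their own control.
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"mean gain = {np.mean(post - pre):.2f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```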


2021
Author(s): Anna Zanoli, Marco Gamba, Alban Lemasson, Ivan Norscia, Elisabetta Palagi

Abstract: Female primates can emit vocalizations associated with mating that can function as honest signals of fertility. Here, we investigated the role of mating calls and visual signals in female geladas (Theropithecus gelada). Since females have a central role in gelada society and seem to solicit sexual interactions, we asked whether they emit vocalizations in conjunction with gazing to increase the probability of mating success. Before and during copulations, females can emit pre-copulation calls and copulation calls. For the first time, we identified a new female vocalization emitted at the final stage of copulations (the end-copulation call), possibly marking the occurrence of ejaculation. We found that longer pre-copulation call sequences were followed by both prolonged copulations and the presence of end-copulation calls, suggesting that females use pre-copulation calls to ensure successful completion of the copula. Moreover, we found that different combinations of female vocal types and gazing had different effects on male vocal behavior and motivation to complete the copula. The analysis of the vocal and visual signals revealed a complex inter-sexual multimodal chattering, with females taking the leading role in the signal exchange. Such female-led chattering modulates male sexual arousal, thus increasing the probability of copulation success.


2021, Vol. 0 (0)
Author(s): John A. Bateman

Abstract: Many studies investigating the use and effectiveness of multimodal communication are now confronting the need to engage with larger bodies of data in order to achieve more empirically robust accounts, moving beyond the earlier prevalence of small-scale 'case studies'. In this article, I briefly characterise how recent developments in the theory of multimodality can be drawn upon to encourage and support this change in both scale and breadth. In particular, the contribution will show how refinements in the degree of formality of definitions of the core multimodal constructs of 'semiotic mode' and 'materiality' can help bridge the gap between exploratory investigations of complex multimodal practices and larger-scale corpus studies.

