verbal cues
Recently Published Documents


TOTAL DOCUMENTS: 360 (FIVE YEARS: 117)
H-INDEX: 32 (FIVE YEARS: 3)

2022, pp. 202-224
Author(s): Robert Costello, Jodie Donovan

Autism Spectrum Disorder (ASD) is a prevalent neurodevelopmental disability among gamers; individuals with this group of conditions have difficulty understanding non-verbal cues. Although game accessibility is a focal point in the games industry, with a keen focus placed on developing accessible design, this study examines video games from the perspective of individuals who have autism to gain further insight into their needs. The preliminary study aims to discover whether autistic users' difficulty reading non-verbal cues extends to their perception of a game environment and whether these individuals experience sensory distress while playing video games. A prototype was created to further investigate non-verbal cues and to help shape the foundation of an accessibility framework. The preliminary results show that autistic users frequently misread or fail to pick up on the non-verbal cues developers use to drive game flow and narrative (e.g., sign-posting), and that they experience sensory distress while playing video games.


2021, Vol 10 (4), pp. 1-42
Author(s): Zhao Han, Elizabeth Phillips, Holly A. Yanco

Although non-verbal cues such as arm movement and eye gaze can convey robot intention, they alone may not provide enough information for a human to fully understand a robot's behavior. To better understand how to convey robot intention, we conducted an experiment (N = 366) investigating the need for robots to explain, as well as the content and properties of a desired explanation, such as timing, engagement importance, similarity to human explanations, and summarization. Participants watched a video in which the robot was commanded to hand over an almost-reachable cup and displayed one of six reactions intended to show the cup's unreachability: doing nothing (No Cue), turning its head to the cup (Look), or turning its head to the cup while repeatedly pointing its arm towards it (Look & Point), each with or without a Headshake. The results indicated that, across all conditions, participants agreed that robot behavior should be explained in situ, in a manner similar to how humans explain, and that robots should provide concise summaries and respond to only a few follow-up questions from participants. We replicated the study with another N = 366 participants after a 15-month span, and all major conclusions still held.
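For reference, the six reactions described above amount to a 3 × 2 design: three base cues (No Cue, Look, Look & Point) crossed with the presence or absence of a Headshake. A minimal sketch enumerating those conditions (illustrative only; the condition labels come from the abstract, the code is not the authors' materials):

```python
# Illustrative sketch: the six robot-reaction conditions as a 3 x 2 factorial design.
from itertools import product

base_cues = ["No Cue", "Look", "Look & Point"]
headshake = ["without Headshake", "with Headshake"]

# Cross the three base cues with the two Headshake levels
conditions = [f"{cue}, {shake}" for cue, shake in product(base_cues, headshake)]

for i, condition in enumerate(conditions, start=1):
    print(f"Condition {i}: {condition}")

assert len(conditions) == 6  # matches the "one of six reactions" in the abstract
```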


Author(s): Tonghe Zhuang, Angelika Lingnau

Objects can be categorized at different levels of abstraction, ranging from the superordinate (e.g., fruit) and the basic (e.g., apple) to the subordinate level (e.g., golden delicious). The basic level is assumed to play a key role in categorization, e.g., in terms of the number of features used to describe category members and the speed of processing. To what degree do these principles also apply to the categorization of observed actions? To address this question, we first selected a range of actions at the superordinate (e.g., locomotion), basic (e.g., to swim) and subordinate level (e.g., to swim breaststroke), using verbal material (Experiments 1–3). Experiments 4–6 aimed to determine the characteristics of these actions across the three taxonomic levels. Using a feature-listing paradigm (Experiment 4), we determined the number of features that were provided by at least six out of twenty participants (common features), separately for the three different levels. In addition, we examined the number of shared (i.e., provided for more than one category) and distinct (i.e., provided for one category only) features. Participants produced the highest number of common features for actions at the basic level. Actions at the subordinate level shared more features with other actions at the same level than did those at the superordinate level. Actions at the superordinate and basic level were described with more distinct features than those at the subordinate level. Using an auditory priming paradigm (Experiment 5), we observed that participants responded faster to action images preceded by a matching auditory cue at the basic and subordinate level, but not by superordinate-level cues, suggesting that the basic level is the most abstract level at which verbal cues facilitate the processing of an upcoming action. Using a category verification task (Experiment 6), we found that participants were faster and more accurate at verifying action categories (depicted as images) at the basic and subordinate level than at the superordinate level. Together, in line with the object categorization literature, our results suggest that information about action categories is maximized at the basic level.
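The feature-listing analysis in Experiment 4 reduces to a few simple tallies: a feature counts as "common" if at least six of twenty participants list it for a category, and common features are then split into shared (listed for more than one category) versus distinct (listed for a single category). A minimal sketch of those tallies on toy data (the category names and features below are placeholders, not the study's stimuli or code):

```python
# Toy sketch of the Experiment 4 tallies; data are invented placeholders.
from collections import Counter

MIN_PARTICIPANTS = 6  # a feature is "common" if >= 6 of 20 participants list it

# listings[category] = one feature set per participant (20 toy participants each)
listings = {
    "to swim":              [{"water", "arms", "legs"}, {"water", "pool"}] * 10,
    "to swim breaststroke": [{"water", "arms", "frog kick"}, {"water", "pool"}] * 10,
}

# Common features per category: listed by at least MIN_PARTICIPANTS participants
common = {
    cat: {f for f, n in Counter(f for p in participants for f in p).items()
          if n >= MIN_PARTICIPANTS}
    for cat, participants in listings.items()
}

# Shared features appear in more than one category; distinct features in exactly one
all_common = Counter(f for feats in common.values() for f in feats)
shared = {f for f, n in all_common.items() if n > 1}
distinct = {f for f, n in all_common.items() if n == 1}

print("common per category:", {c: sorted(f) for c, f in common.items()})
print("shared:", sorted(shared), "| distinct:", sorted(distinct))
```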


Author(s): Hilda E. Carrillo, Robin Pennington, Yibo (James) Zhang

Emojis act as non-verbal cues to disambiguate and communicate affect and are increasingly used in online corporate disclosures. Emotion work, a concept founded in social psychology, suggests that individuals adjust their behavior as emotions are evoked or suppressed. Despite the growing evidence that emojis may influence judgments and decisions due to their deliberate expression of context and affect, the accounting research community has yet to investigate emojis’ impact. We experimentally explore whether emojis can soften nonprofessional investors’ perceptions of bad news or enhance perceptions of good news. We find that emojis modestly suppress participants’ positive emotions on positive news, influencing their investment-related judgments and decision-making. Subsequent data collection fails to replicate the initial findings in a less experienced participant pool, suggesting that investing experience may play a role. Our study enhances our understanding of the unintended consequences of emojis and introduces a sociology-based principle into the accounting literature.


2021, Vol 8 (2), pp. 161-184
Author(s): Antonella Giacosa

During the sudden shift of education onto digital platforms due to the Covid-19 emergency, teachers became streamers and experimented with new tools to involve their students in video-mediated, multi-floor, multiparticipant, and multimodal interactions. In turn, students experienced new ways to participate in lessons and interact with instructors. This study focuses on clarification and repair in videoconferencing as strategies to address trouble in video-mediated communication and to re-establish mutual understanding. Through participant observation of online classes, the researcher collected data on classroom interactions, which were analyzed using conversation analysis. The findings show how the digital affordances of video-mediated conversation help teachers and students manage intersubjectivity and compensate for the lack of non-verbal cues typical of face-to-face interaction, such as facial expressions or tone of voice. Consequently, this article argues that the wisdom gained during the pandemic can help teachers and lecturers better deal with clarification and repair in digital conversations. Ultimately, it can increase their digital interactional competence, thus giving way to more interaction and learning in EFL classes, both online and in person. Keywords: EMERGENCY REMOTE EDUCATION, CONVERSATION ANALYSIS, CLARIFICATION, REPAIR, EFL


Author(s): Peter Auer

Like many other languages, but unlike modern (standard) English, German has a distinct second person plural pronoun (ihr, 'you guys'), contrasting with the second person singular pronoun (du). The second person plural pronoun addresses a turn to more than one, and possibly all, co-present participants. This paper investigates turn-taking after such multiply addressed turns, taking as an example information-seeking questions, i.e., a sequential context in which a specific next action is relevant in the adjacent position. It might appear that in such a context self-selection applies (Schegloff 1992: 122): more than one co-participant is addressed, but none is selected as next speaker. In this paper, I show on the basis of spontaneous interactions recorded with mobile eye-tracking equipment that this is not the case and that TCU-final gaze is employed to select the next speaker. The participant who is not gazed at TCU-finally is addressed but not selected as the answerer in next position, and may provide an answer in a sequential position after the first answer. The article demonstrates that gaze is an efficient way to allocate turns in the absence of verbal cues and thus contributes to our understanding of turn-taking from a multimodal perspective.


PLoS ONE, 2021, Vol 16 (11), pp. e0259988
Author(s): Annie A. Butler, Lucy S. Robertson, Audrey P. Wang, Simon C. Gandevia, Martin E. Héroux

Passively grasping an unseen artificial finger induces ownership over this finger and an illusory coming together of one's index fingers: a grasp illusion. Here we determine how interoceptive ability and attending to the upper limbs influence this illusion. Participants passively grasped an unseen artificial finger with their left index finger and thumb for 3 min while their right index finger, located 12 cm below, was lightly clamped. Experiment 1 (n = 30) investigated whether the strength of the grasp illusion (perceived index finger spacing and perceived ownership) is related to a person's level of interoceptive accuracy (modified heartbeat counting task) and sensibility (Noticing subscale of the Multidimensional Assessment of Interoceptive Awareness). Experiment 2 (n = 30) investigated the effect of providing verbal or tactile cues to guide participants' attention to their upper limbs. On their own, neither interoceptive accuracy and sensibility nor verbal and tactile cueing had an effect on the grasp illusion. However, verbal cueing increased the strength of the grasp illusion in individuals with lower interoceptive ability. Across the observed range of interoceptive accuracy and sensibility, verbal cueing decreased perceived index finger spacing by 5.6 cm [1.91 to 9.38] (mean [95% CI]) and perceived ownership by ∼3 points on a 7-point Likert scale (slope -0.93 [-1.72 to -0.15]). Thus, attending to the upper limbs via verbal cues increases the strength of the grasp illusion in a way that is inversely proportional to a person's level of interoceptive accuracy and sensibility.
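The key finding reads as an interaction: verbal cueing shifts perceived spacing and ownership, and the size of that shift scales inversely with interoceptive ability. A rough sketch of that kind of cue-by-interoception interaction model on simulated data (toy variable names and an invented generative rule, not the study's dataset or analysis script):

```python
# Illustrative sketch of an interaction model: the cueing effect on perceived
# finger spacing depends on interoceptive ability. All data here are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 60
df = pd.DataFrame({
    "verbal_cue": np.repeat([0, 1], n // 2),   # 0 = no cue, 1 = verbal cue
    "interoception": rng.uniform(0, 1, n),     # toy accuracy/sensibility score
})
# Toy rule: cueing shrinks perceived spacing most for low-interoception participants
df["perceived_spacing_cm"] = (
    10 - 5 * df["verbal_cue"] * (1 - df["interoception"]) + rng.normal(0, 1, n)
)

model = smf.ols("perceived_spacing_cm ~ verbal_cue * interoception", data=df).fit()
print(model.params)  # the verbal_cue:interoception term captures the interaction
```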


Author(s): Ellyda Retpitasari, Naila Muna

The Covid-19 pandemic spreading in Indonesia has changed all aspects of social life, including the culture of the Khataman al-Qur'an tradition in the Kediri Region. This study aims to describe the change in the Khataman al-Qur'an tradition in the Kediri Region. The research method is qualitative with a phenomenological approach, and the analysis draws on the theory of technological determinism. The results show that holding Khataman al-Qur'an through WhatsApp Groups has both positive and negative impacts. The positive impacts are easier communication for worship and consistent motivation to keep reading the Qur'an. The negative impacts concern social solidarity, such as weaker emotional bonds and the lack of non-verbal cues between fellow members of the group. The dynamics of implementation have also changed: Khataman al-Qur'an was previously held at certain moments, but it can now be held at any time and has become a daily habit of the community. In addition, whereas the tradition was initially carried out with the custom of gatherings and banquets serving food, the pandemic has shifted its implementation to WhatsApp Groups.


2021, Vol 12
Author(s): Ubuka Tagami, Shu Imaizumi

Errors in discriminating right from left, termed right-left confusion, reflect a failure to translate visuospatial perceptions into a verbal representation of right or left (i.e., a visuo-verbal process). There may also be a verbo-visual process, in which verbal cues are translated into visual representations of space. To quantify these two processes underlying right-left confusion, Study 1 investigated the factor structure of the Right-Left Confusability Scale, which assesses daily experiences of right-left confusion. Exploratory factor analysis suggested that these two processes, along with a third factor reflecting mental rotation, underlie right-left confusion. Study 2 examined correlations between the (sub)scale scores and performance on orientation judgment tasks reflecting visuo-verbal and verbo-visual processes. Overall, the self-reported measures were not associated with the behavioral performances presumably reflecting the two processes. These results suggest that the cognitive mechanisms underlying right-left confusion can be classified into visuo-verbal and verbo-visual processes and mental rotation, although their psychometric and behavioral indices might be distinct. Further studies may develop better assessments of right-left confusion reflecting these processes.
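Study 1's factor structure comes from an exploratory factor analysis of the scale items. A rough sketch of such an analysis with three extracted factors, using random placeholder responses rather than the actual Right-Left Confusability Scale data:

```python
# Rough sketch of an exploratory factor analysis with three factors.
# Responses are random placeholders, not the scale's real item data.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
responses = rng.integers(1, 6, size=(200, 15)).astype(float)  # 200 people x 15 items

fa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0)
fa.fit(responses)

# Loadings: one row per factor (e.g., visuo-verbal, verbo-visual, mental rotation)
loadings = fa.components_
print(loadings.shape)  # (3, 15)
```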


2021
Author(s): Radiah Rivu, Ken Pfeuffer, Philipp Müller, Yomna Abdelrahman, Andreas Bulling, ...
