Gesture in the eye of the beholder: An eye-tracking study on factors determining the attention for gestures produced by people with aphasia

2021 ◽  
Author(s):  
Karin van Nispen ◽  
Kazuki Sekine ◽  
Ineke van der Meulen ◽  
Basil Christoph Preisig

Co-speech hand gestures are a ubiquitous form of nonverbal communication that can express information not present in speech. Hand gestures may become more relevant when speech production is impaired, as in patients with post-stroke aphasia. In fact, patients with aphasia produce more gestures than control speakers, and their gestures seem to be more relevant for the understanding of their communication. In the present study, we addressed the question of whether the gestures produced by speakers with aphasia catch the attention of their addressees. Healthy volunteers (observers) watched short video clips while their eye movements were recorded. These video clips featured speakers with aphasia and control speakers describing two different scenarios (buying a sweater or having witnessed an accident). Our results show that hand gestures produced by speakers with aphasia are, on average, attended longer than gestures produced by control speakers. This effect remains significant even when we control for the longer duration of the gestural movements in speakers with aphasia. Further, the amount of information in speech was correlated with gesture attention: gestures produced by speakers with less informative speech were attended more frequently. In conclusion, our results highlight two main points. First, overt attention to co-speech hand gestures increases with their communicative relevance. Second, these findings have clinical implications because they show that the extra effort that speakers with aphasia put into gesture is worthwhile, as interlocutors seem to notice their gestures.

Author(s):  
Sukhendra Singh ◽  
G. N. Rathna ◽  
Vivek Singhal

Introduction: Sign language is often the only means of communication for speech-impaired people, but because most hearing people do not know it, a communication barrier arises. In this paper, we present a solution that captures hand gestures with a Kinect camera and classifies each gesture into its correct symbol. Method: We used a Kinect camera rather than an ordinary web camera because an ordinary camera cannot capture the 3D orientation or depth of a scene, whereas the Kinect captures depth images, which makes classification more accurate. Result: The Kinect produces distinct images for the gestures '2' and 'V', and similarly for '1' and 'I', whereas a normal web camera cannot distinguish between these pairs. We used hand gestures from Indian Sign Language; our dataset contained 46,339 RGB images and 46,339 depth images, of which 80% were used for training and the remaining 20% for testing. In total, 36 hand gestures were considered: 26 for the alphabets A-Z and 10 for the digits 0-9. Conclusion: Along with a real-time implementation, we compare the performance of various machine learning models and find that a CNN on depth images gives the most accurate performance. All results were obtained on a PYNQ-Z2 board.
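The 80/20 train/test division described above can be sketched as follows. This is a minimal illustration only; the paper does not specify its splitting procedure, so the shuffling and seed are assumptions:

```python
import random

def split_dataset(num_images, train_fraction=0.8, seed=42):
    """Shuffle image indices and split them into disjoint train/test sets,
    mirroring the 80/20 split reported in the abstract."""
    indices = list(range(num_images))
    random.Random(seed).shuffle(indices)
    cut = int(num_images * train_fraction)
    return indices[:cut], indices[cut:]

# The paper reports 46,339 RGB images (with as many paired depth images).
train_idx, test_idx = split_dataset(46339)
print(len(train_idx), len(test_idx))  # 37071 12 (image counts: 37071 train, 9268 test)
```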


2019 ◽  
Vol 17 (2) ◽  
pp. 147470491983972 ◽  
Author(s):  
Chunna Hou ◽  
Zhijun Liu

Researchers have found that, compared with other encoding conditions (e.g., rating pleasantness), information processed for its relevance to survival is retrieved at a higher rate; this effect is known as the survival processing advantage (SPA). Previous experiments have shown that this memory advantage extends to several types of visual material, such as pictures and short video clips, but it has been debated whether face stimuli constitute a boundary condition of the SPA. The current work explores whether faces differing in trustworthiness carry a mnemonic advantage relevant to human adaptation. In two experiments, we manipulated facial trustworthiness (untrustworthy, neutral, and trustworthy), which is believed to provide information relevant to survival decisions. Participants were asked to predict their avoidance or approach tendency when encountering strangers (represented by faces in the three trustworthiness categories) in a survival scenario and a control scenario. The final surprise memory tests revealed that both trustworthy and untrustworthy faces were recognized better when the task was related to survival. Experiment 1 demonstrated the existence of an SPA at both poles of facial trustworthiness. In Experiment 2, we replicated the SPA for trustworthy and untrustworthy face recognition using a matched design, and found this memory benefit only in recognition tasks, not in source memory tasks. These results extend the generality of the SPA to the face domain.


ORL ◽  
2021 ◽  
pp. 1-10
Author(s):  
Claudia Scherl ◽  
Johanna Stratemeier ◽  
Nicole Rotter ◽  
Jürgen Hesser ◽  
Stefan O. Schönberg ◽  
...  

Introduction: Augmented reality can improve the planning and execution of surgical procedures. Head-mounted devices such as the HoloLens® (Microsoft, Redmond, WA, USA) are particularly suitable for this purpose because they are controlled by hand gestures and enable contactless handling in a sterile environment. Objectives: So far, such systems have not found their way into the operating room for surgery of the parotid gland. This study explored the feasibility and accuracy of augmented reality-assisted parotid surgery. Methods: 2D MRI holographic images were created, 3D holograms were reconstructed from MRI DICOM files, and both were made visible via the HoloLens. Using hand gestures alone, 2D MRI slices could be scrolled through, 3D images rotated, and 3D structures shown and hidden. The 3D model and the patient were aligned manually. Results: The use of augmented reality with the HoloLens in parotid surgery was feasible. Gestures were recognized correctly. The mean accuracy of superimposing the holographic model onto the patient's anatomy was 1.3 cm. Registration error differed highly significantly between central and peripheral structures (p = 0.0059), with the smallest deviation centrally (10.9 mm) and the largest peripherally (19.6 mm). Conclusion: This pilot study offers a first proof of concept of the clinical feasibility of the HoloLens for parotid tumor surgery. The workflow is not affected, while additional information is provided. Surgical performance could become safer through the navigation-like application of reality-fused 3D holograms, and ergonomics improve without compromising sterility. Superimposition of the 3D holograms onto the surgical field was possible, but further development is necessary to improve the accuracy.


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Shahzad Ahmed ◽  
Dingyang Wang ◽  
Junyoung Park ◽  
Sung Ho Cho

Abstract: In the past few decades, deep learning algorithms have become more prevalent for signal detection and classification. To design machine learning algorithms, however, an adequate dataset is required. Motivated by the existence of several open-source camera-based hand gesture datasets, this descriptor presents UWB-Gestures, the first public dataset of twelve dynamic hand gestures acquired with ultra-wideband (UWB) impulse radars. The dataset contains a total of 9,600 samples gathered from eight different human volunteers. UWB-Gestures eliminates the need to employ UWB radar hardware to train and test algorithms. Additionally, the dataset can provide a competitive environment in which the research community can compare the accuracy of different hand gesture recognition (HGR) algorithms, enabling reproducible research results in the field of HGR through UWB radars. Three radars were placed at three different locations to acquire the data, and the respective data were saved independently for flexibility.
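A quick consistency check on the reported totals can be sketched as follows. The per-pair repetition count is inferred from the numbers in the abstract, not stated there, so treat it as an assumption:

```python
# Bookkeeping sketch for the UWB-Gestures dataset described above.
# The abstract gives 12 gestures, 8 volunteers, and 9,600 samples in total;
# dividing yields the implied number of repetitions per gesture-volunteer
# pair (assuming samples are counted per pair, not per radar).
gestures = 12
volunteers = 8
total_samples = 9600

repetitions = total_samples // (gestures * volunteers)
print(repetitions)  # 100 inferred repetitions per gesture-volunteer pair
```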


2018 ◽  
Vol 14 (7) ◽  
pp. 155014771879075 ◽  
Author(s):  
Kiwon Rhee ◽  
Hyun-Chool Shin

In electromyogram-based hand gesture recognition, accuracy may degrade during practical application for various reasons, such as electrode positioning bias and differences between subjects. Beyond these, changes in electromyogram signals caused by different arm postures, even for identical hand gestures, are also an important issue. We propose an electromyogram-based hand gesture recognition technique that is robust to diverse arm postures. The proposed method uses accelerometer and electromyogram signals simultaneously to recognize hand gestures correctly across various arm postures. For recognition, the electromyogram signals are statistically modeled conditional on arm posture. In our experiments, we compared recognition that took arm posture into account with recognition that disregarded it. When varied arm postures were disregarded, recognition accuracy was 54.1%, whereas the method proposed in this study achieved an average accuracy of 85.7%, an improvement of 31.6 percentage points. Using accelerometer and electromyogram signals simultaneously compensated for the effect of different arm postures on the electromyogram signals and therefore improved the recognition accuracy of hand gestures.
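The core idea above can be sketched as posture-conditioned classification: the accelerometer selects which posture-specific EMG model scores the gesture. The thresholds, feature templates, and gesture names below are illustrative assumptions, not the paper's actual statistical models:

```python
# Hedged sketch of posture-conditioned gesture recognition: estimate the arm
# posture from the accelerometer, then match EMG features against templates
# learned for that posture only.

def estimate_posture(accel_z):
    """Crude posture estimate from the vertical accelerometer axis (assumed)."""
    return "arm_raised" if accel_z > 0.5 else "arm_lowered"

# Hypothetical per-posture mean EMG feature templates for two gestures.
TEMPLATES = {
    "arm_raised":  {"fist": [0.9, 0.2], "open_hand": [0.3, 0.8]},
    "arm_lowered": {"fist": [0.7, 0.1], "open_hand": [0.2, 0.6]},
}

def classify(emg_features, accel_z):
    """Return the gesture whose posture-specific template is nearest (L2)."""
    posture = estimate_posture(accel_z)
    def dist(template):
        return sum((a - b) ** 2 for a, b in zip(emg_features, template))
    return min(TEMPLATES[posture], key=lambda g: dist(TEMPLATES[posture][g]))

print(classify([0.85, 0.15], accel_z=0.9))  # fist
```

The same EMG feature vector can land on different templates depending on posture, which is exactly the confound the paper's fusion approach compensates for.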


Pythagoras ◽  
2010 ◽  
Vol 0 (72) ◽  
Author(s):  
Helmut Linneweber‐Lammerskitten ◽  
Marc Schäfer ◽  
Duncan Samson

This paper describes a collaborative research and development project between the University of Applied Sciences Northwestern Switzerland and Rhodes University in South Africa. The project seeks to establish, disseminate and research the efficacy and use of short video clips designed specifically for the autonomous learning of mathematics. Specific to the South African context is our interest in capitalising on the ubiquity of cellphone technology and the autonomous affordances offered by mobile learning. This paper engages with a number of theoretical and pedagogical issues relating to the design, production and use of these video clips. Although the focus is specific to the contexts of South Africa and Switzerland, the discussion is of broad applicability.


2019 ◽  
Vol 1 (2) ◽  
pp. 80-97
Author(s):  
Jesus H Lugo

Safe interactions between humans and robots are needed in several industrial processes and service tasks. Compliance design and control of mechanisms is one way to increase safety. This article presents a compliant revolute joint mechanism using a biphasic-media variable stiffness actuator. The actuator has a motion-transmitting member connected to a fluidic circuit in which a biphasic control fluid circulates; stiffness is controlled by changing the pressure of the control fluid in the distribution lines. A mathematical model of the actuator is presented, a model-based control method is implemented to track the desired position and stiffness, and equations describing the dynamics of the mechanism are provided. Results from loaded and unloaded simulations and from experiments with a physical prototype are discussed. The supplementary material provides a detailed description of the system and its physical implementation.


2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Juhong Namgung ◽  
Siwoon Son ◽  
Yang-Sae Moon

In recent years, cyberattacks using command and control (C&C) servers have significantly increased. To hide their C&C servers, attackers often use a domain generation algorithm (DGA), which automatically generates domain names for the C&C servers. Accordingly, extensive research on DGA domain detection has been conducted. However, existing methods cannot accurately detect continuously generated DGA domains and can easily be evaded by an attacker. Recently, long short-term memory (LSTM)-based deep learning models have been introduced to detect DGA domains in real time using only domain names, without feature extraction or additional information. In this paper, we propose an efficient DGA domain detection method based on bidirectional LSTM (BiLSTM), which learns bidirectional information rather than the unidirectional information learned by an LSTM. We further maximize detection performance with a convolutional neural network (CNN) + BiLSTM ensemble model using an attention mechanism, which allows the model to learn both local and global information in a domain sequence. Experimental results show that existing CNN and LSTM models achieved F1-scores of 0.9384 and 0.9597, respectively, while the proposed BiLSTM and ensemble models achieved higher F1-scores of 0.9618 and 0.9666, respectively. In addition, the ensemble model achieved the best performance for most DGA domain classes, enabling more accurate DGA domain detection than existing models.
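Detection "using only domain names without feature extraction" typically means the model consumes a character-level encoding of the raw domain string. The following is a minimal sketch of such an encoding; the vocabulary, padding scheme, and sequence length are assumptions, since the abstract does not give the paper's exact preprocessing:

```python
# Character-level preprocessing a (Bi)LSTM DGA detector might consume:
# each domain name becomes a fixed-length sequence of integer character IDs,
# ready to feed into an embedding layer. ID 0 is reserved for padding and
# unknown characters.

VOCAB = {c: i + 1 for i, c in enumerate("abcdefghijklmnopqrstuvwxyz0123456789-.")}

def encode_domain(domain, max_len=16):
    """Map characters to integer IDs, truncating or zero-padding to max_len."""
    ids = [VOCAB.get(c, 0) for c in domain.lower()[:max_len]]
    return ids + [0] * (max_len - len(ids))

print(encode_domain("abc.com"))
```

A BiLSTM then reads such sequences in both directions, so character context on either side of a position informs the classification, which is the advantage over a unidirectional LSTM claimed above.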


2020 ◽  
Vol 1 ◽  
Author(s):  
G. Collavo ◽  
A. Lalayev ◽  
S. Angerer ◽  
M. Kraml ◽  
S. Bachner ◽  
...  

In this project, high school students (aged 16-17) tested various protocols for experiments in nanotechnology and evaluated whether such experiments could also be performed by middle school students (aged 11-15) or even elementary school students (aged 6-10). The protocols were pre-selected and provided by the instructing team, consisting of Sciencetainment and the Department of Biosciences, University of Salzburg. The experiments included laboratory techniques such as thin-layer chromatography, measuring the contact angle by high-resolution 3D microscopy, and analyzing and constructing surface layers. Moreover, students produced short video clips and images and designed photo-collages from microscopic and electron-microscopic pictures. In this way, the school students acquired a number of soft skills during this special science day.


2020 ◽  
Vol 2 (1) ◽  
pp. 60-73
Author(s):  
Rahmiy Kurniasary ◽  
Ismail Sukardi ◽  
Ahmad Syarifuddin

The hand gesture method requires high memorization ability, and some students were neither active nor focused when synchronizing the pronunciation of the lafadz of verses with hand gestures while learning to memorize and interpret the Qur'an. The purpose of this study was to examine the application of the hand gesture method in learning to memorize and interpret the Qur'an among grade X students at Madrasah Aliyah Negeri 1 Prabumulih. The research method used was descriptive qualitative analysis of the application of the hand gesture method in this setting. Data were collected through observation, interviews, documentation, and triangulation, and analyzed qualitatively in three stages: data reduction, data presentation, and drawing conclusions. The results are as follows. First, in the application of the hand gesture method in class X.IPA3, the teacher of Al-Qur'an Hadith explains the material and gives examples of verses to be memorized and interpreted using hand gestures, shown in a learning video on the projector; the students then apply the method to the verse that has been taught. Second, supporting factors include internal factors, namely the students' willingness and memorization ability, and external factors, namely the use of media, teacher skills, and a pleasant learning atmosphere. Third, inhibiting factors include the time each student requires, the student's level of willingness, skill in making the hand gestures, and synchronizing the pronunciation of the lafadz with the hand movements.

