Do all aspects of learning benefit from iconicity? Evidence from motion capture

2019 ◽  
Vol 12 (1) ◽  
pp. 36-55
Author(s):  
ASHA SATO ◽  
MARIEKE SCHOUWSTRA ◽  
MOLLY FLAHERTY ◽  
SIMON KIRBY

Abstract Recent work suggests that not all aspects of learning benefit from an iconicity advantage (Ortega, 2017). We present the results of an artificial sign language learning experiment testing the hypothesis that iconicity may help learners to learn mappings between forms and meanings, whilst having a negative impact on learning specific features of the form. We used a 3D camera (Microsoft Kinect) to capture participants’ gestures and quantify the accuracy with which they reproduce the target gestures in two conditions. In the iconic condition, participants were shown an artificial sign language consisting of congruent gesture–meaning pairs. In the arbitrary condition, the language consisted of non-congruent gesture–meaning pairs. We quantified the accuracy of participants’ gestures using dynamic time warping (Celebi et al., 2013). Our results show that participants in the iconic condition learn mappings more successfully than participants in the arbitrary condition, but there is no difference in the accuracy with which participants reproduce the forms. While our work confirms that iconicity helps to establish form–meaning mappings, our study did not give conclusive evidence about the effect of iconicity on production; we suggest that iconicity may only have an impact on learning forms when these are complex.
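The dynamic time warping measure used here compares a produced gesture trace against a target while tolerating differences in timing. A minimal sketch of the classic DTW recurrence, for illustration only: it uses 1-D traces and absolute difference as the local cost, whereas the study itself worked with 3-D Kinect motion-capture data.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping between two 1-D sequences.

    Returns the minimal cumulative absolute difference along an
    optimal monotonic alignment path between the sequences.
    """
    n, m = len(a), len(b)
    # Accumulated-cost matrix with an infinite border row/column.
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # repeat frame of b
                                 cost[i, j - 1],      # repeat frame of a
                                 cost[i - 1, j - 1])  # advance both
    return cost[n, m]
```

Because DTW can align a slow reproduction with a fast target (distance 0 when the shape matches, as in `dtw_distance([0, 0, 1], [0, 1])`), it isolates differences in gesture form from differences in speed.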

SINERGI ◽  
2018 ◽  
Vol 22 (2) ◽  
pp. 91
Author(s):  
Zico Pratama Putera ◽  
Mila Desi Anasanti ◽  
Bagus Priambodo

Gesture is one of the most natural and expressive communication methods for the hearing impaired. Most researchers, however, focus on static gestures and postures, or on a small set of dynamic gestures, because of the complexity of dynamic gestures. We propose the Kinect Translation Tool to recognize the user's gestures, so that it can be used for bilateral communication with the deaf community. Since real-time detection of a large number of dynamic gestures must be handled, efficient algorithms and models are required; the dynamic time warping algorithm is used here to detect and translate gestures. The Kinect Translation Tool translates sign language into written and spoken words; conversely, people can reply directly with spoken words, which are converted into literal text together with animated 3D sign language gestures. A user study covering several prototypes of the user interface was carried out with ten participants, who had to gesture and spell phrases in American Sign Language (ASL). The speech recognition tests on simple phrases showed good results, and the system recognized the participants' gestures well during the test. The study suggests that a natural user interface built on Microsoft Kinect can serve as a sign language translator for the hearing impaired.


2018 ◽  
Vol 30 (3) ◽  
pp. 1437-1468 ◽  
Author(s):  
Adam Switonski ◽  
Henryk Josinski ◽  
Konrad Wojciechowski

2021 ◽  
Vol 65 (1) ◽  
pp. 10401-1-10401-10
Author(s):  
C. M. Vidhyapathi ◽  
Alex Noel Joseph Raj ◽  
S. Sundar

Abstract This article proposes an implementation of an action recognition system that allows the user to perform operations in real time. The Microsoft Kinect (RGB-D) sensor plays a central role in this system, providing the skeletal joint information of humans directly. Computationally efficient skeletal joint position features are used to describe each action. The dynamic time warping (DTW) algorithm is widely used in applications such as similarity sequence search, classification, and speech recognition, and provides the highest accuracy among the compared algorithms. However, the computational time of the DTW algorithm is a major drawback in real-world applications. To speed up the basic DTW algorithm, a novel three-dimensional dynamic time warping (3D-DTW) classification algorithm is proposed in this work. The proposed 3D-DTW algorithm is implemented with both software and field programmable gate array (FPGA) hardware modeling techniques. The performance of the 3D-DTW algorithm is evaluated on 12 actions, each described by a feature vector of size 576 over 32 frames. Our software modeling results show that the proposed algorithm performs the action classification accurately. However, the computation time of the 3D-DTW algorithm increases linearly when either the number of actions or the feature vector size of each action is increased. For further speedup, an efficient custom 3D-DTW intellectual property (IP) core is developed using the Xilinx Vivado high-level synthesis (HLS) tool to accelerate the 3D-DTW algorithm in FPGA hardware. The CPU-centric software modeling of the 3D-DTW algorithm is compared with its hardware-accelerated custom IP core: the developed 3D-DTW custom IP core is 40 times faster than its software counterpart. As the hardware results are promising, a parallel hardware-software co-design architecture is proposed on the Xilinx Zynq-7020 System on Chip (SoC) FPGA for action recognition. The HLS simulation and synthesis results are provided to support the practical implementation of the proposed architecture. Our proposed approach outperforms many existing state-of-the-art DTW-based action recognition techniques, providing the highest accuracy of 97.77%.
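The abstract does not give the exact 3D-DTW formulation, but its software baseline can be pictured as standard multivariate DTW, where each frame is a feature vector (e.g. flattened skeletal joint positions) and the local cost is the Euclidean distance between frames. A hedged sketch of that baseline, with toy-sized vectors in place of the paper's 576-dimensional features:

```python
import numpy as np

def dtw_multivariate(seq_a, seq_b):
    """Multivariate DTW between two sequences of per-frame feature vectors.

    seq_a, seq_b: array-likes of shape (frames, features), e.g. flattened
    skeletal joint coordinates per frame. The frame-to-frame cost is the
    Euclidean distance between feature vectors.
    """
    a, b = np.asarray(seq_a, float), np.asarray(seq_b, float)
    n, m = len(a), len(b)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])  # per-frame Euclidean cost
            acc[i, j] = d + min(acc[i - 1, j],
                                acc[i, j - 1],
                                acc[i - 1, j - 1])
    return acc[n, m]
```

The double loop makes the linear growth noted in the abstract concrete: cost scales with the number of frame pairs times the feature dimension, which is what motivates offloading the inner distance computation to an FPGA IP core.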


Electronics ◽  
2020 ◽  
Vol 9 (9) ◽  
pp. 1400
Author(s):  
Wenguo Li ◽  
Zhizeng Luo ◽  
Xugang Xi

Movement trajectory recognition is the key link in sign language (SL) translation research and directly affects the accuracy of SL translation results. A new method is proposed for the accurate recognition of movement trajectories. First, the collected gesture motion information is converted into a fixed coordinate system by a coordinate transformation, and the SL movement trajectory is reconstructed using the adaptive Simpson algorithm to maintain the originality and integrity of the trajectory. The algorithm is then extended to multidimensional time series by using the Mahalanobis distance (MD). The activation function of generalized linear regression (GLR) is modified to optimize the dynamic time warping (DTW) algorithm, which ensures that local shape characteristics are considered together with global amplitude characteristics and avoids abnormal matching during trajectory recognition. Finally, a similarity measure is used to calculate the distance between two warped trajectories and to judge whether they belong to the same category. Experimental results show that this method is effective for the recognition of SL movement trajectories, with a trajectory recognition accuracy of 86.25%. The ratio between the inter-class and intra-class features of the movement trajectories is 20, and the generalization ability of the algorithm is effectively improved.
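The extension to multidimensional series via the Mahalanobis distance can be sketched by swapping DTW's usual local cost for sqrt((x - y)^T S^{-1} (x - y)). This is an illustrative reading, not the paper's exact method (the GLR-based optimization is omitted), and it assumes a single shared inverse covariance `cov_inv` estimated from training data:

```python
import numpy as np

def mahalanobis_dtw(seq_a, seq_b, cov_inv):
    """DTW whose frame-to-frame cost is the Mahalanobis distance
    sqrt((x - y)^T S^-1 (x - y)) under a shared inverse covariance.

    With cov_inv = identity this reduces to Euclidean-cost DTW;
    otherwise correlated/high-variance feature dimensions are
    down-weighted when comparing trajectory points.
    """
    a, b = np.asarray(seq_a, float), np.asarray(seq_b, float)
    n, m = len(a), len(b)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diff = a[i - 1] - b[j - 1]
            d = float(np.sqrt(diff @ cov_inv @ diff))  # Mahalanobis local cost
            acc[i, j] = d + min(acc[i - 1, j],
                                acc[i, j - 1],
                                acc[i - 1, j - 1])
    return acc[n, m]
```

A classifier would then threshold this distance, or compare it across class templates, to decide whether two warped trajectories belong to the same category.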


Sensors ◽  
2020 ◽  
Vol 20 (14) ◽  
pp. 3879 ◽  
Author(s):  
Giovanni Saggio ◽  
Pietro Cavallo ◽  
Mariachiara Ricci ◽  
Vito Errico ◽  
Jonathan Zea ◽  
...  

We propose a sign language recognition system based on wearable electronics and two different classification algorithms. The wearable electronics consisted of a sensory glove and inertial measurement units that captured finger, wrist, and arm/forearm movements. The classifiers were k-Nearest Neighbors with Dynamic Time Warping (a non-parametric method) and Convolutional Neural Networks (a parametric method). Ten sign-words from Italian Sign Language were considered: cose, grazie, maestra, together with words with international meaning such as google, internet, jogging, pizza, television, twitter, and ciao. The signs were repeated one hundred times each by seven people (five males and two females, aged 29–54 y ± 10.34 SD). The classifiers achieved an accuracy of 96.6% ± 3.4 (SD) for k-Nearest Neighbors with Dynamic Time Warping and 98.0% ± 2.0 (SD) for the Convolutional Neural Networks. Our wearable electronics were among the most complete reported, and the classifiers performed at the top level in comparison with other relevant works in the literature.
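The non-parametric branch of such a system combines DTW with k-Nearest Neighbors: each query sequence is labeled by the majority class of its k DTW-nearest training sequences, with no training step beyond storing templates. A minimal sketch under that assumption (`knn_dtw_predict` and the toy templates are hypothetical names for illustration, not the paper's code):

```python
import numpy as np

def dtw_distance(a, b):
    """DTW distance between two sequences (scalar or vector frames)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            acc[i, j] = d + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    return acc[n, m]

def knn_dtw_predict(templates, labels, query, k=1):
    """Label a query by majority vote among its k DTW-nearest templates."""
    order = sorted(range(len(templates)),
                   key=lambda i: dtw_distance(templates[i], query))
    top = [labels[i] for i in order[:k]]
    return max(set(top), key=top.count)
```

DTW as the distance lets a glove/IMU recording match a template signed at a different speed, which is why kNN+DTW is a common non-parametric baseline against parametric models such as CNNs.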


2021 ◽  
Vol 8 (1) ◽  
pp. 200839
Author(s):  
Sophie Van Der Zee ◽  
Paul Taylor ◽  
Ruth Wong ◽  
John Dixon ◽  
Tarek Menacere

Studies of the nonverbal correlates of deception tend to examine liars' behaviours as independent from the behaviour of the interviewer, ignoring joint action. To address this gap, Experiment 1 examined the effect of telling a truth and easy, difficult, and very difficult lies on nonverbal coordination. Nonverbal coordination was measured automatically by applying a dynamic time warping algorithm to motion-capture data. In Experiment 2, interviewees also received instructions that influenced the attention they paid to either the nonverbal or verbal behaviour of the interviewer. Both experiments found that interviewer–interviewee nonverbal coordination increased with lie difficulty. This increase was not influenced by the degree to which interviewees paid attention to their nonverbal behaviour, nor by the degree of the interviewer's suspicion. Our findings are consistent with the broader proposition that people rely on automated processes such as mimicry when under cognitive load.

