The Multimodal Mediation of Knowledge: Instructors’ Explanations in a Scientific Café

2019 ◽  
Vol 8 (2) ◽  
Author(s):  
Claire Polo ◽  
Jean-Marc Colletta

Abstract In this paper, we present an in-depth study of the interplay between various semiotic modes in the specific instructional setting of a scientific café. We analyzed the multimodal performance of five female instructors delivering a monologue explanation during instruction along the following dimensions: speech, gesture (hand gestures, head orientation, and gaze), and use of written didactic material. Results first point to the crucial role played by referential hand gestures together with gaze-body behavior, both in representing new concepts (conceptual mediation) and in building bridges between information displayed in several modes (semiotic mediation). They also show cross-individual differences in instructors’ multimodal performance, which we propose to interpret as three distinct modes of mediating knowledge, guiding being the only one that provides both conceptual and semiotic mediation.

Author(s):  
Sukhendra Singh ◽  
G. N. Rathna ◽  
Vivek Singhal

Introduction: Sign language is the only way for speech-impaired people to communicate. Because most hearing people do not know sign language, a communication barrier arises. In this paper, we present a solution that captures hand gestures with a Kinect camera and classifies each gesture into its correct symbol. Method: We used a Kinect camera rather than an ordinary web camera because an ordinary camera does not capture the 3D orientation or depth of an image, whereas the Kinect does, which makes classification more accurate. Result: The Kinect camera produces different images for the hand gestures ‘2’ and ‘V’, and similarly for ‘1’ and ‘I’, whereas a normal web camera cannot distinguish between these pairs. We used hand gestures for Indian Sign Language, and our dataset contained 46,339 RGB images and 46,339 depth images. 80% of the images were used for training and the remaining 20% for testing. In total, 36 hand gestures were considered: 26 for the alphabets A–Z and 10 for the digits 0–9. Conclusion: Along with a real-time implementation, we also compare the performance of various machine learning models and find that a CNN on depth images gives the most accurate performance of all the models. All these results were obtained on a PYNQ-Z2 board.
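The 80/20 train-test split over the 46,339 paired images can be sketched as below. This is a minimal illustration, not the authors' actual pipeline: the label layout (36 classes cycling over the index range) is an assumption made so the example is self-contained.

```python
import numpy as np

def stratified_split(labels, train_frac=0.8, seed=0):
    """Split sample indices per class, roughly 80% train / 20% test."""
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)
        rng.shuffle(idx)
        cut = int(round(train_frac * len(idx)))
        train_idx.extend(idx[:cut])
        test_idx.extend(idx[cut:])
    return np.array(train_idx), np.array(test_idx)

# Hypothetical label vector: 36 classes (26 letters + 10 digits).
labels = np.arange(46339) % 36
train, test = stratified_split(labels)
print(len(train), len(test))  # roughly 80% / 20% of 46,339
```

Stratifying per class keeps all 36 gestures represented in both splits, which matters when per-class counts are uneven.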


2021 ◽  
pp. 147612702110120
Author(s):  
Siavash Alimadadi ◽  
Andrew Davies ◽  
Fredrik Tell

Research on the strategic organization of time often assumes that collective efforts are motivated by and oriented toward achieving desirable, although not necessarily well-defined, future states. In situations surrounded by uncertainty where work has to proceed urgently to avoid an impending disaster, however, temporal work is guided by engaging with both desirable and undesirable future outcomes. Drawing on a real-time, in-depth study of the inception of the Restoration and Renewal program of the Palace of Westminster, we investigate how organizational actors develop a strategy for an uncertain and highly contested future while safeguarding ongoing operations in the present and preserving the heritage of the past. Anticipation of undesirable future events played a crucial role in mobilizing collective efforts to move forward. We develop a model of future desirability in temporal work to identify how actors construct, link, and navigate interpretations of desirable and undesirable futures in their attempts to create a viable path of action. By conceptualizing temporal work based on the phenomenological quality of the future, we advance understanding of the strategic organization of time in pluralistic contexts characterized by uncertainty and urgency.


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Shahzad Ahmed ◽  
Dingyang Wang ◽  
Junyoung Park ◽  
Sung Ho Cho

Abstract In the past few decades, deep learning algorithms have become more prevalent for signal detection and classification. To design machine learning algorithms, however, an adequate dataset is required. Motivated by the existence of several open-source camera-based hand gesture datasets, this descriptor presents UWB-Gestures, the first public dataset of twelve dynamic hand gestures acquired with ultra-wideband (UWB) impulse radars. The dataset contains a total of 9,600 samples gathered from eight different human volunteers. UWB-Gestures eliminates the need to employ UWB radar hardware to train and test the algorithm. Additionally, the dataset can provide a competitive environment for the research community to compare the accuracy of different hand gesture recognition (HGR) algorithms, enabling the provision of reproducible research results in the field of HGR through UWB radars. Three radars were placed at three different locations to acquire the data, and the respective data were saved independently for flexibility.
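The dataset's layout (8 volunteers × 12 gestures = 9,600 samples, saved independently per radar) can be indexed as sketched below. The 100 repetitions per volunteer per gesture are inferred from the counts, and the radar names and key format are invented for illustration; the actual file naming in UWB-Gestures may differ.

```python
from itertools import product

N_VOLUNTEERS, N_GESTURES, N_REPS = 8, 12, 100  # 8 * 12 * 100 = 9,600 samples
RADARS = ["left", "top", "right"]  # hypothetical names for the three placements

def sample_key(volunteer, gesture, rep, radar):
    """Build a lookup key for one recording; the naming scheme is illustrative."""
    return f"v{volunteer:02d}_g{gesture:02d}_r{rep:03d}_{radar}"

keys = [sample_key(v, g, r, radar)
        for v, g, r in product(range(N_VOLUNTEERS), range(N_GESTURES), range(N_REPS))
        for radar in RADARS]
print(len(keys))  # 9,600 samples, each recorded by 3 radars
```

Keeping one key per (sample, radar) pair mirrors the descriptor's choice to save each radar's data independently, so algorithms can be trained on any subset of radars.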


2018 ◽  
Vol 14 (7) ◽  
pp. 155014771879075 ◽  
Author(s):  
Kiwon Rhee ◽  
Hyun-Chool Shin

In electromyogram-based hand gesture recognition, accuracy may degrade in practical applications for various reasons, such as electrode positioning bias and differences between subjects. Beyond these, the change in electromyogram signals caused by different arm postures, even for identical hand gestures, is also an important issue. We propose an electromyogram-based hand gesture recognition technique that is robust to diverse arm postures. The proposed method uses the signals of the accelerometer and the electromyogram simultaneously to recognize hand gestures correctly across various arm postures. For recognition, the electromyogram signals are statistically modeled with the arm postures taken into account. In the experiments, we compared recognition that accounted for arm postures with recognition that disregarded them. When varied arm postures were disregarded, the recognition accuracy for hand gestures was 54.1%, whereas the proposed method achieved an average recognition accuracy of 85.7%, an improvement of 31.6 percentage points. Using accelerometer and electromyogram signals together compensated for the effect of different arm postures on the electromyogram signals and therefore improved the recognition accuracy of hand gestures.
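The core idea, inferring the arm posture from the accelerometer and then applying a posture-specific EMG model, can be sketched as below. Everything here is illustrative: the paper uses a statistical model of the EMG signals, for which a nearest-mean template match is a stand-in, and the template vectors and gravity directions are invented.

```python
import numpy as np

# Hypothetical mean EMG feature vectors per (posture, gesture) pair.
templates = {
    (0, "fist"):   np.array([1.0, 0.1]),
    (0, "spread"): np.array([0.1, 1.0]),
    (1, "fist"):   np.array([2.0, 0.3]),
    (1, "spread"): np.array([0.3, 2.0]),
}
# Hypothetical gravity vectors seen by the accelerometer in each arm posture.
posture_means = {0: np.array([0.0, 0.0, 9.8]),
                 1: np.array([9.8, 0.0, 0.0])}

def classify(acc, emg):
    # 1) infer the arm posture from the accelerometer reading
    posture = min(posture_means, key=lambda p: np.linalg.norm(acc - posture_means[p]))
    # 2) match EMG features only against that posture's gesture templates
    candidates = {g: t for (p, g), t in templates.items() if p == posture}
    return min(candidates, key=lambda g: np.linalg.norm(emg - candidates[g]))

print(classify(np.array([9.5, 0.1, 0.2]), np.array([1.9, 0.4])))  # prints "fist"
```

Conditioning the EMG model on the inferred posture is what removes the posture-induced variation that a posture-agnostic classifier would confuse with a different gesture.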


2020 ◽  
Vol 2 (1) ◽  
pp. 60-73
Author(s):  
Rahmiy Kurniasary ◽  
Ismail Sukardi ◽  
Ahmad Syarifuddin

Hand gesture method including requires high memorization ability, some students are not active and focus in synchronizing the pronunciation of lafadz verses and doing hand gestures in learning to memorize and interpret the Qur'an. The purpose of this study was to determine the application of the method of hand gesture in learning to memorize and interpret the Qur'an of students inX garade in Madrasah Aliyah Negeri 1 Prabumulih. The research method used is descriptive qualitative analysis that discusses the application of the method of hand gesture in learning to memorize and interpret the Qur'an of students inX grade in Madrasah Aliyah Negeri 1 Prabumulih. The type of approach used descriptive qualitative with data collection techniques through observation, interviews, documentation and triangulation. Analysis of data qualitatively through three stages, namely data reduction, data presentation and conclusion stages. The results of research conducted by researchers are, first, the steps in the application of hand sign method by the teacher of Al-Qur'an Hadith in X.IPA3 includes teacher activities, namely the teacher explains the material and gives examples of verses to be memorized and interpreted using method of hand gestures on learning video shows on the projector. Student activities, namely students apply the method of hand gesture to the verse that has been taught. Second, supporting factors in the application of hand gesture methods in the form of internal factors, namely from the level of willingness and ability to memorize, external namely in terms of the use of media, teacher skills and a pleasant learning atmosphere. Third, the inhibiting factor in the application of the hand gesture method is the time required by each student, the level of student willingness, skills in making hand gestures and synchronization between the pronunciation of lafadz with hand movements.


2020 ◽  
pp. 1-15
Author(s):  
Anna Bishop ◽  
Erica A. Cartmill

Abstract Classic Maya (a.d. 250–900) art is filled with expressive figures in a variety of highly stylized poses and postures. These poses are so specific that they appear to be intentionally communicative, yet their meanings remain elusive. A few studies have scratched the surface of this issue, suggesting that a correlation exists between body language and social roles in Maya art. The present study examines whether one type of body language (hand gestures) in Classic Maya art represents and reflects elements of social structure. This analysis uses a coding approach derived from studies of hand gesture in conversation to apply an interactional approach to a static medium, thereby broadening the methods used to analyze gesture in ancient art. Statistics are used to evaluate patterns of gesture use in palace scenes across 289 figures on 94 different vases, with results indicating that the form and angling of gestures are related to social hierarchy. Furthermore, this study considers not just the individual status of each figure, but the interaction between figures. The results not only shed light on how gesture was depicted in Maya art, but also demonstrate how figural representation reflects social structure.
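A statistical evaluation of gesture patterns against social hierarchy, as carried out over the 289 coded figures, can be sketched with a Pearson chi-square test of independence on a contingency table. The row/column categories and counts below are invented for illustration and do not come from the study.

```python
# Hypothetical contingency table: rows = figure status (ruler, subordinate),
# columns = gesture form (palm-up, palm-down, pointing). Counts are invented.
table = [[40, 25, 10],
         [15, 50, 60]]

def chi_square(table):
    """Pearson chi-square statistic for a 2D contingency table."""
    row_sums = [sum(row) for row in table]
    col_sums = [sum(col) for col in zip(*table)]
    total = sum(row_sums)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_sums[i] * col_sums[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

print(round(chi_square(table), 2))
```

A large statistic relative to the chi-square critical value for (rows−1)×(cols−1) degrees of freedom would indicate that gesture form and status are not independent.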


2020 ◽  
Vol 6 (8) ◽  
pp. 73 ◽  
Author(s):  
Munir Oudah ◽  
Ali Al-Naji ◽  
Javaan Chahl

Hand gestures are a form of nonverbal communication that can be used in several fields, such as communication between deaf-mute people, robot control, human–computer interaction (HCI), home automation, and medical applications. Research papers based on hand gestures have adopted many different techniques, including those based on instrumented sensor technology and computer vision. Hand signs can be classified under many headings, such as posture and gesture, dynamic and static, or a hybrid of the two. This paper reviews the literature on hand gesture techniques and introduces their merits and limitations under different circumstances. In addition, it tabulates the performance of these methods, focusing on computer vision techniques, and covers their similarities and differences, the hand segmentation technique used, classification algorithms and their drawbacks, the number and types of gestures, the dataset used, the detection range (distance), and the type of camera used. The paper is a thorough general overview of hand gesture methods, closing with a brief discussion of some possible applications.


2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Guoliang Chen ◽  
Kaikai Ge

In this paper, a fusion method based on multiple features and a hidden Markov model (HMM) is proposed for recognizing the dynamic hand gestures corresponding to an operator’s instructions in robot teleoperation. First, a valid dynamic hand gesture is segmented from the continuously acquired data according to the velocity of the moving hand. Second, a feature set is introduced for dynamic hand gesture expression, comprising four sorts of features: palm posture, bending angle, the opening angle of the fingers, and gesture trajectory. Finally, HMM classifiers based on these features are built, and a weighted calculation model fusing the probabilities of the four sorts of features is presented. The proposed method is evaluated on dynamic hand gestures acquired by a Leap Motion (LM) sensor, reaching recognition rates of about 90.63% on the LM-Gesture3D dataset created for this paper and 93.3% on the Letter-gesture dataset, respectively.
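The weighted fusion step can be sketched as below. The gesture names, per-feature log-likelihoods, and weights are all invented for illustration; in the actual system the per-feature scores would come from the four trained HMM banks, and the weights would be tuned.

```python
# Hypothetical log-likelihoods from four feature-specific HMM banks
# (palm posture, bending angle, finger opening angle, trajectory)
# for each candidate gesture class.
loglik = {
    "swipe":  {"palm": -3.1, "bend": -2.8, "open": -3.0, "traj": -1.5},
    "circle": {"palm": -2.9, "bend": -3.5, "open": -2.7, "traj": -4.0},
}
weights = {"palm": 0.2, "bend": 0.2, "open": 0.2, "traj": 0.4}  # illustrative

def fused_score(scores):
    """Weighted sum of per-feature log-likelihoods."""
    return sum(weights[f] * s for f, s in scores.items())

best = max(loglik, key=lambda g: fused_score(loglik[g]))
print(best)  # prints "swipe"
```

Weighting in log space multiplies the per-feature probabilities with exponents equal to the weights, so a feature such as trajectory can be given more influence than the hand-shape features.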


2017 ◽  
Vol 10 (27) ◽  
pp. 1329-1342 ◽  
Author(s):  
Javier O. Pinzon Arenas ◽  
Robinson Jimenez Moreno ◽  
Paula C. Useche Murillo

This paper presents the implementation of a Region-based Convolutional Neural Network focused on the recognition and localization of hand gestures, in this case two types of gestures, open and closed hand, in order to recognize such gestures against dynamic backgrounds. The neural network is trained and validated, achieving a 99.4% validation accuracy in gesture recognition and a 25% average accuracy in RoI localization. It is then tested in real time, where its operation is verified through the times taken for recognition, its behavior on trained and untrained gestures, and complex backgrounds.
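RoI localization accuracy for a detector like this is typically scored by the intersection-over-union (IoU) between predicted and ground-truth boxes. A minimal helper is sketched below; the (x1, y1, x2, y2) corner convention is an assumption chosen for the example, not necessarily the paper's.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175, about 0.143
```

A localization is usually counted as correct when IoU exceeds a threshold such as 0.5, which is one common way an "average accuracy in RoI localization" can be computed.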


2019 ◽  
Vol 2019 ◽  
pp. 1-7 ◽  
Author(s):  
Peng Liu ◽  
Xiangxiang Li ◽  
Haiting Cui ◽  
Shanshan Li ◽  
Yafei Yuan

Hand gesture recognition is an intuitive and effective way for humans to interact with a computer, owing to its high processing speed and recognition accuracy. This paper proposes a novel approach to identifying hand gestures in complex scenes using the Single-Shot Multibox Detector (SSD) deep learning algorithm with a 19-layer neural network. A benchmark gesture database is used, and general hand gestures in complex scenes are chosen as the processing objects. A real-time hand gesture recognition system based on the SSD algorithm is constructed and tested. The experimental results show that the algorithm quickly identifies human hands and accurately distinguishes different types of gestures. Furthermore, the maximum accuracy is 99.2%, which is significant for human-computer interaction applications.
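An SSD-style detector emits many overlapping candidate boxes per hand, and the standard post-processing step is non-maximum suppression (NMS), which keeps only the highest-scoring box among heavily overlapping ones. A greedy sketch follows; the 0.5 overlap threshold and the box values are illustrative, not taken from the paper.

```python
def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression over (x1, y1, x2, y2) boxes."""
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (area(a) + area(b) - inter)

    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        # discard remaining boxes that overlap the kept box too much
        order = [i for i in order if iou(boxes[best], boxes[i]) < thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # the second box overlaps the first and is dropped
```

Keeping suppression greedy and score-ordered is what lets the detector report one box per hand even when dozens of anchor boxes fire on the same region.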

