visual channel
Recently Published Documents

TOTAL DOCUMENTS: 100 (five years: 34)
H-INDEX: 18 (five years: 1)

2021 ◽  
Vol 4 (1) ◽  
pp. 78-88
Author(s):  
Andita Aprilia Fridayanti
Keyword(s):  

The background of this study is that, alongside the development of science and technology, teachers are required to implement such technology in the learning process and to be skilled in using instructional media. Accordingly, the teaching of Arabic requires interactive tools to facilitate the study of the language. One such tool is audio-visual media in the form of a YouTube channel that teaches Arabic-Indonesian vocabulary. The research question of this study is how the Arabic learning process is implemented using audio-visual YouTube channel media at MTs NU Mranggen. The aim of this study is to analyze that implementation. The study takes a qualitative approach, that is, an approach used to describe in depth the experiences of the research subjects. The results show that applying audio-visual YouTube channel media to the Arabic learning process at MTs NU Mranggen can stimulate students to follow the lesson and attend to the teacher's explanation of the material, so that the learning process becomes more active, affective, and creative.


2021 ◽  
Vol 5 (ISS) ◽  
pp. 1-17
Author(s):  
Edwin Chau ◽  
Jiakun Yu ◽  
Cagatay Goncu ◽  
Anusha Withana

Eyes-free operation of mobile devices is critical in situations where the visual channel is either unavailable or attention is needed elsewhere. In such situations, vibrotactile tracing along paths or lines can help users to navigate and identify symbols and shapes without visual information. In this paper, we investigated the applicability of different metrics that can measure the effectiveness of vibrotactile line tracing methods on touch screens. In two user studies, we compare trace Length Error, Area Error, and Fréchet Distance as alternatives to commonly used trace Time. Our results show that a lower Fréchet Distance correlates better with comprehension of a line trace. Furthermore, we show that distinct feedback methods perform differently with varying geometric features in lines, and we propose a segmented line design for tactile line tracing studies. We believe the results will inform future designs of eyes-free operation techniques and studies.
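The Fréchet Distance compared in this abstract can be computed for sampled touch traces with the standard discrete formulation (Eiter and Mannila). A minimal Python sketch of that metric, not the authors' implementation, assuming traces are given as point lists:

```python
from math import dist
from functools import lru_cache

def frechet_distance(p, q):
    """Discrete Fréchet distance between two polylines p and q,
    each a sequence of (x, y) points. Suitable for short sampled
    traces; the recursion depth grows with trace length."""
    @lru_cache(maxsize=None)
    def c(i, j):
        d = dist(p[i], q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        # Best over the three ways of advancing along either trace
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)

    return c(len(p) - 1, len(q) - 1)
```

An identical pair of traces yields 0, and a trace offset by a constant amount yields that offset, which matches the intuition of the metric as the shortest "leash" coupling the two curves.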


Author(s):  
Tara V. McCarty ◽  
Dawn J. Sowers ◽  
Sophie J. Wolf ◽  
Krista M. Wilkinson

Purpose: Individuals with cortical visual impairment (CVI) can have difficulties with visual processing due to physical damage or atypical structures of visual pathways or visual processing centers in the brain. Many individuals with CVI have concomitant disabilities, including significant communication support needs; these individuals can benefit from augmentative and alternative communication (AAC). Because much AAC involves a visual channel, implementation of AAC must consider the unique visual processing skills and challenges in CVI. However, little is known empirically about how best to design AAC for individuals with CVI. This study examined processing of visual stimuli in four young adolescents with CVI.
Method: This study used a within-subjects experimental design that sought to provide an in-depth description of the visual engagement of individuals with CVI when viewing stimuli of various levels of complexity, either with or without a social cue.
Results: Participants engaged most with the simplest stimuli (relative to the size of those stimuli) and engaged more when a social cue was provided during the task. The level of engagement with more complex stimuli was related to participants' score on the CVI Range, a clinical assessment tool that characterizes level of visual functioning.
Conclusions: Implications for AAC include considerations for the internal complexity of AAC symbols and the complexity of the arrays created for individuals with CVI. Clinicians working with children with CVI who use AAC should consider the unique features of their visual processing.


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Maximilian C. Fink ◽  
Nicole Heitzmann ◽  
Matthias Siebeck ◽  
Frank Fischer ◽  
Martin R. Fischer

Abstract
Background: Simulation-based learning with virtual patients is a highly effective method that could potentially be further enhanced by including reflection phases. The effectiveness of reflection phases for learning to diagnose has mainly been demonstrated for problem-centered instruction with text-based cases, not for simulation-based learning. To close this research gap, we conducted a study on learning history-taking using virtual patients. In this study, we examined the added benefit of including reflection phases on learning to diagnose accurately, the associations between knowledge and learning, and the diagnostic process.
Methods: A sample of N = 121 medical students completed a three-group experiment with a control group and pre- and posttests. The pretest consisted of a conceptual and strategic knowledge test and virtual patients to be diagnosed. In the learning phase, two intervention groups worked with virtual patients and completed different types of reflection phases, while the control group learned with virtual patients but without reflection phases. The posttest again involved virtual patients. For all virtual patients, diagnostic accuracy was assessed as the primary outcome. Current hypotheses were tracked during reflection phases and in simulation-based learning to measure the diagnostic process.
Results: Regarding the added benefit of reflection phases, an ANCOVA controlling for pretest performance found no difference in diagnostic accuracy at posttest between the three conditions, F(2, 114) = 0.93, p = .398. Concerning knowledge and learning, neither pretest conceptual knowledge nor strategic knowledge was associated with learning to diagnose accurately through reflection phases. Learners' diagnostic process improved during simulation-based learning and the reflection phases.
Conclusions: Reflection phases did not have an added benefit for learning to diagnose accurately in virtual patients. This finding indicates that reflection phases may not be as effective in simulation-based learning as in problem-centered instruction with text-based cases, which can be explained by two contextual differences. First, information processing in simulation-based learning uses both the verbal and the visual channel, while text-based learning draws only on the verbal channel. Second, simulation-based learning uses serial cue cases that reveal information step-wise, whereas text-based learning uses whole cases that present all data at once.
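The ANCOVA described in this abstract (group differences in posttest diagnostic accuracy, adjusting for pretest performance) follows the classic one-covariate design: compare a full model with group-specific intercepts and a pooled covariate slope against a reduced model with the covariate alone. A minimal pure-Python sketch of that F-test; the group names and data shown are illustrative, not the study's:

```python
def ancova_f(groups):
    """One-way ANCOVA with a single covariate: tests for group
    differences in the outcome after adjusting for the covariate.
    groups maps group name -> list of (covariate, outcome) pairs.
    Returns (F, df1, df2). Assumes homogeneous slopes; no guard
    against a saturated (zero-residual) full model."""
    def sums(pairs):
        n = len(pairs)
        mx = sum(x for x, _ in pairs) / n
        my = sum(y for _, y in pairs) / n
        sxx = sum((x - mx) ** 2 for x, _ in pairs)
        sxy = sum((x - mx) * (y - my) for x, y in pairs)
        syy = sum((y - my) ** 2 for _, y in pairs)
        return sxx, sxy, syy

    k = len(groups)
    all_pairs = [p for g in groups.values() for p in g]
    n = len(all_pairs)

    # Reduced model: outcome ~ covariate (one intercept, one slope)
    sxx_t, sxy_t, syy_t = sums(all_pairs)
    sse_reduced = syy_t - sxy_t ** 2 / sxx_t

    # Full model: group-specific intercepts, pooled within-group slope
    per = [sums(g) for g in groups.values()]
    sxx_w = sum(s[0] for s in per)
    sxy_w = sum(s[1] for s in per)
    syy_w = sum(s[2] for s in per)
    sse_full = syy_w - sxy_w ** 2 / sxx_w

    df1, df2 = k - 1, n - k - 1
    f = ((sse_reduced - sse_full) / df1) / (sse_full / df2)
    return f, df1, df2
```

With three conditions and the study's sample size, df1 would be 2, matching the reported F(2, 114) up to covariates not spelled out in the abstract.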


Author(s):  
Yaroslava Gnezdilova

The article examines discourse-specific characteristics of metacommunication typical of TV/radio broadcasts. It identifies TV/radio discourse as oral media discourse that uses the audio (and visual) channel of information transfer. The study traces the realization of six main types of metacommunication, introduced by the corresponding groups of meta-means, called phatic, regulative, referential, reflective, cohesive and modal, in TV/radio discourse, in order to establish its discourse-specific metacommunication characteristics and to identify the "regular meta-means of oral media discourse". Based on quantitative analysis, the article confirms that the metacommunicative specificity of TV/radio discourse lies in its regulative character; consequently, almost all possible patterns of accentuating means (denoting appeals, intentions, intensification, degree, as well as attracting or affecting) and commentating means (denoting confirmation, disagreement, distancing, explanation, generalization, comparison and condition, confession, etc.) are realized there. It also shows that phatic metacommunication is the second most important in TV/radio discourse, owing to the host's use of contact-establishing, contact-maintaining and contact-terminating meta-means. Special attention is paid to the same phatic means being used at the beginning and at the end of a program. The research finds that referential metacommunication is least typical of TV/radio discourse because the spontaneity of oral speech does not allow careful recollection of word-for-word citations, their authors, etc.


2021 ◽  
Vol 3 ◽  
Author(s):  
Maria-Elissavet Nikolaidou ◽  
Vasilios Karfis ◽  
Maria Koutsouba ◽  
Arno Schroll ◽  
Adamantios Arampatzis

Dance has been suggested to be an advantageous exercise modality for improving postural balance performance and reducing the risk of falls in the older population. The main purpose of this study was to investigate whether visual restriction impacts older dancers and non-dancers differently during a quiet stance balance performance test. We hypothesized higher balance performance and greater balance deterioration due to visual restriction in dancers compared with non-dancers, indicating the superior contribution of the visual channel in the expected higher balance performances of dancers. Sixty-nine (38 men, 31 women, 74 ± 6 years) healthy older adults participated and were grouped into a Greek traditional dance group (n = 31, two to three times/week for 1.5 h/session, minimum of 3 years) and a non-dancer control group (n = 38, no systematic exercise history). The participants completed an assessment of one-legged quiet stance trials using both left and right legs and with eyes open while standing barefoot on a force plate (Wii, A/D converter, 1,000 Hz; Biovision) and two-legged trials with both eyes open and closed. The possible differences in the anthropometric and one-legged balance parameters were examined by a univariate ANOVA with group and sex as fixed factors. This ANOVA was performed using the same fixed factors and vision as the repeated measures factor for the two-legged balance parameters. In the one-legged task, the dance group showed significantly lower values in anteroposterior and mediolateral sway amplitudes (p = 0.001 and p = 0.035) and path length measured in both directions (p = 0.001) compared with the non-dancers. In the two-legged stance, we found a significant vision effect on path length (p < 0.001) and anteroposterior amplitude (p < 0.001), whereas mediolateral amplitude did not differ significantly (p = 0.439) between closed and open eyes. 
The dance group had a significantly lower CoP path length (p = 0.006) and anteroposterior (p = 0.001) and mediolateral sway amplitudes (p = 0.003) in both the eyes-open and eyes-closed trials compared with the control group. The superior balance performance in the two postural tasks found in the dancers is possibly the result of the coordinated, aesthetically oriented intersegmental movements, including alternations between one- and two-legged stance phases, that come with dance. Visual restriction resulted in a similar deterioration of balance performance in both groups, thus suggesting that the contribution of the visual channel alone cannot explain the superior balance performance of dancers.
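The sway measures compared in this study (CoP path length and anteroposterior/mediolateral sway amplitudes) can be derived from the centre-of-pressure trace recorded by the force plate. A minimal sketch, assuming amplitude is taken as the peak-to-peak range in each direction; the abstract does not state the exact definitions used:

```python
from math import hypot

def cop_metrics(cop):
    """Sway metrics from a centre-of-pressure trace.
    cop is a sequence of (ap, ml) coordinates, e.g. in mm:
    anteroposterior and mediolateral positions per sample."""
    # Path length: total distance travelled by the CoP
    path_length = sum(hypot(x2 - x1, y2 - y1)
                      for (x1, y1), (x2, y2) in zip(cop, cop[1:]))
    ap = [p[0] for p in cop]
    ml = [p[1] for p in cop]
    return {
        "path_length": path_length,
        "ap_amplitude": max(ap) - min(ap),   # peak-to-peak AP range
        "ml_amplitude": max(ml) - min(ml),   # peak-to-peak ML range
    }
```

Lower values on all three measures, as reported for the dance group, indicate a more stable quiet stance.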


2021 ◽  
Vol 11 (18) ◽  
pp. 8772
Author(s):  
Laura Raya ◽  
Sara A. Boga ◽  
Marcos Garcia-Lorenzo ◽  
Sofia Bayona

Technological advances enable the capture and management of complex data sets that need to be correctly understood. Visualisation techniques can help in complex data analysis and exploration, but sometimes the visual channel is not enough, or it is not always available. Some authors propose using the haptic channel to reinforce or substitute the visual sense, but the limited human haptic short-term memory still poses a challenge. We present the haptic tuning fork, a reference signal displayed before the haptic information for increasing the discriminability of haptic icons. With this reference, the user does not depend only on short-term memory. We have decided to evaluate the usefulness of the haptic tuning fork in impedance kinesthetic devices as these are the most common. Furthermore, since the renderable signal ranges are device-dependent, we introduce a methodology to select a discriminable set of signals called the haptic scale. Both the haptic tuning fork and the haptic scale proved their usefulness in the performed experiments regarding haptic stimuli varying in frequency.


Sensors ◽  
2021 ◽  
Vol 21 (17) ◽  
pp. 5722
Author(s):  
Jianmin Wang ◽  
Yujia Liu ◽  
Tianyang Yue ◽  
Chengji Wang ◽  
Jinjing Mao ◽  
...  

Anthropomorphic robots need to maintain effective and emotive communication with humans as automotive agents to establish and maintain effective human–robot performances and positive human experiences. Previous research has shown that the characteristics of robot communication positively affect human–robot interaction outcomes such as usability, trust, workload, and performance. In this study, we investigated the characteristics of transparency and anthropomorphism in robotic dual-channel communication, encompassing the voice channel (low or high, increasing the amount of information provided by textual information) and the visual channel (low or high, increasing the amount of information provided by expressive information). The results showed the benefits and limitations of increasing the transparency and anthropomorphism, demonstrating the significance of the careful implementation of transparency methods. The limitations and future directions are discussed.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
James Trujillo ◽  
Asli Özyürek ◽  
Judith Holler ◽  
Linda Drijvers

Abstract In everyday conversation, we are often challenged with communicating in non-ideal settings, such as in noise. Increased speech intensity and larger mouth movements are used to overcome noise in constrained settings (the Lombard effect). How we adapt to noise in face-to-face interaction, the natural environment of human language use, where manual gestures are ubiquitous, is currently unknown. We asked Dutch adults to wear headphones with varying levels of multi-talker babble while attempting to communicate action verbs to one another. Using quantitative motion capture and acoustic analyses, we found that (1) noise is associated with increased speech intensity and enhanced gesture kinematics and mouth movements, and (2) acoustic modulation only occurs when gestures are not present, while kinematic modulation occurs regardless of co-occurring speech. Thus, in face-to-face encounters the Lombard effect is not constrained to speech but is a multimodal phenomenon where the visual channel carries most of the communicative burden.


2021 ◽  
Author(s):  
David Miralles ◽  
Guillem Garrofé ◽  
Calota Parés ◽  
Alejandro González ◽  
Gerard Serra ◽  
...  

Abstract The cognitive connection between the senses of touch and vision is probably the best-known case of cross-modality. Recent discoveries suggest that the mapping between both senses is learned rather than innate. This evidence opens the door to a dynamic cross-modality that allows individuals to adaptively develop within their environment. Mimicking this aspect of human learning, we propose a new cross-modal mechanism that allows artificial cognitive systems (ACS) to adapt quickly to unforeseen perceptual anomalies generated by the environment or by the system itself. In this context, visual recognition systems have advanced remarkably in recent years thanks to the creation of large-scale datasets together with the advent of deep learning algorithms. However, such advances have not occurred in the haptic modality, mainly due to the lack of two-handed dexterous datasets that allow learning systems to process the tactile information of human object exploration. This data imbalance limits the creation of synchronized multimodal datasets that would enable the development of cross-modality in ACS during object exploration. In this work, we use a multimodal dataset recently generated from tactile sensors placed on a collection of objects that capture haptic data from human manipulation, together with the corresponding visual counterpart. Using this data, we create a cross-modal learning transfer mechanism capable of detecting both sudden and permanent anomalies in the visual channel, and we maintain visual object recognition performance by retraining the visual mode for a few minutes using haptic information. Here we show the importance of cross-modality in perceptual awareness and its ecological capabilities to self-adapt to different environments.

