audio feedback
Recently Published Documents

TOTAL DOCUMENTS: 194 (five years: 55)
H-INDEX: 13 (five years: 3)

2022 ◽  
pp. 1039-1057
Author(s):  
Melissa Cain ◽  
Melissa Fanshawe

As educators, we aim for students to seek, identify, and utilize a range of feedback to gain an understanding of their present performance in relation to learning goals, and ultimately to identify and use tools to close the gap between present and desired performance. We strive for all students to be their “own first assessors”—intelligent deciders—and develop the independence to self-assess the quality of their own work when they leave higher education institutions and enter the workforce. For students with a print disability such as vision impairment or blindness, traditional forms of feedback may not be successful in providing the information they need to close the gap. The most important issue for these students is access to feedback and agency in the feedback conversation. It is incumbent on higher education educators to find ways to provide equity of access to the provision and reception of feedback for all students. As such, this chapter explores ways for providing feedback to students with a vision impairment to ensure they are able to contextualize and utilize the feedback to improve learning outcomes. This is achieved by aligning the use of mobile technologies and audio feedback with the key principles of connectivism—autonomy, connectedness, diversity, and openness—to provide educators with recommendations.


2021 ◽  
Vol 11 (12) ◽  
pp. 1562-1570
Author(s):  
Mohammed Abdullah Alharbi ◽  
Abdurrazzag Alghammas

Given the importance of instructors’ feedback on students’ written assignments as part of formative assessment, and the relative novelty of delivering feedback in audio form, this case study explored an instructor’s audio versus written feedback on 15 pairs of undergraduates’ written tasks, provided through Google Docs over an academic semester at a Saudi public university. The data were collected from the actual feedback comments in both modes and from follow-up interviews with the students. Content analysis revealed that audio feedback differed from written feedback in both quantity and content. Despite the potential of audio feedback revealed by the content analysis, a small majority of students (16) preferred written feedback for its clarity, ease of use, easy access, and focus on specific issues in the assignments, whereas the remaining 14 preferred audio feedback. The challenges the students highlighted, including the length and detail of audio feedback and the difficulty of accessing it, inform several pedagogical implications for instructors.


2021 ◽  
Vol 11 (12) ◽  
pp. 1610-1621
Author(s):  
Shufen Chen

To effectively improve the English speaking ability of Chinese college students, this paper explores the effectiveness of oral English practice and teacher audio feedback via the WeChat Mini Program Sharedaka. The research instruments included a 10-week daka practice, two questionnaires, and an interview. The study found that: 1) oral English practice via Sharedaka has a positive impact on Chinese college students’ English speaking ability; 2) teacher audio feedback caters better to students’ needs and helps improve their pronunciation and intonation; and 3) communication via Sharedaka creates a more relaxed atmosphere between teachers and students.


2021 ◽  
Author(s):  
Jason Long

<p>A closed-loop control system is any configuration that feeds information about its output back into the control stream. Such systems have been in use for hundreds of years across engineering disciplines, carrying out operations such as keeping rooms at the correct temperature, implementing cruise control in cars, and precisely positioning industrial machinery. When a musician performs a piece, a kind of biological closed loop is invoked: the player continuously listens to the sound of their instrument and adjusts their actions to ensure the performance is as desired. However, most musical robots do not possess this ability, relying instead on open-loop systems without feedback. This results in the need for considerable manual intervention from the robots’ operators, unintuitive control interfaces for composing and performing music with them, and tuning, timing, dynamics and other issues during performances. This thesis investigates applying closed-loop audio feedback techniques to the creation of musical robots to equip them with new expressive capabilities, interactive applications, musical accuracy, and greater autonomy. To realise these objectives, following an investigation of the history of musical automata and musical robotic control systems, several new robotic musical instruments are developed based on the principles of utilising embedded musical information retrieval techniques to allow the instruments to continuously ‘listen’ to themselves while they play. The mechanical and electronic systems and firmware of a closed-loop glockenspiel, a modular unpitched percussion control system, and a robotic chordophone control system are described in detail, utilising new software and hardware created to be accessible to electronic artists.
The novel capabilities of the instruments are demonstrated both through quantitative evaluations of their subsystems’ performance and through original musical works composed specifically for the instruments. This paradigm shift in musical robotic construction paves the way for a new class of robots that are intuitive to use, highly accurate and reliable, and possess a unique level of musical expressiveness.</p>
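The closed-loop principle the thesis builds on can be illustrated with a minimal sketch: a hypothetical robotic striker "listens" to the loudness of each note it produces and nudges its strike velocity toward a target level with a proportional controller. All names, gains, and the toy plant model below are illustrative assumptions, not the thesis's actual firmware.

```python
# Minimal closed-loop sketch: a hypothetical robotic striker measures the
# loudness of each note and corrects its strike velocity toward a target.
# Proportional control only; gains and the plant model are illustrative.

def make_striker(target_loudness: float, gain: float = 0.5):
    """Return a controller mapping measured loudness to the next velocity."""
    def next_velocity(current_velocity: float, measured_loudness: float) -> float:
        error = target_loudness - measured_loudness   # feedback signal
        return current_velocity + gain * error        # proportional correction
    return next_velocity

def simulate(steps: int = 20) -> float:
    """Toy plant: loudness is simply equal to strike velocity."""
    controller = make_striker(target_loudness=0.8)
    velocity = 0.2
    for _ in range(steps):
        loudness = velocity                  # the robot 'hears' its own note
        velocity = controller(velocity, loudness)
    return velocity

# After enough iterations the velocity converges on the target loudness,
# which an open-loop robot (no feedback term) could not do automatically.
```

An open-loop version of the same robot would simply play back preset velocities, which is why tuning and dynamics drift during performances, as the abstract notes.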


Electronics ◽  
2021 ◽  
Vol 10 (22) ◽  
pp. 2756
Author(s):  
Mukhriddin Mukhiddinov ◽  
Jinsoo Cho

Individuals with visual impairments or blindness encounter difficulties in moving independently and face various problems in their daily lives. As a solution, artificial intelligence and computer vision approaches can help blind and visually impaired (BVI) people carry out their primary activities without much dependence on others. Smart glasses are a potential assistive technology for BVI people, aiding individual travel and providing social comfort and safety. In practice, however, BVI people are often unable to move about alone, particularly in dark scenes and at night. In this study, we propose a smart glasses system for BVI people that employs computer vision techniques, deep learning models, audio feedback, and tactile graphics to facilitate independent movement in night-time environments. The system comprises four models: a low-light image enhancement model, an object recognition and audio feedback model, a salient object detection model, and a text-to-speech and tactile graphics generation model. The system assists users by: (1) enhancing the contrast of images under low-light conditions using a two-branch exposure-fusion network; (2) guiding users with audio feedback from a transformer encoder–decoder object detection model that can recognize 133 object categories, such as people, animals, and cars; and (3) providing access to visual information through salient object extraction, text recognition, and a refreshable tactile display. We evaluated the performance of the system and achieved competitive results on the challenging Low-Light and ExDark datasets.
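The four-stage pipeline the abstract describes can be sketched as a chain of plain functions. The stage names follow the abstract; the function signatures, bodies, and placeholder return values below are illustrative assumptions, not the authors' code.

```python
# Sketch of the described four-stage pipeline. Each stage is a stub whose
# real counterpart would be a trained model; outputs here are placeholders.

def enhance_low_light(image):
    """Stage 1: contrast enhancement (two-branch exposure-fusion network)."""
    return {"enhanced": image}

def detect_objects(frame):
    """Stage 2: transformer encoder-decoder object detection."""
    return [{"label": "person", "score": 0.9}]     # placeholder detection

def extract_salient(frame):
    """Stage 3: salient object extraction and text recognition."""
    return {"salient": frame, "text": ""}

def render_feedback(detections, salient):
    """Stage 4: audio feedback plus tactile-graphics generation."""
    return [d["label"] for d in detections]        # spoken labels, simplified

def process(image):
    """Run one camera frame through all four stages in order."""
    frame = enhance_low_light(image)["enhanced"]
    return render_feedback(detect_objects(frame), extract_salient(frame))
```

The ordering matters: enhancement runs first so that the detection and saliency stages operate on a usable night-time image, which is the system's stated contribution over daytime-only assistive glasses.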


2021 ◽  
Vol 5 (ISS) ◽  
pp. 1-33
Author(s):  
Lauren Thevin ◽  
Nicolas Rodier ◽  
Bernard Oriola ◽  
Martin Hachet ◽  
Christophe Jouffrais ◽  
...  

Board games allow us to share collective entertainment experiences. They entertain through interactions between players, physical manipulation of tokens, and decision making. Unfortunately, most board games exclude people with visual impairments, as they were not initially designed for players with special needs. Through a user-centered design process with an accessible game library and visually impaired players, we observed challenges and solutions in making existing board games accessible through handcrafted adaptations (tactile stickers, braille labels, etc.). In a second step, we used Spatial Augmented Reality (SAR) to make existing board games inclusive by adding interactivity (GameARt). In a case study with an existing board game considered inaccessible (Jamaica), we designed an interactive SAR version with touch detection (JamaicAR). We evaluated this prototype in a user study with 5 groups of 3 players each, including sighted, low-vision, and blind players. All players, regardless of visual status, were able to play the augmented reality game. Moreover, the game was rated positively by all players regarding attractiveness, play engrossment, enjoyment, and social connectivity. Our work shows that Spatial Augmented Reality has the potential to make board games accessible to people with visual impairments where handcrafted adaptations fall short.


2021 ◽  
Vol 4 (2) ◽  
pp. 1
Author(s):  
Maryam Hashemi

Humans have long been interested in learning to control involuntary actions such as heartbeat, blood pressure, and breathing, and much research has been devoted to this goal. One method used in this field is biofeedback, in which a person can roughly control an involuntary action, or gain better control over voluntary actions such as muscle contraction, through visual or audio feedback derived from those actions. This study describes the design and development of a biofeedback instrument based on galvanic skin response (GSR) and examines signals recorded with the device.
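The audio side of such a biofeedback loop can be illustrated with a small sketch that maps a GSR reading (skin conductance) to an audible pitch, so that changes in arousal become changes in tone. The conductance range and the frequency mapping below are illustrative assumptions, not values from the study.

```python
# Illustrative GSR-to-audio mapping: clamp a skin-conductance reading to an
# assumed working range, then map it linearly onto a tone frequency.

def gsr_to_pitch(conductance_us: float,
                 low_us: float = 1.0, high_us: float = 20.0,
                 low_hz: float = 220.0, high_hz: float = 880.0) -> float:
    """Map skin conductance (microsiemens) to a feedback tone frequency (Hz).

    Readings are clamped to [low_us, high_us]; both ranges are illustrative.
    """
    c = min(max(conductance_us, low_us), high_us)
    fraction = (c - low_us) / (high_us - low_us)
    return low_hz + fraction * (high_hz - low_hz)

# A relaxed reading near the bottom of the range yields a low tone;
# a highly aroused reading yields a high one, closing the feedback loop
# once the tone is played back to the subject.
```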


PLoS ONE ◽  
2021 ◽  
Vol 16 (9) ◽  
pp. e0256753
Author(s):  
Leonard F. Engels ◽  
Leonardo Cappello ◽  
Anke Fischer ◽  
Christian Cipriani

Dexterous use of the hands depends critically on sensory feedback, so it is generally agreed that functional supplementary feedback would greatly improve the use of hand prostheses. Much research still focuses on improving non-invasive feedback that could potentially become available to all prosthesis users. However, few studies on supplementary tactile feedback for hand prostheses have demonstrated a functional benefit. We suggest that confounding factors impede accurate assessment of feedback, e.g., testing non-amputee participants who inevitably focus intently on learning EMG control, the EMG’s susceptibility to noise and delays, and the limited dexterity of hand prostheses. To assess the effect of feedback free from these constraints, we used silicone digit extensions to suppress natural tactile feedback from the fingertips, using the tactile-feedback-deprived human hand as an approximation of an ideal feed-forward tool. Our non-amputee participants wore the extensions and performed a simple pick-and-lift task with a known weight, followed by a more difficult pick-and-lift task with a changing weight. They then repeated these tasks with one of three kinds of audio feedback. The tests were repeated over three days. We also conducted a similar experiment, without the extensions, on a person with severe sensory neuropathy to test the feedback. Furthermore, we used a questionnaire based on the NASA Task Load Index to gauge the subjective experience. Unexpectedly, we did not find any meaningful differences between the feedback groups in either the objective or the subjective measurements. It is possible that the digit extensions did not fully suppress sensation, but since the participant with impaired sensation also did not improve with the supplementary feedback, we conclude that the feedback failed to provide relevant grasping information in our experiments.
The study highlights the complex interaction between task, feedback variable, feedback delivery, and control, which seemingly rendered even rich, high-bandwidth acoustic feedback redundant, despite substantial sensory impairment.


Author(s):  
Dongjun Yang ◽  
Wongyu Lee ◽  
Jehyeok Oh

Although the use of audio feedback from devices such as metronomes during cardiopulmonary resuscitation (CPR) is a simple method for improving CPR quality, its effect on the quality of pediatric CPR has not been adequately evaluated. In this study, 64 healthcare providers performed CPR with one-handed and two-handed chest compression (OHCC and THCC, respectively) on a pediatric resuscitation manikin (Resusci Junior QCPR), with and without audio feedback from a metronome (110 beats/min). CPR was performed on the floor with a compression-to-ventilation ratio of 30:2. For both OHCC and THCC, the rate of achieving an adequate compression rate was significantly higher with metronome feedback than without it (with vs. without feedback: 100.0% (99.0, 100.0) vs. 94.0% (69.0, 99.0), p < 0.001, for OHCC, and 100.0% (98.5, 100.0) vs. 91.0% (34.5, 98.5), p < 0.001, for THCC). However, the rate of achieving adequate compression depth was significantly higher without metronome feedback than with it (with vs. without feedback: 95.0% (23.5, 99.5) vs. 98.5% (77.5, 100.0), p = 0.004, for OHCC, and 99.0% (95.5, 100.0) vs. 100.0% (99.0, 100.0), p = 0.003, for THCC). Thus, although metronome feedback during pediatric CPR can increase the rate of achieving an adequate compression rate, it may decrease compression depth.
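The timing arithmetic behind the metronome cue is simple: at 110 beats/min, each compression cue falls 60/110 ≈ 0.545 s after the previous one. The helper functions below are an illustrative sketch of that scheduling, not the study's device.

```python
# Illustrative 110-bpm metronome scheduling, as used for the audio cue above.

BEATS_PER_MIN = 110                  # compression rate cued in the study
INTERVAL_S = 60.0 / BEATS_PER_MIN    # ~0.545 s between clicks

def click_times(n_clicks: int) -> list:
    """Timestamps (seconds) of the first n metronome clicks at 110 bpm."""
    return [i * INTERVAL_S for i in range(n_clicks)]

def compressions_per_minute(times: list) -> float:
    """Effective rate implied by evenly spaced clicks (sanity check)."""
    if len(times) < 2:
        return 0.0
    return 60.0 / (times[1] - times[0])
```

Note that the cue constrains only the compression *rate*; as the study found, depth is uninstrumented by a plain metronome and can silently degrade while providers follow the clicks.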

