Microsound and Macrocosm

Author(s):  
Armin Schäfer ◽  
Julia Kursell

This chapter investigates concepts of space in French composer Gérard Grisey’s music. From the 1970s onward, he used sound spectrograms, introducing the compositional technique of “spectralism,” which can be rooted in Arnold Schoenberg’s concept of Klangfarbe. The cycle Les Espaces acoustiques (1974–1985) uses this technique to create a sequence of musical forms that grow from the acoustic seed of a single tone. The cycle can be traced back to a new role for acoustic space, which emerged in early atonal composition. Grisey confronts the natural order of acoustic space with the human order of producing and perceiving sounds. The dis-symmetry between these two orders of magnitude is further explored in Grisey’s Le Noir de l’Étoile (1990) for six percussionists, magnetic tape, and real-time astrophysical signals. This piece unfolds a triadic constellation of spatial orders where human perception and performance are staged between musical micro-space and cosmic macro-space.

2012 ◽  
Author(s):  
R. A. Grier ◽  
H. Thiruvengada ◽  
S. R. Ellis ◽  
P. Havig ◽  
K. S. Hale ◽  
...  

Author(s):  
Richard Stone ◽  
Minglu Wang ◽  
Thomas Schnieders ◽  
Esraa Abdelall

Human-robotic interaction systems are increasingly being integrated into industrial, commercial, and emergency service agencies. It is critical that human operators understand and trust automation when these systems support and even make important decisions. The following study focused on a human-in-the-loop telerobotic system performing a reconnaissance operation. Twenty-four subjects were divided into groups based on level of automation (Low-Level Automation (LLA) and High-Level Automation (HLA)). Results indicated a significant difference in hit rate between the low and high levels of automation when a permanent error occurred. In the LLA group, the type of error had a significant effect on the hit rate. In general, the high level of automation performed better than the low level, especially when it was more reliable, suggesting that subjects in the HLA group could rely on the automatic implementation to perform the task more effectively and more accurately.


Author(s):  
Afef Hfaiedh ◽  
Ahmed Chemori ◽  
Afef Abdelkrim

In this paper, the control problem of a class I of underactuated mechanical systems (UMSs) is addressed. The considered class includes nonlinear UMSs with two degrees of freedom and one control input. Firstly, we propose the design of a robust integral of the sign of the error (RISE) control law, adequate for this special class. Based on a change of coordinates, the dynamics is transformed into a strict-feedback (SF) form. A Lyapunov-based technique is then employed to prove the asymptotic stability of the resulting closed-loop system. Numerical simulation results show the robustness and performance of the original RISE controller with respect to parametric uncertainties and disturbance rejection. A comparative study with a conventional sliding mode controller reveals a significant robustness improvement with the proposed original RISE controller. However, in real-time experiments, the amplification of measurement noise is a major problem: it affects the behaviour of the motor and reduces the performance of the system. To deal with this issue, we propose to estimate the velocity using the robust Levant differentiator instead of the numerical derivative. Real-time experiments were performed on the testbed of the inertia wheel inverted pendulum to demonstrate the relevance of the proposed observer-based RISE control scheme. The obtained real-time experimental results and evaluation indices clearly show a better performance of the proposed observer-based RISE approach compared to the sliding mode and the original RISE controllers.
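The abstract's key practical fix is replacing the numerical derivative with Levant's robust exact differentiator, a sliding-mode observer that estimates a signal's derivative while rejecting small measurement noise. The following is a minimal first-order sketch of that differentiator (our own illustration with Levant's standard gain tuning, not the authors' implementation; the gain bound `L` on the second derivative and the test signal are assumptions):

```python
import math

def sign(x):
    return (x > 0) - (x < 0)

def levant_differentiator(samples, dt, L=1.0):
    """First-order Levant (super-twisting) differentiator.

    samples: sampled signal f(t); dt: sampling step;
    L: assumed bound on |f''(t)|. Gains follow Levant's
    standard tuning lam0 = 1.5*sqrt(L), lam1 = 1.1*L.
    Returns the derivative estimate at each sample.
    """
    lam0 = 1.5 * math.sqrt(L)
    lam1 = 1.1 * L
    z0, z1 = samples[0], 0.0     # signal estimate, derivative estimate
    derivs = []
    for f in samples:
        e = z0 - f               # estimation error
        v0 = z1 - lam0 * math.sqrt(abs(e)) * sign(e)
        z0 += v0 * dt            # explicit Euler integration
        z1 += -lam1 * sign(e) * dt
        derivs.append(z1)
    return derivs

# Estimate d/dt sin(t) = cos(t) from noiseless samples
dt = 1e-3
t = [k * dt for k in range(10000)]
estimate = levant_differentiator([math.sin(x) for x in t], dt)
```

After a finite-time transient the derivative estimate tracks cos(t); unlike a plain finite-difference derivative, the same scheme degrades gracefully when bounded noise is added to the samples, which is the property the paper exploits for velocity estimation.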


Author(s):  
Gaurav Chaurasia ◽  
Arthur Nieuwoudt ◽  
Alexandru-Eugen Ichim ◽  
Richard Szeliski ◽  
Alexander Sorkine-Hornung

We present an end-to-end system for real-time environment capture, 3D reconstruction, and stereoscopic view synthesis on a mobile VR headset. Our solution allows the user to use the cameras on their VR headset as their eyes to see and interact with the real world while still wearing their headset, a feature often referred to as Passthrough. The central challenge when building such a system is the choice and implementation of algorithms under the strict compute, power, and performance constraints imposed by the target user experience and mobile platform. A key contribution of this paper is a complete description of a corresponding system that performs temporally stable passthrough rendering at 72 Hz with only 200 mW power consumption on a mobile Snapdragon 835 platform. Our algorithmic contributions for enabling this performance include the computation of a coarse 3D scene proxy on the embedded video encoding hardware, followed by a depth densification and filtering step, and finally stereoscopic texturing and spatio-temporal up-sampling. We provide a detailed discussion and evaluation of the challenges we encountered, as well as algorithm and performance trade-offs in terms of compute and resulting passthrough quality.

The described system is available to users as the Passthrough+ feature on Oculus Quest. We believe that by publishing the underlying system and methods, we provide valuable insights to the community on how to design and implement real-time environment sensing and rendering on heavily resource constrained hardware.


Author(s):  
Yugo Hayashi

Research on collaborative learning has revealed that peer-collaboration explanation activities facilitate reflection and metacognition and that establishing common ground and successful coordination are keys to realizing effective knowledge-sharing in collaborative learning tasks. Studies on computer-supported collaborative learning have investigated how awareness tools can facilitate coordination within a group and how the use of external facilitation scripts can elicit elaborated knowledge during collaboration. However, the separate and joint effects of these tools on the nature of the collaborative process and on performance have rarely been investigated. This study investigates how two facilitation methods—coordination support via learner gaze-awareness feedback and metacognitive suggestion provision via a pedagogical conversational agent (PCA)—can enhance the learning process and learning gains. Eighty participants, organized into dyads, were enrolled in a 2 × 2 between-subject study. The first and second factors were the presence of real-time gaze feedback (no vs. visible gaze) and that of a suggestion-providing PCA (no vs. visible agent), respectively. Two evaluation methods were used: dialog analysis of the collaborative process and evaluation of learning gains. The real-time gaze feedback and PCA suggestions facilitated the coordination process, while gaze was relatively more effective in improving the learning gains. Learners in the gaze-feedback condition achieved superior learning gains upon receiving PCA suggestions. A correlation between successful coordination and high learning performance was noted solely for learners receiving visible gaze feedback and PCA suggestions simultaneously (visible gaze/visible agent). This finding has the potential to yield improved collaborative processes and learning gains through the integration of these two methods, as well as to contribute toward design principles for collaborative-learning support systems more generally.


2020 ◽  
Vol 32 ◽  
pp. 03054
Author(s):  
Akshata Parab ◽  
Rashmi Nagare ◽  
Omkar Kolambekar ◽  
Parag Patil

Vision is one of the most essential human senses, and it plays a major role in human perception of the surrounding environment. But for people with visual impairment, their definition of vision is different. Visually impaired people are often unaware of dangers in front of them, even in familiar environments. This study proposes a real-time guiding system for visually impaired people to solve their navigation problem and let them travel without difficulty. The system helps visually impaired people by detecting objects and giving necessary information about each object. This information may include what the object is, its location, the detection confidence, its distance from the visually impaired person, etc. All this information is conveyed to the person through audio commands so that they can navigate freely anywhere, anytime, with no or minimal assistance. Object detection is done using the You Only Look Once (YOLO) algorithm. As the process of capturing the video/images and sending it to the main module has to be carried out at high speed, a Graphics Processing Unit (GPU) is used. This enhances the overall speed of the system and helps the visually impaired get the necessary instructions as quickly as possible. The process starts with capturing the real-time video, sending it for analysis and processing, and obtaining the calculated results. The results obtained from the analysis are conveyed to the user by means of a hearing aid. As a result, with this system, blind or visually impaired people can perceive the surrounding environment and travel freely from source to destination on their own.
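The detect-then-announce loop described above can be sketched schematically. In the sketch below, `detect()` is a stand-in for YOLO inference (the real system would run the model on the GPU), and the distance estimate uses a simple pinhole-camera model with assumed focal length and object heights; all names and constants are our own illustrative choices, not the authors':

```python
FOCAL_PX = 700.0                    # assumed camera focal length, pixels
KNOWN_HEIGHT_M = {"person": 1.7, "car": 1.5}   # rough real-world heights

def detect(frame):
    """Stand-in for YOLO inference. Returns a list of
    (label, confidence, (x, y, width, height)) detections in pixels."""
    return [("person", 0.91, (300, 80, 120, 340))]

def estimate_distance(label, bbox_h):
    """Pinhole-camera estimate: distance = focal * real_height / pixel_height."""
    real_h = KNOWN_HEIGHT_M.get(label)
    if real_h is None or bbox_h <= 0:
        return None
    return FOCAL_PX * real_h / bbox_h

def announce(frame, frame_width=640):
    """Turn one frame's detections into spoken-style commands.
    In the real system these strings feed a text-to-speech engine."""
    commands = []
    for label, conf, (x, y, w, h) in detect(frame):
        side = "left" if x + w / 2 < frame_width / 2 else "right"
        dist = estimate_distance(label, h)
        if dist is not None:
            commands.append(f"{label} {dist:.1f} meters ahead on your {side}")
    return commands

commands = announce(frame=None)
```

For the stand-in detection above, the loop yields "person 3.5 meters ahead on your right"; the per-class height table is the weak point of such a monocular distance estimate, which is why stereo or depth sensors are often preferred when available.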


2013 ◽  
Vol 73 (6) ◽  
pp. 851-865 ◽  
Author(s):  
Anne Benoit ◽  
Fanny Dufossé ◽  
Alain Girault ◽  
Yves Robert

2015 ◽  
Vol 738-739 ◽  
pp. 1105-1110 ◽  
Author(s):  
Yuan Qing Qin ◽  
Ying Jie Cheng ◽  
Chun Jie Zhou

This paper surveys the state of the art in real-time communication in industrial wireless local area networks (WLANs) and identifies suitable approaches to meeting real-time requirements in the future. Firstly, the paper summarizes the features of industrial WLANs and the challenges they face. Then, for the real-time problems of industrial WLANs, the fundamental mechanism of each recent representative solution is analyzed in detail, and the characteristics and performance of these solutions are compared. Finally, the paper summarizes the current state of the research and discusses the future development of industrial WLANs.

