Adaptive H∞-control for robust visual feedback systems

1999, Vol 32 (2), pp. 2298-2303
Author(s): Akira Maruyama, Masayuki Fujita
2002, Vol 12 (1), pp. 25-33
Author(s): K.J. Chen, E.A. Keshner, B.W. Peterson, T.C. Hain

Control of the head involves somatosensory, vestibular, and visual feedback. The dynamics of these three feedback systems must be identified in order to gain a greater understanding of the head control system. We have completed one step in the development of a head control model by identifying the dynamics of the visual feedback system. A mathematical model of human head tracking of visual targets in the horizontal plane was fit to experimental data from seven subjects performing a visual head tracking task. The model incorporates components based on the underlying physiology of the head control system. Using optimization methods, we were able to identify neural processing delay, visual control gain, and neck viscosity parameters in each experimental subject.
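As an illustration of the parameter-identification step described above, the following minimal Python sketch fits a neural processing delay, visual control gain, and neck viscosity to synthetic head-tracking data. The plant form, the SciPy-based least-squares fit, and every numerical value are assumptions made for demonstration, not the authors' model.

```python
"""Illustrative sketch only (not the authors' model): identifying a neural
processing delay tau, visual control gain K, and neck viscosity B from head
tracking data by nonlinear least squares. The plant form
J*theta'' + B*theta' = K*(target(t - tau) - theta(t - tau)) and all numbers
are assumptions made for this demonstration."""
import numpy as np
from scipy.optimize import least_squares

J = 0.02                                   # assumed head inertia (kg*m^2), held fixed
DT = 0.005                                 # integration step (s)
T = np.arange(0.0, 20.0, DT)
# Sum-of-sines visual target in the horizontal plane (richer excitation).
TARGET = (np.deg2rad(10) * np.sin(2 * np.pi * 0.1 * T)
          + np.deg2rad(5) * np.sin(2 * np.pi * 0.3 * T))

def simulate(params):
    """Euler-integrate the delayed visual-error feedback model."""
    K, B, tau = params
    lag = max(1, int(round(tau / DT)))     # delay expressed in samples
    theta = np.zeros_like(T)
    omega = 0.0
    for i in range(1, len(T)):
        j = max(0, i - 1 - lag)            # index of the delayed signals
        error = TARGET[j] - theta[j]       # visual error seen tau seconds ago
        omega += (K * error - B * omega) / J * DT
        omega = float(np.clip(omega, -50.0, 50.0))  # crude guard against unstable trial parameters
        theta[i] = theta[i - 1] + omega * DT
    return theta

# Synthetic "measured" head angle from known parameters plus sensor noise.
TRUE = (0.5, 0.3, 0.15)                    # K (N*m/rad), B (N*m*s/rad), tau (s)
rng = np.random.default_rng(0)
measured = simulate(TRUE) + np.deg2rad(0.2) * rng.standard_normal(len(T))

def residuals(params):
    return simulate(params) - measured

# diff_step is coarse because the delay is quantized to whole DT samples.
fit = least_squares(residuals, x0=(0.3, 0.2, 0.10), diff_step=0.05,
                    bounds=([0.05, 0.05, 0.0], [2.0, 1.0, 0.4]))
print("estimated K, B, tau:", fit.x, " true:", TRUE)
```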


Author(s): Garyth Nair, David M. Howard, Graham F. Welch

Modern personal computers are fast enough to analyze singing and provide real-time visual feedback of relevant acoustic elements. This feedback provides a quantitative dimension to the learning process in support of developing appropriate sung outputs. However, no computer-based system can replace the singing teacher, as the qualitative listening of an experienced musician cannot be replicated by a computer algorithm. The application of real-time visual displays can facilitate greater efficiency in learning fundamental skills through direct feedback in lessons and during private practice, leaving the teacher more time to work on qualitative aspects of performance that a computer cannot contribute to, such as stagecraft, interpretation, understanding the words, collaborating with an accompanist, and deciding when to use different voice qualities. This chapter describes typical displays that are used in real-time visual feedback systems for singing training and considers how spectrography in particular can be used in pedagogical practice in the voice studio.
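As an illustration of the spectrographic displays the chapter discusses, the following minimal Python sketch computes and plots a spectrogram of a synthetic sung vowel (a 220 Hz fundamental with harmonics and 5.5 Hz vibrato). The synthetic signal, the SciPy/Matplotlib tooling, and the analysis parameters are assumptions standing in for live microphone input; a real-time system would stream successive microphone blocks through the same short-time Fourier analysis.

```python
"""Illustrative sketch of a spectrographic display of the kind used for
singing feedback. A synthetic sung vowel (220 Hz fundamental, six harmonics,
5.5 Hz vibrato) stands in for live microphone input."""
import numpy as np
from scipy.signal import spectrogram
import matplotlib.pyplot as plt

FS = 16_000                                    # sample rate (Hz)
t = np.arange(0, 3.0, 1.0 / FS)                # 3 s of "singing"

# Vibrato: fundamental frequency oscillating +/- 3 Hz around 220 Hz.
f0 = 220.0 + 3.0 * np.sin(2 * np.pi * 5.5 * t)
phase = 2 * np.pi * np.cumsum(f0) / FS

# Sum a few harmonics with decreasing amplitude to mimic a vowel spectrum.
signal = sum((0.6 ** k) * np.sin((k + 1) * phase) for k in range(6))

# Short-time Fourier analysis: ~32 ms windows with 75% overlap.
freqs, times, sxx = spectrogram(signal, fs=FS, nperseg=512, noverlap=384)

plt.pcolormesh(times, freqs, 10 * np.log10(sxx + 1e-12), shading="auto")
plt.ylim(0, 3000)                              # singing-relevant band
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.title("Spectrogram of synthetic sung vowel (vibrato visible as harmonic ripple)")
plt.colorbar(label="Level (dB)")
plt.show()
```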


Critical Care, 2017, Vol 21 (1)
Author(s): Andrea Cortegiani, Vincenzo Russotto, Enrico Baldi, Enrico Contri, Santi Maurizio Raineri, ...

Author(s): R. Thandeeswaran, Rajat Pawar, Mallika Rai

The automotive industry has reached a stage where categorisation of the degree of automation has become crucial. According to the levels of automation defined by SAE, the industry is already past the first four, and development is now heavily concentrated on level 5, that is, driving fully independent of human control. This requires an array of sensors, microcontrollers, and visual feedback systems such as cameras and LiDAR (Light Detection and Ranging) to be present in the vehicle. Security concerns are omnipresent among these devices; now that they are ported to the realm of vehicles, those concerns must be tackled so that unsafe driving conditions are never experienced. In this paper, Section 3 elaborates on the technologies that have shaped autonomous cars into the form known today, and Section 4 explains the network architecture and network security among these cars. Section 5 describes the rippling effect of this evolution in the automotive industry on other supporting industries, Section 6 discusses the challenges posed to the development of AVs, and finally, Section 7 discusses the future of autonomous vehicles in India.
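For reference, the SAE J3016 levels of driving automation mentioned above can be summarised in a short Python sketch. The level names follow SAE J3016; the one-line summaries and the helper function are illustrative paraphrases, not part of the paper.

```python
"""Quick reference for the SAE J3016 driving-automation levels mentioned in
the abstract, expressed as a Python enum. Summaries are paraphrased for
illustration."""
from enum import IntEnum

class SAELevel(IntEnum):
    NO_AUTOMATION = 0           # human performs all driving tasks
    DRIVER_ASSISTANCE = 1       # steering OR speed support (e.g. adaptive cruise)
    PARTIAL_AUTOMATION = 2      # steering AND speed support, driver supervises
    CONDITIONAL_AUTOMATION = 3  # system drives, driver must take over on request
    HIGH_AUTOMATION = 4         # no takeover needed within a limited operating domain
    FULL_AUTOMATION = 5         # drives everywhere, no human control required

def requires_human_fallback(level: SAELevel) -> bool:
    """At levels 0-3 a human driver is still the fallback; at 4-5 the system is."""
    return level <= SAELevel.CONDITIONAL_AUTOMATION

if __name__ == "__main__":
    for lvl in SAELevel:
        print(f"Level {lvl.value}: {lvl.name.replace('_', ' ').title():<24}"
              f" human fallback: {requires_human_fallback(lvl)}")
```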

