Head movements quadruple the range of speeds encoded by the insect motion vision system in hawkmoths

2017 ◽  
Vol 284 (1864) ◽  
pp. 20171622 ◽  
Author(s):  
Shane P. Windsor ◽  
Graham K. Taylor

Flying insects use compensatory head movements to stabilize gaze. Like other optokinetic responses, these movements can reduce image displacement, motion and misalignment, and simplify the optic flow field. Because gaze is imperfectly stabilized in insects, we hypothesized that compensatory head movements serve to extend the range of velocities of self-motion that the visual system encodes. We tested this by measuring head movements in hawkmoths Hyles lineata responding to full-field visual stimuli of differing oscillation amplitudes, oscillation frequencies and spatial frequencies. We used frequency-domain system identification techniques to characterize the head's roll response, and simulated how this would have affected the output of the motion vision system, modelled as a computational array of Reichardt detectors. The moths' head movements were modulated to allow encoding of both fast and slow self-motion, effectively quadrupling the working range of the visual system for flight control. By using its own output to drive compensatory head movements, the motion vision system thereby works as an adaptive sensor, which will be especially beneficial in nocturnal species with inherently slow vision. Studies of the ecology of motion vision must therefore consider the tuning of motion-sensitive interneurons in the context of the closed-loop systems in which they function.
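The hawkmoth study models the motion vision system as a computational array of Reichardt detectors. A minimal correlation-type detector can be sketched as follows; a pure time delay stands in for the model's low-pass filter, and the stimulus parameters are arbitrary choices for this illustration, not values from the paper:

```python
import numpy as np

def reichardt_output(left, right, delay):
    """Correlation-type elementary motion detector (Reichardt model).

    left, right: luminance time series from two neighbouring photoreceptors.
    delay: delay in samples (a simple shift stands in for the first-order
    low-pass filter of the full model).
    """
    # Delay each input by shifting; pad with the first sample.
    dl = np.concatenate([np.full(delay, left[0]), left[:-delay]])
    dr = np.concatenate([np.full(delay, right[0]), right[:-delay]])
    # Opponent subtraction of the two mirror-symmetric correlators.
    return dl * right - dr * left

# A sinusoidal grating drifting from the left receptor towards the right one:
t = np.arange(500)
phase = 2 * np.pi * 0.02 * t
left = np.sin(phase)
right = np.sin(phase - 0.5)   # right receptor sees the pattern later
resp = reichardt_output(left, right, delay=10)
print(resp[50:].mean())       # mean > 0: preferred-direction motion
```

Reversing the motion direction (swapping the two inputs) flips the sign of the mean response, which is the direction selectivity the model array relies on.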

2017 ◽  
Vol 7 (1) ◽  
pp. 20160093 ◽  
Author(s):  
Ivo G. Ros ◽  
Partha S. Bhagavatula ◽  
Huai-Ti Lin ◽  
Andrew A. Biewener

Flying animals must successfully contend with obstacles in their natural environments. Inspired by the robust manoeuvring abilities of flying animals, unmanned aerial systems are being developed and tested to improve flight control through cluttered environments. We previously examined steering strategies that pigeons adopt to fly through an array of vertical obstacles (VOs). Modelling VO flight guidance revealed that pigeons steer towards larger visual gaps when making fast steering decisions. In the present experiments, we recorded three-dimensional flight kinematics of pigeons as they flew through randomized arrays of horizontal obstacles (HOs). We found that pigeons still decelerated upon approach but flew faster through a denser array of HOs compared with the VO array previously tested. Pigeons exhibited limited steering and chose gaps between obstacles most aligned to their immediate flight direction, in contrast to VO navigation that favoured widest gap steering. In addition, pigeons navigated past the HOs with more variable and decreased wing stroke span and adjusted their wing stroke plane to reduce contact with the obstacles. Variability in wing extension, stroke plane and wing stroke path was greater during HO flight. Pigeons also exhibited pronounced head movements when negotiating HOs, which potentially serve a visual function. These head-bobbing-like movements were most pronounced in the horizontal (flight direction) and vertical directions, consistent with engaging motion vision mechanisms for obstacle detection. These results show that pigeons exhibit a keen kinesthetic sense of their body and wings in relation to obstacles. Together with aerodynamic flapping flight mechanics that favours vertical manoeuvring, pigeons are able to navigate HOs using simple rules, with remarkable success.


Perception ◽  
1987 ◽  
Vol 16 (3) ◽  
pp. 299-308 ◽  
Author(s):  
Alexander H Wertheim

During a pursuit eye movement made in darkness across a small stationary stimulus, the stimulus is perceived as moving in the opposite direction to the eyes. This so-called Filehne illusion is usually explained by assuming that during pursuit eye movements the extraretinal signal (which informs the visual system about eye velocity so that retinal image motion can be interpreted) falls short. A study is reported in which the concept of an extraretinal signal is replaced by the concept of a reference signal, which serves to inform the visual system about the velocity of the retinae in space. Reference signals are evoked in response to eye movements, but also in response to any stimulation that may yield a sensation of self-motion, because during self-motion the retinae also move in space. Optokinetic stimulation should therefore affect reference signal size. To test this prediction the Filehne illusion was investigated with stimuli of different optokinetic potentials. As predicted, with briefly presented stimuli (no optokinetic potential) the usual illusion always occurred. With longer stimulus presentation times the magnitude of the illusion was reduced when the spatial frequency of the stimulus was reduced (increased optokinetic potential). At very low spatial frequencies (strongest optokinetic potential) the illusion was inverted. The significance of the conclusion, that reference signal size increases with increasing optokinetic stimulus potential, is discussed. It appears to explain many visual illusions, such as the movement aftereffect and center–surround induced motion, and it may bridge the gap between direct Gibsonian and indirect inferential theories of motion perception.


Author(s):  
Д.А. Смирнов ◽  
В.Г. Бондарев ◽  
А.В. Николенко

The article briefly reviews both domestic (Russian) and foreign inter-aircraft navigation systems. The analysis identifies the shortcomings of existing inter-aircraft navigation systems and presents a current approach to improving navigation accuracy through the use of a machine vision system. To determine the position of the lead aircraft, a machine vision system is proposed as the measuring complex; it can solve a wide range of tasks at various stages of flight, formation flight in particular. The machine vision system is to be installed on the follower aircraft to measure all the parameters needed for automatic flight control of the aircraft. Images of the lead aircraft are processed to determine the coordinates of three identical points on the photosensitive matrices, with optically contrasting elements of the airframe (for example, the wing tips or the tail unit) chosen as these points. To simplify image processing, semiconductor light sources in the infrared range can be used (for example, with a wavelength of λ = 1.54 µm), which allows operation even in difficult weather conditions. This approach can be extended to automated formation flight of more than two aircraft; it is only necessary to equip every follower aircraft in the group with a machine vision system.
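The abstract does not specify the measurement geometry, but one standard way to recover a marker's 3-D position from its pixel coordinates on two photosensitive matrices is rectified stereo triangulation. The sketch below is illustrative only: the focal length, baseline, and marker placement are invented numbers, not parameters of the system described in the article.

```python
import numpy as np

def triangulate(x_left, y_left, x_right, focal_px, baseline_m):
    """Position of one marker from a rectified stereo pair (two parallel
    pinhole cameras).  Pixel coordinates are relative to each image centre.
    All camera parameters here are assumptions for the sketch."""
    disparity = x_left - x_right            # pixels
    z = focal_px * baseline_m / disparity   # metres, along the optical axis
    x = z * x_left / focal_px
    y = z * y_left / focal_px
    return np.array([x, y, z])

# A wing-tip marker 120 m ahead, 5 m right, 2 m up, seen with an assumed
# 2000-px focal length and a 1.5 m stereo baseline:
f, b = 2000.0, 1.5
X = np.array([5.0, 2.0, 120.0])
xl, yl = f * X[0] / X[2], f * X[1] / X[2]   # projection into the left camera
xr = f * (X[0] - b) / X[2]                  # projection into the right camera
print(triangulate(xl, yl, xr, f, b))
```

With three such markers recovered per frame, the lead aircraft's full relative pose (position and attitude) is determined, which is what the formation-flight control law would consume.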


2020 ◽  
Author(s):  
Vasily Matkivsky ◽  
Alexander Moiseev ◽  
Pavel Shilyagin ◽  
Alexander Rodionov ◽  
Hendrik Spahr ◽  
...  

A method for numerical estimation and correction of aberrations of the eye in fundus imaging with optical coherence tomography (OCT) is presented. Aberrations are determined statistically using an estimate based on likelihood-function maximization. The method can be considered an extension of the phase gradient autofocus algorithm from synthetic aperture radar imaging to the correction of 2D optical aberrations. The efficiency of the proposed method has been demonstrated in OCT fundus imaging with aberrations of up to 6λ; after correction, single photoreceptors were resolved. It is also shown that wavefront distortions with high spatial frequencies can be determined and corrected.
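The phase gradient autofocus (PGA) connection can be illustrated with a minimal 1-D iteration on synthetic point scatterers. The signal model and all numbers below are assumptions of this sketch, not the paper's 2-D formulation: each column holds one isolated scatterer seen through a common phase error, the brightest peak per column is circularly shifted to the origin, and the phase gradient is estimated from the aperture-domain data.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 128, 16                      # aperture samples, isolated scatterers
n = np.arange(N)
phase_err = 4.0 * np.cos(2 * np.pi * 2.5 * n / N)   # smooth wavefront error (rad)

# One point target per range bin, observed through the common phase error.
freqs = rng.integers(10, N - 10, K)
aperture = (np.exp(2j * np.pi * freqs[None, :] * n[:, None] / N)
            * np.exp(1j * phase_err)[:, None])
blurred = np.fft.fft(aperture, axis=0)

# --- one phase gradient autofocus iteration --------------------------------
centred = np.empty_like(blurred)
for k in range(K):                  # shift each column's peak to bin 0
    centred[:, k] = np.roll(blurred[:, k], -np.argmax(np.abs(blurred[:, k])))
g = np.fft.ifft(centred, axis=0)    # back to the aperture domain
grad = np.angle(np.sum(np.conj(g[:-1]) * g[1:], axis=1))
phi = np.concatenate([[0.0], np.cumsum(grad)])      # integrate the gradient

focused = np.fft.fft(aperture * np.exp(-1j * phi)[:, None], axis=0)
print(np.abs(focused).max() / np.abs(blurred).max())  # sharpness gain > 1
```

The paper's contribution replaces this heuristic peak-shifting estimate with a maximum-likelihood formulation and lifts it to 2-D pupil aberrations, but the core idea of estimating the phase error from the data itself is the same.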


2020 ◽  
Author(s):  
Maria-Bianca Leonte ◽  
Aljoscha Leonhardt ◽  
Alexander Borst ◽  
Alex S. Mauss

Visual motion detection is among the best-understood neuronal computations. One assumed behavioural role is to detect self-motion and to counteract involuntary course deviations, as extensively investigated in tethered walking or flying flies. In free flight, however, any deviation from a straight course is signalled both by the visual system and by proprioceptive mechanoreceptors called 'halteres', the equivalent of the vestibular system in vertebrates. It is therefore unclear to what extent motion vision contributes to course control, or whether straight flight is controlled entirely by proprioceptive feedback from the halteres. To answer these questions, we genetically rendered flies motion-blind by blocking their primary motion-sensitive neurons and quantified their free-flight performance. We found that such flies have difficulties maintaining a straight flight trajectory, much like control flies in the dark. By unilateral wing clipping, we generated an asymmetry in propulsive force and tested the ability of flies to compensate for this perturbation. While wild-type flies showed a remarkable level of compensation, motion-blind animals exhibited pronounced circling behaviour. Our results therefore unequivocally demonstrate that motion vision is necessary to fly straight under realistic conditions.


Perception ◽  
1998 ◽  
Vol 27 (8) ◽  
pp. 937-949 ◽  
Author(s):  
Takanao Yajima ◽  
Hiroyasu Ujike ◽  
Keiji Uchikawa

The two main questions addressed in this study were (a) what effect does yoking the relative expansion and contraction (EC) of retinal images to forward and backward head movements have on the resultant magnitude and stability of perceived depth, and (b) how does this relative EC image motion interact with the depth cues of motion parallax? Relative EC image motion was produced by moving a small CCD camera toward and away from the stimulus, two random-dot surfaces separated in depth, in synchrony with the observers' forward and backward head movements. Observers viewed the stimuli monocularly, on a helmet-mounted display, while moving their heads at various velocities, including zero velocity. The results showed that (a) the magnitude of perceived depth was smaller with smaller head velocities (<10 cm s⁻¹), including the zero-head-velocity condition, than with a larger velocity (10 cm s⁻¹), and (b) perceived depth, when motion parallax and the EC image motion cues were simultaneously presented, is equal to the greater of the two possible perceived depths produced from either of these two cues alone. The results suggested the role of nonvisual information of self-motion on perceiving depth.


Author(s):  
Chauncey F. Graetzel ◽  
Steven N. Fry ◽  
Felix Beyeler ◽  
Yu Sun ◽  
Bradley J. Nelson

2004 ◽  
Vol 91 (1) ◽  
pp. 1-12 ◽  
Author(s):  
Thomas Matheson ◽  
Stephen M. Rogers ◽  
Holger G. Krapp

We demonstrate pronounced differences in the visual system of a polyphenic locust species that can change reversibly between two forms (phases), which vary in morphology and behavior. At low population densities, individuals of Schistocerca gregaria develop into the solitarious phase, are cryptic, and tend to avoid other locusts. At high densities, individuals develop instead into the swarm-forming gregarious phase. We analyzed in both phases the responses of an identified visual interneuron, the descending contralateral movement detector (DCMD), which responds to approaching objects. We demonstrate that habituation of DCMD is fivefold stronger in solitarious locusts. In both phases, the mean time of peak firing relative to the time to collision nevertheless occurs with a similar characteristic delay after an approaching object reaches a particular angular extent on the retina. Variation in the time of peak firing is greater in solitarious locusts, which have lower firing rates. Threshold angle and delay are therefore conserved despite changes in habituation or behavioral phase state. The different rates of habituation should contribute to different predator escape strategies or flight control for locusts living either in a swarm or as isolated individuals. For example, increased variability in the habituated responses of solitarious locusts should render their escape behaviors less predictable. Relative resistance to habituation in gregarious locusts should permit the continued responsiveness required to avoid colliding with other locusts in a swarm. These results will permit us to analyze neuronal plasticity in a model system with a well-defined and controllable behavioral context.


Author(s):  
Xiangyang Xu ◽  
Qiao Chen ◽  
Ruixin Xu

Like auditory perception, color perception in the human visual system exhibits a multi-frequency-channel property. To study the multi-channel mechanism by which the human visual system processes color information, the paper proposes a psychophysical experiment measuring contrast sensitivities for 17 color samples at 16 spatial frequencies in the CIELAB opponent color space. Correlation analysis of the psychophysical data shows clear linear correlations across observations at different spatial frequencies and across observers, indicating that a linear model can describe how the human visual system processes spatial-frequency information. Solving the model with the color-sample data shows that nine spatial-frequency tuning curves exist in the human visual system for each of the lightness, R–G and Y–B color channels, and that each channel can be represented by three tuning curves, reflecting the "center-surround" form of the human visual receptive field. It is concluded that there are nine spatial-frequency channels in the human visual system. The low-frequency tuning curve, with a narrow bandwidth, shows the characteristics of a lower-level receptive field; the medium-frequency tuning curve shows a low-pass response to changes in medium-frequency colors; and the high-frequency tuning curve, with a wide bandwidth, feeds back on the low- and medium-frequency channels, shows the characteristics of a higher-level receptive field, and represents the discrimination of details.
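The linear-model analysis can be illustrated as follows: if contrast sensitivity at each spatial frequency is a linear combination of a small bank of channel tuning curves, the channel weights fall out of an ordinary least-squares fit. Every numeric choice below (Gaussian channel shapes, centers, bandwidth, synthetic data) is an assumption of this sketch, not a fitted value from the study.

```python
import numpy as np

freqs = np.geomspace(0.25, 32, 16)    # 16 spatial frequencies (cycles/degree)
centres = np.geomspace(0.5, 16, 9)    # 9 assumed channel centre frequencies
bandwidth = 0.35                      # assumed channel width on a log axis

# Channel basis: Gaussian tuning curves on a log-frequency axis.
basis = np.exp(-((np.log(freqs)[:, None] - np.log(centres)[None, :]) ** 2)
               / (2 * bandwidth ** 2))          # shape (16, 9)

# Synthetic "observed" sensitivities for 17 colour samples.
rng = np.random.default_rng(1)
true_w = rng.uniform(0.0, 1.0, (9, 17))
observed = basis @ true_w + rng.normal(0, 0.01, (16, 17))

# Recover per-sample channel weights with the linear model.
w_hat, *_ = np.linalg.lstsq(basis, observed, rcond=None)
print(np.abs(w_hat - true_w).max())   # recovery error of the channel weights
```

The same least-squares machinery, applied to real contrast-sensitivity measurements instead of synthetic data, is what lets the number and shape of the tuning curves be inferred rather than assumed.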


Sensors ◽  
2019 ◽  
Vol 19 (17) ◽  
pp. 3802 ◽  
Author(s):  
Ahmed F. Fadhil ◽  
Raghuveer Kanneganti ◽  
Lalit Gupta ◽  
Henry Eberle ◽  
Ravi Vaidyanathan

Networked operation of unmanned air vehicles (UAVs) demands fusion of information from disparate sources for accurate flight control. In this investigation, a novel sensor fusion architecture for detecting aircraft runways and horizons, as well as enhancing the awareness of surrounding terrain, is introduced based on fusion of enhanced vision system (EVS) and synthetic vision system (SVS) images. EVS and SVS image fusion has yet to be implemented in real-world situations due to signal misalignment. We address this through a registration step to align EVS and SVS images. Four fusion rules combining discrete wavelet transform (DWT) sub-bands are formulated, implemented, and evaluated. The resulting procedure is tested on real EVS-SVS image pairs and pairs containing simulated turbulence. Evaluations reveal that runways and horizons can be detected accurately even in poor visibility. Furthermore, it is demonstrated that different aspects of EVS and SVS images can be emphasized by using different DWT fusion rules. The procedure is autonomous throughout landing, irrespective of weather. The fusion architecture developed in this study holds promise for incorporation into manned heads-up displays (HUDs) and UAV remote displays to assist pilots landing aircraft in poor lighting and varying weather. The algorithm also provides a basis for rule selection in other signal fusion applications.
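The abstract does not spell out the four DWT fusion rules. As an illustration, here is one common rule for wavelet-domain image fusion (average the approximation sub-band, keep the larger-magnitude detail coefficient) implemented with a hand-rolled single-level Haar transform; both the rule choice and the Haar wavelet are assumptions of this sketch, not the paper's rules.

```python
import numpy as np

def haar2d(x):
    """Single-level 2-D Haar DWT: returns (LL, LH, HL, HH) sub-bands."""
    a = (x[0::2] + x[1::2]) / 2.0                   # row low-pass
    d = (x[0::2] - x[1::2]) / 2.0                   # row high-pass
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d (even image dimensions assumed)."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def fuse(evs, svs):
    """Fuse two registered images: average the LL band, take the
    larger-magnitude coefficient in each detail band."""
    e, s = haar2d(evs), haar2d(svs)
    ll = (e[0] + s[0]) / 2.0
    details = [np.where(np.abs(eb) >= np.abs(sb), eb, sb)
               for eb, sb in zip(e[1:], s[1:])]
    return ihaar2d(ll, *details)

img = np.arange(64, dtype=float).reshape(8, 8)
print(np.allclose(fuse(img, img), img))   # True: fusing identical images is lossless
```

Swapping the `np.where` selection for averaging, weighting, or per-band selection yields the kind of rule family the paper evaluates, each emphasizing different aspects of the EVS and SVS inputs.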

