To safely navigate their environment, flying insects rely on visual cues, such as optic flow. Which cues insects can extract from their environment depends closely on the spatial and temporal response properties of their visual system. These in turn can vary between individuals that differ in body size. How optic flow-based flight control depends on the spatial structure of visual cues, and how this relationship scales with body size, has previously been investigated in insects with apposition compound eyes. Here, we characterised the visual flight control response limits and their relationship to body size in an insect with superposition compound eyes: the hummingbird hawkmoth Macroglossum stellatarum. We used the hawkmoths’ centring response in a flight tunnel as a readout for their reception of translational optic flow stimuli of different spatial frequencies. We show that their responses cut off at different spatial frequencies when translational optic flow was presented on either one or both tunnel walls. Combined with differences in flight speed, this suggests that their flight control was primarily limited by their temporal rather than spatial resolution. We also observed strong individual differences in flight performance, but no correlation between the spatial response cutoffs and body or eye size.
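The distinction between a spatial and a temporal limit can be made concrete: for a translating observer, a pattern of spatial frequency f_s seen at speed v generates a temporal frequency f_t = v · f_s, so a fixed temporal-frequency ceiling predicts a spatial cutoff that drops as flight speed rises. A minimal numerical sketch (the numbers are illustrative, not values from the study):

```python
def temporal_frequency(speed_m_s: float, spatial_freq_cyc_m: float) -> float:
    """Temporal frequency (Hz) produced by a grating of the given spatial
    frequency (cycles/m) during translation at the given speed (m/s)."""
    return speed_m_s * spatial_freq_cyc_m

def spatial_cutoff(speed_m_s: float, temporal_limit_hz: float) -> float:
    """Highest resolvable spatial frequency if the visual system is
    limited by a fixed temporal-frequency ceiling."""
    return temporal_limit_hz / speed_m_s

# Flying twice as fast halves the spatial-frequency cutoff.
print(spatial_cutoff(1.0, 100.0))  # 100.0 cycles/m
print(spatial_cutoff(2.0, 100.0))  # 50.0 cycles/m
```

This inverse relationship is why a temporal limit, combined with individual differences in flight speed, can masquerade as a difference in spatial resolution.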
Most algorithms for steering, obstacle avoidance, and moving object detection rely on accurate self-motion estimation, a problem animals solve in real time as they navigate through diverse environments. One biological solution leverages optic flow, the changing pattern of motion experienced on the eye during self-motion. Here I present ARTFLOW, a biologically inspired neural network that learns patterns in optic flow to encode the observer’s self-motion. The network combines the fuzzy ART unsupervised learning algorithm with a hierarchical architecture based on the primate visual system. This design affords fast, local feature learning across parallel modules in each network layer. Simulations show that the network is capable of learning stable patterns from optic flow simulating self-motion through environments of varying complexity with only one epoch of training. ARTFLOW trains substantially faster and yields self-motion estimates that are far more accurate than a comparable network that relies on Hebbian learning. I show how ARTFLOW serves as a generative model to predict the optic flow that corresponds to neural activations distributed across the network.
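The core learning rule the abstract names is fuzzy ART: complement-coded inputs, winner-take-all category choice, and a vigilance test that either refines an existing category or recruits a new one. A minimal sketch of that rule (not the ARTFLOW code itself; parameter values are illustrative):

```python
import numpy as np

class FuzzyART:
    """Minimal fuzzy ART sketch: complement-coded inputs, winner-take-all
    category choice, vigilance-gated learning (fast learning when beta=1)."""

    def __init__(self, rho=0.7, alpha=0.001, beta=1.0):
        self.rho, self.alpha, self.beta = rho, alpha, beta
        self.w = []  # one weight vector per learned category

    def _complement_code(self, x):
        x = np.asarray(x, dtype=float)
        return np.concatenate([x, 1.0 - x])

    def train(self, x):
        i = self._complement_code(x)
        # Rank categories by the choice function T_j = |i ^ w_j| / (alpha + |w_j|),
        # where ^ is the fuzzy AND (element-wise minimum).
        order = sorted(range(len(self.w)),
                       key=lambda j: -np.minimum(i, self.w[j]).sum()
                                      / (self.alpha + self.w[j].sum()))
        for j in order:
            match = np.minimum(i, self.w[j]).sum() / i.sum()
            if match >= self.rho:  # vigilance passed: resonance, update weights
                self.w[j] = (self.beta * np.minimum(i, self.w[j])
                             + (1 - self.beta) * self.w[j])
                return j
        self.w.append(i.copy())  # no category matches: recruit a new one
        return len(self.w) - 1

art = FuzzyART(rho=0.8)
a = art.train([0.1, 0.9])
b = art.train([0.12, 0.88])  # similar input maps to the same category
c = art.train([0.9, 0.1])    # dissimilar input recruits a new category
```

Because categories stabilise after a single resonance when beta=1, this kind of rule is consistent with the one-epoch training the abstract reports.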
Colonoscopy is a widely used procedure for detecting abnormalities in the colon, and early diagnosis can help many patients. The purpose of this paper is to remove or reduce artifacts in order to improve the visual quality of colonoscopy videos and provide better information for physicians. This work complements a series of three previously published papers. In this paper, optic flow is used for motion compensation: a number of consecutive images are registered and their information integrated to create a new image that reveals more than the original one. Colon images were classified into informative and noninformative images by a deep neural network, and the two classes were then treated with different strategies. Informative images were processed using Lucas-Kanade registration with an adaptive temporal mean/median filter, whereas noninformative images were processed using Lucas-Kanade with a derivative of Gaussian (LKDOG) and adaptive temporal median images. Comparison showed that this work achieved better results than state-of-the-art strategies on the same degraded colon image data set: the proposed algorithm reduced the alignment error by a factor of about 0.3, with a 100% successful image alignment ratio. In conclusion, the algorithm outperformed state-of-the-art approaches at enhancing informative images, as shown in the results section, and also helped to reveal information in noninformative images that contain few or no details.
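The two building blocks named above, Lucas-Kanade registration and temporal median filtering, can be sketched in a few lines. This is a single-scale, global-translation toy version under a brightness-constancy assumption, far simpler than the paper's pipeline:

```python
import numpy as np

def lucas_kanade_shift(ref, img):
    """Estimate a global (u, v) translation between two grayscale frames
    via the Lucas-Kanade least-squares solution over the whole image:
    solve [[Ix.Ix, Ix.Iy], [Ix.Iy, Iy.Iy]] @ [u, v] = -[Ix.It, Iy.It]."""
    Ix = np.gradient(ref, axis=1)   # horizontal image gradient
    Iy = np.gradient(ref, axis=0)   # vertical image gradient
    It = img - ref                  # temporal derivative
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)    # (u, v) in pixels

def temporal_median(frames, shifts):
    """Register frames by their (rounded) shifts, then take the per-pixel
    temporal median to suppress transient artifacts."""
    aligned = [np.roll(np.roll(f, -int(round(v)), axis=0),
                       -int(round(u)), axis=1)
               for f, (u, v) in zip(frames, shifts)]
    return np.median(aligned, axis=0)
```

A robust pipeline would add pyramidal (coarse-to-fine) refinement and local windows; the sketch only shows why registration must precede the temporal filter, since a median across misaligned frames would blur structure instead of removing artifacts.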
When animals move through the world, their own movements generate widefield optic flow across their eyes. In insects, such widefield motion is encoded by optic lobe neurons. These lobula plate tangential cells (LPTCs) synapse with optic flow-sensitive descending neurons, which in turn project to areas that control neck, wing and leg movements. As the descending neurons play a role in sensorimotor transformation, it is important to understand their spatio-temporal response properties. Recent work shows that a relatively fast and efficient way to quantify such response properties is to use m-sequences or other white noise techniques. Therefore, here we used m-sequences to quantify the impulse responses of optic flow-sensitive descending neurons in male Eristalis tenax hoverflies. We focused on roll impulse responses as hoverflies perform exquisite head roll stabilizing reflexes, and the descending neurons respond particularly well to roll. We found that the roll impulse responses were fast, peaking after 16.5–18.0 ms. This is similar to the impulse response time to peak (18.3 ms) to widefield horizontal motion recorded in hoverfly LPTCs. We found that the roll impulse response amplitude scaled with the size of the stimulus impulse, and that its shape could be affected by the addition of constant velocity roll or lift. For example, the roll impulse response became faster and stronger with the addition of excitatory stimuli, and vice versa. We also found that the roll impulse response had a long return to baseline, which was significantly and substantially reduced by the addition of either roll or lift.
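The m-sequence technique rests on a simple property: a maximal-length binary sequence has a nearly delta-shaped autocorrelation, so cross-correlating the neural response with the stimulus recovers the first-order impulse response directly. A self-contained sketch with a simulated "neuron" (illustrative parameters, not the hoverfly data):

```python
import numpy as np

def m_sequence(n_bits=5, taps=(5, 3)):
    """Binary m-sequence (+1/-1) of length 2**n_bits - 1 from a Fibonacci
    LFSR; taps (5, 3) correspond to the primitive polynomial x^5 + x^3 + 1."""
    state = [1] * n_bits
    seq = []
    for _ in range(2 ** n_bits - 1):
        seq.append(1.0 if state[-1] else -1.0)
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [fb] + state[:-1]
    return np.array(seq)

def first_order_kernel(stimulus, response, n_lags):
    """Estimate the impulse response by circular cross-correlation; the
    m-sequence's near-delta autocorrelation makes this almost unbiased."""
    N = len(stimulus)
    return np.array([np.dot(response, np.roll(stimulus, lag)) / N
                     for lag in range(n_lags)])

# Simulated experiment: a known decaying kernel filters the m-sequence,
# and cross-correlation recovers it.
stim = m_sequence()
true_kernel = np.exp(-np.arange(8) / 2.0)
resp = sum(h * np.roll(stim, k) for k, h in enumerate(true_kernel))
est = first_order_kernel(stim, resp, n_lags=8)
```

Because the whole kernel is estimated from one continuous stimulus run, this is why the abstract calls white-noise methods a fast and efficient way to quantify response properties.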
Optokinetic responses function to maintain retinal image stabilization by minimizing optic flow that occurs during self-motion. The hovering ability of hummingbirds is an extreme example of this behaviour. Optokinetic responses are mediated by direction-selective neurons with large receptive fields in the accessory optic system (AOS) and pretectum. Recent studies in hummingbirds showed that, compared to other bird species, (i) the pretectal nucleus lentiformis mesencephali (LM) is hypertrophied, (ii) LM has a unique distribution of direction preferences, and (iii) LM neurons are more tightly tuned to stimulus velocity. In this study, we sought to determine if there are concomitant changes in the nucleus of the basal optic root (nBOR) of the AOS. We recorded the visual response properties of nBOR neurons to largefield drifting random dot patterns and sine wave gratings in Anna's hummingbirds and zebra finches and compared these with archival data from pigeons. We found no differences with respect to the distribution of direction preferences: Neurons responsive to upwards, downwards and nasal-to-temporal motion were equally represented in all three species, and neurons responsive to temporal-to-nasal motion were rare or absent (<5%). Compared to zebra finches and pigeons, however, hummingbird nBOR neurons were more tightly tuned to stimulus velocity of random dot stimuli. Moreover, in response to drifting gratings, hummingbird nBOR neurons are more tightly tuned in the spatio-temporal domain. These results, in combination with specialization in LM, support the hypothesis that hummingbirds have evolved to be "optic flow specialists" to cope with the optomotor demands of sustained hovering flight.
To date, numerous studies have demonstrated the fundamental role played by optic flow in the control of goal-directed displacement tasks in insects. Optic flow was first introduced by Gibson as part of his ecological approach to perception and action. While this theoretical approach (as a whole) has been demonstrated to be particularly suitable for the study of goal-directed displacements in humans, its usefulness in carrying out entomological field studies remains to be established. In this review, we aim to demonstrate that the ecological approach to perception and action could be relevant to the entomological community's future investigations. This approach could provide a conceptual and methodological framework for the community in order to: (i) take a critical look at the research carried out to date, (ii) develop rigorous and innovative experimental protocols, and (iii) define scientific issues that push the boundaries of the current scientific field. After a concise literature review of the perceptual control of displacement in insects, we present the framework proposed by Gibson and suggest its added value for research in the behavioral ecology of insects.
To accurately track self-location, animals need to integrate their movements through space. In amniotes, representations of self-location have been found in regions such as the hippocampus. It is unknown whether more ancient brain regions contain such representations and by which pathways they may drive locomotion. Fish displaced by water currents must prevent uncontrolled drift to potentially dangerous areas. We found that larval zebrafish track such movements and can later swim back to their earlier location. Whole-brain functional imaging revealed the circuit enabling this process of positional homeostasis. Position-encoding brainstem neurons integrate optic flow, then bias future swimming to correct for past displacements by modulating inferior olive and cerebellar activity. Manipulation of position-encoding or olivary neurons abolished positional homeostasis or evoked behavior as if animals had experienced positional shifts. These results reveal a multiregional hindbrain circuit in vertebrates for optic flow integration, memory of self-location, and its neural pathway to behavior.
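The computational core described here, integrating optic flow into a memory of displacement that later drives corrective swimming, can be illustrated with a toy integrator. This is a conceptual sketch, not a model of the zebrafish circuit; the leak parameter is a hypothetical addition showing how an imperfect neural integrator would forget past displacement:

```python
import numpy as np

def integrate_position(optic_flow, leak=0.0, dt=0.1):
    """Toy positional integrator: accumulate translational optic flow
    (taken as proportional to drift velocity) into a position estimate.
    A nonzero leak makes the stored displacement decay over time."""
    pos = 0.0
    trace = []
    for v in optic_flow:
        pos = (1.0 - leak) * pos + v * dt
        trace.append(pos)
    return np.array(trace)

# Drift forward for 20 steps, then stop: a perfect integrator (leak=0)
# retains the full displacement, so a corrective swim of -pos would
# restore the original location.
flow = np.array([1.0] * 20 + [0.0] * 20)
perfect = integrate_position(flow, leak=0.0)
leaky = integrate_position(flow, leak=0.05)
```

The contrast between the two traces mirrors the manipulation result in the abstract: degrading the position-encoding signal abolishes the memory needed for positional homeostasis.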
Motor control deficits outlasting self-reported symptoms are often reported following mild traumatic brain injury (mTBI). The exact duration and nature of these deficits remain unknown. The current study aimed to compare postural responses to static or dynamic virtual visual inputs and during standard clinical tests of balance in 38 children between 9 and 18 years of age, at 2 weeks, 3 months and 12 months post-concussion. Body sway amplitude (BSA) and postural instability (vRMS) were measured in a 3D virtual reality (VR) tunnel (i.e., optic flow) moving in the antero-posterior direction in different conditions. Measures derived from standard clinical balance evaluations (BOT-2, timed tasks) and post-concussion symptoms (PCSS-R) were also assessed. Results were compared to those of 38 healthy non-injured children following a similar testing schedule and matched according to age, gender, and premorbid level of physical activity. Results highlighted greater postural responses on the BSA and vRMS measures at 3 months post-mTBI, but not at 12 months, when compared to controls, whereas no differences were observed in post-concussion symptoms between mTBI and controls at 3 and 12 months. These deficits were specifically identified using measures of postural response in reaction to 3D dynamic visual inputs in the VR paradigm, while items from the BOT-2 and the 3 timed tasks did not reveal deficits at any of the test sessions. PCSS-R scores correlated between sessions and with the most challenging condition of the BOT-2, as well as with the timed tasks, but not with BSA and vRMS. Scores obtained in the most challenging conditions of clinical balance tests also correlated weakly with BSA and vRMS measures in the dynamic conditions.
These preliminary findings suggest that using 3D dynamic visual inputs such as optic flow in a controlled VR environment could help detect subtle postural impairments and inspire the development of clinical tools to guide rehabilitation and return-to-play recommendations.
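The study's two sway outcomes can be sketched under stated assumptions: taking BSA as the peak-to-peak amplitude of the sway signal and vRMS as the root-mean-square of sway velocity. Both are assumed definitions for illustration, not the study's exact formulas:

```python
import numpy as np

def sway_metrics(pos, dt):
    """Hypothetical reconstruction of two sway measures from a 1-D sway
    trace sampled at interval dt: BSA as peak-to-peak amplitude and vRMS
    as the RMS of the numerically differentiated velocity (assumed
    definitions for illustration)."""
    pos = np.asarray(pos, dtype=float)
    bsa = pos.max() - pos.min()          # body sway amplitude
    vel = np.gradient(pos, dt)           # sway velocity
    vrms = np.sqrt(np.mean(vel ** 2))    # postural instability
    return bsa, vrms

# Synthetic sway: a 0.3 Hz oscillation of 0.5 amplitude sampled at 100 Hz.
t = np.arange(0.0, 10.0, 0.01)
bsa, vrms = sway_metrics(0.5 * np.sin(2 * np.pi * 0.3 * t), dt=0.01)
```

The point of pairing the two metrics is that amplitude and velocity capture different deficits: a slow, large drift raises BSA, while rapid low-amplitude corrections raise vRMS.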