image motion
Recently Published Documents


TOTAL DOCUMENTS: 764 (five years: 108)
H-INDEX: 50 (five years: 4)

2022 · Vol 21 (12) · pp. 298
Author(s): Zi-Yue Wang, De-Qing Ren, Raffi Saadetian

Abstract: Measurements of the daytime seeing profile of atmospheric turbulence are crucial for evaluating a solar astronomical site, so research on the turbulence profile as a function of altitude, $C_n^2(h)$, is increasingly important for the performance estimation and optimization of future adaptive optics (AO) systems, including multi-conjugate adaptive optics (MCAO). Recently, the S-DIMM+ (Solar Differential Image Motion Monitor+) method has been used successfully to measure daytime turbulence profiles above the New Solar Telescope (NST) at Big Bear Lake. However, such techniques require a large solar telescope, which is not realistic for a prospective new astronomical site. The A-MASP (advanced multiple-aperture seeing profiler) method is more portable and has been shown to reliably retrieve the seeing profile up to 16 km with the Dunn Solar Telescope (DST) at the National Solar Observatory (Townson, Kellerer et al.). However, because of its two-telescope structure, the ground-layer turbulence must be calculated by combining A-MASP with S-DIMM+. To address these problems, we introduce the two-telescope seeing profiler (TTSP), which consists of two portable individual telescopes. Numerical simulations were conducted to evaluate the performance of the TTSP. We find that the TTSP can effectively retrieve seeing profiles of four turbulence layers with a relative error of less than 4% and is dependable for actual seeing measurements.
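For context, any differential image motion monitor converts a measured image-motion variance into a seeing estimate through the standard Sarazin-Roddier relation between the differential motion variance and the Fried parameter r0. The sketch below illustrates only that standard relation, not the TTSP retrieval algorithm itself; the sub-aperture diameter D and separation d are assumed example values.

```python
import numpy as np

# A minimal sketch of the standard Sarazin-Roddier DIMM relation, not the
# TTSP layer retrieval. D (sub-aperture diameter) and d (sub-aperture
# separation, with d >= 2D) are assumed example values.
LAMBDA = 500e-9   # wavelength [m]
D = 0.06          # sub-aperture diameter [m] (assumption)
d = 0.14          # sub-aperture separation [m] (assumption)

def seeing_from_variance(sigma2_l):
    """Longitudinal differential image-motion variance [rad^2] -> seeing [arcsec]."""
    k_l = 2.0 * LAMBDA**2 * (0.179 * D**(-1 / 3) - 0.0968 * d**(-1 / 3))
    r0 = (k_l / sigma2_l) ** (3 / 5)          # Fried parameter [m]
    seeing_rad = 0.98 * LAMBDA / r0           # FWHM of the seeing disk [rad]
    return np.degrees(seeing_rad) * 3600.0    # convert to arcsec

# Example: a measured rms differential motion of 0.5 arcsec.
sigma2 = (0.5 / 3600.0 * np.pi / 180.0) ** 2
print(f"seeing ~ {seeing_from_variance(sigma2):.2f} arcsec")
```

With these example values, a 0.5 arcsec rms differential motion corresponds to r0 of roughly 10 cm and a seeing of about 1 arcsec, which is the order of magnitude a daytime profiler must resolve per layer.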


2021 · Vol 14 (1) · pp. 87
Author(s): Yeping Peng, Zhen Tang, Genping Zhao, Guangzhong Cao, Chao Wu

Unmanned aerial vehicle (UAV) based imaging has become an attractive technology for monitoring wind turbine blades (WTBs). In such applications, image motion blur is a challenging problem, so motion deblurring is of great significance for monitoring running WTBs. However, these applications face a shortage of WTB images, in particular pairs of sharp and blurred images captured under the same conditions, as needed for network training. To overcome the difficulty of acquiring such image pairs, a training-sample synthesis method is proposed. Sharp images of static WTBs were first captured, and video sequences were then recorded while running the WTBs at different speeds. Blurred images were identified in the video sequences and matched to the sharp images using image differencing. To expand the dataset, rotational motion blur was simulated on different WTBs, and synthetic image pairs were produced by fusing sharp images with the simulated blurs. In total, 4000 image pairs were obtained. For motion deblurring, a hybrid deblurring network integrating DeblurGAN and DeblurGANv2 was deployed. The results show that the combination of DeblurGANv2 and Inception-ResNet-v2 yields better deblurred images, in terms of both signal-to-noise ratio (80.138) and structural similarity (0.950), than the comparable DeblurGAN and MobileNet-DeblurGANv2 networks.
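The abstract does not give implementation details for the simulated rotational blur, but one common way to approximate it is to average incrementally rotated copies of a sharp frame about the rotor hub, mimicking integration over the exposure time. The sketch below follows that idea; the hub position, sweep angle, and number of sub-frames are hypothetical, not values from the paper.

```python
import numpy as np
import cv2

# A minimal sketch of one way to synthesize rotational motion blur for
# sharp/blurred training pairs. The hub position, sweep angle, and step
# count are assumptions; the paper does not specify its parameters.

def rotational_blur(sharp, hub, sweep_deg=2.0, n_steps=20):
    """Average n_steps copies of `sharp` rotated about `hub` (x, y) in pixels."""
    h, w = sharp.shape[:2]
    acc = np.zeros_like(sharp, dtype=np.float64)
    for ang in np.linspace(0.0, sweep_deg, n_steps):
        M = cv2.getRotationMatrix2D(hub, ang, 1.0)   # rotate about the hub
        acc += cv2.warpAffine(sharp, M, (w, h)).astype(np.float64)
    return (acc / n_steps).astype(sharp.dtype)

# Usage: pair a static sharp frame with its simulated blur.
sharp = cv2.imread("wtb_sharp.png")                  # hypothetical file name
blurred = rotational_blur(sharp, hub=(640.0, 360.0))
cv2.imwrite("wtb_blur.png", blurred)
```

Larger sweep angles or more sub-frames mimic faster rotation or longer exposures, which is how a range of blade speeds could be covered synthetically.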


Author(s): Alicia Dautt-Silva, Raymond de Callafon

Abstract: The task of trajectory planning for a dual-mirror optical pointing system benefits greatly from carefully designed dynamic input signals. This paper summarizes the application of multivariable input shaping (IS) to a dual-mirror system, starting from initial open-loop step-response data. The optical pointing system consists of two Fast Steering Mirrors (FSMs) for which dynamically coupled input signals are designed while adhering to mechanical and input-signal constraints. The planned trajectories for the two mirrors are determined via (inverse) kinematic analysis. A linear program (LP) is used to compute the dynamic input signal for each FSM, with one mirror acting as an image-motion compensation device that guarantees tracking of a planned trajectory within a specified accuracy and within the operating constraints of the FSMs.
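To make the LP formulation concrete, the sketch below sets up a single-axis toy version of the idea: the mirror dynamics are assumed to be a discrete double integrator (the paper instead identifies its model from step-response data), and the peak input magnitude is minimized subject to tracking a planned trajectory within an accuracy band. This is an illustration of the LP structure, not the authors' multivariable formulation.

```python
import numpy as np
from scipy.optimize import linprog

# A minimal single-axis sketch of input design as an LP. The double-
# integrator model, time step, trajectory, and accuracy eps are all
# assumptions for illustration.
dt, N = 1e-3, 200
A = np.array([[1.0, dt], [0.0, 1.0]])     # position/velocity dynamics
B = np.array([0.5 * dt**2, dt])           # ZOH input distribution
C = np.array([1.0, 0.0])                  # measure position only

# Lower-triangular response map G: y = G @ u.
G = np.zeros((N, N))
for k in range(N):
    x = B.copy()
    for j in range(k, N):
        G[j, k] = C @ x
        x = A @ x

r = 1e-3 * (1 - np.cos(np.linspace(0, np.pi, N))) / 2   # planned trajectory [rad]
eps = 1e-5                                              # tracking accuracy [rad]

# Decision variables: [u_0..u_{N-1}, t]; minimize t = peak |u|.
c = np.r_[np.zeros(N), 1.0]
A_ub = np.vstack([
    np.c_[ np.eye(N), -np.ones((N, 1))],   #  u_k <= t
    np.c_[-np.eye(N), -np.ones((N, 1))],   # -u_k <= t
    np.c_[ G,          np.zeros((N, 1))],  #  y_k <= r_k + eps
    np.c_[-G,          np.zeros((N, 1))],  # -y_k <= -(r_k - eps)
])
b_ub = np.r_[np.zeros(2 * N), r + eps, -(r - eps)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * N + [(0, None)])
print("peak input:", res.x[-1] if res.success else res.message)
```

Because both the tracking-accuracy band and input limits enter as linear inequalities, hard actuator constraints of the kind the paper describes can be added as extra rows without changing the structure of the problem.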


2021 · Vol Publish Ahead of Print
Author(s): Harold E. Bedell, Dorcas K. Tsang, Michael T. Ukwade

2021 · Vol 2145 (1) · pp. 012015
Author(s): Ronald Macatangay, Somsawat Rattanasoon

Abstract: Forecasting the astronomical seeing above an observatory can help astronomers plan their observations. In this study, the astronomical seeing above the Thai National Observatory (TNO) at Doi Inthanon, Chiang Mai, Thailand was simulated using the Weather Research and Forecasting (WRF) model. The model outputs were then compared to seeing observations of Polaris made with a Differential Image Motion Monitor (DIMM). The results show that the forecasts capture the variation of the astronomical seeing fairly well. However, bias correction of the simulations is needed because of a lack of meteorological balloon data to constrain the model.
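The abstract does not specify the bias-correction scheme, but the simplest common choice is a linear fit of forecasts against observations, applied to future forecasts. The sketch below shows that approach with hypothetical numbers; it is one plausible post-processing step, not the authors' method.

```python
import numpy as np

# A minimal sketch of a linear bias correction of forecast seeing against
# DIMM observations. All values below are hypothetical examples.
wrf_seeing = np.array([1.4, 1.1, 1.8, 2.0, 1.3])    # WRF forecasts [arcsec]
dimm_seeing = np.array([1.1, 0.9, 1.5, 1.6, 1.0])   # matched DIMM values [arcsec]

a, b = np.polyfit(wrf_seeing, dimm_seeing, 1)       # least-squares slope/intercept
corrected = a * wrf_seeing + b                      # apply to (new) forecasts
print(f"mean bias before: {np.mean(wrf_seeing - dimm_seeing):+.2f} arcsec, "
      f"after: {np.mean(corrected - dimm_seeing):+.2f} arcsec")
```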


2021 · Vol 15
Author(s): Yihan Lin, Wei Ding, Shaohua Qiang, Lei Deng, Guoqi Li

With event-driven algorithms, especially spiking neural networks (SNNs), achieving continuous improvement in neuromorphic vision processing, a more challenging event-stream (ES) dataset is urgently needed. However, creating an ES dataset with neuromorphic cameras such as dynamic vision sensors (DVS) is a time-consuming and costly task. In this work, we propose a fast and effective algorithm, termed Omnidirectional Discrete Gradient (ODG), to convert the popular computer-vision dataset ILSVRC2012 into an event-stream version, converting about 1,300,000 frame-based images into ES samples across 1,000 categories. The resulting dataset, ES-ImageNet, is dozens of times larger than other current neuromorphic classification datasets and is generated entirely in software. The ODG algorithm simulates image motion to generate local value changes, with discrete gradient information in different directions, providing a low-cost and high-speed method for converting frame-based images into event streams; a companion Edge-Integral algorithm reconstructs high-quality images from the event streams. Furthermore, we analyze the statistics of ES-ImageNet in multiple ways and provide a performance benchmark on the dataset using both well-known deep neural network and spiking neural network algorithms. We believe this work provides a new large-scale benchmark dataset for SNNs and neuromorphic vision.
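To illustrate the general idea of "image motion generating local value changes," the sketch below shifts a frame by one pixel along several directions and thresholds the intensity differences into ON/OFF events, as a DVS would report them. This is a reading of the abstract, not the authors' ODG implementation; the direction set, timestamps, and threshold are assumptions.

```python
import numpy as np

# A minimal sketch of frame-to-event conversion via directional shifts,
# inspired by (but not reproducing) the ODG algorithm. Direction set and
# threshold are assumptions.
DIRECTIONS = [(0, 1), (1, 0), (0, -1), (-1, 0),
              (1, 1), (1, -1), (-1, 1), (-1, -1)]   # 8 unit shifts

def frame_to_events(img, threshold=0.1):
    """Return (t, y, x, polarity) events from one grayscale frame in [0, 1]."""
    events = []
    for t, (dy, dx) in enumerate(DIRECTIONS):
        moved = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        diff = moved - img                       # local change under motion
        ys, xs = np.nonzero(np.abs(diff) > threshold)
        pol = (diff[ys, xs] > 0).astype(np.int8) * 2 - 1   # +1 ON, -1 OFF
        events.extend(zip([t] * len(ys), ys, xs, pol))
    return events

img = np.random.rand(32, 32)    # stand-in for an ILSVRC2012 frame
print(len(frame_to_events(img)), "events")
```

Because the conversion is purely local arithmetic, it runs at frame rate on a CPU, which is consistent with the abstract's claim of a low-cost, high-speed software pipeline.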


2021
Author(s): Yusuke Ujitoko, Takahiro Kawabe

Humans can judge the softness of elastic materials from visual cues alone. However, the factors contributing to the judgment of visual softness are not yet fully understood. We conducted a psychophysical experiment to determine which factors and motion features contribute to the apparent softness of materials. Observers watched video clips in which materials were indented from the top surface to a certain depth and reported the apparent softness of the materials. The depth and speed of indentation were systematically manipulated, and material compliance was controlled as a physical characteristic of the materials. We found that higher indentation speeds produced higher softness ratings, and this variation with indentation speed was well explained by the image motion speed. Indentation depth had a strong effect on the softness ratings, and its influence was consistently explained by motion features related to overall deformation. Higher material compliance also produced higher ratings, although this effect was not straightforwardly explained by the motion features. We conclude that the brain judges the softness of indented materials on the basis of motion speed and deformation magnitude, while the motion features related to material compliance require further study.


2021
Author(s): Adam Mani, Xinzhu Yang, Tiffany Zhao, David M Berson

Optokinetic nystagmus (OKN) is a visuomotor reflex that works in tandem with the vestibulo-ocular reflex (VOR) to stabilize the retinal image during self-motion. OKN requires information about both the direction and the speed of retinal image motion. Both components are computed within the retina, because they are already encoded in the spike trains of the specific class of retinal output neurons that drives OKN: the ON direction-selective ganglion cells (ON DSGCs). The synaptic circuits that shape the directional tuning of ON DSGCs, anchored by starburst amacrine cells, are largely established. By contrast, little is known about the cells and circuits that account for the slow-speed preference of ON DSGCs and, thus, of the OKN they drive. A recent study in rabbit retina implicates feedforward glycinergic inhibition as the key suppressor of ON DSGC responses to fast motion. Here, we used serial-section electron microscopy, patch recording, pharmacology, and optogenetic and chemogenetic manipulations to probe this circuit in mouse retina. We confirm a central role for feedforward glycinergic inhibition onto ON DSGCs and identify a surprising primary source of this inhibition: the VGluT3 amacrine cell (VG3 cell). VG3 cells are retinal interneurons that release both glycine and glutamate, exciting some neurons and inhibiting others. Their role in suppressing the response of ON DSGCs to rapid global motion is surprising, both because VG3 cells had been thought to provide glutamatergic excitation to ON DSGCs, not glycinergic inhibition, and because their strong receptive-field surrounds might have been expected to render them unresponsive to global motion. In fact, dendritic Ca2+ imaging reveals that VG3 cells are robustly activated by the kinds of fast global motion that suppress ON DSGCs and weaken optokinetic responses, because surround suppression is less prominent when probed with moving gratings than with spots. Through their release of glutamate, VG3 cells excite many ganglion cell types. We confirmed that for one such type, the ON-OFF DSGCs, VG3 cells enhance the response to fast motion, just as they suppress it in ON DSGCs. Together, our results assign a novel function to VGluT3 cells in shaping the velocity range over which retinal slip drives compensatory image-stabilizing eye movements. In addition, the fast-motion signal from VGluT3 cells is used by ON-OFF DSGCs to extend the speed range over which they respond, and it might shape the speed tuning or temporal bandwidth of the responses of other RGCs.


2021 · Vol 21 (9) · pp. 2046
Author(s): Michele A. Cox, Janis Intoy, Benjamin Moon, Ruei-Jr Wu, Jonathan D. Victor, ...
