light pattern: Recently Published Documents

TOTAL DOCUMENTS: 136 (five years: 29)
H-INDEX: 14 (five years: 2)

2022, Vol 7 (1), pp. 2270002
Author(s): Francesca D'Elia, Francesco Pisani, Alessandro Tredicucci, Dario Pisignano, Andrea Camposeo

2021
Author(s): Shuailong Zhang, Mohammed Elsayed, Ran Peng, Yujie Chen, Yanfeng Zhang, ...

2021, Vol 81 (10)
Author(s): Angel Abusleme, Thomas Adam, Shakeel Ahmad, Rizwan Ahmed, Sebastiano Aiello, ...

Abstract
Atmospheric neutrinos are one of the most relevant natural neutrino sources that can be exploited to infer properties of cosmic rays and neutrino oscillations. The Jiangmen Underground Neutrino Observatory (JUNO), a 20 kton liquid scintillator detector with excellent energy resolution, is currently under construction in China. Thanks to its large volume, JUNO will be able to detect several atmospheric neutrinos per day. This paper presents a study of JUNO's capabilities for detecting and reconstructing atmospheric $$\nu_e$$ and $$\nu_\mu$$ fluxes. A sample of atmospheric neutrino Monte Carlo events was generated from theoretical models and then processed by the detector simulation. The excellent timing resolution of the 3″ PMT light detection system of the JUNO detector, together with the much higher light yield of scintillation over Cherenkov emission, makes it possible to measure the time structure of the scintillation light with very high precision. Since $$\nu_e$$ and $$\nu_\mu$$ interactions produce slightly different light patterns, the different time evolution of the light allows the flavor of the primary neutrino to be discriminated. A probabilistic unfolding method is used to infer the primary neutrino energy spectrum from the experimental observables of the detector. The simulated spectrum is reconstructed between 100 MeV and 10 GeV, demonstrating the great potential of the detector in the low-energy atmospheric region.
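The abstract does not specify the unfolding algorithm beyond "probabilistic"; a common choice in neutrino physics is iterative Bayesian (D'Agostini-style) unfolding, sketched below as an assumption. The function name and the flat starting prior are illustrative only, not the paper's implementation.

```python
import numpy as np

def bayesian_unfold(measured, response, n_iter=4):
    """Iterative Bayesian (D'Agostini-style) unfolding sketch.

    measured : observed spectrum, shape (n_obs,)
    response : P(observed bin j | true bin i), shape (n_true, n_obs);
               each row sums to the detection efficiency of that true bin.
    """
    n_true = response.shape[0]
    prior = np.full(n_true, 1.0 / n_true)        # flat starting prior
    efficiency = response.sum(axis=1)
    for _ in range(n_iter):
        # Bayes' theorem: P(true bin i | observed bin j)
        joint = prior[:, None] * response        # shape (n_true, n_obs)
        posterior = joint / joint.sum(axis=0, keepdims=True)
        # Redistribute the measured counts and correct for efficiency
        unfolded = (posterior * measured[None, :]).sum(axis=1) / efficiency
        prior = unfolded / unfolded.sum()
    return unfolded
```

Each iteration updates the prior with the previous unfolded result; a small number of iterations regularizes the solution, while many iterations converge toward the maximum-likelihood estimate.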


Sensors, 2021, Vol 21 (18), pp. 6097
Author(s): Taichu Shi, Yang Qi, Cheng Zhu, Ying Tang, Ben Wu

In this paper, we propose and experimentally demonstrate a three-dimensional (3D) microscopic system that reconstructs a 3D image based on structured light illumination. The spatial pattern of the structured light changes according to the profile of the object, and by measuring this change, a 3D image of the object is reconstructed. The structured light is generated with a digital micro-mirror device (DMD), which switches the structured light pattern at a kHz rate and enables the system to record 3D information in real time. The working distance of the imaging system is 9 cm at a resolution of 20 μm. The resolution, working distance, and real-time 3D imaging make the system suitable for bridge and road crack examination and structural fault detection in transportation infrastructure.
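The abstract does not describe the decoding scheme applied to the DMD patterns; one widely used structured-light technique is three-step phase-shifting profilometry, sketched here as an illustrative assumption (the system's actual patterns and decoding may differ). Depth is then proportional to the phase difference between a reference plane and the object.

```python
import numpy as np

def three_step_phase(i1, i2, i3):
    """Recover the wrapped phase of a sinusoidal fringe pattern from
    three intensity images shifted by 120 degrees:
        I_k = A + B * cos(phi + (k - 2) * 2*pi/3),  k = 1, 2, 3.
    Works elementwise on full images or on scalars."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
```

Since `I1 - I3 = sqrt(3) * B * sin(phi)` and `2*I2 - I1 - I3 = 3 * B * cos(phi)` up to a common factor, the `arctan2` cancels both the background `A` and the modulation `B`.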


2021, Vol 49 (3), pp. 12444
Author(s): Ingrida ŠAULIENĖ, Laura ŠUKIENĖ, Gintautas DAUNYS, Gediminas VALIULIS, Lukas VAITKEVIČIUS

Technological progress creates new opportunities to learn about natural objects and systems. An important consideration in choosing research methods is not only accuracy but also the time and human resources required to achieve results. This research demonstrates the possibility of using an automatic particle detector, which operates on scattered-light patterns and laser-induced fluorescence, for plant biodiversity investigation. Airborne pollen data were collected by two different devices, and the results were analysed with regard to their application to plant biodiversity observation. The paper shows how this new type of method could enable biodiversity monitoring programs to be extended with information on the diversity of airborne particles of biological origin. It reveals that plant conservation could be complemented by new tools to test the effectiveness of management plans and to optimise mitigation measures that reduce impacts on biodiversity.


2021, Vol 11 (15), pp. 6936
Author(s): Sebastian Babilon, Sebastian Beck, Julian Kunkel, Julian Klabes, Paul Myland, ...

As one factor among others, circadian effectiveness depends on the spatial light distribution of the prevalent lighting conditions. In a typical office context focused on computer work, the light experienced by office workers is usually composed of a direct component emitted by the room luminaires and the computer monitors, as well as an indirect component reflected from the walls, surfaces, and ceiling. Due to this multi-directional light pattern, spatially resolved light measurements are required for an adequate prediction of non-visual light-induced effects. In this work, we therefore propose a novel methodological framework for spatially resolved light measurements that allows the circadian effectiveness of a lighting situation to be estimated for variable field of view (FOV) definitions. Results of exemplary in-field office light measurements are reported and compared to those obtained from standard spectral radiometry to validate the accuracy of the proposed approach. The corresponding relative error is found to be of the order of 3–6%, an acceptable range for most practical applications. In addition, the impact of different FOVs as well as non-zero measurement angles is investigated.
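The abstract does not give the computation behind the circadian-effectiveness estimate. A minimal sketch, assuming a crude Gaussian stand-in for the melanopic sensitivity curve (in practice the tabulated CIE S 026 action spectrum would be used) and a simple solid-angle weighting over per-direction measurements; all names are illustrative:

```python
import numpy as np

def melanopic_weighted(wavelengths_nm, spectral_irradiance):
    """Weight a measured spectrum by a Gaussian stand-in (peak ~490 nm)
    for the melanopic sensitivity curve and integrate over wavelength.
    Assumes a uniform wavelength grid."""
    s_mel = np.exp(-0.5 * ((wavelengths_nm - 490.0) / 40.0) ** 2)
    step = wavelengths_nm[1] - wavelengths_nm[0]
    return float(np.sum(spectral_irradiance * s_mel) * step)

def fov_average(values, solid_angles):
    """Combine per-direction melanopic values over a field of view,
    weighting each measurement direction by its solid angle."""
    w = np.asarray(solid_angles, dtype=float)
    return float(np.sum(np.asarray(values) * w) / np.sum(w))
```

Restricting the set of directions (and their solid angles) passed to `fov_average` is one simple way to model the variable FOV definitions mentioned in the abstract.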


2021, Vol 2021, pp. 1-16
Author(s): Xiaojun Jia, Zihao Liu

Pattern encoding and decoding are two challenging problems in three-dimensional (3D) reconstruction systems using coded structured light (CSL). In this paper, a one-shot pattern is designed as an M-array with eight embedded geometric shapes, in which each 2 × 2 subwindow appears only once. A robust pattern decoding method for reconstructing objects from this one-shot pattern is then proposed. The decoding approach relies on a robust pattern element tracking algorithm (PETA) and generic features of the pattern elements to segment and cluster the projected structured light pattern from a single captured image. A deep convolutional neural network (DCNN) and chain sequence features are used to accurately classify pattern elements and key points (KPs), respectively. A training dataset containing many pattern elements with various blur levels and distortions is also established. Experimental results show that the proposed approach can be used to reconstruct 3D objects.
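The defining property of the M-array described above (every 2 × 2 subwindow appears only once, so a window position can be identified from its symbols alone) can be checked directly. A minimal sketch; the function name and integer symbol encoding are assumptions, not the paper's implementation:

```python
import numpy as np

def windows_unique(pattern, k=2):
    """Check the M-array window-uniqueness property: every k x k
    subwindow of the symbol grid appears at most once.

    pattern : 2D integer array of symbol codes (e.g. 0..7 for the
              eight geometric shapes).
    """
    h, w = pattern.shape
    seen = set()
    for r in range(h - k + 1):
        for c in range(w - k + 1):
            key = tuple(pattern[r:r + k, c:c + k].ravel())
            if key in seen:
                return False          # duplicate window found
            seen.add(key)
    return True
```

With eight symbols there are 8**4 = 4096 distinct 2 × 2 windows, which bounds the size of pattern that can satisfy the property.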


2021, Vol 11 (1)
Author(s): Jan Antolik, Quentin Sabatier, Charlie Galle, Yves Frégnac, Ryad Benosman

Abstract
The neural encoding of visual features in primary visual cortex (V1) is well understood, with strong correlates to low-level perception, making V1 a strong candidate for vision restoration through neuroprosthetics. However, the functional relevance of neural dynamics evoked through external stimulation imposed directly at the cortical level is poorly understood. Furthermore, protocols for designing cortical stimulation patterns that would induce a naturalistic perception of the encoded stimuli have not yet been established. Here, we demonstrate a proof of concept by addressing these issues through a computational model combining (1) a large-scale spiking neural network model of cat V1 and (2) a virtual prosthetic system that transcodes the visual input into tailored light-stimulation patterns, which drive the optogenetically modified cortical tissue in situ. Using such virtual experiments, we design a protocol for translating simple Fourier contrasted stimuli (gratings) into activation patterns of the optogenetic matrix stimulator. We then quantify the relationship between the spatial configuration of the imposed light pattern and the induced cortical activity. Our simulations in the absence of visual drive (simulated blindness) show that optogenetic stimulation with a spatial resolution as low as 100 μm and a light intensity as weak as $$10^{16}$$ photons/s/cm$$^2$$ is sufficient to evoke activity patterns in V1 close to those evoked by normal vision.
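The translation from a grating stimulus to stimulator activation patterns can be illustrated with a minimal sketch. The linear scaling to the quoted photon flux and all names below are assumptions; the paper derives its actual protocol from full spiking-network simulations:

```python
import numpy as np

def grating_to_matrix(n=32, spatial_freq=2.0, orientation_deg=45.0,
                      max_photon_flux=1e16):
    """Sample a sinusoidal grating at the positions of an n x n
    optogenetic stimulator matrix and map contrast linearly onto a
    per-emitter light intensity (photons/s/cm^2).

    spatial_freq is in cycles per matrix width; orientation_deg sets
    the grating orientation.
    """
    theta = np.deg2rad(orientation_deg)
    y, x = np.mgrid[0:n, 0:n] / n                 # unit-square coords
    phase = 2.0 * np.pi * spatial_freq * (x * np.cos(theta)
                                          + y * np.sin(theta))
    grating = 0.5 * (1.0 + np.sin(phase))          # contrast in [0, 1]
    return grating * max_photon_flux               # emitter intensities
```

A nonlinearity or thresholding stage could replace the linear mapping; the abstract's finding is that even such low peak intensities evoke near-natural V1 activity at 100 μm resolution.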

