Psychophysical Study of Human Visual Perception of Flicker Artifacts in Automotive Digital Mirror Replacement Systems

Author(s):  
Nicolai Behmann ◽  
Sousa Weddige ◽  
Holger Blume

Aliasing effects due to the time-discrete capture of amplitude-modulated light by a digital image sensor are perceived as flicker by humans. These artifacts are particularly annoying when observed in digital mirror replacement systems and can pose a safety risk. Therefore, ISO 16505 requires flicker-free reproduction for 90 % of people in these systems. Various psychophysical studies have investigated how large-area flickering of displays, environmental light, or flicker in television applications affects perception and concentration. However, no detailed knowledge of the subjective annoyance/irritation caused by flicker from camera-monitor systems used as mirror replacements in vehicles exists so far, even though the number of these systems is constantly increasing. This psychophysical study used a novel data set of real-world driving scenes and synthetic simulations, both with superimposed synthetic flicker. More than 25 test persons were asked to quantify the subjective annoyance level for different flicker frequencies, amplitudes, mean values, sizes, and positions. The results show that for digital mirror replacement systems, human subjective annoyance due to flicker is greatest in the 15 Hz range and grows with increasing amplitude and size. Additionally, the sensitivity to flicker artifacts increases with the duration of observation.
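As an aside on the aliasing mechanism described above, the sketch below estimates the apparent flicker frequency when a light source modulated at one frequency is sampled at a fixed camera frame rate; the frame rate, source frequencies, and function name are illustrative assumptions, not parameters taken from the study:

```python
def aliased_flicker_frequency(f_light, f_frame):
    """Apparent (aliased) flicker frequency, in Hz, when light modulated at
    f_light is sampled by a camera running at f_frame frames per second.
    The modulation folds into the baseband 0 .. f_frame/2."""
    folded = f_light % f_frame
    return min(folded, f_frame - folded)

# Illustrative only: a 75 Hz source captured at 60 frames/s appears to
# flicker at 15 Hz, within the range reported as most annoying; a source
# at an exact multiple of the frame rate (120 Hz) shows no flicker at all.
for f_light in (75.0, 90.0, 120.0):
    print(f_light, "Hz ->", aliased_flicker_frequency(f_light, 60.0), "Hz on the display")
```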

2021 ◽  
Vol 13 (10) ◽  
pp. 2001
Author(s):  
Antonella Boselli ◽  
Alessia Sannino ◽  
Mariagrazia D’Emilio ◽  
Xuan Wang ◽  
Salvatore Amoruso

During the summer of 2017, multiple huge fires occurred on Mount Vesuvius (Italy), dispersing a large quantity of ash into the surrounding area as tens of hectares of Mediterranean scrub burned. The fires affected a very large area of the Vesuvius National Park, and the smoke was driven by winds towards the city of Naples, causing daily peak values of particulate matter (PM) concentration at ground level that exceeded the limit of the EU air quality directive. The smoke plume spreading over the area of Naples in this period was characterized by active (lidar) and passive (sun photometer) remote sensing as well as near-surface (optical particle counter) observational techniques. The measurements allowed us to follow both the PM variation at ground level and the vertical profile of fresh biomass burning aerosol, as well as to analyze its optical and microphysical properties. The results evidenced the presence of a layer of fine-mode aerosol with large mean values of optical depth (AOD > 0.25) and Ångström exponent (γ > 1.5) above the observational site. Moreover, the lidar ratio and aerosol linear depolarization obtained from the lidar observations were about 40 sr and 4%, respectively, consistent with the presence of biomass burning aerosol in the atmosphere.
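The Ångström exponent quoted above is a standard two-wavelength measure of how strongly the aerosol optical depth varies with wavelength; a minimal sketch of that relation follows, with purely illustrative wavelengths and AOD values rather than the campaign's measurements:

```python
import math

def angstrom_exponent(aod_a, wavelength_a, aod_b, wavelength_b):
    """Angstrom exponent from aerosol optical depths at two wavelengths
    (wavelengths in the same units)."""
    return -math.log(aod_a / aod_b) / math.log(wavelength_a / wavelength_b)

# Illustrative values only: fine-mode smoke typically yields exponents > 1.5.
print(round(angstrom_exponent(0.30, 440e-9, 0.10, 870e-9), 2))  # ~1.61
```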


2014 ◽  
Vol 4 (3) ◽  
pp. 274-280 ◽  
Author(s):  
Ying Chen ◽  
Wanpeng Xu ◽  
Rongsheng Zhao ◽  
Xiangning Chen

2002 ◽  
Vol 722 ◽  
Author(s):  
M. Vieira ◽  
M. Fernandes ◽  
A. Fantoni ◽  
P. Louro ◽  
R. Schwarz

Abstract. Based on the Laser Scanned Photodiode (LSP) image sensor, we present an optical fingerprint reader for biometric authentication. The device configuration and the scanning system are optimized for this specific purpose. The scanning technique for fingerprint acquisition is improved, and the effects of the probe beam size, wavelength and flux, the scan time, and the modulation frequency on image contrast and resolution are analyzed under different electrical bias conditions. An optical model of the image acquisition process is presented and supported by a two-dimensional simulation. Results show that a trade-off between read-out parameters (fingerprint scanner) and the biometric sensing element structure (p-i-n structure) is needed to minimize the cross talk between the fingerprint ridges and the fingerprint valleys. In heterostructures with wide band gap/low conductivity doped layers, the user-specific information is detected with good contrast, while the resolution of the sensor is around 20 μm. A further increase in contrast is achieved by slightly reverse biasing the sensor, with a sensitivity of 6.5 μW cm−2 and a flux range of two orders of magnitude.
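As a rough illustration of the ridge/valley contrast the paper optimizes, a generic Michelson-contrast calculation over the two photoresponse levels is sketched below; the signal values are placeholders and the definition is the common one, not necessarily the exact metric used by the authors:

```python
def michelson_contrast(ridge_signal, valley_signal):
    """Michelson contrast between the photoresponses measured over a
    fingerprint ridge and a neighbouring valley (arbitrary, equal units)."""
    return abs(ridge_signal - valley_signal) / (ridge_signal + valley_signal)

# Placeholder photocurrents: lower cross talk between ridges and valleys
# pushes this value towards 1, i.e. towards a sharper fingerprint image.
print(michelson_contrast(8.0, 2.0))  # 0.6
```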


2016 ◽  
Vol 16 (8) ◽  
pp. 5075-5090 ◽  
Author(s):  
Robert E. Holz ◽  
Steven Platnick ◽  
Kerry Meyer ◽  
Mark Vaughan ◽  
Andrew Heidinger ◽  
...  

Abstract. Despite its importance as one of the key radiative properties that determines the impact of upper tropospheric clouds on the radiation balance, ice cloud optical thickness (IOT) has proven to be one of the more challenging properties to retrieve from space-based remote sensing measurements. In particular, optically thin upper tropospheric ice clouds (cirrus) have been especially challenging due to their tenuous nature, extensive spatial scales, and complex particle shapes and light-scattering characteristics. The lack of independent validation motivates the investigation presented in this paper, wherein systematic biases between MODIS Collection 5 (C5) and CALIOP Version 3 (V3) unconstrained retrievals of tenuous IOT (< 3) are examined using a month of collocated A-Train observations. An initial comparison revealed a factor of 2 bias between the MODIS and CALIOP IOT retrievals. This bias is investigated using an infrared (IR) radiative closure approach that compares both products with MODIS IR cirrus retrievals developed for this assessment. The analysis finds that both the MODIS C5 and the unconstrained CALIOP V3 retrievals are biased (high and low, respectively) relative to the IR IOT retrievals. Based on this finding, the MODIS and CALIOP algorithms are investigated with the goal of explaining and minimizing the biases relative to the IR. For MODIS we find that the assumed ice single-scattering properties used for the C5 retrievals are not consistent with the mean IR COT distribution. The C5 ice scattering database results in the asymmetry parameter (g) varying as a function of effective radius with mean values that are too large. The MODIS retrievals have been brought into agreement with the IR by adopting a new ice scattering model for Collection 6 (C6) consisting of a modified gamma distribution comprised of a single habit (severely roughened aggregated columns); the C6 ice cloud optical property models have a constant g ≈ 0.75 in the mid-visible spectrum, 5–15 % smaller than C5. For CALIOP, the assumed lidar ratio for unconstrained retrievals is fixed at 25 sr for the V3 data products. This value is found to be inconsistent with the constrained (predominantly nighttime) CALIOP retrievals. An experimental data set was produced using a modified lidar ratio of 32 sr for the unconstrained retrievals (an increase of 28 %), selected to provide consistency with the constrained V3 results. These modifications greatly improve the agreement with the IR and provide consistency between the MODIS and CALIOP products. Based on these results the recently released MODIS C6 optical products use the single-habit distribution given above, while the upcoming CALIOP V4 unconstrained algorithm will use higher lidar ratios for unconstrained retrievals.
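To see why the assumed lidar ratio matters so much for the unconstrained retrievals, a simplified single-layer relation between layer-integrated attenuated backscatter, lidar ratio, and optical depth can be sketched as below; the multiple-scattering factor and backscatter value are illustrative assumptions, and the operational CALIOP algorithm is considerably more involved:

```python
import math

def cloud_optical_depth(gamma_prime, lidar_ratio, eta=1.0):
    """Optical depth of a single cloud layer from its integrated attenuated
    backscatter gamma_prime [1/sr], an assumed lidar ratio [sr], and a
    multiple-scattering factor eta (simplified two-way transmittance relation)."""
    two_way_transmittance = 1.0 - 2.0 * eta * lidar_ratio * gamma_prime
    return -0.5 * math.log(two_way_transmittance)

gamma_prime = 0.005  # illustrative value, 1/sr
for s in (25.0, 32.0):  # V3 default vs. the modified unconstrained lidar ratio
    print(f"S = {s:.0f} sr -> tau = {cloud_optical_depth(gamma_prime, s):.3f}")
```

For optically thin layers the retrieved optical depth scales roughly linearly with the assumed lidar ratio, which is why raising it from 25 sr to 32 sr lifts the unconstrained retrievals by close to the 28 % noted above.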


2007 ◽  
Vol 24 (2) ◽  
pp. 457-462 ◽  
Author(s):  
Lucélia Donatti ◽  
Edith Fanta

The Antarctic fish Trematomus newnesi (Boulenger, 1902) occurs from benthic to pelagic habitats, in photic conditions that vary seasonally and daily and that induce retinomotor movements. Fish were experimentally kept under constant darkness, constant light, or a 12 h light/12 h dark cycle for seven days. The retinomotor movement of the pigment epithelium was established through the pigment index, while that of the cones was calculated as the length of the myoid. The retinomotor movement of the pigment epithelium in T. newnesi revealed that adaptation to constant light occurred within the first hour of exposure and remained constant for the following seven days. However, adaptation to constant darkness was slower: the difference between the mean values of the pigment indices at the sampling time intervals was significant in the first hours of the experiment, and only after six hours did it cease to be significant. The myoid of the cones became elongated in darkness and contracted in light. In the experiments where T. newnesi was initially exposed to 12 hours of light followed by 12 hours of darkness, it was evident that the speed and intensity of the retinomotor movements were higher when darkness changed into light than when light changed into darkness.


2011 ◽  
Vol 5 (3) ◽  
pp. 1547-1582
Author(s):  
S. Gruber

Abstract. Permafrost underlies much of Earth's surface and interacts with climate, ecosystems, and human systems. It is a complex phenomenon controlled by climate and (sub-)surface properties, and it reacts to change with variable delay. Heterogeneity and sparse data challenge the modeling of its spatial distribution. Currently, there is no data set that adequately informs global studies of permafrost. The available data set for the Northern Hemisphere is frequently used for model evaluation, but its quality and consistency are difficult to assess. A global model of permafrost extent and a data set of permafrost zonation are presented and discussed, extending earlier studies by including the Southern Hemisphere, by using consistent data and methods, and, most importantly, by attention to uncertainty and scaling. Established relationships between air temperature and the occurrence of permafrost are re-formulated into a model that is parametrized using published estimates. It is run with high-resolution (<1 km) global elevation data and air temperatures based on the NCAR-NCEP reanalysis and CRU TS 2.0. The resulting data set provides more spatial detail and a consistent extrapolation to remote regions, while aggregated values resemble previous studies. The estimated uncertainties affect regional patterns and aggregate numbers, but provide interesting insight. The permafrost area, i.e. the actual surface area underlain by permafrost, north of 60° S is estimated to be 13–18 × 10⁶ km², or 9–14 % of the exposed land surface. The global permafrost area including Antarctic and sub-sea permafrost is estimated to be 16–21 × 10⁶ km². The global permafrost region, i.e. the exposed land surface below which some permafrost can be expected, is estimated to be 22 ± 3 × 10⁶ km². A large proportion of this exhibits considerable topography and spatially discontinuous permafrost, underscoring the importance of attention to scaling issues and heterogeneity in large-area models.
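A quick back-of-the-envelope check, using only the figures quoted above and assuming the lower area estimate pairs with the lower percentage, recovers the implied exposed land surface north of 60° S:

```python
# Figures quoted in the abstract (units of 10^6 km^2 and fractions of land).
area_low, area_high = 13.0, 18.0   # permafrost area north of 60 deg S
frac_low, frac_high = 0.09, 0.14   # quoted 9-14 % of the exposed land surface

# Implied exposed land surface, roughly 130-145 x 10^6 km^2 in both cases.
print(round(area_low / frac_low, 1), round(area_high / frac_high, 1))  # 144.4 128.6
```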


2008 ◽  
Vol 8 (2) ◽  
pp. 6653-6681 ◽  
Author(s):  
A. Konare ◽  
C. Liousse ◽  
B. Guillaume ◽  
F. Solmon ◽  
P. Assamoi ◽  
...  

Abstract. Africa, as a major aerosol source in the world, plays a key role in regional and global geochemical cycles and in climate change. Combustion carbonaceous particles, central in this context through their radiative and hygroscopic properties, require ad hoc emission inventories. These inventories must incorporate fossil fuels FF (industries, traffic, ...), biofuels BF (charcoal, wood burning, ..., quite common in Africa for domestic use), and biomass burning BB regularly occurring over vast areas all over the African continent. The latter, subject to rapid and massive demographic, migratory, industrial, and socio-economic changes, requires continuous updating of emission inventories so as to keep pace with this evolution. Two such different inventories, L96 and L06, with a main focus on BB emissions, have been implemented for comparison within the regional climate model RegCM3, which is equipped with a specialized carbonaceous aerosol module. The resulting modeled black carbon (BC) and organic carbon (OC) fields have been compared to a composite data set of past and present observations available for Africa. This data set includes measurements from intensive field campaigns (EXPRESSO 1996, SAFARI 2000), from the IDAF/DEBITS surface network, and from MODIS, focused on selected west, central, and southern African sub-domains. This composite approach has been adopted to take advantage of possible combinations between satellite high-resolution coverage of Africa, regional modeling, use of an established surface network, and the patchy but detailed knowledge gained from past short intensive regional field experiments. Stemming from these comparisons, one prominent conclusion is the need for continuous, detailed temporal and spatial updating of combustion emission inventories so as to reflect the rapid transformations of the African continent.


Author(s):  
U. Roy ◽  
R. Sudarsan ◽  
R. D. Sriram ◽  
K. W. Lyons ◽  
M. R. Duffey

Abstract Tolerance design is the process of deriving a description of geometric tolerance specifications for a product from a set of specifications on the desired properties of the product. Existing approaches to tolerance analysis and synthesis require detailed knowledge of the geometry of assemblies and are mostly applicable during advanced stages of design, leading to a less than optimal design process. During the design process of assemblies, both the assembly structure and the associated tolerance information evolve continuously, and significant gains can be achieved by effectively using this information to influence the design of an assembly. Any proactive approach to assembly or tolerance analysis in the early design stages will involve decision making with incomplete information models. In order to carry out early tolerance synthesis and analysis in the conceptual stages of product design, we need to devise techniques for representing function-behavior-assembly models that allow analysis and synthesis of tolerances even with an incomplete data set. A 'function' (what the system is for) is associated with the transformation of an input physical entity into an output physical entity by the system. The problem, or customer's need, initially described by functional requirements on an assembly and associated constraints on those functional requirements, drives the concept of an assembly. This specification of functional requirements and constraints defines a functional model for the assembly. Many researchers have studied functional representation (function-based taxonomy and ontology), function-to-form mapping, and behavior representation (behavior means how the system/product works). However, there is no comprehensive function-assembly-behavior (FAB) integrated model. In this paper, we discuss the integration of function, assembly, and behavior representations into a comprehensive information model (the FAB model). To do this, we need to develop appropriate assembly models and tolerance models that enable the designer to incrementally understand the build-up, or propagation, of tolerances (i.e., constraints) and to optimize the layout, features, or assembly realizations. This will ensure ease of tolerance delivery.
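To make the idea of incrementally tracking tolerance build-up concrete, the sketch below pairs a toy assembly representation with worst-case and root-sum-square stack-up over one tolerance chain; the class and field names are hypothetical and do not reproduce the authors' FAB information model:

```python
import math
from dataclasses import dataclass, field
from typing import List

@dataclass
class Feature:
    """A dimensioned feature with a symmetric tolerance (e.g. +/- 0.05 mm)."""
    name: str
    nominal: float
    tolerance: float

@dataclass
class Assembly:
    """Toy assembly node holding the features of a single tolerance chain."""
    name: str
    chain: List[Feature] = field(default_factory=list)

    def worst_case_stackup(self) -> float:
        # Worst case: the individual tolerances simply add up.
        return sum(f.tolerance for f in self.chain)

    def rss_stackup(self) -> float:
        # Statistical (root-sum-square) stack-up for independent variations.
        return math.sqrt(sum(f.tolerance ** 2 for f in self.chain))

gap_chain = Assembly("shaft-to-housing gap", [
    Feature("shaft length", 50.0, 0.05),
    Feature("spacer thickness", 5.0, 0.02),
    Feature("housing bore depth", 55.2, 0.08),
])
print(gap_chain.worst_case_stackup())      # 0.15
print(round(gap_chain.rss_stackup(), 4))   # 0.0964
```

Recomputing such a chain as features are added or refined is one simple way a designer could watch tolerance propagation evolve alongside the assembly structure.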


Author(s):  
Monica Chis

Clustering is an important technique used in discovering the inherent structure present in data. The purpose of cluster analysis is to partition a given data set into a number of groups such that data in a particular cluster are more similar to each other than to objects in different clusters. Hierarchical clustering refers to the formation of a recursive clustering of the data points: a partition into many clusters, each of which is itself hierarchically clustered. Hierarchical structures solve many problems in a wide range of areas of interest. In this paper, a new evolutionary algorithm for detecting the hierarchical structure of an input data set is proposed. This problem is relevant to economics, market segmentation, management, biological taxonomy, and other domains. A new linear representation of the cluster structure within the data set is proposed. An evolutionary algorithm evolves a population of clustering hierarchies, using mutation and crossover as (search) variation operators. The final goal is to present a data clustering representation that allows a hierarchical clustering structure to be found quickly.
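A minimal sketch of the kind of evolutionary search described above is given below, using a parent-array ("linear") encoding of a hierarchy over the data points, truncation selection, single-point crossover, and random-reassignment mutation; the encoding, fitness function, and parameters are illustrative assumptions rather than the paper's exact algorithm:

```python
import random

def fitness(parents, points):
    """Lower is better: total distance from each point to its parent.
    Gene i holds a parent index in 0..i; parent == i marks a sub-tree root,
    so every individual encodes a valid forest, i.e. a hierarchy over the data."""
    return sum(abs(points[i] - points[p]) for i, p in enumerate(parents) if p != i)

def random_individual(n):
    return [random.randrange(i + 1) for i in range(n)]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]            # the 0..i property is kept position-wise

def mutate(individual, rate=0.1):
    return [random.randrange(i + 1) if random.random() < rate else p
            for i, p in enumerate(individual)]

def evolve(points, pop_size=40, generations=200):
    population = [random_individual(len(points)) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda ind: fitness(ind, points))
        survivors = population[: pop_size // 2]
        children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return min(population, key=lambda ind: fitness(ind, points))

# Two well-separated 1-D groups; the evolved parent array should link points
# mostly to neighbours within their own group, yielding two sub-hierarchies.
data = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
print(evolve(data))
```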


1998 ◽  
Author(s):  
Toshio Kameshima ◽  
Noriyuki Kaifu ◽  
Eiichi Takami ◽  
Masakazu Morishita ◽  
Tatsuya Yamazaki
