Depth Modulation: Composing motion in immersive audiovisual spaces

2012 ◽  
Vol 17 (2) ◽  
pp. 156-162
Author(s):  
Ewa Trębacz

The field of electroacoustic music has seen years of extensive exploration of aural spatial perception and an abundance of spatialisation techniques. Today the growing ubiquity of visual 3D technologies gives artists a similar opportunity in the realm of visual music. With stereoscopic video we can now compose individual depth cues independently. The continuous change of the perceived depth of the audiovisual space over time is referred to here as depth modulation, and can only be fully appreciated through motion.

What can be achieved through the separation and manipulation of visual and sonic spatial cues? What can we learn about the way we perceive space if the basic components building our understanding of the surrounding environment are artificially split and re-arranged?

Visual music appears to be a perfect field for such experimentation. Strata of visual and aural depth cues can be used to create audiovisual counterpoints in three-dimensional spaces. The choice of abstract imagery and the lack of obvious narrative storylines allow us to focus our perception on the evolution of the immersive audiovisual space itself. A new language of the immersive audiovisual medium should emerge as a delicate, ever-changing balance between all previously separated and altered components.

2020 ◽  
Vol 3 (1) ◽  
pp. 10501-1-10501-9
Author(s):  
Christopher W. Tyler

Abstract. For the visual world in which we operate, the core issue is to conceptualize how its three-dimensional structure is encoded through the neural computation of multiple depth cues and their integration into a unitary depth structure. One approach to this issue is the full Bayesian model of scene understanding, but this is shown to require selection from an implausibly large number of possible scenes. An alternative approach is to propagate the implied depth-structure solution for the scene through the "belief propagation" algorithm on general probability distributions. However, a more efficient model of local slant propagation is developed as an alternative.

The overall depth percept must be derived from the combination of all available depth cues, but a simple linear summation rule across, say, a dozen different depth cues would massively overestimate the perceived depth of the scene in cases where each cue alone provides a close-to-veridical depth estimate. On the other hand, a Bayesian averaging or "modified weak fusion" model of depth-cue combination does not provide for the observed enhancement of perceived depth from weak depth cues. Thus, current models do not account for the empirical properties of perceived depth from multiple depth cues.

The present analysis shows that these problems can be addressed by an asymptotic, or hyperbolic Minkowski, approach to cue combination. With appropriate parameters, this first-order rule gives strong summation for a few depth cues, but the effect of an increasing number of cues beyond that remains too weak to account for the observed magnitude of perceived depth. Finally, an accelerated asymptotic rule is proposed to match the empirical strength of perceived depth as measured, with appropriate behavior for any number of depth cues.
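The contrast between the combination rules discussed above can be sketched numerically. The functional forms and parameter values below are illustrative assumptions for demonstration, not the fitted rules or parameters from the paper; in particular, `asymptotic` is one plausible saturating variant, not the paper's accelerated rule.

```python
def linear_sum(cues):
    # Simple linear summation: with a dozen near-veridical cues this
    # massively overestimates the depth any single cue would predict.
    return sum(cues)

def minkowski(cues, m=2.0):
    # Minkowski (power-sum) combination: grows sub-linearly with the
    # number of cues for exponent m > 1.
    return sum(c ** m for c in cues) ** (1.0 / m)

def asymptotic(cues, d_max=2.0, m=2.0):
    # Illustrative asymptotic variant: strong summation for a few cues,
    # saturating toward d_max as cues accumulate.
    s = minkowski(cues, m)
    return d_max * s / (s + 1.0)

cues = [0.8] * 12  # a dozen equally strong, near-veridical cues
print(linear_sum(cues))   # ~9.6 -- the overestimation problem
print(minkowski(cues))    # ~2.77
print(asymptotic(cues))   # ~1.47, bounded below d_max
```

The point of the sketch is only the qualitative ordering: linear summation explodes with cue count, while the saturating rule stays bounded.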


Sensors ◽  
2019 ◽  
Vol 19 (18) ◽  
pp. 4001 ◽  
Author(s):  
Shuhe Chang ◽  
Haoyu Zhang ◽  
Haiying Xu ◽  
Xinghua Sang ◽  
Li Wang ◽  
...  

In the process of electron beam freeform fabrication (EBF3), the continuously changing thermal conditions and the variability of wire feeding cause geometric deviations in each deposited layer. To prevent these deviations from accumulating layer by layer, the geometry of each deposition layer must be measured online, so that the error of the previous layer can be compensated in the next one. However, the traditional three-dimensional reconstruction method based on a structured laser cannot meet the requirements of long-term stable operation in the EBF3 manufacturing process. This paper therefore proposes a method for measuring the deposit surface based on the position of the electron beam speckle, in which the electron beam is used to bombard the surface of the deposit and generate the speckle. Based on the structured information of the electron beam in the vacuum chamber, the three-dimensional surface of the deposited part is reconstructed without the need for an additional structured-laser sensor. To improve detection accuracy, the detection error is theoretically analyzed and compensated. The absolute error after compensation is smaller than 0.1 mm, and the precision reaches 0.1%, which satisfies the requirements of 3D reconstruction of the deposited parts. An online measurement system for the deposit surface is built for the EBF3 process, realizing online 3D reconstruction of the surface of the deposited layer. In addition, to improve the detection stability of the whole system, an image-processing algorithm suited to this scene is designed; its reliability and speed are improved by ROI extraction, threshold segmentation, and morphological dilation and erosion.
The speckle size also reflects the thermal conditions of the deposit surface, and can therefore be used for online detection of defects such as lack of fusion and voids.
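The image-processing chain named in the abstract (ROI extraction, threshold segmentation, morphological dilation/erosion) can be sketched on a toy grayscale frame. The frame, threshold value, and 3×3 neighbourhood below are illustrative assumptions, not the paper's implementation; a production system would use an image-processing library such as OpenCV.

```python
def crop_roi(img, top, left, h, w):
    # Restrict processing to a region of interest around the speckle.
    return [row[left:left + w] for row in img[top:top + h]]

def threshold(img, t):
    # Binarise: bright speckle pixels -> 1, background -> 0.
    return [[1 if px >= t else 0 for px in row] for row in img]

def _morph(img, op):
    # Apply op over each pixel's 3x3 neighbourhood (in-bounds pixels only).
    h, w = len(img), len(img[0])
    def nbhd(r, c):
        return [img[rr][cc]
                for rr in range(r - 1, r + 2)
                for cc in range(c - 1, c + 2)
                if 0 <= rr < h and 0 <= cc < w]
    return [[op(nbhd(r, c)) for c in range(w)] for r in range(h)]

def dilate(img):
    return _morph(img, max)  # grow bright regions

def erode(img):
    return _morph(img, min)  # shrink bright regions

# Toy frame: a bright speckle blob with one dark pixel inside it.
frame = [
    [0, 0, 0, 0, 0],
    [0, 9, 9, 9, 0],
    [0, 9, 0, 9, 0],
    [0, 9, 9, 9, 0],
    [0, 0, 0, 0, 0],
]
binary = threshold(crop_roi(frame, 0, 0, 5, 5), 5)
closed = erode(dilate(binary))  # dilation then erosion fills the hole
```

Dilation followed by erosion (a morphological closing) removes small dark holes inside the speckle, which is why it stabilises the downstream position estimate.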


2019 ◽  
Vol 29 ◽  
pp. 62-66
Author(s):  
Dave Payling

This article discusses the author’s visual music compositional practice in the context of similar work in this field. It specifically examines three pieces created between 2015 and 2017 that fused digital animation techniques with electronic sound. This approach contrasted with the author’s earlier compositions, which featured electroacoustic music and video concrète.


Sensors ◽  
2020 ◽  
Vol 20 (16) ◽  
pp. 4605
Author(s):  
Chien-Hsiung Chen ◽  
Meng-Xi Chen

This study examined how users acquire spatial knowledge in an onscreen three-dimensional virtual environment when using overview maps. The experiment adopted a 3 (overview-map size) × 2 (overview-map transparency) between-subjects design. Three map sizes were evaluated: 1/2, 1/8, and 1/16 of the screen. Two transparency levels, 20% and 80%, were compared. A total of 108 participants completed spatial-perception tasks and filled out questionnaires about their subjective impressions. The results indicate the following: (1) The effect of overview-map transparency on users’ spatial perception varies with map size: the 80% transparent map is significantly more efficient than the 20% transparent map at 1/2 screen size, whereas the result is reversed at 1/8 screen size. (2) Users like the 80% transparent map significantly better than the 20% transparent map at 1/2 screen size. (3) In subjective evaluations of satisfaction, preference, and system usability, overview maps at 1/8 screen size are rated significantly better than those at 1/2 screen size.


2011 ◽  
Vol 16 (1) ◽  
pp. 63-68 ◽  
Author(s):  
Aki Pasoulas

Inspired by Denis Smalley's theoretical ideas on spectromorphology and Albert Bregman's (1990) auditory scene analysis, I began an investigation into the formation and segregation of timescales in electroacoustic music. This research inevitably led me to an exploration of the factors that shape our perception of time passing and estimation of durations, where spectromorphological issues intermingle with extra-musical associations, autobiographical experiences, emotional responses, and the surrounding environment at the time of listening. Ultimately, time perception affects the structural balance of a composition. This paper, which is part of my ongoing research, examines how the perception of time is affected by the semantic meaning and the spectromorphological characteristics of sound events.


2015 ◽  
Vol 8 (7) ◽  
pp. 2329-2353 ◽  
Author(s):  
M. Rautenhaus ◽  
M. Kern ◽  
A. Schäfler ◽  
R. Westermann

Abstract. We present "Met.3D", a new open-source tool for the interactive three-dimensional (3-D) visualization of numerical ensemble weather predictions. The tool has been developed to support weather forecasting during aircraft-based atmospheric field campaigns; however, it is applicable to further forecasting, research and teaching activities. Our work approaches challenging topics related to the visual analysis of numerical atmospheric model output – 3-D visualization, ensemble visualization and how both can be used in a meaningful way suited to weather forecasting. Met.3D builds a bridge from proven 2-D visualization methods commonly used in meteorology to 3-D visualization by combining both visualization types in a 3-D context. We address the issue of spatial perception in the 3-D view and present approaches to using the ensemble to allow the user to assess forecast uncertainty. Interactivity is key to our approach. Met.3D uses modern graphics technology to achieve interactive visualization on standard consumer hardware. The tool supports forecast data from the European Centre for Medium-Range Weather Forecasts (ECMWF) and can operate directly on ECMWF hybrid sigma-pressure level grids. We describe the employed visualization algorithms, and analyse the impact of the ECMWF grid topology on computing 3-D ensemble statistical quantities. Our techniques are demonstrated with examples from the T-NAWDEX-Falcon 2012 (THORPEX – North Atlantic Waveguide and Downstream Impact Experiment) campaign.
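The hybrid sigma-pressure grids mentioned above define level pressures as a function of surface pressure, p_k = a_k + b_k · p_s, so the same model level sits at different physical pressures in different columns (and different ensemble members). A minimal sketch of that evaluation follows; the short a/b tables are made-up illustrative coefficients, not an operational ECMWF level definition.

```python
def half_level_pressure(a, b, p_s):
    # p_k = a_k + b_k * p_s; a in Pa, b dimensionless.
    # Returns pressures ordered from model top to surface.
    return [ak + bk * p_s for ak, bk in zip(a, b)]

def full_level_pressure(a, b, p_s):
    # Full (model) levels lie midway between adjacent half levels.
    half = half_level_pressure(a, b, p_s)
    return [0.5 * (h0 + h1) for h0, h1 in zip(half, half[1:])]

# Illustrative coefficients: pure pressure near the top (b = 0),
# pure sigma at the surface (a = 0, b = 1).
a = [0.0, 2000.0, 4000.0, 0.0]   # Pa
b = [0.0, 0.05, 0.3, 1.0]

print(full_level_pressure(a, b, 101325.0))
```

Because p_s varies per member, computing ensemble statistics "on a level" requires deciding whether to aggregate on model levels or on common pressure surfaces, which is the grid-topology issue the abstract refers to.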


2005 ◽  
Vol 93 (1) ◽  
pp. 620-626 ◽  
Author(s):  
Jay Hegdé ◽  
David C. Van Essen

Disparity tuning in visual cortex has been demonstrated using a variety of stimulus types that contain stereoscopic depth cues, but it is not known whether different stimuli yield similar disparity tuning curves. We studied whether cells in visual area V4 of the macaque show similar disparity tuning profiles when the same set of disparity values is tested using bars or dynamic random dot stereograms, which are among the most commonly used stimuli for this purpose. In a majority of V4 cells (61%), the shape of the disparity tuning profile differed significantly between the two stimulus types. The two sets of stimuli yielded statistically indistinguishable disparity tuning profiles for only a small minority (6%) of V4 cells. These results indicate that disparity tuning in V4 is stimulus-dependent. Given that bar stimuli contain two-dimensional (2-D) shape cues and random dot stereograms do not, our results also indicate that V4 cells represent 2-D shape and binocular disparity in an interdependent fashion, revealing an unexpected complexity in the analysis of depth and three-dimensional shape.
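One generic way to quantify how similar two tuning profiles are is to correlate the responses sampled at the same disparity values. The sketch below is an illustrative similarity measure on hypothetical firing rates, not the statistical procedure used in the study.

```python
def pearson(x, y):
    # Pearson correlation between two equal-length response vectors.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Hypothetical firing rates (spikes/s) at the same disparities (deg)
# for the two stimulus types; values are invented for illustration.
disparities = [-1.0, -0.5, 0.0, 0.5, 1.0]
bar_rates = [12.0, 25.0, 40.0, 22.0, 10.0]   # tuned near zero disparity
rds_rates = [30.0, 20.0, 15.0, 24.0, 33.0]   # markedly different profile

print(pearson(bar_rates, rds_rates))  # negative: dissimilar tuning
```

A correlation near 1 would indicate matching tuning shapes across stimulus types; values near or below 0, as in this invented pair, would indicate stimulus-dependent tuning.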

