The Veiled Virgin illustrates visual segmentation of shape by cause

2020 ◽  
Vol 117 (21) ◽  
pp. 11735-11743 ◽  
Author(s):  
Flip Phillips ◽  
Roland W. Fleming

Three-dimensional (3D) shape perception is one of the most important functions of vision. It is crucial for many tasks, from object recognition to tool use, and yet how the brain represents shape remains poorly understood. Most theories focus on purely geometrical computations (e.g., estimating depths, curvatures, symmetries). Here, however, we find that shape perception also involves sophisticated inferences that parse shapes into features with distinct causal origins. Inspired by marble sculptures such as Strazza’s The Veiled Virgin (1850), which vividly depict figures swathed in cloth, we created composite shapes by wrapping unfamiliar forms in textile, so that the observable surface relief was the result of complex interactions between the underlying object and overlying fabric. Making sense of such structures requires segmenting the shape based on its causes, to distinguish whether lumps and ridges are due to the shrouded object or to the ripples and folds of the overlying cloth. Three-dimensional scans of the objects with and without the textile provided ground-truth measures of the true physical surface reliefs, against which observers’ judgments could be compared. In a virtual painting task, participants indicated which surface ridges appeared to be caused by the hidden object and which were due to the drapery. In another experiment, participants indicated the perceived depth profile of both surface layers. Their responses reveal that they can robustly distinguish features belonging to the textile from those due to the underlying object. Together, these findings reveal the operation of visual shape-segmentation processes that parse shapes based on their causal origin.

2018 ◽  
Vol 102 (10) ◽  
pp. 1413-1418 ◽  
Author(s):  
Hiromasa Sawamura ◽  
Céline R Gillebert ◽  
James T Todd ◽  
Guy A Orban

Background/aims To evaluate the perception of three-dimensional (3D) shape in patients with strabismus and the contributions of stereopsis and monocular cues to this perception. Methods Twenty-one patients with strabismus with and 20 without stereo acuity, as well as 25 age-matched normal volunteers, performed two tasks: (1) identifying the closest vertices of 3D shapes from monocular shading (3D-SfS), texture (3D-SfT) or motion cues (3D-SfM) and from binocular disparity (3D-SfD); (2) discriminating 1D elementary features of these cues. Results Discrimination of the elementary features of luminance, texture and motion did not differ across groups. When the distances between reported and actual closest vertices were resolved into sagittal and frontoparallel plane components, sagittal components in 3D-SfS and frontoparallel components in 3D-SfT indicated larger errors in patients with strabismus without stereo acuity than in normal subjects. These patients could not discriminate one-dimensional elementary features of binocular disparity. Patients with strabismus with stereo acuity performed worse for both components of 3D-SfD and frontoparallel components of 3D-SfT compared with normal subjects. No differences were observed in the perception of 3D-SfM across groups. A comparison between normal subjects and patients with strabismus with normal stereopsis revealed no deficit in 3D shape perception from any cue. Conclusions Binocular stereopsis is essential for fine perception of 3D shape, even when 3D shape is defined by monocular static cues. Interaction between these cues may occur in ventral occipitotemporal regions, where 3D-SfS, 3D-SfT and 3D-SfD are processed in the same or neighbouring cortical regions. Our findings demonstrate the perceptual benefit of binocular stereopsis in patients with strabismus.


2021 ◽  
Vol 8 (1) ◽  
pp. 205395172110135
Author(s):  
Florian Jaton

This theoretical paper considers the morality of machine learning algorithms and systems in the light of the biases that ground their correctness. It begins by presenting biases not as a priori negative entities but as contingent external referents—often gathered in benchmarked repositories called ground-truth datasets—that define what needs to be learned and allow for performance measures. I then argue that ground-truth datasets and their concomitant practices—that fundamentally involve establishing biases to enable learning procedures—can be described by their respective morality, here defined as the more or less accounted experience of hesitation when faced with what pragmatist philosopher William James called “genuine options”—that is, choices to be made in the heat of the moment that engage different possible futures. I then stress three constitutive dimensions of this pragmatist morality, as far as ground-truthing practices are concerned: (I) the definition of the problem to be solved (problematization), (II) the identification of the data to be collected and set up (databasing), and (III) the qualification of the targets to be learned (labeling). I finally suggest that this three-dimensional conceptual space can be used to map machine learning algorithmic projects in terms of the morality of their respective and constitutive ground-truthing practices. Such techno-moral graphs may, in turn, serve as equipment for greater governance of machine learning algorithms and systems.


2002 ◽  
Vol 457 ◽  
pp. 157-180 ◽  
Author(s):  
TURGUT SARPKAYA

The instabilities in a sinusoidally oscillating non-separated flow over smooth circular cylinders in the range of Keulegan–Carpenter numbers, K, from about 0.02 to 1 and Stokes numbers, β, from about 10³ to 1.4 × 10⁶ have been observed from inception to chaos using several high-speed imagers and laser-induced fluorescence. The instabilities ranged from small quasi-coherent structures, as in Stokes flow over a flat wall (Sarpkaya 1993), to three-dimensional spanwise perturbations caused by the centrifugal forces induced by the curvature of the boundary layer (Taylor–Görtler instability). These gave rise to streamwise-oriented counter-rotating vortices or mushroom-shaped coherent structures as K approached the critical values K_h theoretically predicted by Hall (1984). Further increases in K for a given β led first to complex interactions between the coherent structures and then to chaotic motion. Mapping the observations led to the delineation of four states of flow in the (K, β)-plane: stable, marginal, unstable, and chaotic.
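For orientation, the two governing parameters have standard definitions in terms of the velocity amplitude U_m and period T of the oscillatory flow and the cylinder diameter D. The following minimal Python sketch uses those textbook definitions; the numerical values are illustrative placeholders, not data from the paper.

```python
# Minimal sketch: standard definitions of the Keulegan-Carpenter number K and
# the Stokes number beta for sinusoidally oscillating flow past a circular
# cylinder. Example values below are illustrative, not from the study.

def keulegan_carpenter(u_m: float, period: float, diameter: float) -> float:
    """K = U_m * T / D: fluid excursion amplitude relative to cylinder diameter."""
    return u_m * period / diameter

def stokes_number(diameter: float, period: float, nu: float) -> float:
    """beta = D**2 / (nu * T), equivalently Re / K."""
    return diameter**2 / (nu * period)

if __name__ == "__main__":
    D, T, U_m, nu = 0.05, 2.0, 0.01, 1.0e-6  # m, s, m/s, m^2/s (water, illustrative)
    K = keulegan_carpenter(U_m, T, D)
    beta = stokes_number(D, T, nu)
    print(f"K = {K:.2f}, beta = {beta:.0f}, Re = K * beta = {K * beta:.0f}")
```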


2018 ◽  
Vol 618 ◽  
pp. A87 ◽  
Author(s):  
E. Khomenko ◽  
N. Vitas ◽  
M. Collados ◽  
A. de Vicente

In recent decades, realistic three-dimensional radiative-magnetohydrodynamic simulations have become the dominant theoretical tool for understanding the complex interactions between the plasma and the magnetic field on the Sun. Most such simulations are based on approximations of magnetohydrodynamics that do not directly consider the consequences of the very low degree of ionization of the solar plasma in the photosphere and lower chromosphere. The presence of a large number of neutrals leads to a partial decoupling of the plasma and the magnetic field. As a consequence, a series of non-ideal effects arises: ambipolar diffusion, the Hall effect, and the battery effect. Ambipolar diffusion is the dominant effect in the solar chromosphere. We report on the first three-dimensional realistic simulations of magneto-convection including ambipolar diffusion and battery effects. The simulations are carried out using the newly developed MANCHA3D code. Our results reveal that ambipolar diffusion causes measurable effects on the amplitudes of waves excited by convection in the simulations, on the absorption of Poynting flux and the associated heating, and on the formation of chromospheric structures. We provide a lower limit on the chromospheric temperature increase owing to the ambipolar effect using the simulations with battery-excited dynamo fields.
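Schematically, the non-ideal effects named above enter through a generalized Ohm's law in the induction equation. The display below is an outline of the commonly used single-fluid form (sign and unit conventions vary between formulations, so this should be read as a sketch rather than the exact equations solved by MANCHA3D): the Ohmic term involves η, the Hall term scales with J × B, the ambipolar term with (J × B) × B, and the battery term with the electron pressure gradient.

```latex
% Generalized induction equation with Ohmic, Hall, ambipolar, and battery terms
% (schematic single-fluid form; conventions differ between implementations).
\[
\frac{\partial \mathbf{B}}{\partial t} = \nabla \times \left[
  \mathbf{v} \times \mathbf{B}
  - \eta \, \mathbf{J}
  - \eta_{H} \, \frac{\mathbf{J} \times \mathbf{B}}{|\mathbf{B}|}
  + \eta_{A} \, \frac{(\mathbf{J} \times \mathbf{B}) \times \mathbf{B}}{|\mathbf{B}|^{2}}
  + \frac{\nabla p_{e}}{e\, n_{e}}
\right]
\]
```

Since (J × B) × B removes the current component perpendicular to B, the ambipolar term acts as an anisotropic diffusion of perpendicular currents, which is why it can dissipate Poynting flux and heat the weakly ionized chromosphere.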


2022 ◽  
Vol 41 (1) ◽  
pp. 1-17
Author(s):  
Xin Chen ◽  
Anqi Pang ◽  
Wei Yang ◽  
Peihao Wang ◽  
Lan Xu ◽  
...  

In this article, we present TightCap, a data-driven scheme to capture both the human shape and the dressed garments accurately from only a single three-dimensional (3D) human scan, which enables numerous applications such as virtual try-on, biometrics, and body evaluation. To handle the severe variations of human poses and garments, we propose to model the clothing tightness field (the displacements from the garments to the underlying human shape) implicitly in the global UV texturing domain. To this end, we utilize an enhanced statistical human template and an effective multi-stage alignment scheme to map the 3D scan into a hybrid 2D geometry image. Based on this 2D representation, we propose a novel framework to predict the clothing tightness field via a novel tightness formulation, as well as an effective optimization scheme to further reconstruct the multi-layer human shape and garments under various clothing categories and human postures. We further propose a new clothing tightness dataset of human scans with a large variety of clothing styles, poses, and corresponding ground-truth human shapes to stimulate further research. Extensive experiments demonstrate the effectiveness of TightCap in achieving high-quality reconstruction of human shape and dressed garments, as well as further applications in clothing segmentation, retargeting, and animation.
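As a purely schematic illustration of the tightness-field idea (not the authors' pipeline; array names, shapes, and data below are hypothetical), the field can be pictured as a per-texel 3D displacement stored in the shared UV domain, so that applying it to the scanned garment geometry image yields an estimate of the underlying body surface:

```python
import numpy as np

# Schematic illustration of a "clothing tightness field": per-texel 3D
# displacements from the garment surface to the underlying body, stored in a
# shared UV parameterization. All values here are hypothetical placeholders;
# in the paper the field is predicted by a learned model and refined by
# optimization rather than given directly.

H, W = 256, 256                                    # UV resolution (hypothetical)
garment_geom = np.random.rand(H, W, 3)             # garment surface position per texel
tightness_field = 0.01 * np.random.rand(H, W, 3)   # garment-to-body displacement per texel

# Recovering a body-surface estimate amounts to applying the displacement
# field texel-wise in the UV domain.
body_estimate = garment_geom + tightness_field
print(body_estimate.shape)  # (256, 256, 3)
```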


10.2196/21105 ◽  
2021 ◽  
Vol 6 (1) ◽  
pp. e21105
Author(s):  
Arpita Mallikarjuna Kappattanavar ◽  
Nico Steckhan ◽  
Jan Philipp Sachs ◽  
Harry Freitas da Cruz ◽  
Erwin Böttinger ◽  
...  

Background A majority of employees in the industrial world spend most of their working time in a seated position. Monitoring sitting postures can provide insights into the underlying causes of occupational discomforts such as low back pain. Objective This study focuses on the technologies and algorithms used to classify sitting postures on a chair with respect to spine and limb movements. Methods A total of three electronic literature databases were surveyed to identify studies classifying sitting postures in adults. Quality appraisal was performed to extract critical details and assess biases in the shortlisted papers. Results A total of 14 papers were shortlisted from the 952 papers obtained in a systematic search. The majority of the studies used pressure sensors to measure sitting postures, whereas neural networks were the most frequently used classification approach in this context. Only 2 studies were performed in a free-living environment. Most studies presented ethical and methodological shortcomings. Moreover, the findings indicate that strategic placement of sensors can lead to better performance and lower costs. Conclusions The included studies differed in various aspects of design and analysis. The majority of studies were rated as medium quality according to our assessment. Our study suggests that future work on posture classification can benefit from using inertial measurement unit sensors, since they make it possible to differentiate between spine movements and similar postures; from considering transitional movements between postures; and from using three-dimensional cameras to annotate the data for ground truth. Finally, comparing such studies is challenging, as there are no standard definitions of sitting postures that could be used for classification. In addition, this study identifies five basic sitting postures, along with different combinations of limb and spine movements, to help guide future research efforts.
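To make the recommended IMU-based direction concrete, here is a minimal, hypothetical classification sketch: windowed features from trunk-worn inertial measurement units fed to an off-the-shelf classifier. The feature layout, posture labels, and data are synthetic placeholders, not drawn from any of the reviewed studies.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical sketch of IMU-based sitting-posture classification.
# Features and labels are synthetic placeholders, not data from the review.
rng = np.random.default_rng(0)
n_windows, n_features = 500, 12      # e.g., mean/std of accel and gyro per axis
X = rng.normal(size=(n_windows, n_features))
postures = ["upright", "slouched", "leaning_left", "leaning_right", "reclined"]
y = rng.integers(0, len(postures), size=n_windows)   # integer posture labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy on synthetic data:", clf.score(X_test, y_test))
```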


2020 ◽  
Author(s):  
Jiji Chen ◽  
Hideki Sasaki ◽  
Hoyin Lai ◽  
Yijun Su ◽  
Jiamin Liu ◽  
...  

Abstract We demonstrate residual channel attention networks (RCAN) for restoring and enhancing volumetric time-lapse (4D) fluorescence microscopy data. First, we modify RCAN to handle image volumes, showing that our network enables denoising competitive with three other state-of-the-art neural networks. We use RCAN to restore noisy 4D super-resolution data, enabling image capture over tens of thousands of images (thousands of volumes) without apparent photobleaching. Second, using simulations we show that RCAN enables class-leading resolution enhancement, superior to other networks. Third, we exploit RCAN for denoising and resolution improvement in confocal microscopy, enabling ~2.5-fold lateral resolution enhancement using stimulated emission depletion (STED) microscopy ground truth. Fourth, we develop methods to improve spatial resolution in structured illumination microscopy using expansion microscopy ground truth, achieving improvements of ~1.4-fold laterally and ~3.4-fold axially. Finally, we characterize the limits of denoising and resolution enhancement, suggesting practical benchmarks for evaluating and further enhancing network performance.
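For orientation, the core building block of a residual channel attention network is the channel-attention (squeeze-and-excitation style) unit inside a residual block. The PyTorch sketch below shows a 3D variant in the spirit of the volumetric adaptation described above; layer widths are illustrative and this is not the authors' implementation.

```python
import torch
import torch.nn as nn

class ChannelAttention3D(nn.Module):
    """Channel attention for volumetric features: global average pooling,
    a bottleneck of 1x1x1 convolutions, and a sigmoid gate that rescales
    each channel. Illustrative sketch only."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)          # global spatial average per channel
        self.fc = nn.Sequential(
            nn.Conv3d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.fc(self.pool(x))             # per-channel rescaling

class ResidualChannelAttentionBlock3D(nn.Module):
    """Conv -> ReLU -> Conv -> channel attention, with a residual connection."""
    def __init__(self, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            ChannelAttention3D(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)

# Example: apply one block to a batch of feature volumes (N, C, D, H, W).
feat = torch.randn(1, 32, 16, 64, 64)
print(ResidualChannelAttentionBlock3D(32)(feat).shape)
```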


2017 ◽  
Vol 10 (3) ◽  
pp. 285-289 ◽  
Author(s):  
Katrina L Ruedinger ◽  
David R Rutkowski ◽  
Sebastian Schafer ◽  
Alejandro Roldán-Alzate ◽  
Erick L Oberstar ◽  
...  

Background and purpose Safe and effective use of newly developed devices for aneurysm treatment requires the ability to make accurate measurements in the angiographic suite. Our purpose was to determine the parameters that optimize the geometric accuracy of three-dimensional (3D) vascular reconstructions. Methods An in vitro flow model consisting of a peristaltic pump, plastic tubing, and 3D printed patient-specific aneurysm models was used to simulate blood flow in an intracranial aneurysm. Flow rates were adjusted to match values reported in the literature for the internal carotid artery. 3D digital subtraction angiography acquisitions were obtained using a commercially available biplane angiographic system. Reconstructions were done using Edge Enhancement (EE) or Hounsfield Unit (HU) kernels and a Normal or Smooth image characteristic. Reconstructed images were analyzed using the vendor's aneurysm analysis tool. Ground-truth measurements were derived from metrological scans of the models with a microCT. Aneurysm volume, surface area, dome height, and minimum and maximum ostium diameters were determined for the five models. Results In all cases, measurements made with the EE kernel most closely matched ground-truth values. Differences in values derived from reconstructions displayed with Smooth or Normal image characteristics were small and had only a minor impact on the geometric parameters considered. Conclusions Reconstruction parameters affect the accuracy of measurements made using the aneurysm analysis tool of a commercially available angiographic system. Absolute differences between measurements made using the reconstruction parameters determined as optimal in this study were, overall, very small. The significance of these differences, if any, will depend on the details of each individual case.

