Juggling with representations: On the information transfer between imagery, point clouds, and meshes for multi-modal semantics

2021 ◽  
Vol 176 ◽  
pp. 55-68 ◽  
Author(s):  
Dominik Laupheimer ◽  
Norbert Haala
Author(s):  
D. Laupheimer ◽  
M. H. Shams Eddin ◽  
N. Haala

Abstract. The semantic segmentation of huge amounts of acquired 3D data has become an important task in recent years. We propose a novel association mechanism that enables information transfer between two 3D representations: point clouds and meshes. The association mechanism can be used in a two-fold manner: (i) feature transfer to stabilize semantic segmentation of one representation with features from the other representation and (ii) label transfer to achieve the semantic annotation of both representations. We claim that point clouds are an intermediate product whereas meshes are a final user product that jointly provides geometrical and textural information. For this reason, we opt for semantic mesh segmentation in the first place. We apply an off-the-shelf PointNet++ to a textured urban triangle mesh as generated from LiDAR and oblique imagery. For each face within the mesh, a feature vector is computed and optionally extended by inherent LiDAR features as provided by the sensor (e.g. intensity). The feature vector extension is accomplished with the proposed association mechanism. By these means, we leverage inherent features from both data representations for the semantic mesh segmentation (multi-modality). We achieve an overall accuracy of 86.40% at the face level on a dedicated test mesh. Neglecting LiDAR-inherent features in the per-face feature vectors decreases mean intersection over union by ∼2%. Leveraging our association mechanism, we transfer predicted mesh labels to the LiDAR point cloud at a stroke. To this end, we semantically segment the point cloud by implicit usage of geometric and textural mesh features. The semantic point cloud segmentation achieves an overall accuracy close to 84% at the point level for both feature vector compositions.
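The mesh-to-point label transfer described above can be sketched as a nearest-face association: each LiDAR point inherits the predicted label of the closest mesh face. This is a minimal illustration only; the paper's actual association mechanism is more elaborate, and all function and variable names here are our own.

```python
import numpy as np

def transfer_labels(points, face_centroids, face_labels):
    """Assign each LiDAR point the label of its nearest mesh face.

    A naive nearest-centroid association, sketched for illustration;
    the published mechanism may use stricter geometric tests.
    """
    # Pairwise squared distances, shape (n_points, n_faces)
    d2 = ((points[:, None, :] - face_centroids[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)  # index of the closest face per point
    return face_labels[nearest]

# Toy example: two faces, three points
centroids = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
labels = np.array([1, 2])  # e.g. 1 = building, 2 = vegetation
pts = np.array([[0.5, 0.1, 0.0], [9.5, 0.0, 0.2], [1.0, 0.0, 0.0]])
print(transfer_labels(pts, centroids, labels))  # → [1 2 1]
```

Because the association runs over whole arrays at once, predicted mesh labels reach the entire point cloud in a single pass, matching the "at a stroke" transfer described in the abstract.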


Author(s):  
David A. Grano ◽  
Kenneth H. Downing

The retrieval of high-resolution information from images of biological crystals depends, in part, on the use of the correct photographic emulsion. We have been investigating the information transfer properties of twelve emulsions with a view toward 1) characterizing the emulsions by a few measurable quantities, and 2) identifying the "best" emulsion of those we have studied for use in any given experimental situation. Because our interests lie in the examination of crystalline specimens, we have chosen to evaluate an emulsion's signal-to-noise ratio (SNR) as a function of spatial frequency and use this as our criterion for determining the best emulsion. The signal-to-noise ratio in frequency space depends on several factors. First, the signal depends on the speed of the emulsion and its modulation transfer function (MTF). By previously outlined procedures, MTFs have been found for all the emulsions tested and can be fit by the analytic expression MTF(S) = 1/(1 + (S/S₀)²). Figure 1 shows the experimental data and fitted curve for an emulsion with a better than average MTF. A single parameter, the spatial frequency S₀ at which the transfer falls to 50%, characterizes this curve.
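The single-parameter fit has a convenient closed form: inverting MTF(S) = 1/(1 + (S/S₀)²) gives S₀ = S·√(MTF/(1 − MTF)) for each measured point, so S₀ can be estimated directly. A small sketch with synthetic data (the numbers are illustrative, not the paper's measurements):

```python
import numpy as np

def estimate_s0(freqs, mtf):
    """Estimate S0 in the model MTF(S) = 1 / (1 + (S/S0)^2).

    Each measurement yields S0 = S * sqrt(MTF / (1 - MTF));
    averaging gives a simple estimate (a proper least-squares
    fit would weight the points more carefully).
    """
    s0_per_point = freqs * np.sqrt(mtf / (1.0 - mtf))
    return s0_per_point.mean()

# Synthetic measurements generated with S0 = 40 cycles/mm
true_s0 = 40.0
freqs = np.array([10.0, 20.0, 40.0, 80.0])
mtf = 1.0 / (1.0 + (freqs / true_s0) ** 2)
print(estimate_s0(freqs, mtf))  # recovers 40.0
```

Note that at S = S₀ the model gives MTF = 0.5, which is exactly the "transfer falls to 50%" definition of the characterizing parameter.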


Author(s):  
D. Van Dyck

An (electron) microscope can be considered as a communication channel that transfers structural information between an object and an observer. In electron microscopy this information is carried by electrons. According to the theory of Shannon, the maximal information rate (or capacity) of a communication channel is given by C = B log₂(1 + S/N) bits/sec, where B is the bandwidth, and S and N are the average signal power and noise power, respectively, at the output. We will now apply this to study the information transfer in an electron microscope. For simplicity we will assume the object and the image to be one-dimensional (the results can be straightforwardly generalized). An imaging device can be characterized by its transfer function, which describes the magnitude with which a spatial frequency g is transferred through the device; n is the noise. Usually, the resolution of the instrument ρ is defined from the cut-off 1/ρ beyond which no spatial information is transferred.
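The Shannon capacity formula quoted above is easy to evaluate directly. A minimal worked example with illustrative numbers (not taken from the text):

```python
import math

def channel_capacity(bandwidth_hz, snr):
    """Shannon channel capacity C = B * log2(1 + S/N), in bits/sec."""
    return bandwidth_hz * math.log2(1.0 + snr)

# Illustrative: a 1 kHz band at a signal-to-noise ratio of 15
print(channel_capacity(1000.0, 15.0))  # → 4000.0 bits/sec
```

The example makes the trade-off concrete: capacity grows linearly with bandwidth but only logarithmically with SNR, which is why the transfer-function cut-off (the bandwidth limit) dominates the resolution ρ of the instrument.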


2009 ◽  
Vol 14 (1) ◽  
pp. 78-89 ◽  
Author(s):  
Kenneth Hugdahl ◽  
René Westerhausen

The present paper is based on a talk on hemispheric asymmetry given by Kenneth Hugdahl at the Xth European Congress of Psychology, Prague, July 2007. Here, we propose that hemispheric asymmetry evolved because of a left-hemisphere specialization for speech processing. The evolution of speech and the need for air-based communication necessitated a division of labor between the hemispheres in order to avoid having duplicate copies in both hemispheres that would increase processing redundancy. It is argued that the neuronal basis of this division of labor is the structural asymmetry observed in the peri-Sylvian region in the posterior part of the temporal lobe, with a left-larger-than-right planum temporale area. This is the only example where a structural, or anatomical, asymmetry matches a corresponding functional asymmetry. The increase in gray matter volume in the left planum temporale corresponds to a functional asymmetry of speech processing, as indexed by behavioral, dichotic listening, and functional neuroimaging studies. The functional anatomy of the corpus callosum also supports such a view, with regional specificity of information transfer between the hemispheres.


2006 ◽  
Author(s):  
Ayse P. Gurses ◽  
Yan Xiao ◽  
Paul Gorman ◽  
Brian Hazlehurst ◽  
Grant Bochicchio ◽  
...  

Author(s):  
Jiayong Yu ◽  
Longchen Ma ◽  
Maoyi Tian ◽  
Xiushan Lu

The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used in geomatics owing to its efficient data acquisition and convenient operation. However, due to the limited carrying capacity of a UAV, the sensors integrated in a ULS must be small and lightweight, which reduces the density of the collected scanning points and hinders registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds that converts the problem of registering point cloud data and image data into one of matching feature points between two images. First, a point cloud is selected to produce an intensity image. Subsequently, the corresponding feature points of the intensity image and the optical image are matched, and the exterior orientation parameters are solved using a collinearity equation based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical image, to generate a true-color point cloud. The experimental results show that the proposed method achieves high registration accuracy and fusion speed, demonstrating its effectiveness.
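The first step of the pipeline, producing an intensity image from the point cloud, can be sketched as a simple rasterization of (x, y, intensity) onto a grid. This is an illustrative simplification with our own names; the published method may interpolate or fill gaps rather than keep one value per pixel.

```python
import numpy as np

def intensity_image(points, intensities, resolution, width, height):
    """Rasterize LiDAR points onto a 2-D grid of intensity values.

    Minimal sketch of the point-cloud-to-intensity-image step: the
    grid origin is the cloud's minimum (x, y) corner and each pixel
    keeps the last intensity that falls into it.
    """
    img = np.zeros((height, width), dtype=np.float32)
    origin = points[:, :2].min(axis=0)
    cols = ((points[:, 0] - origin[0]) / resolution).astype(int)
    rows = ((points[:, 1] - origin[1]) / resolution).astype(int)
    keep = (cols >= 0) & (cols < width) & (rows >= 0) & (rows < height)
    img[rows[keep], cols[keep]] = intensities[keep]
    return img

# Toy cloud: two points about one metre apart, 1 m pixels, 2x1 image
pts = np.array([[0.0, 0.0, 5.0], [1.5, 0.5, 5.2]])
inten = np.array([10.0, 20.0])
print(intensity_image(pts, inten, resolution=1.0, width=2, height=1))
```

Once the cloud is in image form, the registration problem indeed reduces to 2-D feature matching between this intensity image and the optical image, as the abstract describes.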


2019 ◽  
Author(s):  
Sarah Puhl ◽  
Torben Steenbock ◽  
Carmen Herrmann ◽  
Jürgen Heck

Pinching molecules via chemical strain suggests intuitive consequences, such as compression at the pinched site and clothespin-like opening of other parts of the structure. If this opening affects two spin centers, it should result in reduced communication between them. We show that for a naphthalene-bridged biscobaltocene with competing through-space and through-bond pathways, the consequences of pinching are far less intuitive: despite the known dominance of through-space interactions, the bridge plays a much larger role in exchange spin coupling than previously assumed. Based on a combination of chemical synthesis, structural, magnetic, and redox characterization, and a newly developed first-principles theoretical pathways analysis, we suggest a comprehensive explanation for this nonintuitive behavior. These results are of interest for molecular spintronics, as naphthalene-linked cobaltocenes can form wires on surfaces for potential spin-only information transfer.

