Guided Deep Decoder: Unsupervised Image Pair Fusion

Author(s):  
Tatsumi Uezato
Danfeng Hong
Naoto Yokoya
Wei He
2021
Author(s):  
Gerald Eichstädt
John Rogers
Glenn Orton
Candice Hansen

<p>We derive Jupiter's zonal vorticity profile from JunoCam images; Juno's polar orbit allows the observation of latitudes that are difficult to observe from Earth or from equatorial flybys. Based on analyzing selected JunoCam image pairs taken during the 16 Juno perijove flybys 15-30, we identify cyclonic local vorticity maxima near 77.9°, 65.6°, 59.3°, 50.9°, 42.4°, and 34.3°S planetocentric at a resolution of ~1°. We identify zonal anticyclonic local vorticity maxima near 80.7°, 73.8°, 62.1°, 56.4°, 46.9°, 38.0°, and 30.7°S. These results agree with the known zonal wind profile below 64°S and reveal novel structure further south, including a prominent cyclonic band centered near 66°S. The anticyclonic vorticity maximum near 73.8°S represents a broad, skewed, fluctuating anticyclonic band between ~69.0° and ~76.5°S and is hence poorly defined; this band may even split temporarily into two or three bands. The cyclonic vorticity maximum near 77.9°S appears to be fairly stable during these flybys, probably representing irregular cyclonic structures in the region. The area between ~82° and 90°S is small and close to the terminator, resulting in poor statistics, but generally shows a strongly cyclonic mean vorticity, representing the well-known circumpolar cyclone cluster.</p>
<p>The latitude range between ~30°S and ~85°S was particularly well observed, with observation periods lasting several hours. For each perijove considered, we selected a pair of images separated by about 30-60 minutes. We derived high-pass-filtered and contrast-normalized south polar equidistant azimuthal maps of Jupiter's cloud tops. These were used to derive maps of local rotation at a resolution of ~1° latitude by stereo-corresponding Monte-Carlo-distributed, Gauss-weighted round tiles for each image pair considered. Only the rotation portion of the stereo correspondence between tiles was used to sample the vorticity maps. For each image pair, we rendered ~40 vorticity maps with different Monte-Carlo runs. The standard deviation of the resulting statistics provided a criterion for defining a valid area of the mean vorticity map. Averaging vorticities along circles centered on the south pole returned a zonal vorticity profile for each of the perijoves considered. Averaging the resulting zonal vorticity profiles formed the basis for a discussion of the mean profile.</p>
<p>JunoCam also images the northern hemisphere, at higher resolution but with coverage restricted to a briefer time span and a smaller area due to the nature of Juno's elliptical orbit, which will limit our ability to obtain zonal vorticity profiles there.</p>
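The final averaging step — binning a polar-projection vorticity map by distance from the pole, which in an equidistant azimuthal projection is proportional to colatitude — can be sketched as below. The function name, the pixel scale `deg_per_px`, and the ~1° bin width are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def zonal_profile(vort_map, valid_mask, deg_per_px, lat_bin_deg=1.0):
    """Average a south-polar equidistant azimuthal vorticity map along
    circles centered on the pole (the array center), one mean per
    colatitude bin. Pixels outside valid_mask are ignored."""
    ny, nx = vort_map.shape
    y, x = np.indices((ny, nx))
    # Equidistant azimuthal: radius in pixels is proportional to
    # angular distance from the pole (colatitude).
    r_deg = np.hypot(y - (ny - 1) / 2, x - (nx - 1) / 2) * deg_per_px
    nbins = int(np.ceil(r_deg.max() / lat_bin_deg))
    profile = np.full(nbins, np.nan)
    for i in range(nbins):
        sel = valid_mask & (r_deg >= i * lat_bin_deg) & (r_deg < (i + 1) * lat_bin_deg)
        if sel.any():
            profile[i] = vort_map[sel].mean()
    return profile
```

Per-perijove profiles produced this way could then simply be stacked and averaged to discuss a mean profile, as the abstract describes.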


2013
Vol 12 (1)
pp. 30-43
Author(s):  
Bruno Eduardo Madeira
Luiz Velho

We describe a new architecture composed of software and hardware for displaying stereoscopic images over a horizontal surface. It works as a "Virtual Table and Teleporter," in the sense that virtual objects depicted over a table have the appearance of real objects. The system can be used for visualization and interaction. We propose two basic configurations: the Virtual Table, consisting of a single display surface, and the Virtual Teleporter, consisting of a pair of tables for image capture and display. The Virtual Table displays either 3D computer-generated images or previously captured stereoscopic video and can be used for interactive applications. The Virtual Teleporter captures and transmits stereoscopic video from one table to the other and can be used for telepresence applications. In both configurations the images are properly deformed and displayed for horizontal 3D stereo. In the Virtual Teleporter, two cameras are pointed at the first table, capturing a stereoscopic image pair; these images are shown on the second table, which is, in fact, a stereoscopic display positioned horizontally. Many applications can benefit from this technology, such as virtual reality, games, teleconferencing, and distance learning. We present some interactive applications developed using this architecture.
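The "properly deformed" step amounts to a projective warp between the capture plane and the horizontal display plane. As a minimal sketch — not the authors' implementation — such a warp can be estimated from four point correspondences with the standard direct linear transform:

```python
import numpy as np

def homography_from_points(src, dst):
    """Direct Linear Transform: fit the 3x3 projective matrix H that
    maps each src[i] (capture plane) onto dst[i] (display plane),
    given four point correspondences in general position."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A, i.e. the right singular
    # vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 3)

def warp_point(H, p):
    """Apply H to a 2D point in homogeneous coordinates."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```

In practice the whole image would be resampled with this warp (e.g. a `warpPerspective`-style routine) rather than mapping points one by one.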


Author(s):  
Shaohua Kevin Zhou
Jie Shao
Bogdan Georgescu
Dorin Comaniciu

Motion estimation necessitates an appropriate choice of similarity function. Because generic similarity functions derived from simple assumptions are insufficient to model complex yet structured appearance variations in motion estimation, the authors propose to learn a discriminative similarity function that matches images under varying appearances by casting image matching as a binary classification problem. They use the LogitBoost algorithm to learn the classifier from an annotated database that exemplifies the structured appearance variations: an image pair in correspondence is positive, and an image pair out of correspondence is negative. To leverage the additional distance structure of the negatives, they present a location-sensitive cascade training procedure that bootstraps negatives for later stages of the cascade from regions closer to the positives, which enables viewing a large number of negatives and steers the training process toward lower training and test errors. The authors apply the learned similarity function to estimating the motion of the endocardial wall of the left ventricle in echocardiography and to visual tracking, obtaining improved performance compared with conventional similarity functions.
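As a rough illustration of casting matching as binary classification, the sketch below trains a plain logistic-regression classifier on a few hand-crafted pair features. This is a deliberately simplified stand-in: the paper uses LogitBoost with a location-sensitive cascade and a much richer feature set, and all names here are hypothetical.

```python
import numpy as np

def pair_features(a, b):
    """Toy features for an image-patch pair: difference statistics and
    an (unnormalized) covariance term. Purely illustrative."""
    d = np.abs(a - b)
    return np.array([d.mean(), d.max(), ((a - a.mean()) * (b - b.mean())).mean()])

def train_logistic(X, y, lr=0.5, steps=500):
    """Gradient-descent logistic regression: positives (y=1) are pairs
    in correspondence, negatives (y=0) are pairs out of correspondence."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def similarity(w, a, b):
    """Learned similarity: classifier confidence that (a, b) match."""
    f = np.append(pair_features(a, b), 1.0)
    return 1.0 / (1.0 + np.exp(-f @ w))
```

Training on synthetic positives (a patch versus a lightly perturbed copy) and negatives (a patch versus an unrelated patch) yields a function that scores matching pairs higher than non-matching ones, which is the essence of the discriminative-similarity idea.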

