High-Quality 3D Display for Integral Imaging Microscope Using Deep Learning Depth Estimation Algorithm

Author(s):  
Ki-Chul Kwon ◽  
Ki Hoon Kwon ◽  
Munkh-Uchral Erdenebat ◽  
Young-Tae Lim ◽  
Jong-Rae Jeong ◽  
...  
2021 ◽  
Author(s):  
Ki-Chul Kwon ◽  
Munkh-Uchral Erdenebat ◽  
Anar Khuderchuluun ◽  
Ki Hoon Kwon ◽  
Min Young Kim ◽  
...  

Biosensors ◽  
2021 ◽  
Vol 11 (2) ◽  
pp. 35
Author(s):  
Seungjae Lee ◽  
Yeongtak Song ◽  
Jongshill Lee ◽  
Jaehoon Oh ◽  
Tae Ho Lim ◽  
...  

Recently, a smart-device-based chest compression depth (CCD) feedback system was developed to help ensure that chest compressions reach adequate depth during cardiopulmonary resuscitation (CPR). However, no CCD feedback device has been developed for infants, and many existing feedback systems are inconvenient to use. In this paper, we report the development of a smart-ring-based CCD feedback device for CPR built around an inertial measurement unit, and propose a high-quality chest compression depth estimation algorithm that accounts for the orientation of the device. The performance of the proposed feedback system was evaluated against a linear variable differential transformer in three CPR situations. The experimental results showed compression depth errors of 2.0 ± 1.1, 2.2 ± 0.9, and 1.4 ± 1.1 mm in the three situations. In addition, we conducted a pilot test with adult and infant mannequins. The results show that the proposed smart-ring-based CCD feedback system is applicable to the various chest compression methods used in real CPR situations.
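The abstract does not give the algorithm itself, but the core of any IMU-based depth estimate of this kind is rotating body-frame acceleration into the world frame (using the device orientation), removing gravity, and double-integrating the vertical component. The sketch below is a minimal illustration of that idea only; the function name, the fixed rotation matrices, and the absence of drift correction are all assumptions, not the paper's method.

```python
import numpy as np

def estimate_compression_depth(acc_body, rot_mats, dt):
    """Estimate peak compression depth (m) from body-frame accelerometer
    samples: rotate each sample into the world frame with the device
    orientation, subtract gravity from the vertical axis, and
    double-integrate. A real system would also correct integration drift
    between compressions.
    """
    v = 0.0          # vertical velocity (m/s)
    d = 0.0          # vertical displacement (m)
    depths = []
    for a_body, R in zip(acc_body, rot_mats):
        a_world = R @ a_body        # orientation-aware rotation into world frame
        a_z = a_world[2] - 9.81     # remove gravity from the vertical component
        v += a_z * dt               # integrate acceleration -> velocity
        d += v * dt                 # integrate velocity -> displacement
        depths.append(d)
    return min(depths)              # most negative value = deepest compression
```

With the ring tilted on a finger, `rot_mats` would come from the IMU's orientation filter; here identity matrices stand in for a level device.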


Author(s):  
Ying Yuan ◽  
Xiaorui Wang ◽  
Yang Yang ◽  
Hang Yuan ◽  
Chao Zhang ◽  
...  

Abstract Full-chain performance characterization is essential for the optimized design of an integral imaging three-dimensional (3D) display system. In this paper, the acquisition and display of a 3D scene are treated as a complete light-field information transmission process. A full-chain performance characterization model of an integral imaging 3D display system is established, using the 3D voxel, the image depth, and the field of view of the reconstructed images as the 3D display quality evaluation indicators. Unlike most previous work based on the ideal integral imaging model, the proposed model accounts for the diffraction effect and optical aberrations of the microlens array, the sampling effect of the detector, 3D image data scaling, and the human visual system, and can therefore accurately describe the actual 3D light-field transmission and convergence characteristics. The relationships between the key parameters of an integral imaging 3D display system and the evaluation indicators are analyzed and discussed through simulation experiments. The results will aid the optimized design of high-quality integral imaging 3D display systems.
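The full-chain model itself is not reproduced in the abstract, but one of its evaluation indicators, the field of view, has a well-known ideal (paraxial) form for integral imaging: the full viewing angle is θ = 2·arctan(p / 2g), where p is the microlens pitch and g the gap between the display panel and the lens array. The sketch below computes only this ideal quantity; the paper's model additionally incorporates diffraction, aberration, detector sampling, data scaling, and the human visual system, none of which appear here.

```python
import math

def viewing_angle_deg(lens_pitch_mm, gap_mm):
    """Ideal full viewing angle of an integral-imaging display,
    theta = 2 * arctan(p / 2g), with microlens pitch p and
    panel-to-lens gap g (paraxial approximation only).
    """
    return math.degrees(2.0 * math.atan(lens_pitch_mm / (2.0 * gap_mm)))
```

For example, a 1.0 mm pitch with a 3.0 mm gap gives a viewing angle of about 18.9 degrees; shrinking the gap widens the angle, one of the parameter trade-offs the paper's simulations explore.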


2021 ◽  
Vol 2021 (1) ◽  
Author(s):  
Yanyang Guo ◽  
Hanzhou Wu ◽  
Xinpeng Zhang

Abstract Social media plays an increasingly important role in providing information and social support to users. Because content spreads easily and is difficult to trace on social networks, we are motivated to study how sensitive messages can be concealed in this channel with high confidentiality. In this paper, we design a steganographic visual-story generation model that enables users to automatically post stego status updates on social media without any direct user intervention, and use mutual-perceived joint attention (MPJA) to maintain the imperceptibility of the stego text. We demonstrate our approach on the visual storytelling (VIST) dataset and show that it yields high-quality steganographic texts. Since the proposed work realizes steganography by auto-generating visual stories using deep learning, it enables steganography to move to real-world online social networks with intelligent steganographic bots.
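The general mechanism behind generative text steganography is to let the secret bits steer the token choices of a language model: at each step, the model's ranked candidates are partitioned into bins and the next bits select the bin. The toy sketch below illustrates only this bit-embedding/extraction idea with fixed candidate lists; the function names are hypothetical, and the paper's MPJA attention and VIST-trained story generator are not modeled here.

```python
def hide_bits_in_text(bits, candidates_per_step, bits_per_step=1):
    """Embed secret bits by selecting, at each generation step, the first
    token of the candidate bin indexed by the next bits_per_step bits.
    candidates_per_step stands in for a language model's ranked outputs.
    """
    tokens = []
    n_bins = 2 ** bits_per_step
    for i, cands in enumerate(candidates_per_step):
        chunk = bits[i * bits_per_step:(i + 1) * bits_per_step]
        idx = int(chunk, 2) if chunk else 0       # bin chosen by the secret bits
        bin_size = len(cands) // n_bins
        tokens.append(cands[idx * bin_size])      # emit first token of that bin
    return tokens

def recover_bits(tokens, candidates_per_step, bits_per_step=1):
    """Invert the embedding: recover each bit chunk from the bin that
    the observed token falls into."""
    bits = ""
    n_bins = 2 ** bits_per_step
    for tok, cands in zip(tokens, candidates_per_step):
        bin_size = len(cands) // n_bins
        bits += format(cands.index(tok) // bin_size, f"0{bits_per_step}b")
    return bits
```

The receiver only needs the same candidate lists (i.e., the same model and context) to read the hidden bits back out of an apparently ordinary story.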


2007 ◽  
Author(s):  
Sang-Hyun Lee ◽  
Seung-Cheol Kim ◽  
Eun-Soo Kim

Author(s):  
L. Madhuanand ◽  
F. Nex ◽  
M. Y. Yang

Abstract. Depth is an essential component of various scene understanding tasks and of reconstructing the 3D geometry of a scene. Estimating depth from stereo images requires multiple views of the same scene, which is often not possible when exploring new environments with a UAV. To overcome this, monocular depth estimation has become a topic of interest with recent advancements in computer vision and deep learning techniques. Research in this area has largely focused on indoor scenes or outdoor scenes captured at ground level; single-image depth estimation from aerial images has been limited by the additional complexities of greater camera distance and wider area coverage with numerous occlusions. A new aerial image dataset is prepared specifically for this purpose, combining Unmanned Aerial Vehicle (UAV) images covering different regions, features, and points of view. The single-image depth estimation is based on image reconstruction techniques that use stereo images to learn to estimate depth from single images. Among the various models available for ground-level single-image depth estimation, two, 1) a Convolutional Neural Network (CNN) and 2) a Generative Adversarial Network (GAN), are used to learn depth from UAV aerial images. These models generate pixel-wise disparity images that can be converted into depth information. The generated disparity maps are evaluated for internal quality using various error metrics. The results show higher disparity ranges with smoother images generated by the CNN model, and sharper images with a smaller disparity range generated by the GAN model. The produced disparity images are converted to depth information and compared with point clouds obtained using Pix4D. The CNN model is found to perform better than the GAN and to produce depth similar to that of Pix4D. This comparison helps streamline the effort to produce depth from a single aerial image.
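The conversion from the models' pixel-wise disparity output to metric depth mentioned above follows the standard stereo relation depth = f·B / d, with focal length f in pixels and baseline B in metres. A minimal sketch, assuming calibration values are known; the function name is illustrative, not from the paper:

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Convert a disparity value (pixels) to metric depth (metres)
    via the standard stereo relation depth = f * B / d, where f is
    the focal length in pixels and B the stereo baseline in metres.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

Applied per pixel, this turns a disparity map into a depth map; note that depth falls off as 1/d, so small disparity errors on distant points (common in aerial imagery) translate into large depth errors.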

