Real-Time 3D Depth Generation for Stereoscopic Video Applications with Thread-Level Superscalar-Pipeline Parallelization
2012, Vol 72 (1), pp. 17-33
Author(s): Guo-An Jian, Cheng-An Chien, Peng-Sheng Chen, Jiun-In Guo
Urology, 2009, Vol 73 (4), pp. 896-900
Author(s): Li-Ming Su, Balazs P. Vagvolgyi, Rahul Agarwal, Carol E. Reiley, Russell H. Taylor, ...

2013, Vol 18 (5), pp. 680-690
Author(s): Giseok Kim, Jae-Soo Cho, Gwangsoon Lee, Eung-Don Lee

Author(s): Athanasios Kordelas, Ilias Politis, Asimakis Lykourgiotis, Tasos Dagiuklas, Stavros Kotsopoulos

2016
Author(s): Edalat Radfar, Jihoon Park, Sangyeob Lee, Myungjin Ha, Sungkon Yu, ...

2013, Vol 401-403, pp. 1834-1838
Author(s): Jia Xi Yu, Wen Hui Zhang

In this paper, a design of an FPGA-based 3D-TV horizontal-parallax acquisition system is presented. The system receives stereoscopic video through an HD-SDI receiver (GS2971) and outputs a horizontal-parallax video to a digital TV through an HDMI transmitter (SiI9134). Within the system, the FPGA performs the core task of converting the stereoscopic video into the horizontal-parallax video, while a microcontroller serves as the control center of the entire system. The system extracts the horizontal parallax of the stereoscopic video in real time, helping stereoscopic program producers control the horizontal parallax of 3D programs.
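The horizontal-parallax extraction that the FPGA performs in hardware can be illustrated in software. The following is a minimal sketch, assuming a side-by-side stereoscopic input frame and using OpenCV block-matching stereo correspondence; the GS2971/SiI9134 interfacing and the microcontroller logic are not modeled, and all parameter values are illustrative assumptions rather than the paper's design.

```python
# Minimal software sketch of horizontal-parallax (disparity) extraction
# from a side-by-side stereoscopic frame. Illustrative only: the described
# system does this in FPGA hardware; parameters below are assumptions.
import cv2
import numpy as np

def parallax_map(side_by_side_bgr):
    """Split a side-by-side stereo frame and compute a disparity map."""
    h, w = side_by_side_bgr.shape[:2]
    left = cv2.cvtColor(side_by_side_bgr[:, : w // 2], cv2.COLOR_BGR2GRAY)
    right = cv2.cvtColor(side_by_side_bgr[:, w // 2 :], cv2.COLOR_BGR2GRAY)

    # Block-matching stereo correspondence (numDisparities must be a
    # multiple of 16; blockSize must be odd). Values here are assumed.
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0

    # Scale to 8-bit so the parallax map can be shown on a display,
    # analogous to the HDMI output path in the described system.
    return cv2.normalize(disparity, None, 0, 255,
                         cv2.NORM_MINMAX).astype(np.uint8)

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)          # assumed side-by-side stereo source
    ok, frame = cap.read()
    if ok:
        cv2.imwrite("parallax.png", parallax_map(frame))
    cap.release()
```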


Machine vision systems play a significant role in visual monitoring. With the help of stereo vision and machine learning, such systems can mimic a human-like visual system and its behaviour toward the environment. In this paper, we present a stereo-vision-based 3-DOF robot for monitoring remote locations through a cloud server and generic internet devices. The 3-DOF robot reproduces human-like head movements (yaw, pitch, and roll), produces 3D stereoscopic video, and streams it in real time. The video stream is delivered to the user through any generic internet device with VR-box support, such as a smartphone, giving the user a first-person real-time 3D experience, while the user's head motion is transferred back to the robot, also in real time. The robot can also track moving objects and faces as targets using deep neural networks, enabling it to act as a standalone monitoring robot, and the user can choose specific subjects to monitor in a space. Stereo vision provides depth information for the detected objects, which is used to track objects of interest along with their distances and send this information to the cloud; a sketch of the disparity-to-distance step follows. A full working prototype demonstrates the capabilities of a monitoring system based on stereo vision, robotics, and machine learning.
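The disparity-to-distance step mentioned above can be sketched as follows, assuming a calibrated stereo rig with known focal length and baseline. The focal length, baseline, bounding box, and helper function here are hypothetical illustrative choices, not the prototype's actual parameters or code.

```python
# Sketch of converting stereo disparity to metric object distance for a
# tracked detection, assuming a calibrated stereo rig. Focal length,
# baseline, and the bounding box below are hypothetical values.
import numpy as np

FOCAL_PX = 700.0      # assumed focal length in pixels
BASELINE_M = 0.06     # assumed baseline between the two cameras, in meters

def object_distance(disparity_map, bbox):
    """Estimate the metric distance of a detected object.

    disparity_map: per-pixel disparity in pixels (float array)
    bbox: (x, y, w, h) bounding box from an object/face detector
    """
    x, y, w, h = bbox
    patch = disparity_map[y:y + h, x:x + w]
    valid = patch[patch > 0]            # ignore pixels with no stereo match
    if valid.size == 0:
        return None
    d = np.median(valid)                # robust disparity for the object
    return FOCAL_PX * BASELINE_M / d    # Z = f * B / d

# Example: a detection at (120, 80) of size 60x60 in a synthetic map
demo_map = np.full((480, 640), 12.0, dtype=np.float32)
print(object_distance(demo_map, (120, 80, 60, 60)))  # ~3.5 m
```

In practice the per-object distance returned here would be attached to the detector's track ID before being pushed to the cloud, which is the role the depth information plays in the described system.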

