Visual-Feedback-Based Frame-by-Frame Synchronization for 3000 fps Projector–Camera Visual Light Communication

Electronics ◽  
2021 ◽  
Vol 10 (14) ◽  
pp. 1631
Author(s):  
Atul Sharma ◽  
Sushil Raut ◽  
Kohei Shimasaki ◽  
Taku Senoo ◽  
Idaku Ishii

This paper proposes a novel method for synchronizing a high-frame-rate (HFR) camera with an HFR projector, using a visual-feedback-based synchronization algorithm for streaming video sequences in real time on a visible-light-communication (VLC)-based system. The frame rates of the camera and projector are equal, and their phases are synchronized. The visual-feedback-based algorithm avoids the complexity and stability issues of wire-based triggering in long-distance systems. The HFR projector projects a binary pattern modulated at 3000 fps. The HFR camera system also operates at 3000 fps; it captures the pattern and generates a delay signal applied to the next camera clock cycle so that the camera's phase matches that of the HFR projector. To test the synchronization performance, we used an HFR projector–camera-based VLC system in which the proposed algorithm maximizes bandwidth utilization of the system's high-throughput transmission ability and efficiently reduces data redundancy. The transmitter of the VLC system encodes the input video sequence into gray code, which is projected via high-definition multimedia interface (HDMI) streaming as 590 × 1060 binary images. At the receiver, a monochrome HFR camera simultaneously captures and decodes 12-bit 512 × 512 images in real time and reconstructs a color video sequence at 60 fps. The efficiency of the visual-feedback-based synchronization algorithm was evaluated by streaming offline and live video sequences through VLC systems with single and dual projectors, the latter providing a multiple-projector configuration. The results show that the 3000 fps camera was successfully synchronized with a 3000 fps single-projector system and a 1500 fps dual-projector system. It was confirmed that the synchronization algorithm can also be applied to VLC systems in autonomous-vehicle and surveillance applications.
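
The core of the feedback loop, nudging the camera's trigger phase until the captured pattern is sharpest, can be illustrated with a toy simulation. This is a minimal sketch: the contrast model, gain schedule, and all names are assumptions, not the authors' implementation.

```python
# Toy model of visual-feedback phase locking at 3000 fps. When the camera
# exposure straddles two projected patterns, the captured image is a blend
# and its contrast drops; the loop shifts the next trigger to restore it.
FRAME_PERIOD_US = 1e6 / 3000.0  # ~333.3 us per frame at 3000 fps

def contrast(camera_phase_us, projector_phase_us):
    # Assumed blending model: contrast falls linearly with phase mismatch,
    # peaking at 1.0 when camera and projector are perfectly in phase.
    d = abs(camera_phase_us - projector_phase_us) % FRAME_PERIOD_US
    d = min(d, FRAME_PERIOD_US - d)
    return 1.0 - 2.0 * d / FRAME_PERIOD_US

def synchronize(camera_phase_us, projector_phase_us, step_us=10.0, iters=200):
    # Hill-climb on the measured contrast: keep stepping while contrast
    # improves; halve and reverse the step on overshoot.
    direction, best = 1.0, contrast(camera_phase_us, projector_phase_us)
    for _ in range(iters):
        trial = camera_phase_us + direction * step_us
        c = contrast(trial, projector_phase_us)
        if c >= best:
            camera_phase_us, best = trial, c
        else:
            direction, step_us = -direction, step_us / 2.0
    return camera_phase_us, best
```

In the real system the delay signal is applied to the camera clock in hardware; the hill-climb here merely stands in for whatever phase-error estimator drives that delay.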

Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5368
Author(s):  
Atul Sharma ◽  
Sushil Raut ◽  
Kohei Shimasaki ◽  
Taku Senoo ◽  
Idaku Ishii

This study develops a projector–camera-based visible light communication (VLC) system for real-time broadband video streaming, in which a high-frame-rate (HFR) projector encodes and projects a color input video sequence as binary image patterns modulated at thousands of frames per second, and an HFR vision system captures and decodes these binary patterns back into the input color video sequence with real-time video processing. For maximum utilization of the high-throughput transmission ability of the HFR projector, we introduce a projector–camera VLC protocol wherein a multi-level color video sequence is binary-modulated with a gray code for encoding and decoding instead of pure-binary-code modulation. Gray code encoding is introduced to address the ambiguity caused by mismatched pixel alignment along intensity gradients between the projector and the vision system. Our proposed VLC system consists of an HFR projector, which can project 590 × 1060 binary images at 1041 fps via HDMI streaming, and a monochrome HFR camera system, which can capture and process 12-bit 512 × 512 images in real time at 3125 fps; it can simultaneously decode and reconstruct 24-bit RGB video sequences at 31 fps, including an error correction process. The effectiveness of the proposed VLC system was verified via several experiments by streaming offline and live video sequences.
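
The reason gray code tolerates slight projector–camera misalignment is that adjacent intensity levels differ in exactly one bit, so a one-pixel error along a gradient corrupts at most one bit plane. A minimal sketch of the standard conversion (not the authors' code):

```python
def to_gray(n: int) -> int:
    # Reflected binary (gray) code: adjacent integers differ in one bit.
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    # Invert by folding the running XOR of higher-order bits back in.
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```

For example, levels 127 and 128 differ in all eight bits in pure binary (0b01111111 vs. 0b10000000) but in a single bit in gray code, so a boundary pixel sampled between the two decodes to one of the two neighboring levels rather than an arbitrary value.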


2015 ◽  
Vol 27 (1) ◽  
pp. 12-23 ◽  
Author(s):  
Qingyi Gu ◽  
Sushil Raut ◽  
Ken-ichi Okumura ◽  
Tadayoshi Aoyama ◽  
...  

[Figure: Synthesized panoramic images] In this paper, we propose a real-time image mosaicing system that uses a high-frame-rate video sequence. Our proposed system can mosaic 512 × 512 color images captured at 500 fps into a single synthesized panoramic image in real time by stitching the images based on their estimated frame-to-frame changes in displacement and orientation. In the system, feature point extraction is accelerated by a parallel processing circuit module for Harris corner detection, and hundreds of selected feature points in the current frame can be simultaneously matched with those in their neighborhoods in the previous frame, exploiting the fact that frame-to-frame image displacement becomes small in high-speed vision. The efficacy of our system for feature-based real-time image mosaicing at 500 fps was verified by implementing it on a field-programmable gate array (FPGA)-based high-speed vision platform and conducting several experiments: (1) capturing an indoor scene using a camera mounted on a fast-moving two-degrees-of-freedom active vision system, and (2) capturing an outdoor scene using a hand-held camera that was rapidly moved in a periodic fashion by hand.
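
The matching step relies on the high frame rate: displacement between consecutive frames is so small that each corner can be matched within a tight search window. A brute-force sketch of that idea (names and the pure-Python form are illustrative; the paper's version runs as a parallel circuit on the FPGA):

```python
import math

def match_features(prev_pts, curr_pts, radius=8.0):
    # At 500 fps, frame-to-frame motion is tiny, so each corner in the
    # current frame is paired with the nearest previous-frame corner
    # inside a small search window.
    matches = []
    for i, (cx, cy) in enumerate(curr_pts):
        best_j, best_d = None, radius
        for j, (px, py) in enumerate(prev_pts):
            d = math.hypot(cx - px, cy - py)
            if d <= best_d:
                best_j, best_d = j, d
        if best_j is not None:
            matches.append((best_j, i))
    return matches

def estimate_translation(prev_pts, curr_pts, matches):
    # The mean displacement over matched pairs approximates the global
    # frame-to-frame shift used to place each frame in the mosaic.
    if not matches:
        return (0.0, 0.0)
    dx = sum(curr_pts[i][0] - prev_pts[j][0] for j, i in matches) / len(matches)
    dy = sum(curr_pts[i][1] - prev_pts[j][1] for j, i in matches) / len(matches)
    return (dx, dy)
```

The small-window assumption is what makes the hardware implementation cheap: each feature only needs to be compared against candidates in a fixed neighborhood, not the whole frame.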


2021 ◽  
Vol 8 ◽  
pp. 109-117
Author(s):  
Artem L. Pazoev ◽  
Sergey A. Shoydin

When holographic information is transmitted through communication channels, a problem arises from the large data volume of holograms. Patent of the Russian Federation No. 2707582 showed the possibility of compressing holographic information, similar to the single-sideband transmission known in radio electronics. This paper demonstrates the experimental transmission of such compressed information over a Wi-Fi wireless communication channel at a frame rate of more than 25 frames per second. An experiment transmitting holographic information of 3D images over a wireless Wi-Fi channel, simulating 3D video using the FTP protocol, was carried out. In accordance with RF patent No. 2707582, each transmitted frame of a 3D image was the sum of two 2D frames: a texture (2000 × 2000 pixels) and a mask (1000 × 1000 pixels). To simulate the transmission of a video sequence, packets of 500 double frames were transmitted at a time. The transmission times of these frame packets, measured in real time with FileZilla, showed that transmission of full holographic information about a 3D object in real time at a frame rate greater than 25 frames per second is quite feasible.
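
For scale, the raw payload of one double frame can be estimated under an assumed 8-bit grayscale depth (the bit depth is an assumption; the figures below are illustrative back-of-the-envelope arithmetic, not measurements from the paper):

```python
# Raw (uncompressed) payload of one "double frame" at an assumed 8 bits
# per pixel: a 2000 x 2000 texture plus a 1000 x 1000 mask.
TEXTURE_PX = 2000 * 2000
MASK_PX = 1000 * 1000
BYTES_PER_DOUBLE_FRAME = TEXTURE_PX + MASK_PX   # 5,000,000 bytes at 8 bpp
PACKET_FRAMES = 500                             # double frames per packet
PACKET_BYTES = PACKET_FRAMES * BYTES_PER_DOUBLE_FRAME

def required_throughput_mbps(fps=25, bytes_per_frame=BYTES_PER_DOUBLE_FRAME):
    # Uncompressed bit rate needed to sustain the given frame rate.
    return fps * bytes_per_frame * 8 / 1e6
```

Under these assumptions the uncompressed stream would need about 1000 Mbps at 25 fps, which illustrates why the sideband-style compression of the patent is central to making Wi-Fi transmission practical.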


Author(s):  
Sheikh Summerah

Abstract: This study presents a strategy to automate the process of recognizing and tracking objects using color and motion. Video tracking is the approach of detecting a moving item with a camera over a long distance. The basic goal of video tracking is to link target objects across successive video frames. The association can be particularly difficult when objects move quickly relative to the frame rate. This work develops a method to follow moving objects in real time across distinct video frames utilizing HSV color space values and OpenCV. We start by deriving the HSV value of the object to be tracked and then, in the testing stage, track the object. The objects were tracked with 90% accuracy. Keywords: HSV, OpenCV, object tracking


Author(s):  
Gowher Shafi

Abstract: This research shows how to use colour and movement to automate the process of recognising and tracking things. Video tracking is a technique for detecting a moving object over a long distance using a camera. The main purpose of video tracking is to connect target objects in subsequent video frames. The connection may be particularly troublesome when things move faster than the frame rate. Using HSV colour space values and OpenCV in different video frames, this study proposes a way to track moving objects in real-time. We begin by calculating the HSV value of an item to be monitored, and then we track the object throughout the testing step. The items were shown to be tracked with 90 percent accuracy. Keywords: HSV, OpenCV, Object tracking, Video frames, GUI
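
A dependency-free sketch of the HSV-threshold-and-centroid idea described in these two abstracts (in an OpenCV implementation, cv2.cvtColor, cv2.inRange, and cv2.moments would play these roles; the thresholds and names here are illustrative):

```python
import colorsys

def in_hsv_range(rgb, h_range, s_min=0.4, v_min=0.4):
    # Convert an RGB pixel (0-255 per channel) to HSV and test it against
    # the target hue window; saturation/value floors reject dark or
    # washed-out pixels. Hue is in [0, 1) per colorsys conventions.
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    lo, hi = h_range
    return lo <= h <= hi and s >= s_min and v >= v_min

def track_centroid(frame, h_range):
    # frame: 2-D list of RGB tuples. Returns the centroid (x, y) of the
    # pixels matching the target HSV range, or None if the object is
    # absent from the frame.
    xs, ys = [], []
    for y, row in enumerate(frame):
        for x, px in enumerate(row):
            if in_hsv_range(px, h_range):
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

Running this per frame and connecting successive centroids yields the track; a real implementation would vectorize the mask with NumPy/OpenCV rather than loop per pixel.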


Photonics ◽  
2020 ◽  
Vol 7 (1) ◽  
pp. 17
Author(s):  
Qiuyi Pan ◽  
Xincheng Huang ◽  
Rui Min ◽  
Weiping Liu

To address the high time consumption of frame synchronization in a scanning-free Brillouin optical time-domain analysis (SF-BOTDA) system, a fast frame synchronization algorithm based on incremental updating was proposed. In comparison to the standard frame synchronization algorithm, the proposed one reduced the processing time required for BOTDA system frame synchronization by about 98%. In addition, to further improve the real-time performance of frame synchronization, a field-programmable gate array (FPGA) hardware implementation architecture based on parallel processing and pipelining was also proposed. Compared with the software implementation, it increased the processing speed by a factor of 13.41. The proposed approach lays a foundation for field deployments of BOTDA systems with high real-time requirements.
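
The incremental-updating principle can be illustrated with a sliding-window sum: rather than recomputing each window score from scratch, the previous score is updated in constant time. This is a generic illustration of the principle, not the paper's frame-synchronization algorithm:

```python
def sliding_sums_naive(signal, m):
    # O(n*m): every window score is recomputed from scratch.
    return [sum(signal[i:i + m]) for i in range(len(signal) - m + 1)]

def sliding_sums_incremental(signal, m):
    # O(n): each score is derived from the previous one by adding the
    # sample entering the window and dropping the one leaving it.
    s = sum(signal[:m])
    out = [s]
    for i in range(m, len(signal)):
        s += signal[i] - signal[i - m]
        out.append(s)
    return out
```

The same add-new/drop-old structure maps naturally onto an FPGA pipeline, since each updated score depends only on the previous score and two samples.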


2007 ◽  
Vol 30 (4) ◽  
pp. 51 ◽  
Author(s):  
A. Baranchuk ◽  
G. Dagnone ◽  
P. Fowler ◽  
M. N. Harrison ◽  
L. Lisnevskaia ◽  
...  

Electrocardiography (ECG) interpretation is an essential skill for physicians as well as for many other health care professionals. Continuing education is necessary to maintain these skills. The process of teaching and learning ECG interpretation is complex and involves both deductive mechanisms and recognition of patterns for different clinical situations ("pattern recognition"). The successful methodologies of interactive sessions and real-time problem-based learning have never been evaluated with a long-distance education model. Our objectives were to evaluate the efficacy of broadcasting ECG rounds to different hospitals in the Southeastern Ontario region and to perform qualitative research to determine the impact of this methodology on developing and maintaining skills in ECG interpretation. ECG rounds are held weekly at Kingston General Hospital and will be transmitted live to Napanee, Belleville, Oshawa, Peterborough and Brockville. The teaching methodology is based on real ECG cases. The audience is invited to analyze the ECG case, and the coordinator will introduce comments to guide the case through the proper algorithm. Final interpretation will be achieved emphasizing the deductive process and the relevance of each case. An evaluation will be filled out by each participant at the end of each session. Videoconferencing works through a vast array of internet LANs, WANs, ISDN phone lines, routers, switches, firewalls, Codecs (Coder/Decoders) and bridges. A videoconference Codec takes the analog audio and video signal, codes and compresses it into a digital signal, and transmits that digital signal to another Codec, where the signal is decompressed and translated back into analog video and audio. This compression and decompression allows large amounts of data to be transferred across a network at close to real time (384 kbps with 30 frames of video per second).
Videoconferencing communication works on voice activation, so whichever site is speaking has the floor and is seen by all the participating sites. A continuous-presence mode allows each site to have the same visual and audio involvement as the host site. A bridged multipoint can connect between 8 and 12 sites simultaneously. This innovative methodology for teaching ECG will facilitate access to developing and maintaining skills in ECG interpretation for a large number of health care providers. Bertsch TF, Callas PW, Rubin A. Effectiveness of lectures attended via interactive videoconferencing versus in-person in preparing third-year internal medicine clerkship students for clinical practice examinations. Teach Learn Med 2007;19(1):4-8. Yellowlees PM, Hogarth M, Hilty DM. The importance of distributed broadband networks to academic biomedical research and education programs. Acad Psychiatry 2006;30:451-455.

