PTZ control with head tracking for video chat

Author(s):  
Kota Yamaguchi ◽  
Takashi Komuro ◽  
Masatoshi Ishikawa


2012 ◽  
Vol 21 (1) ◽  
pp. 11-16
Author(s):  
Susan Fager ◽  
Tom Jakobs ◽  
David Beukelman ◽  
Tricia Ternus ◽  
Haylee Schley

Abstract This article summarizes the design and evaluation of a new augmentative and alternative communication (AAC) interface strategy for people with complex communication needs and severe physical limitations. The strategy combines typing, gesture recognition, and word prediction to enter text into AAC software using touchscreen or head-movement-tracking access methods. Eight individuals with movement limitations due to spinal cord injury, amyotrophic lateral sclerosis, polio, and Guillain-Barré syndrome participated in the evaluation of the prototype technology using a head-tracking device. Fourteen typical individuals participated in the evaluation of the prototype using a touchscreen.
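One component of such an interface, word prediction, can be sketched as a prefix lookup over a frequency-ranked vocabulary. This is a hypothetical minimal version for illustration, not the predictor evaluated in the article; the function name and vocabulary format are assumptions.

```python
def predict_words(prefix, vocab_freq, k=3):
    """Return up to k candidate completions for the typed prefix.

    vocab_freq maps words to usage frequencies; candidates are ranked
    by descending frequency, with ties broken alphabetically.
    """
    matches = [(w, f) for w, f in vocab_freq.items() if w.startswith(prefix)]
    matches.sort(key=lambda wf: (-wf[1], wf[0]))
    return [w for w, _ in matches[:k]]


# Example: after the user types "he", the predictor offers completions.
vocab = {"hello": 5, "help": 9, "hat": 2}
print(predict_words("he", vocab))  # ['help', 'hello']
```

In a real AAC system the vocabulary would adapt to the user's history, but the ranking idea is the same.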


Author(s):  
Bernard D. Adelstein ◽  
Thomas G. Lee ◽  
Stephen R. Ellis

2020 ◽  
Vol 13 (2) ◽  
Author(s):  
Saket Kumar ◽  
Rajesh Mehra

Author(s):  
Haoming Chen ◽  
Chao Wei ◽  
Mingli Song ◽  
Ming-Ting Sun ◽  
Kevin Lau

We propose a method to measure the capture-to-display delay (CDD) of a visual communication application. The method requires neither modifications to the existing system nor synchronization of the encoder and decoder clocks. Furthermore, we propose a solution to the multiple-overlapped-timestamp problem caused by the camera's exposure time. We analyze the measurement error and implement the method in software to measure the CDD of a cellphone video chat application over various types of networks. Experiments confirm the effectiveness of the proposed method.
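The basic idea behind such a measurement can be sketched in a few lines: a millisecond timestamp is shown on the sender's screen, the receiver's display is filmed, and the delay is the capture time minus the timestamp read off the frame. The function names below are hypothetical, and the averaging fallback for overlapped timestamps is an assumed simple estimator, not the paper's actual solution.

```python
def resolve_overlapped(visible_ts_ms):
    """When the camera exposure spans several displayed timestamps,
    multiple timestamps appear overlapped in one captured frame.
    A simple estimator (assumption) takes their average as the
    effective display time."""
    return sum(visible_ts_ms) / len(visible_ts_ms)


def estimate_cdd(displayed_ts_ms, capture_ts_ms):
    """Capture-to-display delay: when the frame was captured minus
    the timestamp that was on screen at that moment."""
    return capture_ts_ms - displayed_ts_ms


# Example: frames displayed at t=1000 ms and t=1033 ms overlap in a
# capture taken at t=1250 ms.
shown = resolve_overlapped([1000, 1033])   # 1016.5 ms
print(estimate_cdd(shown, 1250))           # 233.5 ms
```

Because both timestamps are read from the same captured image, only the capturing side's clock is involved, which is why no encoder/decoder clock synchronization is needed.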


2020 ◽  
Vol 4 (Supplement_1) ◽  
pp. 958-958
Author(s):  
Khoa Nguyen ◽  
Mattie McDonald ◽  
Colton Scavone ◽  
Nora Mattek ◽  
Jeffrey Kaye ◽  
...  

Abstract I-CONECT is a randomized controlled clinical trial examining the impact of social interaction delivered via video chat on cognitive function (clinicaltrials.gov number: NCT02871921, project website: www.I-CONECT.org ). We aimed to enroll 320 community-dwelling, socially isolated older adults (age >=75 years). Recruitment of participants started in 2018 and was ongoing when the COVID-19 pandemic began. The video chat and telephone-based social interaction interventions did not change during COVID-19. However, new recruitment and cognitive assessments, which require in-person contact and the deployment and retrieval of video chat devices in participants' homes, were suspended due to the nature of our study population (i.e., older age, higher likelihood of comorbidities). Recently we were able to switch successfully to completely remote assessments, including 1) telephone-based cognitive assessments using T-COG (Telephone Cognitive Assessment battery), and 2) contactless delivery of our study devices (Chromebooks and electronic pill boxes) for participant self-installation. Our approach to self-installation includes color-coded pictures and an easy-to-follow installation manual, accompanied by remote instruction and support via telephone. This poster introduces our remote assessment and installation protocol, along with feedback from participants and the technical support team regarding this new contactless protocol. The presentation provides useful guidance for future studies considering completely remote assessment and telemedicine approaches.


2021 ◽  
Vol 11 (12) ◽  
pp. 5503
Author(s):  
Munkhjargal Gochoo ◽  
Syeda Amna Rizwan ◽  
Yazeed Yasin Ghadi ◽  
Ahmad Jalal ◽  
Kibum Kim

Automatic head tracking and counting using depth imagery has practical applications in security, logistics, queue management, space utilization, and visitor counting. However, no currently available system can clearly distinguish a human head from other objects in order to track and count people accurately. We therefore propose a novel system that tracks people by monitoring their heads and shoulders in complex environments and counts the number of people entering and exiting the scene. The system comprises six phases. First, preprocessing converts videos of a scene into frames and removes the background from the video frames. Second, heads are detected using the Hough Circular Gradient Transform, and shoulders are detected by HOG-based symmetry methods. Third, three robust features are extracted: fused joint HOG-LBP, energy-based point clouds, and fused intra-inter trajectories. Fourth, Apriori association is applied to select the best features. Fifth, deep learning is used for accurate people tracking. Finally, heads are counted using cross-line judgment. The system was tested on three benchmark datasets (the PCDS dataset, the MICC people counting dataset, and the GOTPD dataset), achieving counting accuracies of 98.40%, 98%, and 99%, respectively.
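The final cross-line judgment step can be illustrated with a minimal sketch (a hypothetical helper, not the authors' implementation): a tracked head's vertical position is compared against a virtual counting line across consecutive frames, with a downward crossing counted as an entry and an upward crossing as an exit.

```python
def count_crossings(track_ys, line_y):
    """Cross-line judgment on one head trajectory.

    track_ys: vertical pixel positions of a tracked head per frame.
    line_y:   y-coordinate of the virtual counting line.
    Counts an entry when the head moves from above the line to on/below
    it, and an exit when it moves from on/below the line to above it.
    Returns (entering, exiting).
    """
    entering = exiting = 0
    for prev, cur in zip(track_ys, track_ys[1:]):
        if prev < line_y <= cur:     # downward crossing -> entry
            entering += 1
        elif prev >= line_y > cur:   # upward crossing -> exit
            exiting += 1
    return entering, exiting


# Example: a head moves down past a line at y=100, so one entry is counted.
print(count_crossings([10, 50, 120, 140], line_y=100))  # (1, 0)
```

Summing the per-trajectory counts over all tracked heads yields the scene's total entry and exit counts.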

