High-performance GPGPU-based system for matching people in a live video feed

2012 ◽  
Author(s):  
Bartlomiej Bosek ◽  
Leszek Horwath ◽  
Grzegorz Matecki ◽  
Arkadiusz Pawlik


2011 ◽  
pp. 160-171 ◽  
Author(s):  
Yeonjoo Oh ◽  
Ken Camarata ◽  
Michael Philetus Weller ◽  
Mark D. Gross ◽  
Ellen Yi-Luen Do

People can use computationally enhanced furniture to interact with distant friends and places without cumbersome menus or widgets. We describe computing embedded in a pair of tables and a chair that enables people to experience remote events in two ways: the TeleTables are ambient tabletop displays that connect two places by projecting shadows cast on one surface onto the other, while the Window Seat rocking chair controls, through its rocking motion, a remote camera tied to a live video feed. Both explore using the physical space of a room and its furniture to create “bilocative” interfaces.


2021 ◽  
Vol 11 (22) ◽  
pp. 10809
Author(s):  
Hugo S. Oliveira ◽  
José J. M. Machado ◽  
João Manuel R. S. Tavares

With the widespread use of surveillance cameras and heightened awareness of public security, object and person Re-Identification (ReID), the task of recognizing objects across non-overlapping camera networks, has attracted particular attention in the computer vision and pattern recognition communities. Given an image or video of an object-of-interest (the query), ReID aims to identify that object in images or video feeds taken from different cameras. After many years of great effort, object ReID remains a notably challenging task. The main reason is that an object’s appearance may change dramatically across camera views due to significant variations in illumination, pose, or viewpoint, or even cluttered backgrounds. With the advent of Deep Neural Networks (DNNs), many network architectures achieving high performance levels have been proposed. With the aim of identifying the most promising methods for future robust ReID implementations, a review study is presented, focusing mainly on person and multi-object ReID and on auxiliary methods for image enhancement. Such methods are crucial for robust object ReID, and the limitations of the identified methods are highlighted. This is a very active field, as evidenced by the dates of the publications found. However, most works use data from very different datasets and genres, which presents an obstacle to broadly generalized DNN model training and usage. Although model performance has reached satisfactory results on particular datasets, a clear trend was observed towards 3D Convolutional Neural Networks (CNNs), attention mechanisms that capture object-relevant features, and generative adversarial training to overcome data limitations. However, there is still room for improvement, namely in using anonymized images from urban scenarios to comply with public privacy legislation. 
The main challenges that remain in the ReID field, and prospects for future research directions towards ReID in dense urban scenarios, are also discussed.
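The query-to-gallery matching step described above can be illustrated with a minimal sketch: feature embeddings (here random stand-ins for DNN features) are L2-normalized and the gallery is ranked by cosine similarity to the query. The dimensions, names, and similarity measure are illustrative assumptions, not taken from any specific ReID model in the reviewed literature.

```python
import numpy as np

def rank_gallery(query_emb, gallery_embs):
    """Rank gallery embeddings by cosine similarity to the query.

    query_emb: (d,) feature vector for the object-of-interest.
    gallery_embs: (n, d) feature vectors from other camera views.
    Returns gallery indices, most similar first.
    """
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    sims = g @ q                  # cosine similarity per gallery entry
    return np.argsort(-sims)      # descending similarity

# Toy example: 4 gallery "detections" with 128-dim stand-in features.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(4, 128))
query = gallery[2] + 0.05 * rng.normal(size=128)  # noisy view of entry 2
print(rank_gallery(query, gallery)[0])  # entry 2 ranks first
```

In a real pipeline the embeddings would come from a trained CNN or attention-based backbone; the ranking step itself stays this simple.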


The project sheds light on the detection of an object and the subsequent tracking of that object using IoT and a Wireless Sensor Network (WSN). All operations are performed in real time: images are captured continuously by an ESP32-CAM mounted on the chassis of the robot. An ultrasonic sensor detects the object, and the robot tracks it through left/right and backward/forward movements according to the object’s displacement, keeping the distance between the robot and the object constant with the help of the ultrasonic sensors. Tracking involves a live video feed and the triggering of a manual mode for detecting the object. Once the object is detected, the event is relayed through the WSN to the base station and, through IoT, to local and central headquarters for further analysis.
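The distance-keeping behavior described above amounts to a simple control loop: read the ultrasonic range, compare it to a setpoint, and drive forward or backward to close the gap. The sketch below simulates that loop with a hypothetical dead-band controller; the setpoint, tolerance, and command names are assumptions for illustration, not actual ESP32-CAM firmware.

```python
TARGET_CM = 30.0   # assumed robot-to-object distance setpoint
DEADBAND_CM = 2.0  # tolerance before the robot reacts

def control_step(distance_cm):
    """Return a drive command that keeps the object at TARGET_CM."""
    error = distance_cm - TARGET_CM
    if error > DEADBAND_CM:
        return "forward"    # object moved away: close the gap
    if error < -DEADBAND_CM:
        return "backward"   # object too close: back off
    return "hold"           # within tolerance: preserve distance

# Simulated ultrasonic readings as the object moves.
for reading in (45.0, 31.5, 24.0):
    print(reading, "->", control_step(reading))
```

On the real robot, the same decision would run each sensor cycle, with analogous left/right commands driven by the object's lateral displacement in the camera frame.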


2004 ◽  
Vol 5 (1) ◽  
pp. 75-97 ◽  
Author(s):  
Irene M. Pepperberg ◽  
Steven R. Wilkes

Grey parrots (Psittacus erithacus) do not acquire referential English labels when tutored with videotapes displayed on CRT screens if (a) socially isolated, (b) reward for attempted labels is possible, (c) trainers direct birds’ attention to the monitor, (d) a live video feed avoids habituation, or (e) one trainer repeats labels produced on video and rewards label attempts. Because birds learned referential labels from live tutor pairs in concurrent sessions, we concluded that video failed because the input lacked live social interaction and modeling (Pepperberg, 1999). Recent studies (e.g. Ikebuchi & Okanoya, 1999), however, suggest that standard CRT monitor flickering could instead have prevented learning. Using an LCD monitor, we found that eliminating flickering did not enable birds to learn from video under conditions of limited social interaction. Results emphasize the role of social interaction in referential label learning and may generalize to other systems (e.g. disabled children, or possibly software and robotic agents).


2019 ◽  
Vol 8 (4) ◽  
pp. 11524-11528

Today, there are a few autonomous fire-fighting robots, but fully autonomous decision making in situations that require discretionary judgment remains unresolved. With remotely operated fire-fighting robots, this problem can be solved to an extent. The project involves the use of a remote power source to reduce the weight of the robot and a bio-inspired design of the fire-hose manipulator mimicking an elephant’s trunk, with which the hose tip can be moved precisely up to 5° in every direction. The hose can be manipulated to direct the water towards the fire using the live video feed from an on-board camera and Raspberry Pi setup. The movement of the robot and of the fire-hose manipulator can be remotely operated through a GUI. The robot’s response to various flame intensities, the angular freedom of the manipulator, and the projectile of the water flow were studied and calibrated for better performance.
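The water-flow projectile study mentioned above follows ordinary projectile kinematics: for a nozzle exit speed v and elevation angle θ, the jet's ideal horizontal range on level ground is v² sin(2θ)/g, ignoring air drag and stream break-up. A small sketch, with an assumed exit speed of 12 m/s (not a figure from the paper), shows how range varies with the manipulator's elevation:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def jet_range(speed_mps, angle_deg):
    """Ideal horizontal range of a water jet launched at the given angle."""
    theta = math.radians(angle_deg)
    return speed_mps ** 2 * math.sin(2 * theta) / G

# Sweep the hose elevation; 12 m/s exit speed is an assumed value.
for angle in (15, 30, 45):
    print(f"{angle:2d} deg -> {jet_range(12.0, angle):.1f} m")
```

Range peaks at 45° in this drag-free model; a calibration like the one described would map each manipulator angle to the measured, rather than ideal, landing point.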

