High Definition Television (HDTV) and Video Surveillance

High-definition television is becoming ever more popular, opening the market to new high-definition technologies and driving improvements in image quality and color fidelity faster than ever. The video surveillance market has been shaped by this demand. Because video surveillance produces large amounts of image data, frame rates for high-quality video are often compromised. A network camera that conforms to high-definition television standards, however, performs well in frame rate, resolution, and color fidelity, making high-quality network cameras a good choice for surveillance video.

2012 ◽  
Vol 546-547 ◽  
pp. 225-229
Author(s):  
Shi Wen Wang ◽  
Bo Xie

A high-definition intelligent video surveillance system for Henan Power is constructed in 220 kV and 500 kV substations with a uniform marginal pattern, multi-level management, and image resource sharing. The video server controls and communicates with the monitoring equipment at the substation over a WDM-PON channel based on the SIP protocol, and can achieve a truly high-quality picture format up to D4 (1280×720). A gateway protocol is researched and developed to convert standard-definition (SD) video images of different formats into a unified H.264 format. Both SD and HD video images access the video surveillance platform via the SIP protocol, which addresses the existing substation platform's problems of low transmission rate, incompatible interface protocols, and low video quality. Furthermore, intelligent analytical processing is applied to feeder data traffic, equipment icing, transmission-line galloping, and meteorological monitoring in the substation, with the HD video images transmitted over WDM-PON. This provides decision support for power-network production, operation, and emergency response.


Author(s):  
Onate Taylor ◽  
P. S. Ezekiel ◽  
V. T. Emmah

The Internet of Things (IoT) is the interconnection of things, people, and cloud services via the Internet, which enables new business models. Through these interconnections, huge volumes of data are generated and sent to cloud-based servers over the web; the data are processed and analyzed, yielding meaningful and timely actions for monitoring car parking. A serious problem emerging at a global scale, and growing dramatically, is the traffic congestion caused by vehicles. In particular, finding a parking space in urban areas has become a major problem as the number of vehicles increases daily, making it difficult to secure a good and safe parking spot. This work proposes an intelligent smart parking system using computer vision and the Internet of Things. The proposed system starts by acquiring a dataset made up of images of various vehicles, collected from the Faculty of Science car park at Rivers State University, Port Harcourt, Rivers State, Nigeria. We propose two methods for vehicle/parking-slot detection. The first method uses a convolutional neural network together with a Haar cascade classifier to detect multiple vehicles in a single picture or video and draw rectangular boxes on the identified vehicles; it obtained an accuracy of 99.80%. In the second method, we used a Mask R-CNN, downloading pre-trained model weights (trained on the COCO dataset) to identify various objects in videos and pictures.
The Mask R-CNN model identified vehicles by putting a bounding box on each detected vehicle, but one of its problems is that it is quite slow to train, and it could not reliably detect all vehicles when tested on high-quality, high-definition video. In summary, our trained model was able to detect vehicles and parking slots in high-quality video while consuming fewer GPU resources.
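Once vehicles are detected as bounding boxes, the parking-slot side of such a system reduces to matching those boxes against known slot rectangles. A minimal sketch of that matching step, assuming axis-aligned boxes in (x1, y1, x2, y2) form and an illustrative IoU threshold of 0.5 (neither the coordinates nor the threshold come from the paper):

```python
# Sketch: mark a parking slot as occupied when a detected vehicle box
# overlaps it strongly enough (IoU-based matching). The coordinates and
# threshold below are illustrative, not taken from the paper.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def slot_status(slots, detections, threshold=0.5):
    """Return a list of booleans: True where a slot is occupied."""
    return [any(iou(slot, det) >= threshold for det in detections)
            for slot in slots]

slots = [(0, 0, 100, 50), (110, 0, 210, 50)]   # two marked slots
detections = [(5, 2, 95, 48)]                  # one detected vehicle
print(slot_status(slots, detections))          # [True, False]
```

The same occupancy logic works regardless of whether the boxes come from the Haar cascade detector or from Mask R-CNN.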


2017 ◽  
Vol 2017 ◽  
pp. 1-10 ◽  
Author(s):  
Asif Ali Laghari ◽  
Hui He ◽  
Shahid Karim ◽  
Himat Ali Shah ◽  
Nabin Kumar Karn

Video sharing on social clouds is popular among users around the world. High-definition (HD) videos have large file sizes, so storing them in cloud storage and streaming them at high quality from the cloud to clients are major problems for service providers. Social clouds compress videos to save storage and to stream over slow networks while providing quality of service (QoS). Compression decreases quality relative to the original video, and parameters change both during online playback and after download. This degradation lowers the quality of experience (QoE) of end users. To assess the QoE of video compression, we conducted subjective QoE experiments by uploading, sharing, and playing videos from social clouds. Three popular social clouds, Facebook, Tumblr, and Twitter, were selected for uploading and playing videos online. QoE was recorded using a questionnaire in which users reported their experience of the perceived video quality. Results show that Facebook and Twitter compressed HD videos more than Tumblr; however, Facebook delivered better quality from its compressed videos than Twitter. Accordingly, users gave Twitter lower ratings for online video quality than Tumblr, which played videos online at high quality with less compression.


2003 ◽  
Vol 60 (3) ◽  
pp. 678-683 ◽  
Author(s):  
Russell A Moursund ◽  
Thomas J Carlson ◽  
Rock D Peters

Abstract The uses of an acoustic camera in fish-passage research at hydropower facilities are being explored by the U.S. Army Corps of Engineers. The “Dual-Frequency Identification Sonar” (DIDSON) is a high-definition imaging sonar that obtains near-video-quality images for the identification of objects underwater. Developed originally for the U.S. Navy by the University of Washington's Applied Physics Laboratory, it bridges the gap between existing fisheries-assessment sonar and optical systems. The images within 12 m of this acoustic camera are sufficiently clear that fish can be observed undulating as they swim, and their orientation ascertained, in otherwise zero-visibility water. In the 1.8 MHz high-frequency mode, the system forms 96 beams over a 29° field of view. The high resolution and fast frame rate provide target visualization in real time. The DIDSON can be used where conventional underwater cameras would be limited by low light levels and high turbidity.
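The beam count and field of view quoted above imply the sonar's angular sampling density; a quick back-of-the-envelope check (simple division, not a figure stated in the abstract):

```python
# Angular spacing implied by 96 beams spanning a 29-degree field of view.
beams, fov_deg = 96, 29
spacing = fov_deg / beams
print(round(spacing, 3))  # ~0.302 degrees per beam
```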


Sensors ◽  
2019 ◽  
Vol 19 (2) ◽  
pp. 350 ◽  
Author(s):  
Minghao Zhao ◽  
Chengquan Hu ◽  
Fenglin Wei ◽  
Kai Wang ◽  
Chong Wang ◽  
...  

The underwater environment is still largely unknown to humans, so the high-definition camera is an important tool for short-range underwater data acquisition. Due to insufficient power, the image data collected by underwater submersibles cannot be analyzed in real time. Exploiting the characteristics of the Field-Programmable Gate Array (FPGA), namely low power consumption, strong computing capability, and high flexibility, we design an embedded FPGA image-recognition system based on a Convolutional Neural Network (CNN). Using two FPGA techniques, parallelism and pipelining, we realize the parallelization of multi-depth convolution operations. In the experimental phase, we collect and segment images from underwater video recorded by the submersible, then attach labels to the images to build the training set. Test results show that the proposed FPGA system achieves the same accuracy as a workstation and reaches a frame rate of 25 FPS at a resolution of 1920 × 1080, which meets our needs for underwater identification tasks.
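The workload the FPGA parallelizes and pipelines is the convolution's multiply-accumulate loop: every output element is an independent sum of products, so outputs can be spread across parallel units and the inner accumulation can be pipelined. A minimal pure-Python sketch of a single-channel valid convolution, purely to illustrate that structure (the paper's actual kernel sizes and layer depths are not specified here):

```python
# Sketch: naive valid 2D convolution. Each output element is an
# independent multiply-accumulate, which is exactly the work an FPGA
# can distribute across parallel units and pipeline stages.

def conv2d_valid(image, kernel):
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = ih - kh + 1, iw - kw + 1
    out = [[0] * ow for _ in range(oh)]
    for i in range(oh):          # each (i, j) is independent work
        for j in range(ow):
            acc = 0
            for u in range(kh):  # the multiply-accumulate inner loop
                for v in range(kw):
                    acc += image[i + u][j + v] * kernel[u][v]
            out[i][j] = acc
    return out

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
k = [[1, 0],
     [0, 1]]  # sums x[i][j] + x[i+1][j+1]
print(conv2d_valid(img, k))  # [[6, 8], [12, 14]]
```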


Author(s):  
Jie Ren ◽  
Ling Gao ◽  
Xiaoming Wang ◽  
Miao Ma ◽  
Guoyong Qiu ◽  
...  

Augmented reality (AR) underpins many emerging mobile applications, but it increasingly requires more computation power for better machine understanding and user experience. While computation offloading promises a solution for high-quality and interactive mobile AR, existing methods work best for high-definition videos but cannot meet the real-time requirement for emerging 4K videos due to the long uploading latency. We introduce ACTOR, a novel computation-offloading framework for 4K mobile AR. To reduce the uploading latency, ACTOR dynamically and judiciously downscales the mobile video feed to be sent to the remote server. On the server-side, it leverages image super-resolution technology to scale back the received video so that high-quality object detection, tracking and rendering can be performed on the full 4K resolution. ACTOR employs machine learning to predict which of the downscaling resolutions and super-resolution configurations should be used, by taking into account the video content, server processing delay, and user expected latency. We evaluate ACTOR by applying it to over 2,000 4K video clips across two typical WiFi network settings. Extensive experimental results show that ACTOR consistently and significantly outperforms competitive methods for simultaneously meeting the latency and user-perceived video quality requirements.
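The core scheduling decision in this style of offloading is picking the largest downscale resolution whose estimated upload-plus-server delay still fits the user's latency budget. A simplified sketch of that selection follows; the per-frame sizes, server delays, and bandwidth figures are hypothetical, and ACTOR itself uses a learned predictor that also accounts for video content rather than this closed-form rule:

```python
# Sketch: choose the highest downscale resolution whose estimated
# upload + server time fits a latency budget. All numbers are
# illustrative; ACTOR learns this decision per video clip.

def pick_resolution(options, bandwidth_mbps, budget_ms):
    """options: list of (name, frame_kbits, server_ms), largest first."""
    for name, frame_kbits, server_ms in options:
        upload_ms = frame_kbits / bandwidth_mbps  # kbit / (Mbit/s) = ms
        if upload_ms + server_ms <= budget_ms:
            return name
    return options[-1][0]  # fall back to the smallest resolution

options = [
    ("1080p", 4000, 30),  # (name, per-frame kbits, server delay ms)
    ("720p", 1800, 35),   # smaller upload, but more super-resolution work
    ("540p", 1000, 40),
]
print(pick_resolution(options, bandwidth_mbps=50, budget_ms=100))  # 720p
```

Note the trade-off encoded in the table: lower resolutions upload faster but cost the server more super-resolution and detection effort, which is why the choice depends on both the network and the server load.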

