video stream
Recently Published Documents

TOTAL DOCUMENTS: 947 (FIVE YEARS: 335)
H-INDEX: 17 (FIVE YEARS: 6)

Author(s):  
Xiang Ren ◽  
Min Sun ◽  
Xianfeng Zhang ◽  
Lei Liu ◽  
Hang Zhou ◽  
...  

Author(s):  
Prof. Roshan R. Kolte

Abstract: The COVID-19 pandemic has rapidly affected our day-to-day lives as well as world trade and movement. Wearing a face mask is essential for protection against the virus, and people also wear masks to reduce its spread. Because the coronavirus (COVID-19) pandemic is causing a global health crisis, wearing a face mask in public areas is an effective protection method according to the World Health Organization (WHO). The pandemic forced governments across the world to impose lockdowns to prevent transmission, and reports indicate that wearing a face mask at work clearly reduces the risk of transmission. We use a dataset to build a COVID-19 face mask detector with computer vision in Python, using the OpenCV, TensorFlow, and Keras libraries together with deep learning. Our goal is to identify whether a person in an image or live video stream is wearing a face mask, which can help society and organizations avoid the transfer of the virus from one person to another. We use computer vision and deep learning modules to distinguish images with and without masks. Keywords: face detection, face recognition, CNN, SVM, OpenCV, Python, TensorFlow, Keras.
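A hedged illustration of the kind of pipeline this abstract describes: a Haar cascade finds faces in each frame and a pre-trained Keras classifier labels each crop. The model file name, its 224x224 input size, and the two-class output are assumptions, not details from the paper.

# Minimal sketch of a face-mask classifier applied to a webcam stream.
# Assumes a pre-trained Keras model saved as "mask_detector.h5" that takes
# 224x224 RGB crops and outputs [p_mask, p_no_mask]; both are assumptions.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
model = load_model("mask_detector.h5")  # hypothetical model file

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        face = cv2.resize(frame[y:y + h, x:x + w], (224, 224))
        face = np.expand_dims(face.astype("float32") / 255.0, axis=0)
        p_mask, p_no_mask = model.predict(face, verbose=0)[0]
        label = "Mask" if p_mask > p_no_mask else "No mask"
        color = (0, 255, 0) if label == "Mask" else (0, 0, 255)
        cv2.rectangle(frame, (x, y), (x + w, y + h), color, 2)
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, color, 2)
    cv2.imshow("Mask detector", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()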


2022 ◽  
Vol 2022 ◽  
pp. 1-8
Author(s):  
Jingrong Lu ◽  
Hongtao Gao

At present, wireless network technology is advancing rapidly and intelligent devices are becoming widespread, which has driven the rapid growth of the mobile streaming media business. Mobile video applications of all kinds have enriched people's lives while generating huge amounts of traffic, so wireless networks (WNs) face an unprecedented burden and the allocation of wireless resources for video has become critically important. Moreover, in WNs the network state is dynamic and the terminals are heterogeneous, so traditional video transmission systems fail to meet users' needs. Hence, Scalable Video Coding (SVC) has been introduced into the video transmission system to achieve bit-rate adaptation. However, in a strictly layered traditional network, the wireless resource allocation strategy usually takes throughput as the only optimization target, which leaves little room for further optimization of scalable video transmission. This article proposes a cross-layer design that allows information to be exchanged between the wireless base station and the video server to achieve joint optimization. To improve users' satisfaction with video services, the wireless resource allocation problem and the video stream scheduling problem are considered jointly, which keeps the optimization space larger. Based on the proposed architecture, we further study the design of wireless resource allocation algorithms and rate-adaptive algorithms for the scenario of multiuser transmission of scalable video on the Long-Term Evolution (LTE) downlink. Experimental results show substantial performance improvements for the proposed approach.
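The following toy sketch (our illustration, not the authors' algorithm) shows the general idea of joint allocation and rate adaptation: each user receives a share of downlink resource blocks, and the highest SVC layer whose cumulative bit rate fits the resulting throughput is selected. All rates, weights, and per-block efficiencies are invented example values.

# Illustrative sketch: greedily share downlink resource blocks among users,
# then pick the highest SVC layer whose cumulative bit rate fits each user's
# allocated throughput. All numbers below are invented examples.

SVC_LAYER_RATES = [0.5, 1.0, 2.0, 4.0]   # Mbps needed up to each layer (base..enh.)

def allocate_and_adapt(users, total_rbs, rb_rate_mbps):
    """users: name -> scheduling weight; rb_rate_mbps: name -> Mbps per block."""
    total_weight = sum(users.values())
    decisions = {}
    for name, weight in users.items():
        rbs = int(total_rbs * weight / total_weight)    # proportional RB share
        throughput = rbs * rb_rate_mbps[name]           # Mbps this user can get
        layer = 0
        for i, rate in enumerate(SVC_LAYER_RATES):
            if rate <= throughput:
                layer = i                                # highest layer that fits
        decisions[name] = {"rbs": rbs, "throughput": throughput, "layer": layer}
    return decisions

if __name__ == "__main__":
    users = {"u1": 1.0, "u2": 2.0, "u3": 1.0}            # scheduling weights
    per_rb = {"u1": 0.05, "u2": 0.08, "u3": 0.03}        # Mbps per resource block
    print(allocate_and_adapt(users, total_rbs=100, rb_rate_mbps=per_rb))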


2022 ◽  
Vol 12 (1) ◽  
pp. 523
Author(s):  
Darius Plikynas ◽  
Audrius Indriulionis ◽  
Algirdas Laukaitis ◽  
Leonidas Sakalauskas

This paper presents an approach to enhance electronic traveling aids (ETAs) for people who are blind and severely visually impaired (BSVI) through indoor orientation and guided navigation, by socially outsourcing the indoor route mapping and assistance processes. Such an approach is necessary because GPS does not work well indoors, and infrastructural investments for indoor navigation are absent or too costly to install. Our approach proposes the prior outsourcing of vision-based recordings of indoor routes from an online network of sighted volunteers, who gather and constantly update a web cloud database of indoor routes using specialized sensory equipment and web services. Computational intelligence algorithms process the sensory data and prepare them for BSVI use. In this way, people who are BSVI obtain ready-to-use access to the indoor route database. This type of service has not previously been offered in such a setting. Specialized wearable sensory ETA equipment, depth cameras, smartphones, computer vision algorithms, tactile and audio interfaces, and computational intelligence algorithms are employed for this purpose. Integrating semantic data on points of interest (such as stairs, doors, WCs, and entrances/exits) and evacuation schemes could make the proposed approach even more attractive to BSVI users. The presented approach also crowdsources volunteers' real-time online help for complex navigational situations, using a mobile app, a live video stream from the BSVI user's wearable camera, and digitalized maps of buildings' evacuation schemes.
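As a purely hypothetical illustration of what one entry in such a crowdsourced route database might look like, the sketch below models a route with audio guidance cues and semantic points of interest; the field names are assumptions, not the authors' schema.

# Hypothetical data model for one crowdsourced indoor route record.
from dataclasses import dataclass, field
from typing import List

@dataclass
class PointOfInterest:
    kind: str          # e.g. "stairs", "door", "WC", "exit"
    position_m: float  # distance from the route start, in metres

@dataclass
class IndoorRoute:
    building_id: str
    start_label: str
    end_label: str
    length_m: float
    audio_cues: List[str] = field(default_factory=list)   # spoken guidance steps
    pois: List[PointOfInterest] = field(default_factory=list)

route = IndoorRoute(
    building_id="library-main",
    start_label="main entrance",
    end_label="reading room 2",
    length_m=84.0,
    audio_cues=["walk straight 20 m", "turn left at the stairs"],
    pois=[PointOfInterest("stairs", 20.0), PointOfInterest("door", 80.0)],
)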


Author(s):  
Hatim Derrouz ◽  
Alberto Cabri ◽  
Hamd Ait Abdelali ◽  
Rachid Oulad Haj Thami ◽  
François Bourzeix ◽  
...  

Author(s):  
Prof. S. B. Kothari

Abstract: As an integral part of the safety and security of many organizations, video surveillance has proven its value many times over by enabling immediate management of people, property, and the environment. This project takes the form of an embedded real-time surveillance system based on the Raspberry Pi single-board computer (SBC) for intrusion detection, enhancing monitoring technology to provide essential safety in our daily lives along with consistent performance and alert operation. The proposed security solution relies on integrating cameras and motion detectors into a web application. The Raspberry Pi operates and controls the motion detectors and video cameras for remote sensing and monitoring, streams live video, and records footage for future playback. This research focuses on developing a detection system that detects intruders and responds quickly by capturing images and transferring them over wireless modules to the owner. This Raspberry Pi based smart surveillance system provides remote monitoring of a location, and the proposed solution is fully functional, efficient, and easy to use. The project also introduces motion detection and tracking using image processing, which is very important for surveillance and security. A live video stream is used to show how objects can be detected and tracked, with the detection and tracking process based on a pixel threshold, as illustrated by the sketch below. Keywords: Internet of Things (IoT), Raspberry Pi, Pi Camera, PIR sensor, Dropbox.
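A minimal sketch of the pixel-threshold motion detection mentioned above, assuming an OpenCV-accessible camera on the Raspberry Pi; the threshold and changed-pixel count are placeholder values to be tuned.

# Difference consecutive grayscale frames, threshold the difference image,
# and flag motion when enough pixels change. Values are placeholders.
import cv2

cap = cv2.VideoCapture(0)          # Pi camera exposed via V4L2, or a USB camera
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    changed = cv2.countNonZero(mask)
    if changed > 5000:                        # tune for frame size and scene
        print("Motion detected:", changed, "changed pixels")
        cv2.imwrite("intruder.jpg", frame)    # snapshot for upload/notification
    prev_gray = gray
cap.release()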


Author(s):  
Prof. Roshan R. Kolte

Abstract: Nowadays we live in a world where everything is automated and linked online, and the Internet is used all over the world to great benefit. The face is a crucial factor for identifying each person. Identity can also be established with other methods, such as biometrics, for taking attendance, but these methods take more time and require people to come into contact with shared equipment while marking their attendance, which is undesirable in a pandemic situation. We therefore introduce a student attendance system based on face recognition. Traditionally, classroom attendance is taken manually at the beginning or end of the class; this takes a lot of time, and the manual paperwork introduces the chance of mistakes. To overcome these problems, we propose a face recognition based attendance system. Face recognition is used in many applications to identify human faces in digital images or live video streams. The proposed system makes use of a Haar classifier, KNN, CNN, SVM, and global filters. After recognition, the attendance report is generated in Excel format. The overall accuracy and complexity were calculated after testing the system; it is cost efficient and needs little installation time. Keywords: face recognition, face detection, Haar classifier, CNN, KNN, SVM, LBPH, automatic attendance, image processing.
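A hedged sketch of one recognition-and-marking pass using a Haar cascade and an LBPH recognizer (among the classifiers the abstract lists); it assumes a previously trained model file, a label-to-name map, and opencv-contrib-python, and it writes matches to a CSV sheet rather than the Excel report described.

# Detect faces, identify them with a pre-trained LBPH recognizer, and append
# matches to a CSV attendance sheet. File names, the label map, and the
# confidence cut-off are assumptions.
import csv
import datetime
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recognizer = cv2.face.LBPHFaceRecognizer_create()   # needs opencv-contrib-python
recognizer.read("lbph_trained.yml")                 # hypothetical trained model
names = {0: "Alice", 1: "Bob"}                      # label -> student name (example)

frame = cv2.imread("classroom.jpg")                 # or a frame from cv2.VideoCapture
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
with open("attendance.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.2, 5):
        label, confidence = recognizer.predict(gray[y:y + h, x:x + w])
        if confidence < 70:                         # lower = better match for LBPH
            writer.writerow([names.get(label, "unknown"),
                             datetime.datetime.now().isoformat()])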


Author(s):  
С.В. Морковин

The article discusses a modified method for embedding digital watermarks into video data, which supplements already known methods with new functions based on the fundamental differences between a video stream and a static photograph.
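For orientation only, here is a generic per-frame least-significant-bit embedding baseline, not the modified method the article proposes; LSB marks would not survive lossy re-encoding, so this sketch only illustrates applying a watermark frame by frame.

# Embed a binary watermark into the least significant bit of the blue channel
# of every frame. Purely illustrative: the mark is destroyed by lossy codecs.
import cv2
import numpy as np

def embed_lsb(frame, watermark_bits):
    """watermark_bits: uint8 array of 0/1 with the same height/width as frame."""
    out = frame.copy()
    out[:, :, 0] = (out[:, :, 0] & 0xFE) | watermark_bits   # blue channel LSB
    return out

cap = cv2.VideoCapture("input.mp4")                          # example input file
ok, frame = cap.read()
h, w = frame.shape[:2]
wm = (np.random.rand(h, w) > 0.5).astype(np.uint8)           # example watermark
writer = cv2.VideoWriter("watermarked.mp4",
                         cv2.VideoWriter_fourcc(*"mp4v"), 25.0, (w, h))
while ok:
    writer.write(embed_lsb(frame, wm))
    ok, frame = cap.read()
cap.release()
writer.release()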


Author(s):  
Samrat Bhardwaj ◽  
Neha Agrawal ◽  
M L Sharma

In the present Covid-19 scenario, efficient face mask detection applications are scarce, yet they are in high demand for transportation hubs, densely populated areas, residential districts, large-scale manufacturers, and other enterprises to ensure safety. The proposed system can therefore be used in real-time applications that require face mask detection for safety purposes during the Covid-19 outbreak. The project can be integrated with embedded systems for deployment at airports, railway stations, offices, schools, and other public places to ensure that public safety guidelines are followed. The goal is to identify whether a person in an image or video stream is wearing a face mask; if not, a notification is sent to the responsible administrator (a sketch of this alert step follows below). The system is built with Python and deep learning, using a Convolutional Neural Network, the Keras framework, and OpenCV. Keywords: Computer Vision, Object Detection, Object Tracking, COVID-19, Face Masks, Safety Improvement
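The detection step resembles the earlier mask-detector sketch; the snippet below sketches only the alert step, sending an email with a snapshot to an administrator. The SMTP server, credentials, and addresses are placeholders, and email is just one possible notification channel.

# Send an email alert with the offending snapshot attached. All addresses,
# the server, and the credentials are placeholders.
import smtplib
from email.message import EmailMessage

def notify_admin(snapshot_path):
    msg = EmailMessage()
    msg["Subject"] = "Face mask violation detected"
    msg["From"] = "camera@example.org"                   # placeholder sender
    msg["To"] = "admin@example.org"                      # placeholder admin address
    msg.set_content("A person without a mask was detected on the video stream.")
    with open(snapshot_path, "rb") as f:
        msg.add_attachment(f.read(), maintype="image",
                           subtype="jpeg", filename="violation.jpg")
    with smtplib.SMTP("smtp.example.org", 587) as smtp:  # placeholder server
        smtp.starttls()
        smtp.login("camera@example.org", "app-password") # placeholder credentials
        smtp.send_message(msg)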


2021 ◽  
Vol 26 (3) ◽  
Author(s):  
Oleh V. Kuzhylnyi ◽  
Tymofii A. Kodniev ◽  
Anton Yuriiovych Varfolomieiev ◽  
Ihor Vsevolodovych Mikhailenko

The paper investigates the possibility of efficiently implementing a GigE Vision compatible video stream source on a computing platform based on a system-on-a-chip with general-purpose ARM processor cores. In particular, to implement such a video source, a proprietary prototype of a GigE Vision compatible camera was developed on the Raspberry Pi 4 single-board computer; this platform was chosen for its widespread use and broad community support. The software part of the camera is implemented using the Video4Linux and Aravis libraries: the first is used for primary image capture from a video sensor connected to the single-board computer, and the second forms and transmits GigE Vision compatible video stream frames over the network. To estimate the delays in transmitting a video stream over an Ethernet channel, a methodology based on the Precision Time Protocol (PTP) was proposed and applied. The experiments showed that a software implementation of a GigE Vision compatible camera on single-board computers with general-purpose processor cores is quite promising. Without additional optimization, such an implementation can successfully transmit small frames (with a resolution of up to 640 × 480 pixels) with a delay of less than 10 ms, while some additional optimization may be required for larger frames. In particular, the MTU (maximum transmission unit) size plays a crucial role in the resulting latency, so implementing a faster camera requires a platform that supports the largest possible MTU (unfortunately, this is not possible with the Raspberry Pi 4, which supports a relatively small MTU of up to 2000 bytes). In addition, the image format conversion procedure can noticeably affect the delay, so it is highly desirable to avoid any frame processing on the transmitter side and, if possible, to broadcast raw images. If frame format conversion is necessary, the platform should be chosen so that it has free computing cores, making it possible to distribute the necessary frame conversions across those cores using parallelization techniques.
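A simplified probe in the spirit of the measurement methodology described above (not the authors' tooling): the sender stamps each frame with its capture time and the receiver subtracts that stamp from its own clock, which yields a true one-way delay only if both hosts are clock-synchronized, e.g. via PTP. The port, frame size, and TCP transport are assumptions.

# Sender stamps each "frame" with its capture time; receiver prints one-way
# delay, assuming both hosts' clocks are synchronized (e.g. via PTP).
import socket
import struct
import time

PORT = 50000
FRAME_BYTES = 614400          # e.g. 640 x 480 x 2 bytes (raw YUYV), assumed

def send_frames(receiver_ip, count=100):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect((receiver_ip, PORT))
    payload = bytes(FRAME_BYTES)                       # dummy frame contents
    for _ in range(count):
        sock.sendall(struct.pack("!d", time.time()) + payload)
    sock.close()

def receive_frames():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    buf = b""
    record = 8 + FRAME_BYTES                           # timestamp + frame
    while True:
        chunk = conn.recv(65536)
        if not chunk:
            break
        buf += chunk
        while len(buf) >= record:
            (t_sent,) = struct.unpack("!d", buf[:8])
            print("one-way delay: %.2f ms" % ((time.time() - t_sent) * 1000))
            buf = buf[record:]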

