surveillance cameras
Recently Published Documents


TOTAL DOCUMENTS

272
(FIVE YEARS 106)

H-INDEX

14
(FIVE YEARS 4)

2022 ◽  
Vol 133 ◽  
pp. 104034
Author(s):  
Kyung-Su Kang ◽  
Young-Woon Cho ◽  
Kyo-Hoon Jin ◽  
Young-Bin Kim ◽  
Han-Guk Ryu

2021 ◽  
Vol 33 (6) ◽  
pp. 1303-1314
Author(s):  
Masato Fujitake ◽  
Makito Inoue ◽  
Takashi Yoshimi

This paper describes the development of a robust object tracking system that combines image-processing and machine-learning detection methods for automatic construction machine tracking cameras at unmanned construction sites. In recent years, unmanned construction technology has been developed to prevent secondary disasters from harming workers in hazardous areas. Surveillance cameras at disaster sites monitor the environment and the movements of construction machines. By watching footage from the surveillance cameras, machine operators can control the construction machines from a safe remote site. However, to keep the surveillance cameras pointed at the target machines, camera operators are also required to work alongside the machine operators. To improve efficiency, an automatic tracking camera system for construction machines is required. We propose a robust and scalable object tracking method and a robust object detection algorithm, and integrate the two into an accurate and robust tracking system for construction machines. Our proposed image-processing algorithm is able to continue tracking for a longer period than previous methods, and the proposed machine-learning object detection method detects machines robustly by focusing on the component parts of the target objects. Evaluations in real-world field scenarios demonstrate that our methods are more accurate and robust than existing off-the-shelf object tracking algorithms while maintaining practical real-time processing performance.
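
A minimal sketch of the general detect-then-track idea the abstract describes, using an off-the-shelf OpenCV tracker that is periodically re-initialized from a detector; the paper's own part-based detector and long-term tracking algorithm are not reproduced, and detect_machine() is a hypothetical placeholder.

```python
# Generic detect-then-track loop: a short-term tracker follows the machine
# between frames and is re-seeded from the detector so it can recover from drift.
import cv2

def detect_machine(frame):
    """Hypothetical detector; should return an (x, y, w, h) box or None."""
    raise NotImplementedError

def track_video(capture, redetect_every=30):
    tracker, frame_idx = None, 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if tracker is None or frame_idx % redetect_every == 0:
            box = detect_machine(frame)
            if box is not None:
                tracker = cv2.TrackerCSRT_create()  # needs opencv-contrib-python
                tracker.init(frame, box)
        else:
            ok, box = tracker.update(frame)
            if not ok:
                tracker = None  # target lost; fall back to detection next frame
        frame_idx += 1
```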


2021 ◽  
Author(s):  
N. Duraichi ◽  
K. Arun Kumar ◽  
N. Lokesh Sathya ◽  
S. Lokesh

Vehicle robbery and untraced car theft have become a serious issue across the nation. Many culprits use unauthorized vehicles to carry out illegal activities and then abandon them. A major cause of accidents is vehicles driven by unknown users: reckless, inexperienced driving with no regard for speed limits leads to crashes that increase the death rate. Our goal is to build a system that allows only a person holding an authorized license to drive the vehicle. For this purpose, we plan to install an automated system in the vehicle that introduces smart license verification technology. Various techniques for identifying the driver have been described, yet vehicle thefts continue despite the surveillance cameras set up to monitor such activities and the technologies deployed to reduce robbery. We therefore propose a system based on deep learning. Compared with conventional detection techniques, deep learning collects a number of input samples and compares them with the details stored in the database. Once authentication succeeds, the engine starts; if the driver is not authorized, a buzzer sounds and the vehicle does not start until the details of a registered person are authenticated.
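
As an illustration of the authentication step described above, the sketch below reduces the deep-learning model to an embedding comparison against enrolled records using cosine similarity; the embedding function, the enrolled database, the 0.8 threshold and the buzzer/engine hooks are all assumptions, since the abstract does not specify them.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def authenticate(sample_embedding, enrolled_db, threshold=0.8):
    """Match a captured embedding against every enrolled (name -> embedding) record."""
    best_name, best_score = None, -1.0
    for name, emb in enrolled_db.items():
        score = cosine_similarity(sample_embedding, emb)
        if score > best_score:
            best_name, best_score = name, score
    if best_score >= threshold:
        return best_name   # authorized: allow the engine to start
    return None            # unauthorized: sound the buzzer, keep the engine locked
```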


2021 ◽  
pp. 127312
Author(s):  
Xing Wang ◽  
Meizhen Wang ◽  
Xuejun Liu ◽  
Litao Zhu ◽  
Thomas Glade ◽  
...  

2021 ◽  
Author(s):  
Natan Santos Moura ◽  
João Medrado Gondim ◽  
Daniela Barreiro Claro ◽  
Marlo Souza ◽  
Roberto de Cerqueira Figueiredo

The employment of video surveillance cameras by public safety agencies enables incident detection in monitored cities by using object detection for scene description, enhancing protection of the general public. Object detection has its drawbacks, such as false positives. Our work aims to enhance object detection and image classification by employing IoU (Intersection over Union) to minimize false positives and to identify weapon holders or fire in a frame, adding more information to the scene.
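
For reference, IoU of two axis-aligned boxes is the area of their intersection divided by the area of their union. The sketch below computes it for (x1, y1, x2, y2) boxes; the 0.3 threshold in the usage comment is illustrative, not taken from the paper.

```python
def iou(box_a, box_b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)   # intersection rectangle
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    if inter == 0:
        return 0.0
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / float(area_a + area_b - inter)

# Example: treat a detected person as a "weapon holder" only if the person
# and weapon boxes overlap sufficiently.
# if iou(person_box, weapon_box) > 0.3: ...
```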


Author(s):  
Sokyna Alqatawneh ◽  
Khalid Jaber ◽  
Mosa Salah ◽  
Dalal Yehia ◽  
Omayma Alqatawneh ◽  
...  

Like many countries, Jordan has resorted to lockdown in an attempt to contain the outbreak of Coronavirus (Covid-19). A set of precautions such as quarantines, isolation, and social distancing was taken to tackle the rapid spread of Covid-19. However, the authorities faced a serious issue in enforcing quarantine instructions and social distancing among the population. In this paper, a social distancing monitoring system is designed to alert the authorities if any citizen violates the quarantine instructions, and to detect crowds and measure their social distancing using an object tracking technique that works in real time. The system utilises the widespread surveillance cameras that already exist in public places and outside many residential buildings. To assess the effectiveness of this approach, the system uses cameras deployed on the campus of Al-Zaytoonah University of Jordan. The results showed the efficiency of the system in tracking people and determining the distances between them in accordance with public safety instructions. This work is the first approach to handle the classification challenges for moving objects using a shared-memory multicore model.
Keywords: Covid-19, Parallel computing, Risk management, Social distancing, Tracking system.
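
A minimal sketch of the pairwise distance check behind such a system, assuming person detections have already been mapped to ground-plane coordinates in metres (a real deployment would need camera calibration, which the abstract does not detail); the 2-metre threshold is illustrative.

```python
from itertools import combinations
from math import hypot

def distancing_violations(people, min_distance=2.0):
    """people: dict mapping a track id to its (x, y) ground-plane position in metres."""
    violations = []
    for (id_a, pos_a), (id_b, pos_b) in combinations(people.items(), 2):
        if hypot(pos_a[0] - pos_b[0], pos_a[1] - pos_b[1]) < min_distance:
            violations.append((id_a, id_b))  # pair standing too close together
    return violations

# Example: alert if any tracked pair is closer than 2 m.
# if distancing_violations({1: (0.0, 0.0), 2: (1.2, 0.5)}): raise_alert()
```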


2021 ◽  
Author(s):  
◽  
Alexander John Petre Kane

The Blackeye II camera, produced by Kinopta, is used for remote security, conservation and traffic flow surveillance. The camera uses an image sensor to acquire photographs which undergo image processing and JPEG encoding on a microprocessor. Although the microprocessor performs other tasks, it is the processing and encoding of images that limit the frame rate of the camera to 2 frames per second (fps). Clients have requested an increase to 12.5 fps while adding more image processing to each photograph. The current microprocessor-based system is unable to achieve this.

Custom digital logic systems perform well on processes that naturally form a pipeline, such as the Blackeye II image processing system. This project develops a digital logic system based on an FPGA to receive images from the image sensor, perform the required image processing operations, encode the images in JPEG format and send them on to the microprocessor. The objective is to implement a proof of concept device based upon the Blackeye II’s existing hardware and an FPGA development board. It will implement the proposed pipeline including one example of an image processing operation.

A JPEG encoder is designed to process the 752 × 480 greyscale photographs from the image processor in real time. The JPEG encoder consists of four stages: discrete cosine transform (DCT), quantisation, zig-zag buffer and Huffman encoder. The DCT design is based upon the work of Woods et al. [1], which is improved on. An analysis of the relationship between precision and accuracy in the DCT and quantisation stages is used to minimise the system’s resource requirements. The JPEG encoder is successfully tested in simulation.

Input and output stages are added to the design. The input stage receives data from the image sensor and removes breaks in the data stream. The output stage must concatenate the data from the JPEG encoder and transmit it to the microprocessor via the microprocessor’s ISI (image sensor interface) peripheral. An image sharpening filter is developed and inserted into the pipeline between the input and JPEG encoder. Because remote surveillance cameras are battery powered, the minimisation of power consumption is a key concern. To minimise power consumption a mechanism is introduced to track those modules in the pipeline that are in use at any time. Any not in use are paused by gating the module’s clock source.

Once the system is complete and tested in simulation it is loaded into hardware. The FPGA development board is attached to the image sensor board and microprocessor board of the Blackeye II camera by a purpose-built breakout board. Plugging the microprocessor board into a PC provides a live stream of images proving the successful operation of the FPGA system. The project objectives were exceeded by increasing the frame rate of the Blackeye II to 20 fps, which will not decrease with additional image processing operations.

The project was viewed as a success by Kinopta, who have committed to its further development.
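
As a floating-point reference for the first two stages of the JPEG pipeline described above (2-D DCT and quantisation of one 8x8 block), the sketch below uses NumPy/SciPy; the FPGA design itself uses fixed-point arithmetic, and the flat quantisation table here is only illustrative, not the camera's own.

```python
import numpy as np
from scipy.fftpack import dct

def dct2(block):
    """2-D type-II DCT of an 8x8 block with orthonormal scaling."""
    return dct(dct(block.T, norm='ortho').T, norm='ortho')

def encode_block(block, qtable):
    """Level-shift, transform and quantise one 8x8 greyscale block."""
    coeffs = dct2(block.astype(np.float64) - 128.0)
    return np.round(coeffs / qtable).astype(np.int32)

# Illustrative flat quantisation table; a real encoder uses a perceptually
# weighted luminance table.
qtable = np.full((8, 8), 16.0)
```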


2021 ◽  
Vol 7 (2) ◽  
Author(s):  
Shu Lea Cheang ◽  
With Paula Gardner and Stephen Surlin

The title of Shu Lea Cheang’s 3x3x6, which represented Taiwan at the 2019 Venice Biennale, derives from the 21st-century high-security prison cell measuring 9 square meters and equipped with 6 surveillance cameras. As an immersive installation, 3x3x6 comprises multiple interfaces that reflect on the construction of sexual subjectivity by technologies of confinement and control, from physical incarceration to the omnipresent surveillance systems of contemporary society, from Jeremy Bentham’s panopticon conceptualized in 1791 to China’s Sharp Eyes, which boasts 200 million surveillance cameras with facial recognition capacity for its population of 1.4 billion. By employing strategic and technical interventions, 3x3x6 investigates 10 criminal cases in which prisoners across time and space were incarcerated for sexual provocation and gender affirmation. The exhibition constructs collective counter-accounts of sexuality in which trans punk fiction and queer and anti-colonial imaginations hack the operating system of the history of sexual subjection. This Image and Text piece intersperses images from the exhibition with handout texts written by curator Paul B. Preciado (against a grey background), as well as an interview between special section co-editor Paula Gardner and the artist that brings the extraordinary exhibition into further conversation with feminist technoscience scholarship. The project website is available at https://3x3x6-v2.webflow.io/.

