smart camera
Recently Published Documents

TOTAL DOCUMENTS: 416 (FIVE YEARS: 53)
H-INDEX: 20 (FIVE YEARS: 3)

2021 ◽ Author(s): Doan Thanh Nghi ◽ Nguyen Thanh Hien Triet ◽ Thai Truong An

Author(s): Praveen Tumuluru ◽ S. Hrushikesava Raju ◽ Dorababu Sudarsa ◽ P. Venkateswara Rao ◽ Sampoornamma Sudarsa ◽ ...

Sensors ◽ 2021 ◽ Vol 21 (22) ◽ pp. 7436
Author(s): Leticia Oyuki Rojas-Perez ◽ Jose Martinez-Carranza

Recent advances have shown for the first time that it is possible to beat a human with an autonomous drone in a drone race. However, this solution relies heavily on external sensors, specifically on the use of a motion capture system. Thus, a truly autonomous solution demands performing computationally intensive tasks such as gate detection, drone localisation, and state estimation. To this end, other solutions rely on specialised hardware such as graphics processing units (GPUs), whose onboard versions are not as powerful as those available for desktop and server computers. An alternative is to combine specialised hardware with smart sensors capable of processing specific tasks on the chip, alleviating the need for the onboard processor to perform these computations. Motivated by this, we present the initial results of adapting a novel smart camera, known as the OpenCV AI Kit or OAK-D, as part of a solution for autonomous drone racing (ADR) that runs entirely on board. This smart camera performs neural inference on-chip without using a GPU. It can also perform depth estimation with a stereo rig and run neural network models using images from a 4K colour camera as the input. Additionally, seeking to limit the payload to 200 g, we present a new 3D-printed design of the camera’s back case that reduces the original weight by 40%, enabling the drone to carry it in tandem with a host onboard computer, the Intel Compute Stick, where we run a controller based on gate detection. The latter is performed with a neural model running on the OAK-D at an operating frequency of 40 Hz, enabling the drone to fly at a speed of 2 m/s. We deem these initial results promising for the development of a truly autonomous solution that will run intensive computational tasks fully on board.
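The workflow described in this abstract (an on-chip detection network on the OAK-D streaming compact results to a small host computer) can be illustrated with a short DepthAI-style sketch. This is a minimal, hypothetical example rather than the authors' implementation: the gate_detector.blob path, input resolution, and confidence threshold are assumptions.

```python
# Minimal sketch of running an on-chip detection network on an OAK-D with the
# DepthAI Python API. The gate-detection blob and its parameters are
# placeholders, not the authors' actual model.
import depthai as dai

pipeline = dai.Pipeline()

# Colour camera node; the detector consumes a downscaled preview stream.
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)          # assumed network input size
cam.setInterleaved(False)

# The network runs on the camera's VPU, so no GPU is involved.
nn = pipeline.create(dai.node.MobileNetDetectionNetwork)
nn.setBlobPath("gate_detector.blob")  # hypothetical compiled model
nn.setConfidenceThreshold(0.5)
cam.preview.link(nn.input)

# Stream only the detection messages back to the host computer.
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("detections")
nn.out.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("detections", maxSize=4, blocking=False)
    while True:
        for d in q.get().detections:  # bounding boxes with confidences
            print(d.label, d.confidence, d.xmin, d.ymin, d.xmax, d.ymax)
```

Because inference happens on the camera itself, the host only has to consume lightweight detection messages, which fits the fully on-board goal described in the abstract.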


2021 ◽ Author(s): Hoai-Nhan Nguyen ◽ Minh-Son Nguyen ◽ Tri-Nhut Do

Electronics ◽ 2021 ◽ Vol 10 (16) ◽ pp. 1898
Author(s): Isaac Sánchez Leal ◽ Irida Shallari ◽ Silvia Krug ◽ Axel Jantsch ◽ Mattias O’Nils

Image processing systems exploit image information for a purpose determined by the application at hand. Implementing image processing systems in an Internet of Things (IoT) context is a challenge because of the amount of data involved, which affects the three main node constraints: memory, latency and energy. One method to address these challenges is to partition tasks between the IoT node and a server. In this work, we present an in-depth analysis of how the input image size and its content affect, within conventional image processing systems, the decision on where tasks should be implemented with respect to node energy and latency. We focus on explaining how the characteristics of the image propagate through the system and ultimately influence the partitioning decision. Our results show that the image size significantly affects the efficiency of node offloading configurations, mainly because the cost of communication dominates the cost of processing as the image size increases. Furthermore, we observed that the image content has only a limited effect on the node offloading analysis.
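To make the offloading trade-off concrete, the following is an illustrative first-order model, not the paper's actual cost model: it compares the latency and energy a node would spend processing an image locally against transmitting the raw image to a server, with all constants (cycles per pixel, clock rate, radio rate, power figures) chosen as placeholder assumptions.

```python
# Illustrative first-order node-cost model (placeholder constants) showing why
# communication tends to dominate as the image size grows.

def node_cost(pixels, bytes_per_pixel=1,
              cycles_per_pixel=50, node_hz=400e6, node_active_w=0.5,
              radio_bps=1e6, radio_w=0.2, offload=False):
    """Return (latency_s, energy_j) for local processing or raw-image offload."""
    if offload:
        # Offload: the node only pays to transmit the raw image.
        tx_s = pixels * bytes_per_pixel * 8 / radio_bps
        return tx_s, tx_s * radio_w
    # Local: the node pays to run the vision task itself.
    cpu_s = pixels * cycles_per_pixel / node_hz
    return cpu_s, cpu_s * node_active_w

for side in (160, 320, 640, 1280):
    px = side * side
    local = node_cost(px)
    remote = node_cost(px, offload=True)
    print(f"{side:>4}x{side:<4} local: {local[0]*1e3:7.1f} ms {local[1]*1e3:7.2f} mJ"
          f" | offload: {remote[0]*1e3:8.1f} ms {remote[1]*1e3:8.2f} mJ")
```

In a real partitioning analysis the node would offload intermediate results rather than raw frames, but the sketch shows how both latency and energy scale with pixel count on each side of the partition.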


Sensors ◽ 2021 ◽ Vol 21 (9) ◽ pp. 2958
Author(s): Antonio Carlos Cob-Parro ◽ Cristina Losada-Gutiérrez ◽ Marta Marrón-Romera ◽ Alfredo Gardel-Vicente ◽ Ignacio Bravo-Muñoz

New processing methods based on artificial intelligence (AI) and deep learning are replacing traditional computer vision algorithms. The more advanced systems can process huge amounts of data in large computing facilities. In contrast, this paper presents a smart video surveillance system that executes AI algorithms on low-power embedded devices. The computer vision algorithm, typical of surveillance applications, aims to detect, count and track people’s movements in the monitored area, an application that requires a distributed smart camera system. The proposed AI application detects people in the surveillance area using a MobileNet-SSD architecture. In addition, using a robust Kalman filter bank, the algorithm keeps track of people across the video and also provides people-counting information. The detection results are excellent considering the constraints imposed on the process. The selected architecture for the edge node is based on an UpSquared2 device that includes a vision processing unit (VPU) capable of accelerating AI CNN inference. The results section reports the image processing time when multiple video cameras are connected to the same edge node, the people-detection precision and recall curves, and the energy consumption of the system. The discussion of the results shows the usefulness of deploying this smart camera node within a distributed surveillance system.
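As a rough sketch of the tracking stage, the following shows a single constant-velocity Kalman filter smoothing a detected person's centroid using OpenCV's cv2.KalmanFilter; the filter bank described in the abstract would hold one such filter per tracked person, associated with MobileNet-SSD detections each frame. The noise covariances and frame rate here are assumptions, not values from the paper.

```python
# Minimal sketch (not the authors' implementation) of one constant-velocity
# Kalman filter used to smooth a detected person's centroid. A bank of these,
# one per track, plus detection-to-track association yields tracking/counting.
import numpy as np
import cv2

def make_track_filter(x, y, dt=1.0 / 25.0):
    """Kalman filter with state [x, y, vx, vy] and measurement [x, y]."""
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, dt, 0],
                                    [0, 1, 0, dt],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], dtype=np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], dtype=np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2     # assumed noise
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1  # assumed noise
    kf.statePost = np.array([[x], [y], [0], [0]], dtype=np.float32)
    return kf

# Usage: predict every frame, correct whenever the detector reports a person.
kf = make_track_filter(320, 240)
predicted = kf.predict()                                  # prior for this frame
detection = np.array([[325.0], [243.0]], dtype=np.float32)  # detector centroid
smoothed = kf.correct(detection)                          # posterior estimate
print(smoothed[:2].ravel())
```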

