Detecting Compromised Edge Smart Cameras using Lightweight Environmental Fingerprint Consensus

2021 ◽  
Author(s):  
Deeraj Nagothu ◽  
Ronghua Xu ◽  
Yu Chen ◽  
Erik Blasch ◽  
Alexander Aved
2011 ◽  
Vol 403-408 ◽  
pp. 516-521 ◽  
Author(s):  
Sanjay Singh ◽  
Srinivasa Murali Dunga ◽  
AS Mandal ◽  
Chandra Shekhar ◽  
Santanu Chaudhury

In any remote surveillance scenario, smart cameras have to make intelligent decisions to generate summary frames that minimize communication and processing overhead. Video summary generation, in the context of a smart camera, is the process of merging the information from multiple frames. A summary generation scheme based on a clustering-based change detection algorithm has been implemented in our smart camera system to generate frames that deliver the requisite information. In this paper we propose an embedded-platform framework for implementing the summary generation scheme using a HW-SW co-design methodology. The complete system is implemented on a Xilinx XUP Virtex-II Pro FPGA board. The overall algorithm runs on a PowerPC 405 processor, and the blocks that are computationally intensive and most frequently called are implemented in hardware using VHDL. The system is designed using the Xilinx Embedded Development Kit (EDK).
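A minimal sketch of the clustering-based change detection idea behind summary-frame selection, assuming a coarse grid-average feature and a running cluster centroid; the paper's actual HW-SW partitioned algorithm and thresholds may differ.

```python
# Hypothetical sketch: keep a frame for the summary only when it drifts far
# from the current cluster centroid (i.e., a scene change is detected).
import numpy as np

def frame_signature(frame, grid=(8, 8)):
    """Average intensity over a coarse grid; a cheap per-frame feature."""
    h, w = frame.shape
    gh, gw = h // grid[0], w // grid[1]
    return np.array([
        frame[i * gh:(i + 1) * gh, j * gw:(j + 1) * gw].mean()
        for i in range(grid[0]) for j in range(grid[1])
    ])

def summarize(frames, threshold=12.0):
    """Select summary frames via clustering-style change detection."""
    summary, centroid = [], None
    for frame in frames:
        sig = frame_signature(frame)
        if centroid is None or np.linalg.norm(sig - centroid) > threshold:
            summary.append(frame)                  # change detected: new cluster
            centroid = sig
        else:
            centroid = 0.9 * centroid + 0.1 * sig  # absorb frame into cluster
    return summary

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = [rng.integers(0, 256, (240, 320)).astype(np.float32) for _ in range(10)]
    print(len(summarize(frames)), "summary frames selected")
```

In a co-design setting, the per-frame signature computation is the kind of regular, frequently called block that the paper moves into VHDL hardware, while the selection logic stays in software.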


1999 ◽  
Vol 09 (03) ◽  
pp. 175-186 ◽  
Author(s):  
HAROLD SZU

A unified Lyapunov function is given for the first time to prove the convergence of artificial neural network (ANN) learning methodologies, both supervised and unsupervised, from the viewpoint of minimizing the Helmholtz free energy at constant temperature. In 1982, Hopfield proved supervised learning by the energy minimization principle. In 1996, Bell & Sejnowski algorithmically demonstrated Independent Component Analysis (ICA), generalizing Principal Component Analysis (PCA), showing that the continuing reduction of early-vision redundancy towards "sparse edge maps" follows from maximizing the ANN output entropy. We explore the combination of both as a Lyapunov function whose proven convergence covers both learning methodologies. The unification is possible because of the thermodynamic Helmholtz free energy at a constant temperature. We derive the blind de-mixing condition for more than two objects using two sensor measurements, and we design two smart cameras with short-term working memory to achieve better image de-mixing of more than two objects. As a channel communication application, four images can be efficiently mixed using matrices [A0] and [A1] and sent through two channels.
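For intuition about the blind de-mixing step, the following is a generic two-sensor ICA demo (sources, mixing matrix, and FastICA de-mixing); it is not the paper's free-energy formulation or its two-smart-camera design, and the mixing matrix here is an arbitrary example.

```python
# Illustrative blind de-mixing of two sensor measurements with FastICA.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)

# Two independent sources, flattened to 1-D signals for simplicity.
s1 = np.sign(np.sin(np.linspace(0, 40, 4000)))   # square-wave-like source
s2 = rng.laplace(size=4000)                      # sparse (super-Gaussian) source
S = np.c_[s1, s2]

A = np.array([[1.0, 0.6],                        # unknown 2x2 mixing matrix
              [0.4, 1.0]])
X = S @ A.T                                      # two sensor measurements

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)                     # recovered sources (up to scale/order)
print("recovered shape:", S_hat.shape)
```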


2011 ◽  
Vol 268-270 ◽  
pp. 841-846
Author(s):  
Soo Mi Yang

In this paper, we describe an efficient ontology integration model for better context inference based on a distributed ontology framework. Context-aware computing with ontology-based inference is widely used in distributed surveillance environments. In such environments, surveillance devices such as smart cameras may carry heterogeneous video data with different transmission ranges, latencies, and formats. Moreover, even smart devices generally have limited memory and power and can manage only part of the ontology data. In our ontology integration model, each agent built into such a device obtains services not only from a region server but also from peer servers. For such a collaborative network, an effective cache framework that can handle heterogeneous devices is required for efficient ontology integration. We therefore propose an ontology integration model that adapts to the actual demands of each device and of its neighbors. Our scheme shows that the efficiency of the model results in better context inference.
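A hypothetical sketch of the lookup order the abstract implies: a device agent serves ontology fragments from its local cache, then the region server, then peer caches. All class and method names here are illustrative and are not the paper's API.

```python
# Illustrative collaborative cache for ontology fragments on a constrained device.
class OntologyAgent:
    def __init__(self, region_server, peers, capacity=64):
        self.cache = {}                          # fragment_id -> ontology fragment
        self.capacity = capacity
        self.region_server = region_server       # dict-like: fragment_id -> fragment
        self.peers = peers                       # other OntologyAgent instances

    def _store(self, fragment_id, fragment):
        if len(self.cache) >= self.capacity:     # naive eviction for illustration
            self.cache.pop(next(iter(self.cache)))
        self.cache[fragment_id] = fragment

    def get_fragment(self, fragment_id):
        if fragment_id in self.cache:            # 1. local cache
            return self.cache[fragment_id]
        if fragment_id in self.region_server:    # 2. region server
            fragment = self.region_server[fragment_id]
            self._store(fragment_id, fragment)
            return fragment
        for peer in self.peers:                  # 3. collaborative peer caches
            if fragment_id in peer.cache:
                fragment = peer.cache[fragment_id]
                self._store(fragment_id, fragment)
                return fragment
        return None
```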


2012 ◽  
Author(s):  
Oliver Sidla ◽  
Marcin Rosner ◽  
Michael Ulm ◽  
Gert Schwingshackl

Smart Cameras ◽  
2009 ◽  
pp. 359-364
Author(s):  
Ahmed Nabil Belbachir

2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Cheng-Jian Lin ◽  
Chun-Hui Lin ◽  
Shyh-Hau Wang

Deep learning has achieved huge success in computer vision applications such as self-driving vehicles, facial recognition, and robot control. A growing need to deploy systems in resource-constrained environments such as smart cameras, autonomous vehicles, robots, smartphones, and smart wearable devices drives one of the current mainstream directions in convolutional neural networks: reducing model complexity while maintaining high accuracy. In this study, the proposed efficient light convolutional neural network (ELNet) comprises three convolutional modules that require fewer computations, allowing the network to be implemented on resource-constrained hardware. Classification on the CIFAR-10 and CIFAR-100 datasets was used to verify model performance. According to the experimental results, ELNet reached 92.3% and 69% accuracy on CIFAR-10 and CIFAR-100, respectively; moreover, ELNet effectively lowered the computational complexity and number of parameters in comparison with other CNN architectures.
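A minimal PyTorch sketch of a lightweight three-module CNN built from depthwise separable convolutions. The abstract does not specify ELNet's module design, so this stands in only to illustrate the complexity-reduction idea; the module structure, channel widths, and names are assumptions.

```python
# Lightweight CNN sketch: depthwise + pointwise convolutions cost far fewer
# multiply-accumulates than standard convolutions of the same width.
import torch
import torch.nn as nn

class LightModule(nn.Module):
    """Depthwise 3x3 conv followed by a pointwise 1x1 conv."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, stride, 1, groups=in_ch, bias=False),
            nn.BatchNorm2d(in_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class TinyNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.stem = nn.Conv2d(3, 32, 3, padding=1)
        self.features = nn.Sequential(           # three lightweight modules
            LightModule(32, 64, stride=2),
            LightModule(64, 128, stride=2),
            LightModule(128, 256, stride=2),
        )
        self.head = nn.Linear(256, num_classes)

    def forward(self, x):
        x = self.features(self.stem(x))
        x = x.mean(dim=(2, 3))                   # global average pooling
        return self.head(x)

if __name__ == "__main__":
    logits = TinyNet(num_classes=10)(torch.randn(1, 3, 32, 32))
    print(logits.shape)                          # torch.Size([1, 10])
```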


2015 ◽  
pp. 167-188
Author(s):  
Massimo Magrini ◽  
Davide Moroni ◽  
Gabriele Pieri ◽  
Ovidio Salvetti
