Smart Cameras in Embedded Systems

A smart camera performs real-time analysis to recognize scenic elements. Smart cameras are useful in a variety of scenarios, including surveillance and medicine. We have built a real-time system for recognizing gestures. Our smart camera uses novel algorithms to recognize gestures based on low-level analysis of body parts, as well as hidden Markov models for the moves that comprise the gestures. These algorithms run on a TriMedia processor. Our system can recognize gestures at a rate of 20 frames per second, and it can fuse the results of multiple cameras. Because the smart camera is a complete vision system contained in a single housing, it can be used anywhere, in any industry where image processing can be applied: companies no longer need a cabinet for their computing equipment, since the computer is housed within the camera itself. In the pharmaceutical industry and in clean rooms, where not even dust is allowed, this is a significant advantage; when a square meter of floor space is expensive, replacing a component rack or cabinet with a single smart camera can save a great deal of money.
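The HMM stage of such a pipeline can be illustrated with a short sketch: score the observed sequence of low-level "move" symbols under one discrete HMM per gesture (via the scaled forward algorithm) and pick the best-scoring model. All parameters below (two toy gestures, two hidden states, three move symbols) are hypothetical placeholders, not the authors' trained models:

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM) for a discrete-observation HMM.
    pi: initial state probs (N,), A: transitions (N, N), B: emissions (N, M)."""
    alpha = pi * B[:, obs[0]]
    log_like = np.log(alpha.sum())
    alpha /= alpha.sum()                 # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        log_like += np.log(c)
        alpha /= c
    return log_like

# Two hypothetical gesture models over three observable "moves"
pi = np.array([1.0, 0.0])
A_wave  = np.array([[0.7, 0.3], [0.3, 0.7]])
A_point = np.array([[0.9, 0.1], [0.1, 0.9]])
B_wave  = np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1]])
B_point = np.array([[0.1, 0.1, 0.8], [0.1, 0.8, 0.1]])

moves = [0, 1, 0, 1, 0]  # move symbols produced by the low-level body-part analysis
scores = {
    "wave":  forward_log_likelihood(moves, pi, A_wave, B_wave),
    "point": forward_log_likelihood(moves, pi, A_point, B_point),
}
gesture = max(scores, key=scores.get)   # alternating moves fit the "wave" model
```

The forward pass is O(N²T) per model, which is what makes per-frame classification feasible on an embedded media processor.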

Author(s):  
Tomás Serrano-Ramírez ◽  
Ninfa del Carmen Lozano-Rincón ◽  
Arturo Mandujano-Nava ◽  
Yosafat Jetsemaní Sámano-Flores

Computer vision systems are an essential part of industrial automation tasks such as identification, selection, measurement, defect detection, and quality control of parts and components. Smart cameras can perform these tasks, but their high acquisition and maintenance costs are restrictive. In this work, a novel low-cost artificial vision system is proposed for classifying objects in real time, using the Raspberry Pi 3B+ embedded system, a web camera, and the OpenCV computer vision library. The suggested technique comprises training a supervised classifier of the Haar cascade type on image banks of the object to be recognized, then generating a predictive model that is put to the test with real-time detection, along with a calculation of the prediction error. The goal is a vision system that is powerful, affordable, and developed entirely with free software.
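A minimal sketch of this kind of setup, assuming a cascade model file named `object_cascade.xml` produced by the training step (the filename and the error metric below are illustrative, not from the paper). The detection loop uses OpenCV's standard `CascadeClassifier`/`detectMultiScale` API; the error helper is kept pure Python:

```python
def prediction_error(true_positives, false_positives, false_negatives):
    """Simple prediction-error rate for a detection test run:
    fraction of decisions that were wrong (illustrative metric)."""
    total = true_positives + false_positives + false_negatives
    return (false_positives + false_negatives) / total if total else 0.0

def run_detector(cascade_path="object_cascade.xml"):  # hypothetical model file
    import cv2  # OpenCV; imported here so the helper above stays importable without it
    cascade = cv2.CascadeClassifier(cascade_path)
    cap = cv2.VideoCapture(0)                    # USB web camera
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        objects = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in objects:             # draw a box around each detection
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("detections", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```

On a Raspberry Pi, `scaleFactor` and `minNeighbors` are the main knobs for trading detection recall against per-frame cost.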


10.5772/57135 ◽  
2013 ◽  
Vol 10 (12) ◽  
pp. 402 ◽  
Author(s):  
Abdul Waheed Malik ◽  
Benny Thörnberg ◽  
Prasanna Kumar

2019 ◽  
Vol 16 (2) ◽  
pp. 649-654
Author(s):  
S. Navaneethan ◽  
N. Nandhagopal ◽  
V. Nivedita

A threshold-based pupil detection algorithm was found to be the most efficient method for detecting the human eye. The main aim of the proposed work is an implementation of a real-time system on an FPGA board to detect and track a human eye. The pupil detection algorithm involves thresholding and image filtering; the pupil location is identified by computing the center of the detected region. The proposed hardware architecture is designed in Verilog HDL and implemented on an Altera DE2 Cyclone II FPGA for prototyping, and its logic utilization is compared with existing work. The overall setup comprises the Cyclone II FPGA, an E2V camera, SDRAM, and a VGA monitor. Experimental results demonstrate the accuracy and effectiveness of the real-time hardware implementation, as the algorithm was able to handle various types of input video frames, with all calculations performed in real time. Although the system could be further improved, the project succeeded in enabling any input eye to be accurately detected and tracked.
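The threshold-and-centroid computation at the core of such a design can be modeled in a few lines (a software sketch of the idea, not the authors' Verilog; the threshold value and synthetic frame below are illustrative). The pupil is the darkest region of the eye image, so pixels below a threshold are marked and the centroid of the mask gives the pupil center:

```python
import numpy as np

def detect_pupil(gray, threshold=40):
    """Threshold-based pupil detection: mark pixels darker than `threshold`
    (the pupil is the darkest region) and return the centroid of the mask.
    A real pipeline would filter the image first to suppress noise."""
    mask = gray < threshold
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None                          # no dark region found
    return int(xs.mean()), int(ys.mean())    # (x, y) pupil center

# Synthetic 8-bit eye frame: bright background with a dark "pupil" blob
eye = np.full((120, 160), 200, dtype=np.uint8)
eye[50:70, 80:100] = 10                      # dark square standing in for the pupil
center = detect_pupil(eye)                   # (89, 59) for this synthetic frame
```

In hardware, the same result falls out of three running sums (pixel count, Σx, Σy) accumulated while the frame streams through, which is why the method maps well onto an FPGA.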


2011 ◽  
Vol 08 (02) ◽  
pp. 103-116 ◽  
Author(s):  
MAYANK BARANWAL ◽  
M. TAHIR KHAN ◽  
CLARENCE W. DE SILVA

This paper presents a method for detecting abnormal motion in real time using a computer vision system. The method is based on modeling the human body image, taking into account both the orientation and the velocity of prominent body parts. A comparative study is made between this method and existing algorithms based on optical flow and on accelerometer body sensors. In the real-time experiments conducted in the present work, the developed method is found to be efficient at characterizing human motion and classifying it into basic types such as falling, sitting, and walking. The method uses a Radial Basis Function Network (RBFN) to compute a severity coefficient associated with the type of motion, based on experience. The paper evaluates the various methods and incorporates the advantages of the other methods to develop a more reliable system for abnormal motion detection.
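An RBFN of this kind can be sketched briefly: Gaussian units centered on prototype motions, linearly combined into a severity coefficient. The feature vector, prototype centers, widths, and weights below are hypothetical stand-ins for values a trained network would learn, not the paper's parameters:

```python
import numpy as np

def rbfn_severity(x, centers, widths, weights):
    """Radial Basis Function Network: Gaussian activations over motion
    features, combined into a normalized severity coefficient in [0, 1]."""
    d2 = ((centers - x) ** 2).sum(axis=1)          # squared distance to each prototype
    phi = np.exp(-d2 / (2.0 * widths ** 2))        # Gaussian unit activations
    return float(weights @ phi / phi.sum())        # normalized weighted output

# Hypothetical motion prototypes: [trunk angle (rad), vertical velocity (m/s)]
centers = np.array([[1.4, -2.0],    # falling
                    [0.8, -0.5],    # sitting
                    [0.1,  0.0]])   # walking
widths  = np.array([0.5, 0.5, 0.5])
weights = np.array([1.0, 0.4, 0.0])  # severity assigned to each motion type

fall_like = rbfn_severity(np.array([1.3, -1.8]), centers, widths, weights)
walk_like = rbfn_severity(np.array([0.0,  0.1]), centers, widths, weights)
```

A fall-like feature vector lands near the high-severity prototype and scores close to 1, while a walk-like vector scores near 0, which is the behavior a severity coefficient needs.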


2021 ◽  
Author(s):  
Sergiy Zhelnakov

Video data processing tasks are traditionally performed through software-based systems when various algorithms must be applied to the data and timing is not critical; through DSPs when certain time constraints are set but the set of tasks is limited; or through ASICs when the highest performance is required, the set of tasks is fixed and highly optimized, the data stream does not change, and the number of data streams is limited. For a real-time system that must operate on multiple data streams, which may also change over time and require various processing algorithms, none of these approaches suffices: timing requirements and power limitations rule out a sequential CPU, and an ASIC becomes too large to accommodate separate processing circuits for each algorithm and its associated modes. Only a run-time reconfigurable (RTR) FPGA approach allows implementation of such a system. This thesis presents a real-time stereo vision system, with elements of synthesis of interactive 3-D virtual objects, designed and implemented on an FPGA-based reconfigurable platform. The FPGA chip integrates a hybrid architecture with multi-mode, multi-stream processing for time-critical tasks, together with embedded microprocessor(s) for the complex 3-D object synthesis algorithms whose timing requirements are less strict. An approach to the formal representation and processing of the 3-D virtual objects and their transformations is also analyzed and presented. Architecture synthesis and optimization for a hybrid system are also considered. The experimental results prove the effectiveness of the proposed approach: the FPGA-based system-on-chip provides stereo visualization in different modes (actual image and edge-detection image), with synthesized 3-D controls (pressed and released buttons).
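The edge-detection mode mentioned above typically reduces to a per-pixel Sobel computation; here is a software model of that datapath (a sketch of the standard operator, not the thesis's Verilog), using the hardware-friendly |Gx| + |Gy| magnitude approximation that avoids a square root:

```python
import numpy as np

def sobel_magnitude(img):
    """3x3 Sobel gradient magnitude: the per-pixel computation a streaming
    FPGA edge-detection pipeline implements with line buffers."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.int32)
    ky = kx.T
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.int32)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = img[y-1:y+2, x-1:x+2].astype(np.int32)
            gx = int((win * kx).sum())
            gy = int((win * ky).sum())
            out[y, x] = abs(gx) + abs(gy)   # |Gx| + |Gy|: cheap in hardware
    return out

# Vertical step edge: left half dark, right half bright
img = np.zeros((5, 6), dtype=np.uint8)
img[:, 3:] = 100
edges = sobel_magnitude(img)                # strong response along the step
```

In hardware the two 3x3 windows come from three line buffers, so the whole operator fits in a handful of adders per pixel clock, which is what makes it suitable for a run-time reconfigurable video mode.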


2015 ◽  
Vol 2015 ◽  
pp. 1-8 ◽  
Author(s):  
Raphaël Beamonte ◽  
Michel R. Dagenais

Real-time systems have always been difficult to monitor and debug because their timing constraints rule out any tool that significantly impacts system latency and performance. Tracing is often the most reliable tool available for studying real-time systems. The real-time behavior of Linux systems has improved recently, and latencies in the low microsecond range are now possible. Tracers must therefore ensure that their overhead stays within that range, remains predictable, and scales well to multiple cores. The LTTng 2.0 tools have been optimized for multicore performance, scalability, and flexibility. We used and extended the real-time verification tool rteval to study the impact of LTTng on the maximum latency of hard real-time applications. We introduced a new real-time analysis tool to establish a baseline of real-time system performance and then to measure the impact added by tracing the kernel and userspace (UST) with LTTng. We then identified latency problems, modified LTTng-UST accordingly, and adjusted the procedure to isolate the shielded real-time cores from the RCU interprocess synchronization routines. This work resulted in extended tools for measuring the real-time properties of multicore Linux systems, a characterization of the impact of the LTTng kernel and UST tracing tools, and improvements to LTTng.
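The baseline-measurement idea can be illustrated with a deliberately crude probe: sleep for a fixed period and record how late each wake-up arrives, keeping the worst case. This is only a sketch of the concept; a real harness such as rteval drives cyclictest-style timing on RT-patched kernels, and the period and iteration count here are arbitrary:

```python
import time

def wakeup_latency_us(period_us=1000, iterations=2000):
    """Crude wake-up latency probe: request a fixed sleep and measure how
    much later than requested the wake-up actually occurs. Returns the
    worst-case lateness in microseconds over all iterations."""
    period_ns = period_us * 1000
    worst = 0
    for _ in range(iterations):
        t0 = time.monotonic_ns()
        time.sleep(period_us / 1_000_000)
        late_ns = time.monotonic_ns() - t0 - period_ns
        worst = max(worst, late_ns)
    return worst / 1000.0

worst_us = wakeup_latency_us(period_us=1000, iterations=200)
```

Running such a probe first without tracing and then with kernel and userspace tracing enabled gives the before/after comparison the abstract describes: the difference in worst-case lateness is the tracer's added latency.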



