A multi-mode video-stream processor with cyclically reconfigurable architecture

Author(s): Valeri Kirischian, Vadim Geurkov, Lev Kirischian

Author(s): E. D. Salmon, J. C. Waters, C. Waterman-Storer

We have developed a multi-mode digital imaging system that acquires images with a cooled CCD camera (Figure 1). A multiple-band-pass dichromatic mirror and robotically controlled filter wheels provide wavelength selection for epi-fluorescence. Shutters select illumination either by epi-fluorescence or by transmitted light for phase contrast or DIC. Many of our experiments involve investigations of spindle-assembly dynamics and chromosome movements in live cells or in unfixed reconstituted preparations in vitro, in which photodamage and phototoxicity are major concerns. As a consequence, a major factor in the design was optical efficiency: achieving the highest image quality with the fewest illumination photons. This principle applies to both the epi-fluorescence and transmitted-light imaging modes. In living cells and extracts, microtubules are visualized using X-rhodamine-labeled tubulin. Photoactivation of C2CF-fluorescein-labeled tubulin is used to locally mark microtubules in studies of microtubule dynamics and translocation. Chromosomes are labeled with the DNA-intercalating dyes DAPI or Hoechst.


2020, Vol. 39 (6), pp. 8463-8475
Author(s): Palanivel Srinivasan, Manivannan Doraipandian

Rare-event detection can be performed using spatial-domain and frequency-domain procedures. Footage from omnipresent surveillance cameras is growing exponentially over time, and monitoring every event manually is impractical and time-consuming, so an automated rare-event detection mechanism is required to make the process manageable. In this work, a Context-Free Grammar (CFG) is developed for detecting rare events in a video stream, and an Artificial Neural Network (ANN) is trained on the CFG. A set of dedicated algorithms performs frame splitting, edge detection, and background subtraction, and converts the processed data into the CFG. The CFG is then converted into nodes and edges to form a graph, which is fed to the input layer of the ANN to classify events as normal or rare; the graph derived from the CFG for the input video stream is used to train the ANN. The performance of the resulting Artificial Neural Network Based Context-Free Grammar Rare Event Detection (ACFG-RED) model is compared with existing techniques using metrics such as accuracy, precision, sensitivity, recall, average processing time, and average processing power. The ACFG-RED model shows better metric values than the other techniques and provides an improved solution for detecting rare events in video streams.
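The per-frame preprocessing stage described above (background subtraction and edge detection, ahead of the CFG conversion) can be sketched as follows. This is a minimal illustration, not the paper's actual algorithms: it assumes a static background model, simple frame differencing for background subtraction, and a first-difference gradient as a stand-in edge detector; the function names and threshold are hypothetical.

```python
import numpy as np

def background_subtract(frame, background, thresh=25):
    """Flag pixels whose absolute difference from a static background
    model exceeds a threshold (simplest possible foreground mask)."""
    diff = np.abs(frame.astype(np.int32) - background.astype(np.int32))
    return (diff > thresh).astype(np.uint8)

def edge_map(frame):
    """Approximate gradient magnitude with first differences along
    each axis (a stand-in for a real edge detector such as Sobel)."""
    f = frame.astype(np.int32)
    gx = np.abs(np.diff(f, axis=1))[:-1, :]  # horizontal differences
    gy = np.abs(np.diff(f, axis=0))[:, :-1]  # vertical differences
    return gx + gy

# Toy 4x4 frames: the current frame differs from the background in
# one 2x2 block, standing in for a moving object.
background = np.zeros((4, 4), dtype=np.uint8)
frame = background.copy()
frame[1:3, 1:3] = 200

mask = background_subtract(frame, background)
print(mask.sum())  # → 4 changed pixels
```

In the paper's pipeline, masks and edge maps like these would then be encoded as CFG terminals and assembled into the graph that the ANN classifies; that conversion is specific to the authors' grammar and is not reproduced here.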


2009, Vol. E92-B (12), pp. 3717-3725
Author(s): Thomas HUNZIKER, Ziyang JU, Dirk DAHLHAUS

2014, Vol. E97.C (7), pp. 781-786
Author(s): Mohammad NASIR UDDIN, Takaaki KIZU, Yasuhiro HINOKUMA, Kazuhiro TANABE, Akio TAJIMA, ...

2016, Vol. E99.C (7), pp. 866-877
Author(s): Abdulfattah M. OBEID, Syed Manzoor QASIM, Mohammed S. BENSALEH, Abdullah A. ALJUFFRI

2019, Vol. 4 (91), pp. 21-29
Author(s): Yaroslav Trofimenko, Lyudmila Vinogradova, Evgeniy Ershov
