WindSTORM: Robust online image processing for high-throughput nanoscopy

2019 ◽  
Vol 5 (4) ◽  
pp. eaaw0683 ◽  
Author(s):  
Hongqiang Ma ◽  
Jianquan Xu ◽  
Yang Liu

High-throughput nanoscopy is becoming increasingly important for unraveling complex biological processes from a large, heterogeneous cell population at nanoscale resolution. High-density emitter localization combined with a large field of view and a fast imaging frame rate is commonly used to achieve high imaging throughput, but the image processing speed and the presence of heterogeneous background in the dense emitter scenario remain bottlenecks. Here, we present a simple non-iterative approach, referred to as WindSTORM, to achieve high-speed, high-density emitter localization with robust performance across various image characteristics. We demonstrate that WindSTORM improves the computation speed by two orders of magnitude on CPU and three orders of magnitude with GPU acceleration, realizing online image processing without compromising localization accuracy. Further, WindSTORM is highly robust, maximizing localization accuracy and minimizing image artifacts in the presence of nonuniform background. WindSTORM paves the way for next-generation high-throughput nanoscopy.
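The abstract does not spell out WindSTORM's actual algorithm, but the general workflow of non-iterative emitter localization with background correction can be sketched as follows. The median-filter background estimate, threshold, window size, and centroid refinement below are illustrative assumptions, not the published method.

```python
# Minimal sketch of non-iterative single-molecule localization with background
# correction -- an illustration of the general idea, not the WindSTORM algorithm.
# Filter sizes, the threshold, and the centroid-based refinement are assumptions.
import numpy as np
from scipy import ndimage

def localize_emitters(frame, psf_sigma=1.3, threshold=50.0):
    """Return (y, x) sub-pixel emitter positions for a single raw frame."""
    frame = frame.astype(float)

    # 1. Estimate the slowly varying background with a wide median filter and remove it.
    background = ndimage.median_filter(frame, size=15)
    corrected = np.clip(frame - background, 0, None)

    # 2. Smooth with a Gaussian roughly matched to the point-spread function.
    smoothed = ndimage.gaussian_filter(corrected, sigma=psf_sigma)

    # 3. Candidate emitters are local maxima above an intensity threshold.
    local_max = smoothed == ndimage.maximum_filter(smoothed, size=5)
    candidates = np.argwhere(local_max & (smoothed > threshold))

    # 4. Non-iterative sub-pixel refinement: intensity-weighted centroid in a small window.
    positions, half = [], 3
    for y, x in candidates:
        y0, x0 = max(y - half, 0), max(x - half, 0)
        win = corrected[y0:y + half + 1, x0:x + half + 1]
        yy, xx = np.mgrid[0:win.shape[0], 0:win.shape[1]]
        total = win.sum()
        if total > 0:
            positions.append((y0 + (yy * win).sum() / total,
                              x0 + (xx * win).sum() / total))
    return np.array(positions)
```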

2018 ◽  
Author(s):  
Hongqiang Ma ◽  
Jianquan Xu ◽  
Yang Liu

Abstract High-throughput nanoscopy is becoming increasingly important for unraveling complex biological processes from a large, heterogeneous cell population at nanoscale resolution. High-density emitter localization combined with a large field of view and a fast imaging frame rate is commonly used to achieve high imaging throughput, but the image processing speed in the dense emitter scenario remains a bottleneck. Here we present a simple non-iterative approach, referred to as WindSTORM, to achieve high-speed, high-density emitter localization with robust performance across various image characteristics. We demonstrate that WindSTORM improves the computation speed by two orders of magnitude on CPU and three orders of magnitude with GPU acceleration, realizing online image processing without compromising localization accuracy. Further, owing to its embedded background correction, WindSTORM is highly robust in the presence of high and non-uniform background. WindSTORM paves the way for the next generation of high-throughput nanoscopy.


2020 ◽  
Author(s):  
Jun Ki Kim ◽  
Youngkyu Kim ◽  
Jungmin Oh ◽  
Seung-Ho Choi ◽  
Ahra Jung ◽  
...  

BACKGROUND Recently, high-speed digital imaging (HSDI), especially high-speed digital endoscopic imaging, has come into routine use for the diagnosis of vocal fold disorders. However, high-speed digital endoscopic imaging devices are usually large and costly, which limits access for patients in underdeveloped countries and in regions with inadequate medical infrastructure. Modern smartphones have sufficient computing power to perform the complex calculations required to process high-resolution images and videos at a high frame rate. Recently, several attempts have been made to integrate medical endoscopes with smartphones to make them more accessible in underdeveloped countries. OBJECTIVE To develop a smartphone adaptor for endoscopes to reduce device cost, and to demonstrate the feasibility of high-speed vocal cord imaging using the high-speed imaging functions of a high-performance smartphone camera. METHODS A customized smartphone adaptor was designed for clinical endoscopy using selective laser melting (SLM)-based 3D printing. An existing laryngoscope was attached to the smartphone adaptor to acquire high-speed vocal cord endoscopic images. Only the smartphone camera's existing basic functions were used for HSDI of the vocal folds. For image processing, segmented glottal areas were calculated across all HSDI frames, and characteristics such as volume, shape, and longitudinal edge length were analyzed. RESULTS High-speed digital smartphone imaging with the smartphone-endoscope adaptor achieved 940 frames per second and was used to image the vocal folds of five volunteers. The image processing and analysis demonstrated successful calculation of the relevant diagnostic variables from the acquired images. CONCLUSIONS A smartphone-based HSDI endoscope system can function as a point-of-care clinical diagnostic device. Furthermore, this system is suitable as an accessible diagnostic method in underdeveloped areas with inadequate medical service infrastructure.
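The abstract does not detail the segmentation step. As a rough illustration under assumed conditions (grayscale frames in which the open glottis is the darkest region), a per-frame glottal-area waveform could be extracted along the following lines.

```python
# Illustrative sketch of deriving a glottal-area waveform from high-speed
# endoscopic frames -- not the authors' method; the percentile-based dark-region
# threshold and grayscale input are assumptions.
import numpy as np

def glottal_area_waveform(frames, dark_percentile=5):
    """frames: array of shape (n_frames, H, W), grayscale.
    Returns the glottal area in pixels for each frame."""
    areas = []
    for frame in frames:
        # Assume the open glottis is the darkest region; threshold each frame separately.
        threshold = np.percentile(frame, dark_percentile)
        mask = frame <= threshold
        areas.append(int(mask.sum()))
    return np.asarray(areas)
```

Per-frame masks of this kind would then feed the shape and edge-length measures mentioned in the abstract.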


2014 ◽  
Vol 971-973 ◽  
pp. 1454-1458
Author(s):  
Lei Qu ◽  
Yan Tian ◽  
Jun Liu

To achieve real-time target detection, identification, and tracking in high-frame-rate, large-field-of-view images, a real-time image processing system is designed. A TMS320C6678 DSP serves as the main arithmetic processor of the system, with an FPGA as the secondary controller. The C6678 is compared with the C6414 from the same series in an image compression algorithm test. Experimental results show that the new system has a more effective architecture and higher reliability, and can provide a platform for new high-speed image processing.


Author(s):  
Samee Maharjan ◽  
Dag Bjerketvedt ◽  
Ola Marius Lysaker

Abstract This paper presents a framework for processing high-speed videos recorded during gas experiments in a shock tube. The main objective is to study boundary layer interactions of reflected shock waves in an automated way, based on image processing. The shock wave propagation was recorded at a frame rate of 500,000 frames per second with a Kirana high-speed camera. Each high-speed video consists of 180 frames, with an image size of 768 × 924 pixels. An image processing framework was designed to track the wave front in each image and thereby estimate (a) the shock position, (b) the position of the triple point, and (c) the shock angle. The estimated shock position and shock angle were then used as input for calculating the pressure exerted by the shock. To validate our results, the calculated pressure was compared with recordings from pressure transducers. With the proposed framework, we were able to identify and study shock wave properties that occurred within less than 300 μs and to track their evolution over a distance of 100 mm. Our findings show that processing of high-speed videos can enrich, and give detailed insight into, the observations in shock experiments.
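The exact detection and pressure-calculation steps are not given in the abstract. The sketch below illustrates one plausible pipeline under stated assumptions: the shock front is taken as the strongest horizontal intensity gradient in each row, the shock angle comes from a least-squares line fit, and the pressure follows the normal-shock relation with assumed ambient conditions.

```python
# Illustrative sketch of shock-front tracking and pressure estimation from a
# high-speed video -- not the authors' exact framework. Row-wise gradient
# detection, the ambient conditions, and the normal-shock relation are assumptions.
import numpy as np

GAMMA = 1.4          # ratio of specific heats (assumed: air)
A0 = 343.0           # ambient speed of sound, m/s (assumed)
P1 = 101_325.0       # ambient pressure, Pa (assumed)

def shock_position(frame):
    """Column index of the shock front in each row, taken as the location of
    the strongest horizontal intensity gradient."""
    grad = np.abs(np.diff(frame.astype(float), axis=1))
    return grad.argmax(axis=1)

def shock_angle(cols):
    """Least-squares line fit through the per-row front positions gives the
    front's angle relative to vertical, in degrees."""
    rows = np.arange(len(cols))
    slope, _ = np.polyfit(rows, cols, 1)
    return np.degrees(np.arctan(slope))

def pressure_behind_shock(x_m, dt_s):
    """Estimate post-shock pressure from front positions x_m (metres) sampled
    every dt_s seconds, via the shock speed and the normal-shock relation."""
    shock_speed = np.abs(np.gradient(x_m, dt_s))      # m/s per frame
    mach = shock_speed / A0                           # shock Mach number
    return P1 * (1.0 + 2.0 * GAMMA / (GAMMA + 1.0) * (mach**2 - 1.0))
```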


2014 ◽  
Vol 568-570 ◽  
pp. 193-197
Author(s):  
Qiang Wu ◽  
Gen Wang ◽  
Xu Wen Li

A high-speed LVDS data acquisition system is designed, with Xilinx's Virtex-5 FPGA as the core processor and TI's TMS320C6748 DSP for pre-processing and data storage. The system meets the requirements for a larger amount of image processing at higher speed. It performs dual-channel LVDS image data acquisition as required. The resolution of the image data is 320 × 257, with an image transmission frame rate of no less than 150 frames per second. The large amount of data flowing through the system, together with demanding real-time constraints, poses a significant challenge to the designer. Mentor Graphics HyperLynx simulation tools were used to complete the stack-up and impedance calculations and signal-quality simulations, ensuring that the system is stable and reliable. The system also offers better scalability and a more reliable storage method than past designs. The system has recently completed verification testing, and the results show that the design is feasible and reliable.
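A quick data-rate estimate makes the real-time challenge concrete. The 12-bit pixel depth assumed below is not stated in the abstract; the resolution, minimum frame rate, and channel count are taken from the text.

```python
# Back-of-the-envelope estimate of the sustained input data rate for the
# dual-channel acquisition described above. Pixel bit depth is an assumption.
width, height = 320, 257
frame_rate = 150          # minimum frames per second, per channel (from the text)
bits_per_pixel = 12       # assumed sensor bit depth
channels = 2              # dual LVDS inputs

bits_per_second = width * height * bits_per_pixel * frame_rate * channels
print(f"Sustained input rate: {bits_per_second / 1e6:.1f} Mbit/s "
      f"({bits_per_second / 8 / 1e6:.1f} MB/s)")
# ~296 Mbit/s (~37 MB/s) that the FPGA front end and DSP pre-processing path
# must absorb in real time.
```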


2001 ◽  
Vol 6 (1) ◽  
pp. 3-9 ◽  
Author(s):  
Patrick Lavery ◽  
Murray J.B. Brown ◽  
Andrew J. Pope

In order to accommodate the predicted increase in screening required of successful pharmaceutical companies, miniaturized, high-speed HTS formats are necessary. Much emphasis has been placed on sensitive fluorescence techniques, but some systems, particularly enzymes interconverting small substrates, are likely to be refractory to such approaches. We show here that simple absorbance-based assays can be miniaturized to 10-μl volumes in 1536-well microplates, compatible with the requirements for ultra-high-throughput screening. We demonstrate that, with low-cost hardware, assay performance is wholly predictable from the 2-fold decrease in pathlength for fully filled 1536-well plates compared to 96- and 384-well microplates. A number of enzyme systems are shown to work in this high-density format, and the inhibition parameters determined are comparable with those in standard assay formats. We also demonstrate the utility of kinetics measurements in miniaturized format, with improvements in assay quality and the ability to extract detailed mechanistic information about inhibitors.
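The pathlength argument follows directly from the Beer-Lambert law, A = εcl: at fixed chromophore concentration, halving the optical pathlength halves the absorbance signal. The numerical values in the sketch below (NADH extinction coefficient, concentration, and the specific pathlengths) are illustrative assumptions; the abstract states only the 2-fold ratio.

```python
# Beer-Lambert illustration of why a 2-fold shorter pathlength gives a 2-fold
# smaller absorbance signal. Pathlengths and concentration are assumed values.
def absorbance(epsilon_M_cm, conc_M, pathlength_cm):
    """A = epsilon * c * l (Beer-Lambert law)."""
    return epsilon_M_cm * conc_M * pathlength_cm

epsilon = 6_220        # NADH at 340 nm, M^-1 cm^-1
conc = 100e-6          # 100 uM chromophore (assumed)
A_384  = absorbance(epsilon, conc, 0.50)   # assumed ~0.50 cm pathlength, filled 384-well
A_1536 = absorbance(epsilon, conc, 0.25)   # assumed ~0.25 cm, i.e. the 2-fold decrease
print(A_384, A_1536, A_384 / A_1536)       # signal scales linearly: ratio = 2.0
```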


2019 ◽  
Vol 5 (3) ◽  
pp. 34 ◽  
Author(s):  
Runbin Shi ◽  
Justin Wong ◽  
Hayden So

Parallel hardware designed for image processing promotes vision-guided intelligent applications. With the advantages of high throughput and low latency, streaming architectures on FPGA are especially attractive for real-time image processing. Notably, many real-world applications, such as region of interest (ROI) detection, demand the ability to process images continuously at different sizes and resolutions in hardware without interruption. FPGAs are especially suitable for implementing such flexible streaming architectures, but most existing solutions require run-time reconfiguration and hence cannot achieve seamless image size-switching. In this paper, we propose a dynamically-programmable buffer architecture (D-SWIM) based on the Stream-Windowing Interleaved Memory (SWIM) architecture to realize image processing on FPGA for image streams at arbitrary sizes defined at run time. D-SWIM redefines the way on-chip memory is organized and controlled, and the hardware adapts to arbitrary image sizes with sub-100 ns delay, ensuring minimal interruption to image processing at a high frame rate. Compared to the prior SWIM buffer for high-throughput scenarios, D-SWIM achieves dynamic programmability with only a slight overhead in logic resource usage, while saving up to 56% of the BRAM resource. The D-SWIM buffer achieves a maximum operating frequency of 329.5 MHz and a 45.7% reduction in power consumption compared with the SWIM scheme. Real-world image processing applications, such as 2D convolution and the Harris corner detector, have also been used to evaluate D-SWIM's performance, achieving pixel throughputs of 4.5 Giga Pixel/s and 4.2 Giga Pixel/s, respectively. Compared to implementations with prior streaming frameworks, the D-SWIM-based design not only realizes seamless image size-switching but also improves hardware efficiency by up to 30×.
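As a software analogy only (D-SWIM itself is a hardware memory organization whose internals are not reproduced here), the behaviour the buffer must provide can be modelled as a line buffer whose row length is reprogrammable at run time while it keeps emitting sliding windows, as sketched below with an assumed 3×3 window.

```python
# Behavioural software model of a run-time-programmable line buffer feeding a
# sliding window -- an analogy for the size-switching requirement, not the
# D-SWIM hardware design. The 3x3 window and deque-based modelling are assumptions.
from collections import deque

class ProgrammableLineBuffer:
    def __init__(self, window=3):
        self.window = window
        self.rows = deque(maxlen=window)   # holds the most recent complete rows
        self.width = None
        self.current = []

    def set_image_width(self, width):
        """Reprogram the row length between frames (analogous to run-time
        size switching); no structural rebuild is needed."""
        self.width = width
        self.rows.clear()
        self.current = []

    def push_pixel(self, pixel):
        """Feed one pixel; return any window x window patches completed by it."""
        patches = []
        self.current.append(pixel)
        if len(self.current) == self.width:          # a full row has arrived
            self.rows.append(list(self.current))
            self.current = []
            if len(self.rows) == self.window:        # enough rows for windows
                patches = [[row[x:x + self.window] for row in self.rows]
                           for x in range(self.width - self.window + 1)]
        return patches
```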


Author(s):  
Shaz A Zamore ◽  
Nicole Araujo ◽  
John J Socha

Synopsis Visual control during high-speed aerial locomotion requires a visual system adapted for such behaviors. Flying snakes (genus: Chrysopelea) are capable of gliding at speeds up to 11 m s−1 and perform visual assessments before take-off. Investigating mechanisms of visual control requires a closed-loop experimental system, such as immersive virtual arenas. To characterize vision in the flying snake Chrysopelea paradisi, we used digitally reconstructed models of the head to determine a 3D field of vision. We also used optokinetic drum experiments and compared slow-phase optokinetic nystagmus (OKN) speeds to calculate visual acuity, and conducted preliminary experiments to determine whether snakes would respond to closed-loop virtual stimuli. Visual characterization showed that C. paradisi likely has a large field of view (308.5 ± 6.5° azimuthal range), with a considerable binocular region (33.0 ± 11.0° azimuthal width) that extends overhead. Their visual systems are broadly tuned and motion-sensitive, with mean peak OKN response gains of 0.50 ± 0.11, seen at 46.06 ± 11.08 Hz, and a low spatial acuity, with mean peak gain of 0.92 ± 0.41, seen at 2.89 ± 0.16 cycles per degree (cpd). These characteristics were used to inform settings in an immersive virtual arena, including frame rate, brightness, and stimulus size. In turn, the immersive virtual arena was used to reproduce the optokinetic drum experiments. We elicited OKN in open-loop experiments, with a mean gain of 0.21 ± 0.9, seen at 0.019 ± 6 × 10−5 cpd and 1.79 ± 0.01 Hz. In closed-loop experiments, snakes did not exhibit OKN but held the image fixed, indicating visual stabilization. These results demonstrate that C. paradisi responds to visual stimuli in a digital virtual arena. The accessibility and adaptability of the virtual setup make it suitable for future studies of visual control in snakes and other animals in an unconstrained setting.
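The gain figures quoted above are ratios of response speed to stimulus speed. A minimal calculation of slow-phase optokinetic gain, assuming tracked response and stimulus angles sampled at a fixed interval, is sketched below; the finite-difference velocity estimate is a simplification of the authors' analysis.

```python
# Minimal sketch of optokinetic (OKN) gain = response speed / stimulus speed.
# Assumes the input arrays contain only slow-phase tracking segments.
import numpy as np

def okn_gain(response_angle_deg, stimulus_angle_deg, dt_s):
    """Mean slow-phase gain from response (head/eye) and stimulus angle traces."""
    response_speed = np.abs(np.diff(response_angle_deg)) / dt_s
    stimulus_speed = np.abs(np.diff(stimulus_angle_deg)) / dt_s
    return response_speed.mean() / stimulus_speed.mean()
```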


2005 ◽  
Vol 17 (2) ◽  
pp. 121-129 ◽  
Author(s):  
Yoshihiro Watanabe ◽  
Takashi Komuro ◽  
Shingo Kagami ◽  
Masatoshi Ishikawa

Real-time image processing at high frame rates can play an important role in various visual measurements. Such processing can be realized by using a high-speed vision system that images at high frame rates together with appropriate algorithms executed at high speed. We introduce a vision chip for high-speed vision and propose a multi-target tracking algorithm that exploits the chip's unique features. We describe two visual measurement applications, target counting and rotation measurement. Both measurements achieve excellent precision and high flexibility because of the achievable high-frame-rate visual observation. Experimental results show the advantages of vision chips over conventional vision systems.
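The abstract does not give the tracking algorithm itself, but the key benefit of a high frame rate is that targets move only a small fraction of their spacing between frames, so simple nearest-neighbour association suffices. The sketch below illustrates that general idea under assumed detection inputs; it is not the vision-chip algorithm.

```python
# Minimal sketch of nearest-neighbour multi-target tracking, exploiting the
# small inter-frame displacement available at high frame rates. Illustration of
# the general idea only, not the vision-chip algorithm from the paper.
import numpy as np

def track_step(prev_positions, detections, max_jump=5.0):
    """Associate current-frame detections with previous target positions.
    prev_positions: (N, 2) array; detections: (M, 2) array; returns {prev_idx: det_idx}."""
    assignments = {}
    for i, p in enumerate(prev_positions):
        dists = np.linalg.norm(detections - p, axis=1)
        j = int(dists.argmin())
        if dists[j] <= max_jump:      # small-displacement assumption at high frame rate
            assignments[i] = j
    return assignments

# Rotation measurement, as mentioned in the abstract, could then follow from the
# angle of a tracked feature about the object's centroid, differentiated over frames.
```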

