High-speed, high-precision focal length measurement using double-hole mask and advanced image sensor software

2018 ◽  
Vol 74 ◽  
pp. 239-244
Author(s):  
Binh Xuan Cao ◽  
Phuong Le Hoang ◽  
Sanghoon Ahn ◽  
Jeng-o Kim ◽  
Heeshin Kang ◽  
...  
2021 ◽  
pp. 002029402110022
Author(s):  
Xiaohua Zhou ◽  
Jianbin Zheng ◽  
Xiaoming Wang ◽  
Wenda Niu ◽  
Tongjian Guo

High-speed scanning poses a major challenge for the motion control of a step-scanning gene sequencing stage. The stage should achieve high-precision position stability with minimal settling time for each step. Existing step-scanning schemes are usually based on fixed-step motion control, which offers limited means of reducing the time needed to approach the desired position while maintaining high-precision position stability. In this work, we focus on shortening the settling time of the stepping motion and propose a novel variable step control method to increase the scanning speed of the gene sequencing stage. Specifically, the variable step control stabilizes the stage at any position within a steady-state interval rather than at the exact desired position on each step, thereby reducing the settling time. The resulting step-length error is compensated for during the next step's acceleration and deceleration, so that errors do not accumulate. We explicitly describe the working process of the step-scanning gene sequencer and design the PID control structure used in the variable step control of the gene sequencing stage. Simulations were performed to verify the performance and stability of the variable step control, and the IMA6000 gene sequencer prototype was evaluated extensively under variable step control. The experimental results show that the real gene sequencer can step 1.54 mm within a 50 ms period and maintain a high-precision stable state, with a standard deviation below 30 nm, during the following 10 ms period. The proposed method performs well on the gene sequencing stage.
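The core idea of the abstract, settling anywhere inside a steady-state interval and folding the residual step-length error into the next step, can be illustrated with a toy simulation. This is only a sketch under assumptions of my own (a first-order closed-loop response, the gain, and the tolerance value), not the paper's PID implementation:

```python
def variable_step_scan(step_length, n_steps, tol, gain=0.5):
    """Toy model of variable-step scanning: each step is declared settled
    as soon as the stage enters the steady-state interval (+/- tol) around
    the grid point. Because every command targets the absolute grid
    position, the residual step-length error of one step is compensated
    in the next step and cannot accumulate."""
    pos = 0.0
    landings = []
    for k in range(1, n_steps + 1):
        target = k * step_length           # absolute grid point, not pos + step
        while abs(target - pos) > tol:     # simple first-order closed-loop model
            pos += gain * (target - pos)
        landings.append(pos)               # stop early: anywhere in the interval
    return landings

# Example loosely inspired by the reported figures: 1.54 mm steps, 30 nm interval
lands = variable_step_scan(1.54e-3, 10, tol=30e-9)
```

Because the target is recomputed from the absolute grid each step, every landing stays within the tolerance of its grid point even though no step settles exactly on target.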


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 1955
Author(s):  
Md Jubaer Hossain Pantho ◽  
Pankaj Bhowmik ◽  
Christophe Bobda

The astounding development of optical sensing imaging technology, coupled with impressive improvements in machine learning algorithms, has increased our ability to understand and extract information from scenic events. In most cases, convolutional neural networks (CNNs) are adopted to infer knowledge due to their remarkable success in automation, surveillance, and many other application domains. However, the overwhelming computational demand of convolution operations has limited their use in remote sensing edge devices. In these platforms, real-time processing remains a challenging task due to tight constraints on resources and power. Here, the transfer and processing of non-relevant image pixels act as a bottleneck on the entire system. It is possible to overcome this bottleneck by exploiting the high bandwidth available at the sensor interface and designing a CNN inference architecture near the sensor. This paper presents an attention-based pixel processing architecture to facilitate CNN inference near the image sensor. We propose an efficient computation method that reduces dynamic power by decreasing the overall computation of the convolution operations. The proposed method reduces redundancies using a hierarchical optimization approach: it minimizes the power consumed by convolution operations by exploiting the spatio-temporal redundancies found in the incoming feature maps and performs computations only on selected regions based on their relevance score. The proposed design addresses problems related to the mapping of computations onto an array of processing elements (PEs) and introduces a suitable network structure for communication. The PEs are highly optimized to provide low latency and power for CNN applications. While designing the model, we exploit concepts from biological vision systems to reduce computation and energy. We prototype the model on a Virtex UltraScale+ FPGA and implement it as an Application-Specific Integrated Circuit (ASIC) using the TSMC 90 nm technology library. The results suggest that the proposed architecture significantly reduces dynamic power consumption and achieves a speed-up that surpasses the computational capabilities of existing embedded processors.
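As a rough software analogue of the relevance-score gating described above, one can recompute a convolution only for feature-map blocks that changed between frames. The block size, the score (mean absolute temporal change), and the threshold here are my own assumptions for illustration, not the paper's design:

```python
import numpy as np

def conv3x3(img, k):
    """Reference 3x3 convolution with zero padding (dense baseline)."""
    p = np.pad(img, 1)
    return np.array([[np.sum(p[i:i+3, j:j+3] * k)
                      for j in range(img.shape[1])]
                     for i in range(img.shape[0])])

def gated_conv3x3(frame, prev_frame, prev_out, k, block=4, thresh=1e-3):
    """Recompute the convolution only in blocks whose relevance score
    (mean absolute change vs. the previous frame) exceeds `thresh`;
    elsewhere the previous output is reused. Halo pixels of clean
    neighboring blocks are assumed unchanged -- the usual trade-off
    in temporal-redundancy schemes of this kind."""
    p = np.pad(frame, 1)
    out = prev_out.copy()
    H, W = frame.shape
    for bi in range(0, H, block):
        for bj in range(0, W, block):
            score = np.abs(frame[bi:bi+block, bj:bj+block]
                           - prev_frame[bi:bi+block, bj:bj+block]).mean()
            if score > thresh:                       # relevant block: recompute
                for i in range(bi, min(bi + block, H)):
                    for j in range(bj, min(bj + block, W)):
                        out[i, j] = np.sum(p[i:i+3, j:j+3] * k)
    return out
```

In a frame sequence where only a few blocks change, the dense baseline is skipped for all clean blocks, which is where the dynamic-power saving of such a scheme comes from.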


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3713
Author(s):  
Soyeon Lee ◽  
Bohyeok Jeong ◽  
Keunyeol Park ◽  
Minkyu Song ◽  
Soo Youn Kim

This paper presents a CMOS image sensor (CIS) with built-in lane detection computing circuits for automotive applications. We propose on-CIS processing with an edge detection mask embedded in the readout circuit of the conventional CIS structure for high-speed lane detection. Furthermore, the edge detection mask can detect the edges of slanting lanes to improve accuracy. A prototype of the proposed CIS was fabricated using a 110 nm CIS process. It has an image resolution of 160 (H) × 120 (V) pixels and a frame rate of 113 fps, and it occupies an area of 5900 μm × 5240 μm. A comparison of its lane detection accuracy with that of existing edge detection algorithms shows that it achieves acceptable accuracy. Moreover, the total power consumption of the proposed CIS is 9.7 mW at pixel, analog, and digital supply voltages of 3.3, 3.3, and 1.5 V, respectively.
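A software stand-in for such a readout-time edge mask can be sketched as follows. The mask coefficients and threshold of the fabricated circuit are not given in the abstract, so both are illustrative assumptions; the diagonal difference term stands in for the slanted-lane sensitivity mentioned above:

```python
import numpy as np

def lane_edge_map(img, thresh=30):
    """Binary edge map from horizontal and diagonal pixel differences --
    a toy analogue of an in-readout edge detection mask. The diagonal
    difference responds to slanted lane boundaries that a purely
    horizontal mask would weaken."""
    img = img.astype(int)
    h = np.abs(img[:, 2:] - img[:, :-2])     # horizontal gradient
    d = np.abs(img[1:, 1:] - img[:-1, :-1])  # diagonal gradient
    edges = np.zeros(img.shape, dtype=bool)
    edges[:, 1:-1] |= h > thresh
    edges[1:, 1:] |= d > thresh
    return edges
```

Applied to a frame containing a bright lane marking, the map fires on both borders of the marking and stays silent on uniform road surface.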


Cytotherapy ◽  
2021 ◽  
Vol 23 (5) ◽  
pp. S97
Author(s):  
J. Bell ◽  
Y. Huang ◽  
S. Yung ◽  
H. Qazi ◽  
C. Hernandez ◽  
...  
