pixel output
Recently Published Documents

TOTAL DOCUMENTS: 7 (FIVE YEARS: 3)
H-INDEX: 1 (FIVE YEARS: 0)

2021 · Vol 16 · pp. 626-632
Author(s): Aicha Menssouri, Karim El Khadiri, Ahmed Tahiri

This work aims to design and simulate an in-pixel capacitive transimpedance amplifier (CTIA) and the peripheral circuitry that ensures pixel readout. Each pixel circuit is composed of four transistors in 90 nm CMOS technology with a 1.8 V supply voltage and is part of the pixel array that, together with the peripheral circuitry, makes up a CMOS image sensor. The pixel output is sent to a delta difference sampling (DDS) circuit to filter out the reset voltages. The in-pixel CTIA achieves a gain margin of 44 dB and a phase margin of 91°. We also present measured pixel parameters, compare them with prior work, and describe the timing and readout circuitry.
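As a numerical illustration of the DDS step mentioned above, the Python sketch below shows how subtracting a sampled reset level from a sampled signal level cancels per-pixel reset variation. The array size, voltage levels, and noise spread are assumptions made for illustration, not values or circuitry from the paper.

```python
import numpy as np

# Illustrative sketch of delta difference sampling (DDS): the read-out
# value of each pixel is the difference between its sampled signal level
# and its sampled reset level, which cancels per-pixel reset/offset
# variation.  All numbers here are assumed, not taken from the paper.

rng = np.random.default_rng(0)

n_rows, n_cols = 4, 4                                      # toy 4x4 pixel array
reset_offset = rng.normal(0.9, 0.05, (n_rows, n_cols))     # per-pixel reset level (V)
photo_signal = rng.uniform(0.0, 0.5, (n_rows, n_cols))     # photo-generated signal (V)

# Samples taken by the column read-out circuitry:
v_reset = reset_offset                                     # sample after reset
v_signal = reset_offset + photo_signal                     # sample after integration

# DDS output: subtracting the reset sample removes the offset term.
v_dds = v_signal - v_reset

print(np.allclose(v_dds, photo_signal))                    # True: reset offsets cancelled
```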



2021 · Vol 11 (1)
Author(s): Issam H. Laradji, Alzayat Saleh, Pau Rodriguez, Derek Nowrouzezahrai, Mostafa Rahimi Azghadi, ...

Abstract: Estimating fish body measurements such as length, width, and mass has received considerable research attention due to its potential to boost productivity in marine and aquaculture applications. Some methods are based on manual collection of these measurements using tools such as a ruler, which is time-consuming and labour-intensive. Others rely on fully-supervised segmentation models to acquire these measurements automatically, but these require per-pixel labels, which are also time-consuming to collect: accurate segmentation labels can take up to 2 minutes per fish. To address this problem, we propose a segmentation model that can efficiently train on images labeled with point-level supervision, where each fish is annotated with a single click. This labeling scheme takes an average of only 1 second per fish. Our model uses a fully convolutional neural network with one branch that outputs per-pixel scores and another that outputs an affinity matrix. These two outputs are aggregated using a random walk to obtain the final, refined per-pixel output. The whole model is trained end-to-end using the localization-based counting fully convolutional neural network (LCFCN) loss, and thus we call our method Affinity-LCFCN (A-LCFCN). We conduct experiments on the DeepFish dataset, which contains several fish habitats from north-eastern Australia. The results show that A-LCFCN outperforms a fully-supervised segmentation model when the annotation budget is fixed. They also show that A-LCFCN achieves better segmentation results than LCFCN and a standard baseline.
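The aggregation step described in the abstract, refining per-pixel scores with an affinity matrix via a random walk, can be sketched generically as below. This is a minimal NumPy random-walk propagation with assumed shapes and parameters (alpha, iteration count); it is not the exact A-LCFCN aggregation, its network branches, or its LCFCN training loss.

```python
import numpy as np

def random_walk_refine(scores, affinity, alpha=0.8, n_iters=50):
    """Propagate per-pixel scores along a pixel affinity graph.

    scores   : (N,) initial per-pixel foreground scores (N = H*W pixels)
    affinity : (N, N) non-negative pairwise affinities between pixels
    alpha    : probability of continuing the walk vs. restarting at `scores`
    Returns refined (N,) scores.  Generic random-walk propagation sketch,
    not the exact A-LCFCN aggregation.
    """
    # Row-normalise the affinity matrix into a transition matrix.
    P = affinity / (affinity.sum(axis=1, keepdims=True) + 1e-8)
    refined = scores.copy()
    for _ in range(n_iters):
        refined = alpha * P @ refined + (1.0 - alpha) * scores
    return refined

# Toy example on a 1-D "image" of 6 pixels: a single click (point label)
# on pixel 2, and affinities that link pixels 0-3 strongly together.
scores = np.array([0., 0., 1., 0., 0., 0.])
affinity = np.eye(6)
for i, j in [(0, 1), (1, 2), (2, 3)]:        # strong links inside the object
    affinity[i, j] = affinity[j, i] = 1.0
affinity[3, 4] = affinity[4, 3] = 0.05        # weak link across the object boundary
affinity[4, 5] = affinity[5, 4] = 1.0

print(np.round(random_walk_refine(scores, affinity), 3))
# Pixels 0-3 receive most of the propagated score; pixels 4-5 stay near zero.
```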



2016 · Vol 7 (1) · pp. 7
Author(s): O. Smet, S. Van de Vyver, Patrick De Baets, Matthias Verstraete, Stijn Herregodts

In order to gain more insight into the characteristics of Tekscan contact pressure mapping sensors, a hydrostatic pressure cell is designed. The main research topics are load-history dependency, inter-pixel output variation, and output drift behaviour of the sensors. This leads to a state-of-the-art preconditioning and post-processing method that yields higher accuracy of the measurement data.
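As a rough illustration of what such post-processing can involve, the sketch below equalises inter-pixel output variation against a frame recorded under a uniform hydrostatic reference load and removes a simple linear output drift. The correction model, function names, and numbers are assumptions for illustration only, not the authors' published preconditioning or post-processing method.

```python
import numpy as np

def interpixel_equalisation(raw_frame, reference_frame):
    """Scale each pixel so that a uniform-pressure reference frame becomes flat."""
    gain = reference_frame.mean() / np.clip(reference_frame, 1e-6, None)
    return raw_frame * gain

def drift_correction(frames, times):
    """Remove a linear drift of the mean output fitted over a constant-load series.

    frames : (T, H, W) raw frames, times : (T,) time stamps.
    A deliberately simple drift model, assumed for illustration.
    """
    mean_out = frames.reshape(len(frames), -1).mean(axis=1)
    slope, _ = np.polyfit(times, mean_out, 1)
    return frames - (slope * (times - times[0]))[:, None, None]

# Toy usage: an 8x8 sensor whose pixels have a spread in sensitivity.
rng = np.random.default_rng(1)
sensitivity = rng.normal(1.0, 0.1, (8, 8))      # inter-pixel output variation
reference = 100.0 * sensitivity                 # frame under a uniform 100-unit load
raw = 50.0 * sensitivity                        # frame under a uniform 50-unit load
flat = interpixel_equalisation(raw, reference)
print(flat.std() < raw.std())                   # True: inter-pixel variation is reduced
```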






