Image Sensors
Recently Published Documents





Sensors, 2022, Vol. 22 (2), pp. 604
Carlos A. M. Correia, Fabio A. A. Andrade, Agnar Sivertsen, Ihannah Pinto Guedes, Milena Faria Pinto

Optical image sensors are the most common remote sensing data acquisition devices on Unmanned Aerial Systems (UAS). In this context, assigning a location in a geographic frame of reference to the acquired image is a necessary task in the majority of applications. When ground control points are not used, this process is called direct georeferencing. Although it rests on simple mathematical fundamentals, the complete direct georeferencing process involves a great deal of information, such as camera sensor characteristics, mounting measurements, and the attitude and position of the UAS. In addition, there are many rotations and translations between the different reference frames, among many other details, which makes the whole process considerably complex. A further problem is that manufacturers and software tools may use different reference frames, posing additional difficulty when implementing direct georeferencing. As this information is spread across many sources, researchers may struggle to form a complete picture of the method; to the authors' knowledge, no paper in the literature explains this process comprehensively. To meet this implicit demand, this paper presents a comprehensive method for the direct georeferencing of aerial images acquired by cameras mounted on UAS, in which all the required information, mathematical operations, and implementation steps are explained in detail. Finally, to show the practical use of the method and to verify its accuracy, both simulated and real flights were performed, in which objects in the acquired images were georeferenced.
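The chain of frames the abstract describes (pixel to camera, camera to body, body to local NED, then ray-to-ground intersection) can be sketched in a few lines. This is an illustrative minimal model, not the paper's implementation: it assumes a pinhole camera, a nadir mount with no lever-arm offsets, flat terrain, and ZYX (yaw-pitch-roll) Euler angles; all function and parameter names are ours.

```python
import numpy as np

def rot_body_to_ned(yaw, pitch, roll):
    """ZYX Euler rotation from the UAS body frame to the local NED frame (radians)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rz @ Ry @ Rx

# Fixed camera-to-body rotation for a nadir-mounted camera whose image top
# points toward the nose: cam x (image right) -> body y, cam y (image down)
# -> -body x, cam z (optical axis) -> body z (down).
R_CAM_TO_BODY = np.array([[0.0, -1.0, 0.0],
                          [1.0,  0.0, 0.0],
                          [0.0,  0.0, 1.0]])

def georeference_pixel(u, v, fx, fy, cx, cy, attitude, uav_ned, ground_down=0.0):
    """Intersect the viewing ray of pixel (u, v) with the flat plane
    'down = ground_down' in local NED coordinates.

    fx, fy, cx, cy are pinhole intrinsics (pixels); attitude is
    (yaw, pitch, roll) in radians; uav_ned is the UAS position in NED.
    """
    ray_cam = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])  # camera-frame ray
    ray_ned = rot_body_to_ned(*attitude) @ (R_CAM_TO_BODY @ ray_cam)
    t = (ground_down - uav_ned[2]) / ray_ned[2]              # scale to ground plane
    return uav_ned + t * ray_ned
```

With zero attitude and the UAS 100 m above the ground, the principal point maps straight down to the point below the aircraft, and a pixel one focal length to the image right lands 100 m to the east, which is a quick sanity check on the frame conventions chosen above.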

2022
Houk Jang, Henry Hinton, Woo-Bin Jung, Min-Hyun Lee, Changhyun Kim

Abstract: Complementary metal-oxide-semiconductor (CMOS) image sensors are a visual outpost of many machines that interact with the world. While they presently separate image capture in front-end silicon photodiode arrays from image processing in digital back-ends, efforts to process images within the photodiode array itself are rapidly emerging, in hopes of minimizing the data transfer between sensing and computing and the associated overhead in energy and bandwidth. Electrical modulation, or programming, of photocurrents is a prerequisite for such in-sensor computing, which has indeed been demonstrated with electrostatically doped, but non-silicon, photodiodes. CMOS image sensors are currently incapable of in-sensor computing, as their chemically doped photodiodes cannot produce electrically tunable photocurrents. Here we report in-sensor computing with an array of electrostatically doped silicon p-i-n photodiodes, which is amenable to seamless integration with the rest of the CMOS image sensor electronics. This silicon-based approach could bring in-sensor computing to the real world more rapidly owing to its compatibility with the mainstream CMOS electronics industry. Our wafer-scale production of thousands of silicon photodiodes using standard fabrication underscores this compatibility. We then demonstrate in-sensor processing of optical images using a variety of convolutional filters electrically programmed into a 3 × 3 network of these photodiodes.
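The operating principle — each photodiode's responsivity is electrically programmed to a filter weight, so the summed photocurrent of a 3 × 3 tile directly reads out one filtered pixel — can be emulated numerically. This is a behavioral sketch, not device physics: intensities and responsivities are treated as dimensionless, negative weights stand in for sign-reversed photocurrents, and the sliding-window readout is our assumption about how a full image would be scanned.

```python
import numpy as np

def in_sensor_convolution(image, kernel):
    """Emulate in-sensor filtering: each photodiode in a kernel-sized tile is
    programmed to one weight, and the tile's summed photocurrent is read out
    as one output pixel (valid padding, no kernel flip, i.e. correlation).
    """
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # photocurrent of each diode = programmed weight * local intensity;
            # summing the tile's currents yields the filter response in one step
            out[i, j] = np.sum(kernel * image[i:i + kh, j:j + kw])
    return out
```

For example, programming a Laplacian-like kernel into the tile makes the summed photocurrent vanish over uniform illumination and respond only at intensity edges, which is the kind of edge-detection filter the abstract's 3 × 3 demonstration suggests.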

2022
Huanyu Sun, Shiling Wang, Xiaobo Hu, Hongjie Liu, Xiaoyan Zhou

Abstract: Surface defects (SDs) and subsurface defects (SSDs) are the key factors that lower the laser damage threshold of optics. Because of their spatially stacked structure, accurately detecting and distinguishing them is a major challenge. Here, a detection method for SDs and SSDs based on multisensor image fusion is proposed. The optic is illuminated by a laser under dark-field conditions, and the defects are excited to generate scattered and fluorescence light, which is received by two image sensors in a wide-field microscope. With modified algorithms for image registration and feature-level fusion, the different types of defects are identified and extracted from the scattering and fluorescence images. Experiments show that the two imaging modes can be realized simultaneously by multisensor image fusion, and HF etching verifies that the SDs and SSDs of polished optics can be accurately distinguished. The method provides a more targeted reference for evaluating and controlling defects in optics and shows potential for application in material surface research.
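The two processing steps named in the abstract, registration of the two sensor images and feature-level fusion into a defect map, can be sketched as follows. The paper's exact algorithms are not specified here: phase correlation is a common stand-in for translational registration, and the simple threshold-and-label rule (scattering signal flags surface defects, fluorescence-only signal flags subsurface defects) is our illustrative reading of the SD/SSD separation; all names and thresholds are assumptions.

```python
import numpy as np

def phase_correlation_shift(ref, mov):
    """Estimate the integer (row, col) translation of `mov` relative to `ref`
    via phase correlation: the normalized cross-power spectrum of a pure
    shift is a complex exponential whose inverse FFT peaks at the shift."""
    F = np.conj(np.fft.fft2(ref)) * np.fft.fft2(mov)
    F /= np.abs(F) + 1e-12                      # keep only the phase
    corr = np.fft.ifft2(F).real
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    # unwrap circular shifts into the signed range [-n/2, n/2]
    return tuple(s if s <= n // 2 else s - n for s, n in zip(idx, corr.shape))

def fuse_defect_maps(scatter, fluor, t_scatter, t_fluor):
    """Feature-level fusion of registered images: pixels bright in the
    scattering channel are labeled surface defects (1); pixels bright only
    in the fluorescence channel are labeled subsurface defects (2)."""
    sd = scatter > t_scatter
    ssd = (fluor > t_fluor) & ~sd
    labels = np.zeros(scatter.shape, dtype=np.uint8)
    labels[sd] = 1
    labels[ssd] = 2
    return labels
```

In use, one would first align the fluorescence image to the scattering image with the estimated shift (e.g. `np.roll` for the integer case) and then fuse; subpixel registration and morphological cleanup would be needed on real microscope data.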

Orit Skorka, Radu Ispasoiu

2022, Vol. 137, pp. 106211
Ryosuke Okuyama, Takeshi Kadono, Ayumi Onaka-Masada, Akihiro Suzuki, Koji Kobayashi

2021
Sebastian Köhler, Giulio Lovisotto, Simon Birnbach, Richard Baker, Ivan Martinovic

2021, Vol. 127 (24)
Fernando Chierchie, Guillermo Fernandez Moroni, Leandro Stefanazzi, Eduardo Paolini, Javier Tiffenberg
