Robotized Imaging System Based on SiPM and Image Fusion for Monitoring Radiation Emergencies

Author(s):  
A. V. Vasileva ◽  
A. S. Vasilev ◽  
A. K. Akhmerov ◽  
Victoria A. Ryzhova
2014 ◽  
Vol 513-517 ◽  
pp. 3045-3048
Author(s):  
Da Hai Huang ◽  
Li Xin Ma ◽  
Wang Wei

To address the problem of corona discharge detection, a new image fusion rule based on the wavelet transform is proposed, and a dual-spectrum imaging system is built around the spectral characteristics of high-voltage electrical corona. The validity and feasibility of the approach were verified using Matlab as the experimental platform. The experimental results show that the proposed algorithm is effective for image fusion: it retains more detail from the input images and improves the locating precision of the corona detection system.
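As a rough illustration of a wavelet-domain fusion rule of the kind described above, the sketch below fuses two registered grayscale images in Python with PyWavelets, averaging the approximation coefficients and keeping the larger-magnitude detail coefficients. The max-absolute rule and the use of Python rather than the authors' Matlab platform are assumptions, not the paper's exact method.

# Minimal sketch of wavelet-domain image fusion (assumed max-abs detail rule).
# The paper's actual fusion rule and dual-spectrum hardware are not reproduced.
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet="db2", level=2):
    """Fuse two registered, same-sized grayscale images in the wavelet domain."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)

    fused = [(ca[0] + cb[0]) / 2.0]           # average the approximation band
    for da, db in zip(ca[1:], cb[1:]):        # each entry is (cH, cV, cD)
        fused.append(tuple(
            np.where(np.abs(a) >= np.abs(b), a, b)   # keep the stronger detail
            for a, b in zip(da, db)
        ))
    return pywt.waverec2(fused, wavelet)

if __name__ == "__main__":
    a = np.random.rand(128, 128)   # stand-ins for the two spectral channels
    b = np.random.rand(128, 128)
    print(wavelet_fuse(a, b).shape)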


2012 ◽  
Vol 41 (11) ◽  
pp. 1359-1364
Author(s):  
陶小平 TAO Xiao-ping ◽  
薛栋林 XUE Dong-lin ◽  
黎发志 LI Fa-zhi ◽  
闫锋 YAN Feng

Sensors ◽  
2019 ◽  
Vol 19 (6) ◽  
pp. 1409 ◽  
Author(s):  
Hang Liu ◽  
Hengyu Li ◽  
Jun Luo ◽  
Shaorong Xie ◽  
Yu Sun

Multi-focus image fusion is a technique for obtaining an all-in-focus image, in which every object appears in focus, thereby extending the limited depth of field (DoF) of an imaging system. Unlike traditional RGB-based methods, this paper presents a new multi-focus image fusion method assisted by depth sensing. In this work, a depth sensor is used together with a colour camera to capture images of a scene. A graph-based segmentation algorithm segments the depth map from the depth sensor, and the segmented regions guide a focus algorithm that locates in-focus image blocks among the multi-focus source images to construct the reference all-in-focus image. Five test scenes and six evaluation metrics were used to compare the proposed method with representative state-of-the-art algorithms. Experimental results quantitatively demonstrate that this method outperforms existing methods in both speed and quality (in terms of comprehensive fusion metrics). The generated images can potentially be used as reference all-in-focus images.
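One plausible realization of the depth-guided selection step is sketched below: for each region of a pre-segmented depth map, the source image with the highest focus measure supplies the pixels for that region. The variance-of-Laplacian focus measure is an assumption; the paper's graph-based segmentation and its specific focus algorithm are not reproduced here.

# Hedged sketch of depth-guided multi-focus fusion: for every region of a
# segmented depth map, copy pixels from whichever source image is sharpest
# there (variance of the Laplacian is assumed as the focus measure).
import numpy as np
import cv2

def fuse_by_depth_regions(sources, depth_labels):
    """sources: list of registered grayscale images; depth_labels: int label map."""
    fused = np.zeros_like(sources[0], dtype=np.float64)
    for label in np.unique(depth_labels):
        mask = depth_labels == label
        # Pick the source with the highest focus measure inside this region.
        scores = [cv2.Laplacian(src, cv2.CV_64F)[mask].var() for src in sources]
        best = int(np.argmax(scores))
        fused[mask] = sources[best][mask]
    return fused.astype(sources[0].dtype)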


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Hoover Rueda-Chacon ◽  
Fernando Rojas ◽  
Henry Arguello

Spectral image fusion techniques combine the detailed spatial information of a multispectral (MS) image and the rich spectral information of a hyperspectral (HS) image into a single image with high spatial and high spectral resolution. Because of the data deluge entailed by such images, new imaging modalities exploit their intrinsic correlations so that a computational algorithm can fuse them from a few multiplexed linear projections; this approach has been coined compressive spectral image fusion. State-of-the-art research has focused mainly on the algorithmic side, simulating instrumentation characteristics and assuming independently registered sensors to conduct compressed MS and HS imaging. In this manuscript, we report on the construction of a unified computational imaging framework that includes a proof-of-concept optical testbed to simultaneously acquire compressed MS and HS projections, and an alternating direction method of multipliers (ADMM) algorithm to reconstruct high-spatial- and high-spectral-resolution images from the fused compressed measurements. The testbed employs a digital micro-mirror device (DMD) to encode and split the input light towards two compressive imaging arms, which collect MS and HS measurements, respectively. This strategy provides full light-throughput sensing, since no light is discarded by the coding process. Further, different resolutions can be tested dynamically by binning the DMD and sensor pixels. Real spectral responses and optical characteristics of the employed equipment are obtained through a per-pixel point spread function calibration approach to enable accurate compressed image fusion. The proposed framework is demonstrated through real experiments within the visible spectral range using as few as 5% of the data.
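The reconstruction step can be pictured, in heavily simplified form, as an l1-regularized least-squares problem solved by ADMM over the stacked MS and HS measurements. The sketch below uses random toy operators in place of the calibrated DMD coding and per-pixel PSFs, so it illustrates only the generic fusion-by-stacking idea, not the authors' algorithm; all names and sizes are illustrative.

# Simplified stand-in for the compressive fusion step: recover a sparse signal x
# from stacked MS and HS compressed measurements with a generic l1-regularized ADMM.
import numpy as np

def admm_l1(A, y, lam=0.1, rho=1.0, iters=200):
    """Solve min_x 0.5*||Ax - y||^2 + lam*||x||_1 with ADMM."""
    n = A.shape[1]
    AtA, Aty = A.T @ A, A.T @ y
    L = np.linalg.cholesky(AtA + rho * np.eye(n))   # factor once, reuse each iteration
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Aty + rho * (z - u)))
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # soft-threshold
        u = u + x - z
    return z

rng = np.random.default_rng(0)
n, m_ms, m_hs = 256, 40, 40                   # toy sizes, ~30% total sampling
x_true = np.zeros(n)
x_true[rng.choice(n, 10, replace=False)] = rng.normal(size=10)
A_ms = rng.normal(size=(m_ms, n))             # stand-in for the MS arm's coded operator
A_hs = rng.normal(size=(m_hs, n))             # stand-in for the HS arm's coded operator
A = np.vstack([A_ms, A_hs])                   # fuse the two compressed measurement arms
y = A @ x_true
x_hat = admm_l1(A, y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))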


Author(s):  
B. Raviteja ◽  
M. Surendra Prasad Babu ◽  
K. Venkata Rao ◽  
Jonnadula Harikiran

A hyperspectral imaging system produces a stack of images collected by the sensor at different wavelengths, all representing the same scene on the earth. This paper presents a framework for hyperspectral image segmentation using a clustering algorithm. The framework segments a hyperspectral data set in four stages. In the first stage, filtering is applied to remove noise from the image bands. The second stage consists of dimensionality reduction, in which bands that convey little information or redundant data are removed. In the third stage, the informative bands selected in the second stage are merged into a single image using a hierarchical fusion technique. In conventional hierarchical image fusion, the images are grouped so that each group contains an equal number of images; this produces groups whose images differ widely in content and thus lowers the quality of the fused image. This paper presents a new hierarchical image fusion methodology in which similarity metrics are used to form the image groups before the selected bands are merged. The resulting single image is then segmented using the fuzzy c-means clustering algorithm. The experimental results show that the framework segments the data set more accurately by combining the features of all the image bands.
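A hypothetical sketch of the two ingredients highlighted above follows: grouping bands by a similarity metric before fusing each group (correlation and simple averaging are assumed here), and a compact fuzzy c-means routine for the final segmentation. The paper's filtering, band-selection, and hierarchical fusion details are not reproduced.

# Hedged sketch: similarity-based grouping of hyperspectral bands (correlation is
# assumed as the metric) followed by a compact fuzzy c-means (FCM) segmentation.
import numpy as np

def group_bands_by_similarity(bands, threshold=0.9):
    """bands: (n_bands, H, W). Greedily group bands whose correlation with the
    group's first member exceeds `threshold`; fuse each group by averaging."""
    flat = bands.reshape(bands.shape[0], -1)
    corr = np.corrcoef(flat)
    unassigned, groups = list(range(len(bands))), []
    while unassigned:
        seed = unassigned.pop(0)
        members = [seed] + [b for b in unassigned if corr[seed, b] >= threshold]
        unassigned = [b for b in unassigned if b not in members]
        groups.append(bands[members].mean(axis=0))   # fuse the group
    return np.stack(groups)

def fuzzy_cmeans(pixels, c=3, m=2.0, iters=100):
    """pixels: (N, d) feature vectors. Returns the (N, c) membership matrix."""
    rng = np.random.default_rng(0)
    u = rng.dirichlet(np.ones(c), size=len(pixels))          # random initial memberships
    for _ in range(iters):
        w = u ** m
        centers = (w.T @ pixels) / w.sum(axis=0)[:, None]    # weighted cluster centers
        d = np.linalg.norm(pixels[:, None, :] - centers[None], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1))                          # standard FCM membership update
        u = inv / inv.sum(axis=1, keepdims=True)
    return u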


Lung Cancer ◽  
2005 ◽  
Vol 49 ◽  
pp. S216
Author(s):  
G. Sergiacomi ◽  
G. Di Costanzo ◽  
M. Carlani ◽  
M. Leporace ◽  
O. Schillaci ◽  
...  
