A high-dynamic-range visual sensing method for feature extraction of welding pool based on adaptive image fusion

Author(s):  
Baori Zhang ◽  
Yonghua Shi ◽  
Yanxin Cui ◽  
Zishun Wang ◽  
Xiyin Chen

Abstract The high dynamic range present in high-energy-density arc welding challenges most industrial cameras, producing badly exposed pixels in the captured images and making feature detection in the internal weld pool difficult. This paper proposes a novel monitoring method, called adaptive image fusion, which increases the amount of information contained in the welding image and can be realized on a common low-cost industrial camera. It combines original images captured rapidly by the camera into one fused image, with the capture settings based on real-time analysis of the actual scene irradiance during welding. Experiments are carried out to find the operating window for the adaptive image fusion method, providing rules for obtaining a fused image with as much information as possible. A comparison between imaging with and without the proposed method shows that the fused image has a wider dynamic range and includes more useful features of the weld pool. The improvement is also verified by extracting both the internal and external features of the weld pool from the same fused image with the proposed method. The results show that the proposed method can adaptively expand the dynamic range of a visual monitoring system at low cost, which benefits feature extraction from the internal weld pool.
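The core idea behind this kind of multi-exposure fusion, weighting each capture by how well-exposed each pixel is, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the Gaussian well-exposedness weight and the value of sigma are assumptions:

```python
import numpy as np

def well_exposedness(img, sigma=0.2):
    """Per-pixel weight favoring mid-range intensities (near 0.5)."""
    return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

def fuse_exposures(images, sigma=0.2):
    """Fuse same-scene images taken at different exposures.

    images: list of float arrays in [0, 1], identical shape.
    Returns the per-pixel weighted average, with weights
    normalized to sum to 1 across the exposure stack.
    """
    stack = np.stack(images)                  # (N, H, W)
    weights = well_exposedness(stack, sigma)  # favor well-exposed pixels
    weights /= weights.sum(axis=0, keepdims=True) + 1e-12
    return (weights * stack).sum(axis=0)

# Example: a dark and a bright capture of the same synthetic scene.
dark = np.linspace(0.0, 0.4, 8).reshape(2, 4)
bright = np.clip(dark + 0.5, 0.0, 1.0)
fused = fuse_exposures([dark, bright])
```

Each fused pixel is a convex combination of the input pixels, so it always lies between the darkest and brightest capture at that location.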

Symmetry ◽  
2020 ◽  
Vol 12 (3) ◽  
pp. 451
Author(s):  
Ming Fang ◽  
Xu Liang ◽  
Feiran Fu ◽  
Yansong Song ◽  
Zhen Shao

High-dynamic-range imaging technology is an effective way to overcome the limitations of a camera’s dynamic range. However, most current high-dynamic-range imaging techniques are based on fusing multiple frames with different exposure levels. Such methods are prone to various artifacts, for example motion artifacts, detail loss and edge effects. In this paper, we use a dual-channel camera that can output two images with different gains simultaneously, and propose a semi-supervised network structure based on an attention mechanism to fuse the multiple gain images. The proposed network comprises encoding, fusion and decoding modules. First, the U-Net structure is employed in the encoding module to extract important detailed information from the source image to the maximum extent. Simultaneously, the SENet attention mechanism is employed in the encoding module to assign different weights to different feature channels and emphasize important features. Then, the feature maps extracted by the encoding module are combined by the fusion module and input to the decoding module for reconstruction, yielding the fused image. Experimental results indicate that the fused images obtained by the proposed method exhibit clear details and high contrast. Compared with other methods, the proposed method improves fused image quality on several indicators.
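The SENet channel-attention step the abstract describes (squeeze by global average pooling, excite through two fully connected layers, then rescale each channel) can be sketched as follows. The weights here are random stand-ins, not a trained network, and the reduction ratio is an assumption:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(feat, w1, w2):
    """Squeeze-and-excitation channel attention.

    feat: (C, H, W) feature map.
    w1: (C//r, C) reduction weights; w2: (C, C//r) expansion weights.
    Returns feat with each channel rescaled by its attention weight.
    """
    squeeze = feat.mean(axis=(1, 2))                       # global average pool -> (C,)
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))   # FC-ReLU-FC-sigmoid -> (C,)
    return feat * excite[:, None, None]                    # reweight each channel

rng = np.random.default_rng(0)
C, r = 8, 2
feat = rng.standard_normal((C, 5, 5))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
out = se_block(feat, w1, w2)
```

Because the sigmoid output lies in (0, 1), each channel is attenuated by a single scalar; spatial structure within a channel is untouched.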


2021 ◽  
Vol 2042 (1) ◽  
pp. 012113
Author(s):  
Michael Kim ◽  
Athanasios Tzempelikos

Abstract Continuous luminance monitoring is challenging because high-dynamic-range cameras are expensive, need programming, and are intrusive when placed near the occupants’ field of view. A new semi-automated and non-intrusive framework is presented for monitoring occupant-perceived luminance using a low-cost camera sensor and a Structure-from-Motion (SfM)-Multiview Stereo (MVS) photogrammetry pipeline. Using a short video and a few photos from the occupant position, the 3D space geometry is automatically reconstructed. The retrieved 3D context enables back-projection of the camera-captured luminance distribution into 3D space, which is in turn re-projected to occupant FOVs. The framework was tested and validated in a testbed office. The re-projected luminance field showed good agreement with luminance measured at the occupant position. The new method can be used for non-intrusive luminance monitoring integrated with daylighting control applications.
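The re-projection step rests on standard pinhole-camera geometry. A minimal sketch, assuming known intrinsics K and a camera pose (R, t) such as the SfM stage would recover; the function name and values are illustrative only:

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project world points into a camera's pixel plane (pinhole model).

    points_3d: (N, 3) world coordinates.
    K: (3, 3) intrinsic matrix; R: (3, 3) rotation; t: (3,) translation.
    Returns (N, 2) pixel coordinates.
    """
    cam = points_3d @ R.T + t           # world -> camera coordinates
    uvw = cam @ K.T                     # camera -> homogeneous pixel coordinates
    return uvw[:, :2] / uvw[:, 2:3]     # perspective divide

# Identity pose and simple intrinsics: a point on the optical axis
# lands at the principal point.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
pts = np.array([[0.0, 0.0, 2.0]])
px = project_points(pts, K, np.eye(3), np.zeros(3))
# px -> [[320., 240.]]
```

Re-projecting the luminance field to an occupant FOV amounts to running this mapping with the occupant's pose in place of the capture camera's.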


2017 ◽  
Vol 37 (4) ◽  
pp. 0410001
Author(s):  
都琳 Du Lin ◽  
孙华燕 Sun Huayan ◽  
王帅 Wang Shuai ◽  
高宇轩 Gao Yuxuan ◽  
齐莹莹 Qi Yingying

2010 ◽  
Author(s):  
Lirong Wang ◽  
Peng Su ◽  
Robert Parks ◽  
Roger Angel ◽  
Jose Sasian ◽  
...  

2014 ◽  
Vol 2014 ◽  
pp. 1-8 ◽  
Author(s):  
Harbinder Singh ◽  
Vinay Kumar ◽  
Sunil Bhooshan

In this paper we propose a novel detail-enhancing exposure fusion approach using a nonlinear translation-variant filter (NTF). Given Standard Dynamic Range (SDR) images captured under different exposure settings, the fine details are first extracted using a guided filter. Next, the base layers (i.e., the images obtained from the NTF) across all input images are fused using a multiresolution pyramid. Exposure, contrast, and saturation measures are considered to generate a mask that guides the fusion of the base layers. Finally, the fused base layer is combined with the extracted fine details to obtain the detail-enhanced fused image. The goal is to preserve details in both very dark and extremely bright regions without a High Dynamic Range Image (HDRI) representation and tone-mapping step. Moreover, we demonstrate that the proposed method is also suitable for multifocus image fusion without introducing artifacts.
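The exposure, contrast, and saturation measures that drive the fusion mask follow the style of Mertens-type exposure fusion. A single-scale NumPy sketch of such a mask and weighted fusion (the paper's multiresolution pyramid and NTF base-layer decomposition are omitted here, and the specific measure formulas are assumptions):

```python
import numpy as np

def quality_weights(img, sigma=0.2):
    """Per-pixel quality mask for one RGB image with values in [0, 1]."""
    gray = img.mean(axis=2)
    # Contrast: magnitude of a discrete Laplacian of the grayscale image.
    contrast = np.abs(
        -4 * gray
        + np.roll(gray, 1, 0) + np.roll(gray, -1, 0)
        + np.roll(gray, 1, 1) + np.roll(gray, -1, 1)
    )
    # Saturation: standard deviation across the color channels.
    saturation = img.std(axis=2)
    # Well-exposedness: product of per-channel Gaussians centered at 0.5.
    exposedness = np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2)).prod(axis=2)
    return contrast * saturation * exposedness + 1e-12

def fuse(images):
    """Single-scale weighted fusion of differently exposed RGB images."""
    w = np.stack([quality_weights(im) for im in images])  # (N, H, W)
    w /= w.sum(axis=0, keepdims=True)                     # normalize per pixel
    stack = np.stack(images)                              # (N, H, W, 3)
    return (w[..., None] * stack).sum(axis=0)

rng = np.random.default_rng(1)
under = rng.uniform(0.0, 0.3, (6, 6, 3))   # underexposed stand-in
over = rng.uniform(0.7, 1.0, (6, 6, 3))    # overexposed stand-in
result = fuse([under, over])
```

In the full method this weighted combination is applied per pyramid level to the NTF base layers, with the guided-filter detail layer added back at the end.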


2018 ◽  
Vol 26 (26) ◽  
pp. 34805 ◽  
Author(s):  
Jian Wang ◽  
Rong Su ◽  
Richard Leach ◽  
Wenlong Lu ◽  
Liping Zhou ◽  
...  
