A New Photographic Reproduction Method Based on Feature Fusion and Virtual Combined Histogram Equalization

Sensors ◽  
2021 ◽  
Vol 21 (18) ◽  
pp. 6038
Author(s):  
Yu-Hsiu Lin ◽  
Kai-Lung Hua ◽  
Yung-Yao Chen ◽  
I-Ying Chen ◽  
Yun-Chen Tsai

A desirable photographic reproduction method should have the ability to compress high-dynamic-range images to low-dynamic-range displays while faithfully preserving all visual information. However, during the compression process, most reproduction methods face challenges in striking a balance between maintaining global contrast and retaining the majority of local details in a real-world scene. To address this problem, this study proposes a new photographic reproduction method that can smoothly take global and local features into account. First, a highlight/shadow region detection scheme is used to obtain prior information to generate a weight map. Second, a mutually hybrid histogram analysis is performed to extract global/local features in parallel. Third, we propose a feature fusion scheme to construct the virtual combined histogram, which is achieved by adaptively fusing global/local features through the use of Gaussian mixtures according to the weight map. Finally, the virtual combined histogram is used to formulate the pixel-wise mapping function. As both global and local features are simultaneously considered, the output image has a natural and visually pleasing appearance. The experimental results demonstrated the effectiveness of the proposed method and its superiority over seven other state-of-the-art methods.
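The fusion-then-equalize pipeline described above can be sketched as follows. This is an illustrative approximation, not the authors' exact algorithm: the highlight/shadow weight map is reduced to per-pixel histogram weights, the Gaussian-mixture fusion is simplified to a convex blend, and the blended histogram's CDF serves as the pixel-wise mapping function.

```python
import numpy as np

def virtual_combined_tone_map(lum, weight, bins=256):
    """Illustrative sketch: blend a global histogram with a locally
    weighted histogram and use the blended CDF as a tone-mapping curve."""
    logl = np.log1p(lum)                      # work in log-luminance
    edges = np.linspace(logl.min(), logl.max(), bins + 1)
    # Global feature: plain histogram of log-luminance.
    h_global, _ = np.histogram(logl, bins=edges)
    # Local feature: histogram weighted by the highlight/shadow map,
    # emphasising regions where detail preservation matters most.
    h_local, _ = np.histogram(logl, bins=edges, weights=weight)
    # "Virtual combined histogram": adaptive fusion of the two features.
    alpha = weight.mean()                     # scalar fusion ratio (assumption)
    h = ((1 - alpha) * h_global / h_global.sum()
         + alpha * h_local / max(h_local.sum(), 1e-9))
    # Equalize: the cumulative distribution becomes the mapping function.
    cdf = np.cumsum(h)
    cdf /= cdf[-1]
    idx = np.clip(np.digitize(logl, edges) - 1, 0, bins - 1)
    return cdf[idx]                           # output luminance in [0, 1]
```

A real implementation would fuse per-pixel (not with one scalar ratio) and smooth the histograms with Gaussian mixtures as the paper describes.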

Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4136
Author(s):  
Yung-Yao Chen ◽  
Kai-Lung Hua ◽  
Yun-Chen Tsai ◽  
Jun-Hua Wu

Photographic reproduction and enhancement is challenging because it requires the preservation of all visual information while the dynamic range of the input image is compressed. This paper presents a cascaded-architecture-type reproduction method that can simultaneously enhance local details and retain the naturalness of the original global contrast. In the pre-processing stage, in addition to using a multiscale detail injection scheme to enhance the local details, the Stevens effect is considered to adapt to different luminance levels and compress the global feature. We propose a modified histogram equalization method in the reproduction stage, where individual histogram bin widths are first adjusted according to the overall image content. In addition, the human visual system (HVS) is considered so that a luminance-aware threshold can be used to control the maximum permissible width of each bin. Then, the global tone is modified by performing histogram equalization on the modified histogram. Experimental results indicate that the proposed method outperforms five state-of-the-art methods in terms of visual comparisons and several objective image quality evaluations.
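The bin-limiting idea can be illustrated with a minimal numpy sketch. Here the paper's luminance-aware bin-width threshold is approximated by clipping each bin's count, which similarly caps how steeply any gray level can stretch the equalization curve:

```python
import numpy as np

def clipped_equalization(gray, clip_ratio=0.02, bins=256):
    """Minimal sketch: limit how much any histogram bin can stretch the
    tone curve. Count clipping is a stand-in for the paper's
    luminance-aware maximum-bin-width threshold."""
    hist, edges = np.histogram(gray, bins=bins, range=(0.0, 1.0))
    cap = clip_ratio * gray.size              # maximum permissible bin mass
    excess = np.maximum(hist - cap, 0).sum()
    hist = np.minimum(hist, cap) + excess / bins   # redistribute the excess
    cdf = np.cumsum(hist)
    cdf /= cdf[-1]                            # normalized mapping function
    idx = np.clip((gray * bins).astype(int), 0, bins - 1)
    return cdf[idx]
```

With `clip_ratio` small, a dominant luminance mode cannot monopolize the output range, which is the same goal the adaptive bin widths serve.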


F1000Research ◽  
2018 ◽  
Vol 7 ◽  
pp. 660
Author(s):  
Alvydas Šoliūnas

Background: The present study concerns parallel and serial processing of visual information, or more specifically, whether visual objects are identified successively or simultaneously in a multiple-object stimulus. Some findings in scene perception demonstrate the potential parallel processing of different sources of information in a stimulus; however, more extensive investigation is needed.
Methods: We presented one, two, or three visual objects of different categories for 100 ms and afterwards asked subjects whether a specified category was present in the stimulus. We varied the number of objects, the number of categories, and the type of object shape distortion (distortion of either global or local features).
Results: The response time and accuracy data corresponded to data from a previous experiment, which demonstrated that performance efficiency mostly depends on the number of categories but not on the number of objects. Two and three objects of the same category were identified with the same accuracy and the same response time, but two objects were identified faster and more accurately than three objects if they belonged to different categories. Distortion type did not affect the pattern of performance.
Conclusions: The findings suggest that objects of the same category can be identified simultaneously and that identification involves both local and global features.


2013 ◽  
Vol 373-375 ◽  
pp. 1022-1026
Author(s):  
Tian Wen Li ◽  
Yun Gao

In real-world complex scenes, multi-feature fusion has become an effective method of object representation for tracking moving targets in video. Two key issues in multi-feature fusion are which features to select and how to fuse them. In this paper, we propose an object representation that fuses global and local features for object tracking. In our method, we select a common hue histogram as the global feature and a SIFT descriptor as the local feature. Within a particle-filter tracking framework, the results show that the proposed representation withstands the disturbances of complex environments, such as abrupt illumination changes and partial occlusion, better than a color-based global representation.
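A rough sketch of how such a global color cue and a local-feature score might be combined into one particle weight. The Bhattacharyya-based likelihood and the value of `lam` are common choices in color-based particle filters, not necessarily the paper's exact model:

```python
import numpy as np

def hue_histogram(hue, bins=16):
    """Global feature: normalized hue histogram (hue values in [0, 1))."""
    h, _ = np.histogram(hue, bins=bins, range=(0.0, 1.0))
    return h / max(h.sum(), 1)

def bhattacharyya(p, q):
    """Similarity between two normalized histograms (1.0 = identical)."""
    return float(np.sum(np.sqrt(p * q)))

def fused_particle_weight(cand_hue, ref_hist, local_score, lam=20.0):
    """Fuse the global color cue with a local-feature score (e.g. a
    SIFT match ratio) into one particle likelihood by multiplication."""
    rho = bhattacharyya(hue_histogram(cand_hue), ref_hist)
    color_lik = np.exp(-lam * (1.0 - rho))    # color likelihood from distance
    return color_lik * local_score            # multiplicative fusion
```

When illumination shifts abruptly, the hue cue degrades but the SIFT-based `local_score` keeps discriminating, which is the complementarity the abstract relies on.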


Author(s):  
Dongnan Liu ◽  
Donghao Zhang ◽  
Yang Song ◽  
Chaoyi Zhang ◽  
Fan Zhang ◽  
...  

Automated detection and segmentation of individual nuclei in histopathology images is important for cancer diagnosis and prognosis. Due to the high variability of nuclei appearances and numerous overlapping objects, this task remains challenging. Deep-learning-based semantic and instance segmentation models have been proposed to address these challenges, but they tend to concentrate on either global or local features and hence still suffer from information loss. In this work, we propose a panoptic segmentation model that incorporates an auxiliary semantic segmentation branch alongside the instance branch to integrate global and local features. Furthermore, we design a feature map fusion mechanism in the instance branch and a new mask generator to prevent information loss. Experimental results on three different histopathology datasets demonstrate that our method outperforms state-of-the-art nuclei segmentation methods and popular semantic and instance segmentation models by a large margin.
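One simple way to picture semantic-to-instance feature fusion. The gating-plus-concatenation design below is an assumption for illustration, not the paper's actual fusion mechanism:

```python
import numpy as np

def panoptic_fuse(instance_feat, semantic_feat):
    """Toy fusion of an auxiliary semantic (global) map into an instance
    (local) branch: gate the instance features with the semantic
    foreground probability, then concatenate along channels."""
    fg = 1.0 / (1.0 + np.exp(-semantic_feat))      # sigmoid -> foreground prob
    gated = instance_feat * fg                     # suppress background responses
    return np.concatenate([instance_feat, gated], axis=-1)
```

The gated copy injects the global "is this tissue foreground?" signal without discarding the raw instance features, one way to avoid the information loss the abstract mentions.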


Symmetry ◽  
2021 ◽  
Vol 13 (10) ◽  
pp. 1838
Author(s):  
Chih-Wei Lin ◽  
Mengxiang Lin ◽  
Jinfu Liu

Classifying fine-grained categories (e.g., bird species, car, and aircraft types) is a crucial problem in image understanding and is difficult due to intra-class and inter-class variance. Most existing fine-grained approaches utilize various parts and local information of objects individually to improve classification accuracy but neglect the mechanism of feature fusion between the object (global) and the object's parts (local) to reinforce fine-grained features. In this paper, we present a novel framework, namely object–part registration–fusion Net (OR-Net), which considers the mechanism of registration and fusion between an object's (global) and its parts' (local) features for fine-grained classification. Our model learns fine-grained features from the global and local regions of the object and fuses these features with a registration mechanism to reinforce each region's characteristics in the feature maps. Precisely, OR-Net consists of: (1) a multi-stream feature extraction net, which generates features from the global and various local regions of objects; and (2) a registration–fusion feature module, which calculates the dimension and location relationships between global (object) regions and local (part) regions to generate registration information, and fuses the local features into the global features with this registration information to generate the fine-grained feature. Experiments executed on symmetric GPU devices with symmetric mini-batches verify that OR-Net surpasses the state-of-the-art approaches on the CUB-200-2011 (Birds), Stanford Cars, and Stanford Aircraft datasets.
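The registration step can be pictured as placing a part's local feature map at its computed location on the global feature map. This toy numpy sketch (fixed sizes, additive fusion) is an illustration of the idea, not OR-Net's actual module:

```python
import numpy as np

def register_and_fuse(global_feat, part_feat, box):
    """Place a part's (local) feature map at its registered location on
    the object's (global) feature map and add it there, reinforcing the
    part region. `box` = (row, col) top-left corner in global feature
    coordinates; the additive rule is an assumption."""
    r, c = box
    h, w = part_feat.shape[:2]
    fused = global_feat.copy()
    fused[r:r + h, c:c + w] += part_feat
    return fused
```

In the real network the "registration information" (relative dimension and location) would be computed from the detected part boxes rather than given directly.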


2021 ◽  
Vol 11 (5) ◽  
pp. 2174
Author(s):  
Xiaoguang Li ◽  
Feifan Yang ◽  
Jianglu Huang ◽  
Li Zhuo

Images captured in a real scene usually suffer from complex non-uniform degradation, which includes both global and local blurs. It is difficult to handle such complex blur variance with a unified processing model. We propose a global–local blur disentangling network, which can effectively extract global and local blur features via two branches. A phased training scheme is designed to disentangle the global and local blur features; that is, the branches are trained with task-specific datasets, respectively. A branch attention mechanism is introduced to dynamically fuse the global and local features. Complex blurry images are used to train the attention module and the reconstruction module. The visualized feature maps of the different branches indicate that our dual-branch network can decouple the global and local blur features efficiently. Experimental results show that the proposed dual-branch blur disentangling network improves both the subjective and objective deblurring effects for real captured images.
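A toy sketch of a branch-attention fusion step. The mean-pooling and softmax design here is an assumption; in the paper the attention module is learned from complex blurry images:

```python
import numpy as np

def branch_attention_fuse(feat_global, feat_local):
    """Derive per-branch scalar weights from pooled branch statistics,
    then blend the two branch feature maps dynamically."""
    scores = np.array([feat_global.mean(), feat_local.mean()])
    w = np.exp(scores) / np.exp(scores).sum()     # softmax over branches
    return w[0] * feat_global + w[1] * feat_local
```

The point of the mechanism is that the blend is input-dependent: an image dominated by global blur shifts weight toward the global branch, and vice versa.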


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2117
Author(s):  
Hui Han ◽  
Zhiyuan Ren ◽  
Lin Li ◽  
Zhigang Zhu

Automatic modulation classification (AMC) is playing an increasingly important role in spectrum monitoring and cognitive radio. As communication and electronic technologies develop, the electromagnetic environment becomes increasingly complex. High background noise levels and a large dynamic input range have become the key problems for AMC. This paper proposes a feature fusion scheme based on deep learning, which attempts to fuse features from different domains of the input signal to obtain a more stable and efficient representation of the signal modulation types. We consider the complementarity among features, which can be used to suppress the influence of background noise interference and the large dynamic range of the received (intercepted) signals. Specifically, the time-series signals are transformed into the frequency domain by the fast Fourier transform (FFT) and Welch power spectrum analysis, followed by a convolutional neural network (CNN) and a stacked auto-encoder (SAE), respectively, for detailed and stable frequency-domain feature representations. Considering the complementary information in the time domain, instantaneous amplitude (phase) statistics and higher-order cumulants (HOC) are extracted as the statistical features for fusion. Based on the fused features, a probabilistic neural network (PNN) is designed for automatic modulation classification. The simulation results demonstrate the superior performance of the proposed method. It is worth noting that the classification accuracy can reach 99.8% when the signal-to-noise ratio (SNR) is 0 dB.
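The multi-domain feature extraction can be sketched in numpy. The paper feeds these domains through a CNN/SAE before the PNN classifier, so the plain concatenation below is a simplification; the Welch routine is a numpy-only stand-in for a library implementation:

```python
import numpy as np

def welch_psd(x, seg=64):
    """Welch estimate: average windowed periodograms over
    half-overlapping segments (more stable than a single FFT)."""
    win = np.hanning(seg)
    hop = seg // 2
    segs = [x[i:i + seg] * win for i in range(0, len(x) - seg + 1, hop)]
    return np.mean([np.abs(np.fft.fft(s)) ** 2 for s in segs], axis=0)

def fused_features(x, nfft=64):
    """Concatenate frequency-domain and time-domain statistical features
    of a complex baseband signal into one fused vector."""
    x = np.asarray(x, dtype=complex)
    spec = np.abs(np.fft.fft(x, nfft))            # raw spectrum (detailed)
    spec /= max(spec.max(), 1e-12)
    psd = welch_psd(x, seg=nfft)                  # Welch spectrum (stable)
    psd /= max(psd.max(), 1e-12)
    amp = np.abs(x)                               # instantaneous amplitude
    c40 = np.mean(x ** 4) - 3 * np.mean(x ** 2) ** 2  # 4th-order cumulant
    stats = np.array([amp.mean(), amp.std(), np.abs(c40)])
    return np.concatenate([spec, psd, stats])
```

The cumulant and amplitude statistics vary little with additive noise level, which is why they complement the spectral features under low SNR.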


2009 ◽  
Vol 119 (3) ◽  
pp. 373-383 ◽  
Author(s):  
Tomohiro Ishizu ◽  
Tomoaki Ayabe ◽  
Shozo Kojima
