Attention-based Pyramid Dilated Lattice Network for Blind Image Denoising

Author(s):  
Mohammad Nikzad ◽  
Yongsheng Gao ◽  
Jun Zhou

Though convolutional neural networks (CNNs) with residual and dense aggregations have attracted much attention in image denoising, they are incapable of exploiting different levels of contextual information at every convolutional unit in order to infer different levels of noise components with a single model. In this paper, to overcome this shortcoming, we present a novel attention-based pyramid dilated lattice (APDL) architecture and investigate its capability for blind image denoising. The proposed framework effectively harnesses the advantages of residual and dense aggregations to achieve a favorable trade-off between performance, parameter efficiency, and test time. It also employs a novel pyramid dilated convolution strategy to effectively capture contextual information corresponding to different noise levels through the training of a single model. Our extensive experimental investigation verifies the effectiveness and efficiency of the APDL architecture for image denoising as well as JPEG artifact suppression tasks.
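The abstract does not spell out the APDL internals; as a rough illustration of the pyramid dilated convolution idea it builds on (parallel branches with growing dilation rates, here fused by a simple average; the kernel, rates, and fusion are hypothetical choices, not the paper's), a NumPy sketch:

```python
import numpy as np

def dilated_conv2d(img, kernel, dilation):
    """'Same'-padded 2D convolution with an effectively dilated kernel."""
    k = kernel.shape[0]
    eff = dilation * (k - 1) + 1          # effective receptive-field size
    pad = eff // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    for i in range(k):
        for j in range(k):
            di, dj = i * dilation, j * dilation
            out += kernel[i, j] * padded[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out

def pyramid_dilated_block(img, kernel, dilations=(1, 2, 4)):
    """Average parallel dilated-conv branches to fuse multi-scale context."""
    branches = [dilated_conv2d(img, kernel, d) for d in dilations]
    return np.mean(branches, axis=0)

img = np.random.rand(16, 16)
smooth = np.full((3, 3), 1.0 / 9.0)       # simple averaging kernel
out = pyramid_dilated_block(img, smooth)
print(out.shape)   # (16, 16)
```

Each branch sees the same input at a different receptive-field size, which is the mechanism the abstract credits for capturing context at multiple noise levels with one model.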

2014 ◽  
Vol 31 (01) ◽  
pp. 1450006 ◽  
Author(s):  
NING-RONG TAO ◽  
ZU-HUA JIANG ◽  
LU ZHEN

For shipbuilding, spatial scheduling and workforce assignment are two important issues in the operation management of block assembly shops. Spatial scheduling decides where and when to assemble blocks, while workforce assignment assigns working teams to the blocks. Traditionally, these decisions are made separately, at different levels of the production management framework; combining them introduces additional complexity and new problems. This paper proposes an approach that jointly solves the spatial scheduling problem and the workforce assignment problem. The objective is to improve coordination among working teams and increase the productivity of assembly shops. A spatial layout strategy is designed according to extreme-point and deep-bottom-left strategies. A genetic-algorithm-based solution method is then developed on top of the spatial layout strategy and several assignment and sequencing rules. Computational experiments are conducted to evaluate the performance of the presented algorithm and compare it with other commonly used methods. The computational results validate the effectiveness and efficiency of the proposed algorithm.
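The abstract names extreme-point and deep-bottom-left layout strategies without detail; a minimal sketch of the classic bottom-left placement heuristic (integer grid, hypothetical block sizes, and no time/workforce dimension — only the spatial part) could look like:

```python
def bottom_left_place(blocks, width):
    """Greedy bottom-left placement: for each (w, h) block, scan candidate
    positions bottom-to-top, left-to-right, and take the first free spot."""
    placed = []  # (x, y, w, h)

    def overlaps(x, y, w, h):
        return any(x < px + pw and px < x + w and y < py + ph and py < y + h
                   for px, py, pw, ph in placed)

    for w, h in blocks:
        best = None
        y = 0
        while best is None:
            for x in range(width - w + 1):
                if not overlaps(x, y, w, h):
                    best = (x, y)
                    break
            y += 1
        placed.append((*best, w, h))
    return placed

layout = bottom_left_place([(4, 2), (3, 3), (4, 2)], width=8)
print(layout)   # [(0, 0, 4, 2), (4, 0, 3, 3), (0, 2, 4, 2)]
```

The paper's genetic algorithm would then search over block orderings and team assignments, using a layout rule like this one to decode each chromosome into a feasible floor plan.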


2009 ◽  
Vol 9 (2) ◽  
pp. 152-164 ◽  
Author(s):  
Andreas Kjellin ◽  
Lars Winkler Pettersson ◽  
Stefan Seipel ◽  
Mats Lind

New technologies and techniques allow novel kinds of visualizations, and different types of 3D visualizations are constantly being developed. We propose a categorization of 3D visualizations and, based on this categorization, evaluate two versions of a space-time cube that show discrete spatiotemporal data. The two visualization techniques used are a head-tracked stereoscopic visualization (‘strong 3D’) and a static monocular visualization (‘weak 3D’). In terms of effectiveness and efficiency, the weak 3D visualization is as good as the strong 3D one, so advanced 3D visualizations may not be necessary for these kinds of tasks.


2020 ◽  
Vol 58 (4) ◽  
pp. 2516-2529 ◽  
Author(s):  
Alessandro Maffei ◽  
Juan M. Haut ◽  
Mercedes Eugenia Paoletti ◽  
Javier Plaza ◽  
Lorenzo Bruzzone ◽  
...  

2020 ◽  
Vol 17 (4) ◽  
pp. 1770-1780
Author(s):  
B. Chinna Rao ◽  
M. Madhavilatha

This paper develops a new image denoising framework based on the Dual-Tree Complex Wavelet Transform and edge-based patch grouping. The proposed patch grouping mechanism considers photometric features along with gradient features to cluster the image patches into groups with similar properties. Furthermore, the K-means algorithm is employed for patch grouping instead of a plain Euclidean distance metric. An adaptive thresholding mechanism is also developed to remove the noise with less information loss at edge features. Extensive simulations are carried out in MATLAB on different grayscale images under varying noise levels and noise types, and performance is measured with metrics such as PSNR and SSIM. The simulation results reveal the outstanding performance of the proposed approach, both in preserving edge features and in improving image quality through efficient noise removal.
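The specific photometric and gradient features are not given in the abstract; a toy sketch of the patch-grouping step (4x4 patches, mean intensity plus mean gradient magnitude as the two features, plain Lloyd's K-means — all of these are hypothetical choices):

```python
import numpy as np

def patch_features(img, size=4):
    """Photometric (mean intensity) and gradient-energy features per patch."""
    feats = []
    for i in range(0, img.shape[0] - size + 1, size):
        for j in range(0, img.shape[1] - size + 1, size):
            p = img[i:i + size, j:j + size].astype(float)
            gy, gx = np.gradient(p)
            feats.append([p.mean(), np.hypot(gx, gy).mean()])
    return np.array(feats)

def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd's algorithm: assign to the nearest centre, then recentre."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centres[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if (labels == c).any():
                centres[c] = X[labels == c].mean(axis=0)
    return labels

rng = np.random.default_rng(1)
img = rng.random((16, 16))
labels = kmeans(patch_features(img), k=3)
print(len(labels))   # 16 patches, one cluster label each
```

Grouping on intensity and gradient jointly is what lets edge-rich patches be thresholded differently from smooth ones, which is the point of the adaptive thresholding the abstract describes.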


2008 ◽  
Vol 19 (06) ◽  
pp. 481-495 ◽  
Author(s):  
Jeffrey Weihing ◽  
Frank E. Musiek

Background: A common complaint of patients with (central) auditory processing disorder is difficulty understanding speech in noise. Because binaural hearing improves speech understanding in compromised listening situations, quantifying this ability in different levels of noise may yield a measure with high clinical utility. Purpose: To examine binaural enhancement (BE) and binaural interaction (BI) in different levels of noise for the auditory brainstem response (ABR) and middle latency response (MLR) in a normal hearing population. Research Design: An experimental study in which subjects were exposed to a repeated measures design. Study Sample: Fifteen normal hearing female adults served as subjects. Normal hearing was assessed by pure-tone audiometry and otoacoustic emissions. Intervention: All subjects were exposed to 0, 20, and 35 dB effective masking (EM) of white noise during monotic and diotic click stimulation. Data Collection and Analysis: ABR and MLR responses were simultaneously acquired. Peak amplitudes and latencies were recorded and compared across conditions using a repeated measures analysis of variance (ANOVA). Results: For BE, ABR results showed enhancement at 0 and 20 dB EM, but not at 35 dB EM. The MLR showed BE at all noise levels, but the degree of BE decreased with increasing noise level. For BI, both the ABR and MLR showed BI at all noise levels. However, the degree of BI again decreased with increasing noise level for the MLR. Conclusions: The results demonstrate the ability to measure BE simultaneously in the ABR and MLR in up to 20 dB of EM noise, and BI in up to 35 dB of EM noise. Results also suggest that ABR neural generators may respond to noise differently than MLR generators.


2018 ◽  
Vol 11 (2) ◽  
pp. 625-634 ◽  
Author(s):  
Anchal Anchal ◽  
Sumit Budhiraja ◽  
Bhawna Goyal ◽  
Ayush Dogra ◽  
Sunil Agrawal

Image denoising is one of the fundamental image processing problems. Images are corrupted by additive white Gaussian noise during acquisition and transmission over analog circuits. In medical images, noise can be perceived as tumours or artefacts and can lead to misdiagnosis. Similarly, in satellite images the visibility of the scene is significantly degraded by noise, so image denoising is of vital importance. Many denoising mechanisms in the literature work well at lower noise levels, but their performance degrades as the noise level increases. Applying stronger filtering degrades or removes edges, and with them significant information, from the image. In this paper, we propose an algorithm that addresses image denoising at higher noise levels while preserving edge information. The standard bilateral filter does not provide good results at higher noise levels; hence we propose combining robust bilateral filtering with anisotropic diffusion filtering, as anisotropic diffusion smooths homogeneous regions without blurring the edges. Experimental results show that the proposed method performs better at higher noise levels in terms of PSNR and visual quality.
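The robust bilateral filter itself is not specified in the abstract; as an illustration of the anisotropic diffusion half of the proposed combination, a minimal Perona-Malik sketch (the iteration count and the kappa/lambda parameters are hypothetical, not the paper's settings):

```python
import numpy as np

def perona_malik(img, iters=10, kappa=0.1, lam=0.2):
    """Perona-Malik anisotropic diffusion: smooths homogeneous regions while
    the edge-stopping function g suppresses flow across strong gradients."""
    g = lambda d: np.exp(-((d / kappa) ** 2))
    u = img.astype(float).copy()
    for _ in range(iters):
        n = np.vstack([u[:1], u[:-1]]) - u        # difference to north neighbour
        s = np.vstack([u[1:], u[-1:]]) - u        # ... south
        e = np.hstack([u[:, 1:], u[:, -1:]]) - u  # ... east
        w = np.hstack([u[:, :1], u[:, :-1]]) - u  # ... west
        u = u + lam * (g(n) * n + g(s) * s + g(e) * e + g(w) * w)
    return u

rng = np.random.default_rng(0)
clean = np.zeros((16, 16)); clean[:, 8:] = 1.0    # a sharp vertical edge
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
den = perona_malik(noisy)
```

Small neighbour differences (noise) pass through g nearly unattenuated and get averaged away, while the large difference at the edge drives g toward zero, which is the edge-preserving behaviour the abstract pairs with bilateral filtering.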


2016 ◽  
Vol 834 ◽  
pp. 211-216
Author(s):  
Ion Cristian Braga ◽  
Anișor Nedelcu ◽  
Razvan Udroiu

Organizational performance depends on the development of the processes in the organization. In automotive manufacturing, based on the requirements of referential standards such as ISO 9001 or ISO/TS 16949, the process map describes very clearly all the processes and their inter-correlations. To reach the level of performance expected by top management, all processes are monitored for effectiveness and efficiency using key indicators, but the level of performance is also directly linked to the speed of response to quality issues. Organizations, however, run their processes with different personnel structures and different levels of reporting and leadership. This paper presents the advantages of developing FLRQI (Fast Response on Layers at Quality Issues), which is easy to communicate and escalate from bottom to top while developing and coaching personnel from top to bottom; in the end, having fast response to issues implemented in all departments is measured in better quality, lower cost, and on-time delivery. The paper proposes a new concept for implementing a culture of fast response that is easy to apply in multinational companies because it can be tailored to each process structure.


Sensors ◽  
2020 ◽  
Vol 20 (20) ◽  
pp. 5759 ◽  
Author(s):  
Jiacai Liao ◽  
Libo Cao ◽  
Wei Li ◽  
Xiaole Luo ◽  
Xiexing Feng

Linear feature extraction is crucial for special objects in semantic segmentation networks, such as slot markings and lanes. Objects with linear characteristics depend on global contextual information, and it is very difficult to capture their complete extent in semantic segmentation tasks. To improve the linear feature extraction ability of semantic segmentation networks, we propose introducing dilated convolution with vertical and horizontal kernels (DVH) into the feature extraction stage. We also examine the outcome of placing the vertical and horizontal kernels at different positions in the networks. Our networks are trained on the SS dataset, the TuSimple lane dataset, and the Massachusetts Roads dataset, which consist of slot marking, lane, and road images. The results show that our method improves the accuracy of slot marking segmentation on the SS dataset by 2%. Compared with other state-of-the-art methods, our UnetDVH-Linear (v1) obtains better accuracy on the TuSimple Benchmark Lane Detection Challenge, with a value of 97.53%. To demonstrate the generalization of our models, road segmentation experiments were performed on aerial images; without data augmentation, the segmentation accuracy of our model on the Massachusetts Roads dataset is 95.3%. Moreover, our models perform better than other models when trained with the same loss function and experimental settings. These experimental results show that dilated convolution with vertical and horizontal kernels enhances a neural network's linear feature extraction.
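The DVH kernel shapes and placements are design choices of the paper that the abstract does not enumerate; a rough NumPy sketch of one dilated vertical-plus-horizontal filtering step (the 1-D gradient taps and dilation rate are hypothetical):

```python
import numpy as np

def dvh_conv(img, taps, dilation):
    """Apply a 1-D kernel along rows (horizontal) and columns (vertical),
    each with the given dilation, and sum the two responses."""
    k = len(taps)
    pad = dilation * (k // 2)
    h = np.zeros_like(img, dtype=float)
    v = np.zeros_like(img, dtype=float)
    pr = np.pad(img, ((0, 0), (pad, pad)), mode="edge")  # pad columns
    pc = np.pad(img, ((pad, pad), (0, 0)), mode="edge")  # pad rows
    for t, wt in enumerate(taps):
        off = t * dilation
        h += wt * pr[:, off:off + img.shape[1]]
        v += wt * pc[off:off + img.shape[0], :]
    return h + v

img = np.zeros((8, 8))
img[:, 4] = 1.0                                   # a vertical "lane" line
out = dvh_conv(img, taps=[-1.0, 0.0, 1.0], dilation=1)
```

A vertical structure like a lane marking produces a strong horizontal-kernel response on either side of the line and none along it, which is why elongated 1-D kernels suit these linear objects better than square ones.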


2020 ◽  
Vol 2020 (4) ◽  
pp. 292-1-292-13
Author(s):  
Yufang Sun ◽  
Jan P. Allebach

Embedding information into a printed image is useful in many applications, in which reliable channel encoding/decoding systems are crucial, since there is information loss and error propagation during transmission. Circular coding is a general two-dimensional channel coding method that allows data recovery from only a cropped portion of the code, without knowledge of the carrier image. While some traditional methods add redundancy bits to extend the original message, this method embeds the message into image rows in a repeated and shifted manner with redundancy, then uses majority votes over the redundant bits for recovery. In this paper, we develop a closed-form formula, based on probabilistic modeling, to predict the decoding success rate in a noisy channel under various transmission noise levels. The theoretical result is validated with simulations. This result enables optimal parameter selection in encoder and decoder system design, as well as decoding-rate prediction under different levels of transmission error.
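The exact circular-coding layout is the paper's own; a toy sketch of the repeat-shift-and-majority-vote idea the abstract describes (row count, shift step, and the injected bit flips are purely illustrative):

```python
from collections import Counter

def encode(bits, rows, shift=1):
    """Repeat the message in every row, cyclically shifted row by row."""
    n = len(bits)
    return [[bits[(j + r * shift) % n] for j in range(n)] for r in range(rows)]

def decode(grid, shift=1):
    """Undo each row's shift, then take a per-position majority vote."""
    n = len(grid[0])
    votes = [[] for _ in range(n)]
    for r, row in enumerate(grid):
        for j, b in enumerate(row):
            votes[(j + r * shift) % n].append(b)
    return [Counter(v).most_common(1)[0][0] for v in votes]

msg = [1, 0, 1, 1, 0]
grid = encode(msg, rows=5)
grid[0][0] ^= 1          # simulate two transmission errors
grid[1][2] ^= 1
print(decode(grid) == msg)   # True
```

With each message position voted on by every row, isolated bit errors are outvoted; the paper's closed-form success-rate formula would model exactly when the error count at some position exceeds half the votes.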

