Using the Guided Fireworks Algorithm for Local Backlight Dimming

2019 ◽  
Vol 9 (1) ◽  
pp. 129 ◽  
Author(s):  
Tao Zhang ◽  
Xin Zhao ◽  
Yifei Wang ◽  
Qin Zeng

Local backlight dimming is a promising display technology that improves visual quality and reduces the power consumption of device displays. To set the optimal backlight luminance, it is important to design high-performance local dimming algorithms. In this paper, we focused on improving the quality of the displayed image and treated local backlight dimming as an optimization problem. To better evaluate image quality, we used the structural similarity (SSIM) index as the image quality metric and built a model for the local dimming problem. To solve this optimization problem, we designed a local dimming algorithm based on the Fireworks Algorithm (FWA), a recent evolutionary computation (EC) algorithm. To further improve solution quality, we introduced a guiding strategy into the FWA and proposed an improved algorithm named the Guided Fireworks Algorithm (GFWA). Experimental results showed that the GFWA outperformed the Look-Up Table (LUT) algorithm, the Improved Shuffled Frog Leaping Algorithm (ISFLA), and the FWA in local backlight dimming.
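The SSIM index used as the objective above can be illustrated with a minimal, self-contained sketch. This is a simplified single-window variant (the published metric averages SSIM over local sliding windows); the constants C1 and C2 follow the usual convention:

```python
import numpy as np

def ssim_global(x, y, data_range=255.0):
    """Single-window SSIM between two grayscale images.

    Simplified whole-image variant; the published index averages
    SSIM over local sliding windows.
    """
    C1 = (0.01 * data_range) ** 2
    C2 = (0.03 * data_range) ** 2
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    num = (2.0 * mu_x * mu_y + C1) * (2.0 * cov_xy + C2)
    den = (mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2)
    return num / den
```

An identical pair of images scores exactly 1.0; any luminance, contrast, or structure mismatch pulls the score below 1.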




2019 ◽  
Vol 29 (1) ◽  
pp. 130-140 ◽  
Author(s):  
Florian Gerland ◽  
Alexander Wetzel ◽  
Thomas Schomberg ◽  
Olaf Wünsch ◽  
Bernhard Middendorf

Abstract Modern concretes such as ultra-high performance concrete (UHPC) show excellent strength properties combined with favorable flow properties. However, the flow properties depend strongly on process parameters during production (temperature, humidity, etc.), and also change sensitively even with slight variations in the mixture. In order to ensure the desired processing of the fluid-like material and consistent process quality, the flow properties of the concrete must be evaluated quantitatively and objectively. The usual evaluation of measurements from concrete rheometers, for example of the ball probe system type, does not allow the direct determination of the objective material parameters, yield stress and plastic viscosity, of the sample. We developed a simulation-based method for the evaluation of rheometric measurements of fine-grained high performance concretes such as self-compacting concrete (SCC) and UHPC. The method is based on a dimensional analysis for ball measuring systems. Through numerical parameter studies, we were able to quantitatively describe the identified relationship between measured quantities and material parameters for two devices of this type. The evaluation method is based on the Bingham model. With this method it is possible to measure both the yield stress and the plastic viscosity of the fresh sample simultaneously. Device independence of the evaluation process is proven and an application to fiber-reinforced UHPC is presented.
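The Bingham model underlying the evaluation method relates shear stress to shear rate as tau = tau_y + mu_p * gamma_dot, so both parameters can be recovered from a flow curve by a linear least-squares fit. A minimal sketch, with synthetic data standing in for rheometer measurements (the parameter values are illustrative):

```python
import numpy as np

def fit_bingham(shear_rate, shear_stress):
    """Least-squares fit of the Bingham model tau = tau_y + mu_p * gamma_dot.

    Returns (yield stress tau_y [Pa], plastic viscosity mu_p [Pa s]).
    """
    mu_p, tau_y = np.polyfit(shear_rate, shear_stress, 1)
    return tau_y, mu_p

# Synthetic flow curve for a Bingham fluid: tau_y = 50 Pa, mu_p = 20 Pa s
gamma_dot = np.linspace(0.1, 10.0, 25)
tau = 50.0 + 20.0 * gamma_dot
tau_y, mu_p = fit_bingham(gamma_dot, tau)
```

On noise-free linear data the fit recovers both parameters exactly; with real rheometer data the residuals indicate how well the Bingham assumption holds.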



2020 ◽  
Vol 30 (1) ◽  
pp. 240-257
Author(s):  
Akula Suneetha ◽  
E. Srinivasa Reddy

Abstract In the data collection phase, digital images are captured using sensors and are often contaminated by noise (undesired random signal). In digital image processing, enhancing image quality and reducing noise are central tasks. An effective image denoising method preserves image edges while smoothing flat regions. Several adaptive filters (median filter, Gaussian filter, fuzzy filter, etc.) have been utilized to improve the smoothness of digital images, but these filters fail to preserve image edges while removing noise. In this paper, a modified fuzzy set filter is proposed to eliminate noise and restore the digital image. In the conventional fuzzy set filter, sixteen fuzzy rules are generated to find the noisy pixels in the digital image. In the modified fuzzy set filter, a set of twenty-four fuzzy rules is generated, using four additional pixel locations, to determine the noisy pixels in the digital image. The eight additional fuzzy rules ease the process of deciding whether an image pixel requires averaging or not. In this scenario, the input digital images were collected from an underwater photography fish dataset. The efficiency of the modified fuzzy set filter was evaluated at varying degrees of Gaussian noise (0.01, 0.03, and 0.1 noise levels). For performance evaluation, Structural Similarity (SSIM), Mean Structural Similarity (MSSIM), Mean Square Error (MSE), Normalized Mean Square Error (NMSE), Universal Image Quality Index (UIQI), Peak Signal to Noise Ratio (PSNR), and Visual Information Fidelity (VIF) were used. The experimental results showed that the modified fuzzy set filter improved the PSNR value by up to 2-3 dB, MSSIM by up to 0.03-0.12, and NMSE by up to 0.1-0.38 compared with traditional filtering techniques.
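Several of the reference metrics used in the evaluation (MSE, NMSE, PSNR) are straightforward to compute; a minimal sketch (the function names and the 255 data-range default are illustrative, not the authors' code):

```python
import numpy as np

def mse(ref, img):
    """Mean squared error between a reference and a test image."""
    ref, img = np.asarray(ref, np.float64), np.asarray(img, np.float64)
    return float(np.mean((ref - img) ** 2))

def nmse(ref, img):
    """Normalized MSE: squared error normalized by reference energy."""
    ref, img = np.asarray(ref, np.float64), np.asarray(img, np.float64)
    return float(np.sum((ref - img) ** 2) / np.sum(ref ** 2))

def psnr(ref, img, data_range=255.0):
    """Peak signal-to-noise ratio in dB (infinite for identical images)."""
    m = mse(ref, img)
    return float("inf") if m == 0.0 else 10.0 * np.log10(data_range ** 2 / m)
```

For a constant offset of 5 gray levels, for example, MSE is 25 and PSNR is about 34.15 dB at an 8-bit data range.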



2009 ◽  
Author(s):  
Naotoshi Fujita ◽  
Asumi Yamazaki ◽  
Katsuhiro Ichikawa ◽  
Yoshie Kodera


2021 ◽  
Vol 63 (3) ◽  
pp. 266-271
Author(s):  
Hammoudi Abderazek ◽  
Ferhat Hamza ◽  
Ali Riza Yildiz ◽  
Sadiq M. Sait

Abstract In this study, two recent algorithms, the whale optimization algorithm and moth-flame optimization, are used to optimize spur gear design. The objective function is the minimization of the total weight of the spur gear pair. Moreover, the optimization problem is subject to constraints on the main kinematic and geometric conditions, as well as on the resistance of the material of the gear system. The comparison between moth-flame optimization (MFO), the whale optimization algorithm (WOA), and previous studies indicates that the final results obtained from both algorithms reduce gear weight by 1.05%. MFO and the WOA are also compared with four additional swarm algorithms. The experimental results indicate that the algorithms introduced here, in particular MFO, outperform the four other methods in terms of solution quality, robustness, and success rate.
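The whale optimization algorithm compared above can be sketched in a simplified form. This toy version minimizes a 2-D sphere function rather than the constrained gear-weight objective, and uses a per-dimension variant of the standard encircling/spiral updates; all parameter defaults are illustrative:

```python
import numpy as np

def woa_minimize(f, dim=2, n_whales=30, iters=200, lb=-10.0, ub=10.0, seed=0):
    """Simplified Whale Optimization Algorithm on a box-constrained problem."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_whales, dim))
    fit = np.array([f(x) for x in X])
    best = X[fit.argmin()].copy()
    best_fit = float(fit.min())
    for t in range(iters):
        a = 2.0 * (1.0 - t / iters)            # decreases linearly from 2 to 0
        for i in range(n_whales):
            A = 2.0 * a * rng.random(dim) - a  # coefficient vectors
            C = 2.0 * rng.random(dim)
            if rng.random() < 0.5:
                if np.all(np.abs(A) < 1.0):    # exploitation: encircle the best
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                          # exploration: follow a random whale
                    rand = X[rng.integers(n_whales)]
                    X[i] = rand - A * np.abs(C * rand - X[i])
            else:                              # spiral update around the best
                l = rng.uniform(-1.0, 1.0, dim)
                X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2.0 * np.pi * l) + best
            X[i] = np.clip(X[i], lb, ub)
            fi = float(f(X[i]))
            if fi < best_fit:
                best_fit, best = fi, X[i].copy()
    return best, best_fit
```

Usage: `best, val = woa_minimize(lambda x: float(np.sum(x ** 2)))` drives the sphere objective close to zero. A real gear-design objective would additionally need penalty terms for the kinematic, geometric, and material constraints.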



2021 ◽  
Vol 20 (5s) ◽  
pp. 1-25
Author(s):  
Michael Canesche ◽  
Westerley Carvalho ◽  
Lucas Reis ◽  
Matheus Oliveira ◽  
Salles Magalhães ◽  
...  

Coarse-grained reconfigurable architecture (CGRA) mapping involves three main steps: placement, routing, and timing. The mapping is an NP-complete problem, and a common strategy is to decouple this process into its independent steps. This work focuses on the placement step, and its aim is to propose a technique that is both reasonably fast and leads to high-performance solutions. Furthermore, a near-optimal placement simplifies the subsequent routing and timing steps. Exact solutions cannot find placements in a reasonable execution time as input designs increase in size. Heuristic solutions include meta-heuristics, such as Simulated Annealing (SA), and fast, straightforward greedy heuristics based on graph traversal. However, as these approaches are probabilistic and have a large design space, it is not easy to provide both run-time efficiency and good solution quality. We propose a graph traversal heuristic that provides the best of both: high-quality placements similar to SA and the execution time of graph traversal approaches. Our placement introduces novel ideas based on a "you only traverse twice" (YOTT) approach that performs a two-step graph traversal. The first traversal generates annotated data to guide the second step, which greedily performs the placement, node by node, aided by the annotated data and target architecture constraints. We introduce three new concepts to implement this technique: I/O and reconvergence annotation, degree matching, and look-ahead placement. Our analysis of this approach explores the placement execution time/quality trade-offs. We point out insights on how to analyze graph properties during dataflow mapping. Our results show that YOTT is 60.6×, 9.7×, and 2.3× faster than a high-quality SA, bounding box SA VPR, and multi-single traversal placements, respectively.
Furthermore, YOTT reduces the average wire length and the maximal FIFO size (additional timing requirement on CGRAs) to avoid delay mismatches in fully pipelined architectures.
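The decoupled, annotation-guided placement idea can be illustrated with a toy two-pass sketch (not the authors' YOTT implementation): a first traversal annotates nodes with their BFS depth, and a second traversal greedily places them on a grid, guided by the annotations. The graph is assumed connected from node 0:

```python
from collections import deque
from itertools import product

def place_two_pass(edges, n_nodes, rows, cols):
    """Toy two-pass placement of a dataflow graph onto a rows x cols grid.

    Pass 1 annotates each node with its BFS depth from node 0; pass 2
    places nodes in depth order at the free cell minimizing the total
    Manhattan distance to already-placed neighbors.
    """
    adj = [[] for _ in range(n_nodes)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    # Pass 1: annotation traversal (BFS depth from node 0)
    depth = [None] * n_nodes
    depth[0] = 0
    q = deque([0])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if depth[v] is None:
                depth[v] = depth[u] + 1
                q.append(v)
    order = sorted(range(n_nodes), key=lambda n: depth[n])
    # Pass 2: greedy placement guided by the annotations
    free = set(product(range(rows), range(cols)))
    pos = {}
    for n in order:
        placed_nb = [pos[v] for v in adj[n] if v in pos]
        def cost(cell):
            return sum(abs(cell[0] - r) + abs(cell[1] - c) for r, c in placed_nb)
        cell = min(free, key=cost)
        pos[n] = cell
        free.remove(cell)
    return pos
```

On a 4-node chain mapped to a 2x2 grid, every edge lands on adjacent cells (total wirelength 3), which is what the depth-ordered greedy pass is designed to achieve; real CGRA placement must additionally honor routing resources and timing constraints.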



Fast track article for IS&T International Symposium on Electronic Imaging 2021: Image Quality and System Performance XVIII proceedings.



2004 ◽  
Vol 13 (4) ◽  
pp. 600-612 ◽  
Author(s):  
Z. Wang ◽  
A.C. Bovik ◽  
H.R. Sheikh ◽  
E.P. Simoncelli


2012 ◽  
Vol 155-156 ◽  
pp. 440-444
Author(s):  
He Yan ◽  
Xiu Feng Wang

The JPEG2000 algorithm was developed on the basis of DWT techniques, showing how results achieved in different areas of information technology can be applied to enhance performance. Wavelets have become a popular technology for information redistribution in high-performance image compression algorithms. Lossy compression algorithms sacrifice perfect image reconstruction in favor of reduced storage requirements and improved compression rates while minimizing the loss of image quality.
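A DWT-based lossy scheme of the kind described can be sketched with a one-level Haar transform plus coefficient thresholding. This is a toy stand-in for JPEG2000 (which uses multi-level biorthogonal wavelets, quantization, and entropy coding); even image dimensions are assumed:

```python
import numpy as np

def haar2d(img):
    """One level of the orthonormal 2-D Haar wavelet transform."""
    a = img.astype(np.float64)
    lo = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)   # row-wise averages
    hi = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)   # row-wise details
    a = np.hstack([lo, hi])
    lo = (a[0::2, :] + a[1::2, :]) / np.sqrt(2)   # column-wise averages
    hi = (a[0::2, :] - a[1::2, :]) / np.sqrt(2)   # column-wise details
    return np.vstack([lo, hi])

def ihaar2d(coef):
    """Inverse of haar2d (exact reconstruction)."""
    a = coef
    h = a.shape[0] // 2
    lo, hi = a[:h, :], a[h:, :]
    rows = np.empty_like(a)
    rows[0::2, :] = (lo + hi) / np.sqrt(2)
    rows[1::2, :] = (lo - hi) / np.sqrt(2)
    w = rows.shape[1] // 2
    lo, hi = rows[:, :w], rows[:, w:]
    out = np.empty_like(rows)
    out[:, 0::2] = (lo + hi) / np.sqrt(2)
    out[:, 1::2] = (lo - hi) / np.sqrt(2)
    return out

def lossy_compress(img, keep=0.1):
    """Keep only the largest `keep` fraction of Haar coefficients (by magnitude)."""
    c = haar2d(img)
    thresh = np.quantile(np.abs(c), 1.0 - keep)
    c[np.abs(c) < thresh] = 0.0
    return ihaar2d(c)
```

With all coefficients kept, reconstruction is exact; discarding small coefficients trades a controlled quality loss for the sparsity that an entropy coder would then exploit.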


