CONIC: Contour Optimized Non-Iterative Clustering Superpixel Segmentation

2021 ◽  
Vol 13 (6) ◽  
pp. 1061
Author(s):  
Cheng Li ◽  
Baolong Guo ◽  
Nannan Liao ◽  
Jianglei Gong ◽  
Xiaodong Han ◽  
...  

Superpixels group perceptually similar pixels into homogeneous sub-regions that act as meaningful features for advanced tasks. However, existing algorithms still face a contradiction between color homogeneity and shape regularity, which hinders their performance in further processing. In this work, a novel Contour Optimized Non-Iterative Clustering (CONIC) method is presented. It incorporates a contour prior into the non-iterative clustering framework, aiming to provide a balanced trade-off between segmentation accuracy and visual uniformity. After the conventional grid-sampling initialization, a regional inter-seed correlation is first established by the joint color-spatial-contour distance. It then guides a global redistribution of all seeds, iteratively modifying their number and positions. This prevents the clustering from falling into a local optimum and yields exactly the number of superpixels the user expects. During the clustering process, an improved feature distance is elaborated to measure color similarity; it incorporates a contour constraint and prevents boundary pixels from being wrongly assigned. Consequently, superpixels acquire better visual quality and their boundaries are more consistent with the object contours. Experimental results show that CONIC performs as well as, or even better than, state-of-the-art superpixel segmentation algorithms in terms of both efficiency and segmentation effects.
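The abstract does not give the exact form of the joint color-spatial-contour distance, so the sketch below is only a plausible SLIC-style combination with an added contour term; the parameter names (`S` grid interval, `m` compactness, `w` contour weight) and the additive contour penalty are assumptions, not the paper's formula.

```python
import math

def joint_distance(c1, c2, p1, p2, contour, S=20.0, m=10.0, w=1.0):
    """Hypothetical joint color-spatial-contour distance between two seeds.

    c1, c2: CIELAB color triples; p1, p2: (x, y) positions;
    contour: edge strength between the seeds, assumed in [0, 1].
    S: sampling grid interval, m: compactness weight, w: contour weight.
    """
    d_color = math.dist(c1, c2)          # color dissimilarity
    d_space = math.dist(p1, p2)          # spatial separation
    # SLIC-style fusion, plus a penalty for crossing strong contours.
    return math.sqrt(d_color ** 2 + (d_space / S) ** 2 * m ** 2) + w * contour
```

Under such a measure, two seeds separated by a strong object contour look "farther apart" than color and position alone would suggest, which is what lets the redistribution step respect object boundaries.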

Author(s):  
M. Li ◽  
H. Zou ◽  
Q. Ma ◽  
J. Sun ◽  
X. Cao ◽  
...  

Abstract. Superpixel segmentation for PolSAR images can drastically reduce the number of primitives for subsequent interpretation while mitigating the impact of speckle noise. However, traditional superpixel segmentation methods for PolSAR images focus only on boundary adherence; the benefit of superpixel segmentation is lost when accuracy is improved at the expense of computational efficiency. To solve this problem, this paper proposes a novel superpixel segmentation algorithm for PolSAR images based on hexagon initialization and edge refinement. First, the PolSAR image is initialized with a hexagonal distribution, which theoretically reduces by 30% the complexity of searching pixels for relabelling in local regions. Second, all pixels in the PolSAR image are initialized as unstable pixels based on the hexagonal superpixels, which boosts segmentation performance in heterogeneous regions and effectively retains all potential edge pixels. Third, the revised Wishart distance and the spatial distance are integrated into a single distance measure used to relabel all unstable pixels. Finally, a postprocessing procedure based on a dissimilarity measure is applied to generate the final superpixels. Extensive experiments conducted on both simulated and real-world PolSAR images demonstrate the superiority and effectiveness of the proposed algorithm in terms of computational efficiency and segmentation accuracy, compared to three other state-of-the-art methods.
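The abstract names a "revised Wishart distance" without defining it; the sketch below shows only the classic Wishart-derived dissimilarity between a pixel's coherency matrix and a cluster's mean coherency matrix, which such revisions typically start from. The function name and the assumption of Hermitian positive-definite inputs are ours, not the paper's.

```python
import numpy as np

def wishart_distance(T, V):
    """Classic Wishart dissimilarity d(T, V) = ln|V| + Tr(V^-1 T) between a
    pixel coherency matrix T and a superpixel mean coherency matrix V
    (both Hermitian positive definite). For fixed T, it is minimized when
    V = T, so smaller values mean the pixel fits the superpixel better."""
    sign, logdet = np.linalg.slogdet(V)           # stable log-determinant
    return float(logdet + np.trace(np.linalg.inv(V) @ T).real)
```

A practical relabelling rule would combine this with a normalized spatial distance, assigning each unstable pixel to the neighbouring superpixel minimizing the weighted sum.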


The firefly algorithm (FA) is a meta-heuristic stochastic search algorithm with strong robustness and easy implementation. However, it also has some shortcomings, such as the "oscillation" phenomenon caused by too many attractions, which makes convergence too slow or premature. In the original FA, the full-attraction model consumes a large number of fitness evaluations, so the time complexity is high. Therefore, in this paper, a novel firefly algorithm (EMDmFA) based on a Euclidean metric (EM) and dimensional mutation (DM) is proposed. The EM strategy makes each firefly learn from its nearest neighbor; when the firefly is better than its neighbors, it learns from the best individuals in the population. This improves the FA attraction model and dramatically reduces the computational time complexity. At the same time, the DM strategy improves the algorithm's ability to escape local optima. The experimental results show that the proposed EMDmFA significantly improves solution accuracy and outperforms most state-of-the-art FA variants.
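The EM strategy described above can be sketched as a single move step. This is our reading of the abstract, not the paper's code: firefly i is attracted only to its nearest neighbour when that neighbour is brighter (lower fitness, for minimisation), and otherwise to the best other individual, replacing the original full-attraction model; the attraction law beta0*exp(-gamma*r^2) is the standard FA form.

```python
import math, random

def em_move(pop, fit, i, beta0=1.0, gamma=1.0, alpha=0.1):
    """One EM-strategy move for firefly i (sketch under assumptions).

    pop: list of positions (lists of floats); fit: fitness values
    (lower is better); alpha scales the random perturbation."""
    x = pop[i]
    others = [j for j in range(len(pop)) if j != i]
    nearest = min(others, key=lambda j: math.dist(x, pop[j]))
    # Learn from the nearest neighbour if it is brighter, else from the best.
    target = nearest if fit[nearest] < fit[i] else min(others, key=lambda j: fit[j])
    t = pop[target]
    r2 = sum((a - b) ** 2 for a, b in zip(x, t))
    beta = beta0 * math.exp(-gamma * r2)          # attraction decays with distance
    return [a + beta * (b - a) + alpha * (random.random() - 0.5)
            for a, b in zip(x, t)]
```

Because each firefly evaluates a single target instead of every brighter firefly, one generation costs O(n) attractions rather than O(n^2), which matches the claimed reduction in evaluation cost.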


2020 ◽  
Vol 4 (1) ◽  
pp. 127-142
Author(s):  
Kaiwen Chang ◽  
Bruno Figliuzzi

Abstract. In this article, we present a fast-marching based superpixel (FMS) algorithm for generating partitions of images. The idea behind the algorithm is to draw an analogy between waves propagating in a heterogeneous medium and regions growing on an image at a rate depending on the local color and texture. The FMS algorithm is evaluated on the Berkeley Segmentation Dataset 500. It yields results in terms of boundary adherence that are slightly better than those obtained with similar approaches, including the Simple Linear Iterative Clustering, Eikonal-based region growing for efficient clustering, and Iterative Spanning Forest superpixel segmentation algorithms. An interesting feature of the proposed algorithm is that it can take texture information into account when computing the superpixel partition. We illustrate the benefit of adding texture information on a specific set of images obtained by recombining texture patches extracted from images representing stripes, originally constructed by Giraud et al. [20]. On this dataset, our approach works significantly better than color-based superpixel algorithms.
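The wave-propagation analogy can be made concrete with a priority-queue front propagation, which is the discrete workhorse behind fast-marching schemes. The sketch below is a simplified Dijkstra-like version on a 4-connected grid, not the paper's actual Eikonal solver: each seed launches a wave whose arrival time accumulates 1/speed per step, and a pixel is labelled by the first wave to reach it.

```python
import heapq

def grow_regions(speed, seeds):
    """Label pixels by the first-arriving wave (sketch of fast-marching-style
    region growing). speed: 2-D list of positive local propagation speeds
    (high speed = homogeneous colour/texture); seeds: list of (y, x)."""
    h, w = len(speed), len(speed[0])
    label = [[-1] * w for _ in range(h)]
    pq = [(0.0, y, x, k) for k, (y, x) in enumerate(seeds)]
    heapq.heapify(pq)
    while pq:
        t, y, x, k = heapq.heappop(pq)
        if label[y][x] != -1:
            continue                      # already claimed by an earlier wave
        label[y][x] = k
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and label[ny][nx] == -1:
                heapq.heappush(pq, (t + 1.0 / speed[ny][nx], ny, nx, k))
    return label
```

Making `speed` drop near strong colour or texture gradients slows the fronts there, so region boundaries settle on image edges, which is the intuition behind the boundary-adherence results reported above.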


2020 ◽  
Vol 2020 (4) ◽  
pp. 76-1-76-7
Author(s):  
Swaroop Shankar Prasad ◽  
Ofer Hadar ◽  
Ilia Polian

Image steganography can have legitimate uses, for example, augmenting an image with a watermark for copyright reasons, but it can also be utilized for malicious purposes. We investigate the detection of malicious steganography using neural network-based classification when images are transmitted through a noisy channel. Noise makes detection harder because the classifier must not only detect perturbations in the image but also decide whether they are due to malicious steganographic modifications or to natural noise. Our results show that reliable detection is possible even for state-of-the-art steganographic algorithms that insert stego bits without affecting an image’s visual quality. The detection accuracy is high (above 85%) if the payload, or the amount of steganographic content in an image, exceeds a certain threshold. At the same time, noise critically affects the steganographic information being transmitted, both through desynchronization (destruction of the information about which bits of the image carry steganographic content) and by flipping these bits themselves. This forces the adversary to use a redundant encoding with a substantial number of error-correction bits for reliable transmission, making detection feasible even for small payloads.


Author(s):  
Bahador Bahrami

Evidence for and against the idea that “two heads are better than one” is abundant. This chapter considers the contextual conditions and social norms that predict madness or wisdom of crowds, to identify the adaptive value of collective decision-making beyond increased accuracy. Similarity of competence among members of a collective impacts collective accuracy, but interacting individuals often seem to operate under the assumption that they are equally competent, even when direct evidence suggests the opposite and dyadic performance suffers. Cross-cultural data from Iran, China, and Denmark support this assumption of similarity (i.e., equality bias) as a sensible heuristic that works most of the time and simplifies social interaction. Crowds often trade off accuracy for other collective benefits such as diffusion of responsibility and reduction of regret. Consequently, two heads are sometimes better than one, but no one holds the collective accountable, not even for the most disastrous of outcomes.


2021 ◽  
Vol 20 (3) ◽  
pp. 1-25
Author(s):  
Elham Shamsa ◽  
Alma Pröbstl ◽  
Nima TaheriNejad ◽  
Anil Kanduri ◽  
Samarjit Chakraborty ◽  
...  

Smartphone users require high Battery Cycle Life (BCL) and high Quality of Experience (QoE) during their usage. These two objectives can conflict, depending on the user's preference at run time. Finding the best trade-off between QoE and BCL requires an intelligent resource management approach that considers and learns user preference at run time. Current approaches focus on one of these two objectives and neglect the other, limiting their efficiency in meeting users’ needs. In this article, we present UBAR, User- and Battery-aware Resource management, which considers dynamic workload, user preference, and the user's plug-in/out pattern at run time to provide a suitable trade-off between BCL and QoE. UBAR personalizes this trade-off by learning the user’s habits and using them to satisfy QoE, while considering battery temperature and State of Charge (SOC) patterns to maximize BCL. The evaluation results show that UBAR achieves a 10% to 40% improvement over existing state-of-the-art approaches.
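The core decision UBAR faces at each step can be illustrated with a toy scalarization: among candidate resource configurations, pick the one maximizing a preference-weighted combination of predicted QoE and BCL impact. All names and the linear weighting here are our illustration; the article's actual controller learns the preference and considers temperature and SOC dynamics.

```python
def choose_config(configs, user_pref):
    """Toy run-time selection sketch (hypothetical names, not UBAR's API).

    configs: list of dicts with predicted 'qoe' and 'bcl' scores in [0, 1];
    user_pref in [0, 1]: 1 = QoE matters most, 0 = battery matters most."""
    return max(configs,
               key=lambda c: user_pref * c["qoe"] + (1 - user_pref) * c["bcl"])
```

As `user_pref` shifts (e.g. as the learned plug-in pattern predicts a charger soon), the same candidate set yields a different operating point, which is the personalization effect described above.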


Author(s):  
Alexandru-Lucian Georgescu ◽  
Alessandro Pappalardo ◽  
Horia Cucu ◽  
Michaela Blott

Abstract. The last decade brought significant advances in automatic speech recognition (ASR) thanks to the evolution of deep learning methods. ASR systems evolved from pipeline-based systems, which modeled hand-crafted speech features with probabilistic frameworks and generated phone posteriors, to end-to-end (E2E) systems, which translate the raw waveform directly into words using one deep neural network (DNN). Transcription accuracy greatly increased, leading to ASR technology being integrated into many commercial applications. However, few of the existing ASR technologies are suitable for integration in embedded applications, due to their hard constraints on computing power and memory usage. This overview paper serves as a guided tour through the recent literature on speech recognition and compares the most popular ASR implementations. The comparison emphasizes the trade-off between ASR performance and hardware requirements, to help decision makers choose the system which best fits their embedded application. To the best of our knowledge, this is the first study to provide this kind of trade-off analysis for state-of-the-art ASR systems.


Author(s):  
Wenchao Du ◽  
Hu Chen ◽  
Hongyu Yang ◽  
Yi Zhang

Abstract. Generative adversarial networks (GANs) have been applied to low-dose CT images to predict normal-dose CT images. However, undesired artifacts and spurious details introduce uncertainty into clinical diagnosis. In order to improve visual quality while suppressing noise, in this paper we study the two key components of deep learning based low-dose CT (LDCT) restoration models, the network architecture and the adversarial loss, and propose a disentangled noise suppression method based on GAN (DNSGAN) for LDCT. Specifically, a generator network containing noise suppression and structure recovery modules is proposed. Furthermore, a multi-scaled relativistic adversarial loss is introduced to preserve the finer structures of generated images. Experiments on simulated and real LDCT datasets show that the proposed method can effectively remove noise while recovering finer details, and provides better visual perception than other state-of-the-art methods.
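The abstract does not spell out the multi-scaled relativistic loss; the sketch below shows only the single-scale relativistic average discriminator loss (the RaGAN form such losses typically build on), where the critic judges whether real images are more realistic than the fakes on average rather than realistic in absolute terms. Treat it as background, not the paper's exact objective.

```python
import math

def relativistic_avg_d_loss(real_scores, fake_scores):
    """Relativistic average discriminator loss (background sketch):
    -E[log sigmoid(C(x_r) - mean C(x_f))] - E[log(1 - sigmoid(C(x_f) - mean C(x_r)))].
    Inputs are raw (pre-sigmoid) critic scores."""
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    mr = sum(real_scores) / len(real_scores)
    mf = sum(fake_scores) / len(fake_scores)
    return (-sum(math.log(sig(r - mf)) for r in real_scores) / len(real_scores)
            - sum(math.log(1 - sig(f - mr)) for f in fake_scores) / len(fake_scores))
```

A "multi-scaled" variant would evaluate such a term on critic scores computed at several image resolutions and sum them, encouraging realism of both coarse structures and fine details.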


Author(s):  
Mingliang Xu ◽  
Qingfeng Li ◽  
Jianwei Niu ◽  
Hao Su ◽  
Xiting Liu ◽  
...  

Quick response (QR) codes are usually scanned in different environments, so they must be robust to variations in illumination, scale, coverage, and camera angle. Aesthetic QR codes improve visual quality, but subtle changes in their appearance may cause scanning failure. In this article, a new method to generate scanning-robust aesthetic QR codes is proposed, based on a module-based scanning probability estimation model that can effectively balance the trade-off between visual quality and scanning robustness. Our method locally adjusts the luminance of each module by estimating its probability of successful sampling. The approach adopts a hierarchical, coarse-to-fine strategy to enhance the visual quality of aesthetic QR codes, sequentially generating three codes: a binary aesthetic QR code, a grayscale aesthetic QR code, and the final color aesthetic QR code. Our approach can also be used to create QR codes with different visual styles by adjusting some initialization parameters. User surveys and decoding experiments were used to evaluate our method against state-of-the-art algorithms, indicating that the proposed approach performs excellently in terms of both visual quality and scanning robustness.
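The module-luminance adjustment can be illustrated with a toy probability model. Everything here is an assumption for illustration, not the paper's estimator: Gaussian camera noise with standard deviation `sigma`, a single global binarization threshold, and a fixed adjustment step.

```python
import math

def sampling_success_prob(lum, is_dark, thresh=128.0, sigma=20.0):
    """Probability that a sampled module luminance lands on the correct side
    of the binarization threshold, under assumed Gaussian noise."""
    p_below = 0.5 * (1.0 + math.erf((thresh - lum) / (sigma * math.sqrt(2.0))))
    return p_below if is_dark else 1.0 - p_below

def adjust_luminance(lum, is_dark, p_min=0.99, step=2.0):
    """Nudge a module's luminance toward its nominal polarity just until the
    estimated sampling probability reaches p_min, preserving as much of the
    aesthetic rendering as possible (toy version of the local adjustment)."""
    while sampling_success_prob(lum, is_dark) < p_min and 0.0 < lum < 255.0:
        lum += -step if is_dark else step
        lum = min(255.0, max(0.0, lum))
    return lum
```

The key property is that modules already decoded reliably are left untouched, while only marginal modules are darkened or brightened, which is how a per-module probability model trades visual quality against scanning robustness.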


2020 ◽  
pp. 1-16
Author(s):  
Meriem Khelifa ◽  
Dalila Boughaci ◽  
Esma Aïmeur

The Traveling Tournament Problem (TTP) is concerned with finding a double round-robin tournament schedule that minimizes the total distance traveled by the teams. It has attracted significant interest recently, since a favorable TTP schedule can result in significant savings for the league. This paper proposes an original evolutionary algorithm for TTP. We first propose a quick and effective constructive algorithm to build a Double Round Robin Tournament (DRRT) schedule with low travel cost. We then describe an enhanced genetic algorithm with a new crossover operator to improve the travel cost of the generated schedules. A new heuristic for efficiently ordering the scheduled rounds is also proposed, leading to a significant enhancement in schedule quality. The overall method is evaluated on publicly available standard benchmarks and compared with other techniques for TTP and the Unconstrained Traveling Tournament Problem (UTTP). Computational experiments show that the proposed approach builds solutions comparable to other state-of-the-art approaches, or better than the current best solutions on UTTP. Further, our method provides new valuable solutions to some unsolved UTTP instances and outperforms prior methods on all US National League (NL) instances.
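For readers unfamiliar with DRRT schedules, the classic circle-method construction below shows the kind of feasible schedule the paper's constructive algorithm starts from; the paper's version additionally targets low travel cost, which this plain sketch does not attempt.

```python
def double_round_robin(teams):
    """Circle-method double round robin (sketch): every pair of teams meets
    twice, once at each team's home venue. Returns a list of rounds, each a
    list of (home, away) pairs. Assumes an even number of teams."""
    n = len(teams)
    assert n % 2 == 0, "circle method assumes an even number of teams"
    rot = list(teams)
    first_leg = []
    for r in range(n - 1):
        pairs = [(rot[i], rot[n - 1 - i]) for i in range(n // 2)]
        # Alternate venues by round so home games are roughly balanced.
        first_leg.append([(h, a) if (r + i) % 2 == 0 else (a, h)
                          for i, (h, a) in enumerate(pairs)])
        rot = [rot[0]] + [rot[-1]] + rot[1:-1]   # fix one team, rotate the rest
    # Second leg mirrors the first with venues swapped.
    second_leg = [[(a, h) for (h, a) in rnd] for rnd in first_leg]
    return first_leg + second_leg
```

A TTP solver then permutes rounds and relabels teams on top of such a feasible skeleton, which is where the paper's crossover operator and round-ordering heuristic do their work.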

