Fast Algorithm for Selective Image Segmentation Model

2018 ◽  
Vol 7 (4.33) ◽  
pp. 41
Author(s):  
Abdul K Jumaat ◽  
Ke Chen

Selective image segmentation models aim to separate a specific object from its surroundings. To solve such a model, the common practice for dealing with its non-differentiable term is to approximate the original functional. While this approach yields successful segmentation results, the segmentation process can be slow. In this paper, we show how to solve the model without approximation using Chambolle’s projection algorithm. Numerical tests show that good visual segmentation quality is obtained in fast computational time.
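Although the paper applies Chambolle’s projection algorithm to a 2-D selective segmentation functional, the core iteration is easiest to see on the 1-D total-variation (ROF) problem. The sketch below is a minimal, hypothetical illustration of that iteration, not the authors’ implementation; the function name, step size `tau`, and iteration count are illustrative choices.

```python
def chambolle_tv_1d(f, lam=1.0, tau=0.25, n_iter=200):
    """Minimise ||u - f||^2 / 2 + lam * TV(u) via Chambolle's dual projection.

    The dual variable p lives on the edges between samples and is kept
    inside the unit ball by the normalised update below.
    """
    n = len(f)

    def div(p):
        # Adjoint of the forward difference: (div p)_i = p_i - p_{i-1}.
        return [(p[i] if i < n - 1 else 0.0) - (p[i - 1] if i > 0 else 0.0)
                for i in range(n)]

    p = [0.0] * (n - 1)
    for _ in range(n_iter):
        d = div(p)
        # Forward difference of (div p - f / lam).
        g = [(d[i + 1] - f[i + 1] / lam) - (d[i] - f[i] / lam)
             for i in range(n - 1)]
        # Projected gradient step; the normalisation keeps |p_i| <= 1.
        p = [(p[i] + tau * g[i]) / (1.0 + tau * abs(g[i]))
             for i in range(n - 1)]
    d = div(p)
    return [f[i] - lam * d[i] for i in range(n)]
```

Because div p telescopes to zero, the mean of the signal is preserved exactly, while in practice the total variation of the output is reduced, which is what makes the projection a viable solver without smoothing the non-differentiable TV term.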

2020 ◽  
Vol 6 (2) ◽  
pp. 141
Author(s):  
Luthfi Maulana ◽  
Yusuf Gladiensyah Bihanda ◽  
Yuita Arum Sari

Image segmentation is a predefined step in image processing for isolating a specific object. One of the problems in food recognition and food estimation is the poor quality of image segmentation results. This paper presents a comparative study of color space and color channel selection in the segmentation of food images. Building on previous research on image segmentation for food leftover estimation, this paper proposes a different approach to selecting the color space and color channel based on the Intersection over Union (IoU) and Dice scores over the whole dataset. A color transformation is required, and five color spaces were used: CIELAB, HSV, YUV, YCbCr, and HLS. The results show that the A channel in CIELAB and the H channel in HLS produce better segmentation than the other color channels, with both achieving a Dice score of 5 (the highest score). It is concluded that this color channel selection can be embedded in the Automatic Food Leftover Estimation (AFLE) algorithm.
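As a hedged illustration of the channel-selection idea, the Python standard library’s `colorsys` module can extract the H channel of HLS (one of the two channels the paper found best), and the Dice score can be computed directly from binary masks. The function names below are illustrative, not the paper’s code.

```python
import colorsys

def h_channel_hls(rgb_pixels):
    """Map a list of (r, g, b) tuples (0-255) to the H channel of HLS in [0, 1)."""
    return [colorsys.rgb_to_hls(r / 255, g / 255, b / 255)[0]
            for r, g, b in rgb_pixels]

def dice_score(pred, truth):
    """Dice coefficient between two equal-length boolean masks."""
    inter = sum(1 for a, b in zip(pred, truth) if a and b)
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0
```

A segmentation pipeline in this spirit would threshold the extracted channel into a mask and pick the channel whose mask scores highest against ground truth over the dataset.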


In this paper a new image steganographic technique is proposed which is capable of hiding data and produces a stego image that is visually indistinguishable from the original image to the human eye. To estimate the contrast and smoothness of pixels, we examine the relation between neighboring pixels. Our method first arranges the pixels of a block in ascending order, takes the highest pixel value as the value common to the other two pixels, and then applies the pixel value differencing (PVD) method. The PVD technique is used to hide the secret data in each pixel block. The two overlapping blocks are readjusted to obtain the modified three pixel components, and the new stego pixel block is then calculated. The same procedure is repeated taking the middle and the lowest pixel as the common pixel. The comparison shows that taking the highest pixel value as the common one increases the data hiding capacity. The embedding capacity of the cover image is further increased by the pixel-block overlapping mechanism. The method has been tested on a set of images and maintains the visual quality of the image.
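The technique builds on the classic two-pixel PVD scheme of Wu and Tsai. Below is a minimal sketch of that base scheme only, not the paper’s three-pixel overlapping variant; the range table is the conventional one, the function names are illustrative, and pixel-boundary clamping is omitted for brevity.

```python
RANGES = [(0, 7), (8, 15), (16, 31), (32, 63), (64, 127), (128, 255)]

def embed_pvd(p1, p2, bits):
    """Embed a bit string into one pixel pair with basic PVD.
    Returns the new pair and the number of bits consumed."""
    d = p2 - p1
    lo, hi = next((l, h) for l, h in RANGES if l <= abs(d) <= h)
    n = (hi - lo + 1).bit_length() - 1          # capacity of this range in bits
    b = int(bits[:n].ljust(n, "0"), 2)          # next n secret bits as an integer
    d_new = lo + b if d >= 0 else -(lo + b)     # new difference, sign preserved
    m = d_new - d
    # Split the required change between the two pixels.
    p1_new = p1 - (m // 2)
    p2_new = p2 + (m - m // 2)
    return p1_new, p2_new, n

def extract_pvd(p1, p2):
    """Recover the embedded bits from a stego pixel pair."""
    d = abs(p2 - p1)
    lo, hi = next((l, h) for l, h in RANGES if l <= d <= h)
    n = (hi - lo + 1).bit_length() - 1
    return format(d - lo, f"0{n}b")
```

Smooth regions (small differences) fall into narrow ranges and carry fewer bits, while edges carry more, which is the property the abstract exploits when choosing which pixel of the sorted triple to share between overlapping blocks.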


Author(s):  
M. SIVAGAMI ◽  
T. REVATHI

Image segmentation is the basis for computer vision and object recognition. The watershed transform is one of the common methods for region-based segmentation, but previous watershed methods result in over-segmentation. In this paper we present a novel method for efficient image segmentation using bit-plane slicing and a marker-controlled watershed. The bit-plane slicing step produces the sliced image by taking the most significant bit of the image as its input. The output of the bit-plane slicing step is then used to produce the gradient image. The watershed segmentation algorithm is applied to the average of the marker image and the gradient image to obtain an efficient segmentation result. Experimental results show that the proposed method reduces memory consumption and computation.
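The bit-plane slicing step can be sketched in a few lines: extracting plane 7 keeps only the most significant bit of each grey value, which is the binary image the abstract feeds into the gradient computation. This is an illustrative sketch, not the authors’ implementation.

```python
def bit_plane(image, plane=7):
    """Extract one bit plane from a greyscale image given as rows of 0-255 values.

    plane=7 selects the most significant bit, so the output is a binary
    image separating bright regions (>= 128) from dark ones.
    """
    return [[(px >> plane) & 1 for px in row] for row in image]
```

Because only one bit per pixel survives, both the gradient computation and the subsequent watershed operate on far less data, which is consistent with the reported reduction in memory and computation.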


2011 ◽  
Vol 28 (2) ◽  
pp. 93 ◽  
Author(s):  
Lamia Jaafar Belaid ◽  
Walid Mourou

The goal of this work is to present a new method for image segmentation using mathematical morphology. The approach is based on the watershed transformation. In order to avoid over-segmentation, we propose to adapt the topological gradient method. The watershed transformation combined with a fast algorithm based on the topological gradient approach gives good results. The numerical tests illustrate the efficiency of our approach for image segmentation.


2020 ◽  
Vol 26 (11) ◽  
pp. 2567-2593
Author(s):  
M.V. Pomazanov

Subject. The study addresses the improvement of risk management efficiency and the quality of lending decisions made by banks. Objectives. The aim is to present the bank management with a fair algorithm for motivating risk management on the one hand, and the credit management (business) on the other. Within the framework of the common goal of maximizing risk-adjusted income from loans, this algorithm provides guidelines for the ‘risk management’ and ‘business’ functions on how to improve individual and overall efficiency. Methods. The study employs discriminant analysis, type I and II errors, Lorenz curve modeling, statistical analysis, and economic modeling. Results. The paper offers a mechanism for assessing the quality of risk management decisions as opposed to (or in support of) decisions of the lending business when approving transactions. The mechanism rests on the approach of stating type I and II errors and the corresponding classical metric, the Gini coefficient. On the ‘business’ side, the mechanism monitors the improvement or deterioration of the indicator of changes in losses in comparison with the market average. Conclusions. The study substantiates stimulating ‘rules of the game’ between the ‘business’ and ‘risk management’ to improve the efficiency of the entire business and to optimize interactions within the framework of internal competition. It presents mathematical tools to calculate the corresponding efficiency indicators of internally competing entities.
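The Gini coefficient invoked here is the classical accuracy ratio of a rating model, obtainable as 2·AUC − 1 over all (defaulter, non-defaulter) pairs. The sketch below is a generic illustration of that metric, not the author’s tooling; it assumes higher scores mean higher risk.

```python
def gini(scores, defaults):
    """Gini coefficient (accuracy ratio) of a rating model.

    Computed as 2 * AUC - 1, where AUC is the fraction of
    (defaulter, non-defaulter) pairs ranked correctly by the score;
    ties count as half a correct ranking.
    """
    bad = [s for s, d in zip(scores, defaults) if d]
    good = [s for s, d in zip(scores, defaults) if not d]
    wins = sum(1.0 if b > g else 0.5 if b == g else 0.0
               for b in bad for g in good)
    auc = wins / (len(bad) * len(good))
    return 2 * auc - 1
```

A Gini of 1 means the model separates defaulters perfectly, 0 means it is no better than random, and negative values mean the ranking is inverted; comparing this metric before and after an approval decision is the kind of type I/II error accounting the mechanism builds on.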


2015 ◽  
Vol 60 (1) ◽  
pp. 81-102
Author(s):  
Kerstin Thomas

Kerstin Thomas revaluates the famous dispute between Martin Heidegger, Meyer Schapiro, and Jacques Derrida concerning a painting of shoes by Vincent Van Gogh. The starting point for this dispute was the description and analysis of things and artworks developed in Heidegger’s essay, “The Origin of the Work of Art”. In discussing Heidegger’s account, the art historian Meyer Schapiro’s main point of critique concerned Heidegger’s claim that the artwork reveals the truth of equipment by depicting the shoes of a peasant woman and thereby showing her world. Schapiro sees a striking paradox in Heidegger’s claim for truth, which is based on a specific object in a specific artwork while at the same time following a rather metaphysical idea of the artwork. Kerstin Thomas proposes an interpretation that exceeds the common confrontation of philosophy versus art history by focussing on the respective notion of facticity at stake in the theoretical accounts of both thinkers. Schapiro accuses Heidegger of a lack of concreteness, which he sees as the basis for every truth claim about objects. Thomas understands Schapiro’s objections as motivated by this demand for a facticity which includes not only the work of art but also the investigator in his concrete historical perspective. Truth claims under such conditions of facticity are always relative to historical knowledge, open to critical intervention, and therefore necessarily contingent. Following Thomas, Schapiro’s critique shows that despite his intention of giving the work of art back its autonomy, Heidegger could be accused of achieving quite the opposite: through the abstraction of the concrete, the factual, and the given to the type, he actually sets the self and the realm of knowledge of the creator as absolute, and not the object of his knowledge. Instead, she argues for a revaluation of Schapiro’s position that recognizes the arbitrariness of the artwork, by introducing the notion of factuality as formulated by Quentin Meillassoux.
Understood as an exchange between the artist and the object in its concrete material quality, as well as with the beholder, the truth of painting can only be shown as radically contingent. Thomas argues that the critical intervention of Derrida, who discusses both positions anew, is motivated precisely by a recognition of the contingent character of object, artwork, and interpretation. His deconstructive analysis can be understood as recognition of the dynamic character of things, and this, she shows with Meillassoux, is exactly their character of facticity, or factuality.


Author(s):  
Junyoung Yun ◽  
Hong-Chang Shin ◽  
Gwangsoon Lee ◽  
Jong-Il Park

Author(s):  
Megha Chhabra ◽  
Manoj Kumar Shukla ◽  
Kiran Kumar Ravulakollu

Latent fingerprints are unintentional finger-skin impressions left as ridge patterns at crime scenes. A major challenge in latent fingerprint forensics is the poor quality of the image lifted from the crime scene. Forensic investigators are in constant search of effective new technologies for capturing and processing low-quality images. The accuracy of the results depends on the quality of the image captured initially, the metrics used to assess that quality, and the level of enhancement subsequently required. Images collected by low-quality scanners, unstructured background noise, poor ridge quality, and overlapping structured noise result in the detection of false minutiae and hence reduce the recognition rate. Traditionally, image segmentation and enhancement are partly done manually with the help of highly skilled experts. With automated systems, images of varying quality can be investigated faster. This survey provides a comparative study of the segmentation techniques available for latent fingerprint forensics.

