Thresholding based on Grey Levels, Gradient Magnitude and Spatial Correlation

Image segmentation has gained significant importance in recent years. The goal of segmentation is to partition an image into distinct regions, each containing pixels with similar attributes. Several image segmentation techniques exist based on thresholding and clustering. Thresholding-based segmentation on its own typically does not capture objects and boundaries (lines, curves, etc.) in an image. To boost the performance of thresholding-based strategies, a strategy is designed that integrates the spatial information between pixels. The proposed strategy uses the pixel grey level, gradient magnitude, and grey-level spatial correlation within a local region to construct novel two-dimensional histograms, referred to as the GLGM and GLSC histograms. The technique is validated by segmenting many real-world images. Experimental results show that this method outperforms several existing thresholding strategies.
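To make the grey-level/spatial-correlation idea concrete, the following is a minimal sketch of one way to build such a 2D histogram (gray value versus the number of similar neighbours). It is not the paper's exact construction; the 8-bit assumption, window size, and similarity threshold are illustrative.

```python
# Hedged sketch of a GLSC-style 2D histogram: H[g, k] counts pixels with
# gray level g whose window contains k neighbours within `similarity`
# gray levels of the centre. Assumes an 8-bit grayscale image; the 3x3
# window and threshold of 10 are illustrative assumptions.
import numpy as np
from scipy.ndimage import generic_filter

def glsc_histogram(image, window=3, similarity=10):
    image = image.astype(np.int32)

    def similar_count(patch):
        centre = patch[len(patch) // 2]
        # count neighbours (excluding the centre) close in gray level
        return np.sum(np.abs(patch - centre) <= similarity) - 1

    k_map = generic_filter(image, similar_count, size=window)
    hist = np.zeros((256, window * window), dtype=np.int64)
    np.add.at(hist, (image.ravel(), k_map.ravel().astype(np.int64)), 1)
    return hist
```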

2013 ◽  
Vol 712-715 ◽  
pp. 2349-2353
Author(s):  
Hong Lan ◽  
Shao Bin Jin

The Fuzzy C-Means (FCM) clustering algorithm plays an important role in image segmentation, but it is sensitive to noise because it does not take spatial information into account. To address this problem, this paper presents an improved suppressed FCM algorithm based on the pixels and the spatial neighbourhood information of the image. The algorithm combines a two-dimensional histogram with the suppressed FCM algorithm. First, a two-dimensional histogram is constructed instead of a one-dimensional histogram, which better distinguishes the distributions of object and background in noisy images. Then, the initial clustering is determined from the two-dimensional histogram. Finally, a new way to determine the suppression factor is provided, and the improved FCM algorithm is used to perform the segmentation. Experimental results show that the improved algorithm effectively increases clustering speed and achieves better segmentation results.
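The suppression step itself is easy to illustrate. Below is a minimal sketch of the classic suppressed-FCM membership update, with a fixed, illustrative suppression factor; the paper proposes its own rule for choosing this factor, which is not reproduced here.

```python
# Hedged sketch: the membership "suppression" step used in suppressed FCM.
# After the standard FCM update, the winning cluster's membership is boosted
# and all others are scaled down by a factor alpha in (0, 1).
import numpy as np

def suppress_memberships(u, alpha=0.5):
    """u: (n_clusters, n_pixels) fuzzy membership matrix whose columns sum to 1."""
    winners = np.argmax(u, axis=0)            # winning cluster for each pixel
    cols = np.arange(u.shape[1])
    u_new = alpha * u                         # scale down all memberships
    u_new[winners, cols] = 1.0 - alpha + alpha * u[winners, cols]  # boost the winner
    return u_new                              # columns still sum to 1
```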


Author(s):  
Mona E. Elbashier ◽  
Suhaib Alameen ◽  
Caroline Edward Ayad ◽  
Mohamed E. M. Gar-Elnabi

This study aims to characterize the pancreas area into head, body, and tail using the Gray Level Run Length Matrix (GLRLM) and to extract classification features from CT images. The GLRLM technique includes eleven features. To find the gray-level distribution in the CT images, the GLRLM features extracted from the images are complemented with runs of gray levels in pixels, and the size distribution of the sub-patterns is estimated. The images were analysed with Interactive Data Language (IDL) software to measure their grey-level distribution. The results show that the Gray Level Run Length Matrix features give a classification accuracy of 89.2% for the pancreas head, 93.6% for the body, and 93.5% for the tail, with an overall classification accuracy of 92.0% for the pancreas area. These relationships are stored in a texture dictionary that can later be used to automatically annotate new CT images with the appropriate pancreas area names.
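For orientation, the sketch below computes a run-length matrix along the horizontal direction only and one classic GLRLM feature (short-run emphasis). The gray-level quantisation is an illustrative assumption, and the study's full set of eleven features is not reproduced.

```python
# Hedged sketch: horizontal gray-level run-length matrix and short-run emphasis.
import numpy as np

def glrlm_horizontal(image, levels=16):
    """Return R[g, r-1]: number of horizontal runs of length r at gray level g."""
    q = np.floor(image.astype(np.float64) / 256.0 * levels).astype(int)
    q = np.clip(q, 0, levels - 1)             # quantise an 8-bit image to `levels` bins
    R = np.zeros((levels, q.shape[1]), dtype=np.int64)
    for row in q:
        start = 0
        for i in range(1, len(row) + 1):
            if i == len(row) or row[i] != row[start]:
                R[row[start], i - start - 1] += 1   # close a run of length i - start
                start = i
    return R

def short_run_emphasis(R):
    runs = np.arange(1, R.shape[1] + 1)
    return float((R / runs**2).sum() / R.sum())   # weights short runs more heavily
```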


2010 ◽  
Vol 36 (7) ◽  
pp. 951-959 ◽  
Author(s):  
Bo LIU ◽  
Jian-Hua HUANG ◽  
Xiang-Long TANG ◽  
Jia-Feng LIU ◽  
Ying-Tao ZHANG

Data ◽  
2020 ◽  
Vol 6 (1) ◽  
pp. 1
Author(s):  
Ahmed Elmogy ◽  
Hamada Rizk ◽  
Amany M. Sarhan

In data mining, outlier detection is a major challenge, as it plays an important role in many applications such as medical data, image processing, fraud detection, intrusion detection, and so forth. An extensive variety of clustering-based approaches have been developed to detect outliers. However, they are by nature time consuming, which restricts their use in real-time applications. Furthermore, outlier detection requests are handled one at a time, meaning that each request is initiated individually with a particular set of parameters. In this paper, the first on-the-fly clustering-based outlier detection framework, OFCOD (On the Fly Clustering Based Outlier Detection), is presented. OFCOD enables analysts to detect outliers efficiently on request, even within huge datasets. The proposed framework has been tested and evaluated using two real-world datasets with different features and applications: one with 699 records and another with five million records. The experimental results show that the proposed framework outperforms other existing approaches across several evaluation metrics.
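For context only, a generic clustering-based outlier detector (not the OFCOD framework itself) can be sketched as follows: points that the clustering step leaves unassigned are reported as outliers. The eps and min_samples values are illustrative.

```python
# Hedged sketch: clustering-based outlier detection baseline using DBSCAN.
# Points labelled -1 (noise, i.e. belonging to no cluster) are treated as outliers.
import numpy as np
from sklearn.cluster import DBSCAN

def clustering_outliers(X, eps=0.5, min_samples=5):
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
    return np.where(labels == -1)[0]          # indices of detected outliers

# Example: two tight clusters plus one far-away point flagged as an outlier.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1], [0.05, 0.05],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1], [5.1, 5.1], [5.05, 5.05],
              [30.0, 30.0]])
print(clustering_outliers(X, eps=0.5, min_samples=4))   # -> [10]
```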


2021 ◽  
Vol 15 (3) ◽  
pp. 1-33
Author(s):  
Wenjun Jiang ◽  
Jing Chen ◽  
Xiaofei Ding ◽  
Jie Wu ◽  
Jiawei He ◽  
...  

In online systems, including e-commerce platforms, many users rely on the reviews or comments generated by previous consumers for decision making, yet they have limited time to deal with many reviews. Therefore, a review summary that contains all the important features in user-generated reviews is desirable. In this article, we study how to generate a comprehensive review summary from a large number of user-generated reviews. This can be implemented by text summarization, which mainly has two types of approaches: extractive and abstractive. Both can deal with supervised and unsupervised scenarios, but the former may generate redundant and incoherent summaries, while the latter can avoid redundancy but usually handles only short sequences. Moreover, both approaches may neglect sentiment information. To address these issues, we propose comprehensive Review Summary Generation frameworks for the supervised and unsupervised scenarios. We design two different preprocessing models, re-ranking and selecting, to identify the important sentences while keeping users' sentiment from the original reviews. These sentences can then be used to generate review summaries with text summarization methods. Experimental results on seven real-world datasets (Idebate, Rotten Tomatoes, Amazon, Yelp, and three unlabelled product review datasets in Amazon) demonstrate that our work performs well in review summary generation. Moreover, the re-ranking and selecting models show different characteristics.
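As a rough illustration of an extractive "rank and select" step of the kind such preprocessing builds on, the sketch below scores sentences by TF-IDF similarity to the review collection as a whole and keeps the top-k. It is a generic baseline, not the authors' re-ranking or selecting models, and it ignores the sentiment component.

```python
# Hedged sketch: extractive sentence selection by TF-IDF centrality.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def select_sentences(sentences, k=3):
    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(sentences)                      # one row per sentence
    centroid = np.asarray(X.mean(axis=0))                 # centre of the review set
    scores = cosine_similarity(X, centroid).ravel()
    ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
    return [sentences[i] for i in ranked[:k]]

reviews = ["Battery life is excellent.", "The screen cracked after a week.",
           "Great battery and fast charging.", "Shipping was slow."]
print(select_sentences(reviews, k=2))
```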


2020 ◽  
Vol 146 ◽  
pp. 03004
Author(s):  
Douglas Ruth

The most influential parameter on the behavior of two-component flow in porous media is “wettability”. When wettability is being characterized, the most frequently used parameter is the “contact angle”. When a fluid-drop is placed on a solid surface, in the presence of a second, surrounding fluid, the fluid-fluid surface contacts the solid-surface at an angle that is typically measured through the fluid-drop. If this angle is less than 90°, the fluid in the drop is said to “wet” the surface. If this angle is greater than 90°, the surrounding fluid is said to “wet” the surface. This definition is universally accepted and appears to be scientifically justifiable, at least for a static situation where the solid surface is horizontal. Recently, this concept has been extended to characterize wettability in non-static situations using high-resolution, two-dimensional digital images of multi-component systems. Using simple thought experiments and published experimental results, many of them decades old, it will be demonstrated that contact angles are not primary parameters – their values depend on many other parameters. Using these arguments, it will be demonstrated that contact angles are not the cause of wettability behavior but the effect of wettability behavior and other parameters. The result of this is that the contact angle cannot be used as a primary indicator of wettability except in very restricted situations. Furthermore, it will be demonstrated that even for the simple case of a capillary interface in a vertical tube, attempting to use simply a two-dimensional image to determine the contact angle can result in a wide range of measured values. This observation is consistent with some published experimental results. It follows that contact angles measured in two-dimensions cannot be trusted to provide accurate values and these values should not be used to characterize the wettability of the system.
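For reference, the static textbook relation behind the contact angle (Young's equation, standard form, not taken from the article) already presents the angle as a quantity derived from three interfacial tensions rather than a primary parameter, which is consistent with the argument above.

```latex
% Young's relation for a static drop on an ideal horizontal solid.
% gamma_SV, gamma_SL, gamma_LV are the solid/surrounding-fluid, solid/drop and
% drop/surrounding-fluid interfacial tensions; theta is measured through the drop.
\cos\theta = \frac{\gamma_{SV} - \gamma_{SL}}{\gamma_{LV}}
```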


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Changyong Li ◽  
Yongxian Fan ◽  
Xiaodong Cai

Background: With the development of deep learning (DL), more and more DL-based methods have been proposed and achieve state-of-the-art performance in biomedical image segmentation. However, these methods are usually complex and require the support of powerful computing resources, and in practice it is impractical to rely on huge computing resources in clinical situations. Thus, it is important to develop accurate DL-based biomedical image segmentation methods that run under resource-constrained computing. Results: A lightweight and multiscale network called PyConvU-Net is proposed to work with low-resource computing. In strictly controlled experiments, PyConvU-Net achieves good performance on three biomedical image segmentation tasks while using the fewest parameters. Conclusions: Our experimental results preliminarily demonstrate the potential of the proposed PyConvU-Net for biomedical image segmentation under resource-constrained computing.
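PyConvU-Net replaces standard U-Net convolutions with pyramidal convolutions. The sketch below shows a minimal pyramidal convolution block in PyTorch; the kernel sizes, group counts, and four-way channel split are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch: a pyramidal convolution (PyConv) block. Each branch applies a
# different kernel size (with grouped convolutions to limit parameters) and the
# branch outputs are concatenated along the channel dimension.
import torch
import torch.nn as nn

class PyConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch, kernels=(3, 5, 7, 9), groups=(1, 4, 8, 16)):
        super().__init__()
        split = out_ch // len(kernels)       # output channels per pyramid level
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, split, kernel_size=k, padding=k // 2, groups=g)
            for k, g in zip(kernels, groups)
        ])
        self.bn = nn.BatchNorm2d(split * len(kernels))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # every branch sees the full input but uses a different receptive field
        return self.act(self.bn(torch.cat([b(x) for b in self.branches], dim=1)))

# Example: 64 input channels split across four kernel sizes, spatial size preserved.
block = PyConvBlock(64, 64)
print(block(torch.randn(1, 64, 128, 128)).shape)   # torch.Size([1, 64, 128, 128])
```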

