EFFICIENT QUATERNION MOMENTS FOR REPRESENTATION AND RETRIEVAL OF BIOMEDICAL COLOR IMAGES

2020 ◽  
Vol 32 (05) ◽  
pp. 2050039
Author(s):  
Gaber Hassan ◽  
Khalid M. Hosny ◽  
R. M. Farouk ◽  
Ahmed M. Alzohairy

Biomedical color (BMC) images are widely used by physicians because they support more accurate diagnoses. Hence, new approaches that can represent and retrieve BMC images efficiently are needed. This work proposes two methods to represent BMC images: Quaternion Associated Laguerre Moments (Q_ALMs) and Quaternion Chebyshev Moments (Q_CMs). Q_ALMs and Q_CMs are derived by extending the ALMs and CMs to the quaternion field. ALMs and CMs are discrete orthogonal moments, defined using the Associated Laguerre Polynomials (ALPs) and Chebyshev Polynomials, respectively. Hospitals and medical institutes worldwide create and store large, varied datasets of BMC images during routine clinical practice; hence, the ability to retrieve BMC images correctly is crucial for precise diagnosis and for researchers in the medical sciences. In this study, we therefore also introduce two image retrieval systems for BMC images based on the Q_CMs and Q_ALMs approaches. Our approaches are extensively assessed on two standard benchmark datasets: the LGG Segmentation dataset of brain magnetic resonance (MR) images and the NEMA-CT dataset of computed tomography (CT) images. The performance of the proposed retrieval systems is assessed through three metrics: average retrieval precision (ARP), average retrieval rate (ARR), and F-score. Results show that Q_CMs outperform Q_ALMs in both the representation and the retrieval of BMC images.
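As a hedged illustration of the usual first step in quaternion moment methods, the sketch below encodes each RGB pixel as a pure quaternion q = R·i + G·j + B·k, the standard representation in quaternion color image processing. The moment computation itself (with Associated Laguerre or Chebyshev polynomial kernels) is not reproduced, and the function name is illustrative, not from the paper.

```python
import numpy as np

def rgb_to_pure_quaternion(img):
    """Encode an RGB image as an array of pure quaternions.

    Each pixel (R, G, B) maps to q = 0 + R*i + G*j + B*k, so color is
    treated holistically rather than channel by channel. The returned
    array has shape (H, W, 4), holding the (w, x, y, z) components.
    """
    h, w, _ = img.shape
    q = np.zeros((h, w, 4), dtype=np.float64)
    q[..., 1:] = img.astype(np.float64)  # real part stays zero
    return q
```

For the retrieval metrics, ARP is typically the mean per-query precision over all queries, ARR the mean per-query recall, and the F-score their harmonic mean.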

Author(s):  
Givanna H Putri ◽  
Irena Koprinska ◽  
Thomas M Ashhurst ◽  
Nicholas J C King ◽  
Mark N Read

Abstract Motivation Many ‘automated gating’ algorithms now exist to cluster cytometry and single-cell sequencing data into discrete populations. Comparative algorithm evaluations on benchmark datasets rely either on a single performance metric, or a few metrics considered independently of one another. However, single metrics emphasize different aspects of clustering performance and do not rank clustering solutions in the same order. This underlies the lack of consensus between comparative studies regarding optimal clustering algorithms and undermines the translatability of results onto other non-benchmark datasets. Results We propose the Pareto fronts framework as an integrative evaluation protocol, wherein individual metrics are instead leveraged as complementary perspectives. Judged superior are algorithms that provide the best trade-off between the multiple metrics considered simultaneously. This yields a more comprehensive and complete view of clustering performance. Moreover, by broadly and systematically sampling algorithm parameter values using the Latin Hypercube sampling method, our evaluation protocol minimizes (un)fortunate parameter value selections as confounding factors. Furthermore, it reveals how meticulously each algorithm must be tuned in order to obtain good results, vital knowledge for users with novel data. We exemplify the protocol by conducting a comparative study between three clustering algorithms (ChronoClust, FlowSOM and Phenograph) using four common performance metrics applied across four cytometry benchmark datasets. To our knowledge, this is the first time Pareto fronts have been used to evaluate the performance of clustering algorithms in any application domain. Availability and implementation Implementation of our Pareto front methodology and all scripts and datasets to reproduce this article are available at https://github.com/ghar1821/ParetoBench. Supplementary information Supplementary data are available at Bioinformatics online.
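A minimal sketch of the core Pareto-front computation, assuming each clustering solution is scored on several metrics where larger is better (metrics where smaller is better can be negated first). This is a generic non-dominated filter, not the authors' ParetoBench implementation.

```python
import numpy as np

def pareto_front(scores):
    """Return indices of non-dominated solutions.

    `scores` is an (n_solutions, n_metrics) array, larger is better on
    every metric. A solution is dominated if some other solution is at
    least as good on all metrics and strictly better on at least one.
    """
    scores = np.asarray(scores, dtype=float)
    keep = np.ones(scores.shape[0], dtype=bool)
    for i in range(scores.shape[0]):
        dominates_i = ((scores >= scores[i]).all(axis=1)
                       & (scores > scores[i]).any(axis=1))
        if dominates_i.any():
            keep[i] = False
    return np.flatnonzero(keep)

# Example with two maximize-metrics per solution:
# pareto_front([[0.9, 0.4], [0.8, 0.8], [0.7, 0.9]]) -> [0, 1, 2] (no domination)
# pareto_front([[0.9, 0.4], [0.9, 0.8], [0.7, 0.9]]) -> [1, 2]
```

Rows of `scores` could, for instance, hold metric tuples for each Latin-Hypercube-sampled parameterization of ChronoClust, FlowSOM, or Phenograph; parameterizations on the returned front offer the best available trade-offs.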


2012 ◽  
Vol 263-266 ◽  
pp. 167-170 ◽  
Author(s):  
Xin Wu Chen ◽  
Jing Ge ◽  
Jin Gen Liu

The contourlet transform is superior to the wavelet transform in representing texture information and is sparser in describing geometric structures in digital images, but it lacks shift invariance. The non-subsampled contourlet transform (NSCT) alleviates this shortcoming, making it more suitable for texture, and has been studied for image de-noising, enhancement, and retrieval. Focusing on improving the retrieval rates of existing contourlet-transform retrieval systems, a new texture retrieval algorithm is proposed. In this algorithm, texture information is represented by four statistical estimators, namely the L2-energy, kurtosis, standard deviation, and L1-energy of each sub-band's coefficients in the NSCT domain. Experimental results show that the new algorithm achieves a higher retrieval rate than the combination of standard deviation and energy that is most commonly used today.
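A minimal sketch of the four per-subband statistics, assuming the NSCT subband coefficients have already been produced by a separate decomposition (not shown); the paper's exact normalizations may differ.

```python
import numpy as np
from scipy.stats import kurtosis

def subband_features(coeffs):
    """L1-energy, L2-energy, standard deviation and kurtosis of one subband."""
    c = np.asarray(coeffs, dtype=float).ravel()
    l1 = np.abs(c).mean()          # L1-energy: mean absolute coefficient
    l2 = np.sqrt(np.mean(c ** 2))  # L2-energy: root mean square
    return np.array([l1, l2, c.std(), kurtosis(c)])

def texture_signature(subbands):
    """Cascade the four statistics across all NSCT subbands."""
    return np.concatenate([subband_features(s) for s in subbands])
```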


2020 ◽  
Vol 20 (3) ◽  
pp. 75-85
Author(s):  
Shefali Dhingra ◽  
Poonam Bansal

Abstract A Content-Based Image Retrieval (CBIR) system is an efficient search engine that can retrieve images from huge repositories by extracting visual features such as color, texture, and shape. Texture is the most prominent of these features. This investigation focuses on the classification challenges that arise with large datasets; texture techniques are explored together with machine learning algorithms to increase retrieval efficiency. We tested our system with three texture techniques and four classifiers: Support Vector Machine (SVM), K-Nearest Neighbor (KNN), Naïve Bayes, and Decision Tree (DT). Evaluation metrics such as precision, recall, false alarm rate, and accuracy are computed to measure the competence of the designed CBIR system on two benchmark datasets, Wang and Brodatz. Results show that on both datasets the KNN and DT classifiers deliver superior results compared to the others.
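A hedged sketch of the classifier comparison, assuming texture feature vectors X with class labels y have already been extracted from the Wang or Brodatz images; hyperparameters are illustrative, not the paper's settings.

```python
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

classifiers = {
    "SVM": SVC(kernel="rbf"),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
}

def compare_classifiers(X, y, cv=5):
    """Mean cross-validated accuracy of each classifier on features X."""
    return {name: cross_val_score(clf, X, y, cv=cv).mean()
            for name, clf in classifiers.items()}
```

Cross-validation keeps the comparison fair when the dataset is small relative to the feature dimension.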


2021 ◽  
Author(s):  
Harvineet Singh ◽  
Vishwali Mhasawade ◽  
Rumi Chunara

Importance: Modern predictive models require large amounts of data for training and evaluation, which can result in models that are specific to certain locations, their populations, and local clinical practices. Yet best practices and guidelines for clinical risk prediction models have not yet considered such challenges to generalizability. Objectives: To investigate changes in measures of predictive discrimination, calibration, and algorithmic fairness when transferring models for predicting in-hospital mortality across ICUs in different populations, and to study the reasons for the lack of generalizability in these measures. Design, Setting, and Participants: In this multi-center cross-sectional study, electronic health records from 179 hospitals across the US with 70,126 hospitalizations were analyzed. Data were collected from 2014 to 2015. Main Outcomes and Measures: The main outcome is in-hospital mortality. The generalization gap, defined as the difference in a model performance metric across hospitals, is computed for discrimination and calibration, namely the area under the receiver operating characteristic curve (AUC) and the calibration slope. To assess model performance by the race variable, we report differences in false negative rates across groups. Data were also analyzed using a causal discovery algorithm, "Fast Causal Inference" (FCI), which infers paths of causal influence while identifying potential influences associated with unmeasured variables. Results: In-hospital mortality rates differed in the range of 3.9%-9.3% (1st-3rd quartile) across hospitals. When transferring models across hospitals, AUC at the test hospital ranged from 0.777 to 0.832 (1st to 3rd quartile; median 0.801); calibration slope from 0.725 to 0.983 (1st to 3rd quartile; median 0.853); and disparity in false negative rates from 0.046 to 0.168 (1st to 3rd quartile; median 0.092). When transferring models across geographies, AUC ranged from 0.795 to 0.813 (1st to 3rd quartile; median 0.804); calibration slope from 0.904 to 1.018 (1st to 3rd quartile; median 0.968); and disparity in false negative rates from 0.018 to 0.074 (1st to 3rd quartile; median 0.040). The distributions of all variable types (demography, vitals, and labs) differed significantly across hospitals and regions. Shifts were observed in the distribution of the race variable and of some clinical (vitals, labs, and surgery) variables by hospital or region. The race variable also mediates differences in the relationship between clinical variables and mortality by hospital/region. Conclusions and Relevance: Group-specific metrics should be assessed during generalizability checks to identify potential harms to those groups. To develop methods that improve and guarantee the performance of prediction models in new environments for groups and individuals, a better understanding and provenance of health processes as well as data generating processes by sub-group are needed to identify and mitigate sources of variation.
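A minimal sketch of how the two headline metrics and their generalization gap might be computed, assuming held-out outcome labels and predicted mortality probabilities at a source and a target hospital. The calibration slope is taken here as the coefficient of a logistic recalibration of outcomes on predicted log-odds, a standard definition that may differ in detail from the paper's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def calibration_slope(y_true, p_pred, eps=1e-6):
    """Slope from refitting outcomes on the model's predicted log-odds."""
    p = np.clip(p_pred, eps, 1 - eps)
    logit = np.log(p / (1 - p))
    lr = LogisticRegression(C=1e6)  # effectively unpenalized refit
    lr.fit(logit.reshape(-1, 1), y_true)
    return lr.coef_[0, 0]           # a slope of 1.0 indicates ideal calibration

def generalization_gap(y_src, p_src, y_tgt, p_tgt):
    """Difference in discrimination and calibration between two hospitals."""
    return {
        "auc_gap": roc_auc_score(y_src, p_src) - roc_auc_score(y_tgt, p_tgt),
        "slope_gap": (calibration_slope(y_src, p_src)
                      - calibration_slope(y_tgt, p_tgt)),
    }
```

The fairness check in the paper compares false negative rates across race groups; the same pattern applies, differencing per-group FNRs instead of AUCs.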


Author(s):  
Deepak Kumar ◽  
Ramandeep Singh

Constant advancement and growth in digital technology are swiftly shifting text detection from hard-copy images to natural images. An in-depth study of previous research reveals that although much work has been done on text detection and recognition in natural scene images, most researchers have confined their surveys to horizontal or near-horizontal text. Such surveys touch on multi-oriented text detection, but curved text detection in natural images has escaped their attention. This necessitates an exploration of this vital aspect of the text detection field, in which a detailed study of horizontal, near-horizontal, multi-oriented, and curved text finds a place under a single cover. To achieve this goal, the present study focuses on fundamental understanding, existing challenges, and proven algorithms for text detection in natural images. The authors discuss the future perspective of recent advances in text detection in natural images, along with various benchmark datasets and performance metrics.


Author(s):  
Rakesh Asery ◽  
Ramesh Kumar Sunkaria ◽  
Puneeta Marwaha ◽  
Lakhan Dev Sharma

In this chapter, the authors introduce content-based image retrieval systems and compare them on a common database. Four content-based local binary descriptors are briefly described, with and without the Gabor transform. Further, the Nth-order derivative descriptor is calculated from the (N-1)th-order derivative, based on rotational and multiscale feature extraction. Finally, distance-based query matching is used to measure similarity with the database images. Performance is evaluated in terms of average precision, average retrieval rate, average retrieval rate across different derivative orders, and feature-vector length versus retrieval time. A comparative experiment was conducted using the Ponce Group images over seven classes (each class has 100 images). In addition, the performance of all descriptors is analyzed in combination with the Gabor transform.
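A hedged sketch of the general descriptor-plus-matching pipeline, using a plain rotation-invariant uniform LBP histogram as a stand-in for the chapter's specific local binary and derivative descriptors, and simple L1 ranking for the distance-based matching step.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, P=8, R=1.0):
    """Rotation-invariant uniform LBP histogram of a 2-D grayscale image."""
    codes = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def match(query_hist, db_hists, k=10):
    """Rank database images by L1 distance to the query signature."""
    dists = np.array([np.abs(query_hist - h).sum() for h in db_hists])
    return np.argsort(dists)[:k]  # indices of the k closest images
```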


2021 ◽  
Vol 13 (13) ◽  
pp. 2619
Author(s):  
Joao Fonseca ◽  
Georgios Douzas ◽  
Fernando Bacao

In remote sensing, Active Learning (AL) has become an important technique to collect informative ground truth data "on-demand" for supervised classification tasks. Despite its effectiveness, it is still significantly reliant on user interaction, which makes it both expensive and time-consuming to implement. Most of the current literature focuses on the optimization of AL by modifying the selection criteria and the classifiers used. Although improvements in these areas will result in more effective data collection, the use of artificial data sources to reduce human-computer interaction remains unexplored. In this paper, we introduce a new component to the typical AL framework, the data generator, a source of artificial data to reduce the amount of user-labeled data required in AL. The implementation of the proposed AL framework is done using Geometric SMOTE as the data generator. We compare the new AL framework to the original one using similar acquisition functions and classifiers over three AL-specific performance metrics in seven benchmark datasets. We show that this modification of the AL framework significantly reduces cost and time requirements for a successful AL implementation in all of the datasets used in the experiment.
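A minimal sketch of one way the data-generator component could slot into a standard pool-based AL loop, using imbalanced-learn's SMOTE as a stand-in for Geometric SMOTE and a least-confidence acquisition function. The classifier, batch size, and budget are illustrative, not the paper's configuration.

```python
import numpy as np
from imblearn.over_sampling import SMOTE  # stand-in for Geometric SMOTE
from sklearn.ensemble import RandomForestClassifier

def al_with_generator(X_labeled, y_labeled, X_pool, y_pool,
                      budget=100, batch=10):
    """Pool-based AL loop with a data-generator step.

    At each iteration the labeled set is augmented with synthetic
    samples before refitting; the `batch` most uncertain pool points
    are then queried (here y_pool plays the oracle).
    """
    clf = RandomForestClassifier(random_state=0)
    for _ in range(budget // batch):
        # Generator step: oversample the labeled set (SMOTE needs more
        # minority samples than its k_neighbors, 5 by default).
        X_aug, y_aug = SMOTE(random_state=0).fit_resample(X_labeled, y_labeled)
        clf.fit(X_aug, y_aug)
        # Least-confidence acquisition: query where the model is unsure.
        uncertainty = 1.0 - clf.predict_proba(X_pool).max(axis=1)
        query = np.argsort(uncertainty)[-batch:]
        X_labeled = np.vstack([X_labeled, X_pool[query]])
        y_labeled = np.concatenate([y_labeled, y_pool[query]])
        X_pool = np.delete(X_pool, query, axis=0)
        y_pool = np.delete(y_pool, query)
    return clf
```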


Author(s):  
Judy Simon

Computer vision research and its applications in the fashion industry have grown popular due to the rapid growth of information technology. Fashion detection is increasingly popular because most fashion goods must be detected before they can be worn virtually. Early detection of the human body in the input picture is necessary to determine where the garment area is and then synthesize it. For this reason, detection is the starting point for most in-depth research. Clothing landmarks are detected through many feature items that emphasize fashion attributes. Feature extraction is performed for better accuracy and robustness to pose and scale. Convolution filters extract these features through many epochs and max-pooling layers in the neural network. In this study, classification is optimized using an SVM to attain high overall efficiency: the proposed CNN approach to fashion prediction is combined with an SVM for better classification. Furthermore, the classification error is minimized through the evaluation procedure to obtain better accuracy. Finally, this research work attains better accuracy and other performance metrics than the various traditional approaches. Benchmark datasets, current methodologies, and performance comparisons are organized for each topic.
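A hedged sketch of the CNN-plus-SVM pattern described above, using a pretrained ResNet-18 from torchvision as an illustrative backbone (the paper does not tie the approach to this architecture) and an RBF-kernel SVM as the final classifier.

```python
import torch
from torchvision import models
from sklearn.svm import SVC

# Pretrained CNN as a fixed feature extractor; the SVM replaces the
# usual softmax head for the final classification.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the 1000-way classification head
backbone.eval()

@torch.no_grad()
def extract_features(batch):
    """batch: (N, 3, 224, 224) tensor, ImageNet-normalized."""
    return backbone(batch).numpy()  # (N, 512) feature vectors

svm = SVC(kernel="rbf")
# svm.fit(extract_features(train_images), train_labels)
# predictions = svm.predict(extract_features(test_images))
```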


2011 ◽  
Vol 201-203 ◽  
pp. 2330-2333
Author(s):  
Xin Wu Chen ◽  
Zhan Qing Ma ◽  
Li Wei Liu

To improve the retrieval rate of contourlet-transform retrieval systems and reduce the redundancy of the contourlet transform, which costs too much time in building the feature-vector database, a new wavelet-contourlet transform retrieval system is proposed. The contributions of six different features (mean, standard deviation, absolute mean energy, L2-energy, skewness, and kurtosis) to retrieval rates were examined. Based on each single feature's ability in the retrieval system, the feature vectors are constructed by cascading the absolute mean energy and kurtosis of each sub-band's coefficients, and the similarity measure used is the Canberra distance. Experimental results on 109 Brodatz texture images show that the features cascaded from absolute mean energy and kurtosis lead to a higher retrieval rate than several contourlet-transform retrieval systems that use the combination of standard deviation and absolute mean energy most common today, for the same feature-vector dimension.
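A minimal sketch of the signature construction and matching step, assuming the wavelet-contourlet subband coefficients are supplied by a separate decomposition (not shown). SciPy's `canberra` implements d(x, y) = Σ_i |x_i − y_i| / (|x_i| + |y_i|).

```python
import numpy as np
from scipy.spatial.distance import canberra
from scipy.stats import kurtosis

def subband_signature(coeffs):
    """Absolute mean energy and kurtosis of one subband's coefficients."""
    c = np.asarray(coeffs, dtype=float).ravel()
    return np.array([np.abs(c).mean(), kurtosis(c)])

def image_signature(subbands):
    """Cascade the two statistics across all transform subbands."""
    return np.concatenate([subband_signature(s) for s in subbands])

def rank_by_canberra(query_sig, db_sigs, k=10):
    """Rank database images by Canberra distance to the query signature."""
    dists = np.array([canberra(query_sig, s) for s in db_sigs])
    return np.argsort(dists)[:k]
```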


2013 ◽  
Vol 2013 ◽  
pp. 1-10 ◽  
Author(s):  
Vaishali Naik ◽  
R. S. Gamad ◽  
P. P. Bansod

Background. The segmentation of the common carotid artery (CCA) wall is imperative for the determination of the intima-media thickness (IMT) on B-mode ultrasound (US) images. The IMT is considered an important indicator in evaluating the risk of developing atherosclerosis. In this paper, the authors discuss the relevance of these measurements in clinical practice and the challenges faced when approaching the segmentation of the carotid artery in ultrasound images. The paper presents an overall review of commonly used methods for CCA segmentation and IMT measurement, along with the different performance metrics that have been proposed and used for performance validation. A summary and future directions are given in the conclusion.
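For readers new to the measurement, a minimal sketch of the final IMT computation once the two interfaces have been segmented. It uses the simple vertical-distance convention on a longitudinal scan; the methods reviewed may use more elaborate boundary-distance definitions.

```python
import numpy as np

def imt_mm(lumen_intima_y, media_adventitia_y, mm_per_pixel):
    """Mean intima-media thickness from two segmented interfaces.

    Both inputs are y-coordinates (pixels) of the lumen-intima and
    media-adventitia boundaries sampled at the same columns of a
    longitudinal B-mode image; the result is in millimeters.
    """
    thickness_px = (np.asarray(media_adventitia_y, dtype=float)
                    - np.asarray(lumen_intima_y, dtype=float))
    return thickness_px.mean() * mm_per_pixel
```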

