Biomedical image representation approach using visualness and spatial information in a concept feature space for interactive region-of-interest-based retrieval

2015 ◽  
Vol 2 (4) ◽  
pp. 046502 ◽  
Author(s):  
Md. Mahmudur Rahman ◽  
Sameer K. Antani ◽  
Dina Demner-Fushman ◽  
George R. Thoma

2018 ◽  
Vol 8 (11) ◽  
pp. 2242 ◽  
Author(s):  
Bushra Zafar ◽  
Rehan Ashraf ◽  
Nouman Ali ◽  
Muhammad Iqbal ◽  
Muhammad Sajid ◽  
...  

The requirement for effective image search, which motivates the use of Content-Based Image Retrieval (CBIR) and the search for similar multimedia content on the basis of a user query, remains an open research problem for computer vision applications. Bag of Visual Words (BoVW) based image representations are applied to object recognition, image classification and content-based image analysis. However, local features detected at interest points are quantized in the feature space, and the resulting histogram or image signature does not retain any information about the co-occurrence of features in the 2D image space. The loss of this spatial information is critical, as it adversely affects the performance of image classification models. The most notable contribution in this context is Spatial Pyramid Matching (SPM), which captures the absolute spatial distribution of visual words. However, SPM is sensitive to image transformations such as rotation, flipping and translation; when images are not well aligned, SPM may lose its discriminative power. This paper introduces a novel approach to encoding relative spatial information in the histogram-based representation of the BoVW model. This is achieved by computing the global geometric relationship between pairs of identical visual words with respect to the centroid of an image. The proposed approach is evaluated on five different datasets. Comprehensive experiments demonstrate the robustness of the proposed image representation compared to state-of-the-art methods in terms of precision and recall.
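A minimal Python sketch of the relative spatial encoding described above, assuming keypoints have already been quantized to visual-word indices; using the angle subtended at the image centroid as the pairwise geometric measure, and the bin count, are illustrative assumptions rather than the exact formulation in the paper.

```python
import numpy as np

def relative_spatial_histogram(keypoints, word_ids, vocab_size, n_bins=8):
    # For every pair of keypoints assigned to the same visual word, bin the
    # angle the pair subtends at the image centroid. The pairing rule and the
    # angle binning are illustrative assumptions, not the authors' formulation.
    centroid = keypoints.mean(axis=0)
    hist = np.zeros((vocab_size, n_bins))
    for w in range(vocab_size):
        pts = keypoints[word_ids == w]
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                v1, v2 = pts[i] - centroid, pts[j] - centroid
                cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
                angle = np.arccos(np.clip(cos, -1.0, 1.0))          # in [0, pi]
                hist[w, min(int(angle / np.pi * n_bins), n_bins - 1)] += 1
    return hist.ravel()

# Toy usage: 200 random keypoints quantized into a 50-word vocabulary
kp = np.random.rand(200, 2) * 256
ids = np.random.randint(0, 50, size=200)
print(relative_spatial_histogram(kp, ids, vocab_size=50).shape)      # (400,)
```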


2020 ◽  
Vol 12 (21) ◽  
pp. 3611
Author(s):  
Lisa Landuyt ◽  
Niko E. C. Verhoest ◽  
Frieke M. B. Van Coillie

The European Space Agency’s Sentinel-1 constellation provides timely and freely available dual-polarized C-band Synthetic Aperture Radar (SAR) imagery. The launch of these and other SAR sensors has boosted the field of SAR-based flood mapping. However, flood mapping in vegetated areas remains a topic under investigation, as backscatter is the result of a complex mixture of backscattering mechanisms and strongly depends on the wave and vegetation characteristics. In this paper, we present an unsupervised object-based clustering framework capable of mapping flooding in the presence and absence of flooded vegetation based on freely and globally available data only. Based on a SAR image pair, the region of interest is segmented into objects, which are converted to a SAR-optical feature space and clustered using K-means. These clusters are then classified based on automatically determined thresholds, and the resulting classification is refined by means of several region growing post-processing steps. The final outcome discriminates between dry land, permanent water, open flooding, and flooded vegetation. Forested areas, which might hide flooding, are indicated as well. The framework is presented based on four case studies, of which two contain flooded vegetation. For the optimal parameter combination, three-class F1 scores between 0.76 and 0.91 are obtained depending on the case, and the pixel- and object-based thresholding benchmarks are outperformed. Furthermore, this framework allows an easy integration of additional data sources when these become available.
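A minimal sketch of the object-based clustering stage, assuming a co-registered pre- and post-event Sentinel-1 pair (VV and VH backscatter); the SLIC segmentation, the normalisation and the number of clusters are illustrative choices rather than the authors' settings, and the automatic thresholding and region-growing refinement that follow are omitted.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.cluster import KMeans

def cluster_objects(pre_vv, pre_vh, post_vv, post_vh, n_segments=500, k=6):
    # Stack the pre/post-event backscatter layers into one feature image,
    # segment it into objects, and cluster the per-object mean features.
    stack = np.dstack([pre_vv, pre_vh, post_vv, post_vh]).astype(float)
    stack = (stack - stack.min()) / (np.ptp(stack) + 1e-9)        # normalise for SLIC
    segments = slic(stack, n_segments=n_segments, compactness=0.1,
                    channel_axis=-1, start_label=0)
    obj_ids = np.unique(segments)
    feats = np.array([stack[segments == i].mean(axis=0) for i in obj_ids])
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(feats)   # cluster objects
    return segments, obj_ids, labels                              # labels[i] is the cluster of obj_ids[i]

# Toy usage with random backscatter layers (dB-like values)
h, w = 128, 128
layers = [np.random.randn(h, w) * 2 - 15 for _ in range(4)]
seg, ids, lab = cluster_objects(*layers)
print(seg.shape, ids.shape, lab.shape)
```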


Sensors ◽  
2020 ◽  
Vol 20 (4) ◽  
pp. 1181
Author(s):  
Keisuke Yoneda ◽  
Akisuke Kuramoto ◽  
Naoki Suganuma ◽  
Toru Asaka ◽  
Mohammad Aldibaja ◽  
...  

Traffic light recognition is an indispensable elemental technology for automated driving in urban areas. In this study, we propose an algorithm that recognizes traffic lights and arrow lights by image processing, using a digital map and the precise vehicle pose estimated by a localization module. The digital map allows a region of interest to be determined in the image, reducing both computational cost and false detections. In addition, this study develops an algorithm that recognizes arrow lights using their position relative to the traffic light as prior spatial information, which allows the recognition of distant arrow lights that are difficult even for humans to see clearly. Experiments were conducted to evaluate the recognition performance of the proposed method and to verify whether it meets the performance required for automated driving. Quantitative evaluations indicate that the proposed method achieved average F-values of 91.8% and 56.7% for traffic lights and arrow lights, respectively. It was confirmed that the arrow-light detection could recognize small arrow objects even when their size was smaller than 10 pixels. The verification experiments indicate that the performance of the proposed method meets the requirements for smooth acceleration and deceleration at intersections in automated driving.
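A minimal sketch of the map-based region-of-interest step, assuming a pinhole camera model and a camera pose supplied by the localization module; the intrinsics, pose and window size below are made-up values.

```python
import numpy as np

def project_to_roi(light_world, cam_from_world, K, roi_half=40):
    # Transform a mapped traffic-light position into the camera frame and
    # project it with a pinhole model; place a square search window around it.
    p_cam = cam_from_world[:3, :3] @ light_world + cam_from_world[:3, 3]
    if p_cam[2] <= 0:
        return None                       # light is behind the camera
    uv = K @ (p_cam / p_cam[2])           # perspective projection
    u, v = int(uv[0]), int(uv[1])
    return (u - roi_half, v - roi_half, u + roi_half, v + roi_half)  # (x1, y1, x2, y2)

K = np.array([[1200., 0., 640.],
              [0., 1200., 360.],
              [0., 0., 1.]])              # example intrinsics
T = np.eye(4)                             # camera-from-world pose (identity here)
print(project_to_roi(np.array([2.0, -1.5, 35.0]), T, K))
```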


2021 ◽  
Author(s):  
Lynn Le ◽  
Luca Ambrogioni ◽  
Katja Seeliger ◽  
Yağmur Güçlütürk ◽  
Marcel van Gerven ◽  
...  

Reconstructing complex and dynamic visual perception from brain activity remains a major challenge in machine learning applications to neuroscience. Here we present a new method for reconstructing naturalistic images and videos from very large single-participant functional magnetic resonance imaging data that leverages the recent success of image-to-image transformation networks. This is achieved by exploiting spatial information obtained from retinotopic mappings across the visual system. More specifically, we first determine what position each voxel in a particular region of interest represents in the visual field, based on its corresponding receptive field location. Then, the 2D image representation of the brain activity on the visual field is passed to a fully convolutional image-to-image network trained to recover the original stimuli using a VGG feature loss with an adversarial regularizer. In our experiments, we show that our method offers a significant improvement over existing video reconstruction techniques.
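A minimal sketch of the retinotopy-based step, assuming per-voxel receptive-field centres (in degrees of visual angle) are available from a retinotopic mapping; voxel responses are averaged onto a 2D visual-field grid of the kind that would then be fed to the image-to-image network. The grid size and field of view are illustrative.

```python
import numpy as np

def voxels_to_visual_field(responses, rf_x, rf_y, grid=96, fov=10.0):
    # Splat each voxel's response onto the pixel corresponding to its
    # receptive-field centre, averaging where several voxels overlap.
    img = np.zeros((grid, grid))
    count = np.zeros((grid, grid))
    col = np.clip(((rf_x + fov) / (2 * fov) * grid).astype(int), 0, grid - 1)
    row = np.clip(((fov - rf_y) / (2 * fov) * grid).astype(int), 0, grid - 1)
    np.add.at(img, (row, col), responses)
    np.add.at(count, (row, col), 1.0)
    return img / np.maximum(count, 1.0)

# Toy usage: 5000 voxels with random receptive-field centres and responses
n = 5000
x, y = np.random.uniform(-10, 10, n), np.random.uniform(-10, 10, n)
print(voxels_to_visual_field(np.random.randn(n), x, y).shape)   # (96, 96)
```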


Author(s):  
Salman Qadri

The purpose of this study is to highlight the significance of machine vision for kidney stone identification. A novel optimized fused texture feature framework was designed to identify stones in the kidney. A fused set of 234 texture features, namely GLCM, RLM and histogram features, was acquired for each region of interest (ROI). Eight ROIs of sizes 16x16, 20x20 and 22x22 were taken from each image, yielding a feature space of 280,800 values (1200x234) that was difficult to handle. To overcome this data handling issue, the POE+ACC feature optimization technique was applied to obtain the 30 most discriminative features for each ROI. The optimized fused feature dataset (1200x30) was fed to four machine vision classifiers: Random Forest, MLP, J48 and Naïve Bayes. Finally, it was observed that Random Forest provided the best result among the deployed classifiers, with 90% accuracy on the 22x22 ROI.
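A hedged sketch of the texture-feature and Random Forest pipeline: GLCM and first-order histogram statistics are computed per ROI and classified. The RLM features and the POE+ACC selection step are omitted, and the ROI data, GLCM settings and labels here are illustrative only.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def roi_features(roi):
    # A few GLCM properties plus simple histogram statistics for one ROI.
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    glcm_feats = [graycoprops(glcm, p).mean()
                  for p in ('contrast', 'homogeneity', 'energy', 'correlation')]
    hist_feats = [roi.mean(), roi.std(), float(np.median(roi))]
    return np.array(glcm_feats + hist_feats)

# Toy dataset: 100 random 22x22 ROIs with binary labels (stone / normal)
rois = np.random.randint(0, 256, size=(100, 22, 22), dtype=np.uint8)
labels = np.random.randint(0, 2, size=100)
X = np.stack([roi_features(r) for r in rois])
clf = RandomForestClassifier(n_estimators=100).fit(X, labels)
print(clf.score(X, labels))
```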


2019 ◽  
Vol 7 (4) ◽  
Author(s):  
Noha Elfiky

The Bag-of-Words (BoW) approach has been successfully applied in the context of category-level image classification. To incorporate spatial image information in the BoW model, Spatial Pyramids (SPs) are used. However, spatial pyramids are rigid in nature and are based on pre-defined grid configurations. As a consequence, they often fail to coincide with the underlying spatial structure of images from different categories, which may negatively affect the classification accuracy. The aim of this paper is to use the 3D scene geometry to steer the layout of spatial pyramids for category-level image classification (object recognition). The proposed approach provides an image representation by inferring the constituent geometrical parts of a scene. As a result, the image representation retains the descriptive spatial information needed to yield a structural description of the image. From large-scale experiments on Pascal VOC2007 and Caltech101, it can be concluded that the proposed Generic SPs outperform the standard SPs.
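For contrast with the proposed Generic SPs, a minimal sketch of the standard rigid spatial-pyramid histogram that the paper argues against; the Generic SPs would instead derive the pooling regions from the inferred 3D scene geometry. Vocabulary size and pyramid depth are illustrative.

```python
import numpy as np

def spatial_pyramid_histogram(keypoints, word_ids, vocab_size, img_w, img_h, levels=2):
    # Pool visual-word counts over a fixed grid that doubles in resolution
    # at each pyramid level, then concatenate the per-cell histograms.
    hists = []
    for level in range(levels + 1):
        cells = 2 ** level
        for cy in range(cells):
            for cx in range(cells):
                in_cell = ((keypoints[:, 0] // (img_w / cells) == cx) &
                           (keypoints[:, 1] // (img_h / cells) == cy))
                h = np.bincount(word_ids[in_cell], minlength=vocab_size)
                hists.append(h / max(in_cell.sum(), 1))   # L1-normalised cell histogram
    return np.concatenate(hists)

kp = np.random.rand(300, 2) * [640, 480]
ids = np.random.randint(0, 100, size=300)
print(spatial_pyramid_histogram(kp, ids, 100, 640, 480).shape)   # (2100,) for 21 cells
```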


2010 ◽  
Author(s):  
Thavavel V ◽  
JafferBasha J

Segmentation forms the starting point for image analysis, especially for medical images, making any abnormalities in tissues distinctly visible. Possible applications include the detection of tumor boundaries in SPECT, MRI or electron MRI (EMRI). Nevertheless, tumors are heterogeneous and pose a great problem when automatic segmentation is attempted to accurately detect the region of interest (ROI). Consequently, it is a challenging task to design an automatic segmentation algorithm without incorporating a priori knowledge of the organ being imaged. To meet this challenge, we propose an intelligence-based approach integrating an evolutionary k-means algorithm within a multi-resolution framework for feature segmentation with higher accuracy and lower user interaction cost. The approach provides several advantages. First, a spherical coordinate transform (SCT) is applied to the original RGB data to identify variegated coloring and to significantly reduce computational overhead. Second, the translation-invariant property of discrete wavelet frames (DWF) is exploited to define the color and texture features, using the chromaticity of the LL band and the luminance of the LH and HL bands, respectively. Finally, the genetic-algorithm-based K-means (GKA), which can intelligently learn the distribution of different tissue types without any prior knowledge, is adopted to cluster the feature space with optimized cluster centers. Experimental results of the proposed algorithm on multi-modality images such as MRI, SPECT and EMRI are presented and analyzed in terms of error measures to verify its effectiveness and feasibility for medical applications.
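A minimal sketch of the spherical coordinate transform (SCT) step, mapping each RGB pixel to a magnitude and two angles so that chromaticity can be separated from intensity before the wavelet-frame feature extraction; the particular angle definitions here are a simplified variant, not necessarily the exact transform used in the paper.

```python
import numpy as np

def rgb_to_sct(img):
    # Treat each RGB triple as a 3D vector: keep its length (intensity) and
    # two angles (chromaticity). Angle definitions are an illustrative variant.
    r, g, b = (img[..., c].astype(float) for c in range(3))
    l = np.sqrt(r ** 2 + g ** 2 + b ** 2) + 1e-9      # overall magnitude
    theta = np.arccos(np.clip(b / l, -1.0, 1.0))        # angle from the blue axis
    phi = np.arctan2(g, r)                              # angle in the red-green plane
    return np.dstack([l, theta, phi])

# Toy usage on a random RGB image
sct = rgb_to_sct(np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8))
print(sct.shape)   # (64, 64, 3)
```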

