Automated segmentation of key structures of the eye using a light-weight two-step classifier

Author(s):  
Adish Rao ◽  
Aniruddha Mysore ◽  
Siddhanth Ajri ◽  
Abhishek Guragol ◽  
Poulami Sarkar ◽  
...  

We present an automated approach to segment key structures of the eye, viz., the iris, pupil and sclera, in images obtained using an Augmented Reality (AR)/Virtual Reality (VR) application. This is done using a two-step classifier: in the first step, we use an encoder-decoder network to obtain a pixel-wise classification of the regions that comprise the iris, the sclera and the background (image pixels that lie outside the region of the eye). In the second step, we perform a pixel-wise classification of the iris region to delineate the pupil. The images in the study are from the OpenEDS challenge and were used to evaluate both the accuracy and the computational cost of the proposed segmentation method. Our approach achieved a score of 0.93 on the leaderboard, outperforming the baseline model with a higher accuracy and a smaller number of parameters. These results demonstrate the promise of pipelined models and the benefit of combining domain-specific processing and feature engineering with deep-learning based approaches for segmentation tasks.
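The cascaded labelling logic of such a two-step classifier can be sketched as follows. This is a minimal illustration only: the stage "models" here are simple intensity thresholds standing in for the trained networks, and all class indices and threshold values are assumptions, not the paper's.

```python
import numpy as np

def stage1(img):
    # Stand-in for the first-stage encoder-decoder network:
    # label each pixel as background (0), sclera (1) or iris (2).
    # The intensity thresholds below are illustrative assumptions.
    labels = np.zeros(img.shape, dtype=np.uint8)
    labels[img < 120] = 2                         # iris (still includes pupil)
    labels[(img >= 120) & (img < 200)] = 1        # sclera
    return labels

def stage2(img, labels, pupil_thresh=40):
    # Second stage: re-examine only the iris pixels and carve out
    # the pupil (class 3), leaving all other labels untouched.
    out = labels.copy()
    out[(labels == 2) & (img < pupil_thresh)] = 3
    return out

def segment(img):
    # The pipeline: stage 2 operates strictly within the iris mask
    # produced by stage 1, mirroring the two-step design.
    return stage2(img, stage1(img))
```

The key design point is that the pupil classifier never sees non-iris pixels, which shrinks the second-stage problem and the parameter count.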

2021 ◽  
Vol 11 (2) ◽  
pp. 535
Author(s):  
Mahbubunnabi Tamal

Quantification and classification of heterogeneous radiotracer uptake in Positron Emission Tomography (PET) using textural features (termed radiomics) and artificial intelligence (AI) has the potential to serve as a biomarker for diagnosis and prognosis. However, textural features have been shown to be strongly correlated with volume, segmentation and quantization, while the impact of image contrast and noise has not been assessed systematically. Further continuous investigation is required to update the existing standardization initiatives. This study aimed to investigate the relationships between textural features and these factors using an 18F-filled torso NEMA phantom imaged at different contrasts and reconstructed from acquisitions of different durations to represent varying levels of noise. The phantom was also scanned with heterogeneous spherical inserts fabricated with 3D printing technology. All spheres were delineated using: (1) the exact boundaries based on their known diameters; (2) a fixed 40% threshold; and (3) an adaptive threshold. Six textural features were derived from the gray level co-occurrence matrix (GLCM) using different quantization levels. The results indicate that homogeneity and dissimilarity are the most suitable features for measuring PET tumor heterogeneity at a quantization level of 64, provided that the segmentation method is robust to noise and contrast variations. To use these textural features as prognostic biomarkers, changes in textural features between baseline and treatment scans should always be reported along with the changes in volume.
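For illustration, homogeneity and dissimilarity can be computed from a GLCM as in the sketch below. This minimal version builds a symmetric, normalised co-occurrence matrix for a single horizontal offset; the study itself uses multiple quantization levels (up to 64), and standard implementations aggregate over several offsets and angles.

```python
import numpy as np

def glcm(img, levels):
    # Co-occurrence counts for horizontal neighbours (offset (0, 1)),
    # made symmetric and normalised to a probability matrix.
    m = np.zeros((levels, levels))
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        m[a, b] += 1
        m[b, a] += 1
    return m / m.sum()

def homogeneity(p):
    # Weights co-occurrences by closeness to the diagonal: 1 for equal
    # grey levels, falling off as (i - j)^2 grows.
    i, j = np.indices(p.shape)
    return float((p / (1.0 + (i - j) ** 2)).sum())

def dissimilarity(p):
    # Mean absolute grey-level difference between co-occurring pixels.
    i, j = np.indices(p.shape)
    return float((p * np.abs(i - j)).sum())
```

A perfectly uniform region gives homogeneity 1 and dissimilarity 0, which is why these two features move in opposite directions as heterogeneity increases.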


2021 ◽  
Vol 503 (2) ◽  
pp. 1828-1846
Author(s):  
Burger Becker ◽  
Mattia Vaccari ◽  
Matthew Prescott ◽  
Trienko Grobler

ABSTRACT The morphological classification of radio sources is important for gaining a full understanding of galaxy evolution processes and their relation to local environmental properties. Furthermore, the complex nature of the problem, its appeal for citizen scientists, and the large data rates generated by existing and upcoming radio telescopes combine to make the morphological classification of radio sources an ideal test case for the application of machine learning techniques. One approach that has shown great promise recently is convolutional neural networks (CNNs). The literature, however, lacks two things when it comes to CNNs and radio galaxy morphological classification. First, a proper analysis of whether overfitting occurs when training CNNs to perform radio galaxy morphological classification using a small curated training set. Secondly, a thorough comparative study of the practical applicability of the CNN architectures in the literature. Both of these shortcomings are addressed in this paper. Multiple performance metrics are used for the comparative study, such as inference time, model complexity, computational complexity, and mean per-class accuracy. As part of this study, we also investigate the effect that receptive field, stride length, and coverage have on recognition performance. For the sake of completeness, we also investigate the recognition performance gains obtainable by employing classification ensembles. A ranking system based on recognition and computational performance is proposed. MCRGNet, Radio Galaxy Zoo, and ConvXpress (a novel classifier) are the architectures that best balance computational requirements with recognition performance.
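Mean per-class accuracy, one of the metrics used in the comparative study, weights every morphological class equally regardless of how many examples it has, which matters for small, imbalanced curated training sets. A minimal sketch:

```python
import numpy as np

def mean_per_class_accuracy(y_true, y_pred):
    # Average of per-class recalls: each class contributes equally,
    # so a majority-class predictor cannot score well on rare classes.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    accs = [(y_pred[y_true == c] == c).mean() for c in np.unique(y_true)]
    return float(np.mean(accs))
```

For example, always predicting the majority class on labels [0, 0, 0, 1] yields a plain accuracy of 0.75 but a mean per-class accuracy of only 0.5.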


Author(s):  
VLADIMIR NIKULIN ◽  
TIAN-HSIANG HUANG ◽  
GEOFFREY J. MCLACHLAN

The method presented in this paper is novel in its natural combination of two mutually dependent steps. Feature selection is the key element (first step) of our classification system, which was employed during the 2010 International RSCTC data mining (bioinformatics) Challenge. The second step may be implemented using any suitable classifier, such as linear regression, support vector machines or neural networks. We conducted leave-one-out (LOO) experiments with several feature selection techniques and classifiers. Based on the LOO evaluations, we decided to use feature selection with the separation-type Wilcoxon-based criterion for all final submissions. The method was tested successfully during the RSCTC data mining Challenge, where we achieved the top score in the Basic track.
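A rank-based separation criterion of this general kind can be sketched as below. This is a simplified stand-in for the authors' separation-type Wilcoxon-based criterion: ties are ignored, the score is just the gap in mean ranks between the two classes, and the exact scoring used in the challenge may differ.

```python
import numpy as np

def ranksum_scores(X, y):
    # Wilcoxon rank-sum style separation score per feature: rank all
    # samples on the feature, then compare the mean rank of class 1
    # against class 0. Larger gaps suggest better class separation.
    scores = []
    for f in X.T:
        ranks = f.argsort().argsort() + 1  # 1-based ranks (ties ignored)
        scores.append(abs(ranks[y == 1].mean() - ranks[y == 0].mean()))
    return np.array(scores)

def select_top_k(X, y, k):
    # Keep the k features with the largest separation scores;
    # any classifier (SVM, linear regression, ...) can follow as step two.
    return np.argsort(ranksum_scores(X, y))[::-1][:k]
```

In a genuine LOO protocol, the selection itself would be repeated inside each leave-one-out fold to avoid selection bias.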


2021 ◽  
Vol 1 (2) ◽  
pp. 239-251
Author(s):  
Ky Tran ◽  
Sid Keene ◽  
Erik Fretheim ◽  
Michail Tsikerdekis

Marine network protocols are domain-specific network protocols that aim to incorporate features suited to the specialized marine contexts in which devices are deployed. Devices deployed on such vessels include critical equipment; however, limited research exists on marine network protocol security. In this paper, we provide an analysis of several marine network protocols used in today’s vessels and provide a classification of attack risks. Several protocols have known security limitations, such as the Automatic Identification System (AIS) and National Marine Electronics Association (NMEA) 0183, while newer protocols, such as OneNet, provide more security hardening. We further identify several challenges and opportunities for future implementations of such protocols.
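As an example of the kind of limitation discussed, NMEA 0183 sentences carry only an XOR checksum: an integrity check with no authentication, so any party able to inject traffic can forge valid sentences. A minimal verifier:

```python
def nmea_checksum_ok(sentence):
    # NMEA 0183 frames a sentence as $<body>*<hh>, where <hh> is the
    # XOR of every character between '$' and '*', in two hex digits.
    # This detects transmission errors only; it provides no authentication.
    body, _, given = sentence.strip().lstrip('$').partition('*')
    calc = 0
    for ch in body:
        calc ^= ord(ch)
    return format(calc, '02X') == given.upper()[:2]
```

The sentence payload and talker/formatter fields are plain text, so parsing is trivial for both legitimate receivers and attackers alike.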


Koedoe ◽  
1995 ◽  
Vol 38 (1) ◽  
Author(s):  
G.J. Bredenkamp ◽  
H. Bezuidenhout

A procedure for the effective classification of large phytosociological data sets, and for the combination of many data sets from various parts of the South African grasslands, is demonstrated. The procedure suggests a region-by-region or project-by-project treatment of the data. The analyses are performed step by step to effectively bring together all relevés of similar or related plant communities. The first step involves a separate numerical classification of each subset (region), with subsequent refinement by Braun-Blanquet procedures. The resulting plant communities are summarised in a single synoptic table by calculating a synoptic value for each species in each community. In the second step all communities in the synoptic table are classified by numerical analysis, to bring related communities from different regions or studies together in a single cluster. After refinement of these clusters by Braun-Blanquet procedures, broad vegetation types are identified. As a third step, phytosociological tables are compiled for each identified broad vegetation type, and a comprehensive abstract hierarchy is constructed.
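The synoptic table of the first step can be illustrated with a small sketch. Here the synoptic value is taken to be the percentage of relevés in a community that contain a given species (a constancy-style measure); this is an assumption for illustration, and the paper's exact formula may differ.

```python
from collections import defaultdict

def synoptic_table(releves):
    # releves: list of (community_label, set_of_species) pairs.
    # Returns {community: {species: percentage of releves containing it}}.
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for community, species in releves:
        totals[community] += 1
        for s in species:
            counts[community][s] += 1
    return {c: {s: 100.0 * n / totals[c] for s, n in sp.items()}
            for c, sp in counts.items()}
```

Collapsing each community to one row of synoptic values is what makes the second-step numerical classification across regions tractable.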


2004 ◽  
Vol 26 (3) ◽  
pp. 125-134
Author(s):  
Armin Gerger ◽  
Patrick Bergthaler ◽  
Josef Smolle

Aims. In tissue counter analysis (TCA), digital images of complex histologic sections are dissected into elements of equal size and shape, and digital information comprising grey level, colour and texture features is calculated for each element. In this study we assessed the feasibility of TCA for the quantitative description of both the amount and the distribution of immunostained material. Methods. In a first step, our system was trained to differentiate between background and tissue on the one hand, and between immunopositive and so‐called other tissue on the other. In a second step, immunostained slides were automatically screened and the procedure was tested for the quantitative description of the amount of cytokeratin (CK) and leukocyte common antigen (LCA) immunopositive structures. Additionally, fractal analysis was applied to all cases to describe the architectural distribution of immunostained material. Results. The procedure yielded reproducible assessments of the relative amounts of immunopositive tissue components when the number and percentage of CK- and LCA-stained structures were assessed. Furthermore, a reliable classification of immunopositive patterns was achieved by means of fractal dimensionality. Conclusions. Tissue counter analysis combined with classification trees and fractal analysis is a fully automated and reproducible approach for quantitative description in immunohistology.
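The dissection step of TCA can be sketched as follows. The mean grey level here stands in for the fuller set of grey level, colour and texture features computed per element in the actual system; the tile size is arbitrary.

```python
import numpy as np

def tile_features(img, size):
    # Dissect a grey-level image into square elements of equal size and
    # compute one feature (the mean grey level) per element. A classifier
    # (e.g. a classification tree) would then label each element.
    h, w = img.shape
    feats = {}
    for r in range(0, h - size + 1, size):
        for c in range(0, w - size + 1, size):
            feats[(r // size, c // size)] = float(img[r:r + size, c:c + size].mean())
    return feats
```

Because every element has the same size and shape, counts and percentages of elements per class translate directly into relative amounts of stained tissue.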


Author(s):  
A. Montaldo ◽  
L. Fronda ◽  
I. Hedhli ◽  
G. Moser ◽  
S. B. Serpico ◽  
...  

Abstract. In this paper, a multiscale Markov framework is proposed to address the classification of multiresolution and multisensor remotely sensed data. The proposed framework uses a quadtree to model the interactions across different spatial resolutions, and a Markov model with respect to a generic total order relation to handle contextual information at each scale, in order to favor applicability to very high resolution imagery. The methodological properties of the proposed hierarchical framework are investigated. Firstly, we prove the causality of the overall proposed model, a particularly advantageous property in terms of the computational cost of inference. Secondly, we derive the expression of the marginal posterior mode criterion for inference within the proposed framework. Within this framework, a specific algorithm is formulated by defining, within each layer of the quadtree, a Markov chain model with respect to a pixel scan that combines a zig-zag trajectory with a Hilbert space-filling curve. Data collected by distinct sensors at the same spatial resolution are fused through gradient boosted regression trees. The developed algorithm was experimentally validated on two very high resolution datasets comprising multispectral, panchromatic and radar satellite images. The experimental results confirm the effectiveness of the proposed algorithm compared to previous techniques based on alternative approaches to multiresolution fusion.
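The Hilbert space-filling curve used in the pixel scan can be generated with the standard distance-to-coordinate mapping sketched below. This illustrates the scan order only, not the paper's Markov chain construction or its combination with the zig-zag trajectory.

```python
def hilbert_d2xy(order, d):
    # Map a distance d along the Hilbert curve of a 2**order x 2**order
    # grid to (x, y) cell coordinates (standard iterative algorithm).
    # Successive d values land on spatially adjacent cells, which is the
    # locality property that makes such scans attractive for Markov chains.
    x = y = 0
    s = 1
    while s < (1 << order):
        rx = 1 & (d // 2)
        ry = 1 & (d ^ rx)
        if ry == 0:              # rotate the quadrant as needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        d //= 4
        s *= 2
    return x, y
```

A raster scan, by contrast, makes large spatial jumps at the end of every row, breaking the neighbour continuity a causal chain model relies on.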

