Classification of Clothing Using Midlevel Layers

ISRN Robotics ◽  
2013 ◽  
Vol 2013 ◽  
pp. 1-17 ◽  
Author(s):  
Bryan Willimon ◽  
Ian Walker ◽  
Stan Birchfield

We present a multilayer approach to classify articles of clothing within a pile of laundry. The classification features are composed of color, texture, shape, and edge information from 2D and 3D data within a local and global perspective. The contribution of this paper is a novel approach of classification termed L-M-H, more specifically LC-S-H for clothing classification. The multilayer approach compartmentalizes the problem into a high (H) layer, multiple midlevel (characteristics (C), selection masks (S)) layers, and a low (L) layer. This approach produces “local” solutions to solve the global classification problem. Experiments demonstrate the ability of the system to efficiently classify each article of clothing into one of seven categories (pants, shorts, shirts, socks, dresses, cloths, or jackets). The results presented in this paper show that, on average, the classification rates improve by +27.47% for three categories (Willimon et al., 2011), +17.90% for four categories, and +10.35% for seven categories over the baseline system, using SVMs (Chang and Lin, 2001).
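The layered decomposition can be illustrated with a toy sketch. The rules, thresholds, and feature names below are hypothetical placeholders, not the paper's learned classifiers (which use SVMs over color, texture, shape, and edge features from 2D and 3D data); the sketch only shows how a low (L) layer feeds midlevel characteristic (C) and selection-mask (S) layers before a high (H) layer makes the final call.

```python
# Toy L-C-S-H pipeline; all rules and thresholds are illustrative assumptions.

def low_layer(article):
    """L: extract raw local features (here: toy elongation and sleeve cues)."""
    return {"elongation": article["h"] / article["w"],
            "has_sleeves": article["sleeves"]}

def characteristics_layer(feats):
    """C: derive midlevel characteristics from the low-level features."""
    return {"long": feats["elongation"] > 1.5,
            "sleeved": feats["has_sleeves"]}

def selection_layer(chars):
    """S: selection mask keeps only categories consistent with the characteristics."""
    candidates = {"pants", "shirts", "socks", "dresses"}
    if chars["sleeved"]:
        candidates &= {"shirts", "dresses"}
    else:
        candidates &= {"pants", "socks"}
    if chars["long"]:
        candidates -= {"socks"}
    return candidates

def high_layer(candidates):
    """H: final global decision over the surviving categories."""
    return sorted(candidates)[0] if candidates else "cloths"

def classify(article):
    return high_layer(selection_layer(characteristics_layer(low_layer(article))))
```

Each layer produces a "local" solution, and only the surviving candidates reach the global decision, mirroring the compartmentalisation described above.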

Author(s):  
V. A. Ganchenko ◽  
E. E. Marushko ◽  
L. P. Podenok ◽  
A. V. Inyutin

This article describes the evaluation of the information content of metal object surfaces for the classification of fractures using 2D and 3D data. As parameters, Haralick textural characteristics and local binary patterns of pixels are considered for 2D images, and macrogeometric descriptors are considered for metal objects digitized by a 3D scanner. The analysis was carried out on the basis of information-content estimation, to select the features most suitable for solving the problem of metal fracture classification. The results will be used to develop methods for complex forensic examination of complex polygonal surfaces of solid objects in an automated system for analyzing digital images.
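Of the 2D features mentioned, local binary patterns have a compact textbook definition; the following is a minimal NumPy sketch of the basic 8-neighbour LBP operator. The article's exact variant (radius, sampling, uniformity handling) is not specified here, so this is a generic illustration.

```python
import numpy as np

def lbp_codes(img):
    """Basic 8-neighbour local binary pattern codes for interior pixels.

    Each neighbour >= centre contributes one bit, clockwise from top-left.
    """
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]  # interior (centre) pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code
```

A histogram of these codes over an image region is the texture descriptor typically fed to a classifier.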


Author(s):  
Malcolm J. Beynon

This chapter investigates the effectiveness of a number of objective functions used in conjunction with a novel technique to optimise the classification of objects based on a number of characteristic values, which may or may not be missing. The classification and ranking belief simplex (CaRBS) technique is based on Dempster-Shafer theory and, hence, operates in the presence of ignorance. The objective functions considered minimise the level of ambiguity and/or ignorance in the classification of companies to being either failed or not-failed. Further results are found when an incomplete version of the original data set is considered. The findings in this chapter demonstrate how techniques such as CaRBS, which operate in an uncertain reasoning based environment, offer a novel approach to object classification problem solving.
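CaRBS builds on Dempster-Shafer theory, whose core operation is Dempster's rule of combination. A minimal sketch over the frame {failed, not-failed} follows; the mass values in the usage test are illustrative, and CaRBS itself involves considerably more machinery (trigonometric mass construction, simplex representation) than shown here.

```python
# Dempster's rule of combination over the frame {failed, not_failed}.
F, N = "failed", "not_failed"
THETA = frozenset({F, N})  # full frame = ignorance

def combine(m1, m2):
    """Combine two mass functions (dicts: frozenset -> mass) by Dempster's rule."""
    out = {}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                out[inter] = out.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb  # mass assigned to the empty set
    k = 1.0 - conflict  # normalise away the conflicting mass
    return {s: w / k for s, w in out.items()}
```

Mass left on the full frame THETA is exactly the residual ignorance the chapter's objective functions seek to minimise.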


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Bayu Adhi Nugroho

A common problem found in real-world medical image classification is the inherent imbalance of positive and negative patterns in the dataset, where positive patterns are usually rare. Moreover, in multi-class classification with a neural network, a training pattern is treated as positive at one output node and as negative at all the remaining output nodes. In this paper, the weights of a training pattern in the loss function are designed based not only on the number of training patterns in its class but also on the different nodes, one of which treats the pattern as positive while the others treat it as negative. We propose a combined approach: a weight-calculation algorithm for deep network training, together with training optimization from a state-of-the-art deep network architecture, for the thorax disease classification problem. Experimental results on the Chest X-Ray image dataset demonstrate that the new weighting scheme improves classification performance, and that the training optimization from EfficientNet improves it further. We compare the combined method against several results from previous studies of thorax disease classification to provide a fair comparison with the proposed method.
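The per-node weighting idea can be sketched in NumPy. The inverse-frequency formula used here (w_pos = n / (2 * n_pos) per output node, and analogously for negatives) is an illustrative assumption; the paper's actual weight-calculation algorithm may differ.

```python
import numpy as np

# Hedged sketch: per-node class weighting for multi-label binary cross-entropy.
# The inverse-frequency scheme below is an assumption, not the paper's formula.

def node_weights(labels):
    """labels: (n_samples, n_classes) binary matrix -> per-node (pos, neg) weights."""
    labels = np.asarray(labels, dtype=float)
    n = labels.shape[0]
    n_pos = labels.sum(axis=0)        # positives at each output node
    n_neg = n - n_pos                 # negatives at each output node
    return n / (2.0 * n_pos), n / (2.0 * n_neg)

def weighted_bce(labels, probs):
    """Mean binary cross-entropy with per-node positive/negative weights."""
    labels = np.asarray(labels, dtype=float)
    probs = np.asarray(probs, dtype=float)
    w_pos, w_neg = node_weights(labels)
    eps = 1e-12  # numerical guard for log(0)
    loss = -(w_pos * labels * np.log(probs + eps)
             + w_neg * (1 - labels) * np.log(1 - probs + eps))
    return loss.mean()
```

Rare positive classes thus receive a large w_pos at their node, counteracting the imbalance described above.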


2014 ◽  
Vol 13 (12) ◽  
pp. 888-888
Author(s):  
Sarah Crunkhorn

Electronics ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 495
Author(s):  
Imayanmosha Wahlang ◽  
Arnab Kumar Maji ◽  
Goutam Saha ◽  
Prasun Chakrabarti ◽  
Michal Jasinski ◽  
...  

This article experiments with deep-learning methodologies in echocardiography (echo), a promising and vigorously researched imaging technique. The paper involves two different kinds of classification in echo. First, classification into normal (absence of abnormalities) or abnormal (presence of abnormalities) is performed using 2D echo images, 3D Doppler images, and videographic images. Second, different types of regurgitation, namely Mitral Regurgitation (MR), Aortic Regurgitation (AR), Tricuspid Regurgitation (TR), and a combination of the three, are classified using videographic echo images. Two deep-learning methodologies are used for these purposes: a Recurrent Neural Network (RNN) based methodology (Long Short-Term Memory (LSTM)) and an autoencoder-based methodology (Variational AutoEncoder (VAE)). The use of videographic images distinguishes this work from existing work using SVMs (Support Vector Machines), and the application of deep-learning methodologies here is among the first in this particular field. It was found that the deep-learning methodologies perform better than the SVM methodology for normal-versus-abnormal classification. Overall, the VAE performs better on 2D and 3D Doppler images (static images), while the LSTM performs better on videographic images.
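For reference, the recurrent building block behind the LSTM methodology can be written in a few lines of NumPy. This is the standard LSTM cell update only; the authors' actual network (layer sizes, stacking, training procedure) is not reproduced here.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One standard LSTM cell step; gates ordered (input, forget, cell, output).

    x: input vector, h/c: previous hidden and cell state,
    W: (4H, D) input weights, U: (4H, H) recurrent weights, b: (4H,) bias.
    """
    z = W @ x + U @ h + b
    H = h.shape[0]
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    i = sigmoid(z[:H])          # input gate
    f = sigmoid(z[H:2 * H])     # forget gate
    g = np.tanh(z[2 * H:3 * H]) # candidate cell state
    o = sigmoid(z[3 * H:])      # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new
```

Running this step over the frames of a videographic echo clip is what lets the LSTM exploit temporal structure that static-image models cannot.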


Author(s):  
David Lewis-Smith ◽  
Shiva Ganesan ◽  
Peter D. Galer ◽  
Katherine L. Helbig ◽  
Sarah E. McKeown ◽  
...  

While genetic studies of epilepsies can be performed in thousands of individuals, phenotyping remains a manual, non-scalable task. A particular challenge is capturing the evolution of complex phenotypes with age. Here, we present a novel approach, applying phenotypic similarity analysis to a total of 3251 patient-years of longitudinal electronic medical record data from a previously reported cohort of 658 individuals with genetic epilepsies. After mapping clinical data to the Human Phenotype Ontology, we determined the phenotypic similarity of individuals sharing each genetic etiology within each 3-month age interval from birth up to a maximum age of 25 years. In 140 of the 600 (23%) combinations of the 27 genes and 3-month age intervals with sufficient data for calculation, phenotypic similarity was significantly higher than expected by chance. 11 of the 27 genetic etiologies had significant overall phenotypic similarity trajectories. These do not simply reflect strong statistical associations with single phenotypic features but appear to emerge from complex clinical constellations of features that may not be strongly associated individually. As an attempt to reconstruct the cognitive framework of syndrome recognition in clinical practice, longitudinal phenotypic similarity analysis extends the traditional phenotyping approach by utilizing data from electronic medical records at a scale far beyond the capabilities of manual phenotyping. Delineating how the phenotypic homogeneity of genetic epilepsies varies with age could improve the phenotypic classification of these disorders, the accuracy of prognostic counseling, and, by providing historical control data, the design and interpretation of precision clinical trials in rare diseases.
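The per-age-bin similarity computation can be sketched simply. Plain Jaccard overlap of term sets, used below, is a simplified stand-in for the ontology-aware semantic similarity the study actually computes over HPO terms, and the HPO codes in the usage test are arbitrary examples.

```python
from itertools import combinations

def jaccard(a, b):
    """Jaccard overlap of two sets of phenotype terms."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def cohort_similarity(term_sets):
    """Mean pairwise similarity across individuals sharing one genetic etiology.

    term_sets: one set of HPO term codes per individual, restricted to a
    single age bin. A high value relative to randomly drawn individuals
    indicates phenotypic homogeneity in that bin.
    """
    pairs = list(combinations(term_sets, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)
```

Comparing this statistic against cohorts resampled at random is one way to obtain the "higher than expected by chance" test described above.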


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Sakthi Kumar Arul Prakash ◽  
Conrad Tucker

This work investigates the ability to classify misinformation in online social media networks in a manner that avoids the need for ground-truth labels. Rather than approaching the classification problem as a task for humans or machine learning algorithms, this work leverages user–user and user–media (i.e., media likes) interactions to infer the type of information (fake vs. authentic) being spread, without needing to know the actual details of the information itself. To study the inception and evolution of user–user and user–media interactions over time, we create an experimental platform that mimics the functionality of real-world social media networks. We develop a graphical model that considers the evolution of this network topology to model the uncertainty (entropy) propagation when fake and authentic media disseminate across the network. The creation of a real-world social media network enables a wide range of hypotheses to be tested pertaining to users, their interactions with other users, and their interactions with media content. The discovery that the entropy of user–user and user–media interactions approximates fake and authentic media likes enables us to classify fake media in an unsupervised manner.
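The entropy quantity at the heart of the model is ordinary Shannon entropy over interaction counts; a minimal sketch follows. How an entropy value maps to a fake/authentic decision (e.g., a threshold) is an illustrative assumption here, since the paper derives that mapping from its graphical model.

```python
import math

def shannon_entropy(counts):
    """Shannon entropy (bits) of an interaction-count distribution.

    counts: e.g. how many likes a media item drew from each user community.
    Uniform spread gives maximal entropy; concentration gives low entropy.
    """
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)
```

Media items whose interaction entropy trajectory tracks the "fake" regime of the model can then be flagged without any ground-truth labels.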


Aerospace ◽  
2021 ◽  
Vol 8 (3) ◽  
pp. 79
Author(s):  
Carolyn J. Swinney ◽  
John C. Woods

Unmanned Aerial Vehicles (UAVs) undoubtedly pose many security challenges. We need only look to the December 2018 Gatwick Airport incident for an example of the disruption UAVs can cause. In total, 1000 flights were grounded for 36 h over the Christmas period, at an estimated cost of over 50 million pounds. In this paper, we introduce a novel approach that treats UAV detection as an imagery classification problem. We consider the signal representations Power Spectral Density (PSD), spectrogram, histogram, and raw IQ constellation as graphical images presented to a deep Convolutional Neural Network (CNN), ResNet50, for feature extraction. With the network pre-trained on ImageNet, transfer learning is utilised to mitigate the requirement for a large signal dataset. We evaluate performance with a Logistic Regression classifier. Three popular UAVs are classified in different modes (switched on, hovering, flying, and flying with video), together with a no-UAV-present class, creating a total of 10 classes. Our results, validated with 5-fold cross-validation and an independent dataset, show the PSD representation to produce over 91% accuracy across the 10 classes. Our paper treats UAV detection as an imagery classification problem by presenting signal representations as images to a ResNet50, utilising the benefits of transfer learning and outperforming previous work in the field.
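The best-performing representation, the PSD, is a standard spectral estimate; the sketch below computes a Welch-style averaged periodogram (non-overlapping segments, rectangular window) from complex IQ samples. The FFT size and windowing are assumptions for illustration, as the paper's exact parameters are not reproduced here, and rendering the result as an image for ResNet50 is left out.

```python
import numpy as np

def psd_db(iq, nfft=256):
    """Welch-style averaged periodogram of complex IQ samples, in dB.

    Splits the signal into non-overlapping nfft-sample segments, averages
    their magnitude-squared spectra, and centres DC via fftshift. The
    resulting curve can be rendered as the image fed to the CNN.
    """
    iq = np.asarray(iq)
    n_seg = len(iq) // nfft
    segs = iq[:n_seg * nfft].reshape(n_seg, nfft)
    spectra = np.abs(np.fft.fftshift(np.fft.fft(segs, axis=1), axes=1)) ** 2
    psd = spectra.mean(axis=0) / nfft
    return 10 * np.log10(psd + 1e-12)  # guard against log(0)
```

A UAV control or video downlink shows up as characteristic peaks in this curve, which is what the downstream CNN-plus-Logistic-Regression pipeline discriminates on.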

