JOINT DIRECTION AND PROXIMITY CLASSIFICATION OF OVERLAPPING SOUND EVENTS FROM BINAURAL AUDIO

2021 ◽  
Author(s):  
Daniel Aleksander Krause ◽  
Archontis Politis ◽  
Annamaria Mesaros

Sound source proximity and distance estimation are of great interest in many practical applications, since they provide significant information for acoustic scene analysis. As the two tasks share complementary qualities, efficient interaction between them is crucial for a complete picture of the aural environment. In this paper, we investigate several ways of performing joint proximity and direction estimation from binaural recordings, both defined as coarse classification problems based on Deep Neural Networks (DNNs). Considering the limitations of binaural audio, we propose two methods of splitting the sphere into angular areas in order to obtain a set of directional classes. For each method we study different model types to acquire information about the direction-of-arrival (DoA). Finally, we propose various ways of combining the proximity and direction estimation problems into a joint task that provides temporal information about the onsets and offsets of the appearing sources. Experiments are performed on a synthetic reverberant binaural dataset consisting of up to two overlapping sound events.
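
As a hedged illustration of the directional-class formulation, the sketch below maps a direction-of-arrival to a coarse class by splitting the sphere into equiangular azimuth sectors and elevation bands. The split parameters (n_az, n_el) and the function itself are hypothetical; the paper's two actual splitting methods are not reproduced here.

```python
import numpy as np

def direction_class(azimuth_deg, elevation_deg, n_az=8, n_el=3):
    """Map (azimuth, elevation) in degrees to a coarse directional class
    by splitting the sphere into n_az azimuth sectors and n_el elevation
    bands (a hypothetical equiangular split, not the paper's method)."""
    az = np.mod(azimuth_deg, 360.0)
    az_idx = int(az // (360.0 / n_az))
    el = np.clip(elevation_deg, -90.0, 89.999)  # keep the pole in the top band
    el_idx = int((el + 90.0) // (180.0 / n_el))
    return el_idx * n_az + az_idx

# A source at azimuth 100 deg, elevation 10 deg falls into one of 24 classes.
print(direction_class(100.0, 10.0))  # -> 10
```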

2021 ◽  
Vol 13 (9) ◽  
pp. 1623
Author(s):  
João E. Batista ◽  
Ana I. R. Cabral ◽  
Maria J. P. Vasconcelos ◽  
Leonardo Vanneschi ◽  
Sara Silva

Genetic programming (GP) is a powerful machine learning (ML) algorithm that can produce readable white-box models. Although successfully used to solve an array of problems in different scientific areas, GP is still not well known in the field of remote sensing. The M3GP algorithm, a variant of the standard GP algorithm, performs feature construction by evolving hyperfeatures from the original ones. In this work, we use the M3GP algorithm on several sets of satellite images over different countries to create hyperfeatures from satellite bands and improve the classification of land cover types. We add the evolved hyperfeatures to the reference datasets and observe a significant improvement in the performance of three state-of-the-art ML algorithms (decision trees, random forests, and XGBoost) on multiclass classifications, and no significant effect on the binary classifications. We show that adding the M3GP hyperfeatures to the reference datasets yields better results than adding the well-known spectral indices NDVI, NDWI, and NBR. We also compare the performance of the M3GP hyperfeatures on the binary classification problems with that of features created by other feature construction methods such as FFX and EFS.
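
For reference, the baseline spectral indices the abstract compares against are simple band ratios. A minimal sketch, assuming float reflectance arrays and the McFeeters (green-NIR) definition of NDWI:

```python
import numpy as np

def spectral_indices(green, red, nir, swir):
    """Standard spectral indices used as baseline features:
    NDVI (vegetation), NDWI (water, McFeeters), NBR (burned area)."""
    eps = 1e-9  # guard against division by zero
    ndvi = (nir - red) / (nir + red + eps)
    ndwi = (green - nir) / (green + nir + eps)
    nbr = (nir - swir) / (nir + swir + eps)
    return ndvi, ndwi, nbr
```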


Algorithms ◽  
2021 ◽  
Vol 14 (5) ◽  
pp. 134
Author(s):  
Loai Abdallah ◽  
Murad Badarna ◽  
Waleed Khalifa ◽  
Malik Yousef

In the computational biology community, many problems are treated as multi-one-class classification problems; examples include the classification of multiple tumor types, protein fold recognition, and the molecular classification of multiple cancer types. In all of these cases, appropriately characterized real-world negative cases or outliers are impractical to obtain, and the positive cases might consist of different clusters, which in turn might lead to accuracy degradation. In this paper we present a novel algorithm named MultiKOC (multi-one-class classifiers based on K-means) to deal with this problem. The main idea is to run a clustering algorithm over the positive samples to capture the hidden sub-structure of the given positive data, and then build a one-class classifier for each cluster's examples separately; in other words, to train an OC classifier on each piece of sub-data. For a given new sample, the generated classifiers are applied: if the sample is rejected by all of them, it is considered negative; otherwise it is positive. The results of MultiKOC are compared with the traditional one-class, multi-one-class, ensemble one-class, and two-class methods, yielding a significant improvement over the one-class methods and performance on par with the two-class methods.
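
The MultiKOC idea maps naturally onto off-the-shelf components. Below is a minimal sketch, assuming scikit-learn, K-means for the clustering step, and a one-class SVM as the OC classifier; the paper's exact component choices may differ.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import OneClassSVM

class MultiOneClass:
    """Sketch of the MultiKOC idea: cluster the positive samples, train
    one OC classifier per cluster, and accept a new sample if any
    per-cluster classifier accepts it."""

    def __init__(self, n_clusters=3):
        self.kmeans = KMeans(n_clusters=n_clusters, n_init=10)
        self.classifiers = []

    def fit(self, X_pos):
        # Capture the hidden sub-structure of the positive data.
        labels = self.kmeans.fit_predict(X_pos)
        for k in range(self.kmeans.n_clusters):
            oc = OneClassSVM(kernel="rbf", nu=0.1).fit(X_pos[labels == k])
            self.classifiers.append(oc)
        return self

    def predict(self, X):
        # +1 if accepted by at least one classifier, else -1 (negative).
        votes = np.stack([oc.predict(X) for oc in self.classifiers])
        return np.where((votes == 1).any(axis=0), 1, -1)
```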


1997 ◽  
Vol 08 (01) ◽  
pp. 15-41 ◽  
Author(s):  
Carl H. Smith ◽  
Rolf Wiehagen ◽  
Thomas Zeugmann

The present paper studies a particular collection of classification problems, namely the classification of recursive predicates and languages, in order to arrive at a deeper understanding of what classification really is. In particular, the classification of predicates and languages is compared with the classification of arbitrary recursive functions and with their learnability. The investigation is refined by introducing classification within a resource bound, resulting in a new hierarchy. Furthermore, a formalization of multi-classification is presented and completely characterized in terms of standard classification. Additionally, consistent classification is introduced and compared with both resource-bounded classification and standard classification. Finally, the classification of families of languages that have attracted attention in learning theory is also studied.


2017 ◽  
Vol 20 (K4) ◽  
pp. 30-38
Author(s):  
Tung Son Pham ◽  
Huy Minh Truong ◽  
Tuan Ba Pham

In recent years, Artificial Intelligence (AI) has become an emerging subject, recognized as the flagship of the Fourth Industrial Revolution. AI is steadily growing and becoming vital in our daily life. In particular, the Self-Organizing Map (SOM), one of the major branches of AI, is a useful tool for clustering data and has been applied successfully and widely in various aspects of human life, such as psychology, economics, and medicine, and in technical fields like mechanical engineering, construction, and geology. In this paper, the primary purpose of the authors is to introduce the SOM algorithm and its practical applications in geology and construction. The results are the classification of rock facies versus depth in geology and the clustering of two sets of construction price indices and building material cost indices.
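
For readers unfamiliar with SOM, a minimal numpy sketch of its core loop follows: find the best-matching unit for each input, then pull that unit and its grid neighbours toward the input with a decaying learning rate and neighbourhood radius. Grid size, epochs, and the decay schedule here are illustrative, not the paper's settings.

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal SOM sketch: best-matching-unit search plus a Gaussian
    neighbourhood update with linearly decaying learning rate."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    n_steps, step = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            t = step / n_steps
            lr, sigma = lr0 * (1 - t), sigma0 * (1 - t) + 1e-3
            # Best-matching unit: the node whose weights are closest to x.
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), (h, w))
            # Gaussian neighbourhood pulls nearby nodes toward x.
            g = np.exp(-np.sum((coords - bmu) ** 2, axis=-1) / (2 * sigma**2))
            weights += lr * g[..., None] * (x - weights)
            step += 1
    return weights
```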


2020 ◽  
pp. 103-111
Author(s):  
Emad Abulrahman Mohammed Salih Al-Heety

Earthquakes occur on existing faults and create new ones; they occur on normal, reverse, and strike-slip faults. The aim of this work is to suggest a new unified classification of shallow-depth earthquakes based on faulting style, and to characterize each class. The characterization criteria include the maximum magnitude, focal depth, b-value, return period, and the relations between magnitude, focal depth, and dip of the fault plane. The Global Centroid Moment Tensor (GCMT) catalog is the source of the data used; it covers the period from January 1976 to December 2017. We selected only shallow (depth less than 70 km), pure normal, strike-slip, and reverse earthquakes (magnitude ≥ 5) and excluded oblique earthquakes. The majority of normal and strike-slip earthquakes occurred in the upper crust, while reverse earthquakes occurred throughout the thickness of the crust. The main trend of the derived b-values for the three classes was b_normal > b_strike-slip > b_reverse. The mean return period of the normal earthquakes was longer than that of the strike-slip earthquakes, while the reverse earthquakes had the shortest period. The results also report a relationship between magnitude and focal depth for the normal earthquakes. A significant negative correlation between magnitude and dip class is reported for the normal and reverse earthquakes. Negative and positive correlations between focal depth and dip class were recorded for normal and reverse earthquakes, respectively. The suggested classification of earthquakes provides significant information for understanding seismicity, seismotectonics, and seismic hazard analysis.
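
The b-value comparison rests on the Gutenberg-Richter relation. A minimal sketch of the standard maximum-likelihood estimator in its continuous-magnitude form (Aki, 1965), assuming a completeness magnitude equal to the abstract's cutoff of 5:

```python
import numpy as np

def b_value(magnitudes, m_c=5.0):
    """Maximum-likelihood b-value (Aki, 1965, continuous form):
    b = log10(e) / (mean(M) - Mc), for events with M >= Mc."""
    m = np.asarray(magnitudes, dtype=float)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - m_c)

# With per-faulting-style catalogs, the abstract's ordering would read:
# b_value(m_normal) > b_value(m_strike_slip) > b_value(m_reverse)
```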


2021 ◽  
Vol 11 (22) ◽  
pp. 10713
Author(s):  
Dong-Gyu Lee

Autonomous driving is a safety-critical application that requires a high-level understanding of computer vision with real-time inference. In this study, we focus on computational efficiency, an important factor for practical applications, by improving the running time and performing multiple tasks simultaneously. We propose a fast and accurate multi-task learning-based architecture for joint segmentation of the drivable area and lane lines and classification of the scene. An encoder-decoder architecture efficiently handles input frames through a shared representation. A comprehensive understanding of the driving environment is improved by the generalization and regularization that come from the different tasks. The proposed method learns end-to-end through multi-task learning on the very challenging Berkeley DeepDrive dataset and shows its robustness on three tasks in autonomous driving. Experimental results show that the proposed method outperforms other multi-task learning approaches in both speed and accuracy. The method runs at over 93.81 fps at inference, enabling execution in real time.
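
A hedged sketch of the shared-encoder, multi-head pattern described above, in PyTorch; the layer sizes and heads are illustrative stand-ins for the paper's actual backbone and decoders.

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Illustrative multi-task network: one shared encoder, two
    segmentation decoders (drivable area, lane lines), and one
    image-level scene classification head."""

    def __init__(self, n_scene_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(  # shared representation
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        def seg_head():  # upsamples back to input resolution, per-pixel logit
            return nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
            )
        self.drivable = seg_head()
        self.lane = seg_head()
        self.scene = nn.Sequential(  # global pooling for the scene class
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_scene_classes)
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.drivable(z), self.lane(z), self.scene(z)

x = torch.randn(1, 3, 128, 128)
drivable, lane, scene = MultiTaskNet()(x)  # (1,1,128,128), (1,1,128,128), (1,4)
```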


Author(s):  
Rehan Ullah ◽  
Abdullah Khan ◽  
Syed Bakhtawar Shah Abid ◽  
Siyab Khan ◽  
Said Khalid Shah ◽  
...  

DNA sequence classification is one of the main research activities in bioinformatics, and many researchers have worked and are working on it. In bioinformatics, machine learning can be applied to the analysis of genomic sequences, for tasks such as the classification and comparison of DNA sequences. This article proposes a new hybrid meta-heuristic model called Crow-ENN for the classification of leukemia DNA sequences. The proposed algorithm combines the Crow Search Algorithm (CSA) and the Elman Neural Network (ENN). DNA sequences of leukemia are used to train and test the proposed hybrid model. Five comparable models, i.e., Crow-ANN, Crow-BPNN, ANN, BPNN, and ENN, are also trained and tested on these DNA sequences. The performance of the models is evaluated in terms of accuracy and MSE. The overall simulation results show that the proposed model outperforms all five comparable models, attaining the highest accuracy of over 99%. Because it achieves such promising results, this model may also be used for other classification problems in different fields.
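
A minimal sketch of the Crow Search Algorithm half of the hybrid, following Askarzadeh's (2016) position-update rule and minimizing a generic fitness function. In Crow-ENN the position vector would encode the Elman network's weights; that coupling is not reproduced here.

```python
import numpy as np

def crow_search(fitness, dim, n_crows=20, iters=100, ap=0.1, fl=2.0, seed=0):
    """Crow Search Algorithm sketch: each crow follows a random crow's
    memorized hiding place, unless that crow is 'aware' (probability ap),
    in which case the follower jumps to a random position."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_crows, dim))  # current positions
    mem = x.copy()                          # each crow's best-known position
    mem_fit = np.array([fitness(p) for p in mem])
    for _ in range(iters):
        for i in range(n_crows):
            j = rng.integers(n_crows)       # crow i follows a random crow j
            if rng.random() >= ap:          # j unaware: move toward j's memory
                x[i] = x[i] + rng.random() * fl * (mem[j] - x[i])
            else:                           # j aware: i goes somewhere random
                x[i] = rng.uniform(-1, 1, dim)
            f = fitness(x[i])
            if f < mem_fit[i]:              # update memory on improvement
                mem[i], mem_fit[i] = x[i].copy(), f
    return mem[np.argmin(mem_fit)]

# Toy usage: minimize the sphere function in 5 dimensions.
best = crow_search(lambda w: float(np.sum(w**2)), dim=5)
```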


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Guobin Chen ◽  
Xianzhong Xie ◽  
Shijin Li

Screening and classification of characteristic genes is a complex classification problem, and gene expression profiles are high-dimensional. Selecting an effective gene screening algorithm is the main problem to be solved when analyzing gene chips. Here, a combination of KNN, SVM, and SVM-RFE is selected for screening in complex classification problems, providing a new method for solving them. In gene chip preprocessing, genes are filtered by logFC and P value thresholds in the gene expression matrix to obtain different gene features, and the SVM-RFE algorithm is then used to rank and screen genes. Firstly, the characteristics of the gene chips are analyzed and the numbers of probes and genes are counted; clustering analysis of the samples and PCA-based classification analysis of the different samples are carried out. Secondly, the basic SVM and KNN algorithms are tested, and important indices such as their error rate and accuracy are measured to obtain the optimal parameters. Finally, the accuracy, precision, recall, and F1 scores of several complex classification algorithms are compared through the complex classification of SVM, KNN, KNN-PCA, SVM-PCA, SVM-RFE-SVM, and SVM-RFE-KNN at P = 0.01, 0.05, and 0.001. SVM-RFE-SVM has the best classification performance and can be used as a gene chip classification algorithm to analyze the characteristics of genes.
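
The best-performing SVM-RFE-SVM combination can be sketched directly with scikit-learn: a linear SVM ranks genes by weight, RFE recursively drops the weakest, and a final SVM classifies on the surviving genes. The feature count and step size below are illustrative, not the paper's settings.

```python
from sklearn.feature_selection import RFE
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# SVM-RFE for gene ranking/selection, followed by an SVM classifier.
svm_rfe_svm = Pipeline([
    ("rfe", RFE(SVC(kernel="linear"), n_features_to_select=50, step=0.1)),
    ("clf", SVC(kernel="linear")),
])
# Usage: svm_rfe_svm.fit(X_train, y_train); svm_rfe_svm.score(X_test, y_test)
```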


Author(s):  
Malcolm J. Beynon

Rough set theory (RST), since its introduction in Pawlak (1982), continues to develop as an effective tool in classification problems and decision support. In the majority of applications using RST-based methodologies, 'if … then …' decision rules are constructed and used to describe the results of an analysis. The variety of applications in management and decision making using RST recently includes discovering the operating rules of a Sicilian irrigation purpose reservoir (Barbagallo, Consoli, Pappalardo, Greco, & Zimbone, 2006), feature selection in customer relationship management (Tseng & Huang, 2007), and decisions that insurance companies make to satisfy customers' needs (Shyng, Wang, Tzeng, & Wu, 2007).

As a nascent symbolic machine learning technique, the popularity of RST is a direct consequence of its set-theoretical operational processes, which mitigate inhibiting issues associated with traditional techniques, such as within-group probability distribution assumptions (Beynon & Peel, 2001). Instead, the rudiments of the original RST are based on an indiscernibility relation, whereby objects are grouped into equivalence classes and inference is taken from these groups. Characteristics like this mean that decision support is built upon the underlying RST philosophy of "let the data speak for itself" (Düntsch & Gediga, 1997). Recently, RST has been viewed as being of fundamental importance in artificial intelligence and cognitive sciences, including decision analysis and decision support systems (Tseng & Huang, 2007).

One of the first developments of RST was the variable precision rough sets model (VPRSβ), which allows a level of misclassification to exist in the classification of objects, resulting in probabilistic rules (see Ziarko, 1993; Beynon, 2001; Li & Wang, 2004). VPRSβ has been applied as a potential decision support system with the UK Monopolies and Mergers Commission (Beynon & Driffield, 2005), in predicting bank credit ratings (Griffiths & Beynon, 2005), and in the diffusion of Medicaid home care programs (Kitchener, Beynon, & Harrington, 2004). Further developments of RST include extended variable precision rough sets (VPRSl,u), which infers asymmetric bounds on the possible classification and misclassification of objects (Katzberg & Ziarko, 1996); dominance-based rough sets, which are based around a dominance relation (Greco, Matarazzo, & Slowinski, 2004); fuzzy rough sets, which allow graded membership of objects to the constructed sets (Greco, Inuiguchi, & Slowinski, 2006); and the probabilistic Bayesian rough sets model, which considers an appropriate certainty gain function (Ziarko, 2005). The diversity of work on RST can be viewed in the annual volumes of the Transactions on Rough Sets (most recent year 2006) and in the annual conferences dedicated to RST and its developments (see, for example, RSCTC, 2004).

In this article, the theory underlying VPRSl,u is described, with its special case VPRSβ used in an example analysis. The utilisation of VPRSl,u and VPRSβ is without loss of generality with respect to other developments such as those referenced; their relative simplicity allows the non-proficient reader the opportunity to fully follow the details presented.
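
To make the VPRSβ mechanics concrete, here is a minimal sketch: objects are grouped into equivalence classes of the indiscernibility relation on the condition attributes, and a class enters the β-lower approximation of a decision class when at least a fraction β of its members carry that decision value. The table and attribute names are invented for illustration.

```python
from collections import defaultdict

def vprs_beta_lower(objects, condition_attrs, decision, beta=0.8):
    """VPRS_beta lower approximation sketch: admit an equivalence class
    of the indiscernibility relation if at least a fraction beta of its
    members share the given decision value."""
    classes = defaultdict(list)
    for obj in objects:  # group by values on the condition attributes
        key = tuple(obj[a] for a in condition_attrs)
        classes[key].append(obj)
    lower = []
    for members in classes.values():
        hits = sum(1 for m in members if m["decision"] == decision)
        if hits / len(members) >= beta:  # tolerate misclassification up to 1 - beta
            lower.extend(members)
    return lower

# Toy table: with beta = 0.8 only the (a=1, b=1) class (2 of 2 'yes') is admitted.
table = [
    {"a": 1, "b": 1, "decision": "yes"},
    {"a": 1, "b": 1, "decision": "yes"},
    {"a": 1, "b": 0, "decision": "yes"},
    {"a": 1, "b": 0, "decision": "no"},
]
print(len(vprs_beta_lower(table, ["a", "b"], "yes")))  # -> 2
```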

