Automatic Vehicle Classification in Systems with Single Inductive Loop Detector

2014 ◽  
Vol 21 (4) ◽  
pp. 619-630 ◽  
Author(s):  
J. Gajda ◽  
M. Mielczarek

Abstract: The work proposes a new method for vehicle classification that treats vehicles uniformly at the stage of defining the vehicle classes, during the classification itself, and in the assessment of its correctness. The sole source of information about a vehicle is its magnetic signature, normalised with respect to amplitude and duration. The proposed method allows a large number of classes (even several thousand) to be defined, each comprising vehicles whose magnetic signatures are similar according to the assumed criterion, with a precisely determined degree of similarity. The decision about the degree of similarity and, consequently, about the number of classes is made by the user depending on the purpose of the classification. An additional advantage of the proposed solution is that vehicle classes are defined automatically for the degree of similarity between signatures specified by the user. Thus the human factor, which plays a significant role in currently used methods, is removed from the class-definition stage of the classification process. The efficiency of the proposed approach to the vehicle classification problem was demonstrated on a large set of experimental data.
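A minimal sketch of the kind of pipeline the abstract describes, assuming a correlation coefficient as the similarity criterion and a greedy grouping rule; the paper's actual criterion and class-building procedure are not reproduced here, and all names are illustrative:

```python
import numpy as np

def normalise_signature(sig, n_samples=128):
    """Normalise a raw magnetic signature in amplitude and duration:
    resample to a fixed length and scale to unit peak amplitude."""
    sig = np.asarray(sig, dtype=float)
    t_old = np.linspace(0.0, 1.0, len(sig))   # normalised duration axis
    t_new = np.linspace(0.0, 1.0, n_samples)
    resampled = np.interp(t_new, t_old, sig)
    peak = np.max(np.abs(resampled))
    return resampled / peak if peak > 0 else resampled

def define_classes(signatures, similarity_threshold=0.95):
    """Greedy grouping: a signature joins the first-best existing class whose
    prototype it matches at least `similarity_threshold` (here: correlation);
    otherwise it starts a new class."""
    prototypes, labels = [], []
    for sig in signatures:
        best, best_sim = None, -1.0
        for k, proto in enumerate(prototypes):
            sim = np.corrcoef(sig, proto)[0, 1]
            if sim > best_sim:
                best, best_sim = k, sim
        if best is not None and best_sim >= similarity_threshold:
            labels.append(best)
        else:
            prototypes.append(sig)
            labels.append(len(prototypes) - 1)
    return labels, prototypes
```

Raising the threshold produces more, tighter classes; lowering it merges signatures into fewer, broader classes, which mirrors the user-controlled trade-off described in the abstract.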

Author(s):  
Aijun Xue ◽  
Xiaodan Wang

Many real-world applications involve multiclass cost-sensitive learning problems. However, some well-established binary cost-sensitive learning algorithms cannot be extended to multiclass cost-sensitive learning directly. It is therefore meaningful to decompose the complex multiclass cost-sensitive classification problem into a series of binary cost-sensitive classification problems. In this paper we propose an alternative and efficient decomposition framework based on the original error-correcting output codes (ECOC). The main problem in our framework is how to evaluate the binary costs for each binary cost-sensitive base classifier. To solve this problem, we propose to compute the expected misclassification costs starting from the given multiclass cost matrix. Furthermore, general formulations for computing the binary costs are given. Experimental results on several synthetic and UCI datasets show that our method obtains performance comparable to that of state-of-the-art methods.
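As a rough illustration of the decomposition idea, the sketch below derives a pair of binary costs for one ECOC dichotomy as prior-weighted expected misclassification costs taken from the multiclass cost matrix; this is an assumed formulation for illustration, not the paper's exact general formulas:

```python
import numpy as np

def binary_costs(cost_matrix, code_column, class_priors=None):
    """Derive costs for one ECOC dichotomy.

    cost_matrix[i, j]: cost of predicting class j when the truth is class i.
    code_column: +1 / -1 assignment of each class for this binary problem.
    Returns (cost of calling a positive-group example negative,
             cost of calling a negative-group example positive),
    as prior-weighted expected misclassification costs (illustrative choice).
    """
    cost_matrix = np.asarray(cost_matrix, dtype=float)
    code_column = np.asarray(code_column)
    n = len(code_column)
    priors = np.full(n, 1.0 / n) if class_priors is None else np.asarray(class_priors)

    pos = np.where(code_column == 1)[0]
    neg = np.where(code_column == -1)[0]

    # Expected cost of assigning a true positive-group class to the negative group.
    c_pos = np.average(cost_matrix[np.ix_(pos, neg)].mean(axis=1), weights=priors[pos])
    # Expected cost of assigning a true negative-group class to the positive group.
    c_neg = np.average(cost_matrix[np.ix_(neg, pos)].mean(axis=1), weights=priors[neg])
    return c_pos, c_neg
```

Each binary base classifier would then be trained cost-sensitively with its own (c_pos, c_neg) pair, and the ECOC decoding step recombines the binary decisions into a multiclass prediction.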


2019 ◽  
Vol 124 (12) ◽  
pp. 1718-1724 ◽  
Author(s):  
Tobias Opthof

In this article, I show that the distribution of citations to papers published by the top 30 journals in the category Cardiac & Cardiovascular Systems of the Web of Science is extremely skewed. This skewness is to the right, which means that there is a long tail of papers that are cited much more frequently than the other papers of the same journal. The consequence is that there is a large difference between the mean and the median citation of the papers published by these journals. I further found that there are no differences between the citation distributions of the top 4 journals: European Heart Journal, Circulation, Journal of the American College of Cardiology, and Circulation Research. Despite the fact that the journal impact factor (IF) varied between 23.425 for Eur Heart J and 15.211 for Circ Res, with the other 2 journals in between, the median citation of their articles plus reviews (IF Median) was 10 for all 4 journals. Given that their citation distributions were similar, an indicator (IF Median) that reflects this similarity must be superior to the classical journal impact factor, which may indicate a difference that does not exist. It is underscored that the IF Median is substantially lower than the journal impact factor for all 30 journals considered in this article. Finally, the IF Median has the additional advantage that there is no artificial ranking of the 128 journals in the category, but rather an attribution of journals to a limited number of classes with comparable impact.
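A tiny illustration of why the mean (impact-factor-like) and the median (IF Median-like) diverge for right-skewed citation counts, using synthetic lognormal data rather than the journals' real distributions:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic, right-skewed citation counts (lognormal), purely illustrative.
citations = rng.lognormal(mean=2.0, sigma=1.0, size=1000).astype(int)

print("mean  (impact-factor-like):", citations.mean())
print("median (IF Median-like):   ", np.median(citations))
# The long right tail pulls the mean well above the median.
```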


2016 ◽  
Vol 2016 ◽  
pp. 1-10
Author(s):  
Yidong Tang ◽  
Shucai Huang ◽  
Aijun Xue

The sparse representation based classifier (SRC) and its kernel version (KSRC) have been employed for hyperspectral image (HSI) classification. However, the state-of-the-art SRC is often aimed at extended surface objects with linear mixing in smooth scenes and assumes that the number of classes is given. Considering small targets against complex backgrounds, a sparse representation based binary hypothesis (SRBBH) model is established in this paper. In this model, a query pixel is represented in two ways: by the background dictionary and by the union dictionary, respectively. The background dictionary is composed of samples selected from the local dual concentric window centered at the query pixel. Thus, for each pixel the classification issue becomes an adaptive multiclass classification problem, in which only the number of desired classes is required. Furthermore, the kernel method is employed to improve the interclass separability. In kernel space, the coding vector is obtained by the kernel-based orthogonal matching pursuit (KOMP) algorithm. The query pixel can then be labeled by the characteristics of the coding vectors. Instead of directly using the reconstruction residuals, the different impacts that the background dictionary and the union dictionary have on reconstruction are used for validation and classification, which enhances the discrimination and hence improves the performance.
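The sketch below illustrates the binary-hypothesis idea with a plain (linear) orthogonal matching pursuit and a residual-gap score; the paper instead uses the kernel-based KOMP and a criterion built from the coding vectors, so this is only an assumed simplification with illustrative names:

```python
import numpy as np

def omp(D, y, n_nonzero=5):
    """Plain orthogonal matching pursuit (linear stand-in for the
    kernel-based KOMP used in the paper). D has atoms as columns."""
    residual, idx = y.copy(), []
    for _ in range(n_nonzero):
        corr = np.abs(D.T @ residual)
        corr[idx] = 0.0                      # do not reselect chosen atoms
        idx.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x, residual

def binary_hypothesis_score(pixel, D_background, D_target):
    """Compare how well the background dictionary alone and the union
    dictionary (background + target) reconstruct the query pixel.
    A large gap favours the target hypothesis. This residual-based score
    is a simplification of the coding-vector criterion in the paper."""
    D_union = np.hstack([D_background, D_target])
    _, r_bg = omp(D_background, pixel)
    _, r_un = omp(D_union, pixel)
    return np.linalg.norm(r_bg) - np.linalg.norm(r_un)
```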


2013 ◽  
Vol 791-793 ◽  
pp. 1533-1536 ◽  
Author(s):  
Min Chen ◽  
Jian Hua Chen ◽  
Mo Hai Guo

In this paper, context quantization for I-ary sources based on the affinity propagation algorithm is presented. To find the optimal number of classes, the increment of the adaptive code length is used as the similarity measure between two conditional probability distributions, from which the similarity matrix is constructed as the input to the affinity propagation algorithm. After the given number of iterations, the optimal quantizer with the optimal number of classes is obtained, and the adaptive code length is minimized at the same time. The simulations indicate that the proposed algorithm produces better results than minimum conditional entropy context quantization implemented with K-means, at lower computational complexity.
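A hedged sketch of the clustering step, assuming a static entropy-based code-length increment as a stand-in for the adaptive code length, and using scikit-learn's AffinityPropagation with a precomputed similarity matrix; the paper's exact similarity definition is not reproduced:

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def merge_code_length_increase(p, q, n_p, n_q):
    """Increase in ideal (entropy-based) code length when the conditional
    distributions p and q, observed n_p and n_q times, are merged into one
    context. A static stand-in for the adaptive code length in the paper."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    def H(d):
        d = d[d > 0]
        return -np.sum(d * np.log2(d))
    m = (n_p * p + n_q * q) / (n_p + n_q)
    return (n_p + n_q) * H(m) - n_p * H(p) - n_q * H(q)

def quantize_contexts(cond_dists, counts):
    """Similarity = negative code-length increment; affinity propagation
    then selects the number of classes automatically."""
    K = len(cond_dists)
    S = np.zeros((K, K))
    for i in range(K):
        for j in range(K):
            if i != j:
                S[i, j] = -merge_code_length_increase(
                    cond_dists[i], cond_dists[j], counts[i], counts[j])
    ap = AffinityPropagation(affinity="precomputed", random_state=0)
    return ap.fit_predict(S)
```

Contexts whose conditional distributions can be merged at little coding cost end up in the same class, so the number of quantized contexts falls out of the message passing rather than being fixed in advance.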


Author(s):  
Valerian Kwigizile ◽  
Renatus N. Mussa ◽  
Majura Selekwa

The mechanistic–empirical pavement design methodology being developed under NCHRP Project 1–37A will require accurate classification of vehicles to develop the axle load spectra needed as a design input. Scheme F, used by most states to classify vehicles, can be used to develop the required load spectra. Unfortunately, the scheme is difficult to automate and is prone to errors resulting from imprecise demarcation of class thresholds. In this paper, the classification problem is viewed as a pattern recognition problem in which connectionist techniques such as probabilistic neural networks (PNN) can be used to assign vehicles to their correct classes and hence to establish optimum axle spacing thresholds. The PNN was developed, trained, and applied to field data composed of individual vehicles’ axle spacings, number of axles per vehicle, and overall vehicle weight. The PNN reduced the error rate from 9.5% to 6.2% compared with an existing classification algorithm used by the Florida Department of Transportation. The inclusion of overall vehicle weight as a classification variable further reduced the error rate from 6.2% to 3.0%. The promising results from neural networks were used to set up new thresholds that reduce the classification error rate.
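A minimal Parzen-window PNN, sketched under the assumption that each vehicle is described by a fixed-length feature vector of axle spacings, axle count, and gross weight; the architecture, smoothing parameter, and feature layout used in the paper are not reproduced:

```python
import numpy as np

class ProbabilisticNeuralNetwork:
    """Minimal Parzen-window PNN: one pattern unit per training vehicle,
    class score = mean Gaussian kernel response of that class's patterns."""
    def __init__(self, sigma=0.1):
        self.sigma = sigma

    def fit(self, X, y):
        self.X, self.y = np.asarray(X, float), np.asarray(y)
        self.classes_ = np.unique(self.y)
        return self

    def predict(self, X):
        preds = []
        for x in np.asarray(X, float):
            d2 = np.sum((self.X - x) ** 2, axis=1)
            k = np.exp(-d2 / (2 * self.sigma ** 2))
            scores = [k[self.y == c].mean() for c in self.classes_]
            preds.append(self.classes_[int(np.argmax(scores))])
        return np.array(preds)

# Hypothetical feature vector per vehicle: [axle spacings (padded), n_axles, gross weight]
# pnn = ProbabilisticNeuralNetwork(sigma=0.2).fit(X_train, scheme_f_class_train)
# error_rate = np.mean(pnn.predict(X_test) != scheme_f_class_test)
```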


Author(s):  
Arup Kumar Bhattacharjee ◽  
Soumen Mukherjee ◽  
Arindam Mondal ◽  
Dipankar Majumdar

In the last two to three decades, the use of credit cards has increased rapidly due to fast economic growth in developing countries and worldwide globalization. Financial institutions such as banks are facing a very tough time due to the fast-rising number of credit card loan payment defaulters. Banking institutions are constantly searching for mechanisms or methods to identify possible defaulters among the whole set of credit card users. In this chapter, the most important features of a credit card holder are identified from a considerably large set of features using metaheuristic algorithms. In this work, a standard data set of credit card payments in Taiwan, archived in the UCI repository, is used. Metaheuristic algorithms such as particle swarm optimization, ant colony optimization, and simulated annealing are used to identify significant sets of features from the given data set. A support vector machine classifier is then used to assign the class in this two-class (loan defaulter or not) problem.
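As a sketch of one of the metaheuristics mentioned, the code below runs a simulated-annealing search over feature subsets with cross-validated SVM accuracy as the objective; the actual PSO/ACO/SA settings and the preprocessing of the UCI Taiwan data set are assumptions, not the chapter's configuration:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def anneal_feature_selection(X, y, n_iter=200, temp0=1.0, seed=0):
    """Simulated-annealing sketch of the feature-selection step: flip one
    feature in or out per iteration and keep the move if it improves the
    cross-validated SVM accuracy, or with a temperature-dependent chance."""
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    mask = rng.random(n_feat) < 0.5            # random initial subset

    def score(m):
        if not m.any():
            return 0.0
        return cross_val_score(SVC(kernel="rbf"), X[:, m], y, cv=3).mean()

    best_mask, best_score = mask.copy(), score(mask)
    cur_mask, cur_score = best_mask.copy(), best_score
    for it in range(n_iter):
        temp = temp0 * (1 - it / n_iter) + 1e-6
        cand = cur_mask.copy()
        cand[rng.integers(n_feat)] ^= True      # flip one feature
        cand_score = score(cand)
        if cand_score > cur_score or rng.random() < np.exp((cand_score - cur_score) / temp):
            cur_mask, cur_score = cand, cand_score
            if cur_score > best_score:
                best_mask, best_score = cur_mask.copy(), cur_score
    return best_mask, best_score
```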


Author(s):  
Albert Asmaryan ◽  
Alexey Levanov ◽  
Irina Borovik

This chapter introduces a method to assess similarity based on the Facebook Graph API and user movements. All movements of users are collected and analyzed. The chapter also presents a method for analyzing user-generated images via the Instagram Graph API. A two-step multiparameter algorithm is presented that generates recommendations based on user social activity and movements. A flexible mechanism for calculating the time users spend on a variety of social activities is provided to more accurately identify the relationships between users. To reduce the load on the application, algorithms for data analysis and transfer optimization are proposed. The ultimate result of the study is a platform based on the client-server model, comprising a mobile app on the iOS platform and a server set up on the LAMP stack (Linux operating system, Apache web server, MySQL database, PHP programming language). The result can be applied in various spheres of life to identify different relationships between people.
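The chapter does not detail the similarity formula, so the following is only a toy, hypothetical illustration of a movement-based similarity (Jaccard overlap of visited places); the real two-step algorithm also weights social activity and time spent, which is not reproduced here:

```python
def movement_similarity(places_a, places_b):
    """Toy movement-based similarity: Jaccard overlap of the sets of places
    two users visited (place identifiers are hypothetical)."""
    a, b = set(places_a), set(places_b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

# movement_similarity(["cafe_12", "gym_3"], ["gym_3", "park_7"])  # -> 0.33...
```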


2005 ◽  
Vol 495-497 ◽  
pp. 157-166 ◽  
Author(s):  
Leo Kestens ◽  
Kim Verbeken ◽  
R. Decocker ◽  
Roumen H. Petrov ◽  
Patricia Gobernado ◽  
...  

It is often assumed that texture formation during solid-state transformations in low-carbon steels critically depends on the local crystallographic misorientation at the interface between the transformed and not yet transformed material volume. In some cases, a theoretical crystallographic orientation relation can be presumed as a necessary prerequisite for the transformation to occur. Classical examples of such misorientation conditions in steel metallurgy are the orientation relations between parent and product grains of the allotropic phase transformation from austenite to ferrite (or martensite), or the hypothetical <110>26.5° misorientation between growing nuclei and disappearing grains in a recrystallization process. One way to verify the validity of such misorientation conditions is to carry out an experiment in which the transformation is partially completed and then to observe locally, at the transformation interface, whether or not the presumed crystallographic condition is complied with. Such an experiment produces a large set of misorientation data. As each observed misorientation Δg is represented by a single point in Rodrigues-Frank (RF) space, a distribution of discrete misorientation points is obtained. This distribution is compared with the reference misorientation Δgr, corresponding to a specific physical condition, by determining the number fraction dn of misorientations that are confined within a narrow misorientation volume element dw around the given reference misorientation Δgr. In order to evaluate whether or not the proposed misorientation condition is obeyed, the number fraction dn of the experimentally measured distribution must be compared with the number fraction dr obtained for a random misorientation distribution. The ratio dn/dr can be interpreted as the number intensity f of the given reference misorientation Δgr. This method was applied to the observed local misorientations between recrystallizing grains growing into the single-crystal matrix of an Fe-2.8%Si alloy. It was found that the number intensity of the <110>26.5° misorientation increased by a factor of 10 when the misorientation distribution was evaluated before and after the growth stage. In another example the method was applied to the misorientations measured at the local interface between parent austenite and product martensite grains of a partially transformed Fe-28%Ni alloy. It could be established that the Nishiyama-Wasserman relations ({111}γ//{110}α and <112>γ//<110>α) prevail over the Kurdjumov-Sachs relations ({111}γ//{110}α and <110>γ//<111>α), although considerable scatter was observed around either of the theoretical correspondences. A full parametric misorientation description was also applied to evaluate the relative grain boundary energies associated with a set of crystallographic misorientations observed near triple junctions in Fe-2%Si. In this instance it was found that boundaries carrying a misorientation of the type <110>ω have a lower interfacial energy than <100> or <111> type boundaries.
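A rough sketch of the dn/dr number-intensity computation, assuming misorientations are given as rotations, applying cubic symmetry on one side only as a simplification, and estimating the random reference fraction by Monte Carlo sampling; the tolerance and symmetry handling in the paper may differ:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# 24 proper rotations of the cubic point group, used for symmetry reduction.
_CUBIC = R.create_group("O")

def deviation_angle(dg, dg_ref):
    """Smallest rotation angle (deg) between misorientation dg and the
    reference dg_ref, with cubic symmetry applied on one side only
    (a simplification of the full disorientation treatment)."""
    best = 180.0
    for s in _CUBIC:
        ang = np.degrees((s * dg * dg_ref.inv()).magnitude())
        best = min(best, ang)
    return best

def number_intensity(observed, dg_ref, tol_deg=5.0, n_random=5000, seed=0):
    """f = dn / dr: fraction of observed misorientations within tol_deg of
    dg_ref, divided by the same fraction for uniformly random misorientations."""
    dn = np.mean([deviation_angle(g, dg_ref) <= tol_deg for g in observed])
    rand = R.random(n_random, random_state=seed)
    dr = np.mean([deviation_angle(g, dg_ref) <= tol_deg for g in rand])
    return dn / dr if dr > 0 else np.inf

# Reference misorientation: 26.5 degrees about a <110> axis.
dg_ref = R.from_rotvec(np.radians(26.5) * np.array([1, 1, 0]) / np.sqrt(2))
```

An intensity f well above 1 indicates that the reference misorientation occurs far more often than chance, which is the criterion the abstract uses to judge whether a proposed orientation relation is obeyed.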


Author(s):  
Badri Toppur ◽  
K J Jaims

A dataset of molecules used in the pharmaceutical industry was shared during a hackathon arranged by the Indian government for COVID-19-related drug discovery. The molecules were provided in SMILES format and are decoded using the Chemistry Development Kit (CDK), a cheminformatics library written in the Java language. The kit can be accessed in the R statistical environment through the rJava package, which is further wrapped in the rcdk package. The output to be predicted is the cardiotoxicity of the molecules. The strings representing the molecular structure are parsed by the rcdk functions to provide structure-activity descriptors that are known to be good predictors of biological activity; the activity may be therapeutic or toxic. These descriptors constitute the input to the Decision Tree, Random Forest, Gradient Boosting, Support Vector Machine, Logistic Regression, and Artificial Neural Network algorithms. This paper reports the results of the data science project to determine the best subset of molecular descriptors from the large set that is available.
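Since the workflow is described rather than listed as code, the sketch below is a rough Python analogue using RDKit descriptors and a random forest in place of the rcdk/CDK pipeline actually used; the file name, column names, and descriptor subset are hypothetical:

```python
import pandas as pd
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical input file with 'smiles' and 'cardiotoxic' columns.
df = pd.read_csv("molecules.csv")

def descriptor_vector(smiles):
    """Parse a SMILES string and return a small set of structural descriptors."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None
    # A small illustrative subset; the study screens a much larger descriptor set.
    return [Descriptors.MolWt(mol),
            Descriptors.MolLogP(mol),
            Descriptors.TPSA(mol),
            Descriptors.NumHDonors(mol),
            Descriptors.NumHAcceptors(mol),
            Descriptors.NumRotatableBonds(mol)]

rows = [(descriptor_vector(s), y) for s, y in zip(df["smiles"], df["cardiotoxic"])]
rows = [(x, y) for x, y in rows if x is not None]   # drop unparsable molecules
X = [x for x, _ in rows]
y = [label for _, label in rows]

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```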

