A nonparametric Bayesian framework for constructing flexible feature representations.

2013 ◽ Vol 120 (4) ◽ pp. 817-851 ◽ Author(s): Joseph L. Austerweil, Thomas L. Griffiths

2019 ◽ Vol 2 (5) ◽ Author(s): Ji-hua Hu, Jia-xian Liang

Interstation travel speed is an important indicator of the running state of hybrid Bus Rapid Transit (BRT) and of passenger experience. Because of road traffic, traffic lights and other factors, interstation travel speeds are often multimodal and are difficult to model with a single distribution. In this paper, a Gaussian mixture model characterizing the interstation travel speed of hybrid BRT is established under a Bayesian framework. The parameters of the model, including the number of components and the weight, mean and variance of each component, are inferred using Reversible-Jump Markov Chain Monte Carlo (RJMCMC). The model is then applied to Guangzhou BRT, a hybrid BRT system. The results show that the model effectively describes the heterogeneous speed data across different interstation segments, provides richer information than is usually available from traditional models, and fits each multimodal interstation speed distribution well. By inspecting the online map of GBRT, the causes of the different speed distributions can be identified as heavy road traffic and long traffic-light cycles, both typically associated with main-road crossings. The BRT lane should therefore be elevated over such main roads to reduce the complexity of the running state.
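The paper's own inference uses RJMCMC, which is involved to implement; as a rough stand-in, the sketch below fits a Dirichlet-process Bayesian Gaussian mixture with scikit-learn, which likewise lets the effective number of mixture components be inferred from the data rather than fixed in advance. The speed samples and all parameter values are invented for illustration.

```python
# Minimal sketch (not the authors' code): modeling a multimodal
# interstation speed distribution with a Bayesian Gaussian mixture.
# A Dirichlet-process prior stands in for RJMCMC here; both allow the
# number of components to be inferred from the data.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Hypothetical interstation speeds (km/h): two regimes, e.g. queuing
# at a main-road crossing versus free-flow running.
speeds = np.concatenate([
    rng.normal(12, 2, 300),   # delayed runs
    rng.normal(28, 4, 700),   # free-flow runs
]).reshape(-1, 1)

gmm = BayesianGaussianMixture(
    n_components=10,                       # generous upper bound
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(speeds)

# Components retaining non-negligible weight approximate the modes.
for w, mu, var in zip(gmm.weights_, gmm.means_.ravel(),
                      gmm.covariances_.ravel()):
    if w > 0.05:
        print(f"weight={w:.2f}  mean={mu:.1f} km/h  var={var:.1f}")
```

On this toy data the fit concentrates weight on two components, mirroring the two-regime (delayed versus free-flow) interpretation the paper draws for crossings with heavy traffic or long signal cycles.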


2017 ◽ Author(s): Sabrina Jaeger, Simone Fulle, Samo Turk

Inspired by natural language processing techniques, we introduce Mol2vec, an unsupervised machine learning approach for learning vector representations of molecular substructures. Similarly to Word2vec models, where vectors of closely related words lie close together in the vector space, Mol2vec learns vector representations of molecular substructures that point in similar directions for chemically related substructures. Compounds can then be encoded as vectors by summing the vectors of their individual substructures and, for instance, fed into supervised machine learning approaches to predict compound properties. The underlying substructure vector embeddings are obtained by training the unsupervised model on a so-called corpus of compounds consisting of all available chemical matter. The resulting Mol2vec model is pre-trained once, yields dense vector representations, and overcomes drawbacks of common compound feature representations such as sparseness and bit collisions. Its prediction capabilities are demonstrated on several compound property and bioactivity data sets and compared with results obtained with Morgan fingerprints as a reference compound representation. Mol2vec can easily be combined with ProtVec, which applies the same Word2vec concept to protein sequences, resulting in a proteochemometric approach that is alignment-independent and can therefore also be used for proteins with low sequence similarity.
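A minimal sketch of the core mechanics, not the published implementation: gensim's Word2vec is trained on a toy "corpus" in which each compound is a sentence of substructure identifiers, and a compound vector is formed by summing its substructure vectors, as the abstract describes. The identifiers and corpus below are hypothetical; real Mol2vec derives them from Morgan-algorithm substructures.

```python
# Minimal sketch (assumptions flagged in comments): the Mol2vec idea
# reduced to its core steps using gensim's Word2vec.
import numpy as np
from gensim.models import Word2Vec

# Each "sentence" is one compound, each "word" one substructure ID.
# These identifiers are invented purely to show the mechanics.
corpus = [
    ["2246728737", "864662311", "3218693969"],   # hypothetical compound A
    ["2246728737", "864662311", "951226070"],    # hypothetical compound B
    ["3218693969", "951226070", "864662311"],    # hypothetical compound C
]

model = Word2Vec(sentences=corpus, vector_size=100, window=5,
                 min_count=1, sg=1, epochs=50, seed=0)

def encode(substructures, model):
    """Compound vector = sum of its substructure vectors (the Mol2vec
    rule); unseen substructures are skipped, one common fallback."""
    vecs = [model.wv[s] for s in substructures if s in model.wv]
    return np.sum(vecs, axis=0) if vecs else np.zeros(model.vector_size)

compound_vec = encode(corpus[0], model)
print(compound_vec.shape)  # (100,) - dense, no bit collisions
```

The resulting compound vectors are dense and fixed-length, which is what lets them feed directly into standard supervised learners, in contrast to sparse hashed fingerprints that can suffer bit collisions.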

