Simple Logistic Regression Model: Recently Published Documents

TOTAL DOCUMENTS: 8 (five years: 2)
H-INDEX: 1 (five years: 0)

2021, Vol. 118 (31), pp. e2104624118
Author(s): Henrik D. Pinholt, Søren S.-R. Bohr, Josephine F. Iversen, Wouter Boomsma, Nikos S. Hatzakis

Single-particle tracking (SPT) is a key tool for quantitative analysis of dynamic biological processes and has provided unprecedented insights into a wide range of systems such as receptor localization, enzyme propulsion, bacterial motility, and drug nanocarrier delivery. The inherently complex diffusion in such biological systems can vary drastically both in time and across systems, consequently imposing considerable analytical challenges, and currently requires a priori knowledge of the system. Here we introduce a method for SPT data analysis, processing, and classification, which we term "diffusional fingerprinting." This method allows for dissecting the features that underlie diffusional behavior and establishing molecular identity, regardless of the underlying diffusion type. The method operates by isolating 17 descriptive features for each observed motion trajectory and generating a diffusional map of all features for each type of particle. Precise classification of the diffusing particle identity is then obtained by training a simple logistic regression model. A linear discriminant analysis generates a feature ranking that outputs the main differences among diffusional features, providing key mechanistic insights. Fingerprinting operates by both training on and predicting experimental data, without the need for pretraining on simulated data. We found this approach to work across a wide range of simulated and experimentally diverse systems, such as tracked lipases on fat substrates, transcription factors diffusing in cells, and nanoparticles diffusing in mucus. This flexibility ultimately supports diffusional fingerprinting's utility as a universal paradigm for SPT diffusional analysis and prediction.
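The pipeline described in this abstract (extract descriptive features per trajectory, then train a logistic regression classifier on them) can be sketched as follows. This is a minimal illustration, not the authors' implementation: four hand-picked features stand in for the paper's 17 descriptors, and two simulated Brownian mobility classes stand in for real particle types.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def trajectory_features(xy):
    """A tiny, illustrative feature vector for one 2-D trajectory
    (stand-ins for the 17 descriptive features in the paper)."""
    steps = np.linalg.norm(np.diff(xy, axis=0), axis=1)  # per-frame step lengths
    msd1 = np.mean(np.sum((xy[1:] - xy[:-1]) ** 2, axis=1))  # lag-1 mean squared displacement
    return np.array([steps.mean(), steps.std(), steps.max(), msd1])

rng = np.random.default_rng(0)

def simulate(n, scale):
    """Brownian-like 100-frame trajectories with a given step scale."""
    return [np.cumsum(rng.normal(0, scale, size=(100, 2)), axis=0) for _ in range(n)]

# Two hypothetical particle types differing only in mobility.
slow, fast = simulate(50, 0.1), simulate(50, 0.3)
X = np.array([trajectory_features(t) for t in slow + fast])
y = np.array([0] * 50 + [1] * 50)

# The classifier itself is just a simple logistic regression on the fingerprint.
clf = LogisticRegression(max_iter=1000).fit(X, y)
train_acc = clf.score(X, y)
```

On real data one would hold out trajectories for evaluation rather than score on the training set; the point here is only the shape of the fingerprint-then-classify pipeline.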



2020, Vol. 9 (1)
Author(s): Francesco Pierri, Carlo Piccardi, Stefano Ceri

Abstract: We tackle the problem of classifying news articles as disinformation vs. mainstream news by solely inspecting their diffusion mechanisms on Twitter. This approach is inherently simple compared to existing text-based approaches, as it allows us to bypass the multiple levels of complexity found in news content (e.g., grammar, syntax, style). We employ a multi-layer representation of Twitter diffusion networks in which each layer describes a single type of interaction (tweet, retweet, mention, etc.), quantify the advantage of separating the layers with respect to an aggregated approach, and assess the impact of each layer on the classification. Experimental results with two large-scale datasets, corresponding to diffusion cascades of news shared respectively in the United States and Italy, show that a simple logistic regression model is able to classify disinformation vs. mainstream networks with high accuracy (AUROC up to 94%). We also highlight differences in the sharing patterns of the two news domains which appear to be common to the two countries. We believe that our network-based approach provides useful insights which pave the way for the future development of a system to detect misleading and harmful information spreading on social media.
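A minimal sketch of this classification setup, assuming per-layer structural features (e.g., node count, edge count, cascade depth for each interaction layer) have already been extracted for each cascade. The synthetic features and the "disinformation cascades differ slightly" shift below are placeholders, not the paper's actual measurements.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
layers = ["tweet", "retweet", "mention", "reply"]  # one feature block per layer

def cascade_features(is_disinfo):
    """Hypothetical 3 structural features per layer for one diffusion
    network; disinformation cascades are simulated with a small shift."""
    shift = 0.5 if is_disinfo else 0.0
    return np.concatenate([rng.normal(shift, 1.0, size=3) for _ in layers])

# 100 mainstream and 100 disinformation cascades.
X = np.array([cascade_features(i >= 100) for i in range(200)])
y = np.array([0] * 100 + [1] * 100)

# Multi-layer features feed a simple logistic regression, scored by AUROC.
auc = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                      cv=5, scoring="roc_auc").mean()
```

Collapsing the per-layer blocks into aggregate totals would mimic the "aggregated approach" the paper compares against.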



2018
Author(s): Angela Lopez-del Rio, Alfons Nonell-Canals, David Vidal, Alexandre Perera-Lluna

Binding prediction between targets and drug-like compounds through Deep Neural Networks has generated promising results in recent years, outperforming traditional machine learning-based methods. However, the generalization capability of these classification models is still an issue to be addressed. In this work, we explored how different cross-validation strategies applied to data from different molecular databases affect the performance of binding prediction proteochemometrics models. These strategies are: (1) random splitting, (2) splitting based on K-means clustering (of both actives and inactives), (3) splitting based on source database, and (4) splitting based on both the clustering and the source database. These schemes are applied to a Deep Learning proteochemometrics model and to a simple logistic regression model used as a baseline. Additionally, two different ways of describing molecules in the model are tested: (1) by their SMILES and (2) by three fingerprints. The classification performance of our Deep Learning-based proteochemometrics model is comparable to the state of the art. Our results show that the lack of generalization of these models is due to a bias in public molecular databases and that a restrictive cross-validation scheme based on compound clustering leads to worse but more robust and credible results. Our results also show better performance when representing molecules by their fingerprints.
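The splitting strategies listed above can be illustrated with scikit-learn's splitters. The sketch below contrasts strategy (1), random splitting, with strategy (2), holding out whole K-means clusters; random vectors serve as stand-in compound descriptors.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import KFold, GroupKFold

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 8))  # stand-in compound descriptors (e.g., fingerprint features)

# Strategy (1): random splitting. Similar compounds can land in both
# train and test folds, which flatters the performance estimate.
random_splits = list(KFold(n_splits=4, shuffle=True, random_state=0).split(X))

# Strategy (2): splitting by K-means cluster. Whole clusters are held out,
# so the model is tested on compounds unlike anything it trained on.
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
cluster_splits = list(GroupKFold(n_splits=4).split(X, groups=clusters))

# In a cluster-based fold, no test compound shares a cluster with the training set.
train_idx, test_idx = cluster_splits[0]
leakage = set(clusters[train_idx]) & set(clusters[test_idx])  # empty set
```

Strategies (3) and (4) follow the same pattern, passing the source-database label (or the database crossed with the cluster label) as the `groups` argument.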



2018, Vol. 2 (S1), pp. 39-39
Author(s): Jo E. Wilson, Richard Carlson, Maria C. Duggan, Pratik Pandharipande, Timothy D. Girard, ...

OBJECTIVES/SPECIFIC AIMS: Background: Delirium is a well-described form of acute brain organ dysfunction characterized by decreased or increased movement, changes in attention and concentration, as well as perceptual disturbances (i.e., hallucinations) and delusions. Catatonia, a neuropsychiatric syndrome traditionally described in patients with severe psychiatric illness, can present as phenotypically similar to delirium and is characterized by increased, decreased, and/or abnormal movements, staring, rigidity, and mutism. Delirium and catatonia can co-occur in the setting of medical illness, but no studies have explored this relationship by age. Our objective was to assess whether advancing age and the presence of catatonia are associated with delirium. METHODS/STUDY POPULATION: Methods: We prospectively enrolled critically ill patients at a single institution who were on a ventilator or in shock and evaluated them daily for delirium using the Confusion Assessment Method for the ICU (CAM-ICU) and for catatonia using the Bush-Francis Catatonia Rating Scale. Measures of association (odds ratios) were assessed with a simple logistic regression model with catatonia as the independent variable and delirium as the dependent variable. Effect measure modification by age was assessed using a likelihood ratio test. RESULTS/ANTICIPATED RESULTS: Results: We enrolled 136 medical and surgical critically ill patients with 452 matched (concomitant) delirium and catatonia assessments. Median age was 59 years (IQR: 52-68). In our cohort of 136 patients, 58 patients (43%) had delirium only, 4 (3%) had catatonia only, 42 (31%) had both delirium and catatonia, and 32 (24%) had neither. Age was significantly associated with prevalent delirium (i.e., increasing age was associated with decreased risk for delirium) (p=0.04) after adjusting for catatonia severity. Catatonia was significantly associated with prevalent delirium (p<0.0001) after adjusting for age. Peak delirium risk was for patients aged 55 years with 3 or more catatonic signs, who had 53.4 times the odds of delirium (95% CI: 16.06, 176.75) compared with those with no catatonic signs. Patients 70 years and older with 3 or more catatonia features had half this risk. DISCUSSION/SIGNIFICANCE OF IMPACT: Conclusions: Catatonia is significantly associated with prevalent delirium even after controlling for age. These data support an inverted U-shaped risk of delirium after adjusting for catatonia. This relationship and its clinical ramifications need to be examined in a larger sample, including patients with dementia. Additionally, we need to assess which acute brain syndrome (delirium or catatonia) develops first.
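The statistical design described here (a simple logistic regression with catatonia as the exposure and delirium as the outcome, plus a likelihood ratio test for effect measure modification by age) can be sketched on simulated data. The variable names, effect sizes, and sample size below are invented for illustration and do not reproduce the study's data.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 400
cat = rng.integers(0, 4, n).astype(float)       # number of catatonic signs (0-3)
age = (rng.uniform(40, 85, n) - 60.0) / 10.0    # age, centered and scaled (decades from 60)

# Simulated ground truth: catatonia raises delirium odds, but the effect
# weakens with advancing age (an age-by-catatonia interaction).
logit = -1.0 + 0.8 * cat - 0.3 * age * cat
delirium = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

def log_lik(clf, X, y):
    """Bernoulli log-likelihood of a fitted classifier on (X, y)."""
    p = clf.predict_proba(X)[:, 1]
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

X_red = np.column_stack([cat, age])               # main effects only
X_full = np.column_stack([cat, age, cat * age])   # plus the interaction

# Large C makes the fit effectively unpenalized (near maximum likelihood).
m_red = LogisticRegression(C=1e6, max_iter=5000).fit(X_red, delirium)
m_full = LogisticRegression(C=1e6, max_iter=5000).fit(X_full, delirium)

# Likelihood ratio test for effect measure modification by age (df = 1).
lr_stat = 2.0 * (log_lik(m_full, X_full, delirium) - log_lik(m_red, X_red, delirium))
p_value = chi2.sf(lr_stat, df=1)
```

A small p-value here would indicate, as in the abstract, that the catatonia-delirium association varies with age.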



2016, Vol. 113 (25), pp. E3548-E3557
Author(s): Arman Abrahamyan, Laura Luz Silva, Steven C. Dakin, Matteo Carandini, Justin L. Gardner

When making choices under conditions of perceptual uncertainty, past experience can play a vital role. However, it can also lead to biases that worsen decisions. Consistent with previous observations, we found that human choices are influenced by the success or failure of past choices even in a standard two-alternative detection task, where choice history is irrelevant. The typical bias was one that made the subject switch choices after a failure. These choice history biases led to poorer performance and were similar for observers in different countries. They were well captured by a simple logistic regression model that had been previously applied to describe psychophysical performance in mice. Such irrational biases seem at odds with the principles of reinforcement learning, which would predict exquisite adaptability to choice history. We therefore asked whether subjects could adapt their irrational biases following changes in trial order statistics. Adaptability was strong in the direction that confirmed a subject’s default biases, but weaker in the opposite direction, so that existing biases could not be eradicated. We conclude that humans can adapt choice history biases, but cannot easily overcome existing biases even if irrational in the current context: adaptation is more sensitive to confirmatory than contradictory statistics.
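The history-bias model described above can be sketched as a logistic regression of the current choice on the stimulus plus a "previous choice after a failure" regressor; a negative weight on that regressor captures the switch-after-failure bias. The simulated observer and its bias strength below are assumptions for illustration, not the paper's fitted values.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
T = 5000
stim = rng.choice([-1.0, 1.0], T)   # correct side on each trial
sens, fail_bias = 1.5, -1.0        # stimulus sensitivity; switch-after-failure bias

choices = np.zeros(T)
prev_fail = np.zeros(T)  # previous choice, nonzero only if that choice failed
for t in range(T):
    drive = sens * stim[t] + fail_bias * prev_fail[t]
    p_right = 1.0 / (1.0 + np.exp(-drive))
    choices[t] = 1.0 if rng.random() < p_right else -1.0
    if t + 1 < T:
        prev_fail[t + 1] = choices[t] if choices[t] != stim[t] else 0.0

# Fit the same logistic model to the simulated behavior and recover the bias.
X = np.column_stack([stim, prev_fail])
clf = LogisticRegression(C=1e6, max_iter=1000).fit(X, (choices > 0).astype(int))
w_stim, w_fail = clf.coef_[0]   # w_fail < 0: observer tends to switch after failure
```

A parallel "previous choice after a success" regressor would capture win-stay biases in the same framework.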



2010, Vols. 44-47, pp. 3844-3848
Author(s): Shu Juan Lee, Hsiang Chuan Liu, Shih Ming Chen, Yu Du Jheng

In this study, a Web-based Multi-Survey System was adopted to build the questionnaire, with one or two choices available among five options, yielding a nine-point scale through network programs. As empirical research conforming to the Ministry of Education's 2009 policy of promoting technology abilities for teachers in Taichung County, the importance and satisfaction survey from the Teachers' Free Software Application workshops was analyzed with four approaches: Importance-Performance Analysis, a simple logistic regression model, a Choquet integral regression model with respect to a λ-measure, and a Choquet integral regression model with respect to an L-measure. Comparing MSE, the Choquet integral regression model with the L-measure obtained the best performance. Two crosshairs, Hollenhorst's overall mean and the Choquet integral regression model with the L-measure, were positioned on the I-P matrix. The results show that the L-measure model had better sensitivity to the quadrant distribution and better reflected participants' real responses. At the same time, it became clear which aspects of the teachers' workshops to keep up, where to concentrate effort, and where there was possible overkill when assessing their Importance and Performance.
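For readers unfamiliar with the Choquet integral regression models compared above, the discrete Choquet integral with respect to a Sugeno λ-measure can be sketched as follows. This is textbook machinery under assumed singleton densities, not the paper's specific L-measure construction.

```python
import numpy as np

def lambda_measure(subset, densities, lam):
    """Sugeno λ-measure of an index subset, built up from singleton
    densities via g(A ∪ {i}) = g(A) + g_i + λ·g(A)·g_i (order-independent)."""
    g = 0.0
    for i in subset:
        g = g + densities[i] + lam * g * densities[i]
    return g

def choquet_integral(x, densities, lam):
    """Discrete Choquet integral of non-negative x w.r.t. a λ-measure:
    sort x ascending and weight each increment by the measure of the
    coalition of criteria whose value is at least that large."""
    order = np.argsort(x)
    x_sorted = np.asarray(x)[order]
    total, prev = 0.0, 0.0
    for k in range(len(x_sorted)):
        coalition = order[k:]  # indices holding the largest remaining values
        total += (x_sorted[k] - prev) * lambda_measure(coalition, densities, lam)
        prev = x_sorted[k]
    return total

# Sanity check: with λ = 0 and densities summing to 1 the measure is additive,
# so the Choquet integral reduces to an ordinary weighted average.
x = np.array([1.0, 3.0, 2.0])
w = np.array([0.2, 0.5, 0.3])
additive_case = choquet_integral(x, w, 0.0)  # equals np.dot(x, w)
```

A nonzero λ lets coalitions of criteria interact (sub- or super-additively), which is what gives Choquet integral regression its extra flexibility over the simple logistic regression baseline.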


