Ciencia, técnica y algoritmos: notas en torno a su a priori ideológico-capitalista (Science, Technique, and Algorithms: Notes on Their Ideological-Capitalist A Priori)

2021 ◽  
pp. 178-193
Author(s):  
Jorge Luis Quintana Montes

The aim of this article is to offer a critical approach to machine learning algorithms, understood as a new facet of the ideological domination proper to the capitalist mode of production. Accordingly, the article is organized into three thematic moments: i) a definition of algorithms and of machine learning; ii) commercial targeting as a reproduction of capital's logic of consumption; and iii) political microtargeting, sheltered by the disinformation prevailing in the public sphere during electoral campaigns.

2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Alexandre Boutet ◽  
Radhika Madhavan ◽  
Gavin J. B. Elias ◽  
Suresh E. Joel ◽  
Robert Gramer ◽  
...  

Abstract: Commonly used for Parkinson’s disease (PD), deep brain stimulation (DBS) produces marked clinical benefits when optimized. However, assessing the large number of possible stimulation settings (i.e., programming) requires numerous clinic visits. Here, we examine whether functional magnetic resonance imaging (fMRI) can be used to predict optimal stimulation settings for individual patients. We analyze 3 T fMRI data prospectively acquired as part of an observational trial in 67 PD patients using optimal and non-optimal stimulation settings. Clinically optimal stimulation produces a characteristic fMRI brain response pattern marked by preferential engagement of the motor circuit. Then, we build a machine learning model predicting optimal vs. non-optimal settings using the fMRI patterns of 39 PD patients with a priori clinically optimized DBS (88% accuracy). The model predicts optimal stimulation settings in unseen datasets: a priori clinically optimized and stimulation-naïve PD patients. We propose that fMRI brain responses to DBS stimulation in PD patients could represent an objective biomarker of clinical response. Upon further validation with additional studies, these findings may open the door to functional imaging-assisted DBS programming.
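The core idea above — that clinically optimal stimulation evokes a distinguishable fMRI response pattern — can be illustrated with a toy classifier. The sketch below is not the authors' model (the abstract does not specify it); it uses synthetic "activation maps" and a simple nearest-centroid rule, assuming only that optimal settings shift mean activation in the imaged voxels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for fMRI response patterns: each row is a flattened
# activation map; labels mark clinically optimal (1) vs non-optimal (0)
# stimulation settings. Real data would come from the 3 T acquisitions.
n_voxels = 50
optimal = rng.normal(loc=1.0, size=(30, n_voxels))      # motor-circuit engagement
non_optimal = rng.normal(loc=0.0, size=(30, n_voxels))
X = np.vstack([optimal, non_optimal])
y = np.array([1] * 30 + [0] * 30)

# Nearest-centroid classifier: assign a pattern to the class whose
# mean response map it is closest to (Euclidean distance).
centroids = {c: X[y == c].mean(axis=0) for c in (0, 1)}

def predict(pattern):
    return min(centroids, key=lambda c: np.linalg.norm(pattern - centroids[c]))

preds = np.array([predict(x) for x in X])
accuracy = (preds == y).mean()
```

With well-separated synthetic classes the rule classifies nearly all patterns correctly; any real model would of course be validated on held-out patients, as the study does.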


2021 ◽  
Vol 26 (3) ◽  
pp. 1-17
Author(s):  
Urmimala Roy ◽  
Tanmoy Pramanik ◽  
Subhendu Roy ◽  
Avhishek Chatterjee ◽  
Leonard F. Register ◽  
...  

We propose a methodology to perform process variation-aware device and circuit design using fully physics-based simulations within limited computational resources, without developing a compact model. Machine learning (ML), specifically a support vector regression (SVR) model, has been used. The SVR model has been trained using a dataset of devices simulated a priori, and the accuracy of prediction by the trained SVR model has been demonstrated. To produce a switching time distribution from the trained ML model, we only had to generate the dataset to train and validate the model, which needed ∼500 hours of computation. On the other hand, if 10⁶ samples were to be simulated using the same computation resources to generate a switching time distribution from micromagnetic simulations, it would have taken ∼250 days. Spin-transfer-torque random access memory (STTRAM) has been used to demonstrate the method. However, different physical systems may be considered, different ML models can be used for different physical systems and/or different device parameter sets, and similar ends could be achieved by training the ML model using measured device data.
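As a rough illustration of this workflow (train an SVR on a few hundred pre-simulated devices, then query it for a large Monte Carlo sample of process variations), the sketch below uses scikit-learn's SVR on a made-up three-parameter device model. The feature set, response function, and hyperparameters are all illustrative assumptions, not the paper's actual STTRAM setup.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)

# Toy stand-in for the a priori device simulations: inputs are three
# made-up device parameters, the target is a "switching time" produced
# by an assumed smooth nonlinear response plus a little noise.
X_train = rng.uniform(0, 1, size=(500, 3))
y_train = (np.exp(X_train[:, 0]) + 0.5 * X_train[:, 1] ** 2
           - X_train[:, 2] + rng.normal(scale=0.01, size=500))

# Train the SVR surrogate on the ~500 simulated devices ...
model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X_train, y_train)

# ... then query it for a large Monte Carlo sample of process
# variations, which would be far too costly to simulate directly.
X_mc = rng.uniform(0, 1, size=(10_000, 3))
switching_times = model.predict(X_mc)
```

The surrogate evaluates 10,000 variation samples in milliseconds; this is the source of the ∼500 hours vs. ∼250 days contrast reported in the abstract.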


2021 ◽  
Vol 13 (1) ◽  
Author(s):  
Annachiara Tinivella ◽  
Luca Pinzi ◽  
Giulio Rastelli

Abstract: The development of selective inhibitors of the clinically relevant human Carbonic Anhydrase (hCA) isoforms IX and XII has become a major topic in drug research, due to their deregulation in several types of cancer. Indeed, the selective inhibition of these two isoforms, especially with respect to the homeostatic isoform II, holds great promise to develop anticancer drugs with limited side effects. Therefore, the development of in silico models able to predict the activity and selectivity against the desired isoform(s) is of central interest. In this work, we have developed a series of machine learning classification models, trained on high confidence data extracted from ChEMBL, able to predict the activity and selectivity profiles of ligands for human Carbonic Anhydrase isoforms II, IX and XII. The training datasets were built with a procedure that made use of flexible bioactivity thresholds to obtain well-balanced active and inactive classes. We used multiple algorithms and sampling sizes to finally select activity models able to classify active or inactive molecules with excellent performances. Remarkably, the results herein reported turned out to be better than those obtained by models built with the classic approach of selecting an a priori activity threshold. The sequential application of such validated models enables virtual screening to be performed in a fast and more reliable way to predict the activity and selectivity profiles against the investigated isoforms.
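The flexible-threshold idea — choosing the activity cutoff so that the active and inactive classes come out balanced, rather than fixing a cutoff a priori — can be sketched as follows. The synthetic pIC50-like data, the median-based cutoff, and the "gray zone" width are illustrative assumptions; the paper's exact procedure is not given in the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for ChEMBL bioactivity data against one hCA isoform:
# pIC50-like values (distribution is invented for illustration).
pic50 = rng.normal(loc=6.5, scale=1.2, size=1000)

def balanced_threshold(values, min_gap=0.0):
    """Pick the activity cutoff at the median so the active and
    inactive classes are as balanced as possible, optionally discarding
    a 'gray zone' of width min_gap around the cutoff."""
    cutoff = np.median(values)
    active = values >= cutoff + min_gap / 2
    inactive = values <= cutoff - min_gap / 2
    return cutoff, active, inactive

cutoff, active, inactive = balanced_threshold(pic50, min_gap=0.5)
```

Balancing by construction avoids the skewed classes that a single fixed threshold (say, pIC50 ≥ 7 for every isoform) can produce when potency distributions differ between targets.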


Genes ◽  
2021 ◽  
Vol 12 (4) ◽  
pp. 527
Author(s):  
Eran Elhaik ◽  
Dan Graur

In the last 15 years or so, soft selective sweep mechanisms have been catapulted from a curiosity of little evolutionary importance to a ubiquitous mechanism claimed to explain most adaptive evolution and, in some cases, most evolution. This transformation was aided by a series of articles by Daniel Schrider and Andrew Kern. Within this series, a paper entitled “Soft sweeps are the dominant mode of adaptation in the human genome” (Schrider and Kern, Mol. Biol. Evolut. 2017, 34(8), 1863–1877) attracted a great deal of attention, in particular in conjunction with another paper (Kern and Hahn, Mol. Biol. Evolut. 2018, 35(6), 1366–1371), for purporting to discredit the Neutral Theory of Molecular Evolution (Kimura 1968). Here, we address an alleged novelty in Schrider and Kern’s paper, i.e., the claim that their study involved an artificial intelligence technique called supervised machine learning (SML). SML is predicated upon the existence of a training dataset in which the correspondence between the input and output is known empirically to be true. Curiously, Schrider and Kern did not possess a training dataset of genomic segments known a priori to have evolved either neutrally or through soft or hard selective sweeps. Thus, their claim of using SML is thoroughly and utterly misleading. In the absence of legitimate training datasets, Schrider and Kern used: (1) simulations that employ many manipulatable variables and (2) a system of data cherry-picking rivaling the worst excesses in the literature. These two factors, in addition to the lack of negative controls and the irreproducibility of their results due to incomplete methodological detail, lead us to conclude that all evolutionary inferences derived from so-called SML algorithms (e.g., S/HIC) should be taken with a huge shovel of salt.


2021 ◽  
Vol 8 (1) ◽  
pp. 205395172110135
Author(s):  
Florian Jaton

This theoretical paper considers the morality of machine learning algorithms and systems in the light of the biases that ground their correctness. It begins by presenting biases not as a priori negative entities but as contingent external referents—often gathered in benchmarked repositories called ground-truth datasets—that define what needs to be learned and allow for performance measures. I then argue that ground-truth datasets and their concomitant practices—that fundamentally involve establishing biases to enable learning procedures—can be described by their respective morality, here defined as the more or less accounted experience of hesitation when faced with what pragmatist philosopher William James called “genuine options”—that is, choices to be made in the heat of the moment that engage different possible futures. I then stress three constitutive dimensions of this pragmatist morality, as far as ground-truthing practices are concerned: (I) the definition of the problem to be solved (problematization), (II) the identification of the data to be collected and set up (databasing), and (III) the qualification of the targets to be learned (labeling). I finally suggest that this three-dimensional conceptual space can be used to map machine learning algorithmic projects in terms of the morality of their respective and constitutive ground-truthing practices. Such techno-moral graphs may, in turn, serve as equipment for greater governance of machine learning algorithms and systems.


2021 ◽  
Vol 143 (8) ◽  
Author(s):  
Opeoluwa Owoyele ◽  
Pinaki Pal ◽  
Alvaro Vidal Torreira

Abstract: The use of machine learning (ML)-based surrogate models is a promising technique to significantly accelerate simulation-driven design optimization of internal combustion (IC) engines, due to the high computational cost of running computational fluid dynamics (CFD) simulations. However, training the ML models requires hyperparameter selection, which is often done using trial-and-error and domain expertise. Another challenge is that the data required to train these models are often unknown a priori. In this work, we present an automated hyperparameter selection technique coupled with an active learning approach to address these challenges. The technique presented in this study involves the use of a Bayesian approach to optimize the hyperparameters of the base learners that make up a super learner model. In addition to performing hyperparameter optimization (HPO), an active learning approach is employed, where the process of data generation using simulations, ML training, and surrogate optimization is performed repeatedly to refine the solution in the vicinity of the predicted optimum. The proposed approach is applied to the optimization of a compression ignition engine with control parameters relating to fuel injection, in-cylinder flow, and thermodynamic conditions. It is demonstrated that by automatically selecting the best values of the hyperparameters, a 1.6% improvement in merit value is obtained, compared to an improvement of 1.0% with default hyperparameters. Overall, the framework introduced in this study reduces the need for technical expertise in training ML models for optimization while also reducing the number of simulations needed for performing surrogate-based design optimization.
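The active learning loop described above (simulate, train a surrogate, optimize the surrogate, re-simulate at the predicted optimum) can be sketched in miniature. Here a cubic polynomial stands in for the super learner and a cheap analytic function stands in for the expensive CFD merit evaluation; both are illustrative assumptions, as is the single control parameter.

```python
import numpy as np

# Toy stand-in for the expensive CFD merit evaluation: one control
# parameter x in [0, 1] with an assumed smooth merit function.
def run_simulation(x):
    return -(x - 0.7) ** 2 + 0.05 * np.sin(10 * x)

# Initial design of experiments.
X = list(np.linspace(0.0, 1.0, 5))
y = [run_simulation(x) for x in X]

# Active learning loop: fit a cheap surrogate (a cubic polynomial,
# standing in for the super learner), optimize it on a fine grid, then
# run the real simulation at the predicted optimum and refit.
grid = np.linspace(0.0, 1.0, 1001)
for _ in range(10):
    coeffs = np.polyfit(X, y, deg=3)
    x_next = grid[np.argmax(np.polyval(coeffs, grid))]
    X.append(x_next)
    y.append(run_simulation(x_next))

best_x = X[int(np.argmax(y))]
```

Each pass spends one "simulation" exactly where the surrogate believes the optimum lies, which is why the approach needs far fewer CFD runs than sampling the design space densely up front.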


2021 ◽  
Author(s):  
Geza Halasz ◽  
Michela Sperti ◽  
Matteo Villani ◽  
Umberto Michelucci ◽  
Piergiuseppe Agostoni ◽  
...  

BACKGROUND: Several models have been developed to predict mortality in patients with COVID-19 pneumonia, but only a few have demonstrated sufficient discriminatory capacity. Machine learning algorithms represent a novel approach for data-driven prediction of clinical outcomes, with advantages over statistical modelling.
OBJECTIVE: To develop the Piacenza score, a machine learning based score, to predict 30-day mortality in patients with COVID-19 pneumonia.
METHODS: The study comprised 852 patients with COVID-19 pneumonia admitted to the Guglielmo da Saliceto Hospital (Italy) from February to November 2020. The patients’ medical history, demographic, and clinical data were collected in electronic health records. The overall patient dataset was randomly split into derivation and test cohorts. The score was obtained through the Naïve Bayes classifier and externally validated on 86 patients admitted to Centro Cardiologico Monzino (Italy) in February 2020. Using a forward-search algorithm, six features were identified: age, mean corpuscular haemoglobin concentration, PaO2/FiO2 ratio, temperature, previous stroke, and gender. The Brier index was used to evaluate the ability of the models to stratify and predict observed outcomes. A user-friendly website, available at https://covid.7hc.tech, was designed and developed to enable fast and easy use of the tool by the final user (i.e., the physician). Regarding the customization properties of the Piacenza score, we added a personalized version of the algorithm to the website, which enables an optimized computation of the mortality risk score for a single patient when some variables used by the Piacenza score are not available. In this case, the Naïve Bayes classifier is re-trained over the same derivation cohort but using a different set of patient characteristics. We also compared the Piacenza score with the 4C score and with a Naïve Bayes algorithm with 14 features chosen a priori.
RESULTS: The Piacenza score showed an AUC of 0.78 (95% CI 0.74–0.84, Brier score 0.19) in the internal validation cohort and 0.79 (95% CI 0.68–0.89, Brier score 0.16) in the external validation cohort, showing comparable accuracy with respect to the 4C score and to the Naïve Bayes model with a priori chosen features, which achieved an AUC of 0.78 (95% CI 0.73–0.83, Brier score 0.26) and 0.80 (95% CI 0.75–0.86, Brier score 0.17), respectively.
CONCLUSIONS: A personalized machine learning based score with purely data-driven feature selection is feasible and effective for predicting mortality in patients with COVID-19 pneumonia.
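A Gaussian Naïve Bayes risk score of this kind is simple enough to sketch from scratch. The snippet below uses two synthetic features (age and PaO2/FiO2 ratio) rather than the six selected by the forward search, and the distributions are invented for illustration. The website's personalized version, which retrains when some variables are missing, corresponds here to refitting `train_gnb` on a column subset of the same cohort.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for the derivation cohort: two illustrative features
# (age, PaO2/FiO2 ratio) and a 30-day mortality label; all values
# are synthetic and only sketch the shape of the data.
n = 400
died = rng.integers(0, 2, size=n).astype(bool)
age = np.where(died, rng.normal(78, 8, n), rng.normal(62, 10, n))
pf_ratio = np.where(died, rng.normal(150, 40, n), rng.normal(300, 60, n))
X = np.column_stack([age, pf_ratio])

def train_gnb(X, y):
    """Gaussian Naive Bayes: per-class feature means/variances + priors."""
    stats = {}
    for c in (False, True):
        Xc = X[y == c]
        stats[c] = (Xc.mean(axis=0), Xc.var(axis=0), len(Xc) / len(X))
    return stats

def predict_proba(stats, x):
    """Posterior P(death | x) from class-conditional Gaussian densities."""
    scores = {}
    for c, (mu, var, prior) in stats.items():
        log_lik = -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
        scores[c] = np.log(prior) + log_lik
    m = max(scores.values())
    exp = {c: np.exp(s - m) for c, s in scores.items()}
    return exp[True] / (exp[True] + exp[False])

model = train_gnb(X, died)
risk = predict_proba(model, np.array([80.0, 120.0]))   # elderly, low PaO2/FiO2
```

An elderly patient with a low PaO2/FiO2 ratio lands near the mortality-class distributions and receives a high posterior risk; a younger, well-oxygenated patient receives a low one.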


2020 ◽  
Author(s):  
Raphael Meier ◽  
Meret Burri ◽  
Samuel Fischer ◽  
Richard McKinley ◽  
Simon Jung ◽  
...  

Abstract
Objectives: Machine learning (ML) has been demonstrated to improve the prediction of functional outcome in patients with acute ischemic stroke. However, its value in a specific clinical use case has not been investigated. The aim of this study was to assess the clinical utility of ML models with respect to predicting functional impairment and severe disability or death, considering their potential value as a decision-support tool in an acute stroke workflow.
Materials and Methods: Patients (n=1317) from a retrospective, non-randomized observational registry treated with Mechanical Thrombectomy (MT) were included. The final dataset of patients who underwent successful recanalization (TICI ≥ 2b) (n=932) was split in order to develop ML-based prediction models using data of 745 (80%) patients. Subsequently, the models were tested on the remaining 187 (20%) patients. For comparison, baseline algorithms using majority-class prediction, the SPAN-100 score, the PRE score, and the Stroke-TPI score were implemented. The ML methods included eight different algorithms (e.g., Support Vector Machines and Random Forests), a stacked ensemble method, and tabular neural networks. Prediction of modified Rankin Scale (mRS) 3–6 (primary analysis) and mRS 5–6 (secondary analysis) at 3 months was performed using 25 baseline variables available at patient admission. ML models were assessed with respect to their ability for discrimination, calibration, and clinical utility (decision curve analysis).
Results: Analyzed patients (n=932) had a median age of 74.7 (IQR 62.7–82.4) years, with 461 (49.5%) being female. ML methods performed better than clinical scores, with the stacked ensemble method providing the best overall performance: an F1-score of 0.75 ± 0.01, ROC-AUC of 0.81 ± 0.00, AP score of 0.81 ± 0.01, MCC of 0.48 ± 0.02, and ECE of 0.06 ± 0.01 for prediction of mRS 3–6, and an F1-score of 0.57 ± 0.02, ROC-AUC of 0.79 ± 0.01, AP score of 0.54 ± 0.02, MCC of 0.39 ± 0.03, and ECE of 0.19 ± 0.01 for prediction of mRS 5–6. Decision curve analyses suggested the highest mean net benefit of 0.09 ± 0.02 at the a priori defined threshold (0.8) for the stacked ensemble method in the primary analysis (mRS 3–6). Across all methods, higher mean net benefits were achieved for optimized probability thresholds, but with considerably reduced certainty (threshold probabilities 0.24–0.47). For the secondary analysis (mRS 5–6), none of the ML models achieved a positive net benefit at the a priori threshold probability of 0.8.
Conclusions: The clinical utility of ML prediction models in a decision-support scenario aimed at yielding high certainty for prediction of functional dependency (mRS 3–6) is marginal, and not evident for the prediction of severe disability or death (mRS 5–6). Hence, using these models for patient exclusion cannot be recommended, and future research should evaluate utility gains after incorporating more advanced imaging parameters.
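The net benefit used in the decision curve analysis has a standard closed form: at threshold probability p_t, treating all patients whose predicted risk exceeds p_t yields net benefit = TP/N − (p_t/(1−p_t))·FP/N. A minimal sketch with invented patient risks and outcomes, showing why a stringent threshold of 0.8 heavily penalizes false positives:

```python
def net_benefit(y_true, y_prob, threshold):
    """Net benefit of treating patients whose predicted risk exceeds
    `threshold`: true positives rewarded, false positives penalized by
    the odds of the threshold probability (decision curve analysis)."""
    n = len(y_true)
    treat = [p >= threshold for p in y_prob]
    tp = sum(1 for y, t in zip(y_true, treat) if t and y == 1)
    fp = sum(1 for y, t in zip(y_true, treat) if t and y == 0)
    w = threshold / (1 - threshold)
    return tp / n - w * fp / n

# Toy example: 6 patients, predicted risks, outcomes (1 = mRS 3-6).
y_true = [1, 1, 1, 0, 0, 0]
y_prob = [0.9, 0.85, 0.4, 0.82, 0.3, 0.1]
nb = net_benefit(y_true, y_prob, threshold=0.8)
```

At threshold 0.8 each false positive costs four true positives' worth of benefit (odds 0.8/0.2 = 4), which is why models that are merely well-discriminating can still show little or no net benefit at such an a priori stringent cutoff.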


2021 ◽  
Author(s):  
Natacha Galmiche ◽  
Nello Blaser ◽  
Morten Brun ◽  
Helwig Hauser ◽  
Thomas Spengler ◽  
...  

Probability distributions based on ensemble forecasts are commonly used to assess uncertainty in weather prediction. However, interpreting these distributions is not trivial, especially in the case of multimodality with distinct likely outcomes. The conventional summary employs the mean and standard deviation across ensemble members, which works well for unimodal, Gaussian-like distributions. In the case of multimodality, this is misleading and discards crucial information.

We aim to combine previously developed clustering algorithms from machine learning and topological data analysis to extract useful information such as the number of clusters in an ensemble. Given the chaotic behaviour of the atmosphere, machine learning techniques can provide relevant results even if no, or very little, a priori information about the data is available. In addition, topological methods that analyse the shape of the data can make results explainable.

Given an ensemble of univariate time series, a graph is generated whose edges and vertices represent clusters of members, including additional information for each cluster such as the members belonging to it, their uncertainty, and their relevance according to the graph. In the case of multimodality, this approach provides relevant and quantitative information beyond the commonly used mean-and-standard-deviation summary and helps to further characterise predictability.
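The cluster-extraction step can be illustrated on a toy bimodal ensemble. A plain 1-D k-means (an illustrative stand-in for the clustering and topological machinery described above, applied to scalar forecasts rather than full time series) recovers the two likely outcomes that the ensemble mean obscures.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy ensemble: 50 members forecasting one scalar quantity, drawn from
# a bimodal distribution with two distinct likely outcomes.
ensemble = np.concatenate([rng.normal(-2.0, 0.3, 25), rng.normal(3.0, 0.4, 25)])

def kmeans_1d(values, k, iters=50):
    """Plain 1-D k-means: returns cluster centers and member labels."""
    centers = np.quantile(values, np.linspace(0.1, 0.9, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        centers = np.array([values[labels == j].mean() for j in range(k)])
    return centers, labels

centers, labels = kmeans_1d(ensemble, k=2)

# Summarize each cluster: size, center, and spread -- information that
# the single ensemble mean (here roughly 0.5) would hide entirely.
summary = {j: (int((labels == j).sum()), centers[j], ensemble[labels == j].std())
           for j in range(2)}
```

The per-cluster sizes act as empirical probabilities of each outcome, which is exactly the kind of quantitative multimodality information the graph representation is meant to convey.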

