Combinatorial Polycation Synthesis and Causal Machine Learning Reveal Divergent Polymer Design Rules for Effective pDNA and Ribonucleoprotein Delivery

Author(s):  
Ramya Kumar ◽  
Ngoc Le ◽  
Felipe Oviedo ◽  
Mary E. Brown ◽  
Theresa M. Reineke

The development of polymers that can replace engineered viral vectors in clinical gene therapy has proven elusive despite the vast portfolios of multifunctional polymers generated by advances in polymer synthesis. Functional delivery of payloads such as plasmids (pDNA) and ribonucleoproteins (RNP) to various cellular populations and tissue types requires design precision. Here, we systematically screen a combinatorially designed library of 43 well-defined polymers, ultimately identifying a lead polycationic vehicle (P38) for efficient pDNA delivery. Further, we demonstrate the versatility of P38 in co-delivering spCas9 RNP and pDNA payloads to mediate homology-directed repair as well as in facilitating efficient pDNA delivery in ARPE-19 cells. P38 achieves nuclear import of pDNA and eludes lysosomal processing far more effectively than a structural analog that does not deliver pDNA as efficiently. To reveal the physicochemical drivers of P38's gene delivery performance, SHapley Additive exPlanations (SHAP) are computed for nine polyplex features, and a causal model is applied to evaluate the average treatment effect of the most important features selected by SHAP. Our machine learning interpretability and causal inference approach derives structure-function relationships underlying delivery efficiency, polyplex uptake, and cellular viability, and probes the overlap in polymer design criteria between RNP and pDNA payloads. Together, combinatorial polymer synthesis, parallelized biological screening, and machine learning establish that pDNA delivery demands careful tuning of polycation protonation equilibria while RNP payloads are delivered most efficaciously by polymers that deprotonate cooperatively via hydrophobic interactions. These payload-specific design guidelines will inform further design of bespoke polymers for specific therapeutic contexts.
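For small feature sets, the SHAP attributions referenced above coincide with the exact Shapley values of a coalitional game over features. A minimal sketch, using a toy stand-in model and hypothetical polyplex descriptors (none of the names, values, or the model below come from the paper), illustrates the computation and its efficiency property, i.e. attributions sum to the difference between the prediction and the baseline:

```python
from itertools import combinations
from math import factorial

# Hypothetical polyplex descriptors (illustrative names, not the paper's nine features):
FEATURES = ["protonation", "hydrophobicity", "size"]

def model(x):
    # Toy stand-in for a trained delivery-efficiency predictor.
    return 2.0 * x[0] + 0.5 * x[1] - 0.3 * x[2] + 1.5 * x[0] * x[1]

BASELINE = [0.2, 0.1, 0.5]  # background/reference feature values

def value(subset, x):
    # v(S): prediction with features in S taken from x, the rest from BASELINE.
    z = [x[i] if i in subset else BASELINE[i] for i in range(len(x))]
    return model(z)

def shapley_values(x):
    # Exact Shapley values: weighted average marginal contribution over all subsets.
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (value(set(S) | {i}, x) - value(set(S), x))
    return phi

x = [0.8, 0.6, 0.3]
phi = shapley_values(x)
# Efficiency property: sum(phi) equals model(x) - model(BASELINE).
```

Production SHAP implementations approximate these values for large feature sets; the exact enumeration above is exponential in the number of features and only practical for toy cases like this one.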

2019 ◽  
Vol 116 (10) ◽  
pp. 4156-4165 ◽  
Author(s):  
Sören R. Künzel ◽  
Jasjeet S. Sekhon ◽  
Peter J. Bickel ◽  
Bin Yu

There is growing interest in estimating and analyzing heterogeneous treatment effects in experimental and observational studies. We describe a number of metaalgorithms that can take advantage of any supervised learning or regression method in machine learning and statistics to estimate the conditional average treatment effect (CATE) function. Metaalgorithms build on base algorithms—such as random forests (RFs), Bayesian additive regression trees (BARTs), or neural networks—to estimate the CATE, a function that the base algorithms are not designed to estimate directly. We introduce a metaalgorithm, the X-learner, that is provably efficient when the number of units in one treatment group is much larger than in the other and can exploit structural properties of the CATE function. For example, if the CATE function is linear and the response functions in treatment and control are Lipschitz-continuous, the X-learner can still achieve the parametric rate under regularity conditions. We then introduce versions of the X-learner that use RF and BART as base learners. In extensive simulation studies, the X-learner performs favorably, although none of the metalearners is uniformly the best. In two persuasion field experiments from political science, we demonstrate how our X-learner can be used to target treatment regimes and to shed light on underlying mechanisms. A software package is provided that implements our methods.
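The X-learner's three stages can be sketched with ordinary least-squares base learners on noiseless synthetic data (purely illustrative; the authors' package uses RF and BART base learners, and the linear/noiseless setup below is an assumption made to keep the example exact):

```python
import numpy as np

def fit_linear(X, y):
    # Ordinary least-squares base learner with intercept; returns a predictor.
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda Xn: np.column_stack([np.ones(len(Xn)), Xn]) @ coef

def x_learner(X0, y0, X1, y1, g=0.5):
    # Stage 1: per-arm outcome models.
    mu0, mu1 = fit_linear(X0, y0), fit_linear(X1, y1)
    # Stage 2: imputed individual treatment effects.
    d1 = y1 - mu0(X1)   # treated: observed outcome minus estimated control outcome
    d0 = mu1(X0) - y0   # control: estimated treated outcome minus observed outcome
    tau1, tau0 = fit_linear(X1, d1), fit_linear(X0, d0)
    # Stage 3: propensity-weighted combination of the two CATE estimates.
    return lambda Xn: g * tau0(Xn) + (1 - g) * tau1(Xn)

# Noiseless synthetic data with an unbalanced design: true CATE is 0.5 + x.
rng = np.random.default_rng(0)
X0, X1 = rng.uniform(0, 1, (200, 1)), rng.uniform(0, 1, (50, 1))
y0 = 1 + 2 * X0[:, 0]
y1 = 1 + 2 * X1[:, 0] + (0.5 + X1[:, 0])
cate = x_learner(X0, y0, X1, y1)
est = cate(np.array([[0.0], [1.0]]))  # true CATE at x=0 and x=1 is 0.5 and 1.5
```

Note how the imbalance is exploited: tau0 is fit on the large control group using effects imputed from mu1, so information from the small treated arm is transferred to where the data is plentiful.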


2021 ◽  
Author(s):  
Lars Holmberg

Machine Learning (ML) and Artificial Intelligence (AI) impact many aspects of human life, from recommending a significant other to assisting the search for extraterrestrial life. The area develops rapidly, and exciting unexplored design spaces are constantly laid bare. The focus of this work is one of these areas: ML systems in which decisions concerning ML model training, usage, and selection of the target domain lie in the hands of domain experts; that is, ML systems that function as tools that augment and/or enhance human capabilities. The approach presented is denoted Human In Command ML (HIC-ML). To enquire into this research domain, design experiments of varying fidelity were used. Two of these experiments focus on augmenting human capabilities and target the domains of commuting and battery sorting. One experiment focuses on enhancing human capabilities by identifying similar hand-painted plates. The experiments serve as illustrative examples for exploring settings where domain experts can potentially train an ML model independently and, in an iterative fashion, interact with it and interpret and understand its decisions. HIC-ML should be seen as a governance principle that focuses on adding value and meaning for users. In this work, concrete application areas are presented and discussed. To open the area up for the design of ML-based products, an abstract model for HIC-ML is constructed and design guidelines are proposed. In addition, terminology and abstractions useful when designing for explicability are presented, with structure and rigour derived from scientific explanations. Together, this opens up a contextual shift in ML and makes new application areas probable: areas that naturally couple the usage of AI technology to human virtues and can potentially, as a consequence, result in a democratisation of the usage of, and knowledge concerning, this powerful technology.


2018 ◽  
Vol 26 (1) ◽  
pp. 67-87 ◽  
Author(s):  
Emma Hart ◽  
Kevin Sim

Although the use of ensemble methods in machine-learning is ubiquitous due to their proven ability to outperform their constituent algorithms, ensembles of optimisation algorithms have received relatively little attention. Existing approaches lag behind machine-learning in both theory and practice, with no principled design guidelines available. In this article, we address fundamental questions regarding ensemble composition in optimisation using the domain of bin-packing as an example. In particular, we investigate the trade-off between accuracy and diversity, and whether diversity metrics can be used as a proxy for constructing an ensemble, proposing a number of novel metrics for comparing algorithm diversity. We find that randomly composed ensembles can outperform ensembles of high-performing algorithms under certain conditions and that judicious choice of diversity metric is required to construct good ensembles. The method and findings can be generalised to any metaheuristic ensemble, and lead to better understanding of how to undertake principled ensemble design.
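As a concrete toy of the setting above (illustrative only; the article studies metaheuristic ensembles with much richer diversity metrics), an "ensemble" of bin-packing heuristics simply runs every member on an instance and keeps the best packing found, so it can never do worse than its best constituent on that instance:

```python
def first_fit(items, cap):
    # Place each item in the first bin with room; open a new bin otherwise.
    bins = []
    for it in items:
        for b in bins:
            if sum(b) + it <= cap:
                b.append(it)
                break
        else:
            bins.append([it])
    return len(bins)

def best_fit(items, cap):
    # Place each item in the feasible bin that leaves the least slack.
    bins = []
    for it in items:
        best = min((b for b in bins if sum(b) + it <= cap),
                   key=lambda b: cap - sum(b) - it, default=None)
        if best is None:
            bins.append([it])
        else:
            best.append(it)
    return len(bins)

def ensemble(items, cap, algorithms):
    # Oracle-style ensemble: run every member, keep the best packing found.
    return min(alg(items, cap) for alg in algorithms)

items, cap = [5, 4, 3, 3, 2, 2, 1], 10
n_bins = ensemble(items, cap, [first_fit, best_fit])
```

The value of diversity shows up across a benchmark set: two heuristics that win on different instances lower the ensemble's total bin count even if one of them is weaker on average, which is the accuracy/diversity trade-off the article investigates.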


2021 ◽  
Vol 263 (3) ◽  
pp. 3044-3055 ◽
Author(s):  
Alessandro Casaburo ◽  
Dario Magliacano ◽  
Giuseppe Petrone ◽  
Francesco Franco ◽  
Sergio De Rosa

The scope of this work is to consolidate research dealing with the vibroacoustics of periodic media. This investigation aims at developing and validating tools for the design of global vibroacoustic treatments based on foam cores with embedded periodic patterns, which allow passive control of acoustic paths in layered concepts. Firstly, a numerical test campaign is carried out by considering solid (but still non-perfectly rigid) inclusions in a 3D-modeled porous structure; the periodic nature of the meta-core itself causes the excitation of additional acoustic modes. Then, design guidelines are provided for predicting several possible sets of characteristic parameters (i.e. inclusion geometry, elastic and foam properties) that, constrained by the imposed mass and thickness of the acoustic package, may satisfy the target functions (i.e. the frequency at which the first Transmission Loss peak appears, together with its amplitude). Results are obtained through machine learning algorithms, which may constitute a good basis for preliminary design considerations and for further generalization.
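The design loop described above, a trained model scanned for parameter sets that respect a mass constraint while hitting a target first Transmission Loss peak, can be sketched with a purely synthetic surrogate. Every functional form and number below is invented for illustration and does not come from the paper:

```python
import numpy as np

def surrogate_peak_freq(size, stiffness):
    # Toy surrogate in place of the trained ML model: maps normalised
    # inclusion size and foam stiffness to the first TL peak frequency [Hz].
    return 800 + 400 * stiffness - 300 * size

def inverse_design(target_hz, mass_budget, tol=25.0):
    # Scan the parameter grid, keeping sets that respect the mass budget
    # and land within `tol` of the target TL peak frequency.
    candidates = []
    for size in np.linspace(0.1, 1.0, 19):
        for stiff in np.linspace(0.1, 1.0, 19):
            mass = 2.0 * size + 1.0 * stiff  # illustrative mass model
            if (mass <= mass_budget
                    and abs(surrogate_peak_freq(size, stiff) - target_hz) <= tol):
                candidates.append((size, stiff))
    return candidates

feasible_sets = inverse_design(target_hz=900.0, mass_budget=2.0)
```

The point of the surrogate is speed: once fitted to a modest number of full vibroacoustic simulations, sweeping thousands of candidate parameter sets against the constraints becomes cheap.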


2017 ◽  
Vol 48 (5) ◽  
pp. 78-94 ◽  
Author(s):  
Giorgio Locatelli ◽  
Miljan Mikic ◽  
Milos Kovacevic ◽  
Naomi Brookes ◽  
Nenad Ivanisevic

Megaprojects are often associated with poor delivery performance and poor benefits realization. This article provides a method of identifying, in a quantitative and rigorous manner, the characteristics related to project management success in megaprojects. It provides an investigation of how stakeholders can use this knowledge to ensure more effective design and delivery of megaprojects. The research is grounded in 44 megaprojects and a systematic, empirically based methodology that employs Fisher's exact test and machine learning techniques to identify the correlation between megaproject characteristics and performance, paving the way to an understanding of their causation.
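Fisher's exact test, as used in the study, evaluates a 2x2 contingency table (e.g. a project characteristic present/absent vs. good/poor performance) by summing the exact hypergeometric probabilities of all tables, with the same margins, that are no more probable than the observed one. A self-contained sketch of the two-sided version (the example table is invented, not from the article's data):

```python
from math import comb

def fisher_exact_p(table):
    # Two-sided Fisher's exact test on a 2x2 table [[a, b], [c, d]]:
    # sum the hypergeometric probabilities of all tables sharing the
    # observed margins that are no more probable than the observed table.
    (a, b), (c, d) = table
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d
    def prob(a_):
        return comb(r1, a_) * comb(r2, c1 - a_) / comb(n, c1)
    p_obs = prob(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    return sum(prob(a_) for a_ in range(lo, hi + 1) if prob(a_) <= p_obs + 1e-12)

# e.g. characteristic present/absent (rows) vs. on-budget/over-budget (columns)
p = fisher_exact_p([[3, 1], [1, 3]])
```

Being exact rather than asymptotic, the test remains valid at the small sample sizes typical of megaproject datasets (here, 44 projects), where a chi-squared approximation would be unreliable.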


Author(s):  
Anthony D. McDonald ◽  
Nilesh Ade ◽  
S. Camille Peres

Objective The goal of this study is to assess machine learning for predicting procedure performance from operator and procedure characteristics. Background Procedures are vital for the performance and safety of high-risk industries. Current procedure design guidelines are insufficient because they rely on subjective assessments and qualitative analyses that struggle to integrate and quantify the diversity of factors that influence procedure performance. Method We used data from a 25-participant study with four procedures, conducted on a high-fidelity oil extraction simulation to develop logistic regression (LR), random forest (RF), and decision tree (DT) algorithms that predict procedure step performance from operator, step, readability, and natural language processing-based features. Features were filtered using the Boruta approach. The algorithms were trained and optimized with a repeated 10-fold cross-validation. After training, inference was performed using variable importance and partial dependence plots. Results The RF, DT, and LR algorithms with all features had an area under the receiver operating characteristic curve (AUC) of 0.78, 0.77, and 0.75, respectively, and significantly outperformed the LR with only operator features (LROP), with an AUC of 0.61. The most important features were experience, familiarity, total words, and character-based metrics. The partial dependence plots showed that steps with fewer words, abbreviations, and characters were correlated with correct step performance. Conclusion Machine learning algorithms are a promising approach for predicting step-level procedure performance, with acknowledged limitations on interpolating to nonobserved data, and may help guide procedure design after validation with additional data on further tasks. Application After validation, the inferences from these models can be used to generate procedure design alternatives.
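The modelling setup, step-level features such as word counts and abbreviation counts predicting correct/incorrect step performance, can be sketched with a from-scratch logistic regression. The study's pipeline used LR/RF/DT with Boruta feature filtering and repeated 10-fold cross-validation; the feature values and the minimal trainer below are invented stand-ins:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def train_logreg(X, y, lr=0.5, epochs=2000):
    # Plain gradient-descent logistic regression with an intercept term.
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        w -= lr * Xb.T @ (sigmoid(Xb @ w) - y) / len(y)
    return w

# Invented step-level features: [operator experience (scaled),
# total words in the step (scaled), abbreviation count]
X = np.array([[0.9, 0.2, 0], [0.8, 0.3, 1], [0.7, 0.4, 0],
              [0.2, 0.9, 3], [0.3, 0.8, 2], [0.1, 0.7, 3]], dtype=float)
y = np.array([1, 1, 1, 0, 0, 0], dtype=float)  # 1 = step performed correctly

w = train_logreg(X, y)
pred = (sigmoid(np.column_stack([np.ones(len(X)), X]) @ w) > 0.5).astype(int)
```

Consistent with the study's partial dependence findings, the fitted weights on this toy data come out positive for experience and negative for word and abbreviation counts; on real data one would evaluate with held-out folds (e.g. AUC) rather than training accuracy.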

