A Web Tool for Calculating Substituent Descriptors Compatible with Hammett Sigma Constants

2021 ◽  
Author(s):  
Peter Ertl

The electron-donating or -accepting power of organic substituents is an important parameter affecting many properties of the parent molecules, most notably their reactivity and the pKa of ionizable groups. These substituent properties are usually described by Hammett sigma constants, obtained by measuring the ionization of substituted benzoic acids. Although values of these constants have been measured for the most common functional groups, data for many important substituents are not available. Some time ago we reported a method to calculate substituent descriptors compatible with Hammett sigma constants using quantum chemically derived parameters. The present publication revisits the older study by applying more sophisticated methodology and a larger training data set, and introduces a free web tool for calculating substituent descriptors compatible with Hammett sigma constants, available at https://bitly.com/getsigmas.
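As a minimal illustration of how Hammett sigma constants describe substituent effects, the sketch below predicts the pKa of para-substituted benzoic acids from log(K_X/K_H) = ρσ. The sigma values are approximate literature compilation values, not output of the web tool described above.

```python
# Illustrative use of Hammett sigma constants (approximate textbook values,
# not output of the web tool above): predicting the pKa of a para-substituted
# benzoic acid from log(K_X / K_H) = rho * sigma.

# Literature sigma_para values; treat as approximate.
SIGMA_PARA = {"H": 0.00, "NO2": 0.78, "OCH3": -0.27, "Cl": 0.23}

PKA_BENZOIC = 4.20  # pKa of unsubstituted benzoic acid in water at 25 C
RHO = 1.00          # rho = 1 by definition for benzoic acid ionization

def predicted_pka(substituent: str) -> float:
    """pKa(X) = pKa(H) - rho * sigma, since a larger K means a lower pKa."""
    return PKA_BENZOIC - RHO * SIGMA_PARA[substituent]

print(predicted_pka("NO2"))   # 3.42 (4-nitrobenzoic acid, exp. ~3.44)
print(predicted_pka("OCH3"))  # 4.47 (4-methoxybenzoic acid)
```

Electron-withdrawing groups (positive sigma) stabilize the carboxylate anion and lower the pKa; donors (negative sigma) raise it.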


2019 ◽  
Vol 12 (2) ◽  
pp. 120-127 ◽  
Author(s):  
Wael Farag

Background: In this paper, a Convolutional Neural Network (CNN) that learns safe driving behavior and smooth steering manoeuvring is proposed as an enabler of autonomous driving technologies. The training data are collected from a front-facing camera together with the steering commands issued by an experienced driver driving in traffic as well as on urban roads. Methods: These data are then used to train the proposed CNN to perform what is called "Behavioral Cloning". The proposed Behavioral Cloning CNN is named "BCNet", and its deep seventeen-layer architecture was selected after extensive trials. BCNet is trained using the Adam optimization algorithm, a variant of the Stochastic Gradient Descent (SGD) technique. Results: The paper goes through the development and training process in detail and shows the image processing pipeline harnessed in the development. Conclusion: The proposed approach proved successful in cloning the driving behavior embedded in the training data set, as demonstrated by extensive simulations.
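The Adam update rule mentioned above can be sketched in a few lines of NumPy. This is a toy demonstration on a quadratic loss, not the BCNet CNN itself; the hyperparameters are the commonly used defaults, an assumption on our part.

```python
# Minimal NumPy sketch of the Adam optimizer (a variant of SGD with
# per-parameter adaptive step sizes), shown on a toy quadratic loss.
import numpy as np

def adam_minimize(grad, x0, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8, steps=200):
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)  # first-moment (mean of gradients) estimate
    v = np.zeros_like(x)  # second-moment (uncentered variance) estimate
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)   # bias correction for the warm-up phase
        v_hat = v / (1 - beta2 ** t)
        x = x - lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Loss L(x) = ||x - target||^2 has gradient 2 * (x - target).
target = np.array([3.0, -1.0])
x_min = adam_minimize(lambda x: 2 * (x - target), x0=[0.0, 0.0])
print(x_min)  # close to [3.0, -1.0]
```

In a real behavioral-cloning setup the same update would be applied to the CNN weights, with the gradient coming from backpropagation of the steering-angle loss.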


Author(s):  
Ritu Khandelwal ◽  
Hemlata Goyal ◽  
Rajveer Singh Shekhawat

Introduction: Machine learning is an intelligent technology that works as a bridge between business and data science. With data science involved, the business goal focuses on extracting valuable insights from the available data. A large part of Indian cinema is Bollywood, a multi-million-dollar industry. This paper attempts to predict whether an upcoming Bollywood movie will be a Blockbuster, Superhit, Hit, Average or Flop, applying machine learning techniques for classification and prediction. To build a classifier or prediction model, the first step is the learning stage, in which the training data set is used to train the model with some technique or algorithm. Methods: Classification and prediction techniques such as Support Vector Machine (SVM), Random Forest, Decision Tree, Naïve Bayes, Logistic Regression, AdaBoost, and KNN are applied in search of efficient and effective results. All these functionalities can be applied through GUI-based workflows organized into categories such as Data, Visualize, Model, and Evaluate. Result: The rules generated during the learning stage form the model and are used to predict future trends in different types of organizations. Conclusion: This paper focuses on a comparative analysis based on parameters such as accuracy and the confusion matrix to identify the best possible model for predicting movie success. Using advertisement propaganda, production houses can plan the best time to release the movie according to the predicted success rate to gain higher benefits.
Discussion: Data mining is the process of discovering patterns in large data sets, and the relationships discovered help solve business problems and predict forthcoming trends. This prediction can help production houses with advertisement propaganda and with planning their costs, making the movie more profitable.
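The evaluation metrics named above, accuracy and a confusion matrix, can be computed without any library for the five movie-outcome classes. The sample labels below are invented purely for illustration.

```python
# Dependency-free sketch of the evaluation metrics mentioned above
# (accuracy and a confusion matrix) for the five movie-outcome classes.
# The example labels are invented for illustration.

LABELS = ["Blockbuster", "Superhit", "Hit", "Average", "Flop"]

def confusion_matrix(y_true, y_pred, labels):
    idx = {lab: i for i, lab in enumerate(labels)}
    m = [[0] * len(labels) for _ in labels]
    for t, p in zip(y_true, y_pred):
        m[idx[t]][idx[p]] += 1  # rows: actual class, columns: predicted class
    return m

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = ["Hit", "Flop", "Hit", "Average", "Superhit"]
y_pred = ["Hit", "Flop", "Average", "Average", "Hit"]
print(accuracy(y_true, y_pred))  # 0.6
cm = confusion_matrix(y_true, y_pred, LABELS)
print(cm[LABELS.index("Hit")])   # row of predictions for the actual "Hit" class
```

The off-diagonal entries of the matrix show which outcome classes a model confuses, which is exactly what the comparative analysis in the paper relies on.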


1994 ◽  
Vol 59 (9) ◽  
pp. 2029-2041
Author(s):  
Oldřich Pytela ◽  
Taťjana Nevěčná

The kinetics of decomposition of 1,3-bis(4-methylphenyl)triazene catalyzed by 13 substituted benzoic acids at various concentrations have been measured in 25 vol.% aqueous methanol at 25.0 °C. The observed rate constants (297 data points) were used as values of the dependent variable in a series of models of the catalyzed decomposition. The catalytic species considered were the undissociated acid, its conjugate base, and the proton, in both specific and general catalysis. Some models presumed the formation of reactive or nonreactive complexes of the individual reactants. The substituent effect is described by the Hammett equation. The statistically best model, in which the observed rate constant is a superposition of a term describing the dependence on proton concentration and a term describing the dependence on the product of the concentrations of proton and conjugate base, is valid under the presumption of complete proton transfer from the catalyst acid to the substrate, which has been proved. The behaviour of the 4-dimethylamino, 4-amino, and 3-amino derivatives is anomalous (lower catalytic activity compared with benzoic acid), which supports the presumed participation of the conjugate base in the title process.
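The Hammett treatment used above amounts to a linear fit of log k against sigma, log(k_X/k_H) = ρσ. The sketch below estimates ρ by ordinary least squares; the data are synthetic (ρ = 1.5 built in), purely to show the fitting step, not the triazene data of this study.

```python
# Least-squares estimate of the Hammett reaction constant rho from
# log k vs. sigma. The data are synthetic (rho = 1.5 built in) and
# only illustrate the fitting step, not this study's measurements.
import numpy as np

sigma = np.array([-0.27, 0.00, 0.23, 0.37, 0.78])  # substituent constants
log_k = 1.5 * sigma + (-2.0)                        # log k = rho * sigma + log k_H

# Fit log k = rho * sigma + log k_H by ordinary least squares.
A = np.column_stack([sigma, np.ones_like(sigma)])
(rho, log_k_H), *_ = np.linalg.lstsq(A, log_k, rcond=None)
print(round(float(rho), 3), round(float(log_k_H), 3))  # 1.5 -2.0
```

The sign and magnitude of the fitted ρ then indicate whether the reaction is accelerated by electron-withdrawing or electron-donating substituents.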


2009 ◽  
Vol 74 (1) ◽  
pp. 29-42 ◽  
Author(s):  
Vilve Nummert ◽  
Mare Piirsalu ◽  
Signe Vahur ◽  
Oksana Travnikova ◽  
Ilmar A. Koppel

The second-order rate constants k (in dm3 mol–1 s–1) for alkaline hydrolysis of phenyl esters of meta-, para- and ortho-substituted benzoic acids, X-C6H4CO2C6H5, have been measured spectrophotometrically in aqueous 0.5 and 2.25 M Bu4NBr at 25 °C. The substituent effects for the para and meta derivatives were described using the Hammett relationship; for the ortho derivatives the Charton equation was used. For the ortho-substituted esters two steric scales were employed: the EsB and the Charton steric (υ) constants. On going from pure water to aqueous 0.5 and 2.25 M Bu4NBr, the meta and para polar effects and the ortho inductive and resonance effects in the alkaline hydrolysis of phenyl esters of substituted benzoic acids became stronger to nearly the same extent as found for the alkaline hydrolysis of C6H5CO2C6H4-X. The steric term of the ortho-substituted esters was almost independent of the media considered. The rate constants of alkaline hydrolysis of ortho-, meta- and para-substituted phenyl benzoates (X-C6H4CO2C6H5, C6H5CO2C6H4-X) and alkyl benzoates, C6H5CO2R, in water and in 0.5 and 2.25 M Bu4NBr were correlated with the corresponding IR stretching frequencies of the carbonyl group, (ΔνCO)X.
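The ortho-substituent analysis above partitions log k into inductive, resonance, and steric contributions (a Charton-type multiparameter treatment). The sketch below shows the generic multiple-regression step on synthetic descriptors; the coefficients and data are invented, not the authors' fit.

```python
# Generic multiple-regression sketch for a Charton-type treatment that
# splits log k into inductive, resonance, and steric terms. All numbers
# here are synthetic; this is not the authors' actual fit.
import numpy as np

rng = np.random.default_rng(0)
n = 12
sigma_I, sigma_R, E_s = rng.normal(size=(3, n))  # synthetic descriptor columns
true = np.array([1.2, 0.6, 0.4, -1.8])           # rho_I, rho_R, delta, intercept
X = np.column_stack([sigma_I, sigma_R, E_s, np.ones(n)])
log_k = X @ true                                  # noiseless synthetic response

coef, *_ = np.linalg.lstsq(X, log_k, rcond=None)
print(np.round(coef, 3))  # recovers [1.2, 0.6, 0.4, -1.8]
```

With noiseless data the fit recovers the built-in coefficients exactly; with real kinetic data the residuals would quantify how well the three-term separation describes the medium effects.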


2019 ◽  
Vol 9 (6) ◽  
pp. 1128 ◽  
Author(s):  
Yundong Li ◽  
Wei Hu ◽  
Han Dong ◽  
Xueyan Zhang

Using aerial cameras, satellite remote sensing or unmanned aerial vehicles (UAVs) equipped with cameras can facilitate search and rescue tasks after disasters. The traditional manual interpretation of huge numbers of aerial images is inefficient and can be replaced by machine learning-based methods combined with image processing techniques. With the development of machine learning, researchers have found that convolutional neural networks can effectively extract features from images. Some deep learning-based target detection methods, such as the single-shot multibox detector (SSD) algorithm, can achieve better results than traditional methods. However, the impressive performance of machine learning-based methods depends on numerous labeled samples, and given the complexity of post-disaster scenarios, obtaining many samples in the aftermath of a disaster is difficult. To address this issue, a damaged building assessment method using SSD with pretraining and data augmentation is proposed in the current study, with the following highlights. (1) Objects are detected and classified into undamaged buildings, damaged buildings, and ruins. (2) A convolutional auto-encoder (CAE) based on VGG16 is constructed and trained using unlabeled post-disaster images. As a transfer learning strategy, the weights of the SSD model are initialized with the weights of the CAE counterpart. (3) Data augmentation strategies, such as image mirroring, rotation, Gaussian blur, and Gaussian noise processing, are utilized to augment the training data set. As a case study, aerial images from Hurricane Sandy in 2012 were used to validate the proposed method's effectiveness. Experiments show that the pretraining strategy can improve overall accuracy by 10% compared with an SSD trained from scratch. These experiments also demonstrate that the data augmentation strategies can improve mAP and mF1 by 72% and 20%, respectively.
Finally, the method was further validated on another dataset, from Hurricane Irma, confirming that it is feasible.
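The augmentation operations listed above can be sketched with plain NumPy. Gaussian blur is omitted here to stay dependency-free (it would typically use a convolution routine such as SciPy's); a real pipeline would apply these to aerial image arrays before SSD training.

```python
# NumPy sketch of the augmentation operations listed above (mirroring,
# rotation, Gaussian noise). Gaussian blur is omitted to avoid extra
# dependencies; a real pipeline would also apply it.
import numpy as np

def augment(image, rng):
    """Return simple augmented variants of an (H, W) or (H, W, C) image array."""
    mirrored = np.flip(image, axis=1)                        # horizontal mirror
    rotated = np.rot90(image, k=1, axes=(0, 1))              # 90-degree rotation
    noisy = image + rng.normal(0.0, 0.05, size=image.shape)  # additive Gaussian noise
    return mirrored, rotated, noisy

rng = np.random.default_rng(42)
img = np.arange(12, dtype=float).reshape(3, 4)
mirrored, rotated, noisy = augment(img, rng)
print(mirrored.shape, rotated.shape)  # (3, 4) (4, 3)
```

Each variant keeps the semantic content (a damaged building stays a damaged building), which is why such transforms safely multiply a small labeled set.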


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Ryoya Shiode ◽  
Mototaka Kabashima ◽  
Yuta Hiasa ◽  
Kunihiro Oka ◽  
Tsuyoshi Murase ◽  
...  

Abstract The purpose of this study was to develop a deep learning network for estimating and constructing highly accurate 3D bone models directly from actual X-ray images, and to verify its accuracy. The data used were 173 computed tomography (CT) scans and 105 actual X-ray images of healthy wrist joints. To compensate for the small size of the dataset, digitally reconstructed radiography (DRR) images generated from CT were used as training data instead of actual X-ray images. At test time, DRR-like images were generated from the actual X-ray images and fed to the network, enabling high-accuracy estimation of a 3D bone model from a small data set. The 3D shapes of the radius and ulna were estimated from actual X-ray images with accuracies of 1.05 ± 0.36 and 1.45 ± 0.41 mm, respectively.
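The DRR idea used above can be illustrated at its simplest: a digitally reconstructed radiograph is a ray-sum projection of a CT volume onto a plane. Real DRR generation models X-ray attenuation and projection geometry; this toy version just sums voxels along one axis.

```python
# Toy illustration of a digitally reconstructed radiograph (DRR): project
# a 3D CT volume to a 2D radiograph-like image by summing voxel values
# along the ray direction. Real DRRs model attenuation and geometry.
import numpy as np

def simple_drr(ct_volume, axis=0):
    """Ray-sum projection of a 3D volume along the given axis."""
    return ct_volume.sum(axis=axis)

volume = np.zeros((4, 5, 6))
volume[1:3, 2, 3] = 1.0           # a small "bone" of two voxels on one ray
drr = simple_drr(volume, axis=0)  # project along the first axis
print(drr.shape)  # (5, 6)
print(drr[2, 3])  # 2.0 -- both voxels along the ray accumulate
```

Because DRRs are computed from CT, each one comes with a known ground-truth 3D shape, which is what makes them usable as training data in place of scarce paired X-ray/CT examples.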


Genetics ◽  
2021 ◽  
Author(s):  
Marco Lopez-Cruz ◽  
Gustavo de los Campos

Abstract Genomic prediction uses DNA sequences and phenotypes to predict genetic values. In homogeneous populations, theory indicates that the accuracy of genomic prediction increases with sample size. However, differences in allele frequencies and in linkage disequilibrium patterns can lead to heterogeneity in SNP effects. In this context, calibrating genomic predictions using a large, potentially heterogeneous, training data set may not lead to optimal prediction accuracy. Some studies have tried to address this sample size/homogeneity trade-off using training set optimization algorithms; however, this approach assumes that a single training data set is optimal for all individuals in the prediction set. Here, we propose an approach that identifies, for each individual in the prediction set, a subset of the training data (i.e., a set of support points) from which predictions are derived. The methodology that we propose is a Sparse Selection Index (SSI) that integrates Selection Index methodology with sparsity-inducing techniques commonly used in high-dimensional regression. The sparsity of the resulting index is controlled by a regularization parameter (λ); G-BLUP (the prediction method most commonly used in plant and animal breeding) arises as the special case λ = 0. In this study, we present the methodology and demonstrate (using two wheat data sets with phenotypes collected in ten different environments) that the SSI can achieve significant gains in prediction accuracy (between 5 and 10%) relative to G-BLUP.
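The G-BLUP baseline that the SSI generalizes can be sketched in a few lines: build a genomic relationship matrix G from centered marker genotypes, then predict a test individual's genetic value from all training records as u = g' (G + δI)⁻¹ (y − ȳ). The data below are synthetic and the shrinkage ratio δ is an assumed value, not the authors' estimates.

```python
# Minimal sketch of the G-BLUP predictor (the lambda = 0 special case of
# the SSI described above). Genotypes are synthetic; delta is an assumed
# shrinkage ratio, not an estimated variance component.
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 200
M = rng.choice([0.0, 1.0, 2.0], size=(n + 1, p))  # SNP genotypes; last row = test individual
Mc = M - M.mean(axis=0)                            # center each marker column
G = Mc @ Mc.T / p                                  # genomic relationship matrix

beta = rng.normal(0, 0.1, size=p)
y_all = M @ beta                                   # synthetic genetic values
y_train, y_test = y_all[:n], y_all[n]

delta = 0.1                                        # assumed residual/genetic variance ratio
G_train = G[:n, :n]
g_test = G[n, :n]                                  # relationships of test to training individuals
u_hat = g_test @ np.linalg.solve(G_train + delta * np.eye(n), y_train - y_train.mean())
pred = y_train.mean() + u_hat
print(round(float(pred), 2), round(float(y_test), 2))  # prediction vs. truth
```

In the SSI, the dense weight vector g' (G + δI)⁻¹ is replaced by a sparse one, so each prediction draws on only a subset of support points from the training set.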

