modeling power
Recently Published Documents


TOTAL DOCUMENTS

169
(FIVE YEARS 29)

H-INDEX

20
(FIVE YEARS 3)

2021 ◽  
Author(s):  
Pavel V. Afonine ◽  
Paul D. Adams ◽  
Oleg V Sobolev ◽  
Alexandre Urzhumtsev

Bulk solvent is a major component of bio-macromolecular crystals and therefore contributes significantly to diffraction intensities. Accurate modeling of the bulk-solvent region has been recognized as important for many crystallographic calculations, from the computation of R-factors and density maps to model building and refinement. Owing to its simplicity and computational and modeling power, the flat (mask-based) bulk-solvent model introduced by Jiang & Brünger (1994) is used by most modern crystallographic software packages to account for disordered solvent. In this manuscript we describe further developments of the mask-based model that improve the fit between the model and the data and aid in map interpretation. The new algorithm, here referred to as the mosaic bulk-solvent model, considers solvent variation across the unit cell. The mosaic model is implemented in the Computational Crystallography Toolbox (cctbx) and can be used in Phenix in most contexts where accounting for bulk solvent is required. It has been optimized and validated using a sufficiently large subset of the Protein Data Bank entries that have crystallographic data available.
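For orientation, the flat model adds a scaled, resolution-smeared contribution from the solvent mask to the calculated structure factors, commonly written as F_model = F_calc + k_sol · exp(-B_sol · s²/4) · F_mask. The snippet below is a minimal NumPy sketch of that correction under stated assumptions; it is not the Phenix/cctbx implementation, the arrays f_calc, f_mask and s_sq are assumed inputs, and k_sol, b_sol are illustrative starting values.

```python
import numpy as np

def flat_bulk_solvent_correction(f_calc, f_mask, s_sq, k_sol=0.35, b_sol=46.0):
    """Sketch of the flat (mask-based) bulk-solvent correction.

    f_calc : complex structure factors of the atomic model
    f_mask : complex structure factors of the solvent-mask region
    s_sq   : 1/d^2 for each reflection (A^-2)
    k_sol, b_sol : illustrative bulk-solvent scale and smearing B-factor
    """
    # Scale and blur the mask contribution, then add it to the atomic part.
    solvent_scale = k_sol * np.exp(-b_sol * s_sq / 4.0)
    return f_calc + solvent_scale * f_mask

# Hypothetical usage with toy data for three reflections.
f_calc = np.array([10 + 2j, 5 - 1j, 3 + 0j])
f_mask = np.array([1 + 0.5j, 0.8 - 0.2j, 0.4 + 0.1j])
s_sq = np.array([0.01, 0.04, 0.09])
f_model = flat_bulk_solvent_correction(f_calc, f_mask, s_sq)
print(np.abs(f_model))  # amplitudes entering R-factor and map calculations
```

The mosaic model described in the abstract generalizes this picture by allowing the solvent contribution to vary between regions of the unit cell instead of assuming a single flat value everywhere.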


2021 ◽  
Author(s):  
Rishit Dagli ◽  
Ali Mustufa Shaikh ◽  
Hussain Mahdi ◽  
Sameer Nanivadekar

In this paper, we focus on creating a keyword extractor for job descriptions and related text corpora to improve search engine optimization, using attention-based deep learning techniques. Millions of jobs are posted, but most end up not being found due to improper SEO and keyword management. We aim to make the extractor easy to use and applicable to large numbers of job descriptions. We also use these algorithms to screen and draw insights from large numbers of resumes, and to summarize and generate keywords for general text or scientific articles. We further investigate the modeling power of BERT (Bidirectional Encoder Representations from Transformers) for keyword extraction from job descriptions, and validate our results by providing a fully functional API and testing the model on real-time job descriptions.
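As a rough illustration of a BERT-based keyword extractor (not the authors' released API), the sketch below embeds a job description and candidate n-gram phrases with a pretrained BERT model from Hugging Face transformers and ranks candidates by cosine similarity to the document embedding. The model name, n-gram range, and top_n are illustrative assumptions.

```python
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.feature_extraction.text import CountVectorizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts):
    """Mean-pooled BERT embeddings for a list of strings."""
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state       # (batch, tokens, 768)
    mask = inputs["attention_mask"].unsqueeze(-1)        # ignore padding tokens
    return ((hidden * mask).sum(1) / mask.sum(1)).numpy()

def extract_keywords(job_description, top_n=5):
    # Candidate phrases: uni- and bi-grams without English stop words.
    vectorizer = CountVectorizer(ngram_range=(1, 2), stop_words="english")
    candidates = vectorizer.fit([job_description]).get_feature_names_out().tolist()
    doc_vec = embed([job_description])
    cand_vecs = embed(candidates)
    # Cosine similarity between each candidate and the full description.
    sims = (cand_vecs @ doc_vec.T).ravel() / (
        np.linalg.norm(cand_vecs, axis=1) * np.linalg.norm(doc_vec) + 1e-9
    )
    return [candidates[i] for i in np.argsort(-sims)[:top_n]]

print(extract_keywords("We are hiring a machine learning engineer with Python, "
                       "TensorFlow and NLP experience to build search ranking models."))
```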


Author(s):  
Dr. Joel Sunny Deol Gosu ◽  
Dr. Pullagura Priyadarsini ◽  
Ravi Kanth Motupalli

Every day, millions of people in many institutions communicate with each other over the Internet. The past two decades have witnessed unprecedented levels of Internet use by people around the world, and almost alongside these rapid developments, an ever-increasing number of attacks on the Internet is reported every minute. In such a difficult environment, Anomaly Detection Systems (ADS) play an important role in monitoring and analyzing daily Internet activity for security breaches and threats. However, the analytical data routinely generated from computer networks are usually enormous in size and of little immediate use. This creates a major challenge for an ADS, which must examine all the features of a given dataset to identify intrusive patterns. Feature selection is therefore an important factor in modeling anomaly-based intrusion detection systems: an irrelevant feature can lead to overfitting, which in turn negatively affects the modeling power of classification algorithms. The objective of this study is to analyze and select the most discriminating input features for the construction of effective and computationally efficient schemes for an ADS. In the first step, a heuristic algorithm called IG-BA is proposed for dimensionality reduction, selecting an optimal feature subset based on the concept of entropy. The relevant and meaningful features are then passed to a number of classifiers. Experiments were conducted on the CICIDS-2017 dataset by applying (1) Random Forest (RF), (2) Bayes Network (BN), (3) Naive Bayes (NB), (4) J48 and (5) Random Tree (RT), with results showing better detection precision and faster execution time. The proposed heuristic algorithm outperforms existing ones, being both more accurate in detection and faster. Random Forest emerges as the best classifier for the feature selection technique, scoring over the others by virtue of its accuracy on the optimally selected features.
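As a hedged illustration only: the abstract does not spell out IG-BA, so the sketch below shows just an entropy-based ranking step (mutual information with the class label) followed by the Random Forest classifier reported as strongest, using scikit-learn. The file name cicids2017_preprocessed.csv, the "Label" column, and k=20 are assumptions for illustration, and the metaheuristic search component of IG-BA is not reproduced.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical path to a preprocessed CICIDS-2017 CSV with a "Label" column.
df = pd.read_csv("cicids2017_preprocessed.csv")
X, y = df.drop(columns=["Label"]), df["Label"]

# Entropy-based ranking: keep the k features with the highest mutual
# information with the class label (a stand-in for the IG step of IG-BA).
selector = SelectKBest(score_func=mutual_info_classif, k=20)
X_sel = selector.fit_transform(X, y)
print("Selected features:", list(X.columns[selector.get_support()]))

# Random Forest, reported in the abstract as the best-performing classifier.
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.3,
                                           random_state=42, stratify=y)
clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=42)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```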


2020 ◽  
Vol 91 (11) ◽  
pp. 673-680
Author(s):  
A. B. Petrochenkov ◽  
A. V. Romodin ◽  
D. Yu. Leizgold ◽  
A. S. Semenov

Author(s):  
Hui Cai ◽  
Xinya Song ◽  
Jan-Philipp Hammer ◽  
Teng Jiang ◽  
Steffen Schlegel ◽  
...  
