Frontiers in the Simulation of Dislocations

2020 ◽  
Vol 50 (1) ◽  
pp. 437-464 ◽  
Author(s):  
Nicolas Bertin ◽  
Ryan B. Sills ◽  
Wei Cai

Dislocations play a vital role in the mechanical behavior of crystalline materials during deformation. To capture dislocation phenomena across all relevant scales, a multiscale modeling framework of plasticity has emerged, with the goal of reaching a quantitative understanding of microstructure–property relations, for instance, to predict the strength and toughness of metals and alloys for engineering applications. This review describes the state of the art of the major dislocation modeling techniques, and then discusses how recent progress can be leveraged to advance the frontiers in simulations of dislocations. The frontiers of dislocation modeling include opportunities to establish quantitative connections between the scales, validate models against experiments, and use data science methods (e.g., machine learning) to gain an understanding of and enhance the current predictive capabilities.

2020 ◽  
Author(s):  
Saeed Nosratabadi ◽  
Amir Mosavi ◽  
Puhong Duan ◽  
Pedram Ghamisi ◽  
Ferdinand Filip ◽  
...  

This paper provides a state-of-the-art investigation of advances in data science methods in emerging economic applications. The analysis covers novel data science methods in four classes: deep learning models, hybrid deep learning models, hybrid machine learning models, and ensemble models. Application domains include a wide and diverse range of economics research, from the stock market, marketing, and e-commerce to corporate banking and cryptocurrency. The PRISMA method, a systematic literature review methodology, was used to ensure the quality of the survey. The findings reveal that the trends follow the advancement of hybrid models, which, based on the accuracy metric, outperform other learning algorithms. It is further expected that the trends will converge toward the advancement of sophisticated hybrid deep learning models.
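As a concrete illustration of the "hybrid deep learning" class the survey highlights, the sketch below combines a convolutional layer with an LSTM for next-day price prediction. The layer sizes, window length, feature set, and use of Keras are illustrative assumptions, not details taken from any of the surveyed papers.

```python
# A minimal sketch of a hybrid CNN-LSTM model for next-day price prediction.
# All architecture choices here are assumptions for illustration only.
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, LSTM, Dense

WINDOW = 30      # days of history per sample (assumed)
N_FEATURES = 5   # e.g. open, high, low, close, volume (assumed)

model = Sequential([
    # Convolution extracts local patterns from each 30-day window.
    Conv1D(32, kernel_size=3, activation="relu",
           input_shape=(WINDOW, N_FEATURES)),
    MaxPooling1D(pool_size=2),
    # LSTM models the temporal ordering of the pooled features.
    LSTM(64),
    # Single output: next-day closing price.
    Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Dummy data, only to show the expected input/output shapes.
X = np.random.rand(256, WINDOW, N_FEATURES)
y = np.random.rand(256)
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```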


Author(s):  
Ihor Ponomarenko ◽  
Oleksandra Lubkovska

The subject of the research is the use of data science methods in health care for integrated data processing and analysis in order to optimize economic and specialized processes. The purpose of this article is to address issues related to the specifics of applying Data Science methods in health care on the basis of comprehensive information obtained from various sources. Methodology. The research methodology comprises system-structural and comparative analysis (to study the application of BI systems when working with large data sets); the monographic method (the study of various software solutions in the business intelligence market); and economic analysis (to assess the possibility of using business intelligence systems to strengthen the competitive position of companies). The scientific novelty lies in identifying the main sources of data on key processes in the medical field. Examples of innovative methods of collecting information in health care, which are becoming widespread in the context of digitalization, are presented. The main sources of health care data used in Data Science are revealed. The specifics of applying machine learning methods in health care, under conditions of increasing competition between market participants and growing demand for relevant products from the population, are presented. Conclusions. The intensifying integration of Data Science into the medical field is driven by the growth of digitized data (statistics, textual information, visualizations, etc.). Through the use of machine learning methods, doctors and other health professionals have new opportunities to improve the efficiency of the health care system as a whole. Key words: Data science, efficiency, information, machine learning, medicine, Python, healthcare.


2020 ◽  
Author(s):  
Patrick Knapp ◽  
Michael Glinsky ◽  
Benjamin Tobias ◽  
John Kline
Keyword(s):  

2020 ◽  
Author(s):  
Laura Melissa Guzman ◽  
Tyler Kelly ◽  
Lora Morandin ◽  
Leithen M’Gonigle ◽  
Elizabeth Elle

A challenge in conservation is the gap between knowledge generated by researchers and the information being used to inform conservation practice. This gap, widely known as the research-implementation gap, can limit the effectiveness of conservation practice. One way to address this is to design conservation tools that are easy for practitioners to use. Here, we implement data science methods to develop a tool to aid in the conservation of pollinators in British Columbia. Specifically, in collaboration with Pollinator Partnership Canada, we jointly developed an interactive web app with a two-fold goal: (i) to allow end users to easily find and interact with the data collected by researchers on pollinators in British Columbia (prior to development of this app, the data were buried in supplements to individual research publications) and (ii) to let users apply up-to-date statistical tools to analyse the phenological coverage of a set of plants. Previously, these tools required a high level of programming competency to access. Our app provides an example of one way to make the products of academic research more accessible to conservation practitioners. We also provide the source code to allow other developers to build similar apps suited to their data.
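A minimal sketch of the kind of phenological-coverage calculation such an app exposes: given bloom windows for a candidate planting list, what fraction of the pollinator flight season has at least one plant in bloom? The column names, species, bloom dates, and season bounds below are illustrative assumptions, not the app's actual data or schema.

```python
# Hedged sketch: phenological coverage of a hypothetical planting list.
import pandas as pd

# Hypothetical bloom windows (day-of-year start/end) for a planting list.
plants = pd.DataFrame({
    "species":     ["Salix hookeriana", "Rubus spectabilis", "Solidago canadensis"],
    "bloom_start": [60, 120, 220],
    "bloom_end":   [120, 180, 290],
})

SEASON_START, SEASON_END = 60, 300  # assumed pollinator flight season (day of year)

season_days = range(SEASON_START, SEASON_END + 1)
covered = [
    any(row.bloom_start <= day <= row.bloom_end for row in plants.itertuples())
    for day in season_days
]
coverage = sum(covered) / len(covered)
print(f"Phenological coverage: {coverage:.0%} of the season has at least one plant in bloom")
```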


2021 ◽  
Vol 7 (4) ◽  
pp. 208
Author(s):  
Mor Peleg ◽  
Amnon Reichman ◽  
Sivan Shachar ◽  
Tamir Gadot ◽  
Meytal Avgil Tsadok ◽  
...  

Triggered by the COVID-19 crisis, Israel’s Ministry of Health (MoH) held a virtual datathon based on deidentified governmental data. Organized by a multidisciplinary committee, Israel’s research community was invited to offer insights to help solve COVID-19 policy challenges. The Datathon was designed to develop operationalizable data-driven models to address COVID-19 health policy challenges. Specific relevant challenges were defined and diverse, reliable, up-to-date, deidentified governmental datasets were extracted and tested. Secure remote-access research environments were established. Registration was open to all citizens. Around a third of the applicants were accepted, and they were teamed to balance areas of expertise and represent all sectors of the community. Anonymous surveys for participants and mentors were distributed to assess usefulness and points for improvement and retention for future datathons. The Datathon included 18 multidisciplinary teams, mentored by 20 data scientists, 6 epidemiologists, 5 presentation mentors, and 12 judges. The insights developed by the three winning teams are currently considered by the MoH as potential data science methods relevant for national policies. Based on participants’ feedback, the process for future data-driven regulatory responses for health crises was improved. Participants expressed increased trust in the MoH and readiness to work with the government on these or future projects.


2021 ◽  
Author(s):  
Yuxiang Chen ◽  
Chuanlei Liu ◽  
Yang An ◽  
Yue Lou ◽  
Yang Zhao ◽  
...  

Machine learning and computer-aided approaches significantly accelerate molecular design and discovery in scientific and industrial fields that increasingly rely on data science for efficiency. The typical method, supervised learning, requires huge datasets. Semi-supervised machine learning approaches can exploit unlabeled data to improve modeling performance, but they are limited by the accumulation of prediction errors. Here, to screen solvents for the removal of methyl mercaptan, a type of organosulfur impurity in natural gas, we constructed a computational framework that integrates molecular similarity search and active learning methods, namely, molecular active selection machine learning (MASML). This framework identifies an optimal set of molecules through molecular similarity search and iterative addition to the training dataset. Among all 126,068 compounds in the initial dataset, three molecules were identified as promising for methyl mercaptan (MeSH) capture: benzylamine (BZA), p-methoxybenzylamine (PZM), and N,N-diethyltrimethylenediamine (DEAPA). Further experiments confirmed the effectiveness of our modeling framework for efficient molecular design and identification of methyl mercaptan capture solvents, with DEAPA presenting a Henry's law constant 89.4% lower than that of methyl diethanolamine (MDEA).
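A minimal sketch of the similarity-guided active-learning loop described above, under simplifying assumptions: molecules are represented by generic descriptor vectors, similarity is cosine similarity, and the surrogate is a random-forest regressor. The real MASML descriptors, similarity metric, and model are not specified here and may differ.

```python
# Hedged sketch of a molecular active selection loop (assumed details).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)
pool_X = rng.random((5000, 64))   # unlabeled candidate pool (descriptor vectors)
train_X = rng.random((50, 64))    # small labeled seed set
train_y = rng.random(50)          # e.g. measured MeSH capture performance

for iteration in range(5):
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(train_X, train_y)

    # Similarity search: rank pool molecules by similarity to the current
    # best-performing training molecules.
    best = train_X[np.argsort(train_y)[-5:]]
    sim = cosine_similarity(pool_X, best).max(axis=1)
    picks = np.argsort(sim)[-10:]  # most similar candidates

    # "Label" the picks (an experiment or high-fidelity calculation in
    # practice; random values here, purely to keep the sketch runnable).
    new_y = rng.random(len(picks))
    train_X = np.vstack([train_X, pool_X[picks]])
    train_y = np.concatenate([train_y, new_y])
    pool_X = np.delete(pool_X, picks, axis=0)

print(f"Final training set size: {len(train_y)} molecules")
```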


Author(s):  
Dileep Kumar G.

Tree-based learning techniques are considered among the best and most widely used supervised learning methods. Tree-based methods empower predictive models with high accuracy, stability, and ease of interpretation. Unlike linear models, they map non-linear relationships well. These methods are adaptable to solving almost any kind of problem at hand (classification or regression). Methods like decision trees, random forests, and gradient boosting are widely used in all kinds of machine learning and data science problems. Hence, it is important for every data analyst to learn these algorithms and use them for modeling. This chapter guides the learner through tree-based modeling techniques from scratch.
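The short sketch below compares the three tree-based methods mentioned above on a built-in scikit-learn dataset; the dataset choice and hyperparameters are illustrative only and are not drawn from the chapter.

```python
# Illustrative comparison of decision tree, random forest, and gradient boosting.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

models = {
    "decision tree":     DecisionTreeClassifier(max_depth=4, random_state=42),
    "random forest":     RandomForestClassifier(n_estimators=200, random_state=42),
    "gradient boosting": GradientBoostingClassifier(random_state=42),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.3f}")
```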

