Development of the Multifactorial Computational Models of the Solid Propellants Combustion by Means of Data Science Methods – Phase II

Author(s):  
Victor S. Abrukov ◽  
Alexander N. Lukin ◽  
Charlie Oommen ◽  
Nichith Chandrasekaran ◽  
Rajaghatta S. Bharath ◽  
...  
Author(s):  
Victor S. Abrukov ◽  
Alexander N. Lukin ◽  
Nichith C ◽  
Charlie Oommen ◽  
Mikhail V. Kiselev ◽  
...  

Author(s):  
Victor S. Abrukov ◽  
Alexander N. Lukin ◽  
Charlie Oommen ◽  
VR Sanal Kumar ◽  
Nichith Chandrasekaran ◽  
...  

2019 ◽  
Vol 69 (1) ◽  
pp. 20-26 ◽  
Author(s):  
Victor S. Abrukov ◽  
Alexander N. Lukin ◽  
Darya A. Anufrieva ◽  
Charlie Oommen ◽  
V. R. Sanalkumar ◽  
...  

The efforts of a Russian-Indian research team in applying data science methods, in particular artificial neural networks (ANN), to the development of multi-factor computational models for studying the effects of additive properties on solid rocket propellant combustion are presented. The ability of ANNs to generalise the connections between the variables of combustion experiments, as well as to forecast "new experimental results", is demonstrated. The effects of catalyst particle size, oxidizer surface area, and kinetic parameters such as activation energy and heat release on the final ballistic properties of an AP-HTPB based propellant composition have been modelled using ANN methods. The validated ANN models can predict many unexplored regimes, such as pressures and oxidiser particle sizes for which experimental data are not available. Some of the routinely measured kinetic parameters extracted under non-combustion conditions could be related to properties under combustion conditions. The predicted results fall within the limits generally accepted for combustion conditions.


2020 ◽  
Vol 330 ◽  
pp. 01048
Author(s):  
Victor Abrukov ◽  
Darya Anufrieva ◽  
Alexander Lukin ◽  
Charlie Oommen ◽  
V. R. Sanalkumar ◽  
...  

The results of applying data science methods, in particular artificial neural networks, to the creation of new multifactor computational models of solid propellant (SP) combustion that solve both direct and inverse tasks are presented. The authors' own analytical platform, Loginom, was used to create the models. Models of the combustion of double-base SP with nano additives such as metals, metal oxides, and thermites were created from experimental data published in the scientific literature. The goal functions of the models were the burning rate (direct tasks) as well as the propellant composition (inverse tasks). The basis (script) for creating a Data Warehouse of SP combustion was developed. The Data Warehouse can be supplemented with new experimental data and metadata in automated mode and can serve as a basis for creating generalised combustion models of SP, and thus for beginning work in a new direction of combustion science, which the authors propose to call the "Propellant Combustion Genome" (by analogy with the well-known Materials Genome Initiative, USA). The "Propellant Combustion Genome" opens wide possibilities for accelerating the development of advanced propellants.
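The direct/inverse pairing described above can be illustrated with a deliberately simple surrogate. In this sketch a quadratic fit stands in for the ANN, and the data are synthetic (an assumed catalytic effect of additive mass fraction on burning rate, not values from the literature): the direct task predicts burning rate from composition, and the inverse task recovers the composition that yields a target burning rate by searching over the fitted model.

```python
import numpy as np

# Synthetic "experimental" points: burning rate vs nano-additive mass
# fraction w (assumed smooth catalytic trend; real data would come from
# the published literature the abstract refers to).
w = np.linspace(0.0, 0.05, 21)           # additive mass fraction
r = 8.0 + 60.0 * w - 400.0 * w ** 2      # burning rate, mm/s (illustrative)

# Direct task: fit a surrogate r(w); a quadratic stands in for the ANN.
coef = np.polyfit(w, r, 2)
def direct(x):
    return np.polyval(coef, x)

# Inverse task: given a target burning rate, search the composition grid
# for the fraction whose predicted rate is closest to the target.
def inverse(target_rate, grid=np.linspace(0.0, 0.05, 5001)):
    return float(grid[np.argmin(np.abs(direct(grid) - target_rate))])

# Round trip: the composition recovered for the rate predicted at w = 0.02
# should be close to 0.02 (the trend is monotone on this interval, so the
# inverse is unique).
w_star = inverse(direct(0.02))
```

With a real ANN surrogate the inverse task is solved the same way, by optimising over the inputs of the trained direct model rather than refitting.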


Author(s):  
Victor S. Abrukov ◽  
Alexander N. Lukin ◽  
Nichith Chandrasekaran ◽  
Charlie Oommen ◽  
Thianesh U.K ◽  
...  

2020 ◽  
Author(s):  
Saeed Nosratabadi ◽  
Amir Mosavi ◽  
Puhong Duan ◽  
Pedram Ghamisi ◽  
Ferdinand Filip ◽  
...  

This paper provides a state-of-the-art investigation of advances in data science in emerging economic applications. The analysis covers novel data science methods in four classes: deep learning models, hybrid deep learning models, hybrid machine learning, and ensemble models. Application domains include a wide and diverse range of economics research, from the stock market, marketing, and e-commerce to corporate banking and cryptocurrency. The PRISMA method, a systematic literature review methodology, was used to ensure the quality of the survey. The findings reveal that the trends follow the advancement of hybrid models, which, based on the accuracy metric, outperform other learning algorithms. It is further expected that the trends will converge toward advancements in sophisticated hybrid deep learning models.


Author(s):  
Ihor Ponomarenko ◽  
Oleksandra Lubkovska

The subject of the research is the possibility of using data science methods in health care for integrated data processing and analysis in order to optimise economic and specialised processes. The purpose of this article is to address issues related to the specifics of using Data Science methods in health care on the basis of comprehensive information obtained from various sources. Methodology. The research methodology comprises system-structural and comparative analyses (to study the application of BI systems when working with large data sets); the monographic method (the study of various software solutions on the business intelligence market); and economic analysis (to assess the possibility of using business intelligence systems to strengthen companies' competitive positions). The scientific novelty lies in identifying the main sources of data on key processes in the medical field. Examples of innovative methods of collecting information in health care, which are becoming widespread in the context of digitalisation, are presented. The main sources of health care data used in Data Science are revealed. The specifics of applying machine learning methods in health care under conditions of increasing competition between market participants and growing demand for relevant products are presented. Conclusions. The intensifying integration of Data Science into the medical field is due to the growth of digitised data (statistics, textual information, visualisations, etc.). Through the use of machine learning methods, doctors and other health professionals gain new opportunities to improve the efficiency of the health care system as a whole. Key words: Data science, efficiency, information, machine learning, medicine, Python, healthcare.


2020 ◽  
Author(s):  
Patrick Knapp ◽  
Michael Glinsky ◽  
Benjamin Tobias ◽  
John Kline
Keyword(s):  

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Ann-Marie Mallon ◽  
Dieter A. Häring ◽  
Frank Dahlke ◽  
Piet Aarden ◽  
Soroosh Afyouni ◽  
...  

Abstract Background Novartis and the University of Oxford's Big Data Institute (BDI) have established a research alliance with the aim of improving health care and drug development by making them more efficient and targeted. By combining the latest statistical machine learning technology with an innovative IT platform developed to manage large volumes of anonymised data from numerous sources and of numerous types, we plan to identify novel patterns of clinical relevance that cannot be detected by humans alone, in order to identify phenotypes and early predictors of patient disease activity and progression. Method The collaboration focuses on highly complex autoimmune diseases and is developing a computational framework to assemble a research-ready dataset across numerous modalities. For the Multiple Sclerosis (MS) project, the collaboration has anonymised and integrated phase II to phase IV clinical and imaging trial data from ≈35,000 patients across all clinical phenotypes, collected in more than 2200 centres worldwide. For the "IL-17" project, the collaboration has anonymised and integrated clinical and imaging data from over 30 phase II and III Cosentyx clinical trials including more than 15,000 patients suffering from four autoimmune disorders (psoriasis, axial spondyloarthritis, psoriatic arthritis (PsA), and rheumatoid arthritis (RA)). Results A fundamental component of successful data analysis, and of the collaborative development of novel machine learning methods on these rich data sets, has been the construction of a research informatics framework that can capture the data at regular intervals, so that images can be anonymised, integrated with the de-identified clinical data, quality controlled, and compiled into a research-ready relational database available to multi-disciplinary analysts.
The collaborative development by a group of software developers, data wranglers, statisticians, clinicians, and domain scientists across both organisations has been key. This framework is innovative in that it facilitates collaborative data management and makes a complicated clinical trial data set from a pharmaceutical company available to academic researchers who become associated with the project. Conclusions An informatics framework has been developed to capture clinical trial data into a pipeline of anonymisation, quality control, data exploration, and subsequent integration into a database. Establishing this framework has been integral to the development of analytical tools.

