Using Machine Learning to Predict Core Sizes of High-Efficiency Turbofan Engines

2019 ◽  
Vol 141 (11) ◽  
Author(s):  
Michael T. Tong

Abstract With the rise in big data and analytics, machine learning is transforming many industries. It is increasingly employed to solve a wide range of complex problems, producing autonomous systems that support human decision-making. For the aircraft engine industry, machine learning of historical and existing engine data could provide insights that help drive better engine design. This work explored the application of machine learning to engine preliminary design. Engine core-size prediction was chosen for the first study because of its relative simplicity in terms of the number of input variables required (only three). Specifically, machine-learning predictive tools were developed for turbofan engine core-size prediction, using publicly available data on two hundred manufactured engines and on engines previously studied in NASA aeronautics projects. The prediction results of these models show that, by bringing together big data, robust machine-learning algorithms, and data science, a machine-learning-based predictive model can be an effective tool for turbofan engine core-size prediction. The promising results of this first study pave the way for further exploration of the use of machine learning for aircraft engine preliminary design.
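To make the described workflow concrete, below is a minimal sketch of this kind of predictive model: a regression fit on a few engine-level inputs to estimate a core-size metric. The three feature names, the target definition, and the synthetic data are illustrative assumptions only; the abstract states just that three input variables are required.

```python
# Minimal sketch (not the author's actual model): regression-based prediction
# of a turbofan "core size" metric from three engine-level inputs.
# All feature names, the target, and the data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 200  # the study used data for ~200 engines; these values are synthetic

X = np.column_stack([
    rng.uniform(20, 500, n),   # hypothetical input 1, e.g. rated thrust (kN)
    rng.uniform(20, 60, n),    # hypothetical input 2, e.g. overall pressure ratio
    rng.uniform(2, 12, n),     # hypothetical input 3, e.g. bypass ratio
])
# Synthetic target standing in for a core-size metric (e.g. corrected core flow).
y = 0.02 * X[:, 0] / X[:, 2] + 0.1 * X[:, 1] + rng.normal(0, 0.5, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print("held-out R^2:", r2_score(y_test, model.predict(X_test)))
```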


2018 ◽  
Vol 7 (2.21) ◽  
pp. 335
Author(s):  
R Anandan ◽  
Srikanth Bhyrapuneni ◽  
K Kalaivani ◽  
P Swaminathan

Big Data Analytics and Deep Learning are two major focal points of data science. Big Data has become important because many organizations, both public and private, have been collecting massive amounts of domain-specific information, which can contain useful knowledge about problems such as national intelligence, cyber security, fraud detection, marketing, and medical informatics. Companies such as Microsoft and Google are analyzing large volumes of data for business analysis and decisions, shaping existing and future technology. Deep Learning algorithms extract high-level, complex abstractions as data representations through a hierarchical learning process: complex abstractions are learnt at a given level based on relatively simpler abstractions formulated in the preceding level of the hierarchy. A key benefit of Deep Learning is the analysis and learning of massive amounts of unsupervised data, making it a valuable tool for Big Data Analytics, where raw data are largely unlabelled and uncategorized. In the present study, we investigate how Deep Learning can be used to address some important problems in Big Data Analytics, including extracting complex patterns from massive volumes of data, semantic indexing, data tagging, fast information retrieval, and simplifying discriminative tasks. Deep learning within Machine Learning (ML) is increasingly unleashing its power in a wide range of applications. It has been pushed to the forefront recently, largely owing to the advent of big data. ML algorithms have never been as promising, or as challenged, as they are with big data: big data enables ML algorithms to uncover more fine-grained patterns and make more timely and accurate predictions than ever before with deep learning; on the other hand, it presents serious challenges to deep learning in ML, such as model scalability and distributed computing. In this paper, we introduce a framework of Deep learning in ML on big data (DLiMLBiD) to guide the discussion of its opportunities and challenges. Different machine learning algorithms are discussed; these algorithms are used for purposes such as data mining, image processing, and predictive analytics, to name a few. The main advantage of using machine learning is that, once an algorithm has learnt what to do with data, it can do its work automatically. Overall, we provide a review of different approaches to deep learning on text using Machine Learning and Big Data methods.
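As one concrete illustration of the "deep learning on text" theme reviewed here, the following is a minimal sketch of a small neural text classifier built with TensorFlow/Keras. It is not the DLiMLBiD framework itself; the toy sentences, labels, and hyperparameters are invented for illustration.

```python
# Minimal sketch of a neural text classifier (e.g. for semantic tagging of
# documents). Toy data and labels are invented; nothing here reproduces the paper.
import tensorflow as tf

texts = tf.constant([
    "the battery life is excellent and the screen is sharp",
    "stopped working after two days, very disappointed",
    "fast shipping and great build quality",
    "the device overheats and the support was unhelpful",
])
labels = tf.constant([1, 0, 1, 0])  # 1 = positive, 0 = negative (toy labels)

vectorize = tf.keras.layers.TextVectorization(max_tokens=1000, output_sequence_length=16)
vectorize.adapt(texts)

model = tf.keras.Sequential([
    vectorize,                                     # raw strings -> integer token ids
    tf.keras.layers.Embedding(input_dim=1000, output_dim=16),
    tf.keras.layers.GlobalAveragePooling1D(),      # average the token embeddings
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(texts, labels, epochs=5, verbose=0)
print(model.predict(tf.constant(["screen cracked on arrival"])))
```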


2021 ◽  
Vol 8 ◽  
Author(s):  
Yoshihiko Raita ◽  
Carlos A. Camargo ◽  
Liming Liang ◽  
Kohei Hasegawa

Clinicians handle a growing amount of clinical, biometric, and biomarker data. In this “big data” era, there is an emerging faith that the answer to all clinical and scientific questions resides in “big data” and that data will transform medicine into precision medicine. However, data by themselves are useless. It is the algorithms encoding causal reasoning and domain (e.g., clinical and biological) knowledge that prove transformative. The recent introduction of (health) data science presents an opportunity to re-think this data-centric view. For example, while precision medicine seeks to provide the right prevention and treatment strategy to the right patients at the right time, its realization cannot be achieved by algorithms that operate exclusively in data-driven prediction modes, as most machine learning algorithms do. A better understanding of data science and its tasks is vital to interpret findings and translate new discoveries into clinical practice. In this review, we first discuss the principles and major tasks of data science, organizing it into three defining tasks: (1) association and prediction, (2) intervention, and (3) counterfactual causal inference. Second, we review commonly used data science tools with examples from the medical literature. Lastly, we outline current challenges and future directions in the field of medicine, elaborating on how data science can enhance clinical effectiveness and inform medical practice. As machine learning algorithms become ubiquitous tools for quantitatively handling “big data,” their integration with causal reasoning and domain knowledge is instrumental to qualitatively transforming medicine, which will, in turn, improve the health outcomes of patients.
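The distinction between task (1), data-driven association/prediction, and task (3), counterfactual causal inference, can be illustrated with a minimal sketch on synthetic data: a model that only associates treatment with outcome is confounded, while adjusting for the confounder recovers an estimate close to the true causal effect. The variable names and data-generating process below are invented for illustration.

```python
# Minimal sketch contrasting naive association with confounder-adjusted
# (counterfactual-style) estimation on synthetic data. Not from the review.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 5000
severity = rng.normal(0, 1, n)                                  # confounder: sicker patients
treat = (rng.uniform(size=n) < 1 / (1 + np.exp(-2 * severity))).astype(float)
outcome = 2.0 * severity - 1.0 * treat + rng.normal(0, 1, n)    # true treatment effect = -1.0

# (1) Association/prediction: outcome ~ treatment alone; useful for forecasting,
# but the coefficient is biased because sicker patients are treated more often.
naive = LinearRegression().fit(treat.reshape(-1, 1), outcome)
print("naive association:", naive.coef_[0])

# (3) Counterfactual-style adjustment: condition on the confounder
# (simple regression adjustment), recovering roughly the true effect of -1.0.
adjusted = LinearRegression().fit(np.column_stack([treat, severity]), outcome)
print("adjusted effect estimate:", adjusted.coef_[0])
```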


2021 ◽  
Vol 71 (4) ◽  
pp. 302-317
Author(s):  
Jelena Đuriš ◽  
Ivana Kurćubić ◽  
Svetlana Ibrić

Machine learning algorithms, and artificial intelligence in general, have a wide range of applications in the field of pharmaceutical technology. Starting from formulation development, and with great potential for integration within the Quality by Design framework, these data science tools provide a better understanding of pharmaceutical formulations and their processing. Machine learning algorithms can be especially helpful in the analysis of the large volumes of data generated by Process Analytical Technologies. This paper provides a brief explanation of artificial neural networks, as one of the most frequently used machine learning algorithms. The process of network training and testing is described and accompanied by illustrative examples of machine learning tools applied in the context of pharmaceutical formulation development and related technologies, together with an overview of future trends. Recently published studies on more sophisticated methods, such as deep neural networks and the light gradient boosting machine algorithm, are also described. The interested reader is referred to several official documents (guidelines) that pave the way for a more structured representation of machine learning models in prospective submissions to regulatory bodies.
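As a minimal sketch of the network training and testing workflow mentioned above, the following fits a small feed-forward ANN to predict a formulation response from three inputs. The formulation variables, the response, and the data are hypothetical placeholders, not results from the paper.

```python
# Minimal sketch of ANN training and testing for a hypothetical formulation
# response. Inputs, response, and data are invented placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 150
X = np.column_stack([
    rng.uniform(5, 40, n),     # hypothetical polymer content (%)
    rng.uniform(50, 200, n),   # hypothetical compression force (arbitrary units)
    rng.uniform(1, 10, n),     # hypothetical disintegrant content (%)
])
# Synthetic response standing in for e.g. percent drug released.
y = 100 - 1.2 * X[:, 0] + 0.05 * X[:, 1] + 2.0 * X[:, 2] + rng.normal(0, 3, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
ann = make_pipeline(
    StandardScaler(),                                           # scale inputs before training
    MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=5000, random_state=1),
)
ann.fit(X_train, y_train)                                       # training step
print("test R^2:", ann.score(X_test, y_test))                   # testing step
```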


Author(s):  
Bella Yigong Zhang ◽  
Mark Chignell

Human Factors Engineering (HFE) is an applied discipline that uses a wide range of methodologies to improve the design of systems and devices for human use. Underpinning all human factors design is the maxim to fit the task/machine/system to the human rather than vice versa. While some HFE methods such as task analysis and anthropometrics remain relatively stable over time, areas such as human-technology interaction are strongly influenced by fast-evolving technological trends. In the era of big data, human factors engineers need a good understanding of topics such as machine learning, advanced data analytics, and data visualization so that they can design data-driven products that involve big data sets. There is a natural lag between industry trends and HFE curricula, leading to gaps between what people are taught and what they will need to know. In this paper, we present the results of a survey of HFE practitioners (N=101) and demonstrate the need to include data science and machine learning components in HFE curricula.


Author(s):  
Xabier Rodríguez-Martínez ◽  
Enrique Pascual-San-José ◽  
Mariano Campoy-Quiles

This review article presents the state-of-the-art in high-throughput computational and experimental screening routines with application in organic solar cells, including materials discovery, device optimization and machine-learning algorithms.


Author(s):  
Katherine Darveau ◽  
Daniel Hannon ◽  
Chad Foster

There is growing interest in the study and practice of applying data science (DS) and machine learning (ML) to automate decision making in safety-critical industries. As an alternative or augmentation to human review, there are opportunities to explore these methods for classifying aviation operational events by root cause. This study seeks to apply a thoughtful approach to design, compare, and combine rule-based and ML techniques to classify events caused by human error in aircraft/engine assembly, maintenance or operation. Event reports contain a combination of continuous parameters, unstructured text entries, and categorical selections. A Human Factors approach to classifier development prioritizes the evaluation of distinct data features and entry methods to improve modeling. Findings, including the performance of tested models, led to recommendations for the design of textual data collection systems and classification approaches.
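One plausible way to combine the three kinds of report fields described here (continuous parameters, unstructured text entries, and categorical selections) in a single classifier is sketched below with scikit-learn. The column names, toy reports, and labels are invented, and this does not reproduce the study's actual rule-based or ML designs.

```python
# Minimal sketch: a single pipeline combining text, categorical, and numeric
# report fields for root-cause classification. All data and columns are invented.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

reports = pd.DataFrame({
    "narrative": [
        "technician installed incorrect seal during assembly",
        "sensor drift observed, no maintenance action taken",
        "torque value missed on final inspection",
        "component failed within expected wear limits",
    ],
    "phase": ["assembly", "operation", "maintenance", "operation"],
    "hours_since_overhaul": [120.0, 3400.0, 800.0, 5100.0],
    "human_error": [1, 0, 1, 0],   # toy root-cause label
})

features = ColumnTransformer([
    ("text", TfidfVectorizer(), "narrative"),                     # unstructured text entries
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["phase"]),   # categorical selections
    ("num", StandardScaler(), ["hours_since_overhaul"]),          # continuous parameters
])
clf = Pipeline([("features", features), ("model", LogisticRegression(max_iter=1000))])
clf.fit(reports.drop(columns="human_error"), reports["human_error"])
print(clf.predict(reports.drop(columns="human_error")))
```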


2017 ◽  
Vol 47 (10) ◽  
pp. 2625-2626 ◽  
Author(s):  
Fuchun Sun ◽  
Guang-Bin Huang ◽  
Q. M. Jonathan Wu ◽  
Shiji Song ◽  
Donald C. Wunsch II

2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Yao Huimin

With the development of cloud computing and distributed cluster technology, the concept of big data has been expanded and extended in terms of capacity and value, and machine learning technology has also received unprecedented attention in recent years. Traditional machine learning algorithms cannot be parallelized effectively, so a parallelized support vector machine based on the Spark big data platform is proposed. Firstly, the big data platform is designed with the Lambda architecture, which is divided into three layers: the Batch Layer, the Serving Layer, and the Speed Layer. Secondly, in order to improve the training efficiency of support vector machines on large-scale data, when merging two support vector machines the "special points" other than the support vectors are considered, that is, the points at which the non-support vectors of one subset violate the training results of the other subset, and a cross-validation merging algorithm is proposed. Then, a parallelized support vector machine based on cross-validation is proposed, and the parallelization of the support vector machine is realized on the Spark platform. Finally, experiments on different datasets verify the effectiveness and stability of the proposed method. Experimental results show that the proposed parallelized support vector machine performs well in terms of speed-up ratio, training time, and prediction accuracy.
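For orientation, the following is a minimal sketch of distributed SVM training on Spark using the built-in pyspark.ml LinearSVC. It illustrates the platform only, not the paper's cross-validation merging algorithm, which is not part of standard Spark; the toy data are placeholders.

```python
# Minimal sketch: linear SVM training on Spark with pyspark.ml.
# This uses Spark's built-in LinearSVC, not the paper's merging algorithm.
from pyspark.sql import SparkSession
from pyspark.ml.classification import LinearSVC
from pyspark.ml.linalg import Vectors

spark = SparkSession.builder.appName("svm-sketch").getOrCreate()

# Tiny in-memory dataset standing in for a large, partitioned one.
data = spark.createDataFrame(
    [
        (0.0, Vectors.dense([0.1, 0.2])),
        (1.0, Vectors.dense([1.5, 1.8])),
        (0.0, Vectors.dense([0.3, 0.1])),
        (1.0, Vectors.dense([1.9, 2.2])),
    ],
    ["label", "features"],
)

# Spark distributes the optimization across the cluster's executors.
svm = LinearSVC(maxIter=50, regParam=0.01)
model = svm.fit(data)
model.transform(data).select("label", "prediction").show()

spark.stop()
```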

