data scaling
Recently Published Documents

TOTAL DOCUMENTS: 76 (five years: 23)
H-INDEX: 13 (five years: 2)

JURNAL ELTEK ◽  
2021 ◽  
Vol 19 (2) ◽  
pp. 80
Author(s):  
Muhamad Rifa’i ◽  
Herwandi ◽  
Hari Kurnia Safitri ◽  
Abrar Kadafi

PLC data scaling for the stepper motor drive in an extruder system affects the shape of the product produced during extrusion, through the motor's rotational speed and torque. The printed product will fail if the rotational speed of the stepper motor is too fast or too slow, owing to the working torque of the motor. The rotational speed of the stepper motor must therefore be regulated to avoid failure of the extrusion process. The purpose of this research is to design scaling of the setpoint, the motor rotational speed (rpm) and the motor torque (Nm), so as to control motor torque through the stepper motor's rotational speed.
The method used is a quantitative data-scaling experiment using mathematical scaling equations for the setpoint, motor rotational speed (rpm) and motor torque (Nm). Result data were obtained by simulating the scaling equations on a PLC, with input setpoint pulse periods sampled between 100 µs and 1000 µs. Test results with a 24 W motor show that the stepper motor's rotational speed, ranging from 49.3 rpm down to 9.4 rpm, is inversely proportional to its torque, ranging from 0.49 Nm up to 2.55 Nm. At an 800 µs setpoint, the scaled setpoint of 820 µs gives an error of 2.5%, which is fairly ideal in practice, with a rotational speed of 11.4 rpm and a torque of 2.1 Nm, for running a small extruder.
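The setpoint scaling described above can be sketched as a generic PLC-style linear (min-max) mapping. This is a simplified illustration, not the authors' exact equations: the 100–1000 µs setpoint range and the rpm endpoints are taken from the abstract, while the linear form is an assumption (the true speed–period relation for a stepper drive is inverse rather than linear).

```python
def scale(value, in_min, in_max, out_min, out_max):
    """Generic PLC-style linear scaling: map `value` from the input
    range [in_min, in_max] onto the output range [out_min, out_max]."""
    return out_min + (value - in_min) * (out_max - out_min) / (in_max - in_min)

# Illustrative only: map an 800 us setpoint pulse period onto the
# 49.3..9.4 rpm speed range reported in the abstract.
rpm_estimate = scale(800, 100, 1000, 49.3, 9.4)
```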


2021 ◽  
Author(s):  
Miroslava Ivko Jordovic Pavlovic ◽  
Katarina Djordjevic ◽  
Zarko Cojbasic ◽  
Slobodanka Galovic ◽  
Marica Popovic ◽  
...  

Abstract In this paper, the influence of input and output data scaling and normalization on overall neural network performance is investigated, aimed at inverse problem-solving in the photoacoustics of semiconductors. Logarithmic scaling of the photoacoustic signal amplitudes as input data and numerical scaling of the sample thermal parameters as output data are presented as useful tools for reaching maximal network precision. Max and min-max normalization are applied to the input data to bring their numerical values in the dataset onto common scales without distorting differences. It was demonstrated in theory that the largest network prediction error over all targeted parameters is obtained by a network with non-scaled output data. It was also found that the best network prediction was achieved with min-max normalization of the input data and the network-predicted output data scaled within the range [1, 10]. Network training and prediction performance analyzed with experimental input data show that the benefits of input and output scaling and normalization are not guaranteed but depend strongly on the specific problem to be solved.
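The two preprocessing tools named in the abstract, logarithmic scaling of signal amplitudes and min-max normalization to a chosen output range, can be sketched as follows. This is a minimal NumPy illustration of the standard transforms, not the authors' code; the `[lo, hi]` target range is a parameter (e.g. [1, 10] for the network outputs).

```python
import numpy as np

def log_scale(amplitudes):
    # Logarithmic scaling compresses the wide dynamic range
    # of photoacoustic signal amplitudes.
    return np.log10(amplitudes)

def min_max(x, lo=0.0, hi=1.0):
    # Min-max normalization: map each feature (column) onto the common
    # range [lo, hi] without distorting relative differences.
    x_min, x_max = x.min(axis=0), x.max(axis=0)
    return lo + (x - x_min) * (hi - lo) / (x_max - x_min)
```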


Technologies ◽  
2021 ◽  
Vol 9 (3) ◽  
pp. 52
Author(s):  
Md Manjurul Ahsan ◽  
M. A. Parvez Mahmud ◽  
Pritom Kumar Saha ◽  
Kishor Datta Gupta ◽  
Zahed Siddique

Heart disease, one of the main causes of the high mortality rate around the world, requires a sophisticated and expensive diagnosis process. In the recent past, much of the literature has demonstrated machine learning approaches as an opportunity to efficiently diagnose heart disease patients. However, dataset challenges such as missing data, inconsistent data, and mixed data (containing inconsistent missing data as both numerical and categorical values) are frequent obstacles in medical diagnosis. Such inconsistency leads to a higher probability of misprediction and misleading results. Data preprocessing steps such as feature reduction, data conversion, and data scaling are employed to form a standard dataset; such measures play a crucial role in reducing inaccuracy in the final prediction. This paper aims to evaluate eleven machine learning (ML) algorithms—Logistic Regression (LR), Linear Discriminant Analysis (LDA), K-Nearest Neighbors (KNN), Classification and Regression Trees (CART), Naive Bayes (NB), Support Vector Machine (SVM), XGBoost (XGB), Random Forest Classifier (RF), Gradient Boost (GB), AdaBoost (AB), and Extra Tree Classifier (ET)—and six data scaling methods—Normalization (NR), Standard Scaler (SS), MinMax (MM), MaxAbs (MA), Robust Scaler (RS), and Quantile Transformer (QT)—on a dataset comprising information on patients with heart disease. The results show that CART, along with RS or QT, outperforms all other ML algorithms with 100% accuracy, 100% precision, 99% recall, and a 100% F1 score. The study outcomes demonstrate that a model's performance varies depending on the data scaling method.
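The two best-performing scaling methods in this study, Robust Scaler (RS) and Quantile Transformer (QT), can be sketched from first principles. This is a minimal NumPy illustration of the underlying transforms, not the paper's pipeline (which presumably uses library implementations such as scikit-learn's):

```python
import numpy as np

def robust_scale(x):
    # Robust Scaler: center each feature (column) on its median and divide
    # by its interquartile range, so outliers barely affect the scale.
    median = np.median(x, axis=0)
    q1, q3 = np.percentile(x, [25, 75], axis=0)
    return (x - median) / (q3 - q1)

def quantile_uniform(x):
    # Quantile transform (rank-based): map each feature to [0, 1] according
    # to its empirical rank, flattening any input distribution.
    ranks = np.argsort(np.argsort(x, axis=0), axis=0)
    return ranks / (len(x) - 1)
```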


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Alexander Partin ◽  
Thomas Brettin ◽  
Yvonne A. Evrard ◽  
Yitan Zhu ◽  
Hyunseung Yoo ◽  
...  

Abstract Background Motivated by the size and availability of cell line drug sensitivity data, researchers have been developing machine learning (ML) models for predicting drug response to advance cancer treatment. As drug sensitivity studies continue generating drug response data, a common question is whether the generalization performance of existing prediction models can be further improved with more training data. Methods We utilize empirical learning curves for evaluating and comparing the data scaling properties of two neural networks (NNs) and two gradient boosting decision tree (GBDT) models trained on four cell line drug screening datasets. The learning curves are accurately fitted to a power law model, providing a framework for assessing the data scaling behavior of these models. Results The curves demonstrate that no single model dominates in terms of prediction performance across all datasets and training sizes, thus suggesting that the actual shape of these curves depends on the unique pair of an ML model and a dataset. The multi-input NN (mNN), in which gene expressions of cancer cells and molecular drug descriptors are input into separate subnetworks, outperforms a single-input NN (sNN), where the cell and drug features are concatenated for the input layer. In contrast, a GBDT with hyperparameter tuning exhibits superior performance as compared with both NNs at the lower range of training set sizes for two of the tested datasets, whereas the mNN consistently performs better at the higher range of training sizes. Moreover, the trajectory of the curves suggests that increasing the sample size is expected to further improve prediction scores of both NNs. These observations demonstrate the benefit of using learning curves to evaluate prediction models, providing a broader perspective on the overall data scaling characteristics. 
Conclusions A fitted power law learning curve provides a forward-looking metric for analyzing prediction performance and can serve as a co-design tool to guide experimental biologists and computational scientists in the design of future experiments in prospective research studies.
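The power-law learning-curve fit described above can be sketched as a linear regression in log-log space. This is a minimal illustration on synthetic data, not the authors' fitting procedure (their curve model may include additional terms such as an irreducible-error offset):

```python
import numpy as np

def fit_power_law(n, err):
    # Fit err ~ a * n**b by linear least squares in log-log space:
    # log(err) = log(a) + b * log(n).
    b, log_a = np.polyfit(np.log(n), np.log(err), 1)
    return np.exp(log_a), b

# Synthetic learning curve: prediction error falling as n**-0.5
# with growing training-set size n.
n = np.array([1e3, 1e4, 1e5, 1e6])
err = 2.0 * n ** -0.5
a, b = fit_power_law(n, err)
```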


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Andreas Pfnuer ◽  
Julian Seger ◽  
Rianne Appel-Meulenbroek

Purpose The purpose of this study is to explain the contribution of Corporate Real Estate Management (CREM) to corporate success and to substantiate it empirically, since no empirically tested holistic concept yet classifies and explains the different success contributions of CREM by mechanism of action and organisational level. Design/methodology/approach This study develops a holistic two-dimensional model from the existing literature to explain the relationship between CREM decisions and business success, and then tests it empirically using multidimensional data scaling on a computer-assisted telephone interview (CATI) survey of 59 CREM managers sampled from the 200 largest German companies. Findings The theoretical model holistically explains CREM success and its existence as part of a non-property company, with specific performance drivers at specific organisational levels. The empirical data confirm that both dimensions of the model, and thus the measurement concept for modelling the CREM contribution to business success, are robust across sectors and company/portfolio sizes in Germany. Originality/value The empirical confirmation of the conceptual model of CREM success provides novel support for the institutionalisation of the CREM function in companies and for the holistic classification of different CREM research directions.


Author(s):  
A. V. Gulay ◽  
V. M. Zaitsev

Problems of the architectural and functional design and the heterogeneous network structure of intelligent control systems for technological and industrial applications are considered. The comprehensive study of the intelligent system is based on the modern paradigm of convergence of hardware, algorithmic and software solutions. The convergence concept in the construction of intelligent systems presupposes digitizing physical values at sensor measurements, as well as uniform representation and successive conversion of the values of each controlled parameter using a defined set of scales. To solve this task, a functionally complete set of scales was proposed: natural values of the measured physical parameters; results of sensor conversions of physical values; results of parameter measurement in the format of integer binary codes; and parameter values in the format of real scaled binary numbers. With this set of scales, unified algorithms were built for digitizing the results of direct measurements of continuous parameters and representing them as ρ-bit integer binary codes, as well as for converting the measured parameters into real scaled binary numbers and mapping them onto the scale of natural physical values. Operations on digitized physical parameters were specified for the channels of the intelligent system: calibration of indirect measurement results; digitizing of discrete sensor signals; digital filtering of measurement results; adjustment of measuring channels; and intelligent control of actuators. A consequence of applying the convergence principles is the similarity of morphological construction, as well as the schematic uniformity of information processes and control cycles in such systems. The results of the analysis of the functional construction of the intelligent system may be used, for example, to build hybrid industrial automation systems.
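The chain of scales described, from a ρ-bit integer measurement code through a real scaled number to the natural physical value, can be sketched as follows. The linear sensor characteristic and all parameter names here are illustrative assumptions, not the authors' algorithms:

```python
def adc_to_physical(code, bits, v_ref, p_min, p_max):
    """Convert an integer ADC code (a `bits`-bit binary measurement result)
    to a real scaled value on the natural scale [p_min, p_max] of the
    physical parameter, assuming a linear sensor characteristic."""
    full_scale = (1 << bits) - 1           # largest representable code
    voltage = code * v_ref / full_scale    # sensor conversion result
    return p_min + (voltage / v_ref) * (p_max - p_min)

# Illustrative: a 12-bit code from a temperature sensor spanning -40..125 C
# with a 3.3 V reference.
temperature = adc_to_physical(2048, 12, 3.3, -40.0, 125.0)
```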


2021 ◽  
Vol 1 (395) ◽  
pp. 65-78
Author(s):  
O. Orlov ◽  

Object and purpose of research. This paper discusses the hydromechanic properties of propellers and their scaling laws. The purpose of this study was to analyse existing methods of scaling model test data by comparing them with full-scale test results, to identify possible sources of considerable error in them, and to update the method of model test data scaling, taking into account the hydromechanic interaction between propeller and hull when extrapolating model data to full scale. Materials and methods. The paper discusses general relationships between the hydromechanic parameters of hull and propeller, which follow, in turn, from the fundamental laws of mechanics. These relationships were used to analyse the interconnected laws governing the full-scale extrapolation of model test data for hull resistance, propeller thrust and propeller torque. Main results. The study identified some incorrect hypotheses in current scaling methods for the hydrodynamics of a propeller in behind-hull conditions that might introduce considerable error into full-scale estimates of the operational advance coefficient, thrust coefficient, efficiency and RPM. Conclusion. This paper suggests alternative techniques for determining the operational advance coefficient and other hydromechanic parameters of a full-scale propeller, so as to obtain estimates that account for the physical peculiarities of scale effect and also correlate with the results of full-scale trials.
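The quantities named in the abstract, advance coefficient, thrust and torque coefficients, and open-water efficiency, follow standard non-dimensional definitions, which can be sketched as follows. These are textbook formulas, not the author's updated scaling method:

```python
import math

def advance_coefficient(v_a, n, d):
    # J = V_A / (n * D): advance speed over revolution rate times diameter.
    return v_a / (n * d)

def thrust_coefficient(thrust, rho, n, d):
    # K_T = T / (rho * n**2 * D**4)
    return thrust / (rho * n ** 2 * d ** 4)

def torque_coefficient(torque, rho, n, d):
    # K_Q = Q / (rho * n**2 * D**5)
    return torque / (rho * n ** 2 * d ** 5)

def open_water_efficiency(j, kt, kq):
    # eta_0 = (J / (2*pi)) * (K_T / K_Q)
    return j / (2 * math.pi) * kt / kq
```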


2021 ◽  
Vol 18 (2) ◽  
pp. 597-618
Author(s):  
Sushil Singh ◽  
Jeonghun Cha ◽  
Tae Kim ◽  
Jong Park

For the advancement of the Internet of Things (IoT) and the Next Generation Web, various applications have emerged to process structured or unstructured data. Latency, accuracy, load balancing, centralization, and other issues arise at the cloud layer when transferring IoT data. Machine learning is an emerging technology for big data analytics in IoT applications. Traditional data analysis and processing techniques have several limitations, such as centralization and load management for massive amounts of data. This paper introduces a Machine Learning Based Distributed Big Data Analysis Framework for the Next Generation Web in IoT. We utilize feature extraction and data scaling at the edge layer for processing the data. An Extreme Learning Machine (ELM) is adopted in the cloud layer for classification and big data analysis in IoT. The experimental evaluation demonstrates that the proposed distributed framework performs more reliably than the traditional framework.
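An Extreme Learning Machine trains only its output layer: the hidden-layer weights are drawn at random and the output weights are solved in closed form by least squares. A minimal sketch of this idea follows; it is not the paper's distributed implementation, and the network size, activation, and regression setup are illustrative:

```python
import numpy as np

class ELM:
    """Minimal Extreme Learning Machine for regression: random hidden
    weights, output weights solved in closed form by least squares."""

    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        # Random nonlinear feature map: tanh of a random projection.
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # Only these output weights are "trained" -- no backpropagation.
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta
```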

