Analyzing the Impact of Climate Factors on GNSS-Derived Displacements by Combining the Extended Helmert Transformation and XGboost Machine Learning Algorithm

2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Hanlin Liu ◽  
Linqiang Yang ◽  
Linchao Li

A variety of climate factors influence the precision of long-term Global Navigation Satellite System (GNSS) monitoring data. To precisely analyze the effect of different climate factors on long-term GNSS monitoring records, this study combines the extended seven-parameter Helmert transformation and a machine learning algorithm, Extreme Gradient Boosting (XGBoost), to establish a hybrid model. We established a local-scale reference frame, the stable Puerto Rico and Virgin Islands reference frame of 2019 (PRVI19), using ten continuously operating long-term GNSS sites located in the rigid portion of the Puerto Rico and Virgin Islands (PRVI) microplate. The stability of PRVI19 is approximately 0.4 mm/year and 0.5 mm/year in the horizontal and vertical directions, respectively. The stable reference frame PRVI19 avoids the risk of bias due to long-term plate motions when studying localized ground deformation. Furthermore, we applied the XGBoost algorithm to the postprocessed long-term GNSS records and daily climate data to train the model. We quantitatively evaluated the importance of various daily climate factors on the GNSS time series. The results show that wind is the most influential factor, with a unit-less importance index of 0.013. Notably, we used the model with climate and GNSS records to predict the GNSS-derived displacements. The predicted displacements have a slightly lower root mean square error than the fitted results obtained with the spline method (prediction: 0.22 versus fitted: 0.31). This indicates that the proposed model, which incorporates climate records, yields suitable predictions for long-term GNSS monitoring.
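A minimal sketch of the study's second stage, assuming hypothetical climate features and synthetic data (the paper's actual feature set, preprocessing, and hyperparameters are not reproduced here): an XGBoost regressor is fitted to daily climate factors and a unit-less importance index is read off per factor.

```python
# Sketch only: fit XGBoost to daily climate factors and rank their importance.
# Feature names and data are hypothetical placeholders, not the study's data.
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
n_days = 2000
feature_names = ["wind", "temperature", "precipitation", "pressure"]
X = rng.normal(size=(n_days, 4))
# Synthetic GNSS-derived displacement (mm), driven mostly by "wind" here to
# mimic the paper's finding that wind is the most influential factor
y = 0.8 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(scale=0.3, size=n_days)

model = xgb.XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X, y)

# feature_importances_ gives one common unit-less importance index per factor
for name, score in zip(feature_names, model.feature_importances_):
    print(f"{name}: {score:.3f}")
```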

2020 ◽  
Vol 17 (9) ◽  
pp. 4197-4201
Author(s):  
Heena Gupta ◽  
V. Asha

The prediction problem is important in many domains for assessing prices and preferences among people, and it varies with the kind of data involved. Data may be nominal or ordinal, and may involve many categories or few. For any category to be considered by a machine learning algorithm, it must be encoded before any further operations can be performed. Various encoding schemes are available, such as label encoding, count encoding, and one-hot encoding. This paper aims to understand the impact of various encoding schemes on prediction accuracy for high-cardinality categorical data. The paper also proposes an encoding scheme based on curated strings. The domain chosen for this purpose is predicting doctors' fees across cities for doctors with different profiles and qualifications.
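As a rough illustration of the three baseline encodings the paper compares (the `city` column and its values below are hypothetical, not the paper's dataset):

```python
# Sketch only: label, count, and one-hot encoding of a categorical column.
import pandas as pd

df = pd.DataFrame({"city": ["Delhi", "Mumbai", "Delhi", "Chennai", "Mumbai", "Delhi"]})

# Label encoding: map each category to an integer code
df["city_label"] = df["city"].astype("category").cat.codes

# Count encoding: replace each category with its frequency in the data
df["city_count"] = df["city"].map(df["city"].value_counts())

# One-hot encoding: one binary column per category
one_hot = pd.get_dummies(df["city"], prefix="city")

print(df.join(one_hot))
```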


2021 ◽  
Vol 42 (Supplement_1) ◽  
Author(s):  
A Rosier ◽  
E Crespin ◽  
A Lazarus ◽  
G Laurent ◽  
A Menet ◽  
...  

Abstract Background Implantable Loop Recorders (ILRs) are increasingly used and generate a high workload for the timely adjudication of ECG recordings. In particular, the excessive false positive rate leads to a significant review burden. Purpose A novel machine learning algorithm was developed to reclassify ILR episodes in order to decrease the false positive rate by 80% while maintaining 99% sensitivity. This study aims to evaluate the impact of this algorithm in reducing the number of abnormal episodes reported by Medtronic ILRs. Methods All Medtronic ILR patients across 20 European centers were enrolled during the second half of 2020. Every ILR-transmitted episode was collected and anonymised via a remote monitoring platform. The new algorithm reclassified every ILR-detected episode with a transmitted ECG, applying the same labels as the ILR (asystole, brady, AT/AF, VT, artifact, normal). We measured the number of episodes identified as false positive and reclassified as normal by the algorithm, and their proportion among all episodes. Results In 370 patients, ILRs recorded 3755 episodes, including 305 patient-triggered and 629 with no ECG transmitted. 2821 episodes were analyzed by the novel algorithm, which reclassified 1227 episodes as normal rhythm. These reclassified episodes accounted for 43% of analyzed episodes and 32.6% of all episodes recorded. Conclusion A novel machine learning algorithm significantly reduces the number of episodes flagged as abnormal and typically reviewed by healthcare professionals. Funding Acknowledgement Type of funding sources: None. Figure 1: ILR episodes analysis.
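A quick arithmetic check of the episode accounting reported above (small differences from the stated 43% and 32.6% are due to rounding):

```python
# Sketch only: verify the episode counts and proportions from the Results.
total = 3755              # all recorded episodes
patient_triggered = 305
no_ecg = 629              # episodes with no ECG transmitted
analyzed = total - patient_triggered - no_ecg
reclassified_normal = 1227

print(analyzed)                                        # 2821
print(round(100 * reclassified_normal / analyzed, 1))  # 43.5 (~43% of analyzed)
print(round(100 * reclassified_normal / total, 1))     # 32.7 (~32.6% reported)
```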


Author(s):  
Chitrarth Lav ◽  
Jimmy Philip ◽  
Richard D. Sandberg

Abstract The unsteady flow prediction for turbomachinery applications relies heavily on unsteady RANS (URANS). For flows that exhibit vortex shedding, such as the wall-jet/wake flows considered in this study, URANS is unable to predict the correct momentum mixing with sufficient accuracy. We suggest a novel framework to improve that prediction, whereby the deterministic scales associated with vortex shedding are resolved while the stochastic scales of pure turbulence are modelled. The framework first separates the stochastic from the deterministic length scales and then develops a bespoke turbulence closure for the stochastic scales using a data-driven machine-learning algorithm. The novelty of the method lies in the use of machine learning to develop closures tailored to URANS calculations. For the wall-jet/wake flow, three different mass flow ratios (0.86, 1.07 and 1.26) have been considered, and a high-fidelity dataset of the idealised geometry is utilised for model development. This study serves as an a priori analysis, in which the closures obtained from the machine-learning algorithm are evaluated before their implementation in URANS. The analysis examines the impact of using all length scales versus only the stochastic scales for closure development, and the impact of the extent of the spatial domain used for developing the closure. It is found that a two-layer approach, using bespoke trained models for the near-wall and jet/wake regions, produces the best results. Finally, the generalisability of the developed closures is evaluated by applying a closure developed for a particular mass flow ratio to the other cases.
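One common way to realise the deterministic/stochastic split this framework relies on is a triple decomposition via phase averaging at the shedding period. The sketch below illustrates that idea on a synthetic signal; it is an assumption for illustration, not the authors' actual separation procedure, and the shedding period is taken as known.

```python
# Sketch only: triple decomposition u = <u> + u_det + u_stoch by phase
# averaging at an assumed-known shedding period. Signal is synthetic.
import numpy as np

rng = np.random.default_rng(1)
dt = 1e-3
t = np.arange(0, 2.0, dt)
shed_period = 0.05                      # assumed shedding period [s]
u = 1.0 + 0.3 * np.sin(2 * np.pi * t / shed_period) \
        + 0.1 * rng.normal(size=t.size)

u_mean = u.mean()                       # time mean <u>

# Phase-average: bin samples by phase within the shedding cycle
n_bins = 50
phase = (t % shed_period) / shed_period
bins = (phase * n_bins).astype(int)
phase_avg = np.array([u[bins == b].mean() for b in range(n_bins)])

u_det = phase_avg[bins] - u_mean        # deterministic (coherent) fluctuation
u_stoch = u - u_mean - u_det            # stochastic residual, to be modelled

print(u_mean, u_det.std(), u_stoch.std())
```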


2020 ◽  
Vol 36 (2) ◽  
pp. 297-303
Author(s):  
Koichi Furui ◽  
Itsuro Morishima ◽  
Yasuhiro Morita ◽  
Yasunori Kanzaki ◽  
Kensuke Takagi ◽  
...  

2011 ◽  
Vol 403-408 ◽  
pp. 1266-1269 ◽  
Author(s):  
Wei Tang ◽  
Jun Lai

Traditional agent intelligence design always leads to a fixed behavior pattern: the NPC (Non-Player Character) in a game acts in a fixed and predictable way, which greatly weakens the long-term appeal of single-player games. Extracting human action patterns using a statistics-based machine learning algorithm provides an easily understood way to implement agent behavior intelligence. A daemon program records and samples the human player's input actions, together with related properties of the character and virtual environment, and then applies a statistics-based machine learning algorithm to the sampled data. The result is a human-like intelligent behavior model that can help the agent make action decisions. Repeating the learning process can give the agent a variety of intelligent behaviors.
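A minimal sketch of the described workflow, with hypothetical state features and an illustrative classifier choice (the paper does not specify which statistical model is used): logged (state, action) pairs from a human player train a model that the NPC then queries at decision time.

```python
# Sketch only: learn a human-like action policy from logged gameplay samples.
# Feature names, action set, and the Naive Bayes model are assumptions.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(2)
# Hypothetical logged state samples: [own_health, enemy_distance, ammo]
states = rng.uniform(0, 1, size=(500, 3))
# Hypothetical human actions: 0=attack, 1=retreat, 2=reload
actions = np.where(states[:, 0] < 0.3, 1,
          np.where(states[:, 2] < 0.2, 2, 0))

model = GaussianNB().fit(states, actions)   # statistical behavior model

# In-game decision: the NPC queries the model with its current state
current_state = np.array([[0.8, 0.4, 0.9]])
print(model.predict(current_state))          # -> likely "attack" (0)
```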


The purpose of this empirical research study is to examine the impact of various HRD practices on the predictor variable, job satisfaction. A structured survey instrument was used to gather data from 500 respondents. The questionnaire was validated through a pilot study, and the data were checked with Cronbach's alpha reliability test. The outcomes were validated with a machine learning approach in R, using multiple linear regression analysis on train and test data in a 30:70 ratio. The results further include a corrgram plot, a correlation matrix plot, and validation of the data with a validation match test among the various HRD practices and their interrelationships. The analysis is supported by various literature reviews, including both Western and Indian studies. The study can be generalized to any sector in which HRD practices are implemented, and it is applicable to social implications, employee concerns, and related productivity. The study offers readers new insights and an analysis relating machine learning algorithms to HRD practices that has not previously been published on this topic.
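A minimal sketch of the validation procedure described above. The study used R; the same train/test split and multiple linear regression are sketched here in Python with hypothetical HRD practice scores, and the 30:70 ratio is read as a 70% train / 30% test split.

```python
# Sketch only: multiple linear regression of job satisfaction on HRD practice
# scores with a held-out test set. Column names and data are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 500  # the study's sample size
df = pd.DataFrame({
    "training": rng.uniform(1, 5, n),
    "appraisal": rng.uniform(1, 5, n),
    "career_dev": rng.uniform(1, 5, n),
})
df["job_satisfaction"] = (0.5 * df["training"] + 0.3 * df["appraisal"]
                          + 0.2 * df["career_dev"] + rng.normal(0, 0.3, n))

X, y = df.drop(columns="job_satisfaction"), df["job_satisfaction"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = LinearRegression().fit(X_tr, y_tr)
print(model.score(X_te, y_te))  # R^2 on held-out data
```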


2020 ◽  
Vol 75 (9) ◽  
pp. 2677-2680 ◽  
Author(s):  
Ed Moran ◽  
Esther Robinson ◽  
Christopher Green ◽  
Matt Keeling ◽  
Benjamin Collyer

Abstract Background Electronic decision support systems could reduce the use of inappropriate or ineffective empirical antibiotics. We assessed the accuracy of an open-source machine-learning algorithm trained in predicting antibiotic resistance for three Gram-negative bacterial species isolated from patients’ blood and urine within 48 h of hospital admission. Methods This retrospective, observational study used routine clinical information collected between January 2010 and October 2016 in Birmingham, UK. Patients from whose blood or urine cultures Escherichia coli, Klebsiella pneumoniae or Pseudomonas aeruginosa was isolated were identified. Their demographic, microbiology and prescribing data were used to train an open-source machine-learning algorithm—XGBoost—in predicting resistance to co-amoxiclav and piperacillin/tazobactam. Multivariate analysis was performed to identify predictors of resistance and create a point-scoring tool. The performance of both methods was compared with that of the original prescribers. Results There were 15 695 admissions. The AUC of the receiver operating characteristic curve for the point-scoring tools ranged from 0.61 to 0.67; these tools performed no better than medical staff in the selection of appropriate antibiotics. The machine-learning system performed statistically, though only marginally, better (AUC 0.70) and could have reduced the use of unnecessary broad-spectrum antibiotics by as much as 40% among those given co-amoxiclav, piperacillin/tazobactam or carbapenems. A validation study is required. Conclusions Machine-learning algorithms have the potential to help clinicians predict antimicrobial resistance in patients found to have a Gram-negative infection of blood or urine. Prospective studies are required to assess performance in an unselected patient cohort, understand the acceptability of such systems to clinicians and patients, and assess the impact on patient outcome.
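A minimal sketch of the kind of pipeline described, with hypothetical predictors and synthetic labels (not the study's features, data, or hyperparameters): an XGBoost classifier is trained to predict resistance and scored by ROC AUC, the metric reported above.

```python
# Sketch only: binary resistance prediction with XGBoost, evaluated by AUC.
# Predictors and labels are synthetic stand-ins for routine clinical data.
import numpy as np
import xgboost as xgb
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 5000
# Hypothetical predictors: age, prior-resistance flag, recent antibiotic use
X = np.column_stack([
    rng.integers(18, 95, n),
    rng.integers(0, 2, n),
    rng.integers(0, 2, n),
])
# Hypothetical label: resistant to co-amoxiclav (1) or susceptible (0)
p = 0.15 + 0.3 * X[:, 1] + 0.2 * X[:, 2]
y = rng.binomial(1, np.clip(p, 0, 1))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = xgb.XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
clf.fit(X_tr, y_tr)
print(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```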


Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 4070 ◽  
Author(s):  
Xijun Ye ◽  
Xueshuai Chen ◽  
Yaxiong Lei ◽  
Jiangchao Fan ◽  
Liu Mei

Deflection is one of the key indices for the safety evaluation of bridge structures. In practice, deflection signals measured by structural health monitoring systems are strongly affected by changing operational and environmental conditions, and these ambient changes often mask the subtle changes in vibration signals caused by damage to the system. The deflection signals of prestressed concrete (PC) bridges can be regarded as the superposition of different effects, including concrete shrinkage, creep, prestress loss, material deterioration, temperature effects, and live load effects. Based on multiscale analysis theory of the long-term deflection signal, this paper proposes an integrated machine learning algorithm that combines a Butterworth filter, ensemble empirical mode decomposition (EEMD), principal component analysis (PCA), and fast independent component analysis (FastICA) to separate the individual deflection components from a measured single-channel deflection signal. The proposed algorithm consists of four stages: (1) the live load effect, a high-frequency signal, is separated from the raw signal by a Butterworth filter; (2) the EEMD algorithm is used to extract the intrinsic mode function (IMF) components; (3) these IMFs are used as input to the PCA model, from which uncorrelated, dominant basis components are extracted; and (4) FastICA is applied to derive the independent deflection components. Simulation results show that each individual deflection component can be successfully separated when the noise level is below 10%. As verified in a practical application, the algorithm is feasible for extracting the structural deflection (including concrete shrinkage, creep, and prestress loss) caused only by structural damage or material deterioration.
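A minimal sketch of the four-stage pipeline on a synthetic deflection signal. PyEMD is an assumed third-party package providing EEMD; the filter cutoff, component counts, and signal construction are illustrative, not the paper's settings.

```python
# Sketch only: Butterworth -> EEMD -> PCA -> FastICA separation of a
# synthetic single-channel deflection signal.
import numpy as np
from scipy.signal import butter, filtfilt
from PyEMD import EEMD
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(5)
t = np.arange(0, 24 * 30)                   # a month of hourly samples (assumed)
trend = -0.5 * np.log1p(t / 500.0)          # creep/shrinkage-like slow drift
temperature = 0.3 * np.sin(2 * np.pi * t / 24.0)   # daily temperature effect
live_load = 0.1 * rng.normal(size=t.size)   # high-frequency traffic effect
signal = trend + temperature + live_load

# Stage 1: Butterworth low-pass splits off the high-frequency live-load effect
b, a = butter(4, 0.05)                      # normalized cutoff (illustrative)
low_freq = filtfilt(b, a, signal)
live_load_est = signal - low_freq

# Stage 2: EEMD extracts intrinsic mode functions from the low-frequency part
imfs = EEMD(trials=20).eemd(low_freq)       # shape: (n_imfs, n_samples)

# Stage 3: PCA keeps the dominant, uncorrelated basis components of the IMFs
pcs = PCA(n_components=2).fit_transform(imfs.T)

# Stage 4: FastICA derives statistically independent deflection components
components = FastICA(n_components=2, random_state=0).fit_transform(pcs)
print(components.shape)                     # (n_samples, 2) separated effects
```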

