Denoising and Dehazing an Image in a Cascaded Pattern for Continuous Casting

Metals ◽  
2022 ◽  
Vol 12 (1) ◽  
pp. 126
Author(s):  
Wenbin Su ◽  
Yifei Zhang ◽  
Hongbo Wei ◽  
Qi Gao

Automatic vision systems have been widely used in the continuous casting of the steel industry, where they improve efficiency and reduce labor. At present, high temperatures with evaporating fog cause images to be noisy and hazy, impeding the use of advanced machine learning algorithms in this task. Instead of treating denoising and dehazing separately, as previous papers have done, we take advantage of deep learning's ability to model complex formulations: our proposed algorithm, called Cascaded Denoising and Dehazing Net (CDDNet), reduces noise and haze in a cascaded pattern. Experimental results on both synthesized images and a practical video from a continuous casting factory demonstrate our method's superior performance across various metrics. Compared with existing methods, CDDNet achieved a 50% improvement in peak signal-to-noise ratio on the validation dataset and a nearly 5% improvement on a dataset it had never seen before. Moreover, our model generalizes so well that processing a video from an operating continuous casting factory with CDDNet resulted in high visual quality.
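The abstract does not detail CDDNet's architecture, so the sketch below only illustrates the cascaded idea: two small residual CNN stages (the class names and layer sizes are hypothetical) applied one after the other, denoising first and then dehazing.

```python
# Hypothetical sketch of a two-stage (denoise -> dehaze) cascade; the real
# CDDNet architecture is not specified in the abstract.
import torch
import torch.nn as nn

class ConvStage(nn.Module):
    """A small residual CNN used here as a stand-in for one cascade stage."""
    def __init__(self, channels=3, width=32, depth=4):
        super().__init__()
        layers = [nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return x + self.body(x)          # residual: predict the correction

class CascadedDenoiseDehaze(nn.Module):
    def __init__(self):
        super().__init__()
        self.denoise = ConvStage()       # stage 1: suppress sensor noise
        self.dehaze = ConvStage()        # stage 2: remove fog/haze

    def forward(self, x):
        return self.dehaze(self.denoise(x))

model = CascadedDenoiseDehaze()
restored = model(torch.rand(1, 3, 128, 128))   # dummy noisy, hazy frame
```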

2021 ◽  
Vol 11 (15) ◽  
pp. 6787
Author(s):  
Jože M. Rožanec ◽  
Blaž Kažič ◽  
Maja Škrjanc ◽  
Blaž Fortuna ◽  
Dunja Mladenić

Demand forecasting is a crucial component of demand management, directly impacting manufacturing companies' planning, revenues, and actors throughout the supply chain. We evaluate 21 baseline, statistical, and machine learning algorithms for forecasting smooth and erratic demand in a real-world use case. The product data were obtained from a European original equipment manufacturer targeting the global automotive industry market. Our research shows that global machine learning models achieve superior performance compared with local models. We show that forecast errors from global models can be constrained by pooling product data based on past demand magnitude. We also propose a set of metrics and criteria for a comprehensive understanding of demand forecasting models' performance.
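As a hedged illustration of the global-versus-local distinction (not the paper's exact experimental setup), the sketch below pools lagged demand windows from several synthetic products into one "global" regressor and contrasts it with per-product "local" regressors; the lag length and model choice are assumptions.

```python
# Illustrative global vs. local demand models on synthetic series.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
demand = {p: rng.poisson(lam=20 * (p + 1), size=60) for p in range(5)}  # fake monthly demand

def lag_features(series, n_lags=4):
    # Each row holds the previous n_lags observations; the target is the next value.
    X = np.stack([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = series[n_lags:]
    return X, y

# Global model: pool all products' windows into one training set.
Xg = np.vstack([lag_features(s)[0] for s in demand.values()])
yg = np.concatenate([lag_features(s)[1] for s in demand.values()])
global_model = GradientBoostingRegressor().fit(Xg, yg)

# Local models: one regressor per product, trained on that product's windows only.
local_models = {p: GradientBoostingRegressor().fit(*lag_features(s))
                for p, s in demand.items()}
```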


2015 ◽  
Vol 32 (6) ◽  
pp. 821-827 ◽  
Author(s):  
Enrique Audain ◽  
Yassel Ramos ◽  
Henning Hermjakob ◽  
Darren R. Flower ◽  
Yasset Perez-Riverol

Abstract Motivation: In any macromolecular polyprotic system—for example protein, DNA or RNA—the isoelectric point—commonly referred to as the pI—can be defined as the point of singularity in a titration curve, corresponding to the solution pH value at which the net overall surface charge—and thus the electrophoretic mobility—of the ampholyte sums to zero. Different modern analytical biochemistry and proteomics methods depend on the isoelectric point as a principal feature for protein and peptide characterization. Protein separation by isoelectric point is a critical part of 2-D gel electrophoresis, a key precursor of proteomics, where discrete spots can be digested in-gel, and proteins subsequently identified by analytical mass spectrometry. Peptide fractionation according to pI is also widely used in current proteomics sample preparation procedures prior to LC-MS/MS analysis. Therefore, accurate theoretical prediction of pI would expedite such analysis. While such pI calculation is widely used, it remains largely untested, motivating our efforts to benchmark pI prediction methods. Results: Using data from the database PIP-DB and one publicly available dataset as our reference gold standard, we have undertaken the benchmarking of pI calculation methods. We find that methods vary in their accuracy and are highly sensitive to the choice of basis set. The machine-learning algorithms, especially the SVM-based algorithm, showed superior performance when studying peptide mixtures. In general, learning-based pI prediction methods (such as Cofactor, SVM and Branca) require a large training dataset and their resulting performance will strongly depend on the quality of that data. In contrast to iterative methods, machine-learning algorithms have the advantage of being able to add new features to improve the accuracy of prediction. Contact: [email protected] Availability and Implementation: The software and data are freely available at https://github.com/ypriverol/pIR. Supplementary information: Supplementary data are available at Bioinformatics online.
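A minimal iterative (bisection) pI estimator over the standard Henderson-Hasselbalch charge model is sketched below, using one approximate pKa set; the benchmarked methods differ mainly in their pKa basis sets, and the machine-learning predictors are not reproduced here.

```python
# Bisection pI calculator over a toy pKa basis set (values approximate).
PKA = {"C_term": 3.6, "N_term": 8.6,
       "D": 3.9, "E": 4.1, "C": 8.5, "Y": 10.1,   # acidic side chains
       "H": 6.5, "K": 10.8, "R": 12.5}            # basic side chains

def net_charge(seq, ph):
    """Net charge of a peptide at a given pH (termini plus ionizable side chains)."""
    pos = 10 ** (PKA["N_term"] - ph) / (1 + 10 ** (PKA["N_term"] - ph))
    pos += sum(10 ** (PKA[a] - ph) / (1 + 10 ** (PKA[a] - ph)) for a in seq if a in "HKR")
    neg = 10 ** (ph - PKA["C_term"]) / (1 + 10 ** (ph - PKA["C_term"]))
    neg += sum(10 ** (ph - PKA[a]) / (1 + 10 ** (ph - PKA[a])) for a in seq if a in "DECY")
    return pos - neg

def isoelectric_point(seq, lo=0.0, hi=14.0, tol=1e-4):
    """Bisect the titration curve until the net charge crosses zero."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if net_charge(seq, mid) > 0 else (lo, mid)
    return round((lo + hi) / 2, 2)

print(isoelectric_point("PEPTIDE"))   # low (acidic) pI for this Asp/Glu-rich toy peptide
```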


2021 ◽  
Author(s):  
Fang He ◽  
John H Page ◽  
Kerry R Weinberg ◽  
Anirban Mishra

BACKGROUND The current COVID-19 pandemic is unprecedented; in resource-constrained settings, predictive algorithms can help stratify disease severity and alert physicians to high-risk patients. However, few risk scores have been derived from a substantially large EHR dataset using simplified predictors as input. OBJECTIVE To develop and validate simplified machine learning algorithms that predict COVID-19 adverse outcomes; to evaluate the AUC (area under the receiver operating characteristic curve), sensitivity, specificity, and calibration of the algorithms; and to derive clinically meaningful thresholds. METHODS We conducted machine learning model development and validation via a cohort study using multicenter, patient-level, longitudinal electronic health records (EHR) from the Optum® COVID-19 database, which provides anonymized, longitudinal EHR from across the US. The models were developed based on clinical characteristics to predict 28-day in-hospital mortality, ICU admission, respiratory failure, and mechanical ventilator usage in the inpatient setting. Data from patients admitted prior to Sep 7, 2020, were randomly sampled into development, test, and validation datasets; data collected from Sep 7, 2020 through Nov 15, 2020 were reserved as a prospective validation dataset. RESULTS Of the 3.7M patients in the analysis, a total of 585,867 patients were diagnosed with or tested positive for SARS-CoV-2, and 50,703 adult patients were hospitalized with COVID-19 between Feb 1 and Nov 15, 2020. Among the study cohort (N=50,703), there were 6,204 deaths, 9,564 ICU admissions, 6,478 mechanically ventilated or ECMO patients, and 25,169 patients who developed ARDS or respiratory failure within 28 days of hospital admission. The algorithms demonstrated high accuracy (AUC = 0.89 (0.89 - 0.89) on the validation dataset (N=10,752)), consistent prediction through the second wave of the pandemic from September to November (AUC = 0.85 (0.85 - 0.86) on post-development validation (N=14,863)), and strong clinical relevance and utility. In addition, a comprehensive set of 386 input covariates from baseline and at admission was included in the analysis; the end-to-end pipeline automates the feature selection and model development process, producing 10 key predictors as input, such as age, blood urea nitrogen, and oxygen saturation, which are both commonly measured and concordant with recognized risk factors for COVID-19. CONCLUSIONS The systematic approach and rigorous validations demonstrate consistent model performance to predict even beyond the time period of data collection, with satisfactory discriminatory power and strong clinical utility. Overall, the study offers an accurate, validated, and reliable prediction model based on only ten clinical features as a prognostic tool for stratifying COVID-19 patients into intermediate-, high-, and very high-risk groups. This simple predictive tool could be shared with the wider healthcare community to serve as an early warning system alerting physicians to possible high-risk patients, or as a resource triaging tool to optimize healthcare resources. CLINICALTRIAL N/A
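As an illustration only, the sketch below mimics the general workflow on synthetic data: fit a classifier on a few admission features (the names mirror three of the reported predictors), report the AUC, and derive a risk threshold from the ROC curve. It is not the authors' pipeline, and the simulated coefficients are invented.

```python
# Toy risk-score workflow on synthetic admission data (not the study's model).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000
X = np.column_stack([rng.normal(65, 15, n),     # age
                     rng.normal(20, 8, n),      # blood urea nitrogen
                     rng.normal(94, 4, n)])     # oxygen saturation
logit = 0.05 * (X[:, 0] - 65) + 0.06 * (X[:, 1] - 20) - 0.20 * (X[:, 2] - 94) - 2
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # synthetic adverse outcome

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
p = clf.predict_proba(X_va)[:, 1]
print("AUC:", round(roc_auc_score(y_va, p), 3))

# Derive a usable risk threshold, e.g. the ROC point closest to (0, 1).
fpr, tpr, thr = roc_curve(y_va, p)
best = np.argmin(np.hypot(fpr, 1 - tpr))
print("threshold:", round(thr[best], 3), "sensitivity:", round(tpr[best], 2))
```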


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Palash Rai ◽  
Rahul Kaushik

Abstract A technique for estimating the optical signal-to-noise ratio (OSNR) using machine learning algorithms is proposed. The algorithms are trained with parameters derived from eye diagrams via simulation of a 10 Gb/s on-off keying (OOK) non-return-to-zero (NRZ) data signal. The performance of different machine learning (ML) techniques, namely multiple linear regression, random forest, and K-nearest neighbor (K-NN), for OSNR estimation is compared in terms of mean square error and R-squared value. The proposed methods may be useful for intelligent signal analysis in a test instrument and for monitoring optical performance.
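A minimal sketch of the comparison described above, using synthetic stand-ins for the eye-diagram features (eye height, eye opening, and Q-factor here are assumptions, not the paper's exact feature list):

```python
# Compare three regressors for OSNR estimation on synthetic eye-diagram features.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
osnr = rng.uniform(10, 30, 500)                                     # ground-truth OSNR (dB)
features = np.column_stack([osnr + rng.normal(0, 1, 500),           # "eye height"
                            0.5 * osnr + rng.normal(0, 1, 500),     # "eye opening"
                            np.sqrt(osnr) + rng.normal(0, 0.2, 500)])  # "Q-factor"

X_tr, X_te, y_tr, y_te = train_test_split(features, osnr, random_state=0)
for name, model in [("linear regression", LinearRegression()),
                    ("random forest", RandomForestRegressor(random_state=0)),
                    ("k-NN", KNeighborsRegressor(n_neighbors=5))]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name, "MSE:", round(mean_squared_error(y_te, pred), 3),
          "R2:", round(r2_score(y_te, pred), 3))
```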


2020 ◽  
Vol 2020 ◽  
pp. 1-7
Author(s):  
Nalindren Naicker ◽  
Timothy Adeliyi ◽  
Jeanette Wing

Educational Data Mining (EDM) is a rich research field in computer science. Tools and techniques in EDM are useful for predicting student performance, giving practitioners insights to develop appropriate intervention strategies that improve pass rates and increase retention. The performance of state-of-the-art machine learning classifiers is very much dependent on the task at hand. Support vector machines have been used extensively in classification problems; however, the extant literature shows a gap in the application of linear support vector machines as a predictor of student performance. The aim of this study was to compare the performance of linear support vector machines with that of state-of-the-art classical machine learning algorithms in order to determine which algorithm would improve prediction of student performance. In this quantitative study, an experimental research design was used. Experiments were set up using feature selection on a publicly available dataset of 1000 alphanumeric student records. Linear support vector machines, benchmarked against ten classical machine learning algorithms, showed superior performance in predicting student performance. The results of this research showed that features such as race, gender, and lunch influence performance in mathematics, whilst access to lunch was the primary factor influencing reading and writing performance.
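A minimal sketch of such a benchmark: a linear SVM compared with two other classifiers on one-hot-encoded student attributes. The data and pass/fail label below are synthetic and the feature names only mirror the attributes mentioned above; only the overall comparison pattern follows the study.

```python
# Benchmark a linear SVM against other classifiers on synthetic student records.
import numpy as np
import pandas as pd
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
df = pd.DataFrame({"gender": rng.choice(["female", "male"], 1000),
                   "race": rng.choice(list("ABCDE"), 1000),
                   "lunch": rng.choice(["standard", "free/reduced"], 1000)})
# Invented pass/fail label, loosely tied to lunch access for illustration.
y = (rng.random(1000) + (df["lunch"] == "standard") * 0.2 > 0.6).astype(int)

for name, clf in [("linear SVM", LinearSVC(dual=False)),
                  ("logistic regression", LogisticRegression(max_iter=1000)),
                  ("random forest", RandomForestClassifier(random_state=0))]:
    pipe = make_pipeline(OneHotEncoder(handle_unknown="ignore"), clf)
    print(name, "CV accuracy:", round(cross_val_score(pipe, df, y, cv=5).mean(), 3))
```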


Entropy ◽  
2020 ◽  
Vol 22 (9) ◽  
pp. 936
Author(s):  
Milton A. Garcés

Increased data acquisition by uncalibrated, heterogeneous digital sensor systems such as smartphones presents new challenges. Binary metrics are proposed for the quantification of cyber-physical signal characteristics and features, and a standardized constant-Q variation of the Gabor atom is developed for use with wavelet transforms. Two different continuous wavelet transform (CWT) reconstruction formulas are presented and tested under different signal-to-noise ratio (SNR) conditions. A sparse superposition of Nth-order Gabor atoms worked well against a synthetic blast transient when using the wavelet entropy and an entropy-like parametrization of the SNR as the CWT coefficient-weighting functions. The proposed methods should be well suited for sparse feature extraction and dictionary-based machine learning across multiple sensor modalities.
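A rough constant-Q Gabor atom bank and a brute-force CWT by direct convolution are sketched below (NumPy only). The paper's standardized atom normalization, reconstruction formulas, and binary metrics are not reproduced; the parameter values and the toy blast transient are illustrative.

```python
# Constant-Q Gabor atoms and a simple CWT by correlation with each atom.
import numpy as np

def gabor_atom(center_freq, q, fs, n_cycles=4):
    """Complex Gabor atom whose envelope width scales as q / center_freq (constant Q)."""
    sigma = q / (2 * np.pi * center_freq)
    t = np.arange(-n_cycles * sigma, n_cycles * sigma, 1 / fs)
    return np.exp(-t**2 / (2 * sigma**2)) * np.exp(2j * np.pi * center_freq * t)

def cwt(signal, freqs, q, fs):
    # Correlate the signal with each atom (convolution with the reversed conjugate).
    return np.array([np.convolve(signal, np.conj(gabor_atom(f, q, fs))[::-1], "same")
                     for f in freqs])

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
blast = np.exp(-40 * t) * np.sin(2 * np.pi * 60 * t)               # toy blast-like transient
noisy = blast + 0.1 * np.random.default_rng(0).standard_normal(t.size)
coeffs = cwt(noisy, np.geomspace(20, 200, 24), q=8, fs=fs)
print(coeffs.shape)                                                 # (n_frequencies, n_samples)
```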


2020 ◽  
Vol 2020 ◽  
pp. 1-15
Author(s):  
Gustavo Asumu Mboro Nchama ◽  
Angela Leon Mecias ◽  
Mariano Rodriguez Ricard

The Perona-Malik (PM) model is used successfully in image processing to eliminate noise while preserving edges; however, it has a major drawback: it tends to make the image look blocky. This work proposes to modify the PM model by introducing the Caputo-Fabrizio fractional gradient inside the diffusivity function. Experiments with natural images show that our model can efficiently suppress the blocky effect. Our model also performs well in terms of visual quality, achieving a high peak signal-to-noise ratio (PSNR) and lower values of the mean absolute error (MAE) and mean square error (MSE).
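For reference, a compact implementation of the classical Perona-Malik diffusion is sketched below; the paper's actual contribution, the Caputo-Fabrizio fractional gradient inside the diffusivity, is not reproduced, and the parameter values are illustrative.

```python
# Classical Perona-Malik anisotropic diffusion on a grayscale image.
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Differences toward the four neighbours (periodic borders via np.roll).
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        # Edge-stopping diffusivity g(|grad u|) = exp(-(|grad u| / kappa)^2).
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

noisy = np.clip(0.5 + 0.1 * np.random.default_rng(0).standard_normal((64, 64)), 0, 1)
denoised = perona_malik(noisy)
```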


2020 ◽  
pp. 107754632092983
Author(s):  
Leonardo S Jablon ◽  
Sergio L Avila ◽  
Bruno Borba ◽  
Gustavo L Mourão ◽  
Fabrizio L Freitas ◽  
...  

The diagnosis of failures in rotating machines has been the subject of many studies because of its benefits to maintenance improvement. Condition monitoring reduces maintenance costs, increases reliability and availability, and extends the useful life of critical rotating machinery in industrial environments. Machine learning techniques have been evolving rapidly, and their applications are bringing better performance to many fields. This study presents a new strategy to improve the diagnosis performance of rotating machines using machine learning strategies on vibration orbital features. The advantage of using orbits, in comparison with other vibration measurement systems, is the simplicity of the instrumentation involved as well as the multiplicity of information contained in the orbit. On the other hand, rolling element bearings are prevalent in industrial machinery. This type of bearing exhibits less orbital oscillation and is noisier than sliding contact bearings; therefore, it is more difficult to extract useful information. Practical results on an industrial motor workbench with rolling element bearings are presented, and the robustness of the algorithm is evaluated by calculating diagnosis accuracy using inputs with different signal-to-noise ratios. For this kind of noisy scenario, where signal analysis is naturally difficult, the algorithm classifies correctly approximately 85% of the time. In a completely harsh environment, where the signal-to-noise ratio can be lower than −25 dB, the accuracy achieved is close to 60%. These statistics show that the proposed strategy can be robust for rotating machine unbalance condition diagnosis even in the worst scenarios, which is required for industrial applications.
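A hedged sketch of the orbital approach: build a shaft orbit from two orthogonal vibration channels, extract a few simple orbital features, and train a classifier. The feature set, the synthetic signals, and the SNR handling below are assumptions for illustration, not the paper's method.

```python
# Toy unbalance classification from orbital features of two vibration channels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def orbit_features(x, y):
    """A few simple descriptors of the x-y orbit (illustrative, not the paper's set)."""
    amplitude = np.hypot(x, y)
    return np.array([amplitude.max(),            # orbit size
                     amplitude.std(),            # shape irregularity
                     np.corrcoef(x, y)[0, 1]])   # x-y coupling proxy

def synth(unbalanced, snr_db=-10):
    """Toy synchronous vibration pair from two orthogonal sensors, heavily noisy."""
    t = np.linspace(0, 1, 2048)
    a = 2.0 if unbalanced else 0.5               # 1x amplitude grows with unbalance
    x = a * np.sin(2 * np.pi * 30 * t)
    y = a * np.cos(2 * np.pi * 30 * t)
    noise = a * 10 ** (-snr_db / 20)             # noise level set from the target SNR
    return (x + noise * rng.standard_normal(t.size),
            y + noise * rng.standard_normal(t.size))

labels = rng.integers(0, 2, 200)
X = np.array([orbit_features(*synth(bool(lbl))) for lbl in labels])
clf = RandomForestClassifier(random_state=0).fit(X[:150], labels[:150])
print("holdout accuracy:", clf.score(X[150:], labels[150:]))
```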


Author(s):  
Shuangxia Ren ◽  
Jill Zupetic ◽  
Mehdi Nouraie ◽  
Xinghua Lu ◽  
Richard D. Boyce ◽  
...  

Abstract Background: The partial pressure of oxygen (PaO2)/fraction of oxygen delivered (FIO2) ratio is the reference standard for assessment of hypoxemia in mechanically ventilated patients. Non-invasive monitoring with the peripheral saturation of oxygen (SpO2) is increasingly utilized to estimate PaO2 because it does not require invasive sampling. Several equations have been reported to impute PaO2/FIO2 from SpO2/FIO2. However, machine learning algorithms to impute PaO2 from SpO2 have not been compared with published equations. Research Question: How do machine learning algorithms perform at predicting PaO2 from SpO2 compared with previously published equations? Methods: Three machine learning algorithms (neural network, regression, and kernel-based methods) were developed using 7 clinical variable features (n=9,900 ICU events) and subsequently 3 features (n=20,198 ICU events) as input into the models, from data available for mechanically ventilated patients in the Medical Information Mart for Intensive Care (MIMIC) III database. As a regression task, the machine learning models were used to impute PaO2 values. As a classification task, the models were used to predict patients with moderate-to-severe hypoxemic respiratory failure based on a clinically relevant cut-off of PaO2/FIO2 ≤ 150. The accuracy of the machine learning models was compared with published log-linear and non-linear equations. An online imputation calculator was created. Results: Compared with seven features, three features (SpO2, FiO2, and PEEP) were sufficient to impute the PaO2/FIO2 ratio using a large dataset. Each of the tested machine learning models imputed PaO2/FIO2 from SpO2/FIO2 with lower error and had greater accuracy in predicting PaO2/FIO2 ≤ 150 compared with published equations. Using three features, the machine learning models showed superior performance in imputing PaO2 across the entire span of SpO2 values, including those ≥ 97%. Interpretation: The improved performance shown for the machine learning algorithms suggests a promising framework for future use in large datasets.
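A minimal sketch of the three-feature regression task (SpO2, FiO2, and PEEP as inputs, PaO2 as target) together with the PaO2/FIO2 ≤ 150 classification check; the data below are synthetic stand-ins for MIMIC-III, the linear data-generating relation is invented, and the published comparison equations are not reproduced.

```python
# Impute PaO2 from SpO2, FiO2, and PEEP on synthetic data, then check P/F <= 150.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, accuracy_score

rng = np.random.default_rng(0)
n = 4000
spo2 = rng.uniform(85, 100, n)                       # peripheral oxygen saturation (%)
fio2 = rng.uniform(0.3, 1.0, n)                      # fraction of delivered oxygen
peep = rng.uniform(5, 18, n)                         # PEEP (cm H2O)
pao2 = 0.9 * spo2 + 30 * fio2 - 1.5 * peep + rng.normal(0, 8, n)   # invented relation

X = np.column_stack([spo2, fio2, peep])
X_tr, X_te, y_tr, y_te = train_test_split(X, pao2, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                     random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("MAE (mm Hg):", round(mean_absolute_error(y_te, pred), 1))

# Classification view: moderate-to-severe hypoxemia if PaO2/FIO2 <= 150.
pf_true = y_te / X_te[:, 1] <= 150
pf_pred = pred / X_te[:, 1] <= 150
print("P/F <= 150 agreement:", round(accuracy_score(pf_true, pf_pred), 3))
```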

