repeated runs
Recently Published Documents

TOTAL DOCUMENTS: 22 (FIVE YEARS: 9)
H-INDEX: 5 (FIVE YEARS: 2)

2020 ◽  
Vol 83 (1) ◽  
pp. 85-92
Author(s):  
Mohd Azahar Mohd Ariff ◽  
Muhammad Syafiq Abd Jalil ◽  
Noor ‘Aina Abdul Razak ◽  
Jefri Jaapar

Caesalpinia sappan Linn. (CSL), a plant also known as the Sepang tree, has various medicinal uses, such as treating diarrhea, skin rashes, syphilis, jaundice, and diabetes, purifying the blood when taken in drinking water, and improving skin complexion. The aim of this study was to determine the optimum conditions, in terms of the sample-to-solvent ratio, particle size, and extraction time, for obtaining the highest concentration of CSL extract. The ranges of the parameters were: sample-to-solvent ratio of 1.0:20, 1.5:20, 2.0:20, 2.5:20, and 3.0:20; particle size of 1 mm, 500 µm, 250 µm, 125 µm, and 63 µm; and extraction time of 1, 2, 3, 4, and 5 hours. The concentration was analyzed using a UV-Vis spectrophotometer, and the optimum conditions were obtained by response surface methodology (RSM). From the design, 20 samples were run throughout the experiment. The optimized values from the RSM were a sample-to-solvent ratio of 2.0:20, a particle size of 125 µm, and an extraction time of 2.48 hours, giving a concentration of 37.1184 ppm. The accuracy of the predictive model was validated with two repeated runs, and the mean percentage error was less than 3%. This confirmed the model's capability for optimizing the conditions for the reflux extraction of CSL wood.
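
The abstract does not include the authors' RSM implementation, so the following is only a minimal sketch of the general idea: fit a quadratic response surface to extraction data and locate its maximum. The design points, concentrations, and parameter names below are invented placeholders, not values from the study.

```python
# Minimal response-surface sketch: fit a quadratic model of extract
# concentration vs. sample-to-solvent ratio and extraction time,
# then locate the maximum. All data below are synthetic placeholders.
import numpy as np
from scipy.optimize import minimize

# Hypothetical design points: (ratio numerator per 20 mL solvent, hours)
X = np.array([[1.0, 1], [1.0, 3], [1.5, 2], [2.0, 2], [2.0, 4],
              [2.5, 3], [2.5, 5], [3.0, 2], [3.0, 5]])
y = np.array([24.0, 28.0, 33.0, 36.5, 35.0, 33.0, 30.0, 29.0, 26.0])  # ppm (made up)

def design(x):
    r, t = x[..., 0], x[..., 1]
    # Full quadratic model terms: 1, r, t, r*t, r^2, t^2
    return np.stack([np.ones_like(r), r, t, r * t, r**2, t**2], axis=-1)

beta, *_ = np.linalg.lstsq(design(X), y, rcond=None)

# Maximize the fitted surface (minimize its negative) within the design range
res = minimize(lambda x: -design(np.atleast_2d(x))[0] @ beta,
               x0=[2.0, 2.5], bounds=[(1.0, 3.0), (1.0, 5.0)])
print("optimum (ratio, hours):", res.x, "predicted ppm:", -res.fun)
```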


Author(s):  
Mohamed ElSeify ◽  
Sylvain Cornu ◽  
Raymond Karé ◽  
Ali Fathi ◽  
John Richmond

Abstract Axial strain inspection using the AXISS™ is an established tool in the pipeline operator's toolbox for assessing geotechnical threats and other strain-related events. Consequently, there is a large database of axial strain data for several different pipelines operating in different environments, including multiple inspections at the same geographical locations. The Cheecham slope, located southeast of Fort McMurray, Alberta, is a known geohazard site crossed by six individual pipelines. The lines were constructed between 1999 and 2013 and range in size from 10” to 36”. Five of the six lines, 12” to 36”, have been inspected using the axial strain tool. The pipelines inspected cover a range of characteristics, including different vintages, pipe diameters, and positions in the right-of-way (ROW). These differences, together with the ILI runs, provide insight into the effect of a landslide event on the strain response of these pipelines. Axial strain measurement of the multiple pipelines in the Cheecham slope's ROW allows: i) a direct comparison between lines; ii) evaluation of the strain profile across the slope; and iii) assessment of the magnitude of the axial strain in terms of pipe characteristics, e.g. pipe vintage and mechanical properties. More importantly, the axial strain data may provide an additional tool to assess the effectiveness of strain mitigation steps carried out over the years. An increase in the frequency of axial strain ILI runs has made additional data available and, more importantly, data from run-to-run inspections spread over months or sometimes years. A single run captures the strain at the time of inspection, but run-to-run inspections provide an additional comparative tool to evaluate and monitor pipeline movement. Two of the five lines inspected have run-to-run axial strain data. This paper takes the Cheecham slope as a case study to discuss the benefits of run comparison of ILI axial strain data, either by comparing strain values of repeated runs for a single line or by cross-comparing the strain responses of different lines in the same ROW. The paper aims to demonstrate how run-to-run analysis of ILI axial strain data can be implemented as part of a geohazard risk management program to assess the strain risk profiles of these locations and to assess the effectiveness of strain mitigation programs previously undertaken by operators.
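
The run-to-run comparison workflow is not spelled out in the abstract; as a hedged illustration of the general idea, the sketch below aligns two axial-strain logs by chainage via interpolation and flags locations where the strain changed between inspections beyond a threshold. The data layout, values, and the 200 µε alert level are assumptions for illustration only.

```python
# Illustrative run-to-run comparison of axial strain ILI logs:
# interpolate the later run onto the earlier run's chainage grid
# and flag locations where strain changed beyond a threshold.
import numpy as np

# Hypothetical logs: (chainage in m, axial strain in microstrain)
run_2018 = np.array([[0, 50], [100, 120], [200, 600], [300, 580], [400, 90]])
run_2021 = np.array([[0, 55], [110, 130], [210, 900], [310, 650], [400, 100]])

chainage = run_2018[:, 0]
strain_a = run_2018[:, 1]
# Align the second run onto the first run's measurement positions
strain_b = np.interp(chainage, run_2021[:, 0], run_2021[:, 1])

delta = strain_b - strain_a
THRESHOLD = 200  # microstrain; placeholder alert level
for x, d in zip(chainage, delta):
    if abs(d) > THRESHOLD:
        print(f"chainage {x:.0f} m: strain change {d:+.0f} µε — review for movement")
```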


2020 ◽  
Vol 6 (3) ◽  
pp. 497-500
Author(s):  
Hannes Oppermann ◽  
Felix Wichum ◽  
Jens Haueisen ◽  
Matthias Klemm ◽  
Lorenz Esch

Abstract Transcranial magnetic stimulation (TMS) is an established method for treating various neurological diseases, such as depression, Alzheimer’s disease, and tinnitus. New applications for TMS are closed-loop neurofeedback (NF) scenarios, which require software control of the TMS system instead of the currently used manual control. Hence, the MagCPP (https://github.com/MagCPP) toolbox was developed and is described in this work. The toolbox enables external control of Magstim TMS devices via a C++ interface. Comparing MagCPP to two other toolboxes in a TMS application scenario with 40% power, we found that MagCPP works faster and has lower variability in repeated runs (MagCPP, Python, MATLAB [mean±std in seconds]: 1.19±0.00, 1.59±0.01, 1.44±0.02). An integration of MagCPP into the real-time data processing platform MNE-CPP, with an optional GUI, demonstrates its suitability as part of a closed-loop NF scenario. With its performance advantages over other toolboxes, MagCPP is a first step towards a complete closed-loop NF scenario and offers possibilities for novel study designs.
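
MagCPP's actual C++ API is not reproduced here. Purely to illustrate the kind of timing comparison reported above (mean ± standard deviation over repeated runs), the sketch below benchmarks a stimulation-command round trip in Python; trigger_stimulus is a hypothetical stub, not part of the MagCPP interface.

```python
# Illustrative latency benchmark over repeated runs, as used to compare
# the toolboxes above. trigger_stimulus is a hypothetical stand-in.
import statistics
import time

def trigger_stimulus(power_percent: int) -> None:
    """Hypothetical stub for sending a TMS trigger at the given power."""
    time.sleep(0.01)  # placeholder for the device round-trip time

durations = []
for _ in range(20):  # repeated runs to estimate variability
    start = time.perf_counter()
    trigger_stimulus(power_percent=40)
    durations.append(time.perf_counter() - start)

print(f"mean ± std: {statistics.mean(durations):.3f} "
      f"± {statistics.stdev(durations):.3f} s")
```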


eLife ◽  
2020 ◽  
Vol 9 ◽  
Author(s):  
Nalin Leelatian ◽  
Justine Sinnaeve ◽  
Akshitkumar M Mistry ◽  
Sierra M Barone ◽  
Asa A Brockman ◽  
...  

A goal of cancer research is to reveal cell subsets linked to continuous clinical outcomes to generate new therapeutic and biomarker hypotheses. We introduce a machine learning algorithm, Risk Assessment Population IDentification (RAPID), that is unsupervised and automated, identifies phenotypically distinct cell populations, and determines whether these populations stratify patient survival. With a pilot mass cytometry dataset of 2 million cells from 28 glioblastomas, RAPID identified tumor cells whose abundance independently and continuously stratified patient survival. Statistical validation within the workflow included repeated runs of stochastic steps and cell subsampling. Biological validation used an orthogonal platform, immunohistochemistry, and a larger cohort of 73 glioblastoma patients to confirm the findings from the pilot cohort. RAPID was also validated on published blood cancer data, where it identified known risk-stratifying cells and features. Thus, RAPID provides an automated, unsupervised approach for finding statistically and biologically significant cells using cytometry data from patient samples.
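
RAPID's internal statistics are not specified in the abstract. As a loose illustration of validation by "repeated runs of stochastic steps and cell subsampling", the sketch below re-clusters random subsamples with different seeds and inspects how stable the cluster abundances are; the synthetic data, cluster count, and k-means stand-in are all assumptions, not RAPID's actual method.

```python
# Illustration of stability validation by repeated stochastic runs:
# cluster random subsamples of cells with different seeds and inspect
# the spread of per-cluster abundance across runs. Synthetic data only.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
cells = rng.normal(size=(5000, 10))  # placeholder for marker intensities

abundances = []
for seed in range(10):  # repeated runs with cell subsampling
    idx = rng.choice(len(cells), size=2000, replace=False)
    labels = KMeans(n_clusters=5, random_state=seed, n_init=10).fit_predict(cells[idx])
    counts = np.bincount(labels, minlength=5) / len(idx)
    # Sort so cluster sizes are comparable across runs despite label permutation
    abundances.append(np.sort(counts))

abundances = np.array(abundances)
print("abundance mean per cluster:", abundances.mean(axis=0).round(3))
print("abundance std per cluster: ", abundances.std(axis=0).round(3))
```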


2020 ◽  
Vol 81 (1) ◽  
pp. 148-158
Author(s):  
Xiaoping Wang ◽  
Wei Liu ◽  
Xueqian Liu ◽  
Jihang Luo

Abstract Microbubble-pretreated resin was used for demulsification and deoiling of a simulated O/W emulsion. The demulsification and deoiling performance and the influencing factors were investigated systematically. Experimental results indicate that the microbubble-pretreated resin reaches 97% oil removal within 80 min; by contrast, oil removals are 90% and 85% for NaOH-solution-soaked and un-pretreated resins, respectively. After five repeated runs, the oil removal of the microbubble-pretreated resin can be maintained at over 70%. The demulsification mechanism was revealed by comparing the zeta potential, surface tension, and contact angle of the emulsion during treatment, together with the characterization results of the resin before and after use. Three possible demulsification pathways were identified, and their contributions are ranked below. Pathway 1: Competitive trapping of surfactant. The cationic groups of the resin combine with the anionic groups of the surfactant and drag them away from the oil particle surface. Pathway 2: Distribution equilibrium of surfactant. Free surfactants in the emulsion are captured by the resin, reducing the concentration of uncombined surfactant; as a result, surfactants on the oil particles partly detach from the oil surface to maintain the adsorption-desorption equilibrium. Pathway 3: Adsorption coalescence.


Symmetry ◽  
2019 ◽  
Vol 11 (8) ◽  
pp. 956 ◽  
Author(s):  
Yu-Tung Chen ◽  
Eduardo Piedad ◽  
Cheng-Chien Kuo

Energy consumers may not know whether their next-hour forecasted load will be high or low based on the actual value predicted from their historical data. A conventional method of level prediction with a pattern recognition approach first predicts the actual numerical values using typical pattern-based regression models, then classifies them into pattern levels (e.g., low, average, and high). A proposed prediction-with-pattern-recognition scheme was developed to directly predict the desired levels using simpler classifier models, without undergoing regression. The proposed pattern recognition classifier was compared to its regression counterpart using a similar algorithm applied to a real-world energy dataset. A random forest (RF) algorithm, which outperformed other widely used machine learning (ML) techniques in previous research, was used in both methods. Both schemes used similar parameters for training and testing simulations. After 10-fold cross-validation training and five averaged repeated runs with random permutation per data split, the proposed classifier shows better computation speed and higher classification accuracy than the conventional method. However, as the number of desired levels increases, its prediction accuracy decreases and approaches the accuracy of the conventional method. The developed energy level prediction, which is computationally inexpensive and has good classification performance, can serve as an alternative forecasting scheme.
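
To make the two schemes concrete, the sketch below contrasts (a) regressing the numeric load and then binning the predictions into levels against (b) classifying the levels directly, both with random forests. The synthetic data, features, and tercile bin edges are placeholder assumptions, not the paper's dataset or settings.

```python
# Illustration of the two schemes compared above: (a) regress the
# numeric load then bin into levels, vs. (b) classify levels directly.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))                 # placeholder feature matrix
load = X @ rng.normal(size=8) + rng.normal(scale=0.5, size=2000)
bins = np.quantile(load, [1 / 3, 2 / 3])       # low / average / high levels
levels = np.digitize(load, bins)

X_tr, X_te, y_tr, y_te, lv_tr, lv_te = train_test_split(
    X, load, levels, test_size=0.25, random_state=0)

# (a) Conventional: regression first, then bin the predictions
reg = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
acc_reg = np.mean(np.digitize(reg.predict(X_te), bins) == lv_te)

# (b) Proposed: predict the level directly with a classifier
clf = RandomForestClassifier(random_state=0).fit(X_tr, lv_tr)
acc_clf = clf.score(X_te, lv_te)

print(f"regression-then-bin accuracy: {acc_reg:.3f}")
print(f"direct classification accuracy: {acc_clf:.3f}")
```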


Processing unordered and unbounded data is a prime requirement of current businesses. Large amounts of rapidly generated data must be processed without storage, according to the timestamps associated with them. It is difficult to process such unbounded data with a batch engine, as existing batch systems suffer from the delay intrinsic to accumulating all incoming records into a group before processing. However, windowing, which slices a dataset into fixed chunks processed by repeated runs of a batch engine, can be useful when dealing with unbounded data. In contrast to batch processing, a stream-processing system aims to process information gathered within a short timeframe; stream processing should therefore keep pace with the flow of data. In the real world, event time is always skewed relative to processing time, which introduces issues of delay and completeness in the incoming data stream. In this paper, we present an analysis of the watermark and trigger approach, which can be used to manage these requirements when processing unbounded data.
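
As a minimal sketch of the watermark and trigger idea discussed above, the following assigns out-of-order events to fixed event-time windows and fires a window once the watermark passes its end. The stream contents, window length, and 2-second allowed lateness are invented for illustration.

```python
# Illustration of event-time windowing with a watermark: events are
# assigned to fixed (tumbling) windows by event time, and a window is
# emitted (triggered) once the watermark passes its end.
from collections import defaultdict

WINDOW = 10    # window length in seconds of event time
LATENESS = 2   # watermark lags the max seen event time by this much

windows = defaultdict(list)   # window start -> values
watermark = float("-inf")
emitted = set()

# (event_time, value) pairs arriving out of order, as in real streams
stream = [(1, "a"), (4, "b"), (12, "c"), (9, "d"), (15, "e"), (23, "f")]

for event_time, value in stream:
    start = (event_time // WINDOW) * WINDOW
    if start in emitted:
        print(f"late event at t={event_time}: {value!r} (window already fired)")
    else:
        windows[start].append(value)
    watermark = max(watermark, event_time - LATENESS)
    # Trigger: emit every window whose end the watermark has passed
    for w_start in sorted(windows):
        if w_start + WINDOW <= watermark and w_start not in emitted:
            print(f"window [{w_start}, {w_start + WINDOW}):", windows[w_start])
            emitted.add(w_start)
```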


2019 ◽  
Vol 30 (6) ◽  
pp. 3518-3527 ◽  
Author(s):  
Martin Szinte ◽  
Tomas Knapen

Abstract The default network (DN) is a brain network with correlated activities spanning the frontal, parietal, and temporal cortical lobes. The DN activates during high-level cognition tasks and deactivates when subjects are actively engaged in perceptual tasks. Despite numerous observations, the role of DN deactivation remains unclear. Using computational neuroimaging applied to a large dataset from the Human Connectome Project (HCP) and to two individual subjects scanned over many repeated runs, we demonstrate that the DN selectively deactivates as a function of the position of a visual stimulus. That is, we show that spatial vision is encoded within the DN by means of deactivation relative to baseline. Our results suggest that the DN functions as a set of high-level visual regions, opening up the possibility of using vision-science tools to understand its putative function in cognition and perception.


2018 ◽  
Vol 7 (4.10) ◽  
pp. 1089
Author(s):  
Sivanantham S ◽  
Aravind Babu S ◽  
Babu Ramki ◽  
Mallick P.S

This paper presents a new X-filling algorithm for test power reduction and a novel encoding technique for test data compression in scan-based VLSI testing. The proposed encoding technique focuses on replacing redundant runs of the equal-run-length vector with a shorter codeword. The effectiveness of this compression method depends on the number of repeated runs occurring in the fully specified test set. To maximize the repeated runs of equal run length, the unspecified bits in the test cubes are filled using the proposed technique, called alternating equal-run-length (AERL) filling. The resulting test data are compressed using the proposed alternating equal-run-length coding to reduce the test data volume. An efficient decompression architecture is also presented to decode the original data with lower area overhead and power. Experimental results obtained from the larger ISCAS'89 benchmark circuits show the efficiency of the proposed work. AERL achieves a compression ratio of up to 82.05%, as well as reductions of up to 39.81% and 93.20% in peak and average power transitions, respectively, in scan-in mode during IC testing.
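
The AERL codeword format is not given in the abstract; the sketch below only illustrates the underlying idea of collapsing consecutive runs of equal length into a single (bit, run length, repeat count) codeword. This format is a made-up illustration, not the paper's actual encoding.

```python
# Illustration of the idea behind equal-run-length coding: collapse a
# bit vector into runs, then replace consecutive runs of equal length
# with one (bit, run_length, repeat_count) codeword.
from itertools import groupby

def encode(bits: str):
    # Step 1: classic run-length decomposition, e.g. "000110" -> [('0',3),('1',2),('0',1)]
    runs = [(b, len(list(g))) for b, g in groupby(bits)]
    # Step 2: merge consecutive runs sharing the same length (bits alternate)
    codewords, i = [], 0
    while i < len(runs):
        j = i
        while j + 1 < len(runs) and runs[j + 1][1] == runs[i][1]:
            j += 1
        codewords.append((runs[i][0], runs[i][1], j - i + 1))
        i = j + 1
    return codewords

def decode(codewords) -> str:
    out = []
    for bit, length, repeats in codewords:
        for k in range(repeats):
            # merged runs alternate between bit and its complement
            current = bit if k % 2 == 0 else ("1" if bit == "0" else "0")
            out.append(current * length)
    return "".join(out)

data = "000111000111110000"
code = encode(data)
print(code)                      # [('0', 3, 3), ('1', 5, 1), ('0', 4, 1)]
assert decode(code) == data
```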

