Decadal Climate Predictions Using Sequential Learning Algorithms

2016 ◽  
Vol 29 (10) ◽  
pp. 3787-3809 ◽  
Author(s):  
Ehud Strobach ◽  
Golan Bel

Abstract. Ensembles of climate models are commonly used to improve decadal climate predictions and to assess the associated uncertainties. Weighting the models according to their performance holds the promise of further improving their predictions. Here, an ensemble of decadal climate predictions is used to demonstrate the ability of sequential learning algorithms (SLAs) to reduce both the forecast errors and the uncertainties. Three different SLAs are considered, and their performances are compared with those of an equally weighted ensemble, a linear regression, and the climatology. Predictions of four different variables are considered: the surface temperature, the zonal and meridional winds, and the pressure. The spatial distributions of the performances are presented, and the statistical significance of the improvements achieved by the SLAs is tested. The reliability of the SLAs is also tested, and the advantages and limitations of the different performance measures are discussed. It was found that the SLAs perform best when the learning period is comparable to the prediction period. The spatial distribution of the SLAs' performance shows that they are skillful, and better than the other forecasting methods, over large contiguous regions. This finding suggests that, although none of the ensemble models is skillful on its own, each captures some physical processes that produce deviations from the climatology, and that the SLAs enable the extraction of this additional information.
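
The weighting scheme the abstract describes can be illustrated with an exponentially weighted average forecaster, a standard member of the SLA family. This is a minimal sketch, assuming a squared-error loss, a fixed learning rate `eta`, and toy data; none of these are necessarily the paper's exact choices.

```python
# Minimal sketch of an exponentially weighted average forecaster, a standard
# sequential learning algorithm: each ensemble member is re-weighted at every
# step according to its past forecast errors. The learning rate eta, the
# squared-error loss, and the toy data are illustrative assumptions.
import numpy as np

def ewa_forecast(model_preds, observations, eta=0.5):
    """model_preds: (T, M) predictions of M models at T times;
    observations: (T,) verifying observations.
    Returns the (T,) combined forecasts and the final weights."""
    T, M = model_preds.shape
    weights = np.full(M, 1.0 / M)            # start from the equally weighted ensemble
    forecasts = np.empty(T)
    for t in range(T):
        forecasts[t] = weights @ model_preds[t]    # weighted ensemble forecast
        losses = (model_preds[t] - observations[t]) ** 2
        weights = weights * np.exp(-eta * losses)  # penalize members with large errors
        weights /= weights.sum()                   # renormalize
    return forecasts, weights

# Toy ensemble: model 0 tracks the truth closely, model 1 carries a constant bias.
truth = np.linspace(0.0, 1.0, 50)
preds = np.column_stack([truth + 0.01, truth + 1.0])
forecasts, weights = ewa_forecast(preds, truth)
```

After a few dozen steps nearly all of the weight has shifted to the unbiased member, so the combined forecast beats the equally weighted average, which in this toy setup stays half the bias away from the truth.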

2015 ◽  
Vol 15 (15) ◽  
pp. 8631-8641 ◽  
Author(s):  
E. Strobach ◽  
G. Bel

Abstract. Simulated climate dynamics, initialized with observed conditions, is expected to remain synchronized with the actual dynamics for several years. However, the predictions of climate models are not sufficiently accurate. Moreover, there is a large variance between simulations initialized at different times and between different models. One way to improve climate predictions and to reduce the associated uncertainties is to use an ensemble of climate model predictions, weighted according to their past performance. Here, we show that skillful decadal-scale predictions of the 2 m temperature can be achieved by applying a sequential learning algorithm to an ensemble of decadal climate model simulations. The predictions generated by the learning algorithm are shown to be better than those of each of the models in the ensemble, the better-performing simple average, and a reference climatology. In addition, the uncertainties associated with the predictions are shown to be reduced relative to those derived from an equally weighted ensemble of bias-corrected predictions. The results show that learning algorithms can help to better assess future climate dynamics.


2020 ◽  
Author(s):  
Kshema Jose

<p>This study observed how two hypertext features – the absence of a linear or author-specified order and the availability of multiple reading aids – influence the reading comprehension processes of ESL readers. Studies with native or highly proficient users of English have suggested that readers comprehend hypertexts better than print texts. This has been attributed to (i) the presence of hyperlinks that provide access to additional information and can thus help overcome comprehension obstacles, and (ii) the absence of an author-imposed reading order, which lets readers exercise cognitive flexibility. How well readers with low language competence comprehend hypertexts remains largely under-researched. This study sought to open up the area by exploring the question: do all ESL readers comprehend a hypertext better than a print text?</p> <p>Keeping in mind that a majority of readers reading online texts in English can be hindered by three types of comprehension deficits – low language proficiency, non-availability of prior knowledge, or both – this study investigated how two characteristic features of hypertext, viz., linking to additional information and non-linearity in the presentation of information, affect the reading comprehension of ESL readers.</p> <p>Two types of texts that occur in the electronic medium – linear or pre-structured texts, and non-linear or self-navigating texts – were used in this study. Based on a comparison of the subjects’ comprehension outcomes and free recalls, text factors and reader factors that can influence the hypertext reading comprehension of ESL readers are identified.</p> <p>Contrary to what many researchers believe, the results indicate that self-navigating hypertexts might not promote deep comprehension in all ESL readers.</p>


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Martin Saveski ◽  
Edmond Awad ◽  
Iyad Rahwan ◽  
Manuel Cebrian

Abstract. As groups increasingly take over tasks from individual experts, it is ever more important to understand the determinants of group success. In this paper, we study the patterns of group success in Escape The Room, a physical adventure game in which a group is tasked with escaping a maze by collectively solving a series of puzzles. We investigate (1) the characteristics of successful groups, and (2) how accurately humans and machines can spot them from a group photo. The relationship between these two questions rests on the hypothesis that the characteristics of successful groups are encoded by features that can be spotted in their photo. We analyze >43K group photos (one photo per group) taken after groups have completed the game, from which all explicit performance-signaling information has been removed. First, we find that groups that are larger, older, and more gender-diverse but less age-diverse are significantly more likely to escape. Second, we compare humans and off-the-shelf machine learning algorithms at predicting whether a group escaped based on the completion photo. We find that individual guesses by humans achieve 58.3% accuracy, better than random but worse than machines, which achieve 71.6% accuracy. When humans are trained to guess by observing only four labeled photos, their accuracy increases to 64%. However, training humans on more labeled examples (eight or twelve) leads to a slight but statistically insignificant improvement in accuracy (67.4%). Humans in the best training condition perform on par with two, but worse than three, of the five machine learning algorithms we evaluated. Our work illustrates the potential and the limitations of machine learning systems in evaluating group performance and identifying success factors based on sparse visual cues.
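
Purely as an illustration of the prediction task (not the paper's photo-based pipeline), an off-the-shelf classifier can be fit on group-level features whose effects point in the directions the abstract reports; the generative model and all data below are synthetic assumptions.

```python
# Illustrative sketch only: fit an off-the-shelf classifier on group-level
# features (size, mean age, gender diversity, age diversity). The synthetic
# generative model below merely matches the SIGN of the reported effects
# (larger, older, more gender-diverse, less age-diverse groups escape more).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
size = rng.integers(2, 8, n).astype(float)   # group size
mean_age = rng.uniform(18.0, 60.0, n)        # mean member age
gender_div = rng.uniform(0.0, 1.0, n)        # assumed gender diversity index
age_div = rng.uniform(0.0, 1.0, n)           # assumed age diversity index

# Assumed coefficients, chosen only to reproduce the reported directions.
logit = 0.4 * size + 0.03 * mean_age + 1.0 * gender_div - 1.0 * age_div - 3.0
escaped = (rng.uniform(0.0, 1.0, n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = np.column_stack([size, mean_age, gender_div, age_div])
clf = LogisticRegression(max_iter=1000).fit(X, escaped)
train_acc = clf.score(X, escaped)
```

The fitted coefficients recover the planted directions (positive for size, negative for age diversity), which is the sense in which group characteristics are "encoded by features" a classifier can exploit.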


Nafta-Gaz ◽  
2021 ◽  
Vol 77 (5) ◽  
pp. 283-292
Author(s):  
Tomasz Topór

The application of machine learning algorithms in petroleum geology has opened a new chapter in oil and gas exploration. Machine learning algorithms have been successfully used to predict crucial petrophysical properties when characterizing reservoirs. This study uses machine learning to predict permeability under confining stress conditions for samples from tight sandstone formations. The models were constructed using two machine learning algorithms of varying complexity (multiple linear regression [MLR] and random forests [RF]) and trained on a dataset that combined basic well information, basic petrophysical data, and rock type from a visual inspection of the core material. The RF algorithm underwent feature engineering to increase the number of predictors in the models. To check the robustness of the trained models, 10-fold cross-validation was performed. The MLR and RF applications demonstrated that both algorithms can accurately predict permeability under constant confining pressure (R² = 0.800 vs. 0.834). The RF accuracy was about 3% better than that of the MLR and about 6% better than a linear reference regression (LR) that used only porosity. Porosity was the most influential feature for the models' performance. In the case of RF, depth was also significant in the permeability predictions, which could be evidence of hidden interactions between porosity and depth. The local interpretation revealed common features among the outliers: in both the training and testing sets, they had moderate-to-low porosity (3–10%) and lacked fractures, and in the test set, calcite or quartz cementation also led to poor permeability predictions. The workflow, which uses the tidymodels framework, will be further applied to more complex examples to predict spatial petrophysical features from seismic attributes using various machine learning algorithms.
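
The model comparison described above can be sketched with scikit-learn: MLR and RF for permeability, both scored by 10-fold cross-validation. The synthetic porosity and depth data and the assumed permeability relation are illustrative assumptions, not the tight-sandstone measurements of the study.

```python
# Sketch of the study's comparison: multiple linear regression (MLR) vs.
# random forests (RF) for permeability, scored with 10-fold cross-validation.
# The synthetic porosity/depth data are illustrative assumptions, not the
# tight-sandstone dataset used in the paper.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 300
porosity = rng.uniform(0.03, 0.10, n)   # fractional porosity (3-10%)
depth = rng.uniform(2500.0, 3500.0, n)  # burial depth, metres
# Assumed relation: log-permeability driven mostly by porosity, with a weak
# porosity-depth interaction and measurement noise.
log_perm = 30.0 * porosity + 0.0005 * depth * porosity + rng.normal(0.0, 0.1, n)

X = np.column_stack([porosity, depth])
mlr_r2 = cross_val_score(LinearRegression(), X, log_perm,
                         cv=10, scoring="r2").mean()
rf_r2 = cross_val_score(RandomForestRegressor(n_estimators=200, random_state=0),
                        X, log_perm, cv=10, scoring="r2").mean()
print(f"MLR R2: {mlr_r2:.3f}  RF R2: {rf_r2:.3f}")
```

Both models score well on this nearly linear toy target; in the study, RF's edge comes from capturing interactions (such as porosity-depth) that a plain linear model misses.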


2021 ◽  
Author(s):  
Urvesh Bhowan

<p>In classification, machine learning algorithms can suffer a performance bias when data sets are unbalanced. Binary data sets are unbalanced when one class is represented by only a small number of training examples (the minority class), while the other class makes up the rest (the majority class). In this scenario, the induced classifiers typically have high accuracy on the majority class but poor accuracy on the minority class. As the minority class typically represents the main class of interest in many real-world problems, accurately classifying examples from this class can be at least as important as, and in some cases more important than, accurately classifying examples from the majority class. Genetic Programming (GP) is a promising machine learning technique based on the principles of Darwinian evolution that automatically evolves computer programs to solve problems. While GP has shown much success in evolving reliable and accurate classifiers for typical classification tasks with balanced data, GP, like many other learning algorithms, can evolve biased classifiers when the data is unbalanced. This is because traditional training criteria, such as the overall success rate in the GP fitness function, can be dominated by the larger number of examples from the majority class.  This thesis proposes a GP approach to classification with unbalanced data. The goal is to develop new internal cost-adjustment techniques in GP to improve classification performance on both the minority class and the majority class. By focusing on internal cost adjustment within GP rather than traditional data-balancing techniques, the unbalanced data can be used directly, or "as is", in the learning process. This removes any dependence on a sampling algorithm to artificially re-balance the input data before learning.
This thesis shows that by developing a number of new methods in GP, genetic program classifiers with good classification ability on both the minority and the majority classes can be evolved. These methods are evaluated on a range of binary benchmark classification tasks with unbalanced data. Unlike tasks with multiple balanced classes, where some dynamic (non-static) classification strategies perform significantly better than the simple static classification strategy, on these binary tasks static and dynamic strategies show no significant difference in the performance of the evolved GP classifiers; the rest of the thesis therefore uses the static classification strategy.  The thesis proposes several new fitness functions in GP that perform cost adjustment between the minority and the majority classes, allowing unbalanced data sets to be used directly in the learning process without sampling. Using the area under the receiver operating characteristic (ROC) curve (the AUC) to measure how well a classifier performs on the minority and majority classes, these new fitness functions find genetic program classifiers with high AUC on both classes and with fast GP training times. These GP methods outperform two popular learning algorithms, Naive Bayes and Support Vector Machines, on the tasks, particularly when the level of class imbalance is large and both algorithms show biased classification performance.  The thesis also proposes a multi-objective GP (MOGP) approach that treats the accuracies of the minority and majority classes as separate objectives in the learning process. The MOGP approach evolves a good set of trade-off solutions (a Pareto front) in a single run that perform as well as, and in some cases better than, multiple runs of canonical single-objective GP (SGP).
In SGP, individual genetic program solutions capture the performance trade-off between the two objectives (minority and majority class accuracy) using an ROC curve; in MOGP, this requirement is delegated to multiple genetic program solutions along the Pareto front.  The thesis also shows how multiple Pareto-front classifiers can be combined into an ensemble in which the individual members vote on the class label. Two ensemble diversity measures are developed for the fitness functions; these treat diversity on the minority and the majority classes as equally important, since otherwise the measures risk being biased toward the majority class. The evolved ensembles outperform their individual members on the tasks due to good cooperation between members.  The thesis further improves ensemble performance by developing a GP approach to ensemble selection that quickly finds small groups of individuals that cooperate very well in the ensemble. The pruned ensembles use far fewer individuals to achieve performance as good as that of larger (unpruned) ensembles, particularly on tasks with high levels of class imbalance, thereby reducing the total time needed to evaluate the ensemble.</p>
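
The core cost-adjustment idea, replacing overall accuracy with a fitness that weights the two classes equally, can be sketched outside GP as well. This is a minimal sketch with an illustrative unbalanced toy set and a trivial majority-class predictor; it is not the evolved GP classifiers of the thesis.

```python
# Minimal sketch of the cost-adjustment idea: a fitness that averages the
# per-class accuracies, so the minority class counts as much as the majority
# class. The toy data and the trivial majority-class predictor below are
# illustrative assumptions.
import numpy as np

def balanced_fitness(y_true, y_pred):
    """Mean of the minority- and majority-class accuracies."""
    acc_min = np.mean(y_pred[y_true == 1] == 1)  # accuracy on the minority class (1)
    acc_maj = np.mean(y_pred[y_true == 0] == 0)  # accuracy on the majority class (0)
    return 0.5 * (acc_min + acc_maj)

# Unbalanced toy set: 90 majority (0) examples, 10 minority (1) examples.
y_true = np.array([0] * 90 + [1] * 10)
always_majority = np.zeros(100, dtype=int)       # classifier that ignores the minority class

overall_acc = np.mean(always_majority == y_true)       # looks good: 0.90
balanced = balanced_fitness(y_true, always_majority)   # exposes the bias: 0.50
```

Under overall accuracy the degenerate classifier scores 0.90, while the balanced fitness scores it 0.50, no better than chance, which is exactly the bias the thesis's cost-adjusted fitness functions are designed to remove.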


2021 ◽  
Author(s):  
Ziyang Chen ◽  
Kai-Ming Chen ◽  
Ying Shi ◽  
Zhao-Da Ye ◽  
Sheng Chen ◽  
...  

Abstract. Aim: To investigate the effect of orthokeratology (OK) lenses on axial length (AL) elongation in myopic children with anisometropia. Methods: Thirty-seven children with unilateral myopia (group 1) and fifty-nine children with bilateral myopia and anisometropia were involved in this 1-year retrospective study. The children with bilateral myopia and anisometropia were divided into group 2A (spherical equivalent refraction [SER] of the lower-SER eye under −2.00 D) and group 2B (SER of the lower-SER eye equal to or greater than −2.00 D). The change in AL was observed, and the data were analysed using SPSS 21.0. Results: (1) In group 1, the mean baseline AL of the H eyes and L eyes was 24.70 ± 0.89 mm and 23.55 ± 0.69 mm, respectively. In group 2A, the mean baseline AL of the H eyes and L eyes was 24.61 ± 0.84 mm and 24.00 ± 0.70 mm, respectively. In group 2B, the mean baseline AL of the H eyes and L eyes was 25.28 ± 0.72 mm and 24.70 ± 0.74 mm. After 1 year, the AL of the L eyes had increased faster than that of the H eyes in groups 1 and 2A (all P < 0.001), while the AL of the H eyes and L eyes increased at the same rate in group 2B. (2) The effect of controlling AL elongation of the H eyes was consistent across the three groups (P = 0.559). The effect of controlling AL elongation of the L eyes in group 2B was better than that in groups 1 and 2A (P < 0.001), and the difference between group 1 and group 2A was not statistically significant. (3) In group 1, the AL difference between the H eyes and L eyes decreased from 1.16 ± 0.55 mm at baseline to 0.88 ± 0.68 mm after 1 year, and in group 2A it decreased from 0.61 ± 0.34 mm to 0.48 ± 0.28 mm; both changes were statistically significant (all P < 0.001). In group 2B, the baseline AL difference between the H eyes and L eyes did not differ significantly from that after 1 year (P = 0.069). Conclusions: Monocular OK lenses are effective in suppressing AL growth of the myopic eye and reducing anisometropia in children with unilateral myopia. Binocular OK lenses reduce anisometropia only when the SER of the lower-SER eye is under −2.00 D; they do not reduce anisometropia when it is equal to or greater than −2.00 D. After 1 year of follow-up, whether OK lenses can reduce refractive anisometropia in children with bilateral myopia and anisometropia is related to the spherical equivalent refraction of the lower-refraction eye.


2021 ◽  
Vol 7 (6) ◽  
pp. 6445-6452
Author(s):  
Haijuan Hu ◽  
Yishu Zhao ◽  
Jianhua Ma

To analyze the clinical effect of nursing cooperation in transsphenoidal microscopic hypophysectomy, 80 patients who underwent transsphenoidal microscopic hypophysectomy in our hospital from January 2017 to January 2020 were selected for this study. They were randomly allocated into two groups, an observation group and a control group, with 40 patients each. The patients in the control group received routine nursing care, while those in the observation group received a comprehensive nursing intervention, and the overall nursing effect in the two groups was compared. After the different nursing methods were applied, the condition of the patients in both groups was effectively controlled. The effective rate in the observation group, which received the comprehensive nursing intervention, was significantly better than that in the control group, which received conventional nursing (P < 0.05). Patient satisfaction in the observation group was significantly better than in the control group (P < 0.05). The degree of negative emotion in the observation group was significantly lower than in the control group after the comprehensive nursing intervention (P < 0.05), while the difference in the incidence of adverse events between the two groups was not statistically significant (P > 0.05). The scores on each index of the SF-36 questionnaire in both groups were higher than before nursing, and the scores in the observation group were higher than those in the control group (P < 0.05).
With adequate preoperative preparation and mastery of the use of mechanical equipment, comprehensive nursing intervention can effectively improve the treatment effect, increase patients' satisfaction with the nursing work, soothe patients' negative psychological mood, eliminate panic, improve patients' confidence in life, enhance intraoperative cooperation, and ensure that the operation is completed smoothly.


2009 ◽  
Vol 33 (8) ◽  
pp. 293-295
Author(s):  
Alan Smith ◽  
James Warner

Aims and Method: Pharmaceutical advertising material can confuse clinical and statistical significance. We used a brief questionnaire (five questions) to evaluate psychiatrists' appreciation of this difference. This approximated to the level of critical appraisal competence of the MRCPsych part 3 examination. Results: Of the 113 questionnaires distributed, 93 were returned complete (response rate 82%). Senior trainees were significantly better than junior trainees at correctly interpreting data (mean score (maximum 5) 2.61 v. 2.08; P = 0.04). Consultants did less well than senior trainees, although our sample of consultant respondents was too small for significance testing. Clinical Implications: Learning critical appraisal for the MRCPsych examination may provide psychiatrists with valuable transferable skills and prevent gaps in our knowledge being exploited by misleading study data. Psychiatrists of all grades need to maintain their research appraisal skills and should not regard the MRCPsych examination as the end of their learning.

