adaptive methods
Recently Published Documents

TOTAL DOCUMENTS: 548 (five years: 125)
H-INDEX: 32 (five years: 3)

2021 ◽  
pp. 875529302110520
Author(s):  
Mark D Petersen ◽  
Allison M Shumway ◽  
Peter M Powers ◽  
Morgan P Moschetti ◽  
Andrea L Llenos ◽  
...  

The 2021 US National Seismic Hazard Model (NSHM) for the State of Hawaii updates the previous two-decade-old assessment by incorporating new data and modeling techniques to improve the underlying ground shaking forecasts of tectonic-fault, tectonic-flexure, volcanic, and caldera collapse earthquakes. Two earthquake ground shaking hazard forecasts (public policy and research) are produced that differ in how they account for declustered catalogs. The earthquake source model is based on (1) declustered earthquake catalogs smoothed with adaptive methods, (2) earthquake rate forecasts based on three temporally varying 60-year time periods, (3) maximum magnitude criteria that extend to larger earthquakes than previously considered, (4) a separate Kīlauea-specific seismogenic caldera collapse model that accounts for clustered event behavior observed during the 2018 eruption, and (5) fault ruptures that consider historical seismicity, GPS-based strain rates, and a new Quaternary fault database. Two new Hawaii-specific ground motion models (GMMs) and five additional global models consistent with Hawaii shaking data are used to forecast ground shaking at 23 spectral periods and peak parameters. Site effects are calculated using western US- and Hawaii-specific empirical equations, providing shaking forecasts for eight site classes. For most sites the new analysis results in spectral accelerations similar to those in the 2001 NSHM, with a few exceptions caused mostly by GMM changes. Ground motions are highest in the southern portion of the Island of Hawai’i due to high rates of forecasted earthquakes on décollement faults. Shaking decays to the northwest, where lower earthquake rates result from flexure of the tectonic plate. Large epistemic uncertainties in source characterizations and GMMs lead to an overall high uncertainty (more than a factor of 3) in ground shaking at Honolulu and Hilo. The new shaking model indicates significant chances of slight or greater damaging ground motions across most of the island chain.
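
The adaptive smoothing applied to the declustered catalogs is typically a kernel method whose bandwidth varies with local event density. The sketch below is a generic illustration of that idea, with each event's Gaussian bandwidth set to the distance to its k-th nearest neighbouring event; it is not the NSHM implementation, and the function name, parameters, and defaults are our own.

```python
import numpy as np

def adaptive_smoothed_rate(event_xy, grid_xy, k=2, min_bandwidth=1.0):
    """Adaptive Gaussian kernel smoothing of a (declustered) catalog.

    Each event gets its own bandwidth equal to the distance to its k-th
    nearest neighbouring event, so dense clusters are smoothed tightly
    and sparse regions broadly.
    """
    event_xy = np.asarray(event_xy, dtype=float)
    grid_xy = np.asarray(grid_xy, dtype=float)
    # per-event bandwidth = distance to the k-th nearest other event
    d = np.linalg.norm(event_xy[:, None, :] - event_xy[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    h = np.maximum(np.sort(d, axis=1)[:, k - 1], min_bandwidth)
    rates = np.zeros(len(grid_xy))
    for (ex, ey), hi in zip(event_xy, h):
        r2 = (grid_xy[:, 0] - ex) ** 2 + (grid_xy[:, 1] - ey) ** 2
        rates += np.exp(-r2 / (2 * hi**2)) / (2 * np.pi * hi**2)
    return rates  # event density per unit area (not yet scaled to annual rates)
```

In a real hazard model each event would also carry a completeness or rate weight, and the smoothed density would be scaled to annual rates per grid cell.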


2021 ◽  
Vol 66 (2) ◽  
pp. 51
Author(s):  
T.-V. Pricope

Imperfect information games describe many practical applications found in the real world, as the information space is rarely fully available. This particular set of problems is challenging due to the random factor that makes even adaptive methods fail to correctly model the problem and find the best solution. Neural Fictitious Self Play (NFSP) is a powerful algorithm for learning an approximate Nash equilibrium of imperfect information games from self-play. However, it uses only crude data as input, and its most successful experiment was on the limit version of Texas Hold’em Poker. In this paper, we develop a new variant of NFSP that combines the established fictitious self-play with neural gradient play in an attempt to improve the performance on large-scale zero-sum imperfect information games and to solve the more complex no-limit version of Texas Hold’em Poker, using powerful handcrafted metrics and heuristics alongside crude, raw data. When applied to no-limit Hold’em Poker, the agents trained through self-play outperformed the ones that used fictitious play with a normal-form single-step approach to the game. Moreover, we showed that our algorithm converges close to a Nash equilibrium within the limited training process of our agents on very limited hardware. Finally, our best self-play-based agent learnt a strategy that rivals expert human-level play.
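
For context, the sketch below shows the skeleton of a vanilla NFSP agent as usually described: an approximate best-response network trained by reinforcement learning plus an average-policy network trained by supervised learning, mixed by an anticipatory parameter. It is a minimal illustration of the baseline algorithm, not the variant proposed in this paper; the class, buffer sizes, and parameter names are our own, and the network training updates are omitted.

```python
import random
from collections import deque

import torch
import torch.nn as nn
import torch.nn.functional as F

class MLP(nn.Module):
    def __init__(self, obs_dim, n_actions, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_actions))

    def forward(self, x):
        return self.net(x)

class NFSPAgent:
    """Vanilla NFSP skeleton: a best-response network trained by Q-learning
    and an average-policy network trained by supervised learning on the
    agent's own best-response actions (training updates omitted here)."""

    def __init__(self, obs_dim, n_actions, eta=0.1, eps=0.05):
        self.q_net = MLP(obs_dim, n_actions)      # approximate best response
        self.avg_net = MLP(obs_dim, n_actions)    # average policy (logits)
        self.rl_buffer = deque(maxlen=200_000)    # transitions for Q-learning
        self.sl_buffer = deque(maxlen=2_000_000)  # (state, best-response action)
        self.eta, self.eps, self.n_actions = eta, eps, n_actions

    def act(self, obs):
        obs_t = torch.as_tensor(obs, dtype=torch.float32).unsqueeze(0)
        if random.random() < self.eta:
            # play the (epsilon-greedy) best response and record it for SL
            if random.random() < self.eps:
                action = random.randrange(self.n_actions)
            else:
                with torch.no_grad():
                    action = int(self.q_net(obs_t).argmax(dim=1))
            self.sl_buffer.append((obs, action))
        else:
            # play the average policy
            with torch.no_grad():
                probs = F.softmax(self.avg_net(obs_t), dim=1).squeeze(0)
            action = int(torch.multinomial(probs, 1))
        return action
```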


2021 ◽  
pp. 15-25
Author(s):  
В.К. Качанов ◽  
И.В. Соколов ◽  
Р.В. Концов ◽  
М.Б. Федоров ◽  
В.В. Первушин

It is shown that, for ultrasonic tomography of concrete building structures with a non-standard surface configuration, adaptive antenna arrays should be used whose shape can conform to the non-planar surface of the tested product. Adaptive methods of ultrasound tomography should also be used; these make it possible both to determine the coordinates of defects and the velocity of ultrasound in concrete, and to adjust the parameters of the probing signals to the characteristics of the concrete product.
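
As a rough illustration of the imaging step in such a system, the sketch below shows a generic delay-and-sum (SAFT-style) reconstruction over a pixel grid, using a wave velocity estimated from a direct-path first arrival. It is not the adaptive algorithm described by the authors; the function names and the assumption of a known transducer geometry are ours.

```python
import numpy as np

def estimate_velocity(first_arrival_s, tx, rx):
    """Estimate ultrasound velocity from a direct-path time of flight."""
    return np.linalg.norm(np.asarray(rx, float) - np.asarray(tx, float)) / first_arrival_s

def delay_and_sum(signals, tx_pos, rx_pos, grid, c, fs):
    """Generic SAFT / total-focusing reconstruction.

    signals[i, j, :] is the waveform recorded for transmitter i, receiver j;
    c is the wave velocity and fs the sampling rate.
    """
    image = np.zeros(len(grid))
    n_samples = signals.shape[-1]
    for k, p in enumerate(grid):
        for i, t in enumerate(tx_pos):
            for j, r in enumerate(rx_pos):
                # two-way travel time: transmitter -> pixel -> receiver
                tof = (np.linalg.norm(p - t) + np.linalg.norm(p - r)) / c
                s = int(round(tof * fs))
                if s < n_samples:
                    image[k] += signals[i, j, s]
    return image  # large values indicate likely reflectors (defects)
```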


2021 ◽  
Vol 13 (1) ◽  
Author(s):  
Lon S. Schneider ◽  
Yuqi Qiu ◽  
Ronald G. Thomas ◽  
Carol Evans ◽  
Diane M. Jacobs ◽  
...  

Abstract Background The COVID-19 pandemic disrupted Alzheimer disease randomized clinical trials (RCTs), forcing investigators to make changes in the conduct of such trials while endeavoring to maintain their validity. Changing ongoing RCTs carries risks for biases and threats to validity. To understand the impact of exigent modifications due to COVID-19, we examined several scenarios of changes that could be made to symptomatic and disease modification trials. Methods We identified both symptomatic and disease modification Alzheimer disease RCTs as exemplars of those that would be affected by the pandemic and considered the types of changes that sponsors could make to each. We modeled three scenarios for each type of trial using existing datasets, adjusting enrollment, follow-ups, and dropouts to examine the potential effects of COVID-19-related changes. Simulations that accounted for completion and dropout patterns were performed using linear mixed effects models, modeling time as continuous and categorical. The statistical power of the scenarios was determined. Results Truncating both symptomatic and disease modification trials led to underpowered trials. By contrast, adapting the trials by extending the treatment period, temporarily stopping treatment, delaying outcomes assessments, and performing remote assessments allowed statistical power to be increased nearly to the level originally planned. Discussion These analyses support the idea that disrupted trials under common scenarios are better continued and extended, even in the face of dropouts, treatment disruptions, missing outcomes, and other exigencies, and that adaptations can be made that maintain the trials’ validity. We suggest some adaptive methods to do this, noting that under some changes trials become under-powered to detect the original effect sizes and expected outcomes. These analyses provide insight for better planning trials that are resilient to unexpected changes to the medical, social, and political milieu.
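
The power comparisons rest on simulation with linear mixed effects models. The sketch below shows the general shape of such a simulation with statsmodels: longitudinal data with random intercepts and slopes, dropout, and a treatment-by-time interaction whose p-value is tallied across simulated trials. The visit schedule, effect sizes, and dropout rate are placeholders, not the values used in the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

def simulate_trial(n_per_arm=100, visits=(0, 6, 12, 18), drop_rate=0.15,
                   slope_effect=-0.5):
    """Simulate one two-arm longitudinal trial (placeholder parameters)."""
    rows = []
    for arm in (0, 1):
        for i in range(n_per_arm):
            b0 = rng.normal(0, 2.0)   # random intercept
            b1 = rng.normal(0, 0.3)   # random slope
            # dropouts keep only their first 1-3 visits
            last = len(visits) if rng.random() > drop_rate else rng.integers(1, len(visits))
            for t in visits[:last]:
                decline = 0.8 + arm * slope_effect  # per-6-month decline
                y = 20 + b0 + (decline + b1) * (t / 6) + rng.normal(0, 1.5)
                rows.append({"id": f"{arm}-{i}", "arm": arm, "time": t, "y": y})
    return pd.DataFrame(rows)

def power(n_sims=200, **kwargs):
    """Fraction of simulated trials with a significant treatment-by-time effect."""
    hits = 0
    for _ in range(n_sims):
        df = simulate_trial(**kwargs)
        m = smf.mixedlm("y ~ time * arm", df, groups=df["id"],
                        re_formula="~time").fit()
        hits += m.pvalues["time:arm"] < 0.05
    return hits / n_sims
```

Truncated versus extended designs can then be compared by rerunning `power()` with different visit schedules and dropout assumptions.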


2021 ◽  
pp. 1-22
Author(s):  
Julien Audiffren ◽  
Jean-Pierre Bresciani

The quantification of human perception through the study of psychometric functions Ψ is one of the pillars of experimental psychophysics. In particular, the evaluation of the threshold is at the heart of many neuroscience and cognitive psychology studies, and a wide range of adaptive procedures has been developed to improve its estimation. However, these procedures are often implicitly based on different mathematical assumptions on the psychometric function, and unfortunately, these assumptions cannot always be validated prior to data collection. This raises questions about the accuracy of the estimator produced using the different procedures. In the study we examine in this letter, we compare five adaptive procedures commonly used in psychophysics to estimate the threshold: Dichotomous Optimistic Search (DOS), Staircase, PsiMethod, Gaussian Processes, and QuestPlus. These procedures range from model-based methods, such as the PsiMethod, which relies on strong assumptions regarding the shape of Ψ, to model-free methods, such as DOS, for which assumptions are minimal. The comparisons are performed using simulations of multiple experiments, with psychometric functions of various complexity. The results show that while model-based methods perform well when Ψ is an ideal psychometric function, model-free methods rapidly outshine them when Ψ deviates from this model, as, for instance, when Ψ is a beta cumulative distribution function. Our results highlight the importance of carefully choosing the most appropriate method depending on the context.
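
As a concrete example of the simplest procedure in this comparison, the sketch below simulates a transformed 1-up/2-down staircase against a logistic psychometric function and estimates the threshold from the last few reversals. It is an illustrative simulation, not the authors' code, and all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def psychometric(x, threshold=0.5, slope=10.0, guess=0.0, lapse=0.02):
    """Logistic psychometric function: probability of a correct response at intensity x."""
    p = 1.0 / (1.0 + np.exp(-slope * (x - threshold)))
    return guess + (1 - guess - lapse) * p

def staircase(n_trials=80, start=1.0, step=0.05):
    """Transformed 1-up/2-down staircase, converging near the 70.7% point."""
    x, streak, reversals, last_dir = start, 0, [], 0
    for _ in range(n_trials):
        correct = rng.random() < psychometric(x)
        if correct:
            streak += 1
            if streak == 2:                 # two correct in a row -> make it harder
                direction, streak = -1, 0
            else:
                direction = 0
        else:                               # one error -> make it easier
            direction, streak = +1, 0
        if direction and last_dir and direction != last_dir:
            reversals.append(x)             # record reversal points
        if direction:
            last_dir = direction
        x = max(0.0, x + direction * step)
    return np.mean(reversals[-6:]) if len(reversals) >= 6 else x

print("estimated threshold:", staircase())
```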


Author(s):  
R R Tribhuvan ◽  
T. Bhaskar

Outcome-based learning (OBL) is a tried-and-true learning technique based on a set of predetermined objectives. Program Educational Objectives (PEOs), Program Outcomes (POs), and Course Outcomes (COs) are the three components of OBL. At the conclusion of each course, faculty members may adopt a number of ML-recommended actions to improve the quality of learning and, as a result, of the overall education. Due to the huge number of courses and faculty members involved, however, harmful actions may be recommended, resulting in unwanted and incorrect choices. This study describes an education system that uses college course requirements, academic records, and course learning outcome evaluations to predict appropriate actions with various machine learning algorithms. The dataset is handled with different problem transformation and algorithm adaptation methods, such as one-versus-all, binary relevance, label powerset, classifier chains, and the adapted classifier ML-KNN. The suggested ML-based recommender system is used as a case study at the Institute of Computer and Information Sciences to assist academic staff in boosting learning quality and instructional methodologies. The results suggest that the proposed recommendation system offers further measures to improve students' learning experiences.
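
The problem transformation methods referred to (binary relevance / one-versus-rest and classifier chains) are available directly in scikit-learn; the sketch below runs them on synthetic multi-label data standing in for the course/action dataset, which is not available to us. Label powerset and ML-KNN would typically come from the scikit-multilearn package instead.

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.multioutput import ClassifierChain

# Stand-in data: each label column is one recommended action for a course.
X, Y = make_multilabel_classification(n_samples=500, n_features=20,
                                      n_classes=5, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

models = {
    # binary relevance / one-vs-rest: one independent classifier per action
    "binary relevance": OneVsRestClassifier(LogisticRegression(max_iter=1000)),
    # classifier chain: each classifier also sees the previously predicted labels
    "classifier chain": ClassifierChain(LogisticRegression(max_iter=1000),
                                        random_state=0),
}

for name, model in models.items():
    model.fit(X_tr, Y_tr)
    pred = model.predict(X_te)
    print(name, "micro-F1:", round(f1_score(Y_te, pred, average="micro"), 3))
```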


2021 ◽  
Author(s):  
Nathaniel Ridley

Despite rapid growth of adaptation theory in the last two decades, there is a gap in the field. Books like Linda Hutcheon’s A Theory of Adaptation (2006) and Julie Sanders’ Adaptation and Appropriation (2006) approach adaptations from an audience’s perspective, describing the effects of the adaptation process and providing a robust taxonomy that identifies all of the different forms adaptation might take. They do not, however, describe the details of the process of adaptation itself, even though they often refer to the need for a process-oriented account of adaptation. Existing adaptation manuals focus on screenwriting, leaving someone with an interest in the specifics of adapting a play nowhere to turn. This paper begins to address this gap in the available knowledge by documenting the adaptation process involved in the creation of four new adaptations of Anton Chekhov's Uncle Vanya, targeted at a New Zealand audience. The experiments presented here confirm what is suggested by a survey of the reception of English-language adaptations of Chekhov: there is no single correct method for adapting a play. An adapter's greatest challenge can be identifying which strategy is appropriate for the conditions they face. This project experiments with different adaptive methods and strategies, developed by looking at other English-language Chekhov adaptations, including techniques for approximating the setting, language, and themes to a target audience. I attempt to identify which methodologies will achieve the desired results, revealing a variety of challenges, advantages, and weaknesses inherent to each approach. Moreover, both the research and the experiments suggest how the success or failure of an adaptation depends on a variety of contextual factors, including the target audience's relationship with the adapted work, the dramaturgical characteristics of that work, and the abilities of the adapter.


2021 ◽  
Vol 11 (21) ◽  
pp. 10184
Author(s):  
Yanan Li ◽  
Xuebin Ren ◽  
Fangyuan Zhao ◽  
Shusen Yang

Due to its powerful data representation ability, deep learning has dramatically improved the state of the art in many practical applications. However, its utility depends heavily on the fine-tuning of hyper-parameters, including the learning rate, batch size, and network initialization. Although many first-order adaptive methods (e.g., Adam, Adagrad) have been proposed to adjust the learning rate based on gradients, they are susceptible to the initial learning rate and network architecture. Therefore, the main challenge of using deep learning in practice is how to reduce the cost of tuning hyper-parameters. To address this, we propose a heuristic zeroth-order learning rate method, Adacomp, which adaptively adjusts the learning rate based only on values of the loss function. The main idea is that Adacomp penalizes large learning rates to ensure convergence and compensates for small learning rates to accelerate the training process. Therefore, Adacomp is robust to the initial learning rate. Extensive experiments were conducted, including comparison to six typical adaptive methods (Momentum, Adagrad, RMSprop, Adadelta, Adam, and Adamax) on several benchmark datasets for image classification tasks (MNIST, KMNIST, Fashion-MNIST, CIFAR-10, and CIFAR-100). Experimental results show that Adacomp is robust not only to the initial learning rate but also to the network architecture, network initialization, and batch size.
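
Since Adacomp is only characterised here at a high level, the sketch below shows a generic zeroth-order learning-rate controller in the same spirit: it looks only at successive loss values, shrinking the learning rate after an increase and growing it cautiously after a decrease. It illustrates the idea, not the published Adacomp update rule; the class name and multipliers are arbitrary.

```python
import torch

class LossBasedLR:
    """Illustrative zeroth-order LR controller (NOT the published Adacomp rule):
    shrink the learning rate when the loss goes up, grow it slowly when the
    loss keeps going down, using only scalar loss values."""

    def __init__(self, optimizer, up=1.05, down=0.5, min_lr=1e-6, max_lr=1.0):
        self.opt, self.up, self.down = optimizer, up, down
        self.min_lr, self.max_lr = min_lr, max_lr
        self.prev_loss = None

    def step(self, loss_value):
        if self.prev_loss is not None:
            # penalize an increase in the loss, reward a decrease
            factor = self.down if loss_value > self.prev_loss else self.up
            for group in self.opt.param_groups:
                group["lr"] = float(min(self.max_lr,
                                        max(self.min_lr, group["lr"] * factor)))
        self.prev_loss = loss_value
```

In a training loop the controller would be called once per step, e.g. `scheduler.step(loss.item())` after `optimizer.step()`.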

