Early stopping
Recently Published Documents

TOTAL DOCUMENTS: 301 (five years: 74)
H-INDEX: 25 (five years: 4)

2022
Author(s): Daniel Freilich, Jennifer Victory, Paul Jenkins, James Wheeler, G Matthew Vail, ...

Background: ACEi/ARB medications have been hypothesized to have potential benefit in COVID-19. Despite concern about increased ACE-2 expression in some animal models, preclinical studies and retrospective observational and uncontrolled trials suggested possible benefit. Two RCTs of the ARB losartan from the University of Minnesota showed no benefit, and raised safety signals, for losartan in outpatient and hospitalized COVID-19 patients. COVID MED, started early in the pandemic, also assessed losartan in an RCT in hospitalized patients with COVID-19.
Methods: COVID MED was a quadruple-blinded, placebo-controlled, multicenter randomized clinical trial (RCT). Hospitalized COVID-19 patients were randomized to receive standard care plus hydroxychloroquine, lopinavir/ritonavir, losartan, or placebo. The hydroxychloroquine and lopinavir/ritonavir arms were discontinued after RCTs showed no benefit. We report data from the losartan arm compared with combined (lopinavir/ritonavir and placebo) and prespecified placebo-only controls. The primary endpoint was the slope of change in NCOSS. Slow enrollment prompted early stopping.
Results: Of 432 screened patients, 14 were enrolled (3.5%); 9 received losartan and 5 combined control (lopinavir/ritonavir [N=2], placebo [N=3]); 1 patient in the hydroxychloroquine arm was excluded. Most baseline parameters were balanced. Treatment with losartan was not associated with a difference in NCOSS slope of change compared with the combined control (p=0.4) or the placebo-only control (p=0.05; trend favoring placebo). Sixty-day mortality and overall AE and SAE rates were numerically, but not significantly, higher with losartan.
Conclusions: In this small blinded RCT in hospitalized COVID-19 patients, losartan did not improve outcomes versus either control comparison and was associated with adverse safety signals.


Author(s): Hamdi Bilel, Aguili Taoufik

This paper proposes a radiation pattern synthesis method for almost periodic antenna arrays that includes mutual coupling effects (extracted by Floquet analysis, following our previous work) and principally targets high directivity and large bandwidth. To model the given structures, the method of moments combined with the Generalized Equivalent Circuit (MoM-GEC) is proposed. The artificial neural network (ANN), as a powerful computational model, has been successfully applied to antenna array pattern synthesis. The results show that multilayer feedforward neural networks are robust and can successfully and efficiently resolve various complex almost periodic antenna patterns with different source amplitudes; in particular, both periodic and randomly aperiodic structures are taken into account. Moreover, the ANN can quickly produce synthesis results by generalizing with the early stopping (ES) method. Significant gains in time and memory consumption are achieved by using early stopping to improve generalization. To validate this work, several examples are developed and discussed.
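As a generic illustration of the validation-based early stopping step mentioned in this abstract, the following minimal Python sketch trains a small feedforward regressor with scikit-learn and halts when the held-out validation score stops improving; the network size, synthetic data, and patience value are illustrative assumptions, not the configuration used in the paper.

    # Minimal sketch of validation-based early stopping for a feedforward
    # synthesis network (illustrative sizes and synthetic data, not the paper's setup).
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=(2000, 16))   # stand-in for desired-pattern features
    y = rng.uniform(0.0, 1.0, size=(2000, 8))     # stand-in for element excitations

    model = MLPRegressor(
        hidden_layer_sizes=(64, 64),
        early_stopping=True,        # hold out part of the training data as a validation set
        validation_fraction=0.1,
        n_iter_no_change=10,        # patience: stop after 10 epochs without improvement
        max_iter=500,
    )
    model.fit(X, y)
    print("stopped after", model.n_iter_, "iterations")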


Entropy, 2021, Vol 23 (12), pp. 1629
Author(s): Ali Unlu, Laurence Aitchison

We developed Variational Laplace for Bayesian neural networks (BNNs), which exploits a local approximation of the curvature of the likelihood to estimate the ELBO without stochastic sampling of the neural-network weights. The Variational Laplace objective is simple to evaluate: it is the log-likelihood plus a weight-decay term plus a squared-gradient regularizer. Variational Laplace gave better test performance and expected calibration error than maximum a posteriori inference and standard sampling-based variational inference, despite using the same variational approximate posterior. Finally, we emphasize the care needed in benchmarking standard VI, as there is a risk of stopping before the variance parameters have converged. We show that such early stopping can be avoided by increasing the learning rate for the variance parameters.
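To make the structure of that objective concrete, here is a minimal PyTorch-style sketch of a loss of the form negative log-likelihood plus weight decay plus a squared-gradient regularizer; the coefficients and the single shared log-variance parameter are simplifications for illustration, not the paper's exact derivation.

    # Sketch of a "log-likelihood + weight decay + squared-gradient" style objective.
    # The scaling of each term and the shared log-variance are illustrative
    # simplifications, not a faithful reimplementation of Variational Laplace.
    import torch
    import torch.nn.functional as F

    def variational_laplace_style_loss(model, x, y, log_var, weight_decay=1e-4):
        logits = model(x)
        nll = F.cross_entropy(logits, y)                      # negative log-likelihood
        params = [p for p in model.parameters() if p.requires_grad]
        wd = sum((p ** 2).sum() for p in params)              # weight-decay term
        grads = torch.autograd.grad(nll, params, create_graph=True)
        sq_grad = sum((g ** 2).sum() for g in grads)          # squared-gradient regularizer
        # A (hypothetical) shared log-variance scales the curvature penalty here.
        return nll + weight_decay * wd + 0.5 * torch.exp(log_var) * sq_grad

    # usage (hypothetical): loss = variational_laplace_style_loss(net, xb, yb, log_var); loss.backward()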


2021, Vol 2021, pp. 1-10
Author(s): Juan Wang, Liangzhu Ge, Guorui Liu, Guoyan Li

During the development of deep neural networks (DNNs), it is difficult to trade off fitting ability on the training set against generalization ability on unknown data (such as a test set). The usual solution is to reduce the complexity of the objective function using regularization methods. In this paper, we propose a method called VOVU (Variance Of Variance of Units in the last hidden layer) to balance fitting power and generalization while monitoring the training process. The main idea is to exploit the predictive relationship between the variance of hidden-layer unit activations and the complexity of the neural network model, and to use it as an index of generalization. In particular, we use the last hidden layer, since it has the greatest impact. The algorithm was tested on Fashion-MNIST and CIFAR-10. The experimental results demonstrate that VOVU and test loss are highly positively correlated, implying that a smaller VOVU indicates better generalization. VOVU can therefore serve as an alternative to early stopping and as a good predictor of generalization performance in DNNs. In particular, when the sample size is limited, VOVU is a better choice because it does not require setting aside part of the training data as a validation set.
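A minimal sketch of the VOVU statistic as described: take the per-unit variance of the last-hidden-layer activations across a batch, then the variance of those per-unit variances; the exact normalization and monitoring schedule used by the authors may differ.

    # Sketch of the VOVU statistic: variance (across units) of the per-unit
    # variance (across samples) of last-hidden-layer activations.
    # The paper's exact normalization and monitoring schedule may differ.
    import numpy as np

    def vovu(last_hidden_activations):
        # last_hidden_activations: array of shape (n_samples, n_units)
        per_unit_var = last_hidden_activations.var(axis=0)   # variance of each unit over the batch
        return per_unit_var.var()                            # variance of those variances

    # Example: monitor VOVU each epoch and keep the checkpoint with the smallest value.
    acts = np.random.default_rng(0).normal(size=(512, 128))
    print(vovu(acts))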


2021
Author(s): Vanderson M. do Rosario, Thais A. Silva Camacho, Otávio O. Napoli, Edson Borin

The wide variety of virtual machine types, network configurations, numbers of instances, and other configuration options in cloud computing makes finding the best configuration a hard problem. Reducing cost and resource underutilization while achieving acceptable performance can be a hard task even for specialists. Thus, many approaches for finding optimal or near-optimal configurations for a given program have been proposed in the literature. Observing the performance of an application in the cloud takes time and money, so most of these approaches aim not only to find good solutions but also to reduce the search cost. One such approach relies on Bayesian optimization, which analyzes fewer configurations, reducing the search cost while still finding good solutions. Another approach found in the literature is a technique named Paramount Iteration, which enables users to reason about the cost and performance of cloud configurations without executing the application to completion (early stopping); this reduces the cost of each observation. In this work, we show that both techniques can be used together to make fewer and cheaper observations. We demonstrate that such an approach can recommend solutions that are, on average, 1.68x better than random search, with a 6x cheaper search.
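The combination described above can be sketched as a Bayesian optimization loop whose objective performs only a cheap partial run of the application, in the spirit of Paramount Iteration; the configuration space, prices, and cost model below are hypothetical stand-ins (using scikit-optimize's gp_minimize), not the paper's benchmark.

    # Sketch: Bayesian optimization over cloud configurations where each observation
    # is a cheap partial run (Paramount-Iteration style) instead of a full execution.
    # The configuration space and the synthetic cost model are made-up stand-ins.
    from skopt import gp_minimize
    from skopt.space import Categorical, Integer

    VM_PRICE = {"small": 0.05, "medium": 0.10, "large": 0.20}   # $/hour, illustrative

    def partial_run_cost(config):
        vm_type, n_instances = config
        # Stand-in for timing only the first few iterations of the application
        # and extrapolating: faster with more/larger instances, but pricier.
        speed = n_instances * {"small": 1, "medium": 2, "large": 4}[vm_type]
        est_runtime_hours = 100.0 / speed
        return est_runtime_hours * VM_PRICE[vm_type] * n_instances   # estimated cost to completion

    space = [Categorical(["small", "medium", "large"]), Integer(1, 16)]
    result = gp_minimize(partial_run_cost, space, n_calls=15, random_state=0)
    print("best config:", result.x, "estimated cost:", result.fun)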


Entropy, 2021, Vol 23 (11), pp. 1392
Author(s): Zhiping Xu, Lin Wang, Shaohua Hong

In this paper, a joint early stopping criterion based on cross entropy (CE), named the joint CE criterion, is presented for double-protograph low-density parity-check (DP-LDPC) code-based joint source-channel coding (JSCC) systems for image transmission, in order to reduce decoding complexity and decoding delay. The proposed early stopping criterion computes the CE from the output log-likelihood ratios (LLRs) of the joint decoder. Moreover, a phenomenon named asymmetry oscillation-like convergence (AOLC) is uncovered in the evolution of the CE in both the source decoder and the channel decoder of this system, and the proposed joint CE criterion reduces the impact of the AOLC phenomenon. Compared with its counterparts, the joint CE criterion performs well in terms of decoding complexity and decoding latency in the low-to-moderate signal-to-noise ratio (SNR) region and achieves a performance improvement in the high-SNR region with appropriate parameters, demonstrating that the system with joint CE is a low-latency, low-power system.
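The general idea of a CE-based stopping rule for an iterative decoder can be sketched as follows: track a cross-entropy-like change between the LLR vectors of consecutive iterations and stop once it collapses relative to the first iteration. The measure below follows a Hagenauer-style approximation with a generic decoder callback; it is not the paper's joint source/channel formulation.

    # Sketch of a CE-style early stopping rule for an iterative decoder.
    # The threshold and the joint source/channel structure from the paper
    # are not reproduced; only the generic stopping rule is shown.
    import numpy as np

    def ce_change(llr_prev, llr_curr):
        # Hagenauer-style approximation of the cross entropy between iterations.
        return np.sum((llr_curr - llr_prev) ** 2 / np.exp(np.abs(llr_curr)))

    def decode_with_ce_stopping(run_iteration, llr0, max_iter=50, eps=1e-3):
        llr_prev = llr0
        t1 = None
        for it in range(1, max_iter + 1):
            llr_curr = run_iteration(llr_prev)       # one joint decoding iteration (callback)
            t = ce_change(llr_prev, llr_curr)
            if t1 is None:
                t1 = t                               # reference value from the first iteration
            elif t < eps * t1:                       # CE has collapsed: stop early
                return llr_curr, it
            llr_prev = llr_curr
        return llr_prev, max_iter

    # usage (hypothetical): final_llr, iters = decode_with_ce_stopping(my_decoder_step, initial_llrs)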


Electronics, 2021, Vol 10 (16), pp. 1973
Author(s): Daniel S. Soper

Selecting a final machine learning (ML) model typically occurs after a process of hyperparameter optimization in which many candidate models with varying structural properties and algorithmic settings are evaluated and compared. Evaluating each candidate model commonly relies on k-fold cross validation, wherein the data are randomly subdivided into k folds, with each fold being iteratively used as a validation set for a model that has been trained using the remaining folds. While many research studies have sought to accelerate ML model selection by applying metaheuristic and other search methods to the hyperparameter space, no consideration has been given to the k-fold cross validation process itself as a means of rapidly identifying the best-performing model. The current study rectifies this oversight by introducing a greedy k-fold cross validation method and demonstrating that greedy k-fold cross validation can vastly reduce the average time required to identify the best-performing model when given a fixed computational budget and a set of candidate models. This improved search time is shown to hold across a variety of ML algorithms and real-world datasets. For scenarios without a computational budget, this paper also introduces an early stopping algorithm based on the greedy cross validation method. The greedy early stopping method is shown to outperform a competing, state-of-the-art early stopping method both in terms of search time and the quality of the ML models selected by the algorithm. Since hyperparameter optimization is among the most time-consuming, computationally intensive, and monetarily expensive tasks in the broader process of developing ML-based solutions, the ability to rapidly identify optimal machine learning models using greedy cross validation has obvious and substantial benefits to organizations and researchers alike.
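A minimal sketch of the greedy idea: rather than fully cross-validating every candidate before comparing, evaluate one fold at a time and always spend the next fold on the candidate with the best partial mean score. The candidate grid, dataset, and tie-breaking below are illustrative simplifications of the published algorithm.

    # Sketch of greedy k-fold cross validation under a fixed budget of fold
    # evaluations. Candidates, dataset, and tie-breaking are illustrative only.
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import KFold

    X, y = load_digits(return_X_y=True)
    folds = list(KFold(n_splits=5, shuffle=True, random_state=0).split(X))
    candidates = [LogisticRegression(C=c, max_iter=2000) for c in (0.01, 0.1, 1.0, 10.0)]
    scores = [[] for _ in candidates]                 # fold scores observed so far
    budget = 12                                       # total fold evaluations allowed

    for _ in range(budget):
        # Candidates that still have unused folds.
        open_idx = [i for i in range(len(candidates)) if len(scores[i]) < len(folds)]
        # Greedy choice: unseen candidates first, then the best partial mean score.
        idx = max(open_idx,
                  key=lambda i: (len(scores[i]) == 0,
                                 np.mean(scores[i]) if scores[i] else 0.0))
        train_idx, val_idx = folds[len(scores[idx])]  # next unused fold for this candidate
        candidates[idx].fit(X[train_idx], y[train_idx])
        scores[idx].append(candidates[idx].score(X[val_idx], y[val_idx]))

    best = max(range(len(candidates)), key=lambda i: np.mean(scores[i]))
    print("best C:", candidates[best].C, "partial CV score:", np.mean(scores[best]))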

