Implementing an Algorithm: Performance Considerations and a Case Study

Author(s):  
Uwe Suhl


2015 ◽  
Vol 54 (7) ◽  
pp. 1637-1662 ◽  
Author(s):  
Jason M. Apke ◽  
Daniel Nietfeld ◽  
Mark R. Anderson

Enhanced temporal and spatial resolution of the Geostationary Operational Environmental Satellite–R Series (GOES-R) will allow for the use of cloud-top-cooling-based convection-initiation (CI) forecasting algorithms. Two such algorithms have been created on the current generation of GOES: the University of Wisconsin cloud-top-cooling algorithm (UWCTC) and the University of Alabama in Huntsville’s satellite convection analysis and tracking algorithm (SATCAST). Preliminary analyses of algorithm products have led to speculation over preconvective environmental influences on algorithm performance. An objective validation approach is developed to separate algorithm products into positive and false indications. Seventeen preconvective environmental variables are examined for the positive and false indications to improve algorithm output. The total dataset consists of two time periods in the late convective season of 2012 and the early convective season of 2013. Data are examined for environmental relationships using principal component analysis (PCA) and quadratic discriminant analysis (QDA). Data fusion by QDA is tested for SATCAST and UWCTC on five separate case-study days to determine whether application of environmental variables improves satellite-based CI forecasting. PCA and significance testing revealed that positive indications favored environments with greater vertically integrated instability (CAPE), less stability (CIN), and more low-level convergence. QDA improved both algorithms on all five case studies using significantly different variables. This study provides an examination of environmental influences on the performance of GOES-R Proving Ground CI forecasting algorithms and shows that integration of QDA in the cloud-top-cooling-based algorithms using environmental variables will ultimately generate a more skillful product.
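As a minimal sketch of the PCA-then-QDA workflow the abstract describes (not the authors' code), the example below applies scikit-learn to hypothetical data standing in for the seventeen preconvective environmental variables; the array shapes, random labels, and train/test split are illustrative assumptions only.

```python
# Minimal sketch of PCA for exploring variable relationships and QDA as a
# data-fusion classifier over environmental variables. All data are invented.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical table: 500 algorithm indications x 17 environmental variables
# (e.g., CAPE, CIN, low-level convergence), labeled 1 = positive, 0 = false.
X = rng.normal(size=(500, 17))
y = rng.integers(0, 2, size=500)

# PCA to inspect which combinations of variables dominate the variance.
pca = PCA(n_components=3)
pca.fit(X)
print("explained variance ratio:", pca.explained_variance_ratio_)

# QDA as the fusion step: classify a new indication as likely positive or
# likely false from its preconvective environment.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
qda = QuadraticDiscriminantAnalysis()
qda.fit(X_train, y_train)
print("held-out accuracy:", qda.score(X_test, y_test))
```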


Author(s):  
Tania Turrubiates Lopez ◽  
Elisa Schaeffer ◽  
Dalia Dominguez-Diaz ◽  
German Dominguez-Carrillo

2002 ◽  
Vol 34 (3) ◽  
pp. 297-312 ◽  
Author(s):  
Marne C. Cario ◽  
John J. Clifford ◽  
Raymond R. Hill ◽  
Jaehwan Yang ◽  
Kejian Yang ◽  
...  

Molecules ◽  
2021 ◽  
Vol 26 (16) ◽  
pp. 4757
Author(s):  
William E. Hackett ◽  
Joseph Zaia

Protein glycosylation that mediates interactions among viral proteins, host receptors, and immune molecules is an important consideration for predicting viral antigenicity. Viral spike proteins, the proteins responsible for host cell invasion, are especially important to examine. However, a lack of consensus within the field of glycoproteomics regarding identification strategy and false discovery rate (FDR) calculation impedes such examinations. As a case study in the overlap between software tools, we examine recently published SARS-CoV-2 glycoprotein datasets with four glycoproteomics identification tools, each run with its recommended protocol: GlycReSoft, Byonic, pGlyco2, and MSFragger-Glyco. These tools use different forms of Target-Decoy Analysis (TDA) to estimate FDR and have different database-oriented search methods with varying degrees of quantification capability. Instead of an ideal overlap between tools, we observed different sets of identifications with only a limited intersection. When clustering by glycopeptide identifications, we see higher degrees of relatedness within software tools than within glycosites. Taking the consensus between results yields a conservative and uninformative conclusion, as we lose identifications in the desire for caution; these non-consensus identifications are often of lower abundance and therefore more susceptible to nuanced changes. We conclude that present glycoproteomics software tools are not directly comparable and that methods are needed to assess their overall results and FDR estimation performance. Once such tools are developed, it will be possible to improve FDR methods and quantify complex glycoproteomes with acceptable confidence, rather than in potentially misleading broad strokes.
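The sketch below illustrates, on invented data, the two operations the abstract turns on: measuring the overlap of glycopeptide identification sets across tools, and estimating FDR from a basic target-decoy count. The glycopeptide labels, scores, and threshold are hypothetical, and none of the four tools' actual outputs or scoring schemes are reproduced.

```python
# Overlap between per-tool identification sets, plus a simple TDA-style
# FDR estimate (decoys / targets above a score threshold). Data are invented.
from itertools import combinations

ids = {
    "GlycReSoft":      {"PEP1-HexNAc2Hex5", "PEP2-HexNAc4Hex5NeuAc2", "PEP3-HexNAc2Hex9"},
    "Byonic":          {"PEP1-HexNAc2Hex5", "PEP3-HexNAc2Hex9", "PEP4-HexNAc4Hex3Fuc1"},
    "pGlyco2":         {"PEP1-HexNAc2Hex5", "PEP2-HexNAc4Hex5NeuAc2"},
    "MSFragger-Glyco": {"PEP1-HexNAc2Hex5", "PEP4-HexNAc4Hex3Fuc1"},
}

# Consensus (intersection across all tools) and pairwise Jaccard overlaps.
consensus = set.intersection(*ids.values())
print("consensus identifications:", consensus)
for a, b in combinations(ids, 2):
    jaccard = len(ids[a] & ids[b]) / len(ids[a] | ids[b])
    print(f"{a} vs {b}: Jaccard overlap = {jaccard:.2f}")

def tda_fdr(matches, threshold):
    """Estimate FDR at a score threshold from (score, is_decoy) pairs."""
    targets = sum(1 for score, is_decoy in matches if score >= threshold and not is_decoy)
    decoys = sum(1 for score, is_decoy in matches if score >= threshold and is_decoy)
    return decoys / max(targets, 1)

# Hypothetical scored matches: (search score, is_decoy flag).
matches = [(42.0, False), (35.5, False), (33.1, True), (30.2, False), (12.4, True)]
print("estimated FDR at score >= 30:", tda_fdr(matches, 30.0))
```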


2020 ◽  
pp. 35-58
Author(s):  
Daphne Leong

How can a performer’s voice complement that of a theorist in the analysis of a musical work? This chapter takes the opening cadenza of Ravel’s Concerto for the Left Hand as a case study. Certain performance considerations—embodied facets, instrumental affordances, and affective implications—comprise warp and weft not only of the Concerto’s execution and interpretation, but also of its structure and meaning. The chapter explores the cadenza: visual and kinesthetic aspects, rhetorical and tonal function, form and structure, rhythmic features and performance issues. The analysis is informed by the authors’ experiences of performing the Concerto and by historical recordings of the work. Video performances and audio examples complement the written text.


2020 ◽  
Vol 2020 ◽  
pp. 1-9
Author(s):  
Yasin Kirelli ◽  
Seher Arslankaya

As the use of social media has increased, the volume of shared data has surged, and this has become an important source of research data for environmental issues, as it has for other popular topics. Sentiment analysis has been used to determine people's sensitivity and behavior regarding environmental issues. However, the analysis of Turkish texts has received little attention in the literature. In this article, the sentiment of Turkish tweets about global warming and climate change is determined by machine learning methods. Supervised algorithms (linear classifiers and probabilistic classifiers) are trained on thirty thousand randomly selected Turkish tweets to detect sentiment polarity (positive, negative, or neutral), and the performance of the algorithms is compared. This study also provides benchmarking results for future sentiment analysis studies on Turkish texts.
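As a rough illustration of the supervised comparison described above (not the authors' pipeline), the sketch below trains one linear and one probabilistic classifier on a handful of invented Turkish example tweets; a real study would use the thirty-thousand-tweet corpus and proper evaluation splits.

```python
# Compare a linear classifier and a probabilistic classifier on toy
# Turkish tweets. Example texts and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

tweets = [
    "kuresel isinma cok korkutucu",          # "global warming is very frightening"
    "iklim degisikligi ile mucadele guzel",  # "fighting climate change is good"
    "hava bugun normal",                     # "the weather is normal today"
]
labels = ["negative", "positive", "neutral"]

models = {
    "linear (logistic regression)": LogisticRegression(max_iter=1000),
    "probabilistic (naive Bayes)": MultinomialNB(),
}
for name, clf in models.items():
    pipe = make_pipeline(TfidfVectorizer(), clf)   # bag-of-words TF-IDF features
    pipe.fit(tweets, labels)
    print(name, "->", pipe.predict(["kuresel isinma endise verici"]))
```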


Author(s):  
Kai Shi ◽  
Huiqun Yu ◽  
Guisheng Fan ◽  
Jianmei Guo ◽  
Liqiong Chen ◽  
...  

An effective method for addressing the configuration optimization problem (COP) in Software Product Lines (SPLs) is to deploy a multi-objective evolutionary algorithm, for example, the state-of-the-art SATIBEA. In this paper, an improved hybrid algorithm, called SATIBEA-LSSF, is proposed to further improve the performance of SATIBEA; it combines a multi-children generating strategy, an enhanced mutation strategy with local search, and an elite inheritance mechanism. Empirical results on the same case studies demonstrate that our algorithm significantly outperforms the state of the art for four out of five SPLs on the Hypervolume quality indicator and on convergence speed. To verify the effectiveness and robustness of our algorithm, a parameter sensitivity analysis is conducted and three observations are reported in detail.
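Purely to illustrate two of the ingredients named above, the sketch below shows a generic multi-objective loop with a multi-children step and elite (non-dominated) inheritance on toy bit-vector configurations. It is not SATIBEA-LSSF: the objectives, mutation rate, and population sizes are invented, and there is no SAT-based handling of feature-model constraints.

```python
# Generic multi-objective loop with multi-children generation and elite
# (non-dominated) inheritance over toy bit-vector configurations.
import random

N_FEATURES, POP, CHILDREN_PER_PARENT = 20, 10, 4

def objectives(cfg):
    # Hypothetical objectives, both minimized: negated feature count and a toy cost.
    cost = sum(i * bit for i, bit in enumerate(cfg))
    return (-sum(cfg), cost)

def dominates(a, b):
    # Pareto dominance: no worse in every objective, strictly better in one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def mutate(cfg, rate=0.1):
    # Flip each bit with a small probability.
    return [bit ^ (random.random() < rate) for bit in cfg]

population = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(POP)]
elite = []
for _ in range(50):
    # Multi-children strategy: each parent produces several mutated children.
    children = [mutate(parent) for parent in population for _ in range(CHILDREN_PER_PARENT)]
    scored = [(objectives(cfg), cfg) for cfg in population + children]
    # Elite inheritance: non-dominated configurations survive first.
    elite = [cfg for f, cfg in scored if not any(dominates(g, f) for g, _ in scored)]
    rest = [cfg for _, cfg in scored if cfg not in elite]
    population = (elite + rest)[:POP]

print("non-dominated configurations found:", len(elite))
```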


2012 ◽  
Vol 20 (2) ◽  
pp. 277-299 ◽  
Author(s):  
R. Morgan ◽  
M. Gallagher

In this paper we extend a previously proposed randomized landscape generator in combination with a comparative experimental methodology to study the behavior of continuous metaheuristic optimization algorithms. In particular, we generate two-dimensional landscapes with parameterized, linear ridge structure, and perform pairwise comparisons of algorithms to gain insight into what kinds of problems are easy and difficult for one algorithm instance relative to another. We apply this methodology to investigate the specific issue of explicit dependency modeling in simple continuous estimation of distribution algorithms. Experimental results reveal specific examples of landscapes (with certain identifiable features) where dependency modeling is useful, harmful, or has little impact on mean algorithm performance. Heat maps are used to compare algorithm performance over a large number of landscape instances and algorithm trials. Finally, we perform a meta-search in the landscape parameter space to find landscapes that maximize the performance difference between algorithms. The results are related to some previous intuition about the behavior of these algorithms, but at the same time lead to new insights into the relationship between dependency modeling in EDAs and the structure of the problem landscape. The landscape generator and overall methodology are quite general and extendable and can be used to examine specific features of other algorithms.
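As a rough illustration of the kind of generator described (assumptions only, not the authors' code), the sketch below builds a two-dimensional landscape whose fitness decays with distance from a linear ridge; the ridge angle and width play the role of the landscape parameters, and the Gaussian decay profile is an arbitrary choice.

```python
# Toy 2-D landscape with a parameterized linear ridge through the unit square.
import numpy as np

def ridge_landscape(x, y, angle=0.7, width=0.1, ridge_height=1.0):
    """Fitness of point (x, y): high on or near the ridge line, low elsewhere."""
    # Perpendicular distance from (x, y) to the line through (0.5, 0.5) at `angle`.
    nx, ny = -np.sin(angle), np.cos(angle)   # unit normal to the ridge direction
    dist = np.abs((x - 0.5) * nx + (y - 0.5) * ny)
    return ridge_height * np.exp(-((dist / width) ** 2))

# Evaluate on a grid, e.g., to build heat maps of algorithm performance
# over many landscape instances.
xs, ys = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
z = ridge_landscape(xs, ys)
print("max fitness on grid:", z.max())
```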

