ranking procedures
Recently Published Documents

TOTAL DOCUMENTS: 73 (five years: 7)
H-INDEX: 12 (five years: 0)

2021 ◽  
Author(s):  
Ivan Karpenko ◽  
Ihor Ischenko ◽  
Olha Nikolenko ◽  
Felipe Rodrigues ◽  
Serhii Levonyuk ◽  
...  

Abstract The Ukrainian sector of the Western Black Sea (WBS) is one of the last remaining exploration frontiers in Europe. This area, which spans shelf to deepwater environments, is underexplored, with no drilling of targets in water depths exceeding 100 meters. For this reason, the Ukrainian sector of the WBS is attractive for exploration, especially in the context of new play types and targets such as biogenic gas. Such hydrocarbon plays have been proven by neighboring Romania and Turkey in areas adjacent to Ukrainian waters. Therefore, a rigorous basin analysis program has been initiated to assess the petroleum systems and play risks in the entire Ukrainian sector of the WBS. The goals of this program are: 1) to establish a regional geoscience foundation following industry best practices in exploration; 2) to enable more accurate risking and ranking procedures for an exploration portfolio; and 3) to provide critical support for the analysis of a new generation of seismic data that is currently being acquired. This paper presents the initial scope of work.


Author(s):  
Omer Ozturk ◽  
Olena Kravchuk

Abstract This paper presents novel estimators for a judgment post-stratified (JPS) sample that combine the ranking information from different methods or rankers. A JPS sample divides the units of the original simple random sample (SRS) into several ranking groups based on the relative positions (ranks) of the units in their individual small comparison sets. Ranks in the comparison sets may be assigned with several different ranking procedures. When considered separately, each ranking method leads to a different JPS estimator of the population mean or total. Here we introduce equally or unequally weighted estimators that combine the ranking information from multiple sources. The unequal weights utilize the standard errors of the individual ranking methods' estimators. The weighted estimators provide a substantial improvement over an SRS estimator and over a JPS estimator based on a single ranking method. The new estimators are applied to crop-establishment phenotypic data from an agricultural field experiment. Supplementary materials accompanying this paper appear online.
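The abstract's unequal weighting idea — weights derived from the standard errors of the individual ranking methods' estimators — can be sketched as classical inverse-variance weighting. This is a minimal illustration under that assumption, not the authors' exact JPS estimator; the function name and numbers are invented for the example.

```python
# Hypothetical sketch: combine K estimators of a population mean by
# inverse-variance weighting, i.e. weights proportional to 1 / SE^2.
# This mirrors the idea of weighting ranking methods by their standard
# errors, but is NOT the paper's exact JPS estimator.

def combine_estimates(estimates, std_errors):
    """Inverse-variance weighted mean of several estimators of the same quantity."""
    weights = [1.0 / se**2 for se in std_errors]
    total = sum(weights)
    combined = sum(w * est for w, est in zip(weights, estimates)) / total
    combined_se = (1.0 / total) ** 0.5  # SE of the weighted combination
    return combined, combined_se

# Three ranking methods, each yielding its own estimate of the mean (made-up data):
est, se = combine_estimates([10.2, 9.8, 10.5], [0.4, 0.6, 0.5])
```

Note that the combined standard error is never larger than the smallest individual one, which is the sense in which combining rankers improves on any single ranking method.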


Author(s):  
Graziano Fiorillo ◽  
Hani Nassif

Bridges are critical for the mobility of our society and its economic growth, yet available funds for bridge repair, maintenance, and rehabilitation are limited. The Moving Ahead for Progress in the 21st Century Act (MAP-21) introduced several new parameters for improving the management of bridge assets, such as bridge element evaluation, life-cycle analysis, and risk-based performance indicators. Risk-based methods account for the uncertainties embedded in engineering variables and long-term evaluations. The objective of this paper is to identify, assess, and quantify structural risk components for bridges using probabilistic risk methodologies and data from the National Bridge Inventory database. The aim is to simplify the implementation of risk-based ranking procedures in bridge management system packages according to the MAP-21 vision. Machine learning techniques are therefore employed to facilitate the introduction of probabilistic risk methods into bridge management systems. The procedure is described for seven hazards pertinent to bridges in New Jersey: overloading, fatigue, seismic, flooding, scour, vehicle collision, and vessel collision. Risk values are computed in monetary terms to homogenize the comparison among bridges across different hazards. The analysis is performed on 5,534 bridges and shows that seismic events and fatigue resulting from truck overloading are the dominant hazards in New Jersey, with about 97.0% and 29.0% of bridges, respectively, showing some level of risk. The main limitation of the proposed framework is the lack of accurate bridge-inventory data needed to perform a fully probabilistic structural analysis and to minimize engineering judgment.
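The monetary-risk ranking described above — expressing each hazard's risk in dollar terms so bridges can be compared on one scale — reduces, in its simplest form, to expected loss: annual event probability times failure cost, summed over hazards. The sketch below illustrates only that idea; the bridge IDs, probabilities, and costs are invented, and the paper's actual probabilistic and machine-learning machinery is far richer.

```python
# Hypothetical sketch of monetary risk ranking, NOT the paper's model:
# risk per hazard = annual probability of the event x consequence cost,
# and a bridge's total risk is the sum over its hazards. All figures invented.

def total_risk(hazards):
    """Expected annual monetary loss: sum of P(event) * cost over hazards."""
    return sum(p * cost for p, cost in hazards.values())

bridges = {
    "B-101": {"seismic": (0.002, 5_000_000), "scour": (0.010, 1_200_000)},
    "B-202": {"fatigue": (0.005, 3_000_000), "flooding": (0.001, 800_000)},
}

# Rank bridges from highest to lowest total expected loss.
ranking = sorted(bridges, key=lambda b: total_risk(bridges[b]), reverse=True)
```

Putting every hazard in dollars is what makes a single ranking across dissimilar hazards (seismic vs. scour vs. collision) meaningful.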


2020 ◽  
Vol 29 (3) ◽  
pp. 289-299
Author(s):  
Peter Biegelbauer ◽  
Thomas Palfinger ◽  
Sabine Mayer

Abstract Innovation agencies, that is, organizations whose primary focus is funding applied research and technological development, evaluate project proposals to select the most promising ones for funding. At present, little verified knowledge is available on the project evaluation and selection processes of innovation agencies. We want to show how projects are evaluated and selected in these organizations, and to contribute to a better understanding of the variety of processes in use by pointing out the reasoning behind some of the most important practices. This article therefore focuses on the following questions: How are projects selected in innovation agencies? What procedures and practices are employed? Are there differences in procedures and practices, and what are the reasons for these differences? The basis for answering these questions is a study produced for the European Association of National Innovation Agencies, Taftie, in which we analysed the project selection procedures of 18 programmes run by 12 European innovation agencies. To do so, we compiled an overview of the agencies' existing selection procedures and analysed and compared them along the stages of a typical selection process. The key points of interest were the role of evaluators, selection criteria, ranking procedures, and general process issues.


2020 ◽  
pp. 001316442092845
Author(s):  
Wolfgang Lenhard ◽  
Alexandra Lenhard

The interpretation of psychometric test results is usually based on norm scores. We compared semiparametric continuous norming (SPCN) with conventional norming methods by simulating results for test scales with different item numbers and difficulties via an item response theory approach. Subsequently, we modeled the norm scores based on random samples of varying size, using either a conventional ranking procedure or SPCN. The norms were then cross-validated against a fully representative sample of N = 840,000, for which different measures of norming error were computed. This process was repeated 90,000 times. Both approaches benefitted from an increase in sample size, with SPCN reaching optimal results with much smaller samples. Conventional norming performed worse with respect to data fit, age-related errors, and the number of missing values in the norm tables. The data fit of conventional norming with fixed subsample sizes varied with the granularity of the age brackets, calling into question general recommendations for sample sizes in test norming. We recommend that test norms be based on statistical models of the raw score distributions rather than compiled into norm tables via conventional ranking procedures.
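The "conventional ranking procedure" that serves as the baseline above is, in essence, percentile-rank norming: rank the raw scores in the norm sample, convert ranks to percentiles, and map the percentiles onto a normal scale (here an IQ-style metric with mean 100, SD 15). This is a simplified illustration of that baseline with invented scores, not the authors' simulation setup or their SPCN method.

```python
from statistics import NormalDist

# Sketch of conventional percentile-rank norming (the baseline the paper
# compares SPCN against): raw score -> mid-rank percentile -> normal-scale
# norm score. Scores and the IQ-style scale (mean 100, SD 15) are illustrative.

def norm_scores(raw_scores, mean=100.0, sd=15.0):
    """Map each distinct raw score to a norm score via its percentile rank."""
    n = len(raw_scores)
    out = {}
    for x in set(raw_scores):
        below = sum(1 for v in raw_scores if v < x)
        ties = sum(1 for v in raw_scores if v == x)
        pct = (below + 0.5 * ties) / n  # mid-rank percentile, avoids 0 and 1
        out[x] = mean + sd * NormalDist().inv_cdf(pct)
    return out

scores = norm_scores([12, 15, 15, 18, 20, 22, 25, 27, 30, 33])
```

Because each norm score is tied directly to the sample ranks, small or unevenly binned norm samples produce jagged tables — the sampling-error problem that continuous norming models of the raw score distribution are designed to smooth out.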


2016 ◽  
Vol 144 (3-4) ◽  
pp. 223-240 ◽  
Author(s):  
Barbara Sandrasagra ◽  
Michael Soltys
