USING A LAGRANGIAN HEURISTIC FOR A COMBINATORIAL AUCTION PROBLEM

2006 ◽  
Vol 15 (03) ◽  
pp. 481-489 ◽  
Author(s):  
YUNSONG GUO ◽  
ANDREW LIM ◽  
BRIAN RODRIGUES ◽  
JIQING TANG

Combinatorial auctions allow bidders to bid on bundles of items, leading to more efficient allocations, but determining the winners in such auctions is NP-complete. In this work, a simple yet effective heuristic algorithm based on Lagrangian relaxation is presented. Extensive computational experiments using standard benchmark data (CATS) as well as newly generated, more realistic test sets showed that the heuristic provides optimal solutions for most test cases, and solutions within 1% of optimal for the rest, within very short times. Experiments comparing against CPLEX 8.0, the fastest current algorithm, showed the heuristic provides equally good or better solutions, often requiring less than 1% of the time needed by CPLEX 8.0.
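The overall scheme can be sketched as a subgradient-style Lagrangian heuristic for the winner determination problem: relax the one-winner-per-item constraints with multipliers, repair the relaxed solution to feasibility, and update the multipliers on violated items. The step schedule and greedy repair rule below are illustrative assumptions, not the paper's exact algorithm:

```python
import random

def lagrangian_wdp(bids, n_items, iters=200, step=1.0):
    """Hedged sketch of a Lagrangian relaxation heuristic for winner
    determination. bids = [(price, frozenset_of_items), ...]."""
    lam = [0.0] * n_items                  # one multiplier per item
    best_value, best_sel = 0.0, []
    for k in range(iters):
        # reduced price of each bid under the current multipliers
        red = [p - sum(lam[i] for i in s) for p, s in bids]
        # relaxed solution: take every bid with positive reduced price
        x = [1 if r > 0 else 0 for r in red]
        # repair to feasibility: greedily keep non-conflicting bids
        taken, used = [], set()
        for j in sorted(range(len(bids)), key=lambda j: -red[j]):
            _, s = bids[j]
            if red[j] > 0 and not (s & used):
                taken.append(j)
                used |= s
        value = sum(bids[j][0] for j in taken)
        if value > best_value:
            best_value, best_sel = value, taken
        # subgradient step on the relaxed item constraints (sum_j x_j <= 1)
        t = step / (k + 1)
        for i in range(n_items):
            g = sum(x[j] for j, (_, s) in enumerate(bids) if i in s) - 1
            lam[i] = max(0.0, lam[i] + t * g)
    return best_value, sorted(best_sel)
```

On a toy instance with three items, the repair step initially picks the big bundle bid, and the multipliers gradually price the contested items until the better disjoint combination is found.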

2020 ◽  
Vol 8 (6) ◽  
pp. 4466-4473

Test data generation is the task of constructing test cases for predicting the acceptability of new or updated software. Test data may be an original test suite taken from a previous run, or imitation data generated afresh specifically for this purpose. The simplest way to generate test data is randomly, but such test cases may not be competent enough to detect all defects and bugs. In contrast, test cases can also be generated automatically, which has a number of advantages over the conventional manual method. Genetic Algorithms, one such automation technique, are iterative algorithms that apply basic operations repeatedly in search of optimal solutions, or in this case, optimal test data. By finding the most error-prone path using such test cases, one can reduce software development cost and improve testing efficiency. During the evolution process such algorithms pass the better traits on to the next generation, and when applied to generations of software test data they produce test cases that are closer to optimal. Most automated test data generators developed so far work well only for continuous functions. In this study, we use Genetic Algorithms to develop a tool, named TG-GA (Test Data Generation using Genetic Algorithms), that searches for test data in a discontinuous space. The goal of this work is to analyze the effectiveness of Genetic Algorithms in automated test data generation and to compare their performance with random sampling, particularly for discontinuous spaces.
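The search idea can be sketched as follows: a GA evolves candidate inputs toward covering a branch that random testing rarely hits, using branch distance as the fitness to minimize. The program under test, fitness definition, and GA parameters below are illustrative assumptions, not TG-GA itself:

```python
import random

def branch_distance(a, b):
    # distance to satisfying the hard-to-hit predicate a*a - b == 0
    return abs(a * a - b)

def evolve(pop_size=40, gens=100, lo=-100, hi=100, seed=1):
    """Hedged sketch of GA-based test data generation: minimize the
    branch distance of integer input pairs (a, b)."""
    rng = random.Random(seed)
    pop = [(rng.randint(lo, hi), rng.randint(lo, hi)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda t: branch_distance(*t))
        if branch_distance(*pop[0]) == 0:     # branch covered
            return pop[0]
        nxt = pop[:5]                         # elitism
        while len(nxt) < pop_size:
            p1, p2 = rng.sample(pop[:20], 2)  # truncation selection
            child = (p1[0], p2[1])            # one-point crossover
            if rng.random() < 0.3:            # small-step mutation
                child = (child[0] + rng.randint(-3, 3),
                         child[1] + rng.randint(-3, 3))
            nxt.append(child)
        pop = nxt
    pop.sort(key=lambda t: branch_distance(*t))
    return pop[0]
```

With elitism the best fitness never worsens across generations, which is the property the abstract alludes to when it says better traits are passed on.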


Author(s):  
J. A. Davis

Visual representations such as free body diagrams are an important part of solving engineering mechanics problems. Automatic assessment of these types of images is difficult due to the involvement of multiple object types and their contextual nature. Using a probabilistic approach, an algorithm was created to automatically categorize groups of characters in labels from images into specific object types: variables, assignment operators, values, units, or words. Using these categories, the algorithm was then able to determine whether a label was an identifier, a point, a dimension, a variable definition, or an equation. A series of representative test cases was chosen, and the current algorithm was able to correctly predict the results of all test cases. The paper discusses each step in detail and provides the resulting probability coefficients for the model.
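The token-level categorization step might be sketched as a score-and-pick classifier over the paper's category set. The scoring weights and patterns below are illustrative assumptions, not the published probability coefficients:

```python
import re

# Hedged sketch: probability-weighted token categorizer for label tokens.
# The category names come from the paper; the weights are assumptions.
UNITS = {"n", "kn", "m", "mm", "kg", "pa", "kpa", "lb", "ft"}

def categorize(token):
    t = token.strip().lower()
    scores = {"value": 0.0, "unit": 0.0, "assignment": 0.0,
              "variable": 0.0, "word": 0.0}
    if re.fullmatch(r"[-+]?\d+(\.\d+)?", t):
        scores["value"] = 0.9          # numeric literal
    if t in UNITS:
        scores["unit"] = 0.8           # known unit symbol
    if t in {"=", ":="}:
        scores["assignment"] = 0.95    # assignment operator
    if re.fullmatch(r"[a-z](_?\w)?", t):
        scores["variable"] = 0.6       # short identifier like F or F_1
    if re.fullmatch(r"[a-z]{3,}", t):
        scores["word"] = 0.7           # plain English word
    return max(scores, key=scores.get)
```

A label such as "F_1 = 12.5 kN" would then decompose into variable, assignment, value, and unit tokens, from which the label type (here, a variable definition) could be inferred.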


1998 ◽  
Vol 06 (01n02) ◽  
pp. 135-150 ◽  
Author(s):  
D. G. Simons ◽  
M. Snellen

For a selected number of shallow water test cases of the 1997 Geoacoustic Inversion Workshop we have applied Matched-Field Inversion to determine the geoacoustic and geometric (source location, water depth) parameters. A genetic algorithm has been applied for performing the optimization, whereas the replica fields have been calculated using a standard normal-mode model. The energy function to be optimized is based on the incoherent multi-frequency Bartlett processor. We have used the data sets provided at a few frequencies in the band 25–500 Hz for a vertical line array positioned at 5 km from the source. A comparison between the inverted and true parameter values is made.
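The objective being optimized can be sketched directly: the incoherent multi-frequency Bartlett processor averages, over frequencies, the normalized match between the measured field and the replica field on the array. The small helper below is a minimal sketch of that standard form, not the authors' implementation:

```python
def bartlett_energy(replicas, data):
    """Hedged sketch of the incoherent multi-frequency Bartlett objective.
    replicas[f] and data[f] are complex pressure vectors on the array at
    frequency f; per-frequency Bartlett power is averaged incoherently."""
    def norm_sq(v):
        return sum(abs(x) ** 2 for x in v)

    def inner(a, b):
        return sum(x.conjugate() * y for x, y in zip(a, b))

    total = 0.0
    for w, d in zip(replicas, data):
        # normalized Bartlett power at this frequency, in [0, 1]
        total += abs(inner(w, d)) ** 2 / (norm_sq(w) * norm_sq(d))
    return total / len(replicas)
```

A perfect match of replica and data gives 1 at every frequency; the genetic algorithm then searches the geoacoustic/geometric parameter space for replicas maximizing this average (equivalently, minimizing one minus it).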


Author(s):  
CATHERINE VAIRAPPAN ◽  
SHANGCE GAO ◽  
ZHENG TANG ◽  
HIROKI TAMURA

A new version of a neuro-fuzzy system with feedbacks and chaotic dynamics is proposed in this work. Unlike a conventional neuro-fuzzy system, the improved neuro-fuzzy system with feedbacks is better able to handle temporal data series. By introducing chaotic dynamics into the feedback neuro-fuzzy system, the system gains richer and more flexible dynamics with which to search for near-optimal solutions. In the experiments, the performance and effectiveness of the presented approach are evaluated using benchmark data series. Comparison with other existing methods shows that the proposed neuro-fuzzy feedback method is able to predict the time series accurately.


VLSI Design ◽  
1995 ◽  
Vol 3 (1) ◽  
pp. 93-98 ◽  
Author(s):  
Youssef Saab

Partitioning is an important problem in the design automation of integrated circuits. In many of its formulations, this problem is NP-hard, and several heuristic methods have been proposed for its solution. To evaluate the effectiveness of the various partitioning heuristics, it is desirable to have test cases with known optimal solutions that are as “random looking” as possible. In this paper, we describe several methods for constructing such test cases. All our methods except one use the theory of network flow. The remaining method uses a relationship between a partitioning problem and the geometric clustering problem; the latter can be solved in polynomial time in any fixed dimension.
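The general idea of a test case with a provably known optimum can be sketched with a much simpler construction than the paper's network-flow methods (which are not reproduced here): plant two cliques joined by a small number of cross edges, so the planted balanced bipartition is optimal by a counting argument.

```python
import random

def make_partition_instance(n=8, k=3, seed=0):
    """Hedged sketch (not the paper's network-flow constructions): a graph
    on 2n vertices whose optimal balanced bipartition is known. Two
    n-cliques joined by k < 2(n-1) cross edges: any other balanced
    bipartition moves m >= 1 vertices per side and cuts at least
    2*m*(n-m) >= 2(n-1) clique edges, so the planted cut of size k wins."""
    rng = random.Random(seed)
    A, B = list(range(n)), list(range(n, 2 * n))
    edges = set()
    for block in (A, B):                      # two cliques
        for i in range(n):
            for j in range(i + 1, n):
                edges.add((block[i], block[j]))
    while len(edges) < n * (n - 1) + k:       # k distinct cross edges
        edges.add((rng.choice(A), rng.choice(B)))
    return sorted(edges), set(A), k           # graph, planted side, optimal cut

def cut_size(edges, side):
    return sum((u in side) != (v in side) for u, v in edges)
```

The drawback of this naive construction, and the motivation for the paper's methods, is that two visible cliques are anything but “random looking”.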


Software testing is considered the most powerful and important phase. An effective testing process leads to more accurate and reliable results and higher-quality software products. Random testing (RT) is a major software testing strategy, and its simplicity makes it among the most efficient strategies with respect to the time required for test case selection; its significant drawback, however, is low defect detection efficacy. This drawback is addressed by Adaptive Testing (AT), but AT carries high computational complexity. One important method for improving RT is Adaptive Random Testing (ART). Another class of strategies is partition testing, a standard software testing technique in which the input domain is divided into a set number of disjoint partitions and test cases are selected from within each partition. A hybrid approach combining AT and random partition testing (RPT) already exists, called the ARPT strategy. In ARPT, random partitioning is improved by introducing different clustering algorithms that resolve the problem's parameter space between the target method and the objective function of the test data; in this way random partitioning is improved to reduce the time consumption and complexity of ARPT testing strategies. The parameters of the enhanced ARPT testing approaches are optimized using different optimization algorithms, and the computational complexity of Optimized Improved ARPT (OIARPT) is reduced by selecting the best test cases using a Support Vector Machine (SVM). In this paper the Optimized Improved ARPT testing strategies with SVM are unified and named Unified ARPT (UARPT), which enhances testing performance and reduces the time complexity of testing software.
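The ART building block mentioned above can be sketched in its classic fixed-size-candidate-set form (one standard ART variant, not the paper's full ARPT/UARPT pipeline): each new test case is the random candidate farthest from all previously executed tests, spreading tests across the input domain.

```python
import random

def fscs_art(n_tests, k=10, lo=0.0, hi=1.0, seed=0):
    """Hedged sketch of fixed-size-candidate-set Adaptive Random Testing
    on a one-dimensional input domain [lo, hi]."""
    rng = random.Random(seed)
    executed = [rng.uniform(lo, hi)]          # first test is purely random
    while len(executed) < n_tests:
        candidates = [rng.uniform(lo, hi) for _ in range(k)]
        # pick the candidate maximizing its distance to the nearest
        # already-executed test, so tests spread out over the domain
        best = max(candidates,
                   key=lambda c: min(abs(c - e) for e in executed))
        executed.append(best)
    return executed
```

Partition-based variants such as ARPT replace the uniform candidate sampling with sampling from partitions (e.g. clusters of the input domain), which is where the clustering algorithms described above come in.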


Author(s):  
ZHONGSHENG QIAN

Specification-based testing can be employed to evaluate software functionality without knowing the program code. Decisions are the primary form of the pre- and post-conditions in formal specifications. This work expatiates at length on logic coverage criteria for specification-based testing. It proposes and expounds mask logic coverage criteria to solve problems that existing determinant logic coverage criteria cannot. A feasible test case generation algorithm based on mask logic coverage criteria is developed. Test cases satisfying mask logic coverage criteria can detect errors caused by the masking property of conditions. An experiment is conducted comparing MC/DC, RC/DC and two mask logic coverage criteria (RMCC and GMCC) on test effectiveness and fault detection ability. The work also elaborates on the constraints among conditions, how to decompose and compose a complicated decision, and the relationships among decisions; these clarify, respectively, the coupling problem among conditions, the multiple occurrences of a condition in a decision, and the location of a decision in a specification or program. Additionally, coverage criteria including full true decision coverage, full false decision coverage, all sub-decisions coverage, unique condition true coverage, and unique condition false coverage are proposed; the test sets satisfying these criteria detect different types of errors. Finally, a hierarchical subsumption relation is established among the presented coverage criteria and some existing ones, and applicable scenarios for the different criteria are suggested.
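The masking property at the heart of these criteria can be illustrated concretely: in a decision, a condition is masked by a test case when flipping that condition alone cannot change the decision's outcome. The decision and helper below are an illustrative example, not the paper's formal RMCC/GMCC definitions:

```python
def decision(a, b, c):
    # example decision: a AND (b OR c)
    return a and (b or c)

def masked(a, b, c, flip):
    """True if flipping the named condition alone leaves the decision's
    outcome unchanged for this test case (i.e. the condition is masked)."""
    args = {"a": a, "b": b, "c": c}
    before = decision(**args)
    args[flip] = not args[flip]
    return decision(**args) == before
```

For instance, whenever `a` is false, both `b` and `c` are masked, so a fault in how `b` is evaluated can never surface in such a test case; mask logic coverage criteria demand test cases in which each condition is unmasked.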


Author(s):  
PRATEEVA MAHALI ◽  
ARUP ABHINNA ACHARYA

With the exponential growth in the size and complexity of software, the testing activity is no longer limited to the testing phase of the SDLC (Software Development Life Cycle). The testing process has been made iterative and incremental in the object-oriented development scenario. This leads to an increase in the effort and time required for testing, as well as an explosion in test cases. Regression testing has the additional issue of test case retesting, which further increases effort and time, so a suitable prioritization technique should be used to address these issues. In this paper we give a proposal based on prioritization of test cases using a GA (Genetic Algorithm), a process found to be very effective during regression testing. We find an optimized independent path having the maximum critical path value, which in turn leads to a prioritization of test cases. The three components of regression testing, i.e. effort, time, and cost, are gradually reduced by using this approach.
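The prioritization step can be sketched as a GA over permutation chromosomes: each chromosome is an ordering of test cases, and fitness rewards orderings that run test cases with high critical path value early. The fitness weighting and GA parameters below are illustrative assumptions, not the paper's exact formulation:

```python
import random

def fitness(order, crit):
    # earlier positions get higher weight, so orderings that run test
    # cases covering high critical-path values first score best
    n = len(order)
    return sum((n - pos) * crit[t] for pos, t in enumerate(order))

def prioritize(crit, gens=200, pop_size=30, seed=0):
    """Hedged sketch of GA-based test case prioritization: chromosomes
    are permutations of test case indices, crit[i] is the critical path
    value covered by test case i (a stand-in for the paper's measure)."""
    rng = random.Random(seed)
    n = len(crit)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda o: -fitness(o, crit))
        nxt = pop[:5]                          # elitism
        while len(nxt) < pop_size:
            child = rng.choice(pop[:10])[:]    # copy a good parent
            i, j = rng.sample(range(n), 2)     # swap mutation
            child[i], child[j] = child[j], child[i]
            nxt.append(child)
        pop = nxt
    return max(pop, key=lambda o: fitness(o, crit))
```

Swap mutation keeps every chromosome a valid permutation, which is why it is preferred here over the bit-level operators used for binary encodings.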


2019 ◽  
Vol 4 (3) ◽  
pp. 291
Author(s):  
Farid Jauhari ◽  
Wayan Firdaus Mahmudy ◽  
Achmad Basuki

Proportional tuition fee assessment is an optimization process that seeks a compromise between students' willingness to pay and institutional income. Using a genetic algorithm to find optimal solutions requires an effective chromosome representation, parameters, and genetic operators to obtain an efficient search. This paper proposes a new chromosome representation and identifies efficient genetic parameters for solving the proportional tuition fee assessment problem. The results of applying the new chromosome representation are compared with another chromosome representation from a previous study. The evaluations show that the proposed chromosome representation obtains better results than the other in both execution time and solution quality.
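One plausible direct encoding for this kind of problem can be sketched as follows; the fee levels, ability-to-pay caps, and penalty weight are illustrative assumptions, not the representation proposed in the paper:

```python
import random

# Hedged sketch of a direct chromosome representation for proportional
# tuition assessment: gene i is an index into FEE_LEVELS for student i.
FEE_LEVELS = [500, 1000, 1500, 2000]

def decode(chrom):
    return [FEE_LEVELS[g] for g in chrom]

def fitness(chrom, ability):
    """Reward institutional income, penalize fees that exceed a
    student's ability to pay (weight 2 is an arbitrary choice)."""
    fees = decode(chrom)
    income = sum(fees)
    penalty = sum(max(0, fee - cap) for fee, cap in zip(fees, ability))
    return income - 2 * penalty

def random_chromosome(n, rng):
    return [rng.randrange(len(FEE_LEVELS)) for _ in range(n)]
```

Because every gene value decodes to a valid fee level, standard crossover and mutation always yield feasible chromosomes, which is the usual argument for this style of representation.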


2012 ◽  
Vol 34 (1) ◽  
pp. 83-101 ◽  
Author(s):  
Martina Navarro ◽  
Nelson Miyamoto ◽  
John van der Kamp ◽  
Edgard Morya ◽  
Ronald Ranvaud ◽  
...  

We investigated the effects of high pressure on the point of no return or the minimum time required for a kicker to respond to the goalkeeper’s dive in a simulated penalty kick task. The goalkeeper moved to one side with different times available for the participants to direct the ball to the opposite side in low-pressure (acoustically isolated laboratory) and high-pressure situations (with a participative audience). One group of participants showed a significant lengthening of the point of no return under high pressure. With less time available, performance was at chance level. Unexpectedly, in a second group of participants, high pressure caused a qualitative change in which for short times available participants were inclined to aim in the direction of the goalkeeper’s move. The distinct effects of high pressure are discussed within attentional control theory to reflect a decreasing efficiency of the goal-driven attentional system, slowing down performance, and a decreasing effectiveness in inhibiting stimulus-driven behavior.
