Assessment and selection of management consultants: A comparative cognitive study between small- and large-scale companies.

Author(s):  
Annick Van Rossem ◽  
Dominique Hortense
1996 ◽  
Vol 76 (06) ◽  
pp. 0939-0943 ◽  
Author(s):  
B Boneu ◽  
G Destelle

The anti-aggregating activity of five ascending doses of clopidogrel was compared to that of ticlopidine in atherosclerotic patients. The aim of this study was to determine the dose of clopidogrel to be tested in a large-scale clinical trial of secondary prevention of ischemic events in patients suffering from vascular manifestations of atherosclerosis [the CAPRIE (Clopidogrel vs Aspirin in Patients at Risk of Ischemic Events) trial]. A multicenter study involving 9 haematological laboratories and 29 clinical centers was set up. One hundred and fifty ambulatory patients were randomized into one of seven groups: clopidogrel at doses of 10, 25, 50, 75 or 100 mg OD, ticlopidine 250 mg BID, or placebo. ADP- and collagen-induced platelet aggregation tests were performed before starting treatment and after 7 and 28 days. Bleeding time was measured on days 0 and 28. Patients were seen on days 0, 7 and 28 to check the clinical and biological tolerability of the treatment. Clopidogrel exerted a dose-related inhibition of ADP-induced platelet aggregation and prolongation of bleeding time. In the presence of ADP (5 µM), this inhibition ranged between 29% and 44% in comparison to pretreatment values. Bleeding times were prolonged 1.5 to 1.7 times. These effects were not significantly different from those produced by ticlopidine. Clinical tolerability was good or fair in 97.5% of the patients. No haematological adverse events were recorded. These results supported the selection of 75 mg once a day to evaluate and compare the antithrombotic activity of clopidogrel with that of aspirin in the CAPRIE trial.
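
As a worked illustration of the percent-inhibition arithmetic behind the 29-44% figures above, here is a minimal sketch; the aggregation values are invented for illustration, not data from the trial.

```python
# Hedged illustration of percent inhibition relative to pretreatment values;
# the aggregation numbers below are hypothetical, not trial data.

def percent_inhibition(pre: float, post: float) -> float:
    """Inhibition of ADP-induced aggregation relative to the pretreatment value."""
    return (pre - post) / pre * 100.0

# e.g. maximal aggregation falling from 70% to 42% after treatment:
print(f"{percent_inhibition(70.0, 42.0):.0f}% inhibition")  # 40%, within the reported 29-44% range
```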


2021 ◽  
Vol 13 (6) ◽  
pp. 3571
Author(s):  
Bogusz Wiśnicki ◽  
Dorota Dybkowska-Stefek ◽  
Justyna Relisko-Rybak ◽  
Łukasz Kolanda

The paper responds to research problems related to the implementation of large-scale investment projects in waterways in Europe. As part of design and construction works, it is necessary to identify the river ports that play a major role within the European transport network as intermodal nodes. This entails a number of challenges, the cardinal one being the optimal selection of port locations, taking into account the new transport, economic, and geopolitical situation that modernized waterways will bring about. The aim of the paper was to present an original methodology for determining port locations along modernized waterways based on non-cost criteria, formulated as an extended multicriteria decision-making (MCDM) method and employing GIS (Geographic Information System)-based tools for spatial analysis. The methodology was designed to be applicable to the varying conditions of a river's hydroengineering structures (free-flowing rivers, canalized rivers, and canals) and adjustable to the requirements posed by intermodal supply chains. The method was applied to the Odra River Waterway, which allowed the formulation of recommendations regarding its application to different river sections at every stage of the research process.
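
The paper's extended MCDM method is not reproduced here, but the core idea of scoring candidate port locations against weighted non-cost criteria can be sketched as below; the criteria names, weights, and candidate scores are hypothetical placeholders.

```python
# Minimal weighted-sum MCDM sketch for ranking candidate port locations.
# Criteria, weights, and ratings are hypothetical; the paper's actual method
# extends MCDM and couples it with GIS-based spatial analysis.

criteria_weights = {"network_connectivity": 0.40, "intermodal_access": 0.35, "hinterland_demand": 0.25}

candidates = {
    "Port A": {"network_connectivity": 0.8, "intermodal_access": 0.6, "hinterland_demand": 0.7},
    "Port B": {"network_connectivity": 0.5, "intermodal_access": 0.9, "hinterland_demand": 0.6},
}

def score(ratings: dict) -> float:
    # Weighted sum over normalized criterion ratings in [0, 1].
    return sum(criteria_weights[c] * ratings[c] for c in criteria_weights)

for name, ratings in sorted(candidates.items(), key=lambda kv: -score(kv[1])):
    print(f"{name}: {score(ratings):.3f}")
```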


2021 ◽  
Vol 22 (15) ◽  
pp. 7773
Author(s):  
Neann Mathai ◽  
Conrad Stork ◽  
Johannes Kirchmair

Experimental screening of large sets of compounds against macromolecular targets is a key strategy for identifying novel bioactivities. However, large-scale screening requires substantial experimental resources and is time-consuming and challenging. Therefore, small to medium-sized compound libraries with a high chance of producing genuine hits on an arbitrary protein of interest would be of great value to fields related to early drug discovery, in particular biochemical and cell research. Here, we present a computational approach that incorporates drug-likeness, predicted bioactivities, biological space coverage, and target novelty to generate optimized compound libraries with maximized chances of producing genuine hits for a wide range of proteins. The computational approach evaluates drug-likeness with a set of established rules, predicts bioactivities with a validated, similarity-based approach, and optimizes the composition of small sets of compounds towards maximum target coverage and novelty. We found that, in comparison to the random selection of compounds for a library, our approach generates substantially improved compound sets. Quantified as the “fitness” of compound libraries, the calculated improvements ranged from +60% (for a library of 15,000 compounds) to +184% (for a library of 1000 compounds). The best of the optimized compound libraries prepared in this work are available for download as a dataset bundle (“BonMOLière”).
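
The greedy idea of composing a library to maximize predicted target coverage can be sketched as follows; the compound names and predicted-target sets are hypothetical, and the paper's actual fitness function additionally accounts for drug-likeness and target novelty.

```python
# Greedy selection of a small compound library maximizing predicted target
# coverage. Compounds and target sets are invented for illustration.

predicted_targets = {
    "cmpd_1": {"EGFR", "HER2"},
    "cmpd_2": {"EGFR"},
    "cmpd_3": {"COX2", "CA2"},
    "cmpd_4": {"HER2", "CA2", "MAOB"},
}

def greedy_library(candidates: dict, size: int) -> list:
    selected, covered = [], set()
    pool = dict(candidates)
    for _ in range(min(size, len(pool))):
        # Pick the compound that adds the most not-yet-covered targets.
        best = max(pool, key=lambda c: len(pool[c] - covered))
        covered |= pool.pop(best)
        selected.append(best)
    return selected

print(greedy_library(predicted_targets, size=2))  # -> ['cmpd_4', 'cmpd_1']
```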


Land ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 295
Author(s):  
Yuan Gao ◽  
Anyu Zhang ◽  
Yaojie Yue ◽  
Jing’ai Wang ◽  
Peng Su

Suitable land is an important prerequisite for crop cultivation and, given the prospect of climate change, it is essential to assess such suitability to minimize crop production risks and to ensure food security. Although a variety of methods to assess suitability are available, a comprehensive, objective, and large-scale screening of the environmental variables that influence the results, and therefore the accuracy, of these methods has rarely been explored. This paper proposes an approach to the selection of such variables and establishes criteria for large-scale, big-data-based assessment of land suitability, using maize (Zea mays L.) cultivation as a case study. The predicted suitability matched the past distribution of maize with an overall accuracy of 79% and a Kappa coefficient of 0.72. Land suitability for maize is likely to decrease markedly at low latitudes and even at mid latitudes. The total area suitable for maize, globally and in most major maize-producing countries, will decrease, the decrease being particularly steep in regions currently optimally suited for maize. Compared with earlier research, the method proposed in the present paper is simple yet objective, comprehensive, and reliable for large-scale assessment. The findings highlight the necessity of adopting relevant strategies to cope with the adverse impacts of climate change.
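
Accuracy figures like those above are computed mechanically from a confusion matrix of predicted versus observed suitability classes. Below is a hedged sketch; the matrix counts are invented (the abstract does not give the underlying counts), so the output only approximates the reported 79% accuracy and Kappa of 0.72.

```python
# Overall accuracy and Cohen's kappa from a confusion matrix of suitability
# classes. The 3x3 counts below are hypothetical, for illustration only.

def accuracy_and_kappa(matrix):
    total = sum(sum(row) for row in matrix)
    observed = sum(matrix[i][i] for i in range(len(matrix))) / total
    # Expected chance agreement: product of row and column marginals per class.
    expected = sum(
        (sum(matrix[i]) / total) * (sum(row[i] for row in matrix) / total)
        for i in range(len(matrix))
    )
    return observed, (observed - expected) / (1 - expected)

acc, kappa = accuracy_and_kappa([[30, 3, 2], [4, 25, 4], [3, 4, 25]])
print(f"accuracy={acc:.2f}, kappa={kappa:.2f}")  # accuracy=0.80, kappa=0.70
```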


1988 ◽  
Vol 32 (17) ◽  
pp. 1179-1182 ◽  
Author(s):  
P. Jay Merkle ◽  
Douglas B. Beaudet ◽  
Robert C. Williges ◽  
David W. Herlong ◽  
Beverly H. Williges

This paper describes a systematic methodology for selecting the independent variables to be considered in large-scale research problems. Five specific procedures (brainstorming, prototype interface representation, feasibility/relevance analyses, structured literature reviews, and user subjective ratings) are evaluated and incorporated into an integrated strategy. This methodology is demonstrated in the context of designing the user interface for a telephone-based information inquiry system. The procedure was successful in reducing an initial set of 95 independent variables to a subset of 19 factors that warrant subsequent detailed analysis. These results are discussed in terms of a comprehensive sequential research methodology useful for investigating human factors problems.
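
One way to picture the final screening step is a simple cutoff filter over feasibility and relevance ratings; the variable names, ratings, and cutoff below are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of a screening pass: retain only independent variables
# whose mean feasibility and relevance ratings both clear a cutoff.

ratings = {
    # variable: (mean feasibility, mean relevance), each on a 1-5 scale
    "menu_depth": (4.2, 4.6),
    "voice_gender": (4.8, 2.9),
    "prompt_length": (3.9, 4.1),
    "input_timeout": (2.7, 4.4),
}

CUTOFF = 3.5  # illustrative threshold

retained = [v for v, (feas, rel) in ratings.items() if feas >= CUTOFF and rel >= CUTOFF]
print(retained)  # -> ['menu_depth', 'prompt_length']
```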


Author(s):  
Brian Bush ◽  
Laura Vimmerstedt ◽  
Jeff Gonder

Connected and automated vehicle (CAV) technologies could transform the transportation system over the coming decades, but they face vehicle- and systems-engineering challenges, as well as technological, economic, demographic, and regulatory issues. The authors have developed a system dynamics model for generating, analyzing, and screening self-consistent CAV adoption scenarios. Results can support the selection of scenarios for subsequent computationally intensive study using higher-resolution models. The potential for and barriers to large-scale adoption of CAVs have been analyzed using preliminary quantitative data and qualitative understandings of system relationships among stakeholders across the breadth of these issues. Although based on preliminary data, the results map possibilities for achieving different levels of CAV adoption and system-wide fuel use, and demonstrate the interplay of behavioral parameters, such as how consumers value their time, with financial parameters, such as operating cost. By identifying the range of possibilities, estimating the associated energy and transportation service outcomes, and facilitating screening of scenarios for more detailed analysis, this work could inform transportation planners, researchers, and regulators.
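
The authors' system dynamics model is far richer, but the flavor of an adoption-scenario feedback loop can be sketched with a Bass-style diffusion model; the coefficients and fleet size below are hypothetical, not the authors' calibration.

```python
# Bass-style diffusion loop as a toy stand-in for a system-dynamics adoption
# model: uptake grows via spontaneous adoption (p) plus imitation (q).
# All parameters are hypothetical.

def simulate_adoption(p=0.02, q=0.35, fleet=1.0, years=30):
    adopters = 0.0
    trajectory = []
    for _ in range(years):
        # New adopters per year: innovation plus word-of-mouth, limited by
        # the remaining non-adopting share of the fleet.
        new = (p + q * adopters / fleet) * (fleet - adopters)
        adopters += new
        trajectory.append(adopters / fleet)
    return trajectory

shares = simulate_adoption()
print(f"CAV fleet share after 10 years: {shares[9]:.1%}, after 30: {shares[29]:.1%}")
```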


2017 ◽  
Vol 7 (1) ◽  
pp. 100
Author(s):  
Wen-Tsung Wu ◽  
Chie-Bein Chen

This study investigates the decision-making issues in the selection of destinations for large-scale exhibitions by the cultural and creative industry. We use the Rubber Duck China Tour by the Dutch artist Florentijn Hofman as an example and adopt the analytic network process technique to evaluate destination options for the exhibition, as well as to explore the impacts of the evaluation of destination feasibilities on exhibition investment. The results show that power, a high benefit-cost ratio, first-tier cities, integration with local communities, and a rich and interesting theme are the top five factors that curators should consider when planning exhibitions. Considering the priority among cities of various tiers, first-tier cities are the most favorable, followed by fourth-tier, third-tier, and second-tier cities. The decision-making model provides curators with a reliable reference for selecting destinations for future exhibitions.
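
The priority-derivation step at the heart of the analytic network process (shared with AHP) can be sketched as the principal eigenvector of a pairwise comparison matrix, computed here by power iteration; the 3x3 judgment matrix is hypothetical, not taken from the study.

```python
# Deriving a priority vector from a pairwise comparison matrix via power
# iteration, the eigenvector step underlying ANP/AHP. Judgments are invented.

def priority_vector(matrix, iterations=50):
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iterations):
        w = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        w = [x / total for x in w]  # renormalize each step
    return w

# Hypothetical judgments: benefit-cost ratio vs. city tier vs. theme richness.
comparisons = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]
print([round(x, 3) for x in priority_vector(comparisons)])  # weights summing to 1
```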


Complexity ◽  
2018 ◽  
Vol 2018 ◽  
pp. 1-16 ◽  
Author(s):  
Yiwen Zhang ◽  
Yuanyuan Zhou ◽  
Xing Guo ◽  
Jintao Wu ◽  
Qiang He ◽  
...  

The K-means algorithm is one of the ten classic algorithms in the area of data mining and has long been studied by researchers in numerous fields. However, the value of the cluster number k in the K-means algorithm is not always easy to determine, and the selection of the initial centers is vulnerable to outliers. This paper proposes an improved K-means clustering algorithm called the covering K-means algorithm (C-K-means). The C-K-means algorithm can not only produce efficient and accurate clustering results but also self-adaptively provide a reasonable number of clusters based on the data features. It consists of two phases: initialization by the covering algorithm (CA) and the Lloyd iteration of K-means. The first phase executes the CA, which self-organizes and recognizes the number of clusters k based on the similarities in the data; it requires neither the number of clusters to be prespecified nor the initial centers to be manually selected. Therefore, it has a “blind” feature, that is, k is not preselected. The second phase performs the Lloyd iteration based on the results of the first phase. The C-K-means algorithm thus combines the advantages of CA and K-means. Experiments carried out on the Spark platform verify the good scalability of the C-K-means algorithm, showing that it can effectively solve the problem of large-scale data clustering. Extensive experiments on real data sets show that the C-K-means algorithm outperforms existing algorithms in both accuracy and efficiency, under both sequential and parallel conditions.
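
The paper's covering algorithm is not reproduced here, but the two-phase shape of the method can be sketched: a crude radius-based covering pass picks initial centers (and hence k) from the data, then standard Lloyd iteration refines them. The covering pass below is a simplified stand-in for the CA, an assumption rather than the paper's exact procedure.

```python
# Two-phase sketch: (1) a radius-based covering pass chooses initial centers,
# so k emerges from the data; (2) Lloyd iteration refines the centers.
# The covering pass is a simplified stand-in for the paper's CA.
import math
import random

def covering_init(points, radius):
    centers = []
    for p in points:
        if all(math.dist(p, c) > radius for c in centers):
            centers.append(p)  # p starts a new cover, so k grows adaptively
    return centers

def lloyd(points, centers, iters=20):
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda i: math.dist(p, centers[i]))
            clusters[i].append(p)
        # Move each center to the mean of its cluster (keep it if empty).
        centers = [
            tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers

random.seed(0)
points = [(random.gauss(cx, 0.3), random.gauss(cy, 0.3))
          for cx, cy in [(0, 0), (4, 4), (0, 4)] for _ in range(50)]
centers = lloyd(points, covering_init(points, radius=2.0))
print(f"k = {len(centers)} clusters found")
```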

