An Energy-Constrained Parameterization of Eddy Buoyancy Flux

2008 · Vol 38 (8) · pp. 1807-1819
Author(s): Paola Cessi

Abstract: A parameterization for eddy buoyancy fluxes for use in coarse-grid models is developed and tested against eddy-resolving simulations. The development is based on the assumption that the eddies are adiabatic (except near the surface) and the observation that the flux of buoyancy is affected by barotropic, depth-independent eddies. As in the previous parameterizations of Gent and McWilliams (GM) and Visbeck et al. (VMHS), the horizontal flux of a tracer is proportional to the local large-scale horizontal gradient of the tracer through a transfer coefficient assumed to be given by the product of a typical eddy velocity scale and a typical mixing length. The proposed parameterization differs from GM and VMHS in the selection of the eddy velocity scale, which is based on the kinetic energy balance of baroclinic eddies. The three parameterizations are compared to eddy-resolving computations in a variety of forcing configurations and for several sets of parameters. The VMHS and the energy balance parameterizations perform best in the tests considered here.
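The mixing-length closure described in this abstract can be sketched in a few lines. This is a minimal illustration, not the paper's actual implementation: the function name and the numerical values below are invented for demonstration, and the parameterizations differ precisely in how the eddy velocity scale is chosen.

```python
# Sketch of a downgradient mixing-length closure: the horizontal eddy flux
# of a tracer is proportional to the local large-scale gradient, with a
# transfer coefficient K = V * ell (eddy velocity scale times mixing length).
# All names and numbers here are illustrative, not from the paper.

def eddy_tracer_flux(grad_b, eddy_velocity, mixing_length):
    """Downgradient eddy flux F = -K * grad_b, with K = V * ell."""
    transfer_coefficient = eddy_velocity * mixing_length  # units: m^2/s
    return -transfer_coefficient * grad_b

# Illustrative magnitudes: V ~ 0.1 m/s, ell ~ 50 km, grad_b ~ 1e-7 s^-2
flux = eddy_tracer_flux(grad_b=1e-7, eddy_velocity=0.1, mixing_length=5e4)
```

GM, VMHS, and the energy-constrained scheme would all fit this template; they differ only in how `eddy_velocity` (and to a lesser extent `mixing_length`) is diagnosed from the large-scale state.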

1974 · Vol 55 (7) · pp. 768-778
Author(s): Ernest C. Kung, Phillip J. Smith

Currently available diagnostic studies on kinetic energy balance in the general circulation are reviewed as one of the basic scientific problems in GARP. The kinetic energy equation and several different approaches to the evaluation of energy variables are discussed in relation to real and modeled atmospheric data. Energy balance problems in the middle latitudes are examined in terms of linkages between processes from energy conversion to dissipation, balance within various systems of circulation, and interactions with sub-synoptic scale disturbances. The kinetic energy budget in large-scale disturbances and the general flow of the tropical circulation are contrasted with those in the middle latitudes. By clarifying the current essential problems in the energetics of the middle latitudes and tropics, ongoing diagnostic studies at the University of Missouri-Columbia and Purdue University are identified in the context of GARP.


2006 · Vol 36 (4) · pp. 720-738
Author(s): Andrew F. Thompson, William R. Young

Abstract: The eddy heat flux generated by the statistically equilibrated baroclinic instability of a uniform, horizontal temperature gradient is studied using a two-mode f-plane quasigeostrophic model. An overview of the dependence of the eddy diffusivity D on the bottom friction κ, the deformation radius λ, the vertical variation of the large-scale flow U, and the domain size L is provided by numerical simulations at 70 different values of the two nondimensional control parameters κλ/U and L/λ. Strong, axisymmetric, well-separated baroclinic vortices dominate both the barotropic vorticity and the temperature fields. The core radius of a single vortex is significantly larger than λ but smaller than the eddy mixing length ℓmix. On the other hand, the typical vortex separation is comparable to ℓmix. Anticyclonic vortices are hot, and cyclonic vortices are cold. The motion of a single vortex is due to barotropic advection by other distant vortices, and the eddy heat flux is due to the systematic migration of hot anticyclones northward and cold cyclones southward. These features can be explained by scaling arguments and an analysis of the statistically steady energy balance. These arguments result in a relation between D and ℓmix. Earlier scaling theories based on coupled Kolmogorovian cascades do not account for these coherent structures and are shown to be unreliable. All of the major properties of this dilute vortex gas are exponentially sensitive to the strength of the bottom drag. As the bottom drag decreases, both the vortex cores and the vortex separation become larger. Provided that ℓmix remains significantly smaller than the domain size, local mixing length arguments are applicable, and our main empirical result is ℓmix ≈ 4λ exp(0.3U/κλ).
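The closing empirical relation, ℓmix ≈ 4λ exp(0.3U/κλ), is easy to evaluate directly; written in units of the deformation radius λ it depends only on the nondimensional friction κλ/U. The sketch below (function name invented for illustration) makes the quoted exponential sensitivity to bottom drag concrete:

```python
import math

def mixing_length_over_lambda(kappa_lambda_over_U):
    """Empirical fit quoted in the abstract, in units of the deformation
    radius: ell_mix / lambda ≈ 4 * exp(0.3 * U / (kappa * lambda))."""
    return 4.0 * math.exp(0.3 / kappa_lambda_over_U)

# Halving the nondimensional bottom drag kappa*lambda/U from 0.2 to 0.1
# multiplies the mixing length by exp(1.5) ≈ 4.5 -- the exponential
# sensitivity the abstract describes.
ratio = mixing_length_over_lambda(0.1) / mixing_length_over_lambda(0.2)
```

The same exponential dependence is what makes the dilute-vortex-gas regime fragile: a modest change in κ moves ℓmix toward the domain scale, at which point the local mixing-length argument stops applying.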


1996 · Vol 76 (06) · pp. 0939-0943
Author(s): B Boneu, G Destelle

Summary: The anti-aggregating activity of five rising doses of clopidogrel has been compared to that of ticlopidine in atherosclerotic patients. The aim of this study was to determine the dose of clopidogrel which should be tested in a large scale clinical trial of secondary prevention of ischemic events in patients suffering from vascular manifestations of atherosclerosis [CAPRIE (Clopidogrel vs Aspirin in Patients at Risk of Ischemic Events) trial]. A multicenter study involving 9 haematological laboratories and 29 clinical centers was set up. One hundred and fifty ambulatory patients were randomized into one of the seven following groups: clopidogrel at doses of 10, 25, 50, 75 or 100 mg OD, ticlopidine 250 mg BID, or placebo. ADP- and collagen-induced platelet aggregation tests were performed before starting treatment and after 7 and 28 days. Bleeding time was measured on days 0 and 28. Patients were seen on days 0, 7 and 28 to check the clinical and biological tolerability of the treatment. Clopidogrel exerted a dose-related inhibition of ADP-induced platelet aggregation and bleeding time prolongation. In the presence of ADP (5 μM) this inhibition ranged between 29% and 44% in comparison to pretreatment values. The bleeding times were prolonged by 1.5 to 1.7 times. These effects were not significantly different from those produced by ticlopidine. The clinical tolerability was good or fair in 97.5% of the patients. No haematological adverse events were recorded. These results allowed the selection of 75 mg once a day to evaluate and compare the antithrombotic activity of clopidogrel to that of aspirin in the CAPRIE trial.


2021 · Vol 13 (6) · pp. 3571
Author(s): Bogusz Wiśnicki, Dorota Dybkowska-Stefek, Justyna Relisko-Rybak, Łukasz Kolanda

The paper responds to research problems related to the implementation of large-scale investment projects in waterways in Europe. As part of design and construction works, it is necessary to indicate river ports that play a major role within the European transport network as intermodal nodes. This entails a number of challenges, the cardinal one being the optimal selection of port locations, taking into account the new transport, economic, and geopolitical situation that will be brought about by modernized waterways. The aim of the paper was to present an original methodology for determining port locations for modernized waterways based on non-cost criteria, as an extended multicriteria decision-making (MCDM) method employing GIS (Geographic Information System)-based tools for spatial analysis. The methodology was designed to be applicable to the varying conditions of a river's hydroengineering structures (free-flowing river, canalized river, and canals) and adjustable to the requirements posed by intermodal supply chains. The method was applied to study the Odra River Waterway, which allowed recommendations to be formulated on applying the method to different river sections at every stage of the research process.
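The core of any MCDM ranking of this kind can be sketched as a weighted aggregation of normalized criterion scores per candidate location. The abstract does not state the paper's actual criteria, weights, or aggregation rule, so everything below (criterion names, weights, scores, the weighted-sum rule itself) is an invented illustration of the general technique:

```python
# Hypothetical weighted-sum MCDM sketch for ranking candidate port locations.
# Criteria, weights, and scores are illustrative, not taken from the paper.

def mcdm_score(criteria, weights):
    """Weighted sum of normalized criterion values (each in [0, 1])."""
    return sum(weights[name] * criteria[name] for name in weights)

candidates = {
    "Port A": {"network_access": 0.9, "intermodal_fit": 0.6, "hydro_conditions": 0.7},
    "Port B": {"network_access": 0.5, "intermodal_fit": 0.8, "hydro_conditions": 0.9},
}
weights = {"network_access": 0.5, "intermodal_fit": 0.3, "hydro_conditions": 0.2}

ranked = sorted(candidates, key=lambda c: mcdm_score(candidates[c], weights),
                reverse=True)
```

In the paper's setting, the criterion values would come from GIS-based spatial analysis (e.g. network accessibility layers), and the extended MCDM method would replace the simple weighted sum; the ranking skeleton stays the same.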


2021 · Vol 22 (15) · pp. 7773
Author(s): Neann Mathai, Conrad Stork, Johannes Kirchmair

Experimental screening of large sets of compounds against macromolecular targets is a key strategy to identify novel bioactivities. However, large-scale screening requires substantial experimental resources and is time-consuming and challenging. Therefore, small to medium-sized compound libraries with a high chance of producing genuine hits on an arbitrary protein of interest would be of great value to fields related to early drug discovery, in particular biochemical and cell research. Here, we present a computational approach that incorporates drug-likeness, predicted bioactivities, biological space coverage, and target novelty, to generate optimized compound libraries with maximized chances of producing genuine hits for a wide range of proteins. The computational approach evaluates drug-likeness with a set of established rules, predicts bioactivities with a validated, similarity-based approach, and optimizes the composition of small sets of compounds towards maximum target coverage and novelty. We found that, in comparison to the random selection of compounds for a library, our approach generates substantially improved compound sets. Quantified as the “fitness” of compound libraries, the calculated improvements ranged from +60% (for a library of 15,000 compounds) to +184% (for a library of 1000 compounds). The best of the optimized compound libraries prepared in this work are available for download as a dataset bundle (“BonMOLière”).
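The target-coverage optimization step described here can be illustrated with a greedy set-cover-style selection: repeatedly add the compound whose predicted targets contribute the most not-yet-covered targets. This is a generic sketch of that idea, not the authors' actual algorithm; compound names, target names, and the function are invented:

```python
# Hypothetical greedy sketch of optimizing a compound library for target
# coverage: at each step, pick the compound whose predicted protein targets
# add the most not-yet-covered targets. All identifiers are invented.

def greedy_library(predicted_targets, library_size):
    """predicted_targets: dict mapping compound -> set of predicted targets.
    Returns (selected compounds, set of covered targets)."""
    covered, library = set(), []
    pool = dict(predicted_targets)
    while pool and len(library) < library_size:
        best = max(pool, key=lambda c: len(pool[c] - covered))
        if not (pool[best] - covered):  # nothing new gained; stop early
            break
        library.append(best)
        covered |= pool.pop(best)
    return library, covered

compounds = {
    "cmpd1": {"T1", "T2"},
    "cmpd2": {"T2", "T3", "T4"},
    "cmpd3": {"T1"},
}
lib, cov = greedy_library(compounds, library_size=2)
```

A real pipeline would first filter the pool by drug-likeness rules and weight candidates by target novelty, as the abstract describes; the greedy coverage loop is only the final selection stage.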

