Free and Regulated Competition: Two Forms of the Competitive Principle Implementation

Author(s):  
Yuriy V. Taranukha ◽  

Although interference in the competitive mechanism is a fact of economic reality, the controversy around free and regulated competition does not subside. Rather than opposing these forms of competition to each other, the author views them as ensuring the re-creation of competitive relations at qualitatively different stages of competition development. Based on the reproduction approach, the objective nature of these forms’ existence is revealed, and the reasons for the transition from free to regulated competition are shown. Free competition is interpreted as a mode of the competitive selection mechanism that acts without regard to the positions competitors occupy in the market. Regulated competition is characterised by external interference in competitive selection, carried out by influencing the mechanism and results of the competitive process. Considering free and regulated competition as means of maintaining the competitive principle and restoring rivalry relationships, the author concludes that each form reflects the specifics of implementing this principle at different stages of the competitive system’s development. At the same time, the transition from free competition to its regulated form is interpreted as a way of resolving the internal contradiction within competition and, simultaneously, as evidence of its evolution. This interpretation is not only of theoretical significance, justifying the need to regulate competition; it also provides a methodological key for competition policy in determining the areas and boundaries of intervention in the competitive process. The current stage of competition development is characterised by an extremely high and unpredictable rate of change and requires a transition to new regulatory measures affecting the competitive principle itself. The study of this side of regulated competition appears to be the most promising. This article focuses on the subjective side of macrocompetition, represented by institutions (antitrust laws) and actors (rivals and regulators). However, the evolution of competitive conditions and competitors also changes the content of the competitive principle, which requires regulation of competition from within.

Energies ◽  
2021 ◽  
Vol 14 (6) ◽  
pp. 1738
Author(s):  
Vanessa Neves Höpner ◽  
Volmir Eugênio Wilhelm

The use of static frequency converters, which have a high switching frequency, generates voltage pulses with a high rate of change over time. In combination with cable and motor impedance, this produces repetitive overvoltages at the motor terminals, which promote partial discharges between conductors and cause degradation of the insulation of electric motors. Understanding the effects resulting from the frequency converter–electric motor interaction is essential for developing and implementing insulation systems with characteristics that support the most diverse applications, have an operating life under economically viable conditions, and promote energy efficiency. With this objective, a search was carried out in three recognized databases. After duplicate articles were eliminated, 1069 articles remained, which were systematically categorized and reviewed, yielding 481 articles discussing the causes of degradation in the insulation of electric motors powered by frequency converters. A bibliographic portfolio was then built and evaluated, comprising 230 articles that present results on the factors that can be used in estimating the life span of electric motor insulation. This portfolio took into account the historical evolution of the collected information, the authors who conducted the most research on the theme, and the relevance of the knowledge presented in the works.
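As a rough illustration of the screening step described in this review (merge records from several databases, drop duplicates, keep on-topic articles), a minimal Python sketch follows. The `Record` fields, the DOI/title matching rule, and the keyword filter are assumptions for illustration only, not the authors' actual screening criteria.

```python
# Minimal sketch of a systematic-review screening step: merge records,
# remove duplicates, and keep articles whose titles match the topic.
# Field names (doi, title) and the keyword filter are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Record:
    doi: str
    title: str

def deduplicate(records: list[Record]) -> list[Record]:
    """Remove duplicates, preferring DOI matches and falling back to titles."""
    seen_keys: set[str] = set()
    unique: list[Record] = []
    for rec in records:
        key = rec.doi.lower() if rec.doi else rec.title.strip().lower()
        if key not in seen_keys:
            seen_keys.add(key)
            unique.append(rec)
    return unique

def screen(records: list[Record], keywords: tuple[str, ...]) -> list[Record]:
    """Keep records whose title mentions any of the topic keywords."""
    return [r for r in records if any(k in r.title.lower() for k in keywords)]

if __name__ == "__main__":
    merged = deduplicate([
        Record("10.1000/x1", "Partial discharges in inverter-fed motors"),
        Record("10.1000/x1", "Partial discharges in inverter-fed motors"),  # duplicate
        Record("", "Bearing currents in PWM drives"),
    ])
    portfolio = screen(merged, ("partial discharge", "insulation", "inverter"))
    print(len(merged), len(portfolio))
```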


Critical Care ◽  
2021 ◽  
Vol 25 (1) ◽  
Author(s):  
Stefan Schmidt ◽  
Jana-Katharina Dieks ◽  
Michael Quintel ◽  
Onnen Moerer

Abstract Background The use of ultrasonography in the intensive care unit (ICU) is steadily increasing but is usually restricted to examinations of single organs or organ systems. In this study, we combine the ultrasound approaches most relevant to the ICU to design a whole-body ultrasound (WBU) protocol. Recommendations and training schemes for WBU are sparse and lack conclusive evidence. Our aim was therefore to define the range and prevalence of abnormalities detectable by WBU in order to develop a simple and fast bedside examination protocol, and to evaluate the value of routine surveillance WBU in ICU patients. Methods A protocol for focused assessments of sonographic abnormalities of the ocular, vascular, pulmonary, cardiac and abdominal systems was developed to evaluate 99 predefined sonographic entities on the day of admission and on days 3, 6, 10 and 15 of the ICU admission. The study was a prospective single-center clinical trial in 111 consecutive patients admitted to the surgical ICUs of a tertiary university hospital. Results A total of 3003 abnormalities demonstrable by sonography were detected in 1275 individual scans of organ systems and 4395 individual single-organ examinations. The rate of previously undetected abnormalities ranged from 6.4 ± 4.2 on the day of admission to 2.9 ± 1.8 on day 15. Based on the sonographic findings, intensive care therapy was altered following 45.1% of examinations. Mean examination time was 18.7 ± 3.2 min, or 1.6 minutes invested per detected abnormality. Conclusions Performing the WBU protocol led to therapy changes after 45.1% of examinations. Detected sonographic abnormalities showed a high rate of change over the course of the serial assessments, underlining the value of routine ultrasound examinations in the ICU. Trial registration The study was registered in the German Clinical Trials Register (DRKS, 7 April 2017; retrospectively registered) under the identifier DRKS00010428.


Actuators ◽  
2020 ◽  
Vol 9 (3) ◽  
pp. 77 ◽  
Author(s):  
Erik Enders ◽  
Georg Burkhard ◽  
Nathan Munzinger

Active suspension systems help to deliver superior ride comfort and can be used to resolve the objective conflict between ride comfort and road-holding. Currently, there exists no method for analyzing the influence of actuator limitations, such as maximum force and maximum rate of change, on the achievable ride comfort. This research paper presents a method that is capable of doing this. It uses model predictive control to eliminate the influence of feedback controller performance and to integrate both actuator limitations and the necessary constraints on dynamic wheel-load variation and suspension travel. Various scenarios are simulated, such as driving over a speed bump and inner-city driving, as well as driving on a country road and motorway driving, using a state-of-the-art quarter-car model parameterized for a luxury-class vehicle. The paper analyzes how comfort, or in one scenario road-holding, can be improved while respecting the actuator limitations. The results indicate that the actuator rate limitation has a strong influence on the performance of vertical vehicle dynamics control systems, and that relatively small maximum forces of around 1000 to 2000 N are sufficient to successfully reject disturbances from road irregularities, provided the actuator can supply these forces at a sufficiently high rate of change.
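To make the setup concrete, here is a minimal finite-horizon MPC sketch for a linear quarter-car with actuator force and rate-of-change limits, in the spirit of the study above. The quarter-car structure, force limit (2000 N) and the constrained quantities come from the abstract; all parameter values, the other limits, the Euler discretization, the road preview, and the cvxpy-based formulation are assumptions for illustration, not the authors' setup (the dynamic wheel-load constraint is omitted for brevity).

```python
# Illustrative quarter-car MPC with force and rate-of-change limits (assumed parameters).
import numpy as np
import cvxpy as cp

ms, mu = 450.0, 50.0          # sprung / unsprung mass [kg]
ks, cs = 30000.0, 1500.0      # suspension stiffness [N/m], damping [Ns/m]
kt = 200000.0                 # tyre stiffness [N/m]
dt, N = 0.005, 60             # step [s], horizon length

# State x = [zs, vs, zu, vu]; input u = actuator force; zr = previewed road height.
A = np.eye(4) + dt * np.array([
    [0.0, 1.0, 0.0, 0.0],
    [-ks / ms, -cs / ms, ks / ms, cs / ms],
    [0.0, 0.0, 0.0, 1.0],
    [ks / mu, cs / mu, -(ks + kt) / mu, -cs / mu],
])
B = dt * np.array([[0.0], [1.0 / ms], [0.0], [-1.0 / mu]])
E = dt * np.array([0.0, 0.0, 0.0, kt / mu])

u_max, du_max = 2000.0, 20000.0   # force limit [N], rate limit [N/s] (assumed)
travel_max = 0.08                  # suspension travel limit [m] (assumed)

def solve_mpc(x0: np.ndarray, zr_preview: np.ndarray) -> np.ndarray:
    """Return the force sequence minimizing sprung-mass acceleration (comfort)."""
    x = cp.Variable((4, N + 1))
    u = cp.Variable((1, N))
    cost = 0
    cons = [x[:, 0] == x0]
    for k in range(N):
        # Sprung-mass acceleration is linear in state and input.
        acc = (-ks * (x[0, k] - x[2, k]) - cs * (x[1, k] - x[3, k]) + u[0, k]) / ms
        cost += cp.square(acc)
        cons += [
            x[:, k + 1] == A @ x[:, k] + B @ u[:, k] + E * zr_preview[k],
            cp.abs(u[0, k]) <= u_max,
            cp.abs(x[0, k] - x[2, k]) <= travel_max,
        ]
        if k > 0:
            cons += [cp.abs(u[0, k] - u[0, k - 1]) <= du_max * dt]  # actuator rate limit
    cp.Problem(cp.Minimize(cost), cons).solve()
    return u.value

if __name__ == "__main__":
    bump = 0.05 * np.sin(np.linspace(0.0, np.pi, N)) ** 2   # simple speed-bump profile
    forces = solve_mpc(np.zeros(4), bump)
    print("peak actuator force [N]:", float(np.max(np.abs(forces))))
```

Tightening `du_max` in this sketch is the kind of experiment the paper reports on: with a low rate limit, the optimizer can no longer track fast road disturbances even when the force budget is ample.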


Landslides ◽  
2020 ◽  
Vol 17 (10) ◽  
pp. 2469-2481
Author(s):  
Judith Uwihirwe ◽  
Markus Hrachowitz ◽  
Thom A. Bogaard

Abstract Regional empirical-statistical thresholds indicating the precipitation conditions that initiate landslides are of crucial importance for landslide early warning system development. The objectives of this research were to use landslide and precipitation data in an empirical-statistical approach to (1) identify precipitation-related variables with the highest explanatory power for landslide occurrence and (2) define both trigger and trigger-cause based thresholds for landslides in Rwanda, Central-East Africa. Receiver operating characteristic (ROC) curves and area under the curve (AUC) metrics were used to test the suitability of a suite of precipitation-related explanatory variables. A Bayesian probabilistic approach, the maximum true skill statistic and the minimum radial distance were used to determine the most informative threshold levels above which landslides are highly likely to occur. The results indicated that the event precipitation volume E, the cumulative 1-day rainfall (RD1) coinciding with the day of landslide occurrence, and the 10-day antecedent precipitation are the variables with the highest discriminatory power to distinguish landslide from no-landslide conditions. The highest landslide prediction capability in terms of true positive alarms was obtained from single rainfall variables based on trigger-based thresholds. However, that predictive capability was constrained by the high rate of false positive alarms and thus by the elevated probability of neglecting the contribution of additional causal factors that lead to the occurrence of landslides and that can partly be accounted for by the antecedent precipitation indices. Further combination of different variables into trigger-cause pairs and the use of suitable thresholds in bilinear format improved the prediction capacity of the purely trigger-based thresholds.
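A minimal sketch of the threshold-selection logic described above: rank candidate rainfall thresholds with ROC/AUC and pick the level that maximises the true skill statistic (TSS = TPR − FPR) or minimises the radial distance to the ideal (0, 1) corner of the ROC plot. The synthetic rainfall data below is purely illustrative; only the metrics mirror the abstract.

```python
# ROC/AUC-based rainfall threshold selection on synthetic data (illustrative only).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
# Synthetic daily rainfall [mm]: landslide days tend to have higher totals.
rain_no_slide = rng.gamma(shape=2.0, scale=8.0, size=500)
rain_slide = rng.gamma(shape=4.0, scale=12.0, size=60)
rainfall = np.concatenate([rain_no_slide, rain_slide])
landslide = np.concatenate([np.zeros(500, dtype=int), np.ones(60, dtype=int)])

auc = roc_auc_score(landslide, rainfall)
fpr, tpr, thresholds = roc_curve(landslide, rainfall)

tss = tpr - fpr                                # maximum true skill statistic criterion
radial = np.sqrt(fpr**2 + (1.0 - tpr)**2)      # minimum radial distance criterion
best_tss = thresholds[np.argmax(tss)]
best_radial = thresholds[np.argmin(radial)]

print(f"AUC = {auc:.2f}")
print(f"TSS-optimal threshold  : {best_tss:.1f} mm/day (TSS = {tss.max():.2f})")
print(f"Radial-distance optimum: {best_radial:.1f} mm/day")
```

The same recipe extends to trigger-cause pairs by scoring a bilinear combination (e.g., event rainfall against antecedent precipitation) instead of a single variable.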


1978 ◽  
Vol 100 (2) ◽  
pp. 239-245 ◽  
Author(s):  
D. S. Weaver ◽  
F. A. Adubi ◽  
N. Kouwen

The flow-induced vibrations of a check valve with a spring damper to prevent slamming have been studied experimentally. Both prototype and two-dimensional model experiments were conducted to develop an understanding of the mechanism of self-excitation. The phenomenon is shown to be caused by the high rate of change of discharge at small angles of valve opening and by the hysteretic hydrodynamic loading resulting from fluid inertia. As the discharge-displacement characteristics of the valve depend on its geometry, modifications of this geometry were examined and one was found which eliminated the vibrations entirely. The phenomenon studied is considered to be the same as that causing vibrations in numerous other flow control devices operating at small openings.


2020 ◽  
Vol 12 (3) ◽  
pp. 88-102 ◽  
Author(s):  
Octavio M Palacios-Gimenez ◽  
Diogo Milani ◽  
Hojun Song ◽  
Dardo A Marti ◽  
Maria D López-León ◽  
...  

Abstract Satellite DNA (satDNA) is an abundant class of tandemly repeated noncoding sequences, showing a high rate of change in sequence, abundance, and physical location. However, the mechanisms promoting these changes are still controversial. The library model was put forward to explain the conservation of some satDNAs over long periods, predicting that related species share a common collection of satDNAs which mostly experience quantitative changes. Here, we tested the library model by analyzing three satDNAs in ten species of Schistocerca grasshoppers. This group represents valuable material because it diversified during the last 7.9 Myr across the American continent from the African desert locust (Schistocerca gregaria), which illuminates the direction of evolutionary changes. By combining bioinformatic and cytogenetic analyses, we tested whether the three satDNA families found in S. gregaria are also present in nine American species, and whether differential gains and/or losses have occurred in the lineages. We found that the three satDNAs are present in all species but display remarkable interspecies differences in abundance and sequence, while being highly consistent with the genus phylogeny. The number of chromosomal loci where satDNA is present was also consistent with phylogeny for two satDNA families but not for the third. Our results suggest that chance events play a prominent role in satDNA evolution. Several evolutionary trends clearly imply either massive amplifications or contractions, closely fitting the library model's prediction that changes are mostly quantitative. Finally, we found that satDNA amplifications or contractions may influence the evolution of monomer consensus sequences, with chance playing a major role in drift-like dynamics.
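For intuition, a toy sketch of the kind of quantitative comparison the library model invites: estimate the genomic abundance of each satDNA family per species from the share of sequencing reads assigned to it, and measure divergence between species-level consensus monomers as the proportion of differing sites. All counts, sequences, and family names below are invented; this is not the authors' pipeline.

```python
# Toy satDNA abundance and consensus-divergence comparison (invented data).
from itertools import combinations

# Reads assigned to each satDNA family and total reads sampled (hypothetical).
assigned_reads = {
    "S. gregaria":   {"satA": 1200, "satB": 300, "satC": 90},
    "S. piceifrons": {"satA": 400,  "satB": 950, "satC": 60},
}
total_reads = {"S. gregaria": 500_000, "S. piceifrons": 480_000}

consensus = {  # short, made-up aligned monomer fragments
    "S. gregaria":   "ACGTTAGGCTA",
    "S. piceifrons": "ACGTCAGGCTT",
}

def abundance_percent(species: str, family: str) -> float:
    """Abundance of a satDNA family as a percentage of sampled reads."""
    return 100.0 * assigned_reads[species][family] / total_reads[species]

def divergence(seq_a: str, seq_b: str) -> float:
    """Proportion of differing sites between two aligned consensus monomers."""
    assert len(seq_a) == len(seq_b), "monomers must be aligned to equal length"
    return sum(a != b for a, b in zip(seq_a, seq_b)) / len(seq_a)

for sp in assigned_reads:
    for fam in ("satA", "satB", "satC"):
        print(f"{sp:14s} {fam}: {abundance_percent(sp, fam):.3f}% of reads")
for a, b in combinations(consensus, 2):
    print(f"divergence {a} vs {b}: {divergence(consensus[a], consensus[b]):.2f}")
```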


1988 ◽  
Vol 1 (1) ◽  
pp. 4-18 ◽  
Author(s):  
Ewan Ferlie ◽  
Lorna McKee

‘The NHS needs the ability to move much more quickly’ (The Griffiths Report, 1983, p. 13)

This paper grew out of preliminary research undertaken for the research project on which we both work, entitled the Management of Change in the NHS. The project is based in the Centre for Corporate Strategy and Change at the University of Warwick, and is directed by Professor Andrew Pettigrew, who has previously undertaken a longitudinal study of strategic change in ICI (Pettigrew, 1985), and also a pilot study within the NHS which identified the implementation of strategic intent as the jugular problem confronting NHS managers. A central research problem is why some Health Districts manage to achieve a faster rate of change than others. Hence there was a need to trace the evolution of local systems through time, with the result that the historical analysis of changing is a key aspect of this research. The project is financed jointly by the NHSTA and a consortium of eight of the English Regions, and ten case study districts are included. The research design focusses on strategic service changes in both the acute and priority group sectors and incorporates developments and contractions. The choice of strategic changes was informed by a detailed review of the most recent regional strategic plans, and the review itself prompted this paper. It led us to a number of observations about the content of the change agenda. First, there is a high rate of change projected in the current strategic round, and earlier studies of incrementalist approaches to change may have to be revised (Hunter, 1980; Ham, 1981). Secondly, these regional change agendas to a great extent reflect national/central policy and the pattern is one of uniformity. These standard agendas include RAWP; the construction of a DGH network; the run-down of long-stay mental illness/handicap hospitals; cost improvements and an increase in health promotion activity. Thirdly, alongside the top-down mechanisms to secure implementation of national objectives, another mode of planning emerges which more closely approaches the concept of ‘local learning’ (Glennester et al, 1983), where organisations seek to explore possible forces for change and how they might respond. Planning here is seen as a means of ‘problem-sensing’ and awareness building (Quinn, 1980) and of getting new issues onto the agenda (Pettigrew, 1985). The paper will explore the content of the change agenda in detail and the nature of the planning process. It will discuss an alternative methodology, scenario-building, and sketch some themes which could form the basis of future health care scenarios. It argues that the standard national agenda is reaching exhaustion and that there is inadequate succession planning for ‘sunrise issues’.


mSystems ◽  
2016 ◽  
Vol 1 (3) ◽  
Author(s):  
Cristina M. Herren ◽  
Kyle C. Webert ◽  
Katherine D. McMahon

ABSTRACT A central pursuit of microbial ecology is to accurately model changes in microbial community composition in response to environmental factors. This goal requires a thorough understanding of the drivers of variability in microbial populations. However, most microbial ecology studies focus on the effects of environmental factors on mean population abundances, rather than on population variability. Here, we imposed several experimental disturbances upon periphyton communities and analyzed the variability of populations within disturbed communities compared with those in undisturbed communities. We analyzed both the bacterial and the diatom communities in the periphyton under nine different disturbance regimes, including regimes that contained multiple disturbances. We found several similarities in the responses of the two communities to disturbance; all significant treatment effects showed that populations became less variable as the result of environmental disturbances. Furthermore, multiple disturbances to these communities were often interactive, meaning that the effects of two disturbances could not have been predicted from studying single disturbances in isolation. These results suggest that environmental factors had repeatable effects on populations within microbial communities, thereby creating communities that were more similar as a result of disturbances. These experiments add to the predictive framework of microbial ecology by quantifying variability in microbial populations and by demonstrating that disturbances can place consistent constraints on the abundance of microbial populations. Although models will never be fully predictive due to stochastic forces, these results indicate that environmental stressors may increase the ability of models to capture microbial community dynamics because of their consistent effects on microbial populations. IMPORTANCE There are many reasons why microbial community composition is difficult to model. For example, the high diversity and high rate of change of these communities make it challenging to identify causes of community turnover. Furthermore, the processes that shape community composition can be either deterministic, which cause communities to converge upon similar compositions, or stochastic, which increase variability in community composition. However, modeling microbial community composition is possible only if microbes show repeatable responses to extrinsic forcing. In this study, we hypothesized that environmental stress acts as a deterministic force that shapes microbial community composition. Other studies have investigated if disturbances can alter microbial community composition, but relatively few studies ask about the repeatability of the effects of disturbances. Mechanistic models implicitly assume that communities show consistent responses to stressors; here, we define and quantify microbial variability to test this assumption.
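A small sketch of one common way to quantify the population variability discussed above: for each taxon, compute the coefficient of variation (CV) of its abundance across replicates within a treatment, then compare the mean CV between disturbed and undisturbed communities. The abundance values, replicate counts, and treatment labels below are invented, not the study's data.

```python
# Compare per-taxon coefficients of variation between treatments (synthetic data).
import numpy as np

rng = np.random.default_rng(42)
n_taxa, n_replicates = 30, 8

# Hypothetical relative abundances (rows = taxa, columns = replicate samples).
control = rng.lognormal(mean=0.0, sigma=0.8, size=(n_taxa, n_replicates))
disturbed = rng.lognormal(mean=0.0, sigma=0.5, size=(n_taxa, n_replicates))

def per_taxon_cv(abundances: np.ndarray) -> np.ndarray:
    """Coefficient of variation of each taxon across replicates."""
    return abundances.std(axis=1, ddof=1) / abundances.mean(axis=1)

cv_control = per_taxon_cv(control)
cv_disturbed = per_taxon_cv(disturbed)

print(f"mean CV, undisturbed: {cv_control.mean():.2f}")
print(f"mean CV, disturbed:   {cv_disturbed.mean():.2f}")
# The study found that disturbed communities showed *lower* population variability;
# the synthetic data here are generated so the disturbed CV comes out smaller.
```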


2020 ◽  
Author(s):  
Ondřej Mottl ◽  
John-Arvid Grytnes ◽  
Alistair W.R. Seddon ◽  
Manuel J. Steinbauer ◽  
Kuber P. Bhatta ◽  
...  

Abstract Dynamics in the rate of compositional change beyond the time of human observation are uniquely preserved in palaeoecological sequences from peat or lake sediments. Changes in sedimentation rates and sampling strategies result in an uneven distribution of time intervals within stratigraphical data, which makes assessing rates of compositional change and detecting periods with a high rate-of-change (RoC), or ‘peak-points’, challenging. Despite these known issues and their importance, and the frequent use of RoC in palaeoecology, there has been relatively little exploration of differing approaches to quantifying RoC. Here, we introduce R-Ratepol (an easy-to-use R package) that provides a robust numerical technique for detecting and summarising RoC patterns in complex multivariate time-ordered stratigraphical sequences. We compare the performance of common methods of estimating RoC using simulated pollen-stratigraphical data with known patterns of compositional change and temporal resolution. In addition, we apply our new methodology to four representative European pollen sequences. Simulated data show large differences in the success of RoC peak-point detection for known patterns, depending on the smoothing methods and dissimilarity coefficients used, as well as on the density of levels and their taxonomic richness. Building on these results, we propose a new method of binning with a moving window in combination with a generalised additive model for peak-point detection. The method shows a 22% increase in the correct detection of peak-points and a 4% lower occurrence of false positives compared to the more traditional way of peak selection by individual levels, as well as achieving a reasonable compromise between type I and type II errors. The four representative pollen sequences from Europe show that our methodological combination also performs well in detecting periods of significant compositional change, including the onset of human activity, early land-use transformation, and changes in fire frequency. Expanding the approach using R-Ratepol to the increasingly available stratigraphical data on pollen, chironomids, or diatoms will allow future palaeoecological and macroecological studies to quantify, and then attribute, major changes in biotic composition across broad spatial areas through time.
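A conceptual sketch of the basic rate-of-change idea underlying this work, written in Python rather than as a call into R-Ratepol (whose API is not reproduced here): bin a time-ordered pollen sequence into fixed age windows, average composition per bin, and divide the dissimilarity between consecutive bins by the time elapsed. The ages, counts, bin width, and choice of chord distance are illustrative assumptions; the package's smoothing and GAM-based peak-point detection are not shown.

```python
# Rate-of-change for an unevenly sampled compositional sequence via age binning.
import numpy as np

def chord_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Chord distance between two relative-abundance vectors."""
    p_norm = p / np.linalg.norm(p)
    q_norm = q / np.linalg.norm(q)
    return float(np.linalg.norm(p_norm - q_norm))

def rate_of_change(ages: np.ndarray, counts: np.ndarray, bin_width: float) -> list[tuple[float, float]]:
    """Return (mid-age between bins, RoC per unit time) for consecutive non-empty bins."""
    proportions = counts / counts.sum(axis=1, keepdims=True)
    bins = np.floor(ages / bin_width).astype(int)
    bin_ids = np.unique(bins)
    # Mean composition and mean age per occupied bin, ordered by age.
    comp = np.array([proportions[bins == b].mean(axis=0) for b in bin_ids])
    age = np.array([ages[bins == b].mean() for b in bin_ids])
    roc = []
    for i in range(1, len(bin_ids)):
        dt = age[i] - age[i - 1]
        roc.append(((age[i] + age[i - 1]) / 2.0, chord_distance(comp[i], comp[i - 1]) / dt))
    return roc

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ages = np.sort(rng.uniform(0, 8000, size=120))                  # cal yr BP, uneven spacing
    counts = rng.multinomial(300, [0.4, 0.3, 0.2, 0.1], size=120)   # 4 pollen taxa
    # Introduce an abrupt compositional shift after 4000 yr BP.
    counts[ages > 4000] = rng.multinomial(300, [0.1, 0.2, 0.3, 0.4], size=(ages > 4000).sum())
    for mid_age, value in rate_of_change(ages, counts.astype(float), bin_width=500.0):
        print(f"{mid_age:7.0f} yr BP: RoC = {value:.5f} per yr")
```

In this toy series, the bin straddling the imposed shift at ~4000 yr BP stands out with the largest RoC value, which is the kind of signal the peak-point detection step then tests formally.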

