Hosting models comparison of ASP.NET Core application

2018, Vol 8, pp. 258-262
Author(s): Kamil Zdanikowski, Beata Pańczyk

The article presents a comparison of hosting models for ASP.NET Core applications. The available hosting models were described and compared, and a performance comparison was then carried out. For each model the same test scenarios were executed, and performance was measured as the number of requests per second the host was able to process. The results show that the standard model is the least efficient, and that one of the other configurations, for example IIS with Kestrel (in-process), Kestrel alone, or HTTP.sys, can provide several times better performance than the standard model.
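The comparison methodology boils down to driving each host configuration with an identical request workload and counting completed requests per second. The following is a minimal sketch of such a probe, not the authors' harness; the endpoint URL, duration, and concurrency are assumptions.

```python
# Minimal requests-per-second probe (illustrative, not the authors' tool).
# Assumes an ASP.NET Core host is already listening at BASE_URL.
import threading
import time
import urllib.request

BASE_URL = "http://localhost:5000/"   # hypothetical endpoint
DURATION_S = 10                       # measurement window in seconds
WORKERS = 8                           # concurrent client threads

count = 0
lock = threading.Lock()
stop_at = time.time() + DURATION_S

def worker():
    global count
    while time.time() < stop_at:
        with urllib.request.urlopen(BASE_URL) as resp:
            resp.read()
        with lock:
            count += 1

threads = [threading.Thread(target=worker) for _ in range(WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{count / DURATION_S:.1f} requests/second")
```

The same probe would be run unchanged against each configuration (Kestrel alone, IIS in-process, HTTP.sys), so that only the hosting model varies between measurements.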


1988, Vol 2 (3), pp. 45-50
Author(s): Hayne Leland, Mark Rubinstein

Six months after the market crash of October 1987, we are still sifting through the debris searching for its cause. Two theories of the crash sound plausible -- one based on a market panic and the other based on large trader transactions -- though there is other evidence that is difficult to reconcile. If we are to believe the market panic theory or the Brady Commission's theory that the crash was primarily caused by a few large traders, we must strongly reject the standard model. We need to build models of financial equilibrium which are more sensitive to real life trading mechanisms, which account more realistically for the formation of expectations, and which recognize that, at any one time, there is a limited pool of investors available with the ability to evaluate stocks and take appropriate action in the market.



2020, Vol 2020 (3)
Author(s): Junichi Haruna, Hikaru Kawai

In the standard model, the weak scale is the only parameter with the dimension of mass, which means that the standard model itself cannot explain the origin of the weak scale. On the other hand, the results of recent accelerator experiments have increased the possibility that the standard model, apart from some small corrections, is an effective theory valid up to the Planck scale. From these facts, it is natural to ask whether the weak scale is determined by some dynamics originating at the Planck scale. In order to answer this question, we rely on the multiple point criticality principle as a clue and consider the classically conformal $\mathbb{Z}_2\times \mathbb{Z}_2$ invariant two-scalar model as a minimal model in which the weak scale is generated dynamically from the Planck scale. This model contains only two real scalar fields and no fermions or gauge fields. In this model, due to a Coleman–Weinberg-like mechanism, one of the scalar fields spontaneously breaks its $\mathbb{Z}_2$ symmetry with a vacuum expectation value connected with the cutoff momentum. We investigate this using the one-loop effective potential, the renormalization group, and the large-$N$ limit. We also investigate whether it is possible to reproduce the mass term and vacuum expectation value of the Higgs field by coupling this model to the standard model in the Higgs portal framework. In this case, the scalar field that does not break $\mathbb{Z}_2$ can be a candidate for dark matter, with a mass of several TeV for appropriate parameters, while the other scalar field breaks $\mathbb{Z}_2$ and has a mass of several tens of GeV. These results will be verifiable in near-future experiments.
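For orientation, the Coleman–Weinberg-type symmetry breaking mentioned above arises from the one-loop effective potential. In generic notation (not the specific two-scalar potential of the paper), a real scalar degree of freedom with field-dependent mass $m^2(\phi)$ contributes, in the $\overline{\rm MS}$ scheme,

$$ V_{\rm eff}(\phi) \;\simeq\; \frac{\lambda(\mu)}{4!}\,\phi^4 \;+\; \frac{m^4(\phi)}{64\pi^2}\left[\,\ln\frac{m^2(\phi)}{\mu^2} \;-\; \frac{3}{2}\,\right], $$

so that a loop-induced logarithm, rather than a tree-level mass term, can shift the minimum away from the origin. In a classically conformal two-scalar model the field-dependent mass would typically come from the cross coupling between the two scalars; the precise form of that coupling is an assumption here, since the abstract does not spell out the Lagrangian.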



1995, Vol 10 (10), pp. 845-852
Author(s): M. Consoli, Z. Hioki

We perform a detailed comparison of the present LEP data with the one-loop standard model predictions. It is pointed out that for mt = 174 GeV the "bulk" of the data prefers a rather large value of the Higgs mass in the range of 500–1000 GeV, in agreement with the indications from the W mass. On the other hand, to accommodate a light Higgs it is crucial to include the more problematic data for the τ FB asymmetry. We discuss further improvements on the data which are required to obtain a firm conclusion.



clause whereby it was a condition of acceptance that goods would be charged at prices ruling at the date of delivery. The defendant buyers replied on 27 May 1969, giving an order with differences from the sellers’ quotation and with their own terms and conditions, which had no price variation clause. The order had a tear-off acknowledgment for signature and return which accepted the order ‘on the terms and conditions thereon’. On 5 June 1969, the sellers, after acknowledging receipt of the order on 4 June, returned the acknowledgment form duly completed with a covering letter stating that delivery was to be ‘in accordance with our revised quotation of 23 May for delivery in ... March/April 1970’. The machine was ready by about September 1970, but the buyers could not accept delivery until November 1970. The sellers invoked the price increase clause and claimed £2,892 for the increase due to the rise in costs between 27 May 1969 and 1 April 1970, when the machine should have been delivered.

Thesiger J gave judgment for the sellers for £2,892 and interest. The buyers appealed. The Court of Appeal unanimously reversed the first instance decision, all three judges feeling that the conclusive act was the sellers’ return of the tear-off acknowledgment slip. However, the reasons given by the judges for arriving at their decision differed. Bridge LJ and Lawton LJ broadly applied the standard model of ‘offer – counter-offer – acceptance’ to this ‘battle of the forms’, although both of them were clearly aware of the difficulties that this would cause. Lord Denning’s approach, not untypically, ranged much more widely. Unlike the other two judges, who can be seen to adopt a broadly ‘last shot’ theory (that is, that the ‘battle’ is won by the person who submits their terms last), Lord Denning was prepared to countenance a number of other possibilities. The following passages serve to indicate these divergences in approach:

Butler Machine Tool Co Ltd v Ex-Cell-O Corpn (England) Ltd [1979] 1 WLR 401, CA, p 402

1995, pp. 118-124


Sensors, 2019, Vol 19 (24), pp. 5434
Author(s): Hani H. Tawfik, Karim Allidina, Frederic Nabki, Mourad N. El-Gamal

This paper presents a novel dual-level capacitive microcantilever-based thermal detector implemented in the commercial surface-micromachined PolyMUMPs technology. The proposed design is implemented side-by-side with four different single-level designs to enable a design-to-design performance comparison. The dual-level design exhibits a rate of capacitance change per degree Celsius that is over three times higher than that of the single-level designs and has a base capacitance that is more than twice as large. These improvements are achieved because the dual-level architecture allows a 100% electrode-to-detector area ratio, while single-level designs are shown to suffer from an inherent trade-off between sensitivity and base capacitance. In single-level designs, either the number of bimorph beams or the capacitance electrode area can be increased for a given sensor area. The former is needed for a longer effective bimorph length and higher thermomechanical sensitivity (i.e., larger tilting angles per degree Celsius), while the latter is desired to relax the read-out integrated-circuit requirements. This thermomechanical-response-to-initial-capacitance trade-off is mitigated by the dual-level design, which dedicates one structural layer to serve as the upper electrode of the detector, while the other layer contains as many bimorph beams as desired, independently of the former’s area.
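To see why both base capacitance and its temperature sensitivity matter, a parallel-plate estimate is enough: base capacitance scales as electrode area over gap, and the change per degree follows from how much the bimorph tilt closes the gap. The numbers below are illustrative assumptions, not dimensions from the paper.

```python
# Back-of-the-envelope parallel-plate model of a capacitive thermal detector.
# All dimensions and the gap-change rate are illustrative assumptions,
# not values taken from the paper.
EPS0 = 8.854e-12            # F/m, vacuum permittivity

area = (200e-6) ** 2        # 200 um x 200 um electrode (assumed)
gap = 2.0e-6                # 2 um nominal electrode gap (assumed)
dgap_per_degC = 5e-9        # effective gap change per deg C from bimorph tilt (assumed)

c_base = EPS0 * area / gap
c_warm = EPS0 * area / (gap - dgap_per_degC)

print(f"base capacitance: {c_base * 1e15:.1f} fF")
print(f"capacitance change: {(c_warm - c_base) * 1e15:.3f} fF per deg C")
```

With these assumed numbers the base capacitance is on the order of a couple of hundred femtofarads and the response a fraction of a femtofarad per degree; the dual-level architecture raises both at once because electrode area no longer competes with bimorph count for the same layer.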



2008, Vol 23 (21), pp. 3296-3299
Author(s): C. S. Kim, Sechul Oh, Yeo Woong Yoon

Due to the re-parametrization invariance of decay amplitudes, any single new physics (NP) amplitude arising through either the electroweak penguin (EWP) or the color-suppressed tree amplitude can be embedded simultaneously into both the color-suppressed tree and the EWP contribution in B → Kπ decays. We present a systematic method to extract each standard model (SM)-like hadronic parameter as well as the new physics parameters in an analytic way, so that one can pinpoint them once experimental data are given. Using the currently available experimental data for the B → Kπ modes, we find two possible analytic results: one showing a large SM-like color-suppressed tree contribution and the other showing a large SM-like EWP contribution. The magnitude of the NP amplitude and its weak phase are quite large. For instance, we find |PNP/P′| = 0.39 ± 0.13, φNP = 92° ± 15° and δNP = 7° ± 26°, which are the ratio of the NP-to-SM contribution and the weak and strong phases of the NP amplitude, respectively. We also investigate the dependence of the NP contribution on the weak phase γ and on the mixing-induced CP asymmetry of B0 → KSπ0.





2015, Vol 77 (22)
Author(s): Sayed Muchallil, Fitri Arnia, Khairul Munadi, Fardian Fardian

Image denoising plays an important role in image processing. It is also part of the pre-processing stage of a complete binarization procedure, which consists of pre-processing, thresholding, and post-processing. Our previous research confirmed that Discrete Cosine Transform (DCT)-based filtering, used as a new pre-processing step, improved the performance of the binarization output in terms of recall and precision. This research compares three classical denoising methods, Gaussian, mean, and median filtering, with the DCT-based filtering. The noisy ancient document images are filtered using these classical filtering methods, and the outputs are used as the input for the Otsu, Niblack, Sauvola and NICK binarization methods. The resulting binary images from the three classical methods are then compared with those of the DCT-based filtering. The performance of all denoising algorithms is evaluated by calculating the recall and precision of the resulting binary images. The result of this research is that the DCT-based filtering yields the highest recall and precision compared to the other methods.
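The classical-filter part of the pipeline and the recall/precision evaluation can be sketched in a few lines. This is an illustrative sketch only: it uses OpenCV's stock filters and Otsu thresholding, the file names are assumptions, and the paper's DCT-based filter and the Niblack/Sauvola/NICK variants are not shown.

```python
# Sketch of the classical-filter + Otsu stages with recall/precision scoring.
# Illustrative only; not the paper's DCT-based filter implementation.
import cv2
import numpy as np

noisy = cv2.imread("ancient_document.png", cv2.IMREAD_GRAYSCALE)  # assumed input
truth = cv2.imread("ground_truth.png", cv2.IMREAD_GRAYSCALE)      # assumed ground truth

filters = {
    "gaussian": lambda im: cv2.GaussianBlur(im, (5, 5), 0),
    "mean":     lambda im: cv2.blur(im, (5, 5)),
    "median":   lambda im: cv2.medianBlur(im, 5),
}

def recall_precision(binary, reference):
    # Text pixels are assumed to be black (0) in both images.
    pred = binary == 0
    ref = reference == 0
    tp = np.logical_and(pred, ref).sum()
    recall = tp / max(ref.sum(), 1)
    precision = tp / max(pred.sum(), 1)
    return recall, precision

for name, apply_filter in filters.items():
    denoised = apply_filter(noisy)
    # Otsu global thresholding as the binarization stage
    _, binary = cv2.threshold(denoised, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    r, p = recall_precision(binary, truth)
    print(f"{name}: recall={r:.3f} precision={p:.3f}")
```

Swapping a filter entry for a DCT-domain coefficient-thresholding step, and Otsu for the local methods, reproduces the structure of the comparison described in the abstract.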



2016, Vol 31 (16), pp. 1630015
Author(s): Robert Delbourgo

Local events are characterized by “where”, “when” and “what”. Just as (bosonic) spacetime forms the backdrop for location and time, (fermionic) property space can serve as the backdrop for the attributes of a system. With such a scenario I shall describe a scheme that is capable of unifying gravitation and the other forces of nature. The generalized metric contains the curvature of spacetime and property separately, with the gauge fields linking the bosonic and fermionic arenas. The super-Ricci scalar can then automatically yield the spacetime Lagrangian of gravitation and the Standard Model (plus a cosmological constant) upon integration over property coordinates.



2014, Vol 971-973, pp. 1680-1683
Author(s): Miao He, Li Yu Tian, Xiong Jun Fu, Yun Chen Jiang

In the wideband radar situation, the target is range-spread and the echoes from all of its scattering points can be considered as a pulse train with random parameters. This paper analyzed the wideband radar target and built the related model. It then gave two methods of target detection, one being Energy Accumulation and the other the IPTRP, and presented simulated performance curves for both. The simulations showed that the IPTRP improved performance by more than 3 dB at the same SNR.
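The Energy Accumulation half of the comparison is the simpler of the two and can be illustrated directly: sum the squared magnitudes of the echo samples across the range cells occupied by the spread target and compare against a noise-only threshold. The sketch below is generic, not the paper's IPTRP method, and the signal model and parameters are assumptions.

```python
# Illustrative energy-accumulation detector for a range-spread target.
# Generic sketch with assumed parameters; not the paper's IPTRP method.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)

N_CELLS = 64          # range cells covered by the spread target (assumed)
SNR_DB = 3.0          # per-cell signal-to-noise ratio (assumed)
PFA = 1e-3            # desired false-alarm probability (assumed)

# Unit-power complex Gaussian noise in each range cell
noise = (rng.standard_normal(N_CELLS) + 1j * rng.standard_normal(N_CELLS)) / np.sqrt(2)

# Scattering-point returns modelled as random complex amplitudes
amp = 10 ** (SNR_DB / 20)
signal = amp * (rng.standard_normal(N_CELLS) + 1j * rng.standard_normal(N_CELLS)) / np.sqrt(2)

echo = signal + noise

# Under noise only, 2 * sum(|x|^2) follows a chi-squared law with 2*N_CELLS
# degrees of freedom, which fixes the detection threshold for the chosen PFA.
threshold = chi2.ppf(1 - PFA, df=2 * N_CELLS) / 2

energy = np.sum(np.abs(echo) ** 2)
print(f"energy = {energy:.1f}, threshold = {threshold:.1f}, detect = {energy > threshold}")
```

A method such as the IPTRP would additionally exploit the structure of the pulse train with random parameters rather than treating every cell identically, which is presumably where the reported gain of more than 3 dB comes from.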


