Projection of near-future anthropogenic PM2.5 over India using statistical approach

2018 ◽  
Vol 186 ◽  
pp. 178-188 ◽  
Author(s):  
Abhishek Upadhyay ◽  
Sagnik Dey ◽  
Pramila Goyal ◽  
S.K. Dash
1998 ◽  
Vol 38 (6) ◽  
pp. 209-217 ◽  
Author(s):  
Jianhua Lei ◽  
Sveinung Sægrov

This paper demonstrates a statistical approach for describing failures and lifetimes of water mains. The approach is based on pipe inventory data and maintenance data registered in a database, and consists of data pre-processing followed by statistical analysis. Two classes of statistical models are applied, namely counting process models and lifetime models. With lifetime models, one can estimate the probability that a pipe will fail within a given time horizon. With counting process models, one can detect a deteriorating (or improving) trend over time in a group of “identical” pipes and estimate their rates of occurrence of failure (ROCOF). The case study with the database of Trondheim municipality (Norway) demonstrates the applicability of the statistical approach and leads to the following results: 1) In the past 20 years, Trondheim municipality has experienced approximately 250 to 300 failures per year; however, the number of failures per year will increase significantly in the near future unless better maintenance practice is implemented now. 2) Unprotected ductile iron pipes have a higher probability of failure than pipes of other materials; the average lifetime of an unprotected ductile iron pipe is approximately 30 to 40 years shorter than that of a cast iron pipe. 3) Pipes installed between 1963 and 1975 are the most likely to fail in the future. 4) The age of a pipe does not play a significant role for its remaining lifetime. 5) After 2 to 3 failures, a pipe enters a fast-failure stage (i.e., short intervals between subsequent failures).
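The two model classes named here lend themselves to a compact illustration. Below is a minimal sketch in Python on synthetic data (the cohort size, Weibull parameters and failure times are assumptions, not Trondheim's records): a Weibull lifetime model fitted to first-failure ages, giving the conditional probability that a surviving pipe fails within a horizon, followed by a Laplace trend test, a standard diagnostic for a changing ROCOF in a counting process.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# --- Lifetime model: ages (years) at first failure for a synthetic cohort ---
first_failure_ages = rng.weibull(2.0, size=200) * 60.0

shape, loc, scale = stats.weibull_min.fit(first_failure_ages, floc=0.0)
age_now, horizon = 40.0, 10.0  # current pipe age and forecast horizon, years

# P(fail within horizon | survived to age_now), from the fitted Weibull
surv = stats.weibull_min.sf([age_now, age_now + horizon], shape, loc, scale)
p_fail = 1.0 - surv[1] / surv[0]
print(f"Weibull shape={shape:.2f}, scale={scale:.1f} yr; "
      f"P(failure within {horizon:.0f} yr at age {age_now:.0f}) = {p_fail:.2%}")

# --- Counting process: Laplace test for a trend in the ROCOF ---
# Pooled failure times (years since installation) of a pipe group,
# synthetically skewed towards late failures.
T = 30.0  # observation window
failure_times = np.sort(rng.uniform(0, T, size=50) ** 0.8 * T ** 0.2)

n = len(failure_times)
u = (failure_times.mean() - T / 2) / (T / np.sqrt(12 * n))
print(f"Laplace statistic u = {u:.2f}")

A Laplace statistic well above zero indicates failures clustering late in the observation window, i.e. the deteriorating trend (increasing ROCOF) that motivates better maintenance.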


2014 ◽  
Vol 10 (S306) ◽  
pp. 359-361 ◽  
Author(s):  
Jesús Varela ◽  
David Cristóbal-Hornillos ◽  
Javier Cenarro ◽  
Alessandro Ederoclite ◽ 
David Muniesa ◽  
...  

Abstract The success of many cosmological surveys in the near future rests heavily on the quality of their photometry. The Javalambre-PAU Astrophysical Survey (J-PAS) will image more than 8500 deg2 of the Northern Hemisphere sky in 54 narrow + 2 medium/broad optical bands, plus the Sloan u, g and r bands. The main goal of J-PAS is to provide the best constraints on the cosmological parameters before the arrival of projects like Euclid or LSST. To achieve this goal, the uncertainty in photo-z cannot be larger than 0.3% for several million galaxies, and this is highly dependent on the photometric accuracy. The photometric calibration of J-PAS will involve the intensive processing of huge amounts of data, and the use of statistical tools is unavoidable. Here, we present some of the key steps in the photometric calibration of J-PAS that will demand a suitable statistical approach.
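As an illustration of the kind of statistical tool involved, the following sketch performs relative zero-point calibration from repeated star observations by alternating least squares. This is a generic textbook technique on synthetic numbers, not the actual J-PAS pipeline; the star count, exposure count and noise levels are assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_stars, n_exposures = 500, 20

true_mag = rng.uniform(16, 20, n_stars)        # synthetic star magnitudes
true_zp = rng.normal(0.0, 0.05, n_exposures)   # per-exposure zero-point offsets
noise = rng.normal(0.0, 0.01, (n_exposures, n_stars))
observed = true_mag[None, :] + true_zp[:, None] + noise

# Alternate between estimating star magnitudes and zero-points; the global
# offset is unconstrained by relative calibration, so pin its mean to zero.
zp = np.zeros(n_exposures)
for _ in range(50):
    star = (observed - zp[:, None]).mean(axis=0)
    zp = (observed - star[None, :]).mean(axis=1)
    zp -= zp.mean()

rms = np.sqrt(np.mean((zp - (true_zp - true_zp.mean())) ** 2))
print(f"zero-point recovery RMS = {rms * 1000:.2f} mmag")

With many stars per exposure, the per-exposure zero-points are recovered to well below the per-measurement noise, which is what makes sub-percent photometric (and hence photo-z) accuracy statistically reachable.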


2021 ◽  
Vol 2021 (3) ◽  
Author(s):  
Guo-yuan Huang ◽  
Shun Zhou

Abstract In the near future, neutrinoless double-beta (0νββ) decay experiments will hopefully reach a sensitivity of a few meV to the effective neutrino mass |mββ|. In this paper, we tentatively examine the sensitivity of future 0νββ-decay experiments to neutrino masses and Majorana CP phases by following the Bayesian statistical approach. For experimental setups corresponding to a sensitivity of |mββ| ≃ 1 meV, the null observation of 0νββ decays in the case of normal neutrino mass ordering leads to a very competitive bound on the lightest neutrino mass m1. Namely, the 95% credible interval in the Bayesian approach turns out to be 1.6 meV ≲ m1 ≲ 7.3 meV or 0.3 meV ≲ m1 ≲ 5.6 meV when the uniform prior on m1/eV or on log10(m1/eV) is adopted. Moreover, one of the two Majorana CP phases is strictly constrained, i.e., 140° ≲ ρ ≲ 220°, for both choices of prior on m1. In contrast, if a relatively worse experimental sensitivity of |mββ| ≃ 10 meV is assumed, the constraint on the lightest neutrino mass becomes 0.6 meV ≲ m1 ≲ 26 meV or 0 ≲ m1 ≲ 6.1 meV, while the two Majorana CP phases are essentially unconstrained. In the same statistical framework, the prospects for determining the neutrino mass ordering and for discriminating between the Majorana and Dirac nature of massive neutrinos in 0νββ-decay experiments are also discussed. Given an experimental sensitivity of |mββ| ≃ 10 meV (or 1 meV), the strength of evidence for excluding the Majorana nature under the null observation of 0νββ decays is found to be inconclusive (or strong), no matter which of the two priors on m1 is taken.
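The Bayesian logic can be caricatured in a few lines of Monte Carlo: sample m1 under each prior together with uniform Majorana phases, compute |mββ| for normal ordering, and keep only the samples compatible with a null observation. The oscillation parameters below are approximate global-fit central values, and the prior ranges and phase convention are assumptions, so the resulting intervals only roughly track the quoted ones.

import numpy as np

rng = np.random.default_rng(1)
N = 1_000_000

# Approximate oscillation parameters for normal ordering (eV^2 and sin^2 theta)
dm21, dm31 = 7.4e-5, 2.5e-3
s12sq, s13sq = 0.31, 0.022
c12sq, c13sq = 1 - s12sq, 1 - s13sq

for prior in ("uniform on m1/eV", "uniform on log10(m1/eV)"):
    if prior == "uniform on m1/eV":
        m1 = rng.uniform(0.0, 0.1, N)          # flat on m1 in [0, 0.1] eV
    else:
        m1 = 10 ** rng.uniform(-5, -1, N)      # flat on log10(m1/eV)
    m2 = np.sqrt(m1**2 + dm21)
    m3 = np.sqrt(m1**2 + dm31)
    rho = rng.uniform(0, 2 * np.pi, N)         # Majorana CP phases
    sigma = rng.uniform(0, 2 * np.pi, N)

    mbb = np.abs(c12sq * c13sq * m1
                 + s12sq * c13sq * m2 * np.exp(1j * rho)
                 + s13sq * m3 * np.exp(1j * sigma))

    posterior = m1[mbb < 1e-3]   # null observation at 1 meV sensitivity
    lo, hi = np.percentile(posterior, [2.5, 97.5])
    print(f"{prior}: 95% interval {lo * 1e3:.1f} - {hi * 1e3:.1f} meV")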


1966 ◽  
Vol 24 ◽  
pp. 116-117 ◽ 
Author(s):  
P.-I. Eriksson

Nowadays more and more of the reductions of astronomical data are made with electronic computers. As we in Uppsala have an IBM 1620 at the University, we have enlisted its help for the reduction of spectrophotometric data. Here I will briefly explain how we use it now and how we want to use it in the near future.


Author(s):  
W.J. de Ruijter ◽  
P. Rez ◽  
David J. Smith

There is growing interest in the on-line use of computers in high-resolution electron microscopy (HREM), which should reduce the demands on highly skilled operators and thereby extend the range of applications of the technique. An on-line computer could obviously perform routine procedures currently done by hand, or else facilitate automation of various restoration, reconstruction and enhancement techniques. These techniques are slow and cumbersome at present because of the need for recording micrographs and processing them off-line. In low-resolution microscopy (most biological applications), the primary incentive for automation and computer image analysis is to create a user-friendly instrument with standard programmed procedures. In HREM (materials research), computer image analysis should lead to better utilization of the microscope. Instrumental developments (improved lens design and higher accelerating voltages) have improved the interpretable resolution to the level of atomic dimensions (approximately 1.6 Å), and on-line image analysis at the full instrumental resolution should become feasible in the near future.


2019 ◽  
Vol 63 (6) ◽  
pp. 757-771 ◽  
Author(s):  
Claire Francastel ◽  
Frédérique Magdinier

Abstract Despite the tremendous progress made in recent years in assembling the human genome, tandemly repeated DNA elements remain poorly characterized. These sequences account for the vast majority of methylated sites in the human genome, and their methylated state is necessary for this repetitive DNA to function properly and to maintain genome integrity. Furthermore, recent advances highlight the emerging role of these sequences in regulating the functions of the human genome and its variability during evolution, among individuals, or in disease susceptibility. In addition, a number of inherited rare diseases are directly linked to the alteration of some of these repetitive DNA sequences, either through changes in the organization or size of the tandem repeat arrays or through mutations in genes encoding chromatin modifiers involved in the epigenetic regulation of these elements. Although largely overlooked so far in the functional annotation of the human genome, satellite elements play key roles in its architectural and topological organization. This includes functions as boundary elements delimiting functional domains or in the assembly of repressive nuclear compartments, with local or distal impact on gene expression. Thus, the consideration of satellite repeat organization and its associated epigenetic landmarks, including DNA methylation (DNAme), will become unavoidable in the near future to fully decipher human phenotypes and associated diseases.


1969 ◽  
Vol 8 (02) ◽  
pp. 84-90 ◽  
Author(s):  
A. W. Pratt ◽  
M. Pacak

The system for the identification and subsequent transformation of terminal morphemes in medical English is a part of the information system for processing pathology data which was developed at the National Institutes of Health. The recognition and transformation of terminal morphemes is restricted to classes of adjectivals including the -ING and -ED forms, nominals and homographic adjective/noun forms. The adjective-to-noun and noun-to-noun transforms consist basically of a set of substitutions of adjectival and certain nominal suffixes by a set of suffixes which indicate the corresponding nominal form(s). The adjectival/nominal suffix has a polymorphosyntactic transformational function if it has the property of being transformed into more than one nominalizing suffix (e.g., the adjectival suffix -IC can be substituted by a set of nominalizing suffixes -Ø, -A, -E, -Y, -IS, -IA, -ICS); the adjectival suffix has a monomorphosyntactic transformational property if there is only one admissible transform (e.g., -CIC → -X).

The morphological segmentation and the subsequent transformations are based on the following principles:

a. The word form is segmented according to the principle of the »double consonant cut«, i.e., terminal characters following the last set of double consonants are analyzed and treated as a potential suffix. For practical purposes, only terminal suffixes of a maximum length of four have been analyzed.

b. The largest segment of a word form common to both adjective and noun stems (or to both noun stems) is retained as a word base for transformational operations, and the non-identical segment is considered to be a »suffix«.

The backward right-to-left character search is initiated by the identification of the terminal grapheme of the given word form and is extended to certain admissible sequences of immediately preceding graphemes. The nodes, which represent fixed sequences of graphemes, are labeled according to their recognition and/or transformation properties. The tree nodes are divided into two groups:

a. productive or activated

b. non-productive or non-activated

The productive (activated) nodes are sequences of sets of graphemes which possess certain properties, such as the indication of part-of-speech class membership, the transformation properties, or both. The non-productive (non-activated) nodes have the function of connectors, i.e., they specify the admissible path to the productive nodes.

The computer program for the identification and transformation of the terminal morphemes is open-ended and is already operational. It will be extended to other sub-fields of medicine in the near future.
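A minimal sketch of the search structure described here: a trie keyed on reversed suffixes, with productive nodes carrying substitution rules and non-productive nodes acting as connectors. The rule table is a toy fragment; the -IC and -CIC entries come from the abstract, the -OUS entry is a hypothetical addition, and the full NIH tables are not reproduced.

PRODUCTIVE_RULES = {
    # adjectival suffix -> admissible nominalizing substitutions
    "IC": ["", "A", "E", "Y", "IS", "IA", "ICS"],  # polymorphosyntactic
    "CIC": ["X"],                                  # monomorphosyntactic
    "OUS": ["A", "UM", "US"],                      # hypothetical extra rule
}

def build_trie(rules):
    """Trie keyed on reversed suffixes; a node is productive iff it has rules."""
    root = {}
    for suffix, subs in rules.items():
        node = root
        for ch in reversed(suffix):
            node = node.setdefault(ch, {})  # connector (non-productive) node
        node["$rules"] = subs               # mark this node as productive
    return root

def transform(word, trie, max_suffix=4):
    """Backward right-to-left search; the longest productive suffix wins."""
    word = word.upper()
    node, best = trie, None
    for depth, ch in enumerate(reversed(word[-max_suffix:]), start=1):
        if ch not in node:
            break
        node = node[ch]
        if "$rules" in node:
            best = (depth, node["$rules"])
    if best is None:
        return [word]  # no admissible transform
    depth, subs = best
    return [word[:-depth] + s for s in subs]

trie = build_trie(PRODUCTIVE_RULES)
print(transform("anemic", trie))    # 'ANEMIA' among the -IC transforms
print(transform("thoracic", trie))  # -CIC -> -X gives 'THORAX'

The maximum suffix length of four mirrors the constraint stated above; the »double consonant cut« segmentation itself is omitted for brevity.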


Author(s):  
M. V. Noskov ◽  
M. V. Somova ◽  
I. M. Fedotova

The article proposes a model for forecasting the success of a student's learning. The model is a continuous-time Markov process of the “death and reproduction” (birth-death) type. The parameters of the process are the intensities of obtaining and of assimilating information, where the intensity of assimilating information takes into account the student's attitude to the subject being studied. By applying the model, one can determine, for each student, the probability of reaching a given level of mastery of the material being studied in the near future. Thus, given an automated information system at the university, the model can serve as an element of a decision-support system for all participants in the educational process. The examples given in the article are the results of an experiment conducted at the Institute of Space and Information Technologies of Siberian Federal University under conditions of blended learning, that is, when classroom work is accompanied by independent work with electronic resources.
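A minimal sketch of such a birth-death forecast, assuming a small set of mastery levels and constant intensities (all numbers are illustrative, not the article's data): the state distribution at the forecast horizon follows from the matrix exponential of the generator.

import numpy as np
from scipy.linalg import expm

K = 5                 # mastery levels 0..K
lam, mu = 0.8, 0.3    # intensities of obtaining / losing information, per week

# Generator matrix Q of the continuous-time birth-death chain
Q = np.zeros((K + 1, K + 1))
for i in range(K + 1):
    if i < K:
        Q[i, i + 1] = lam   # "reproduction": acquiring material
    if i > 0:
        Q[i, i - 1] = mu    # "death": forgetting material
    Q[i, i] = -Q[i].sum()

t = 6.0                     # forecast horizon, weeks
p0 = np.zeros(K + 1)
p0[0] = 1.0                 # student starts with no mastery
p_t = p0 @ expm(Q * t)      # state distribution after t weeks

target = 4                  # required mastery level
print(f"P(mastery >= level {target} after {t:.0f} weeks) = {p_t[target:].sum():.2%}")

In a decision-support setting, the two intensities would be estimated per student from the logs of the university's automated information system and the forecast recomputed as new activity data arrive.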

