fixed number: Recently Published Documents

Total documents: 1209 (five years: 378)
H-index: 41 (five years: 7)

Author(s): Tatsuya Hiraoka, Sho Takase, Kei Uchiumi, Atsushi Keyaki, Naoaki Okazaki

We propose a method that attends to high-order relations among latent states in order to improve conventional HMMs, which, because they assume the Markov property, consider only the most recent latent state. To capture these high-order relations, we apply an RNN to each sequence of latent states, since an RNN can summarize an arbitrary-length sequence in its cell, a fixed-size vector. However, the naive approach of feeding every latent sequence to the RNN explicitly is intractable because the search space of latent states grows combinatorially. We therefore modify the RNN to represent the history of latent states, from the beginning of the sequence up to the current state, with a fixed number of RNN cells, one for each possible state. We conduct experiments on unsupervised POS tagging and on synthetic datasets. The results show that the proposed method achieves better performance than previous methods, and the results on the synthetic dataset indicate that it can capture the high-order relations.
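The following is a minimal sketch, not the authors' implementation, of the core idea: keep one fixed-size RNN cell per possible latent state so that the history of latent states can be summarized without enumerating all latent sequences. The toy parameters, the Viterbi-style maximization, and all names (rnn_cell, W_h, W_s, V, emit) are assumptions for illustration only.

```python
# Minimal sketch: an HMM-style recursion in which each of the K latent states
# keeps one RNN cell vector summarizing the history of states leading to it,
# so only K vectors are maintained per time step.
import numpy as np

rng = np.random.default_rng(0)
K, D, T = 5, 16, 8            # number of states, cell size, sequence length (toy values)

# Toy parameters: a simple tanh RNN cell, transition scores conditioned on the
# cell, and emission scores; all are hypothetical stand-ins.
W_h = rng.normal(scale=0.1, size=(D, D))
W_s = rng.normal(scale=0.1, size=(D, K))   # state embedding fed into the cell
b = np.zeros(D)
V = rng.normal(scale=0.1, size=(D, K))     # maps a cell vector to transition scores
emit = rng.normal(size=(T, K))             # toy emission log-scores for an observed sequence

def rnn_cell(h, state_id):
    """Update the fixed-size cell with the identity of the next latent state."""
    x = np.zeros(K)
    x[state_id] = 1.0
    return np.tanh(W_h @ h + W_s @ x + b)

# Forward pass: alpha[k] is the best log-score of any state history ending in k,
# h[k] is the RNN cell summarizing that history (Viterbi-style approximation).
alpha = emit[0].copy()
h = np.stack([rnn_cell(np.zeros(D), k) for k in range(K)])

for t in range(1, T):
    trans = h @ V                               # (K, K): scores from each previous state
    scores = alpha[:, None] + trans + emit[t]   # candidate scores for every (prev, next)
    best_prev = scores.argmax(axis=0)
    alpha = scores.max(axis=0)
    h = np.stack([rnn_cell(h[best_prev[k]], k) for k in range(K)])

print("approximate best path log-score:", alpha.max())
```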


Author(s): Rajesh Singh, Pritee Singh, Kailash Kale

Reliability is an essential characteristic of software. Here, software reliability is assessed by assuming Poisson-type occurrence of software failures with a failure intensity belonging to the one-parameter Rayleigh class with scale parameter η_1, and by assuming that the software contains a fixed number of inherent faults, η_0. The scale parameter η_1 of the Rayleigh density and the fixed number of inherent faults η_0 are the parameters of interest. The failure intensity and the mean failure function of this Poisson-Type Rayleigh Class (PTRC) Software Reliability Growth Model (SRGM) have been studied. Estimates of the above parameters can be obtained by the maximum likelihood method. A Bayesian technique is used to obtain estimates of η_0 and η_1 when prior knowledge about these parameters is available; here that knowledge is expressed through non-informative priors for both parameters. The proposed Bayes estimators are compared with the corresponding maximum likelihood estimators on the basis of risk efficiency under squared error loss, computed by Monte Carlo simulation. It is seen that both proposed Bayes estimators can be preferred over the corresponding MLEs for suitable choices of execution time.
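As a rough illustration of how risk efficiency under squared error loss can be estimated by Monte Carlo simulation, the sketch below compares an MLE with a Bayes estimator on a simple stand-in Poisson count model. It does not reproduce the PTRC SRGM likelihood; the Gamma prior, its hyperparameters, and the definition of risk efficiency as the MLE-to-Bayes risk ratio are assumptions.

```python
# Illustrative sketch of a Monte Carlo risk-efficiency comparison under squared
# error loss. A simple Poisson count model stands in for the PTRC likelihood,
# purely to show how RE = risk(MLE) / risk(Bayes) is estimated by simulation.
import numpy as np

rng = np.random.default_rng(1)
true_mean, n, reps = 4.0, 20, 5000

sq_err_mle, sq_err_bayes = 0.0, 0.0
for _ in range(reps):
    x = rng.poisson(true_mean, size=n)
    mle = x.mean()                       # maximum likelihood estimator
    # Bayes estimator under a (hypothetical) Gamma(a, b) prior and squared error loss:
    a, b = 0.5, 0.0001                   # nearly non-informative prior
    bayes = (a + x.sum()) / (b + n)      # posterior mean
    sq_err_mle += (mle - true_mean) ** 2
    sq_err_bayes += (bayes - true_mean) ** 2

risk_mle, risk_bayes = sq_err_mle / reps, sq_err_bayes / reps
print("risk efficiency (MLE risk / Bayes risk):", risk_mle / risk_bayes)
```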


Author(s): Vyacheslav Z. Grines, Elena Ya. Gurevich, Evgenii Iv. Yakovlev

We consider a class GSD(M3) of gradient-like diffeomorphisms with surface dynamics defined on a closed orientable three-dimensional manifold M3. It was proved earlier that manifolds admitting such diffeomorphisms are mapping tori over a closed orientable surface of genus g, and that the number of non-compact heteroclinic curves of such diffeomorphisms is at least 12g. In this paper, we determine a class of diffeomorphisms GSDR(M3) ⊂ GSD(M3) that have the minimum number of heteroclinic curves for a given number of periodic points, and prove that the supporting manifold of such diffeomorphisms is a Seifert manifold. The separatrices of periodic points of diffeomorphisms from the class GSDR(M3) have regular asymptotic behavior; in particular, their closures are locally flat. We provide sufficient conditions, independent of the dynamics, for a mapping torus to be Seifert. At the same time, the paper establishes that for any fixed g ≥ 1, any fixed number of periodic points, and any integer n ≥ 12g, there exist a manifold M3 and a diffeomorphism f ∈ GSD(M3) having exactly n non-compact heteroclinic curves.


MAUSAM, 2021, Vol 43 (2), pp. 137-142
Author(s): Bhukan Lal, Y. M. Duggal, Panchu Ram

Analysis of southwest monsoon (June to September) and annual rainfall of 12 districts of Haryana and Delhi, based on a fixed number of raingauge stations (36), has been made for the 90-year period 1901-1990 in order to search for trends and periodicities in the rainfall. Monsoon and annual rainfall show similar variability, which is least where rainfall is highest. The frequency distribution of monsoon rainfall is not normal in two districts, viz. Kurukshetra and Sirsa. A positive trend is noticed in the monsoon rainfall of Rohtak and Kurukshetra and in the annual rainfall of Delhi. The increase in mean rainfall from the first 45-year period to the second shows a gradient from the east-central to the western parts of the State, with a maximum over the east-central parts. Low-pass filter analysis suggests that the trend is not linear but oscillatory, consisting of periods of 10 years or more. Spectral analysis indicates a significant cycle in the range 5.5 to 8.6 years, mainly in the eastern and southwestern districts of the State. A quasi-biennial oscillation (QBO) is also observed over some districts of the State.
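A minimal sketch, not the paper's analysis, of the two techniques mentioned, low-pass filtering and spectral analysis, applied to a synthetic annual rainfall series; the binomial filter weights, the toy 12-year cycle, and the series itself are assumptions.

```python
# Low-pass filter and periodogram for an annual rainfall series, illustrating
# how oscillations of roughly 10 years or longer and shorter significant
# cycles can be examined.
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1901, 1991)
rain = 500 + 60 * np.sin(2 * np.pi * years / 12.0) + rng.normal(0, 40, years.size)  # toy series

# Low-pass filtering with a 9-point binomial (weighted moving average) filter.
w = np.array([1, 8, 28, 56, 70, 56, 28, 8, 1], dtype=float)
w /= w.sum()
smooth = np.convolve(rain - rain.mean(), w, mode="same")
print("low-pass filtered anomaly (last 3 values):", np.round(smooth[-3:], 1))

# Periodogram: squared FFT amplitudes plotted against period in years.
anom = rain - rain.mean()
power = np.abs(np.fft.rfft(anom)) ** 2
freq = np.fft.rfftfreq(anom.size, d=1.0)           # cycles per year
periods = np.divide(1.0, freq, out=np.full_like(freq, np.inf), where=freq > 0)
dominant = periods[np.argmax(power[1:]) + 1]       # skip the zero-frequency term
print("dominant period (years):", round(dominant, 1))
```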


2021
Author(s): Zhe Liu, Weijin Qiu, Shujin Fu, Xia Zhao, Jun Xia, ...

Sequencing depth has always played an important role in the accurate detection of low-frequency mutations. Increasing the sequencing depth and setting the threshold appropriately can maximize the probability of detecting a true positive mutation, that is, the sensitivity. Here, we found that when the threshold was set as a fixed number of mutated reads, the probabilities of both true and false positive calls increased with depth. However, when the number of required mutated reads increased in proportion to depth (that is, the threshold was transformed from a fixed number into a fixed percentage of mutated reads), the true positive probability still increased while the false positive probability decreased. Through binomial distribution simulation and experimental tests, we found that the "fidelity" of detected VAFs explains this phenomenon. First, we used the binomial distribution to construct a model that readily calculates the relationship between sequencing depth and the probability of a true positive (or false positive) call, which can standardize the minimum sequencing depth required for detecting different low-frequency mutations. Then, the effect of sequencing depth on the fidelity of NA12878 with 3% mutation frequency and of circulating tumor DNA (ctDNA at 1%, 3% and 5%) showed that increasing the sequencing depth reduced the fluctuation of detected VAFs around the expected VAFs, that is, fidelity was improved. Finally, in our experiments the consistency of single-nucleotide variants (SNVs) between paired FF and FFPE mouse samples increased with depth, suggesting that increasing depth can improve both the precision and the sensitivity of low-frequency mutation detection.
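A minimal sketch of the kind of binomial calculation described: the probability of calling a mutation at a given depth under a fixed read-count threshold versus a fixed-percentage (VAF) threshold. The thresholds, error rate, and depths are hypothetical, and scipy is assumed to be available.

```python
# Binomial model of how the probability of calling a mutation changes with
# sequencing depth, under a fixed read-count threshold versus a fixed
# percentage (VAF) threshold.
from scipy.stats import binom

true_vaf = 0.01          # true variant allele frequency of a low-frequency mutation
error_rate = 0.001       # assumed per-base sequencing error rate at a wild-type site

for depth in (500, 2000, 10000):
    # Fixed-count threshold: call the site if at least 5 mutated reads are seen.
    p_tp_count = binom.sf(4, depth, true_vaf)        # P(X >= 5) at a true mutant site
    p_fp_count = binom.sf(4, depth, error_rate)      # P(X >= 5) at a wild-type site
    # Fixed-percentage threshold: call the site if detected VAF >= 0.5%.
    k = int(0.005 * depth)
    p_tp_vaf = binom.sf(k - 1, depth, true_vaf)
    p_fp_vaf = binom.sf(k - 1, depth, error_rate)
    print(depth, round(p_tp_count, 4), round(p_fp_count, 4),
          round(p_tp_vaf, 4), round(p_fp_vaf, 4))
```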


2021, Vol 6 (2), pp. 155-160
Author(s): Mykola Voloshyn, Yevhenii Vavruk

The quarantine restrictions introduced during COVID-19 are necessary to minimize the spread of coronavirus disease. These measures include a fixed number of people per room, social distancing, and the wearing of protective equipment, and they are enforced by technological control workers and the police. However, enforcement by people is fallible, and the human factor often interferes. In this work we have therefore developed software for real-time detection of protective face coverings using the Python scripting language, the open-source libraries OpenCV v4.5.4, TensorFlow v2.6.0, Keras v2.6.0, and MobileNetV2, together with a camera. The training program uses a prepared set of photos from Kaggle, with and without masks; the authors extended this set to include different types of masks and mask positions. Using TensorFlow, Keras, and MobileNetV2, a model is built and the neural network is trained on the images. The trained network is then used to detect masks, and the training results can be previewed as a graphic file. A program that uses the connected camera is then launched so the user can test the detector. This model can easily be deployed on embedded systems such as a Raspberry Pi or Google Coral, becoming a hardware and software automated system usable in crowded places such as airports, shopping malls, stadiums, and government agencies.
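A minimal sketch, under the assumption that the detector follows a standard MobileNetV2 transfer-learning setup in TensorFlow/Keras as the abstract names; the directory layout (data/with_mask, data/without_mask), image size, and hyperparameters are hypothetical.

```python
# MobileNetV2 backbone with a small classification head for mask / no-mask,
# roughly the transfer-learning setup such a detector typically uses.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False                     # freeze the pretrained backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # mask vs. no mask
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])

# Hypothetical directory layout: data/with_mask, data/without_mask
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "data", image_size=(224, 224), batch_size=32, label_mode="binary")
model.fit(train_ds, epochs=5)
```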


2021
Author(s): Cathy C. Westhues, Henner Simianer, Timothy M. Beissinger

We introduce the R package learnMET, developed as a flexible framework for a collection of analyses on multi-environment trial (MET) breeding data with machine-learning-based models. learnMET allows genomic information to be combined with environmental data such as climate and/or soil characteristics. Notably, the package can incorporate weather data from field weather stations or retrieve global meteorological datasets from a NASA database. Daily weather data can be aggregated into windows defined either naively (for instance, windows spanning a fixed number of days) or according to phenological stages. Different machine learning methods for genomic prediction are implemented, including gradient boosted trees, random forests, stacked ensemble models, and multi-layer perceptrons. These prediction models can be evaluated, in a user-friendly way, via a collection of cross-validation schemes that mimic typical scenarios encountered by plant breeders working with MET experimental data. The package is fully open source and available on GitHub.
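learnMET itself is an R package; the sketch below is a Python stand-in that only illustrates the naive windowing idea, aggregating daily weather records into windows of a fixed number of days. The column names and the 10-day window length are hypothetical.

```python
# Aggregating daily weather records into fixed-length windows, the "naive"
# windowing mentioned in the abstract.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
days = pd.date_range("2020-04-01", periods=120, freq="D")
weather = pd.DataFrame({
    "date": days,
    "tmax": 20 + 8 * np.sin(np.arange(120) / 20) + rng.normal(0, 2, 120),
    "rain": rng.gamma(1.0, 2.0, 120),
})

window_days = 10
weather["window"] = np.arange(len(weather)) // window_days
features = weather.groupby("window").agg(
    mean_tmax=("tmax", "mean"),
    total_rain=("rain", "sum"),
)
print(features.head())
```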


2021, Vol 25 (1), pp. 105-124
Author(s): Elena V. Gabrielova, Olga I. Maksimenko

The current research addresses the question of how Twitter users express their evaluation of topical social problems, explicitly or implicitly, and which linguistic means they use given the limited length of a message. The article explores how Twitter users communicate with each other and exchange ideas on important social issues, express their feelings through a range of linguistic means while limited to a fixed number of characters, and form solidarity despite being geographically distant from each other. The research focuses on the linguistic tools employed by Twitter users to express their personal attitudes. The subject chosen for study is migration processes in Europe and the USA. The aim of the investigation is to determine the correlation between the attitudes of English-speaking users towards migration and whether those attitudes are expressed implicitly or explicitly, and to define which tools contribute to the implicit or explicit nature of the utterances. The material comprises 100 tweets by English-speaking users collected from February 1 to July 31, 2017; the choice of period is determined by significant events in Trump's migration policy and their consequences. The research is based on content analysis of the material carried out with the Atlas.ti program, which codes textual units and counts the frequency of codes and their correlations. The results show that Twitter users tend to criticize migration rather than approve of it or sympathise with migrants, and criticism is more often expressed implicitly than explicitly. To disguise attitude and feelings, the English-speaking users of Twitter employed irony, questions, and quotations, while explicit expression of attitudes was achieved by means of imperative structures. Ellipses, contractions, and abbreviations were also used quite frequently owing to the character limit of tweets. At the same time, the lack of knowledge about extralinguistic factors and the personal characteristics of users makes interpreting tweets rather challenging. The findings suggest the necessity of taking implicit negative attitudes into account when analysing public opinion on Twitter.


Author(s): Willem Trommelen, Konstantinos Gkiotsalitis, Eric C. van Berkum

In this study, we introduce a method to optimally select the crossover locations of an independent rail line from a set of possible crossover locations, given a fixed number of crossovers that must be used in the design. The selection aims to minimize the cost of passenger delay. Previous research showed that including passenger delay in rail design decisions can be beneficial from economic and societal perspectives, but those studies could evaluate only a few alternatives because the degraded schedules had to be determined manually. In this research, we introduce an integer nonlinear model to find the best crossover design, and we develop an algorithm that evaluates a set of crossovers and determines the cost of delays for all segments on a rail line under a set of potential disruptions. The monetized cost of passenger delays is used to analyze the tradeoff between the unreliability costs arising from passenger delays during disruptions and the total number of required crossovers. Our model was applied to a light rail line in Bergen, Norway, yielding a 10% reduction in passenger delays without increasing the number of crossovers, thus ensuring that there were no additional costs.
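A minimal sketch, not the authors' integer nonlinear model: brute-force selection of a fixed number of crossovers from a candidate set so that a toy delay-cost proxy summed over disruption scenarios is minimized. All positions and the cost function are hypothetical.

```python
# Brute-force selection of crossover locations that minimize a toy
# passenger-delay proxy over a set of disruption scenarios.
from itertools import combinations

candidates = [1.0, 2.5, 4.0, 5.5, 7.0, 9.0]   # candidate crossover positions (km)
line_length = 10.0
n_crossovers = 3
disruptions = [0.5, 3.0, 6.0, 8.5]            # locations of potential disruptions (km)

def delay_cost(crossovers, disruption):
    """Toy proxy: delay grows with the length of the segment isolated by the
    nearest crossovers around the disruption (line ends act as boundaries)."""
    bounds = sorted([0.0, line_length, *crossovers])
    left = max(b for b in bounds if b <= disruption)
    right = min(b for b in bounds if b >= disruption)
    return right - left

best = min(combinations(candidates, n_crossovers),
           key=lambda c: sum(delay_cost(c, d) for d in disruptions))
print("selected crossovers:", best)
```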


Foods, 2021, Vol 10 (12), pp. 3084
Author(s): Maria Frizzarin, Isobel Claire Gormley, Alessandro Casa, Sinéad McParland

Including all available data when developing equations that relate mid-infrared spectra to a phenotype may be suboptimal for poorly represented spectra. Here, an alternative local changepoint approach was developed to predict six milk technological traits from mid-infrared spectra. Neighbours were objectively identified for each predictand as the spectra most similar to it, using the Mahalanobis distances between the spectral principal components, and were subsequently used in partial least squares regression (PLSR) analyses. The performance of the local changepoint approach was compared with that of PLSR using all spectra (global PLSR) and with another LOCAL approach, in which a fixed number of neighbours was used in the prediction according to the correlation between the predictand and the available spectra. Global PLSR had the lowest RMSEV for five traits. The local changepoint approach had the lowest RMSEV for one trait, but it outperformed the LOCAL approach for four traits. When the 5% of spectra with the greatest Mahalanobis distance from the centre of the global principal component space were analysed, the local changepoint approach outperformed the global PLSR and the LOCAL approach for two and five traits, respectively. The objective selection of neighbours improved prediction performance compared with using a fixed number of neighbours; however, it generally did not outperform the global PLSR.
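A minimal sketch of one interpretation of the neighbour-selection step: Mahalanobis distances in spectral principal-component space identify the closest calibration spectra, which are then used in a local PLS regression. The data, the fixed neighbour count (the paper instead selects the cut-off with a changepoint criterion), and the component numbers are hypothetical; scikit-learn is assumed.

```python
# Local calibration: pick neighbours of a predictand by Mahalanobis distance in
# PC space, then fit partial least squares regression on those neighbours only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)
X_cal = rng.normal(size=(500, 60))     # calibration spectra (samples x wavenumbers)
y_cal = X_cal[:, :5].sum(axis=1) + rng.normal(0, 0.1, 500)   # toy trait
x_new = rng.normal(size=(1, 60))       # spectrum of the predictand

# Project into PC space and compute Mahalanobis distances to the predictand.
pca = PCA(n_components=10).fit(X_cal)
scores_cal = pca.transform(X_cal)
score_new = pca.transform(x_new)
cov_inv = np.linalg.inv(np.cov(scores_cal, rowvar=False))
diff = scores_cal - score_new
d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)

# Use the closest spectra as the local calibration set (fixed count here).
idx = np.argsort(d2)[:100]
pls = PLSRegression(n_components=5).fit(X_cal[idx], y_cal[idx])
print("local prediction:", float(pls.predict(x_new).ravel()[0]))
```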

