homogeneous data
Recently Published Documents

TOTAL DOCUMENTS: 109 (FIVE YEARS: 42)
H-INDEX: 12 (FIVE YEARS: 4)

2021 · Vol 2123 (1) · pp. 012027
Author(s): A Hapsery · A B Tribhuwaneswari

Abstract Monte Carlo is a method that generates data according to a given distribution and resamples until the parameters of the method under study converge. The purpose of this simulation is, first, to show that quantile regression with an estimated sparsity-function parameter can model data whose distribution is non-uniform, and second, to show that quantile regression is a generalization of linear regression. A non-uniform data pattern is generally referred to as heterogeneous data, while a uniform pattern is called homogeneous data. In this study, data are generated for small and large samples on both homogeneous and heterogeneous data, with error variances of 0.25, 1, and 4 applied to each data type. The data generation and parameter estimation process is resampled 1000 times. The simulation studies conclude that the parameter estimates from classical regression coincide with the quantile regression estimates at quantile 0.5, and that quantile regression can be applied to heterogeneous and homogeneous data for any number of samples and any variance.
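Below is a minimal sketch of the simulation design described above, assuming a simple linear model with normal errors; the sample size, coefficients, and the use of statsmodels are illustrative choices, not the authors' code. It contrasts an ordinary least squares fit with a median (quantile 0.5) regression on homogeneous and heterogeneous data.

```python
# Sketch: compare OLS with quantile-0.5 regression on homogeneous
# (constant variance) vs heterogeneous (variance growing with x) data.
# In the full study this would be resampled 1000 times and repeated
# for variances 0.25, 1, and 4.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 200                                  # illustrative sample size
x = rng.uniform(0, 10, n)

# Homogeneous data: constant error variance.
y_hom = 2.0 + 1.5 * x + rng.normal(0, 1.0, n)
# Heterogeneous data: error variance grows with x.
y_het = 2.0 + 1.5 * x + rng.normal(0, 0.3 * x, n)

for label, y in [("homogeneous", y_hom), ("heterogeneous", y_het)]:
    df = pd.DataFrame({"x": x, "y": y})
    ols = smf.ols("y ~ x", df).fit()
    q50 = smf.quantreg("y ~ x", df).fit(q=0.5)
    print(label,
          "OLS slope:", round(ols.params["x"], 3),
          "Q(0.5) slope:", round(q50.params["x"], 3))
```

On homogeneous data the two slopes should agree closely, mirroring the paper's conclusion that classical regression matches quantile regression at quantile 0.5.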


Author(s): Pengfei Zhang · Tianrui Li · Zhong Yuan · Chuan Luo · Guoqiang Wang · ...

Author(s): Victoria Sherman · Elissa Greco · Rosemary Martino

Background Early identification of dysphagia aims to mitigate the risk of health consequences in adults poststroke; however, the evidence from experimental trials alone is inconclusive. This meta‐analysis assessed the benefit of dysphagia screening using both trial and observational data.

Methods and Results Seven electronic databases were searched to December 2019. Unique abstracts and full articles were screened for eligibility by 2 independent blinded raters using a priori criteria, with discrepancies resolved by consensus. Included studies were summarized descriptively and assessed for methodological quality using the Cochrane Risk of Bias Tool. Across studies, pooled estimates of health benefit were derived for homogeneous data using Review Manager 5.3. From a yield of 8860 citations, 30 unique articles were selected: 24 observational studies and 6 randomized trials. Comparisons varied across studies: no screening versus screening, late versus earlier screening, informal versus formal screening, pre‐ versus postscreening, and pre‐ versus poststroke guidelines that included screening. Pooled estimates across comparisons favored the experimental groups: pneumonia odds ratio (OR), 0.57 (95% CI, 0.45–0.72); mortality OR, 0.52 (95% CI, 0.35–0.77); dependency OR, 0.54 (95% CI, 0.35–0.85); and length of stay standardized mean difference, −0.62 (95% CI, −1.05 to −0.20).

Conclusions Combining evidence from experimental and observational studies yielded a significant protective health benefit of dysphagia screening following adult acute stroke for pneumonia, mortality, dependency, and length of stay.
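For readers unfamiliar with how such pooled estimates are computed, here is a minimal sketch of fixed-effect inverse-variance pooling of odds ratios, the standard calculation behind tools such as Review Manager; the study counts below are hypothetical placeholders, not data from this review.

```python
# Sketch: pool per-study odds ratios on the log scale with
# inverse-variance (Woolf) weights, then back-transform.
import numpy as np

# (events_exp, n_exp, events_ctrl, n_ctrl) per study -- hypothetical.
studies = [(12, 150, 25, 148), (8, 90, 15, 95), (20, 300, 33, 290)]

log_ors, weights = [], []
for a, n1, c, n2 in studies:
    b, d = n1 - a, n2 - c                 # non-events in each arm
    log_ors.append(np.log((a * d) / (b * c)))
    var = 1 / a + 1 / b + 1 / c + 1 / d   # Woolf variance of log OR
    weights.append(1 / var)

pooled = np.average(log_ors, weights=weights)
se = np.sqrt(1 / np.sum(weights))
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled OR {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(lo):.2f}-{np.exp(hi):.2f})")
```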


2021
Author(s): Silke Jantos · C. Sebastian Sommer

Over the last two decades, the technical tools for archaeology within the Bavarian State Department of Monuments and Sites (BLfD) have developed rapidly. It began with the establishment of an information system (FIS) in which the workflow of the Department of Archaeological Heritage is mapped. The next step was the standardisation of data capture for all institutions and companies undertaking excavations. In response, a homogeneous data model was developed and established through the application ExcaBook. To ensure this solution would be widely adopted, an Importer was created to bring in data from other databases and applications. It is also planned to process data from restoration and conservation projects using a similar approach, in order to improve data exchange between all related disciplines.
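A minimal sketch of the importer idea: mapping records from differently structured excavation databases onto one shared, homogeneous model. The field names and the FindRecord schema are hypothetical, since the actual ExcaBook data model is not described here.

```python
# Sketch: per-source adapter functions normalize heterogeneous input
# rows into one homogeneous target record type (all names hypothetical).
from dataclasses import dataclass

@dataclass
class FindRecord:                # hypothetical homogeneous target model
    site_id: str
    feature: str
    description: str

def import_from_source_a(row: dict) -> FindRecord:
    # Source A uses German column names.
    return FindRecord(row["Fundstelle"], row["Befund"], row["Beschreibung"])

def import_from_source_b(row: dict) -> FindRecord:
    # Source B uses English column names.
    return FindRecord(row["site"], row["feature"], row["notes"])

records = [
    import_from_source_a({"Fundstelle": "M-2021-07", "Befund": "Grube",
                          "Beschreibung": "verfuellte Siedlungsgrube"}),
    import_from_source_b({"site": "M-2021-08", "feature": "posthole",
                          "notes": "charcoal inclusions"}),
]
print(records)
```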


Author(s): Ch. Raja Ramesh et al.

A group of data objects classified together as similar objects is known as a cluster. Clustering is the process of finding homogeneous data items, such as patterns or documents, and grouping them together, while other groups contain dissimilar items. Most clustering methods are either crisp or fuzzy, and members are allocated to clusters strictly by similarity measures and membership functions. Both approaches have limitations in terms of membership: crisp methods force each sample into a single cluster, while fuzzy methods assign only membership probabilities. Finally, measures such as quality and purity are applied to assess how well the clusters are formed. However, there is a grey area in between, namely 'boundary points' and points 'moderately far' from the cluster centre. We took cluster quality [18], processing time, and identification of relevant features as the basis of our problem statement and implemented zone-based clustering using the MapReduce concept. The process finds far points in the existing clusters, generates a new cluster from them, and repeats until the number of clusters stabilizes. This process improves both cluster quality and processing time.
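A minimal sketch of the zone-based idea, in which points far from every cluster centre seed a new cluster until the cluster count stabilizes; the distance threshold, stopping rule, and use of scikit-learn's KMeans are illustrative assumptions, and the MapReduce layer is omitted.

```python
# Sketch: iteratively grow the cluster count as long as many points
# remain "moderately far" from their nearest centre.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=500, centers=3, random_state=0)

k = 3
while True:
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    dist = np.min(km.transform(X), axis=1)   # distance to own centre
    # "Moderately far": beyond mean + 2 std of nearest-centre distances.
    far = dist > dist.mean() + 2 * dist.std()
    if far.sum() < 5 or k >= 10:             # cluster count stabilised
        break
    k += 1                                   # far points seed a new cluster
print("final number of clusters:", k)
```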


Author(s): Aleksander K. Cherkashin

Geographical meta-analysis is a methodology for combining the results of studies of various territorial objects of different types and locations by means of logical, mathematical, and statistical analysis to justify and test scientific hypotheses. Meta-analytical generalizations are based on a non-statistical approach of comparative geographical research, with a transition from initially heterogeneous data sets to homogeneous data that can be statistically processed. The meta-analysis methodology is developed on a meta-theoretical basis from the standpoint of the system stratification (fibering, bundling) of the Earth's reality on the manifold of the geographical environment. Locally, the same qualimetric equations for data integration and generalization describe processes and phenomena, so each situation is reduced to the properties of a typical layer (fiber) and universal equations connecting the variables. The use of geographical meta-analysis methods is illustrated with examples of the spread of COVID-19 coronavirus across countries, the seasonal development of taiga nature, and a gradient analysis of the factors influencing the distribution of mountain geosystems of various types (geomes). In order to compress information, we use methods for calculating integral indicators and other means of excluding the influence of the environment. The revealed regularities do not depend on individual values of the factors and conditions that influence the processes and relationships between the characteristics of the state of natural and socio-economic systems; they represent these dependencies in a refined form.
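As an illustration of the information-compression step, here is a minimal sketch that rescales heterogeneous variables to a common range and averages them into one integral indicator; the variables, values, and equal weighting are illustrative assumptions, not the author's qualimetric equations.

```python
# Sketch: min-max rescale each heterogeneous variable, then average
# across variables to obtain one integral indicator per territory.
import numpy as np

# Rows: territorial units; columns: heterogeneous state variables
# (units differ per column, so raw values are not comparable).
data = np.array([[12.0, 340.0, 0.82],
                 [ 7.5, 410.0, 0.65],
                 [ 9.1, 295.0, 0.71]])

mins, maxs = data.min(axis=0), data.max(axis=0)
scaled = (data - mins) / (maxs - mins)   # each variable now in [0, 1]
integral = scaled.mean(axis=1)           # equal-weight integral indicator
print(integral)
```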


2020 · Vol 501 (1) · pp. 866-874
Author(s): Ilaria Musella · Marcella Marconi · Roberto Molinaro · Giuliana Fiorentino · Vincenzo Ripepi · ...

ABSTRACT Ultra Long Period Cepheids (ULPs) are pulsating variable stars with periods longer than 80 d that have been hypothesized to be the extension of the Classical Cepheids (CCs) to higher masses and luminosities. If confirmed as standard candles, their intrinsic luminosities, ∼1 to ∼3 mag brighter than typical CCs, would allow us to reach the Hubble flow and, in turn, to determine the Hubble constant, H0, in one step, avoiding the uncertainties associated with the calibration of primary and secondary indicators. To investigate the accuracy of ULPs as cosmological standard candles, we first collect all the ULPs known in the literature. The resulting sample includes 63 objects with a very large metallicity spread, with 12 + log ([O/H]) ranging from 7.2 to 9.2 dex. The analysis of their properties in the VI period–Wesenheit plane and in the colour–magnitude diagram (CMD) supports the hypothesis that ULPs are the extension of CCs to longer periods, higher masses, and higher luminosities, although additional accurate and homogeneous data and a dedicated theoretical scenario are needed to reach firm conclusions. Finally, the three M31 ULPs, 8-0326, 8-1498, and H42, are investigated in more detail. For 8-1498 and H42, we cannot confirm their nature as ULPs, owing to the inconsistency between their positions in the CMD and the measured periods. For 8-0326, the light curve model fitting technique applied to the available time-series data allows us to constrain its intrinsic stellar parameters, distance, and reddening.
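A minimal sketch of placing stars in the VI period–Wesenheit plane, using the commonly adopted reddening-free index W = I − 1.55 (V − I); the magnitudes and periods below are placeholders, not the ULP sample analysed in the paper.

```python
# Sketch: compute the VI Wesenheit index for a few stars and fit a
# linear period-Wesenheit relation W ~ a*log10(P) + b.
import numpy as np

periods = np.array([90.0, 120.0, 150.0])   # days (illustrative)
V = np.array([15.2, 14.8, 14.3])           # apparent V magnitudes
I = np.array([14.1, 13.6, 13.0])           # apparent I magnitudes

W = I - 1.55 * (V - I)                     # reddening-free Wesenheit index
log_P = np.log10(periods)
a, b = np.polyfit(log_P, W, 1)             # slope and intercept of PW relation
print(f"fitted PW slope {a:.2f}, intercept {b:.2f}")
```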


2020 · Vol 12 (11) · pp. 194
Author(s): Ivan Miguel Pires · Faisal Hussain · Nuno M. M. Garcia · Petre Lameski · Eftim Zdravevski

One class of applications for human activity recognition methods is found in mobile devices for monitoring older adults and people with special needs. Recently, many studies have been performed to create intelligent methods for the recognition of human activities. However, the different mobile devices on the market acquire sensor data at different frequencies. This paper focuses on implementing four data normalization techniques, i.e., MaxAbsScaler, MinMaxScaler, RobustScaler, and Z-Score. Subsequently, we evaluate the impact of these normalization algorithms with deep neural networks (DNN) for the classification of human activities. The impact of the data normalization was counterintuitive, resulting in degraded performance. Namely, when using the accelerometer data, the accuracy dropped from about 79% to only 53% with the best normalization approach. Similarly, for the gyroscope data, the accuracy without normalization was about 81.5%, whereas with the best normalization it was only 60%. It can be concluded that data normalization techniques are not helpful in classification problems with homogeneous data.
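A minimal sketch of the four normalization techniques compared in the paper, applied to a placeholder accelerometer window using scikit-learn (Z-Score corresponds to StandardScaler); the data and window size are illustrative.

```python
# Sketch: apply each of the four scalers to one window of 3-axis
# accelerometer samples and show how the value ranges change.
import numpy as np
from sklearn.preprocessing import (MaxAbsScaler, MinMaxScaler,
                                   RobustScaler, StandardScaler)

rng = np.random.default_rng(0)
window = rng.normal(0, 4, size=(128, 3))   # 128 samples x 3 accel axes

scalers = {"MaxAbsScaler": MaxAbsScaler(),
           "MinMaxScaler": MinMaxScaler(),
           "RobustScaler": RobustScaler(),
           "Z-Score": StandardScaler()}    # Z-score normalization

for name, scaler in scalers.items():
    scaled = scaler.fit_transform(window)  # fit per window, per axis
    print(name, "range:",
          scaled.min().round(2), "to", scaled.max().round(2))
```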

