very large databases
Recently Published Documents

Total documents: 115 (five years: 9)
H-index: 13 (five years: 1)

Hearts ◽  
2021 ◽  
Vol 2 (3) ◽  
pp. 350-368
Author(s):  
Olaf Dössel ◽  
Giorgio Luongo ◽  
Claudia Nagel ◽  
Axel Loewe

Computer modeling of the electrophysiology of the heart has undergone significant progress. A healthy heart can be modeled starting from the ion channels via the spread of a depolarization wave on a realistic geometry of the human heart up to the potentials on the body surface and the ECG. Research is advancing regarding modeling diseases of the heart. This article reviews progress in calculating and analyzing the corresponding electrocardiogram (ECG) from simulated depolarization and repolarization waves. First, we describe modeling of the P-wave, the QRS complex and the T-wave of a healthy heart. Then, both the modeling and the corresponding ECGs of several important diseases and arrhythmias are delineated: ischemia and infarction, ectopic beats and extrasystoles, ventricular tachycardia, bundle branch blocks, atrial tachycardia, flutter and fibrillation, genetic diseases and channelopathies, imbalance of electrolytes and drug-induced changes. Finally, we outline the potential impact of computer modeling on ECG interpretation. Computer modeling can contribute to a better comprehension of the relation between features in the ECG and the underlying cardiac condition and disease. It can pave the way for a quantitative analysis of the ECG and can support the cardiologist in identifying events or non-invasively localizing diseased areas. Finally, it can deliver very large databases of reliably labeled ECGs as training data for machine learning.


Energies ◽  
2021 ◽  
Vol 14 (13) ◽  
pp. 4059
Author(s):  
Marek Kęsek ◽  
Romuald Ogrodnik

Mining machinery and equipment used in modern mining are fitted with sensors and measurement systems at the production stage. Measuring devices are most often components of a control system or a machine performance monitoring system. In the case of headers, the primary task of these systems is to ensure safe operation and to monitor that operation is correct. It is customary to collect information in very large databases and to analyze it only when a failure occurs. Data mining methods allow analysis to be performed during the operation of mining machinery and equipment, making it possible to determine not only their technical condition but also the causes of any changes that have occurred. The purpose of this work is to present a method for recovering missing information from other available parameters, which facilitates subsequent analysis of machine performance. The primary data used in this paper are the currents flowing through the windings of four header motors. In the method, the original data layout was first reconstructed using an R-language function, and the operating states of the header were then analyzed on the basis of these data. From the rules applied and derived in the analysis, the percentage structure of machine operating states was obtained, which allows additional reporting and verification of parts of the process.
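The rule-based derivation of the percentage structure of operating states can be sketched as follows. This is a minimal illustration only: the state names, the current thresholds and the four-current layout are hypothetical assumptions, not the rules determined in the paper (which performs the analysis in R on real motor-current data).

```python
# Hypothetical rule-based labelling of header operating states from samples
# of four motor-winding currents (amperes). Thresholds are illustrative.

def label_state(currents, idle_max=20.0, overload_min=180.0):
    """Classify one sample of four winding currents."""
    if sum(currents) == 0:
        return "off"
    if all(c <= idle_max for c in currents):
        return "idle"
    if any(c >= overload_min for c in currents):
        return "overload"
    return "cutting"

def state_structure(samples):
    """Percentage share of each operating state across all samples."""
    counts = {}
    for sample in samples:
        state = label_state(sample)
        counts[state] = counts.get(state, 0) + 1
    n = len(samples)
    return {state: 100.0 * c / n for state, c in counts.items()}
```

Applying `state_structure` to a stream of samples yields the kind of percentage breakdown of machine operation the abstract describes, which can then feed reporting.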


TRANSPORTES ◽  
2021 ◽  
Vol 29 (1) ◽  
pp. 212-228
Author(s):  
Juliana Mitsuyama Cardoso ◽  
Lucas Assirati ◽  
José Reynaldo Setti

This paper describes a procedure for fitting traffic stream models using very large traffic databases. The proposed approach consists of four steps: (1) an initial treatment to eliminate noisy, inaccurate data and to homogenize the information over the density range; (2) a first fitting of the model, based on the sum of squared orthogonal errors; (3) a second filter, to eliminate outliers that survived the initial data treatment; and (4) a second fitting of the model. The proposed approach was tested by fitting the Van Aerde traffic stream model to 104 thousand observations collected by a permanent traffic monitoring station on a freeway in the metropolitan region of São Paulo, Brazil. The model fitting used a genetic algorithm to search for the best values of the model parameters. The results demonstrate the effectiveness of the proposed approach.
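The four-step procedure can be illustrated in miniature. As assumptions for this sketch: a linear (Greenshields-type) speed-density relation stands in for the Van Aerde model, and ordinary least squares replaces the paper's genetic algorithm; steps 2-4 (first fit on orthogonal errors, outlier filter, second fit) are shown, while the initial data treatment of step 1 is omitted.

```python
# Sketch of fit -> filter outliers by orthogonal residual -> refit,
# on a toy linear speed-density model v = a + b*k.
import math

def fit_linear(points):
    """Ordinary least-squares fit v = a + b*k over (density, speed) pairs."""
    n = len(points)
    sx = sum(k for k, _ in points)
    sy = sum(v for _, v in points)
    sxx = sum(k * k for k, _ in points)
    sxy = sum(k * v for k, v in points)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

def orthogonal_residual(point, a, b):
    """Perpendicular distance from (k, v) to the line v = a + b*k."""
    k, v = point
    return abs(b * k - v + a) / math.sqrt(b * b + 1.0)

def fit_with_outlier_removal(points, z=2.0):
    # Step 2: first fit.  Step 3: drop points whose orthogonal residual
    # exceeds mean + z * sd.  Step 4: refit on the surviving points.
    a, b = fit_linear(points)
    res = [orthogonal_residual(p, a, b) for p in points]
    mean = sum(res) / len(res)
    sd = math.sqrt(sum((r - mean) ** 2 for r in res) / len(res))
    kept = [p for p, r in zip(points, res) if r <= mean + z * sd] if sd > 0 else points
    return fit_linear(kept)
```

The same filter-and-refit structure applies unchanged when the model is Van Aerde's and the fitting step is a genetic-algorithm search, as in the paper.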


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Thiago Cesar de Oliveira ◽  
Lúcio de Medeiros ◽  
Daniel Henrique Marco Detzel

Purpose: Real estate appraisals are becoming an increasingly important means of backing financial operations based on the values of these kinds of assets. However, in very large databases there is a reduction in predictive capacity when traditional methods, such as multiple linear regression (MLR), are used. This paper aims to determine whether, in these cases, the application of data mining algorithms can achieve superior statistical results.

Design/methodology/approach: Real estate appraisal databases from five towns and cities in the State of Paraná, Brazil, were obtained from the Caixa Econômica Federal bank. After initial validations, additional databases were generated with real, transformed and nominal values, in clean and raw form. Each was analyzed with a wide range of data mining algorithms (multilayer perceptron, support vector regression, K-star, M5Rules and random forest), either in isolation or combined (regression by discretization – logistic, bagging and stacking), using 10-fold cross-validation in the Weka software.

Findings: The results showed more varied incremental statistical gains from the algorithms than from MLR, especially when combined algorithms were used. The largest increments were obtained in databases with large amounts of data and in those where only minor initial data cleaning was carried out. The paper also conducts a further analysis, including a ranking of the algorithms based on the number of significant results obtained.

Originality/value: The authors did not find similar studies or research conducted in Brazil.
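The evaluation setup, k-fold cross-validation of competing predictors, can be sketched generically. This is a toy stand-in for the Weka workflow: the harness below accepts any `fit`/`predict` pair, and the mean-predictor baseline in the usage note is an illustrative assumption, not one of the paper's models.

```python
# Generic k-fold cross-validation harness reporting mean absolute error.

def k_fold_indices(n, k=10):
    """Split indices 0..n-1 into k contiguous folds of near-equal size."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validate(xs, ys, fit, predict, k=10):
    """Mean absolute error of fit/predict over k train/test splits."""
    folds = k_fold_indices(len(xs), k)
    errors = []
    for test in folds:
        test_set = set(test)
        train = [i for i in range(len(xs)) if i not in test_set]
        model = fit([xs[i] for i in train], [ys[i] for i in train])
        for i in test:
            errors.append(abs(predict(model, xs[i]) - ys[i]))
    return sum(errors) / len(errors)
```

A baseline such as `fit = lambda X, Y: sum(Y) / len(Y)` with `predict = lambda m, x: m` gives the reference error against which richer models (random forest, SVR, stacking, and so on) would be compared fold by fold.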


Author(s):  
Piotr Bednarczuk

Very large databases such as data warehouses slow down over time. This is usually due to a large daily increase in the data in individual tables, counted in millions of records per day. How do we make sure our queries do not slow down over time? Table partitioning comes in handy and, when used correctly, can ensure the smooth operation of very large databases with billions of records, even after several years.
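The mechanism that keeps such queries fast is partition pruning: with the table range-partitioned on a date column, a date-range query only has to scan the partitions whose ranges overlap it. A minimal sketch of the idea, with hypothetical table layout and column names (real systems declare this in SQL, e.g. `PARTITION BY RANGE`):

```python
# Toy monthly range partitioning with partition pruning on date-range queries.
from collections import defaultdict
from datetime import date

class PartitionedTable:
    def __init__(self):
        # One partition (plain list of rows) per (year, month) key.
        self.partitions = defaultdict(list)

    def insert(self, row):
        d = row["created"]
        self.partitions[(d.year, d.month)].append(row)

    def query(self, start, end):
        """Return matching rows and the number of rows actually scanned."""
        hits, scanned = [], 0
        for (y, m), rows in self.partitions.items():
            first = date(y, m, 1)                       # partition covers
            last = date(y + (m == 12), m % 12 + 1, 1)   # [first, last)
            if last <= start or first > end:
                continue  # pruned: partition lies entirely outside the range
            scanned += len(rows)
            hits.extend(r for r in rows if start <= r["created"] <= end)
        return hits, scanned
```

The scanned-row count stays proportional to the queried window rather than to the table's total size, which is why partitioned tables with billions of rows keep responding quickly as they grow.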


2020 ◽  
Vol 36 (2) ◽  
pp. 181-184
Author(s):  
Bertrand Jordan

Evidence for a “homosexuality gene” was claimed in the early 1990s on the basis of linkage studies that, by current criteria, were woefully underpowered. Indeed, follow-up studies gave contradictory results. Genome-wide association studies, and very large databases with detailed genetic and phenotypic data, have made possible a re-examination of this issue. While modest heritability (ca. 0.3) for homosexuality is confirmed, no major locus is found and the genetic influence appears extremely polygenic. Thus, there is no single gene, or even small set of genes, that has a strong influence on homosexuality.


Author(s):  
Andrew Borthwick ◽  
Stephen Ash ◽  
Bin Pang ◽  
Shehzad Qureshi ◽  
Timothy Jones

2019 ◽  
Vol 8 (4) ◽  
pp. 8083-8091

High-utility itemset (HUI) mining has attracted many researchers in recent years, but HUI mining methods involve an exponential mining space and return very large numbers of high-utility itemsets. Temporal periodicity of itemsets has recently been considered an important interestingness criterion for mining high-utility itemsets in many applications. Existing periodic high-utility itemset mining methods have a limitation: they do not consider frequency and are not suitable for large databases. To address this problem, we propose two efficient algorithms, FPHUI (mining periodic frequent HUIs) and MFPHM (efficient mining of periodic frequent HUIs), for mining periodic frequent high-utility itemsets. The first algorithm, FPHUI miner, generates all periodic frequent itemsets. Because mining periodic frequent high-utility itemsets leads to high computational cost in very large databases, we developed the second algorithm, MFPHM, to overcome this limitation. The performance of the FPHUI miner is evaluated through experiments on several real datasets. Experimental results show that the proposed algorithms are efficient and effective.
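The underlying notion of itemset utility, and why its mining space is exponential, can be shown with a naive baseline. This sketch is an illustrative brute-force enumeration over a toy database, not the FPHUI or MFPHM algorithms, and it ignores the periodicity and frequency criteria the paper adds.

```python
# Naive high-utility itemset mining: enumerate every candidate itemset
# (exponential in the number of distinct items) and keep those whose
# total utility meets a threshold.
from itertools import combinations

def itemset_utility(itemset, transactions):
    """Sum of the itemset's item utilities over every transaction that
    contains all of its items.  Each transaction maps item -> utility
    (e.g. quantity * unit profit)."""
    total = 0
    for t in transactions:
        if all(i in t for i in itemset):
            total += sum(t[i] for i in itemset)
    return total

def high_utility_itemsets(transactions, min_util):
    """Brute-force search of all 2^n - 1 non-empty itemsets."""
    items = sorted({i for t in transactions for i in t})
    result = {}
    for r in range(1, len(items) + 1):
        for combo in combinations(items, r):
            u = itemset_utility(combo, transactions)
            if u >= min_util:
                result[combo] = u
    return result
```

Enumerating all 2^n - 1 candidates is exactly what makes dedicated pruning strategies, such as those in the proposed algorithms, necessary for very large databases.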


2019 ◽  
Vol 3 ◽  
pp. 239821281882051 ◽  
Author(s):  
Chris McManus

Although most people are right-handed and have language in their left cerebral hemisphere, why that is so, and in particular why about ten per cent of people are left-handed, is far from clear. Multiple theories have been proposed, often with little in the way of empirical support, and sometimes indeed with strong evidence against them, and yet despite that have become modern urban myths, probably due to the symbolic power of right and left. One thinks in particular of ideas of being right-brained or left-brained, of suggestions that left-handedness is due to perinatal brain damage, of claims that left-handers die seven years earlier than right-handers, and of the unfalsifiable ramifications of the byzantine Geschwind-Behan-Galaburda theory. This article looks back over the past fifty years of research on brain asymmetries, exploring the different themes and approaches, sometimes in relation to the author’s own work. Taking all of the work together, it is probable that cerebral asymmetries are under genetic control, probably with multiple genetic loci, only a few of which are now beginning to be found thanks to the very large databases that are becoming available. Other progress is also seen in proper meta-analyses, the use of fMRI for studying multiple functional lateralisations in large numbers of individuals, fetal ultrasound for assessing handedness before birth, and fascinating studies of lateralisation in an ever-widening range of animal species. With luck the next fifty years will make more progress and show fewer false directions than did much of the work in the previous fifty years.


2018 ◽  
Vol 4 (1) ◽  
Author(s):  
F. Meutzner ◽  
T. Nestler ◽  
M. Zschornak ◽  
P. Canepa ◽  
G. S. Gautam ◽  
...  

Crystallography is a powerful descriptor of the atomic structure of solid-state matter and can be applied to analyse the phenomena present in functional materials. Especially for ion diffusion – one of the main processes found in electrochemical energy storage materials – crystallography can describe and evaluate the elementary steps for the hopping of mobile species from one crystallographic site to another. By translating this knowledge into parameters and searching for similar values in other materials, promising compounds for future energy storage materials can be identified. Large crystal structure databases like the ICSD, CSD, and PCD have accumulated millions of measured crystal structures and thus represent valuable sources for future data mining and big-data approaches. In this work we want to present, on the one hand, crystallographic approaches based on geometric and crystal-chemical descriptors that can be easily applied to very large databases. On the other hand, we want to show methodologies based on ab initio and electronic modelling which can simulate the structural features more realistically, also incorporating dynamic processes. Their theoretical background, applicability, and selected examples are presented.

