Proof of correctness of data representations

Author(s):  
C. A. R. Hoare
1972 ◽  
Vol 1 (4) ◽  
pp. 271-281 ◽  

Author(s):  
Antonio Giovannetti ◽  
Gianluca Susi ◽  
Paola Casti ◽  
Arianna Mencattini ◽  
Sandra Pusil ◽  
...  

Abstract: In this paper, we present the novel Deep-MEG approach, in which image-based representations of magnetoencephalography (MEG) data are combined with ensemble classifiers based on deep convolutional neural networks. To predict the early signs of Alzheimer’s disease (AD), functional connectivity (FC) measures between the brain bio-magnetic signals originating from spatially separated brain regions are used as MEG data representations for the analysis. After stacking the FC indicators for different frequency bands into multiple images, a deep transfer learning model is used to extract different sets of deep features and to derive improved classification ensembles. The proposed Deep-MEG architectures were tested on a set of resting-state MEG recordings and their corresponding magnetic resonance imaging scans from a longitudinal study involving 87 subjects. Accuracy values of 89% and 87% were obtained, respectively, for the early prediction of AD conversion in a sample of 54 mild cognitive impairment subjects and in a sample of 87 subjects including 33 healthy controls. These results indicate that the proposed Deep-MEG approach is a powerful tool for detecting early alterations in the spectral–temporal connectivity profiles and in their spatial relationships.
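The pipeline described in the abstract can be sketched as follows: per-band FC matrices are stacked into a multi-channel image, a pretrained CNN is used as a frozen deep-feature extractor (transfer learning), and an ensemble of simple classifiers votes on the prediction. All names, shapes, band choices, and model selections below are illustrative assumptions, not the authors' exact implementation.

```python
# Hypothetical sketch of a Deep-MEG-style pipeline: stacked FC images ->
# pretrained CNN features -> classification ensemble. Assumed dimensions
# and models throughout.
import numpy as np
import torch
import torchvision.models as models
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

N_REGIONS = 64                               # assumed number of brain regions
BANDS = ("theta", "alpha", "beta")           # assumed frequency bands

def stack_fc_images(fc_per_band):
    """Stack one FC matrix per frequency band into a (bands, H, W) image."""
    return np.stack([fc_per_band[b] for b in BANDS], axis=0)

# Pretrained CNN as a frozen feature extractor (transfer learning).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()            # drop the classification head
backbone.eval()

def deep_features(image):
    """Map a stacked FC image to a deep feature vector."""
    x = torch.from_numpy(image).float().unsqueeze(0)
    # ResNet expects 3 input channels; here the 3 bands play that role.
    with torch.no_grad():
        return backbone(x).squeeze(0).numpy()

# Ensemble of base classifiers over the deep features.
ensemble = VotingClassifier([
    ("lr", LogisticRegression(max_iter=1000)),
    ("svm", SVC(probability=True)),
], voting="soft")

# Usage with random stand-in data (no real MEG recordings here).
rng = np.random.default_rng(0)
X = np.array([
    deep_features(stack_fc_images(
        {b: rng.random((N_REGIONS, N_REGIONS)) for b in BANDS}))
    for _ in range(10)
])
y = rng.integers(0, 2, size=10)              # 0 = stable, 1 = converts to AD
ensemble.fit(X, y)
```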


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Sean A. Mochocki ◽  
Gary B. Lamont ◽  
Robert C. Leishman ◽  
Kyle J. Kauffman

Abstract: Database queries are one of the most important functions of a relational database. Users are interested in viewing a variety of data representations, and these may vary based on database purpose and the nature of the stored data. The Air Force Institute of Technology has approximately 100 data logs that will be converted to the standardized Scorpion Data Model format. A relational database is designed to house this data and its associated sensor and non-sensor metadata. Deterministic polynomial-time queries were used to test the performance of this schema against two other schemas, with databases of 100 and 1000 logs of repeated data and randomized metadata. Of these approaches, the one with the best performance was chosen as AFIT’s database solution, and now more complex and useful queries need to be developed to enable filter research. To this end, consider the combined Multi-Objective Knapsack/Set Covering Database Query. Algorithms that address the Set Covering Problem or the Knapsack Problem could be used individually to achieve useful results, but together they could offer additional power to a potential user. This paper explores the NP-Hard problem domain of the Multi-Objective KP/SCP, proposes Genetic and Hill Climber algorithms, implements these algorithms in Java, populates their data structures using SQL queries from two test databases, and finally compares how these algorithms perform.
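To make the combined problem concrete, here is a minimal hill-climber sketch for a toy Knapsack/Set-Covering instance in the spirit of the algorithms described above. The paper's implementation is in Java and populated from SQL queries; this Python version, the instance data, and the penalty weighting are all illustrative assumptions.

```python
# Toy Multi-Objective KP/SCP hill climber: maximize profit under a knapsack
# capacity while covering every element of a universe (uncovered elements
# are penalized). All numbers are made up for illustration.
import random

# Each item has a weight, a profit, and a set of elements it covers.
items = [
    {"weight": 4, "profit": 10, "covers": {1, 2}},
    {"weight": 3, "profit": 6,  "covers": {2, 3}},
    {"weight": 5, "profit": 8,  "covers": {4}},
    {"weight": 2, "profit": 5,  "covers": {1, 4}},
]
CAPACITY = 9
UNIVERSE = {1, 2, 3, 4}

def evaluate(selection):
    """Score a 0/1 selection: profit minus a penalty per uncovered element;
    invalid (-inf) if the knapsack capacity is exceeded."""
    picked = [it for it, s in zip(items, selection) if s]
    if sum(it["weight"] for it in picked) > CAPACITY:
        return float("-inf")
    covered = set().union(*(it["covers"] for it in picked)) if picked else set()
    penalty = 100 * len(UNIVERSE - covered)   # assumed penalty weight
    return sum(it["profit"] for it in picked) - penalty

def hill_climb(steps=1000, seed=0):
    rng = random.Random(seed)
    best = [rng.randint(0, 1) for _ in items]
    best_score = evaluate(best)
    for _ in range(steps):
        neighbor = best[:]
        neighbor[rng.randrange(len(items))] ^= 1   # flip one item in/out
        score = evaluate(neighbor)
        if score >= best_score:                    # accept ties to cross plateaus
            best, best_score = neighbor, score
    return best, best_score

print(hill_climb())   # e.g. ([1, 1, 0, 1], 21)
```

A genetic algorithm would replace the single-neighbor loop with a population, crossover, and mutation, but the fitness function above could serve both searches unchanged.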


Author(s):  
Giles Reger ◽  
David Rydeheard

Abstract: Parametric runtime verification is the process of verifying properties of execution traces of (data-carrying) events produced by a running system. This paper continues our work exploring the relationship between specification techniques for parametric runtime verification. Here we consider the correspondence between trace-slicing automata-based approaches and rule systems. The main contribution is a translation from quantified automata to rule systems, which has been implemented in Scala. This then allows us to highlight the key differences in how the two formalisms handle data, an important step in our wider effort to understand the correspondence between different specification languages for parametric runtime verification. This paper extends a previous conference version with further examples, a proof of correctness, and an optimisation based on a notion of redundancy observed during the development of the translation.
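For readers unfamiliar with the automata side of the correspondence, the following sketch shows parametric trace slicing: events carrying data are projected onto per-binding slices, and one copy of a property automaton runs per slice. The property (every opened file is eventually closed) and all names here are assumed for illustration; the paper's actual translation targets rule systems and is implemented in Scala, not Python.

```python
# Minimal parametric trace slicing: split a trace of (event, data) pairs by
# data value, then run a small property automaton over each slice.
from collections import defaultdict

# Property automaton over the alphabet {open, close}, per file binding.
TRANSITIONS = {
    ("start", "open"):   "opened",
    ("opened", "close"): "closed",
}

def slice_and_check(trace):
    """Return the final automaton state for each data binding; any binding
    not ending in 'closed' indicates a violation of the assumed property."""
    slices = defaultdict(list)
    for event, value in trace:                 # e.g. ("open", "f1")
        slices[value].append(event)
    verdicts = {}
    for value, events in slices.items():
        state = "start"
        for e in events:
            state = TRANSITIONS.get((state, e), "error")
        verdicts[value] = state
    return verdicts

trace = [("open", "f1"), ("open", "f2"), ("close", "f1")]
print(slice_and_check(trace))   # {'f1': 'closed', 'f2': 'opened'} -> f2 violates
```

A rule-system encoding of the same property would instead keep a set of facts (e.g. `Opened(f)`) and rewrite them as events arrive, which is where the two formalisms diverge in how they handle data.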


2009 ◽  
Vol 72 (7-9) ◽  
pp. 1547-1555 ◽  
Author(s):  
Kai Labusch ◽  
Erhardt Barth ◽  
Thomas Martinetz

2021 ◽  
Author(s):  
Jacob Hendriks ◽  
Patrick Dumond

Abstract: This paper demonstrates various data augmentation techniques that can be used when working with limited run-to-failure data to estimate health indicators related to the remaining useful life of roller bearings. The PRONOSTIA bearing prognosis dataset is used for benchmarking data augmentation techniques. The inputs to the networks are multi-dimensional frequency representations obtained by combining the spectra taken from two accelerometers. Data augmentation techniques are adapted from other machine learning fields and include adding Gaussian noise, region masking, masking noise, and pitch shifting. Augmented datasets are used in training a conventional CNN architecture comprising two convolutional and pooling layer sequences with batch normalization. Results from individually separating each bearing’s data for the purpose of validation show that all methods, except pitch shifting, give improved validation accuracy on average. Masking noise and region masking both show the added benefit of dataset regularization by giving results that are more consistent after repeatedly training each configuration with new randomly generated augmented datasets. It is shown that gradually deteriorating bearings and bearings with abrupt failure are not treated significantly differently by the augmentation techniques.
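The four augmentations named in the abstract are straightforward to state on a 2-D frequency representation (accelerometer spectra stacked as channels). The sketch below is a hedged interpretation: parameter values, array shapes, and the exact form of each transform are illustrative assumptions, not the settings used with PRONOSTIA.

```python
# Illustrative implementations of the four augmentations applied to a
# (channels, frequency_bins) spectrum array. Parameters are assumed.
import numpy as np

rng = np.random.default_rng(42)

def add_gaussian_noise(spec, sigma=0.01):
    """Additive Gaussian noise over the whole spectrum."""
    return spec + rng.normal(0.0, sigma, spec.shape)

def region_mask(spec, width=20):
    """Zero out one contiguous band of frequency bins (region masking)."""
    out = spec.copy()
    start = rng.integers(0, spec.shape[1] - width)
    out[:, start:start + width] = 0.0
    return out

def masking_noise(spec, p=0.1):
    """Zero out randomly chosen individual bins with probability p."""
    return spec * (rng.random(spec.shape) >= p)

def pitch_shift(spec, bins=3):
    """Shift content along the frequency axis, akin to pitch shifting."""
    return np.roll(spec, bins, axis=1)

# Usage: augment one two-channel spectrum of 1024 frequency bins.
spec = rng.random((2, 1024))
augmented = [f(spec) for f in (add_gaussian_noise, region_mask,
                               masking_noise, pitch_shift)]
```

The regularizing effect reported for region masking and masking noise is consistent with their dropout-like behavior: the network cannot rely on any single frequency band surviving augmentation.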

