Construction of a Generalized Computational Experiment and Visual Analysis of Multidimensional Data

Author(s):  
Aleksandr Bondarev ◽  
Vladimir Galaktionov

The work is devoted to the problems of constructing a generalized computational experiment in computational aerodynamics. The construction of a generalized computational experiment is based on the possibility of carrying out parallel calculations of the same problem with different input data in multitasking mode. This makes it possible to carry out parametric studies and to solve problems of optimization analysis. The results of such an experiment are multidimensional arrays, which should be studied using visual analytics methods. Constructing a generalized experiment makes it possible to obtain dependences of target functionals on the determining parameters of the problem under consideration. Implementing a generalized experiment yields a solution for a class of problems within the considered ranges, rather than for a single problem only. Examples of constructing a generalized computational experiment for various classes of problems in computational aerodynamics are presented. The article also provides an example of constructing such an experiment for a comparative assessment of the accuracy of numerical methods.
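The core idea of the abstract, running the same problem in parallel over a grid of input parameters and collecting the results into a multidimensional array, can be sketched as follows. The `solve` function and the two parameter ranges are hypothetical stand-ins, not part of the authors' software.

```python
# Minimal sketch of a generalized computational experiment: the same
# model is evaluated in parallel for every combination of key
# parameters, and the results form a multidimensional array.
from concurrent.futures import ThreadPoolExecutor
from itertools import product

import numpy as np

def solve(mach, alpha):
    """Hypothetical solver returning a scalar target functional."""
    return mach * np.cos(np.radians(alpha))

mach_range = np.linspace(2.0, 4.0, 5)     # key factor 1
alpha_range = np.linspace(10.0, 30.0, 5)  # key factor 2

# Every (Mach, angle) combination is an independent run.
machs, alphas = zip(*product(mach_range, alpha_range))
with ThreadPoolExecutor() as pool:
    results = list(pool.map(solve, machs, alphas))

# One array axis per key factor: here a 5x5 surface of the functional.
surface = np.array(results).reshape(len(mach_range), len(alpha_range))
```

In a real setting each `solve` call would be a full CFD run dispatched to a cluster node; the shape of the result array always matches the number of varied key factors.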

2021 ◽  
Vol 2127 (1) ◽  
pp. 012025
Author(s):  
A E Bondarev ◽  
A E Kuvshinnikov

In modern problems of mathematical modeling in computational gas dynamics, it is increasingly necessary to carry out parametric studies. In such studies, the key factors of the problem under consideration are varied with a chosen step within given ranges. Calculations of this kind can be carried out effectively by constructing a generalized computational experiment: a computational technology that combines the solution of mathematical modeling problems, parallel technologies, and visual analytics technologies. The results of a generalized computational experiment are multidimensional arrays, where the dimension of the arrays corresponds to the number of key factors. Processing and visual presentation of such arrays require solving a number of separate tasks. The processing and visual presentation of the results are carried out for target functionals represented as functions of many variables. The report presents examples of solving specific processing and visualization problems based on a generalized computational experiment implemented for a 3D cone in supersonic flow.
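The post-processing step described here, treating the target functional as a function of several key factors and extracting lower-dimensional views of it, can be sketched as below. The three parameter ranges and the functional `F` are illustrative assumptions, not the paper's actual data.

```python
# Sketch of post-processing a generalized-experiment result array:
# the target functional F lives on a grid of three key factors, and
# 2-D slices of it are extracted for visual presentation.
import numpy as np

mach = np.linspace(2.0, 6.0, 9)        # key factor 1
alpha = np.linspace(0.0, 20.0, 11)     # key factor 2
cone_angle = np.linspace(5.0, 25.0, 5) # key factor 3

# Hypothetical functional of three key factors (stand-in for solver output).
M, A, C = np.meshgrid(mach, alpha, cone_angle, indexing="ij")
F = M / (1.0 + np.tan(np.radians(A + C)))

# Fix one key factor to obtain a 2-D surface F(Mach, alpha) for plotting.
slice_idx = 2
surface = F[:, :, slice_idx]
```

Each fixed value of the third factor yields one such surface, which is the kind of object the visual analytics tools then render.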


Author(s):  
Aleksey Alekseev ◽  
Aleksandr Bondarev ◽  
Artem Kuvshinnikov

This work is devoted to the application of a generalized computational experiment to a comparative assessment of the accuracy of numerical methods. The construction of a generalized computational experiment is based on the simultaneous solution, using parallel computations in multitasking mode, of a basic problem with different input parameters, obtaining results in the form of multidimensional data volumes, and their visual analysis. This approach can be effective in problems of verification of numerical methods. A comparative assessment of the accuracy of the solvers of the open software package OpenFOAM is carried out. The classic inviscid problem of an oblique shock wave is used as the basic problem. Variations of the key parameters of the problem, the Mach number and the angle of attack, are considered. An example of constructing error surfaces is given for the comparison of the solvers of the OpenFOAM software package. The concept of an error index is introduced as an integral characteristic of deviations from the exact solution for each solver in the class of problems under consideration. The surfaces of deviations from the exact solution in the L2 norm, constructed for each solver, together with the calculated error indices, make it possible to obtain a complete picture of the accuracy of the solvers under consideration for the class of problems defined by the ranges of variation of the Mach number and angle of attack.
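The error surface and error index can be sketched as follows: for each (Mach, angle) point the numerical field is compared with the exact solution in the L2 norm, and the index aggregates the surface into one number. The "exact" and "numerical" fields below are synthetic stand-ins, and taking the mean as the integral characteristic is an assumption for illustration.

```python
# Sketch of the L2 error surface and error index over a class of
# problems defined by ranges of Mach number and angle of attack.
import numpy as np

mach = np.linspace(2.0, 4.0, 5)      # range of Mach numbers
alpha = np.linspace(10.0, 30.0, 5)   # range of angles of attack
x = np.linspace(0.0, 1.0, 100)       # spatial grid
dx = x[1] - x[0]

def exact_field(m, a):
    """Hypothetical exact solution on the grid."""
    return m * np.exp(-a * x / 30.0)

def solver_field(m, a):
    """Hypothetical numerical result: the exact field plus a small defect."""
    return exact_field(m, a) + 0.01 * np.sin(10.0 * x)

# Surface of deviations from the exact solution in the L2 norm.
err = np.empty((mach.size, alpha.size))
for i, m in enumerate(mach):
    for j, a in enumerate(alpha):
        diff = solver_field(m, a) - exact_field(m, a)
        err[i, j] = np.sqrt(np.sum(diff ** 2) * dx)

# Error index: a single integral characteristic of the whole surface.
error_index = float(err.mean())
```

One such surface per solver, plotted over the (Mach, angle) plane, is what allows the solvers to be ranked for the whole class of problems rather than for a single case.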


2017 ◽  
Vol 18 (1) ◽  
pp. 3-32 ◽  
Author(s):  
Boris Kovalerchuk ◽  
Vladimir Grishin

Preserving all multidimensional data in two-dimensional visualization is a long-standing problem in Visual Analytics, Machine Learning/Data Mining, and Multiobjective Pareto Optimization. While Parallel and Radial (Star) coordinates preserve all n-D data in two dimensions, they are not sufficient to address visualization challenges of all possible datasets, such as occlusion. More such methods are needed. Recently, the concepts of lossless General Line Coordinates that generalize Parallel, Radial, Cartesian, and other coordinates were proposed, with initial exploration and application of several subclasses of General Line Coordinates such as Collocated Paired Coordinates and Star Collocated Paired Coordinates. This article explores and enhances the benefits of General Line Coordinates. It shows ways to increase the expressiveness of General Line Coordinates, including decreasing occlusion and simplifying visual patterns while preserving all n-D data in two dimensions, by adjusting General Line Coordinates for given n-D datasets. The adjustments include relocating, rescaling, and other transformations of General Line Coordinates. One of the major sources of benefits of General Line Coordinates relative to Parallel Coordinates is that they use half as many points and lines in the visual representation of each n-D point. This article demonstrates the benefits of different General Line Coordinates for real-data visual analysis, such as health monitoring and benchmark Iris data classification, compared with results from Parallel Coordinates, Radvis, and Support Vector Machine. The experimental part of the article presents the results of an experiment with about 70 participants on the efficiency of visual pattern discovery using Star Collocated Paired Coordinates, Parallel, and Radial Coordinates. It shows advantages of visual discovery of n-D patterns using the General Line Coordinates subclass Star Collocated Paired Coordinates with n = 160 dimensions.
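The "half as many points" property of Collocated Paired Coordinates comes from mapping consecutive coordinate pairs of an n-D point to single 2-D points joined by a polyline. A minimal sketch of that mapping, assuming an even number of dimensions:

```python
# Sketch of Collocated Paired Coordinates (a General Line Coordinates
# subclass): an n-D point is split into consecutive pairs, each pair
# becomes one 2-D point, and the points are joined by a polyline --
# half as many plotted points as Parallel Coordinates would use.
def collocated_paired(point):
    """Map an even-dimensional point to its CPC polyline vertices."""
    if len(point) % 2 != 0:
        raise ValueError("this sketch needs an even number of dimensions")
    return [(point[i], point[i + 1]) for i in range(0, len(point), 2)]

p = (5.1, 3.5, 1.4, 0.2)       # a 4-D sample (Iris-like values)
vertices = collocated_paired(p)  # 2 vertices instead of 4 axis crossings
```

Parallel Coordinates would draw this 4-D point as four points on four axes with three connecting segments; CPC draws two points and one segment, which is the source of the reduced occlusion the article discusses.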


Author(s):  
Alexander Evgenyevich Bondarev ◽  
Artyom Evgenyevich Kuvshinnikov

This work is devoted to the study of the influence of variation of the controlled dissipative properties on the accuracy of the QGDFoam solver, and to the visual representation of this influence. The work continues a series of studies on the comparative assessment of the accuracy of various numerical methods and of solvers built on their basis. To carry out a comparative assessment, a generalized computational experiment for classes of problems with a reference solution is constructed and implemented. A generalized computational experiment, based on the synthesis of solutions of mathematical modeling problems, parallel technologies, and visual analysis tools, makes it possible to obtain solutions not only for individual problems, but for whole classes of problems determined by specified ranges of key parameters. Accordingly, a comparative assessment of the accuracy of numerical methods is also carried out for a class of problems. Earlier, a similar computational experiment was carried out for a comparative assessment of the accuracy of solvers of the OpenFOAM open source software package on the well-known classical problem of oblique shock wave formation. One of the solvers participating in the calculations, the QGDFoam solver, was the only one of all to have controlled dissipative properties. A new generalized computational experiment was implemented to study the effect of variation of the parameter that controls the dissipative properties. The goal was to reduce the error in comparison with the reference solution. The research results are presented in this work.
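The parameter study described here amounts to sweeping the dissipation-control coefficient over a range, measuring the deviation from the reference solution at each value, and selecting the value that minimizes the error. The error model below is a hypothetical stand-in for actual QGDFoam runs.

```python
# Sketch of the dissipation-parameter study: sweep the tunable
# coefficient, evaluate the deviation from the reference solution,
# and pick the value with the smallest error.
import numpy as np

alpha_qgd = np.linspace(0.1, 1.0, 10)   # dissipation-control parameter sweep
error = (alpha_qgd - 0.4) ** 2 + 0.01   # hypothetical error vs. reference

best = float(alpha_qgd[np.argmin(error)])
```

In the actual experiment each point of the sweep is a full solver run, and the error surfaces are inspected visually rather than reduced to a single argmin.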


Obesity Facts ◽  
2021 ◽  
pp. 1-11
Author(s):  
Marijn Marthe Georgine van Berckel ◽  
Saskia L.M. van Loon ◽  
Arjen-Kars Boer ◽  
Volkher Scharnhorst ◽  
Simon W. Nienhuijs

Introduction: Bariatric surgery results in both intentional and unintentional metabolic changes. In a high-volume bariatric center, extensive laboratory panels are used to monitor these changes pre- and postoperatively. Consecutive measurements of relevant biochemical markers allow exploration of the health state of bariatric patients and comparison of different patient groups. Objective: The objective of this study is to compare biomarker distributions over time between 2 common bariatric procedures, i.e., sleeve gastrectomy (SG) and gastric bypass (RYGB), using visual analytics. Methods: Both pre- and postsurgical (6, 12, and 24 months) data of all patients who underwent primary bariatric surgery were collected retrospectively. The distribution and evolution of different biochemical markers were compared before and after surgery using asymmetric beanplots in order to evaluate the effect of primary SG and RYGB. A beanplot is an alternative to the boxplot that allows an easy and thorough visual comparison of univariate data. Results: In total, 1,237 patients (659 SG and 578 RYGB) were included. The sleeve and bypass groups were comparable in terms of age and the prevalence of comorbidities. The mean presurgical BMI and the percentage of males were higher in the sleeve group. The effect of surgery on lowering of glycated hemoglobin was similar for both surgery types. After RYGB surgery, the decrease in the cholesterol concentration was larger than after SG. The enzymatic activity of aspartate aminotransferase, alanine aminotransferase, and alkaline phosphatase in sleeve patients was higher presurgically but lower postsurgically compared to bypass values. Conclusions: Beanplots allow intuitive visualization of population distributions. Analysis of this large population-based data set using beanplots suggests comparable efficacies of both types of surgery in reducing diabetes. RYGB surgery reduced dyslipidemia more effectively than SG. The trend toward a larger decrease in liver enzyme activities following SG is a subject for further investigation.
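An asymmetric beanplot of the kind described, one group's distribution on each side of a shared axis, is not built into matplotlib, but it can be sketched with two mirrored density estimates. All data and the choice of biomarker below are synthetic assumptions for illustration.

```python
# Sketch of an asymmetric beanplot: kernel density of one surgery group
# drawn to the left of the axis, the other group to the right.
import numpy as np
import matplotlib
matplotlib.use("Agg")                  # headless rendering
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
sleeve = rng.normal(42.0, 5.0, 659)    # hypothetical HbA1c values, SG group
bypass = rng.normal(40.0, 5.0, 578)    # hypothetical HbA1c values, RYGB group

grid = np.linspace(20.0, 65.0, 200)
left = gaussian_kde(sleeve)(grid)      # SG density, mirrored to the left
right = gaussian_kde(bypass)(grid)     # RYGB density, drawn to the right

fig, ax = plt.subplots()
ax.fill_betweenx(grid, -left, 0.0, label="SG")
ax.fill_betweenx(grid, 0.0, right, label="RYGB")
ax.set_ylabel("HbA1c (hypothetical units)")
ax.legend()
fig.savefig("beanplot.png")
```

The asymmetry makes within-marker group differences directly comparable along the shared vertical axis, which is what the study exploits across the full laboratory panel.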


BMC Genomics ◽  
2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Ratanond Koonchanok ◽  
Swapna Vidhur Daulatabad ◽  
Quoseena Mir ◽  
Khairi Reda ◽  
Sarath Chandra Janga

Background: Direct-sequencing technologies, such as Oxford Nanopore's, are delivering long RNA reads with great efficacy and convenience. These technologies afford an ability to detect post-transcriptional modifications at single-molecule resolution, promising new insights into the functional roles of RNA. However, realizing this potential requires new tools to analyze and explore this type of data. Results: Here, we present Sequoia, a visual analytics tool that allows users to interactively explore nanopore sequences. Sequoia combines a Python-based backend with a multi-view visualization interface, enabling users to import raw nanopore sequencing data in the Fast5 format, cluster sequences based on electric-current similarities, and drill down into signals to identify properties of interest. We demonstrate the application of Sequoia by generating and analyzing ~500k reads from direct RNA sequencing data of the human HeLa cell line. We focus on comparing signal features from m6A and m5C RNA modifications as the first step towards building automated classifiers. We show how, through iterative visual exploration and tuning of dimensionality reduction parameters, we can separate modified RNA sequences from their unmodified counterparts. We also document new, qualitative signal signatures that characterize these modifications relative to otherwise normal RNA bases, which we were able to discover from the visualization. Conclusions: Sequoia's interactive features complement existing computational approaches in nanopore-based RNA workflows. The insights gleaned through visual analysis should help users develop rationales, hypotheses, and insights into the dynamic nature of RNA. Sequoia is available at https://github.com/dnonatar/Sequoia.
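The separation workflow the abstract describes, embedding per-read current features with dimensionality reduction and clustering them, can be sketched as follows. The synthetic feature matrices stand in for real Fast5-derived signals, and PCA/k-means are illustrative choices, not necessarily the methods Sequoia uses internally.

```python
# Sketch of separating modified from unmodified reads: per-read current
# features are projected to 2-D and clustered; well-separated groups
# fall into distinct clusters.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
unmodified = rng.normal(0.0, 1.0, (200, 20))  # hypothetical current features
modified = rng.normal(3.0, 1.0, (200, 20))    # shifted signal, e.g. m6A-like
X = np.vstack([unmodified, modified])

embedding = PCA(n_components=2).fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embedding)
```

In the interactive tool this projection is what the user tunes and inspects visually instead of relying on a fixed clustering.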


2021 ◽  
Vol 11 (11) ◽  
pp. 4751
Author(s):  
Jorge-Félix Rodríguez-Quintero ◽  
Alexander Sánchez-Díaz ◽  
Leonel Iriarte-Navarro ◽  
Alejandro Maté ◽  
Manuel Marco-Such ◽  
...  

Among the knowledge areas in which process mining has had an impact, the audit domain is particularly striking. Traditionally, audits seek evidence in a data sample that allows making inferences about a population. Mistakes are often made when generalizing the results, and anomalies may remain in the unprocessed portions of the data; however, there are some efforts to address these limitations using process-mining-based approaches for fraud detection. To the best of our knowledge, no fraud audit method exists that combines process mining techniques and visual analytics to identify relevant patterns. This paper presents a fraud audit approach based on the combination of process mining techniques and visual analytics. The main advantages are: (i) a method is included that guides the use of the visual capabilities of process mining to detect fraud data patterns during an audit; (ii) the approach can be generalized to any business domain; (iii) well-known process mining techniques are used (dotted chart, trace alignment, fuzzy miner…). The techniques were selected by a group of experts and were extended to enable filtering for contextual analysis, to handle levels of process abstraction, and to facilitate implementation in the area of fraud audits. Based on the proposed approach, we developed a software solution that is currently being used in the financial sector as well as in the telecommunications and hospitality sectors. Finally, for demonstration purposes, we present a real hotel management use case in which we detected suspected fraud behaviors, thus validating the effectiveness of the approach.
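Of the techniques named, the dotted chart has the simplest structure: each event in the log becomes a dot at (timestamp, case), so temporal patterns across cases stand out. A minimal sketch of that coordinate scheme, with a synthetic event log that is purely an assumption:

```python
# Sketch of the dotted-chart coordinate scheme used in process mining:
# one dot per event, x = event time, y = case identifier.
import pandas as pd

log = pd.DataFrame({
    "case": ["c1", "c1", "c2", "c2", "c3"],
    "activity": ["check-in", "payment", "check-in", "payment", "check-in"],
    "timestamp": pd.to_datetime([
        "2021-05-01 10:00", "2021-05-01 10:05",
        "2021-05-01 11:00", "2021-05-01 11:45",
        "2021-05-02 09:30",
    ]),
})

# The dotted chart plots exactly these (time, case) pairs, often colored
# by activity; an auditor scans them for bursts, gaps, or odd orderings.
points = list(zip(log["timestamp"], log["case"]))
```

A case whose "payment" dot precedes its "check-in" dot, for example, would be immediately visible as a candidate fraud pattern.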


2019 ◽  
Vol 19 (1) ◽  
pp. 3-23
Author(s):  
Aurea Soriano-Vargas ◽  
Bernd Hamann ◽  
Maria Cristina F de Oliveira

We present an integrated interactive framework for the visual analysis of time-varying multivariate data sets. As part of our research, we performed in-depth studies concerning the applicability of visualization techniques to obtain valuable insights. We consolidated the considered analysis and visualization methods in one framework, called TV-MV Analytics. TV-MV Analytics effectively combines visualization and data mining algorithms providing the following capabilities: (1) visual exploration of multivariate data at different temporal scales, and (2) a hierarchical small multiples visualization combined with interactive clustering and multidimensional projection to detect temporal relationships in the data. We demonstrate the value of our framework for specific scenarios, by studying three use cases that were validated and discussed with domain experts.


2021 ◽  
Author(s):  
Taimur Khan ◽  
Syed Samad Shakeel ◽  
Afzal Gul ◽  
Hamza Masud ◽  
Achim Ebert

Visual analytics has been widely studied in the past decade, both in academia and industry, to improve data exploration, minimize overall cost, and improve data analysis. In this chapter, we explore the idea of visual analytics in the context of simulation data. This would then provide us with the capability not only to explore our data visually but also to apply machine learning models in order to answer high-level questions with respect to scheduling, choosing optimal simulation parameters, finding correlations, etc. More specifically, we examine state-of-the-art tools for performing the above-mentioned tasks. Further, to test and validate our methodology, we followed the human-centered design process to build a prototype tool called ViDAS (Visual Data Analytics of Simulated Data). Our preliminary evaluation study illustrates the intuitiveness and ease of use of our approach with regard to visual analysis of simulated data.

