DTI data analysis: application of fiber tracking to group averaged data sets

2010 ◽ Vol 41 (01) ◽ Author(s): HP Müller, A Unrath, A Riecker, AC Ludolph, J Kassubek

2018 ◽ Vol 20 (1) ◽ Author(s): Tiko Iyamu

Background: Over the years, big data analytics has been carried out statically, in a programmed way that does not allow data sets to be interpreted from a subjective perspective. This approach limits the understanding of why and how data sets manifest themselves in the various forms that they do, which has a negative impact on the accuracy, redundancy and usefulness of data sets and, in turn, on the value of operations and the competitive effectiveness of an organisation. The current single-level approach also lacks the detailed examination of data sets that big data deserves in order to improve purposefulness and usefulness. Objective: The purpose of this study was to propose a multilevel approach to big data analysis, including an examination of how a sociotechnical theory, actor-network theory (ANT), can be used to complement analytic tools in big data analysis. Method: Qualitative methods were employed from an interpretivist perspective. Results: From the findings, a framework that offers big data analytics at two levels, micro- (strategic) and macro- (operational), was developed. Based on the framework, a model was developed that can be used to guide the analysis of heterogeneous data sets that exist within networks. Conclusion: The multilevel approach ensures a fully detailed analysis, which is intended to increase accuracy, reduce redundancy and put the manipulation and manifestation of data sets into perspective for improved organisational competitiveness.


F1000Research ◽ 2014 ◽ Vol 3 ◽ pp. 146 ◽ Author(s): Guanming Wu, Eric Dawson, Adrian Duong, Robin Haw, Lincoln Stein

High-throughput experiments are routinely performed in modern biological studies. However, extracting meaningful results from massive experimental data sets is a challenging task for biologists. Projecting data onto pathway and network contexts is a powerful way to unravel patterns embedded in seemingly scattered large data sets and assist knowledge discovery related to cancer and other complex diseases. We have developed a Cytoscape app called “ReactomeFIViz”, which utilizes a highly reliable gene functional interaction network and human curated pathways from Reactome and other pathway databases. This app provides a suite of features to assist biologists in performing pathway- and network-based data analysis in a biologically intuitive and user-friendly way. Biologists can use this app to uncover network and pathway patterns related to their studies, search for gene signatures from gene expression data sets, reveal pathways significantly enriched by genes in a list, and integrate multiple genomic data types into a pathway context using probabilistic graphical models. We believe our app will give researchers substantial power to analyze intrinsically noisy high-throughput experimental data to find biologically relevant information.
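The pathway-enrichment capability described here is, at its heart, an over-representation test. The following Python sketch shows the kind of hypergeometric calculation such a feature typically relies on; the counts are invented for illustration, and this is not ReactomeFIViz's actual implementation, which would also correct the resulting p-values for multiple testing across all pathways.

```python
from scipy.stats import hypergeom

# Hypothetical counts for illustration only.
N = 20000   # background genes (e.g., all annotated human genes)
K = 150     # genes annotated to the pathway being tested
n = 300     # genes in the user's input list
k = 12      # overlap: input genes that fall in the pathway

# Probability of seeing an overlap of at least k genes
# under random sampling without replacement.
p_value = hypergeom.sf(k - 1, N, K, n)
print(f"enrichment p-value: {p_value:.3g}")
```

In practice this test is repeated for every pathway in the database, and the resulting p-values are adjusted (e.g., by false discovery rate) before any pathway is reported as significantly enriched.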


Author(s): Terence W. Cavanaugh, Nicholas P. Eastham

Educational technologists are often asked to provide assistance in the identification or creation of assistive technologies for students. Individuals with visual impairments attending graduate schools are expected to be able to work with data sets, including reading, interpreting, and sharing findings with others in their field, but due to their impairments may not be able to work with standard displays. The cost and time involved in preparing adapted graphs based on student research data for individuals with visual impairments can be prohibitive. This chapter introduces a method for the rapid prototyping of tactile graphs for students to use in data analysis through the use of spreadsheets, internet-based conversion tools, and a 3D printer.
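The chapter's workflow relies on spreadsheets and internet-based conversion tools; purely as an illustration of the underlying idea of turning a data column into printable geometry, the Python sketch below reads values from a CSV exported from a spreadsheet and writes an OpenSCAD script for a tactile bar chart that can be rendered to an STL and printed. The file names, dimensions, and choice of OpenSCAD are assumptions, not the method described in the chapter.

```python
import csv

# Illustrative assumptions: one positive numeric column named "value" in
# data.csv, 10 mm wide bars on a 2 mm base plate, tallest bar 30 mm high.
with open("data.csv", newline="") as f:
    values = [float(row["value"]) for row in csv.DictReader(f)]

bar_w, gap, max_h, base_t = 10.0, 2.0, 30.0, 2.0
scale = max_h / max(values)

plate_len = len(values) * (bar_w + gap) + gap
lines = [f"cube([{plate_len:.1f}, {bar_w + 2 * gap:.1f}, {base_t}]);"]  # base plate
for i, v in enumerate(values):
    x = gap + i * (bar_w + gap)
    lines.append(
        f"translate([{x:.1f}, {gap:.1f}, {base_t}]) "
        f"cube([{bar_w}, {bar_w}, {v * scale:.1f}]);"
    )

with open("tactile_chart.scad", "w") as f:
    f.write("\n".join(lines))
# Open tactile_chart.scad in OpenSCAD, export an STL, then slice and print it.
```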


Author(s): Hocine Chebi

The number of hits to web pages continues to grow, and the web has become one of the most popular platforms for disseminating and retrieving information. Consequently, many website operators want to analyze the use of their sites in order to respond better to the expectations of internet users. However, the way a website is visited can change depending on a variety of factors, so usage models must be updated continuously in order to reflect visitor behavior accurately. This remains difficult when the time dimension is neglected or simply introduced as an additional numeric attribute in the description of the data.

Data mining is defined as the application of data analysis and discovery algorithms to large databases with the goal of discovering non-trivial models. Several algorithms have been proposed to formalize the newly discovered models, to build more efficient models, to process new types of data, and to measure differences between data sets. Most traditional data mining algorithms, however, assume that models are static and do not take into account their possible evolution over time. These considerations have motivated significant effort in the analysis of temporal data and in adapting static data mining methods to data that evolves over time. A review of the main aspects of data mining addressed in this work constitutes the body of this chapter, followed by a review of the state of the art in this field and a discussion of the major open issues.

Interest in temporal databases has increased considerably in recent years, for example in finance, telecommunications and surveillance, and a growing number of prototypes and systems explicitly take the time dimension of data into account, for example to study how analysis results vary over time. To model an application, it is necessary to choose a common language that is precise and known by all members of a team. UML (Unified Modeling Language) is an object-oriented modeling language standardized by the OMG. This chapter presents the modeling with package and class diagrams built using UML, presents the conceptual data model and, finally, specifies the SQL queries used to extract descriptive statistical variables of the navigations from a warehouse containing the preprocessed usage data.
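As a rough illustration of that final step, the Python sketch below runs a SQL query that derives simple descriptive statistics of the navigations (pages per session, distinct pages, session duration) from a preprocessed usage table. The database file, table, and column names are assumptions for illustration, not the warehouse schema used in the chapter.

```python
import sqlite3

# Assumed preprocessed usage table: page_views(session_id, url, view_ts),
# with view_ts stored as Unix seconds. The schema is illustrative only.
conn = sqlite3.connect("usage_warehouse.db")

query = """
SELECT session_id,
       COUNT(*)                    AS pages_per_session,
       COUNT(DISTINCT url)         AS distinct_pages,
       MAX(view_ts) - MIN(view_ts) AS session_duration_s
FROM page_views
GROUP BY session_id;
"""

for session_id, pages, distinct_pages, duration in conn.execute(query):
    print(session_id, pages, distinct_pages, duration)
conn.close()
```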


Author(s): Xiaohui Liu

Intelligent Data Analysis (IDA) is an interdisciplinary study concerned with the effective analysis of data. IDA draws its techniques from diverse fields, including artificial intelligence, databases, high-performance computing, pattern recognition, and statistics. These fields often complement each other (e.g., many statistical methods, particularly those for large data sets, rely on computation, but brute computing power is no substitute for statistical knowledge) (Berthold & Hand, 2003; Liu, 1999).


Author(s): Xiao Wang

The purpose of this study is to examine the relationship between the diagnosis and the cost of patient care for those with diabetes in Medicare. In this analysis, the author used data sets covering outpatient claims, inpatient claims, and beneficiary demographic information for the year 2004, all taken from the Chronic Condition Data Warehouse provided by the Centers for Medicare and Medicaid Services. The author analyzed the cases of diabetic inpatients and outpatients using different methods. For the outpatient analysis, exploratory data analysis and linear models were used. The results show that the total charges for diagnoses are reduced considerably at payment. The distribution of the total charges follows a Gamma distribution, and the output of the generalized linear model demonstrates that only 15 of the top 20 primary treatments by charges are statistically significant with respect to outpatient expenditures.
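The modelling steps mentioned in the abstract, a Gamma distribution for total charges and a generalized linear model for outpatient expenditures, can be sketched roughly as follows; the file name, column names, and the log link are assumptions for illustration, not the author's exact specification.

```python
import pandas as pd
import statsmodels.api as sm

# Assumed layout: one row per outpatient claim, a positive total_charge
# column, and a categorical primary_treatment code (names are hypothetical).
claims = pd.read_csv("outpatient_claims_2004.csv")

y = claims["total_charge"]
X = sm.add_constant(
    pd.get_dummies(claims["primary_treatment"], drop_first=True, dtype=float)
)

# Gamma GLM with a log link, a common choice for right-skewed charge data.
model = sm.GLM(y, X, family=sm.families.Gamma(link=sm.families.links.Log()))
result = model.fit()
print(result.summary())  # coefficient p-values show which treatments are significant
```

A Gamma family with a log link suits strictly positive, right-skewed cost data; the per-treatment p-values in the summary correspond to the kind of significance statement quoted in the abstract.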

