RAVE: comprehensive open-source software for reproducible analysis and visualization of intracranial EEG data

2020 ◽  
Author(s):  
John F. Magnotti ◽  
Zhengjia Wang ◽  
Michael S. Beauchamp

Abstract
Direct recording of neural activity from the human brain using implanted electrodes (iEEG, intracranial electroencephalography) is a fast-growing technique in human neuroscience. While the ability to record from the human brain with high spatial and temporal resolution has advanced our understanding, it generates staggering amounts of data: a single patient can be implanted with hundreds of electrodes, each sampled thousands of times a second for hours or days. The difficulty of exploring these vast datasets is the rate-limiting step in discovery. To overcome this obstacle, we created RAVE (“R Analysis and Visualization of iEEG”). All components of RAVE, including the underlying “R” language, are free and open source. User interactions occur through a web browser, making it transparent to the user whether the back-end data storage and computation are occurring on a local machine, a lab server, or in the cloud. Without writing a single line of computer code, users can create custom analyses, apply them to data from hundreds of iEEG electrodes, and instantly visualize the results on cortical surface models. Multiple types of plots are used to display analysis results, each of which can be downloaded as publication-ready graphics with a single click. RAVE consists of nearly 50,000 lines of code designed to prioritize an interactive user experience, reliability, and reproducibility.
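The data volumes described above are easy to make concrete with back-of-the-envelope arithmetic; the figures below are illustrative assumptions consistent with the abstract's wording, not numbers from the paper:

```python
# Rough estimate of raw iEEG data volume (illustrative numbers only).
n_electrodes = 200        # "hundreds of electrodes"
sample_rate_hz = 2000     # "thousands of times a second"
hours = 24                # "hours or days"
bytes_per_sample = 4      # assuming 32-bit float samples

total_bytes = n_electrodes * sample_rate_hz * hours * 3600 * bytes_per_sample
print(f"{total_bytes / 1e9:.1f} GB per patient-day")  # ~138.2 GB
```

Even under these conservative assumptions, a single patient-day exceeds a hundred gigabytes, which is why interactive exploration tools matter.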

Author(s):  
Elly Mufida ◽  
David Wardana Agus Rahayu

The VoIP communication system at OMNI Hospital Alam Sutera uses an Elastix 2.5 server running the CentOS 5.11 operating system. Elastix 2.5 has been declared End of Life by its developer. The server's security is a serious concern, given that VoIP servers can be accessed from the internet. The iptables and fail2ban applications are used to limit and counter attempts to attack the VoIP server. One application that can be used as an open-source VoIP server is Issabel version 4.0. The migration from Elastix 2.5 to Issabel 4.0 proceeds by backing up all configurations in Elastix 2.5 through a web browser, including the configuration of endpoints, fax, e-mail, and Asterisk. After the backup file is downloaded, it is uploaded to the Issabel 4.0 application and the migration process is run. Adding a backup path as a failover connection is needed because VoIP communication between the OMNI Hospitals Group sites still uses a single path, so that when there is a problem on that path, communication stops. An EoIP tunnel is the protocol used as the backup path between the OMNI Hospitals Group sites.
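The failover behavior described above can be sketched minimally: prefer the primary inter-site path, and route over the EoIP backup tunnel only when the primary is down. The path names and the boolean health signals are hypothetical, not taken from the OMNI configuration:

```python
# Minimal failover selector between a primary inter-site path and an
# EoIP backup tunnel (names and health checks are illustrative only).
def select_path(primary_up: bool, backup_up: bool) -> str:
    """Return which inter-site path should carry VoIP traffic."""
    if primary_up:
        return "primary"
    if backup_up:
        return "eoip-backup"
    return "down"

# Normal operation uses the primary link; the EoIP tunnel carries
# traffic only while the primary path is unreachable.
assert select_path(True, True) == "primary"
assert select_path(False, True) == "eoip-backup"
```

In practice this decision is made by the routers' failover mechanism rather than application code; the sketch only illustrates the routing policy.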


Author(s):  
Ganesh Chandra Deka

NoSQL databases are designed to meet the huge data storage requirements of cloud computing and big data processing. NoSQL databases have many advanced features in addition to conventional RDBMS features; hence, “NoSQL” databases are popularly read as “Not only SQL” databases. A variety of NoSQL databases with different features for dealing with exponentially growing data-intensive applications are available, in both open-source and proprietary options. This chapter discusses some popular NoSQL databases and their features in the light of the CAP theorem.
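The CAP trade-off mentioned above can be made concrete with a toy two-replica key-value store: during a network partition, the store must either refuse writes (choosing consistency over availability) or accept them and let the replicas diverge (choosing availability over consistency). This is a deliberately simplified sketch, not modeled on any particular NoSQL product:

```python
class ToyReplicatedStore:
    """Two replicas of a key-value store; `partitioned` simulates a network split."""

    def __init__(self, prefer_consistency: bool):
        self.replicas = [{}, {}]
        self.partitioned = False
        self.prefer_consistency = prefer_consistency

    def write(self, key, value):
        if self.partitioned:
            if self.prefer_consistency:
                # CP behavior: refuse the write rather than diverge.
                raise RuntimeError("unavailable: cannot replicate during partition")
            # AP behavior: stay available, but only the local replica updates.
            self.replicas[0][key] = value
        else:
            for replica in self.replicas:
                replica[key] = value

    def consistent(self, key) -> bool:
        return self.replicas[0].get(key) == self.replicas[1].get(key)

# An AP store accepts writes during a partition, so the replicas
# temporarily disagree until they are reconciled.
ap = ToyReplicatedStore(prefer_consistency=False)
ap.partitioned = True
ap.write("x", 1)
print(ap.consistent("x"))  # False: replicas diverged
```

Real systems resolve the divergence later (eventual consistency), but the forced choice under partition is exactly what the CAP theorem formalizes.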


Author(s):  
Kevin Brock

The increasing prominence and variety of open source software (OSS) threaten to upset conventional approaches to software development and marketing. While a tremendous amount of scholarship has been published on the differences between proprietary and OSS development, little has been discussed regarding the effect of rhetorical appeals used to promote either type of software. This chapter offers just such an examination, focusing its scrutiny on the websites for three pairs of competitors (operating system, Web browser, and image manipulation program). The means by which the OSS websites promote their programs provide a significant set of insights into the potential trajectory of OSS development and its widespread public acceptance, in terms of both its initial philosophy and its perceived alternative nature to traditional software products and models.


Author(s):  
Sachin Arun Thanekar ◽  
K. Subrahmanyam ◽  
A.B. Bagwan

Nowadays we are all surrounded by big data. The term “Big Data” itself indicates huge volume, high velocity, variety, and veracity (i.e., uncertainty of data), which give rise to new difficulties and challenges. Hadoop is a framework that can be used for massive data storage and faster processing. It is freely available and easy to use and implement. Big data forensics is one of those challenges, and it makes knowing the internal details of Hadoop very important. Hadoop generates different files during its operation, and these can be used for forensics. Our paper focuses on digital forensics and the files generated during different Hadoop processes, giving a short description of each. With the help of the open-source tool Autopsy, we demonstrate how digital forensics can be performed using an automated tool, and thus how big data forensics can be done efficiently.
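A common first step when working with files like those Hadoop generates is to inventory them and record cryptographic hashes before any analysis, so that the integrity of the evidence can be verified later. A minimal sketch of that step (the directory layout and file names here are illustrative, not actual Hadoop paths, and this stands in for what a tool like Autopsy automates):

```python
import hashlib
import os

def inventory(root: str) -> dict:
    """Walk a directory tree and record the SHA-256 digest of every file,
    preserving an integrity baseline before analysis begins."""
    hashes = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            hashes[os.path.relpath(path, root)] = digest
    return hashes
```

Re-running `inventory` after analysis and comparing the two dictionaries shows whether any file was altered, which supports the chain of custody.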


2014 ◽  
Vol 53 (03) ◽  
pp. 202-207 ◽  
Author(s):  
M. Haag ◽  
L. R. Pilz ◽  
D. Schrimpf

Summary
Background: Clinical trials (CTs) are, in a wider sense, experiments to prove and establish the clinical benefit of treatments. Nowadays electronic data capture systems (EDCS) are used more often, bringing better data management and higher data quality into clinical practice. Electronic systems are also used for randomization, to assign patients to treatments.
Objectives: If the mentioned randomization system (RS) and EDCS are both used, possibly identical data are collected in both, especially with stratified randomization. This separated data storage may lead to data inconsistency, and in general the data samples have to be aligned. The article discusses solutions for combining RS and EDCS; one approach is realized in detail and introduced.
Methods: Different possible settings for combining EDCS and RS are determined, and the pros and cons of each solution are worked out. For the combination of two independent applications, the necessary communication interfaces are defined, taking existing standards into consideration. An example realization is implemented with the help of open-source applications and state-of-the-art software development procedures.
Results: Three possibilities of separate usage or combination of EDCS and RS are presented and assessed: i) completely independent usage of both systems; ii) realization of one system with both functions; and iii) two separate systems that communicate via defined interfaces. In addition, a realization of our preferred approach, the combination of both systems, is introduced using the open-source tools RANDI2 and OpenClinica.
Conclusion: The advantage of flexible, independent development of EDCS and RS is shown, based on the fact that these tools have very different features. In our opinion, the combination of both systems via defined interfaces fulfills the requirements of randomization and electronic data capture and is feasible in practice. In addition, such a setting can reduce training costs and avoid error-prone duplicated data entry.
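The interface idea in setting iii) can be illustrated with a minimal sketch: the EDC system sends the stratification factors it has already captured to the randomization service and receives the allocated arm back, so the same data never has to be typed in twice. The message fields and function below are hypothetical, not the actual RANDI2/OpenClinica interface:

```python
import random

# Shared generator so repeated calls draw successive allocations
# (a fixed seed keeps the sketch deterministic for demonstration).
_rng = random.Random(42)

def randomize(strata: dict, arms=("A", "B")) -> dict:
    """Stand-in for the randomization service: receives the stratification
    factors captured once in the EDC system and returns the allocated arm."""
    return {"strata": dict(strata), "arm": _rng.choice(arms)}

# The EDC system captures the factors once and forwards them over the
# interface, replacing a second, error-prone manual entry into the RS.
request = {"site": "01", "stage": "II"}
response = randomize(request)
print(response["arm"])
```

A production interface would of course add authentication, audit logging, and a proper allocation algorithm (e.g., permuted blocks per stratum); the point here is only that one entry of the strata serves both systems.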


2012 ◽  
Vol 10 (4) ◽  
pp. 17-25
Author(s):  
Miroslav Minovic ◽  
Miloš Milovanovic ◽  
Jelena Minovic ◽  
Dušan Starcevic

The authors present a learning platform based on a computer game. Learning games combine two industries, education and entertainment, a pairing often called “edutainment.” The game is realized as a strategy game (similar to Risk™), implemented as a module for the Moodle CMS using Java applet technology. Moodle is an open-source course management system (CMS) that is widely used among universities as an eLearning platform. Java applets enable the development of rich-client applications that execute in a web browser environment. During the game, players receive questions from a specified Moodle quiz, and all answers are stored back into the Moodle system. Students can later verify their score and answers, and examine the test they actually worked on during the game. The system supports both synchronous and asynchronous interaction between players.
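The question-and-answer loop described above can be sketched in a few lines: a question is drawn from a quiz, the answer is checked, and the attempt is written back to a shared store so it can be reviewed later. The data structures are illustrative stand-ins, not Moodle's actual API:

```python
# Minimal sketch of the game loop: questions come from a quiz, and every
# answer is recorded in a shared store (standing in for Moodle's quiz
# attempts) so students can review their work afterwards.
quiz = [{"id": 1, "q": "Capital of France?", "answer": "Paris"}]
responses = []

def answer_question(player: str, question: dict, given: str) -> bool:
    """Check an answer and store the attempt for later review."""
    correct = given == question["answer"]
    responses.append({"player": player, "qid": question["id"],
                      "given": given, "correct": correct})
    return correct

answer_question("alice", quiz[0], "Paris")
print(responses[0]["correct"])  # True
```

In the real platform this write-back happens over the applet's connection to Moodle, which is what lets scores and answers appear in the student's normal quiz history.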


Blood ◽  
2011 ◽  
Vol 118 (21) ◽  
pp. 4763-4763
Author(s):  
William T. Tse ◽  
Kevin K. Duh ◽  
Morris Kletzel

Abstract 4763
Data collection and analysis in clinical studies in hematology often require the use of specialized databases, which demand extensive information technology (IT) support and are expensive to maintain. With the goal of reducing the cost of clinical trials and promoting outcomes research, we have devised a new informatics framework that is low-cost, low-maintenance, and adaptable to both small- and large-scale clinical studies. This framework is based on the idea that most clinical data are hierarchical in nature: a clinical protocol typically entails the creation of sequential patient files, each of which documents multiple encounters, during which clinical events and data are captured and tagged for later retrieval and analysis. These hierarchical trees of clinical data can easily be stored in the Hypertext Markup Language (HTML) document format, which is designed to represent similar hierarchical data on web pages. In this framework, the stored clinical data will be structured according to a web standard called the Document Object Model (DOM), for which powerful informatics techniques have been developed to allow efficient retrieval and collation of data from HTML documents. The proposed framework has many potential advantages. The data will be stored in plain-text files in the HTML format, which is both human- and machine-readable, hence facilitating data exchange between collaborative groups. The framework requires only a regular web browser to function, thereby easing its adoption in multiple institutions. There will be no need to set up or maintain a relational database for data storage, thus minimizing data fragmentation and reducing the demand for IT support. Data entry and analysis will be performed mostly on the client computer, requiring the use of a backend server only for central data storage.
Utility programs for data management and manipulation will be written in JavaScript and the jQuery library, tools that are free, open source, and easy to maintain. Data can be captured, retrieved, and analyzed on different devices, including desktop computers, tablets, and smartphones. Encryption and password protection can be applied in document storage and data transmission to ensure data security and HIPAA compliance. In a pilot project to implement and test this informatics framework, we designed prototype programming modules to perform individual tasks commonly encountered in clinical data management. The functionalities of these modules included user-interface creation, patient data entry and retrieval, visualization and analysis of aggregate results, and exporting and reporting of extracted data. These modules were used to access simulated clinical data stored on a remote server, employing the standard web browsers available on all desktop computers and mobile devices. To test the capability of these modules, benchmark tests were performed. Simulated datasets of complete patient records, each with 1000 data items, were created and stored on the remote server. Data were retrieved via the web in a gzip-compressed format. Retrieval of 100, 300, and 1000 such records took only 1.01, 2.45, and 6.67 seconds, respectively, using a desktop computer via a broadband connection, or 3.67, 11.39, and 30.23 seconds using a tablet computer via a 3G connection. Filtering of specific data from the retrieved records was equally speedy. Automated extraction of relevant data from 300 complete records for a two-sample t-test analysis took 1.97 seconds. A similar extraction of data for a Kaplan-Meier survival analysis took 4.19 seconds. The program allowed the data to be presented separately for individual patients or in aggregate for different clinical subgroups. A user-friendly interface enabled viewing of the data in either tabular or graphical form.
Incorporation of a new web browser technique permitted caching of the entire dataset locally for offline access and analysis. Adaptable programming allowed efficient export of data in different formats for regulatory reporting purposes. Once the system was set up, no further intervention from the IT department was necessary. In summary, we have designed and implemented a prototype of a new informatics framework for clinical data management, which should be low-cost and highly adaptable to various types of clinical studies. Field-testing of this framework in real-life clinical studies will be the next step to demonstrate its effectiveness and potential benefits. Disclosures: No relevant conflicts of interest to declare.
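The core idea above, clinical data stored as a hierarchical HTML tree and collated by walking the DOM, can be illustrated with the standard library alone. The markup, class names, and data attributes below are hypothetical examples of such a schema, not the authors' actual format:

```python
import xml.etree.ElementTree as ET

# A patient file as a hierarchical (X)HTML fragment:
# patient -> encounters -> tagged data items.
record = """
<div class="patient" id="pt-001">
  <div class="encounter" data-date="2011-03-01">
    <span class="item" data-name="wbc">5.2</span>
    <span class="item" data-name="hgb">13.1</span>
  </div>
  <div class="encounter" data-date="2011-04-01">
    <span class="item" data-name="wbc">4.8</span>
  </div>
</div>
"""

def collect(doc: str, name: str) -> list:
    """Walk the element tree and collate every value tagged with `name`,
    mimicking DOM-based retrieval across encounters."""
    root = ET.fromstring(doc)
    return [float(el.text) for el in root.iter("span")
            if el.get("class") == "item" and el.get("data-name") == name]

print(collect(record, "wbc"))  # [5.2, 4.8]
```

Because the record is plain text, it stays human-readable, and the same traversal generalizes to longitudinal extraction (e.g., pulling one lab value across all encounters for a t-test or survival analysis), which is the kind of collation the abstract benchmarks.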

