Brain Predictability toolbox: a Python library for neuroimaging-based machine learning

Author(s):  
Sage Hahn ◽  
De Kang Yuan ◽  
Wesley K Thompson ◽  
Max Owens ◽  
Nicholas Allgaier ◽  
...  

Abstract: The Brain Predictability toolbox (BPt) is a unified framework of machine learning (ML) tools designed to work with both tabulated data (e.g. brain-derived, psychiatric, behavioral, and physiological variables) and neuroimaging-specific data (e.g. brain volumes and surfaces). The package is suitable for investigating a wide range of neuroimaging-based ML questions, in particular those posed on large human datasets. Availability and implementation: BPt is developed as an open-source Python 3.6+ package hosted at https://github.com/sahahn/BPt under the MIT License, with documentation provided at https://bpt.readthedocs.io/en/latest/, and continues to be actively developed. The project can be downloaded through the GitHub link provided. A web GUI based on the same code is currently under development and can be set up through Docker with instructions at https://github.com/sahahn/BPt_app.
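As a rough illustration of the kind of tabular, neuroimaging-derived prediction task BPt targets, the sketch below uses plain scikit-learn rather than BPt's own API (which is documented at the link above); the file name and the "vol_" / "age" column names are hypothetical placeholders.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical tabular dataset: one row per subject, brain-derived volume
# columns prefixed "vol_" plus an "age" phenotype column to predict.
df = pd.read_csv("brain_volumes.csv")
X = df.filter(like="vol_")
y = df["age"]

# Cross-validated regression pipeline over the tabulated brain features.
model = make_pipeline(StandardScaler(),
                      RandomForestRegressor(n_estimators=200, random_state=0))
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"5-fold CV R^2: {scores.mean():.3f} +/- {scores.std():.3f}")
```

A cross-validated pipeline of this shape is the baseline pattern that toolboxes like BPt wrap with dataset handling and neuroimaging-specific loaders.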

Today, with the enormous generation and availability of time series and streaming data, there is an increasing need for automated analysis architectures that deliver fast interpretations and results. One of the significant potentials of streaming analytics is to train and model each stream with unsupervised machine learning (ML) algorithms to detect anomalous behaviors, fuzzy patterns, and accidents in real time. If executed reliably, such anomaly detection can be highly valuable for the application. In this paper, we propose a dynamic threshold setting system, denoted Thresh-Learner, aimed mainly at Internet of Things (IoT) applications that require anomaly detection. The proposed model enables a wide range of real-life applications where a dynamic threshold over streaming data is needed to detect anomalies and accidents or to send alerts to distant monitoring stations. As a motivating case, we consider anomalies and accidents in coal mines caused by coal fires and explosions, which lead to loss of life when automated alarm systems are lacking. We propose Thresh-Learner as a general-purpose implementation for setting dynamic thresholds and illustrate it through a Smart Helmet for coal mine workers that seamlessly integrates monitoring, analysis, and dynamic thresholds using IoT and analysis on the cloud.
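The abstract does not describe Thresh-Learner's internals, so the following is only an illustrative sketch of a dynamic threshold over a sensor stream using a rolling mean and standard deviation; the window size, the k factor, and the simulated gas readings are all assumptions.

```python
from collections import deque
import statistics

def stream_alerts(readings, window=60, k=3.0):
    """Yield (value, is_anomaly) pairs; the threshold adapts to recent history."""
    history = deque(maxlen=window)
    for x in readings:
        if len(history) >= 10:  # wait for a minimal history before alerting
            mean = statistics.fmean(history)
            std = statistics.pstdev(history) or 1e-9
            yield x, x > mean + k * std  # dynamic upper threshold
        else:
            yield x, False
        history.append(x)

# Simulated gas-concentration stream with a single spike at the end.
demo = [20.0 + 0.05 * i for i in range(200)] + [95.0]
for value, alarm in stream_alerts(demo):
    if alarm:
        print(f"ALERT: reading {value} exceeds the dynamic threshold")
```

Because the threshold tracks the recent window rather than a fixed constant, a slow drift in normal readings does not trigger alarms while a sudden spike does.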


Author(s):  
Yuan Zhao ◽  
Tieke He ◽  
Zhenyu Chen

Assigning bug reports to individual developers is typically a manual, time-consuming, and tedious task. Although some machine learning techniques have been adopted to alleviate this dilemma, they focus mainly on open-source projects, which use traditional repositories such as Bugzilla to manage their bug reports. With the boom of the mobile Internet, new requirements and methods of software testing are emerging, especially crowdsourced testing. Unlike bug reports from traditional channels, which are often heavyweight (i.e., standardized with detailed attribute localization), bug reports in the context of crowdsourced testing tend to be lightweight. To exploit the differences in bug report assignment in this new setting, a unified bug report assignment framework is proposed in this paper. The framework is capable of handling both traditional heavyweight bug reports and lightweight ones by (i) first preprocessing the bug reports and performing feature selection, (ii) then tuning the parameters that indicate the ratios of choosing different methods to vectorize bug reports, and (iii) finally applying classification algorithms to assign bug reports. Extensive experiments are conducted on three datasets to evaluate the proposed framework. The results indicate the applicability of the proposed framework and also reveal the differences in bug report assignment between traditional repositories and crowdsourced ones.
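As a minimal sketch of steps (ii) and (iii), vectorizing bug-report text and then applying a classifier, the example below uses TF-IDF and logistic regression from scikit-learn; it is not the authors' framework, and the reports and developer labels are toy data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy bug reports and toy developer assignments.
reports = [
    "App crashes when uploading a photo on Android",
    "Login button unresponsive on iOS",
    "Payment page shows a blank screen after checkout",
    "Camera permission dialog appears twice on Android",
]
developers = ["alice", "bob", "carol", "alice"]

# Vectorize the report text, then classify it to a developer.
assigner = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
assigner.fit(reports, developers)
print(assigner.predict(["Crash while taking a photo in the Android app"]))
```

Lightweight crowdsourced reports typically contribute only free text like this, whereas heavyweight reports add structured attributes that can be appended as extra features.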


2021 ◽  
Vol 13 (1) ◽  
Author(s):  
Errol L. G. Samuel ◽  
Secondra L. Holmes ◽  
Damian W. Young

Abstract: The thermal shift assay (TSA)—also known as differential scanning fluorimetry (DSF), thermofluor, and Tm shift—is one of the most popular biophysical screening techniques used in fragment-based ligand discovery (FBLD) to detect protein–ligand interactions. By comparing the thermal stability of a target protein in the presence and absence of a ligand, potential binders can be identified. The technique is easy to set up, has low protein consumption, and can be run on most real-time polymerase chain reaction (PCR) instruments. While data analysis is straightforward in principle, it becomes cumbersome and time-consuming when screens involve multiple 96- or 384-well plates. Several approaches aim to streamline this process, but most involve proprietary software, require programming knowledge, or are designed for specific instrument output files. We therefore developed an analysis workflow implemented in the Konstanz Information Miner (KNIME), a free and open-source data analytics platform, which greatly streamlined our data processing timeline for 384-well plates. The implementation is code-free and freely available to the community for improvement and customization to accommodate a wide range of instrument input files and workflows.
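The published workflow is a code-free KNIME pipeline; as a language-agnostic illustration of the underlying per-well calculation, the sketch below fits a Boltzmann sigmoid to a synthetic melt curve in Python to estimate the melting temperature (Tm), whose shift between ligand and no-ligand conditions is the assay readout. All numbers are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(T, bottom, top, Tm, slope):
    """Sigmoidal melt curve: fluorescence as a function of temperature."""
    return bottom + (top - bottom) / (1.0 + np.exp((Tm - T) / slope))

# Synthetic single-well melt curve (true Tm = 52 C) with a little noise.
rng = np.random.default_rng(0)
temps = np.linspace(25, 95, 71)
signal = boltzmann(temps, 100, 1000, 52.0, 2.5) + rng.normal(0, 10, temps.size)

# Initial guess: plateaus from the data, Tm from the steepest point.
p0 = [signal.min(), signal.max(), temps[np.argmax(np.gradient(signal))], 2.0]
params, _ = curve_fit(boltzmann, temps, signal, p0=p0)
print(f"Estimated Tm: {params[2]:.1f} C")  # a ligand-induced shift in Tm flags a binder
```

The batch-processing burden the workflow addresses comes from repeating exactly this fit for every well across multiple plates and comparing the resulting Tm values.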


Author(s):  
Jozefien De Bock

Historically, the societies with the longest tradition of multicultural policies are settler societies. The question of how to deal with temporary migrants has only recently aroused their interest. In Europe, temporary migration programmes have a much longer history. In the period after WWII, a wide range of legal frameworks were set up to import temporary workers, who came to be known as guest workers. In the end, many of these ‘guests’ settled in Europe permanently. Their presence lay at the basis of European multicultural policies. However, when these policies were drafted, the former mobility of guest workers had been forgotten. This chapter focuses on this mobility of initially temporary workers, comparing the period of economic growth from 1945 to 1974 with the years after the 1974 economic crisis. Further, it looks at the kinds of policies that were developed towards guest workers in the era before multiculturalism. In this way, it shows how regarding them as temporary residents had far-reaching consequences for the immigrants, their descendants, and the receiving societies involved. The chapter finishes by suggesting a number of lessons from the past. If the mobility gap between guest workers and present-day migrants is not as big as generally assumed, then the consequences of previous neglect should serve as a warning for future policy making.


2016 ◽  
Vol 3 (2) ◽  
pp. 82-93
Author(s):  
Gugulethu Shamaine Nkala ◽  
Rodreck David

Knowledge presented by Oral History (OH) is unique in that it shares the tacit perspective, thoughts, opinions, and understanding of the interviewee in its primary form. While teachers, lecturers, and other education specialists have at their disposal a wide range of primary, secondary, and tertiary sources upon which to draw when sharing or imparting knowledge, OH presents a rich source of information that can improve the learning experience. The uniqueness of OH lies in the following advantages of its use: it allows one to learn about the perspectives of individuals who might not otherwise appear in the historical record; it allows one to compensate for the digital age; one can learn different kinds of information; it provides historical actors with an opportunity to tell their own stories in their own words; and it offers a rich opportunity for human interaction. This article discusses the place of oral history in the classroom set-up by investigating its use as a source of learning material presented by the National Archives of Zimbabwe to students in the Department of Records and Archives Management at the National University of Science and Technology (NUST). Interviews and a group discussion were used to gather data from an archivist at the National Archives of Zimbabwe, and from lecturers and students in the Department of Records and Archives Management at NUST, respectively. These groups were asked about the usability, uniqueness, and other characteristics that support the use of OH as a source of knowledge in a tertiary learning experience. The findings indicate several qualities that reflect the richness of OH as teaching source material in a classroom set-up. They also point to weak areas that should be addressed where the source is considered a viable strategy for knowledge sharing and learning. The researchers present a possible model that can be used to champion the use of this rich knowledge source in classroom education at this university and in similar set-ups.


2018 ◽  
Author(s):  
Sherif Tawfik ◽  
Olexandr Isayev ◽  
Catherine Stampfl ◽  
Joseph Shapter ◽  
David Winkler ◽  
...  

Materials constructed from different van der Waals two-dimensional (2D) heterostructures offer a wide range of benefits, but these systems have been little studied because of their experimental and computational complexity, and because of the very large number of possible combinations of 2D building blocks. Simulating the interface between two different 2D materials is computationally challenging due to the lattice mismatch problem, which sometimes necessitates very large simulation cells for density-functional theory (DFT) calculations. Here we use a combination of DFT, linear regression, and machine learning techniques to rapidly determine the interlayer distance between two different 2D materials stacked in a bilayer heterostructure, as well as the band gap of the bilayer. Our work provides an excellent proof of concept by quickly and accurately predicting a structural property (the interlayer distance) and an electronic property (the band gap) for a large number of hybrid 2D materials. This work paves the way for rapid computational screening of the vast parameter space of van der Waals heterostructures to identify new hybrid materials with useful and interesting properties.
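The paper combines DFT labels with regression models; the sketch below shows that general pattern with a gradient-boosted regressor mapping simple monolayer descriptors to the bilayer interlayer distance. The CSV file, feature names, and target column are hypothetical placeholders, not the authors' dataset.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical DFT-labelled dataset: one row per bilayer combination, with
# simple descriptors of each monolayer and the DFT interlayer distance.
df = pd.read_csv("bilayer_dft_dataset.csv")
features = ["lattice_a_1", "lattice_a_2", "band_gap_1", "band_gap_2",
            "thickness_1", "thickness_2"]
X, y = df[features], df["interlayer_distance"]

model = GradientBoostingRegressor(random_state=0)
mae = -cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error").mean()
print(f"Cross-validated MAE: {mae:.3f} (same units as the target column)")
```

Once trained on a modest set of DFT-labelled bilayers, a surrogate of this kind can score many untested combinations far faster than running a lattice-matched DFT cell for each one.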


2020 ◽  
Author(s):  
Sina Faizollahzadeh Ardabili ◽  
Amir Mosavi ◽  
Pedram Ghamisi ◽  
Filip Ferdinand ◽  
Annamaria R. Varkonyi-Koczy ◽  
...  

Several outbreak prediction models for COVID-19 are being used by officials around the world to make informed decisions and enforce relevant control measures. Among the standard models for COVID-19 global pandemic prediction, simple epidemiological and statistical models have received the most attention from authorities and are popular in the media. Due to a high level of uncertainty and a lack of essential data, standard models have shown low accuracy for long-term prediction. Although the literature includes several attempts to address this issue, the generalization and robustness of existing models need to be improved. This paper presents a comparative analysis of machine learning and soft computing models for predicting the COVID-19 outbreak as an alternative to SIR and SEIR models. Among a wide range of machine learning models investigated, two showed promising results: the multi-layered perceptron (MLP) and the adaptive network-based fuzzy inference system (ANFIS). Based on the results reported here, and given the highly complex nature of the COVID-19 outbreak and the variation in its behavior from nation to nation, this study suggests machine learning as an effective tool to model the outbreak. This paper provides an initial benchmarking to demonstrate the potential of machine learning for future research. It further suggests that real novelty in outbreak prediction can be achieved by integrating machine learning and SEIR models.
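As a minimal sketch of the MLP approach mentioned (not the paper's tuned MLP or ANFIS models), the example below fits a small scikit-learn MLP to a made-up cumulative-case series and extrapolates a few days ahead; all numbers are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Made-up cumulative case counts for 30 days of an outbreak.
days = np.arange(1, 31).reshape(-1, 1)
cases = np.array([2.0 * 1.2 ** d for d in range(1, 31)])

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0),
)
model.fit(days, cases)

# Rough short-term extrapolation for the next five days.
future = np.arange(31, 36).reshape(-1, 1)
print(model.predict(future))
```

Such curve-fitting extrapolation is exactly where generalization is weakest, which is why the paper points toward hybrids of machine learning with mechanistic SEIR models.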


2003 ◽  
Vol 3 (5-6) ◽  
pp. 321-327 ◽  
Author(s):  
M. Gallenkemper ◽  
T. Wintgens ◽  
T. Melin

Endocrine disrupting compounds can affect the hormone system in organisms. A wide range of endocrine disrupters have been found in sewage and in the effluents of municipal wastewater treatment plants. Toxicological evaluations indicate that conventional wastewater treatment plants are not able to remove these substances sufficiently before discharging effluent into the environment. Membrane technology, which is proving to be an effective barrier to these substances, is the subject of this research. Nanofiltration provides high-quality permeates in water and wastewater treatment. Eleven different nanofiltration membranes were tested in the laboratory set-up. The observed retention for nonylphenol (NP) and bisphenol A (BPA) ranged between 70% and 100%. The contact angle is an indicator of a membrane's hydrophobicity, whose influence on the permeability and the retention of NP was evident. The retention of BPA was found to be inversely proportional to the membrane permeability.


2021 ◽  
Vol 15 ◽  
Author(s):  
Alhassan Alkuhlani ◽  
Walaa Gad ◽  
Mohamed Roushdy ◽  
Abdel-Badeeh M. Salem

Background: Glycosylation is one of the most common post-translational modifications (PTMs) in organism cells. It plays important roles in several biological processes, including cell-cell interaction, protein folding, antigen recognition, and immune response. In addition, glycosylation is associated with many human diseases such as cancer, diabetes, and coronavirus infections. The experimental techniques for identifying glycosylation sites are time-consuming, labor-intensive, and expensive. Therefore, computational intelligence techniques are becoming very important for glycosylation site prediction. Objective: This paper is a theoretical discussion of the technical aspects of applying computational intelligence (e.g., artificial intelligence and machine learning) to bioinformatics research on glycosylation site prediction and intelligent biocomputing. Computational intelligence techniques have shown efficient results for predicting N-linked, O-linked, and C-linked glycosylation sites. In the last two decades, many studies have been conducted on glycosylation site prediction using these techniques. In this paper, we analyze and compare a wide range of intelligent techniques from these studies across multiple aspects. The current challenges and difficulties facing software developers and knowledge engineers in predicting glycosylation sites are also included. Method: The comparison between these studies covers many criteria, including databases, feature extraction and selection, machine learning classification methods, evaluation measures, and performance results. Results and conclusions: Many challenges and problems are presented. Consequently, more efforts are needed to obtain more accurate prediction models for the three basic types of glycosylation sites.
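As an illustrative sketch of one common approach covered by such surveys (not any specific tool reviewed here), the example below one-hot encodes a fixed sequence window around candidate asparagine sites and trains a random-forest classifier; the peptide windows and labels are toy placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

AMINO = "ACDEFGHIKLMNPQRSTVWY"

def encode_window(window):
    """One-hot encode a fixed-length peptide window around a candidate site."""
    vec = np.zeros((len(window), len(AMINO)))
    for i, aa in enumerate(window):
        if aa in AMINO:
            vec[i, AMINO.index(aa)] = 1.0
    return vec.ravel()

# Toy 9-residue windows centered on a candidate asparagine, with toy labels
# (1 = glycosylated, 0 = not); real studies use curated databases instead.
windows = ["MKNLNVSLG", "AGLFNQTLE", "QPRWNGSAI", "LLDKNAVMS"]
labels = [1, 1, 1, 0]

X = np.array([encode_window(w) for w in windows])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict([encode_window("AAKGNGSVM")]))
```

The surveyed methods differ mainly in the pieces this sketch fixes arbitrarily: the training database, the window encoding and feature selection, the classifier, and the evaluation measures.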

