GPSRdocker: A Docker-based Resource for Genomics, Proteomics and Systems Biology

2019 ◽  
Author(s):  
Piyush Agrawal ◽  
Rajesh Kumar ◽  
Salman Sadullah Usmani ◽  
Anjali Dhall ◽  
Sumeet Patiyal ◽  
...  

Background: In the past, a number of web-based resources have been developed in the field of bioinformatics. These resources are heavily used by the scientific community to solve challenges faced by experimental researchers, particularly in the biomedical sciences. However, several obstacles limit the full use of these services, including internet speed, limits on computing power, and data security. In order to enhance the utility of these web-based assets, we developed a Docker-based container that integrates a large number of resources available in the literature. Results: This paper describes GPSRdocker, a Docker-based container that provides a wide range of computational tools in the field of bioinformatics, particularly in genomics, proteomics and systems biology. The majority of the tools integrated in GPSRdocker are based on web services developed by Raghava's group over the last two decades. Broadly, these tools fall into three categories: i) general scripts, ii) supporting software, and iii) major standalone software. To assist students and developers working in bioinformatics, we provide general-purpose scripts in Perl and Python; these scripts serve as building blocks for bioinformatics tools, for example computing features/descriptors of a protein. Supporting software packages include SCIKIT, WEKA, SVMlight, and PSI-BLAST, which allow users to develop and implement bioinformatics software. The major standalone software is the core of the container and allows prediction of the function or class of biomolecules; these tools can be classified broadly into the following categories: protein annotation, epitope-based vaccines, prediction of interactions, and drug discovery. Conclusion: A Docker-based container has been developed that runs easily on any operating system and can be ported directly to the cloud. Its scripts can be combined into pipelines that address problems at the systems level, such as predicting vaccine candidates for a pathogen. GPSRdocker, including a manual, is freely available for academic use from https://webs.iiitd.edu.in/gpsrdocker.
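To make the role of the general-purpose scripts concrete, here is a minimal Python sketch, not taken from GPSRdocker itself, of the kind of feature/descriptor computation described above: amino acid composition of a protein sequence. The function name and interface are hypothetical.

```python
# Minimal sketch (hypothetical, not GPSRdocker code): compute amino acid
# composition descriptors for a protein sequence, the kind of general-purpose
# feature script described above.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def amino_acid_composition(sequence: str) -> dict:
    """Return the percentage of each of the 20 standard residues in `sequence`."""
    sequence = sequence.upper()
    length = len(sequence)
    if length == 0:
        raise ValueError("empty sequence")
    return {aa: 100.0 * sequence.count(aa) / length for aa in AMINO_ACIDS}

if __name__ == "__main__":
    # Example: composition of a short illustrative peptide
    features = amino_acid_composition("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
    print(features)
```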

2014 ◽  
Vol 23 (08) ◽  
pp. 1430002 ◽  
Author(s):  
SPARSH MITTAL

Initially introduced as special-purpose accelerators for graphics applications, graphics processing units (GPUs) have now emerged as general-purpose computing platforms for a wide range of applications. To address the requirements of these applications, modern GPUs include sizable hardware-managed caches. However, several factors, such as the unique architecture of GPUs and the rise of CPU–GPU heterogeneous computing, demand effective management of caches to achieve high performance and energy efficiency. Recently, several techniques have been proposed for this purpose. In this paper, we survey architectural and system-level techniques proposed for managing and leveraging GPU caches. We also discuss the importance and challenges of cache management in GPUs. The aim of this paper is to provide readers with insights into cache management techniques for GPUs and to motivate them to propose even better techniques for leveraging the full potential of caches in the GPUs of tomorrow.


2010 ◽  
Vol 20 (02) ◽  
pp. 103-121 ◽  
Author(s):  
MOSTAFA I. SOLIMAN ◽  
ABDULMAJID F. Al-JUNAID

Technological advances in IC manufacturing provide us with the capability to integrate more and more functionality into a single chip. Today's modern processors have nearly one billion transistors on a single chip. With the increasing complexity of today's systems, designs have to be modeled at a high level of abstraction before being partitioned into hardware and software components for final implementation. This paper describes in detail the implementation and performance evaluation of a matrix processor called Mat-Core in SystemC (a system-level modeling language). Mat-Core is a research processor that aims to exploit the increasing number of transistors per IC to improve the performance of a wide range of applications. It extends a general-purpose scalar processor with a matrix unit. To hide memory latency, the extended matrix unit is decoupled into two components, address generation and data computation, which communicate through data queues. Like vector architectures, the data computation unit is organized in parallel lanes. However, on its parallel lanes, Mat-Core can execute matrix-scalar, matrix-vector, and matrix-matrix instructions in addition to vector-scalar and vector-vector instructions. To control the execution of vector/matrix instructions on the matrix core, this paper extends the well-known scoreboard technique. Furthermore, the performance of Mat-Core is evaluated on vector and matrix kernels. Our results show that a four-lane Mat-Core with matrix registers of 4 × 4 (16) elements each, a queue size of 10, a start-up time of 6 clock cycles, and a memory latency of 10 clock cycles achieves about 0.94, 1.3, 2.3, 1.6, 2.3, and 5.5 FLOPs per clock cycle on scalar-vector multiplication, SAXPY, Givens, rank-1 update, vector-matrix multiplication, and matrix-matrix multiplication, respectively.
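For readers unfamiliar with the benchmark kernels named above, the following plain-Python/NumPy sketch (illustration only, not Mat-Core code) shows what two of them compute: SAXPY and a rank-1 update.

```python
# Illustration only (not Mat-Core code): two of the kernels used in the
# performance evaluation above, expressed with NumPy.
import numpy as np

def saxpy(a: float, x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """SAXPY: y <- a * x + y (2n floating-point operations for vectors of length n)."""
    return a * x + y

def rank1_update(A: np.ndarray, x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Rank-1 update: A <- A + x * y^T (2*m*n floating-point operations)."""
    return A + np.outer(x, y)

if __name__ == "__main__":
    x, y = np.arange(4.0), np.ones(4)
    print(saxpy(2.0, x, y))        # [1. 3. 5. 7.]
    A = np.zeros((4, 4))
    print(rank1_update(A, x, y))   # outer product of x and y added to A
```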


2022 ◽  
Author(s):  
Yi Nian Niu ◽  
Eric G. Roberts ◽  
Danielle Denisko ◽  
Michael M. Hoffman

Background: Bioinformatics software tools operate largely through the use of specialized genomics file formats. Often these formats lack formal specification, and only rarely do the creators of these tools robustly test them for correct handling of input and output. This causes problems in interoperability between different tools, which, at best, wastes time and frustrates users. At worst, interoperability issues could lead to undetected errors in scientific results. Methods: We sought (1) to assess the interoperability of a wide range of bioinformatics software using a shared genomics file format and (2) to provide a simple, reproducible method for enhancing interoperability. As a focus, we selected the popular BED file format for genomic interval data. Based on the file format's original documentation, we created a formal specification. We developed a new verification system, Acidbio (https://github.com/hoffmangroup/acidbio), which tests for correct behavior in bioinformatics software packages. We crafted tests to unify correct behavior when tools encounter various edge cases—potentially unexpected inputs that exemplify the limits of the format. To analyze the performance of existing software, we tested the input validation of 80 Bioconda packages that parsed the BED format. We also used a fuzzing approach to automatically perform additional testing. Results: Of 80 software packages examined, 75 achieved less than 70% correctness on our test suite. We categorized multiple root causes for the poor performance of different types of software. Fuzzing detected other errors that the manually designed test suite could not. We also created a badge system that developers can use to indicate more precisely which BED variants their software accepts and to advertise the software's performance on the test suite. Discussion: Acidbio makes it easy to assess interoperability of software using the BED format, and therefore to identify areas for improvement in individual software packages. Applying our approach to other file formats would increase the reliability of bioinformatics software and data.
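To give a flavour of the edge cases such a test suite targets, the sketch below (not Acidbio code) checks a few basic BED rules in Python: tab separation, integer coordinates, and end not less than start; real BED validation involves many more fields and rules.

```python
# Minimal sketch (not Acidbio code): validate a few basic BED-format rules
# that commonly trip up parsers, e.g. non-integer coordinates or end < start.
def validate_bed_line(line: str) -> list[str]:
    """Return a list of problems found in a single BED line (empty if none)."""
    problems = []
    fields = line.rstrip("\n").split("\t")
    if len(fields) < 3:
        problems.append("fewer than 3 tab-separated fields")
        return problems
    chrom, start, end = fields[0], fields[1], fields[2]
    if not chrom:
        problems.append("empty chromosome name")
    if not (start.isdigit() and end.isdigit()):
        problems.append("start/end are not non-negative integers")
    elif int(end) < int(start):
        problems.append("end is less than start")
    return problems

if __name__ == "__main__":
    for bed_line in ["chr1\t100\t200", "chr1\t200\t100", "chr1 100 200"]:
        print(repr(bed_line), "->", validate_bed_line(bed_line) or "OK")
```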


Author(s):  
R.W. Horne

The technique of surrounding virus particles with a neutralised electron-dense stain was described at the Fourth International Congress on Electron Microscopy, Berlin 1958 (see Horne & Brenner, 1960, p. 625). For many years the negative staining technique, in one form or another, has been applied to a wide range of biological materials. However, the full potential of the method has only recently been explored following the development and application of optical diffraction and computer image analysis techniques to electron micrographs (cf. DeRosier & Klug, 1968; Markham, 1968; Crowther et al., 1970; Horne & Markham, 1973; Klug & Berger, 1974; Crowther & Klug, 1975). These image processing procedures have allowed a more precise and quantitative approach to the interpretation, measurement and reconstruction of repeating features in certain biological systems.


2020 ◽  
Author(s):  
Julia Hegy ◽  
Noemi Anja Brog ◽  
Thomas Berger ◽  
Hansjoerg Znoj

BACKGROUND Accidents and the resulting injuries are among the world's biggest health care issues, often causing long-term effects on psychological and physical health. With regard to psychological consequences, accidents can cause a wide range of burdens, including adjustment problems. Although adjustment problems are among the most frequent mental health problems, there are few specific interventions available. The newly developed program SelFIT aims to remedy this situation by offering a low-threshold web-based self-help intervention for psychological distress after an accident. OBJECTIVE The overall aim is to evaluate the efficacy and cost-effectiveness of the SelFIT program plus care as usual (CAU) compared to care as usual alone. Furthermore, the program's user-friendliness, acceptance and adherence are assessed. We expect that the use of SelFIT is associated with a greater reduction in psychological distress, greater improvement in mental and physical well-being, and greater cost-effectiveness compared to CAU. METHODS Adults (n=240) showing adjustment problems due to an accident they experienced between 2 weeks and 2 years before entering the study will be randomized. Participants in the intervention group receive direct access to SelFIT. The control group receives access to the program after 12 weeks. There are 6 measurement points for both groups (baseline as well as after 4, 8, 12, 24 and 36 weeks). The main outcome is a reduction in anxiety, depression and stress symptoms that indicate adjustment problems. Secondary outcomes include well-being, optimism, embitterment, self-esteem, self-efficacy, emotion regulation, pain, costs of health care consumption and productivity loss, as well as the program's adherence, acceptance and user-friendliness. RESULTS Recruitment started in December 2019 and is ongoing. CONCLUSIONS To the best of our knowledge, this is the first study examining a web-based self-help program designed to treat adjustment problems resulting from an accident. If effective, the program could complement the still limited offer of secondary and tertiary psychological prevention after an accident. CLINICALTRIAL ClinicalTrials.gov NCT03785912; https://clinicaltrials.gov/ct2/show/NCT03785912?cond=NCT03785912&draw=2&rank=1


Author(s):  
Richard Jiang ◽  
Bruno Jacob ◽  
Matthew Geiger ◽  
Sean Matthew ◽  
Bryan Rumsey ◽  
...  

Summary: We present StochSS Live!, a web-based service for modeling, simulation and analysis of a wide range of mathematical, biological and biochemical systems. Using an epidemiological model of COVID-19, we demonstrate the power of StochSS Live! to enable researchers to quickly develop a deterministic or a discrete stochastic model, infer its parameters and analyze the results. Availability and implementation: StochSS Live! is freely available at https://live.stochss.org/. Supplementary information: Supplementary data are available at Bioinformatics online.
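As an illustration of the kind of discrete stochastic model such a service supports (plain Python, not the StochSS Live! interface), the following sketch runs a Gillespie stochastic simulation of a simple SIR epidemic; the rate constants and population sizes are arbitrary illustrative values.

```python
# Illustration only (not the StochSS Live! API): Gillespie stochastic
# simulation of a simple SIR epidemic model.
import random

def gillespie_sir(S, I, R, beta, gamma, t_end):
    """Simulate S -> I -> R transitions; returns a list of (time, S, I, R)."""
    t, trajectory = 0.0, [(0.0, S, I, R)]
    N = S + I + R
    while t < t_end and I > 0:
        rate_infection = beta * S * I / N   # S + I -> 2I
        rate_recovery = gamma * I           # I -> R
        total = rate_infection + rate_recovery
        t += random.expovariate(total)      # waiting time to next reaction
        if random.random() < rate_infection / total:
            S, I = S - 1, I + 1
        else:
            I, R = I - 1, R + 1
        trajectory.append((t, S, I, R))
    return trajectory

if __name__ == "__main__":
    random.seed(0)
    traj = gillespie_sir(S=990, I=10, R=0, beta=0.3, gamma=0.1, t_end=100.0)
    print(traj[-1])  # final (time, S, I, R) state
```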


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1031
Author(s):  
Joseba Gorospe ◽  
Rubén Mulero ◽  
Olatz Arbelaitz ◽  
Javier Muguerza ◽  
Miguel Ángel Antón

Deep learning techniques are being used increasingly in the scientific community as a consequence of the high computational capacity of current systems and the increase in the amount of data available as a result of the digitalisation of society in general and the industrial world in particular. In addition, the emergence of the field of edge computing, which focuses on integrating artificial intelligence as close as possible to the client, makes it possible to implement systems that act in real time without the need to transfer all of the data to centralised servers. The combination of these two concepts can lead to systems with the capacity to make correct decisions and act on them immediately and in situ. Despite this, the low capacity of embedded systems greatly hinders this integration, so the ability to target a wide range of microcontrollers can be a great advantage. This paper contributes an environment based on Mbed OS and TensorFlow Lite that can be embedded in any general-purpose embedded system, allowing the introduction of deep learning architectures. The experiments herein show that the proposed system is competitive compared to other commercial systems.
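The deployment workflow described typically starts from a trained model that is converted to the TensorFlow Lite format before being embedded on the microcontroller. The sketch below shows that conversion step with the standard TensorFlow Lite converter in Python; the toy model architecture is an assumption, and the on-device Mbed OS integration (written in C++) is not shown.

```python
# Sketch of the model-conversion step typically used before deploying to a
# microcontroller: define/train a Keras model, then convert it to TensorFlow Lite.
# The tiny model below is a toy example; Mbed OS deployment is not shown here.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Convert to a flat TFLite buffer suitable for embedding on a device.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable default size/latency optimizations
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
print(f"TFLite model size: {len(tflite_model)} bytes")
```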


Pharmaceutics ◽  
2021 ◽  
Vol 13 (8) ◽  
pp. 1184
Author(s):  
Armin Mooranian ◽  
Thomas Foster ◽  
Corina M Ionescu ◽  
Daniel Walker ◽  
Melissa Jones ◽  
...  

Introduction: Recent studies in our laboratory have shown that some bile acids, such as chenodeoxycholic acid (CDCA), can exert cellular protective effects when encapsulated with viable β-cells via anti-inflammatory and anti-oxidative stress mechanisms. However, to explore their full potential, formulating such bile acids (which are intrinsically lipophilic) can be challenging, particularly if larger doses are required for optimal pharmacological effects. One promising approach is the development of nano gels. Accordingly, this study aimed to examine the biological effects of various concentrations of CDCA, delivered via various solubilising nano gel systems, on encapsulated β-cells. Methods: Using our established cellular encapsulation system, the Ionic Gelation Vibrational Jet Flow technology, a wide range of CDCA β-cell capsules were produced and examined for morphological, biological, and inflammatory profiles. Results and Conclusion: Capsule morphology and topographic characteristics remained similar regardless of CDCA or nano gel concentrations. The best pharmacological, anti-inflammatory, and cellular respiration, metabolism, and energy production effects were observed at high CDCA and nano gel concentrations, suggesting dose-dependent cellular protective and positive effects of CDCA when incorporated with high-loading nano gels.


Minerals ◽  
2021 ◽  
Vol 11 (4) ◽  
pp. 347
Author(s):  
Carsten Laukamp ◽  
Andrew Rodger ◽  
Monica LeGras ◽  
Heta Lampinen ◽  
Ian C. Lau ◽  
...  

Reflectance spectroscopy allows cost-effective and rapid mineral characterisation, addressing mineral exploration and mining challenges. Shortwave (SWIR), mid (MIR) and thermal (TIR) infrared reflectance spectra are collected in a wide range of environments and scales, with instrumentation ranging from spaceborne, airborne, field and drill core sensors to IR microscopy. However, interpretation of reflectance spectra is, due to the abundance of potential vibrational modes in mineral assemblages, non-trivial and requires a thorough understanding of the potential factors contributing to the reflectance spectra. In order to close the gap between understanding mineral-diagnostic absorption features and efficient interpretation of reflectance spectra, an up-to-date overview of major vibrational modes of rock-forming minerals in the SWIR, MIR and TIR is provided. A series of scripts are proposed that allow the extraction of the relative intensity or wavelength position of single absorption and other mineral-diagnostic features. Binary discrimination diagrams can assist in rapidly evaluating mineral assemblages, and relative abundance and chemical composition of key vector minerals, in hydrothermal ore deposits. The aim of this contribution is to make geologically relevant information more easily extractable from reflectance spectra, enabling the mineral resources and geoscience communities to realise the full potential of hyperspectral sensing technologies.
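As an illustration of the kind of script proposed above (not the authors' code), the following Python sketch extracts the wavelength position and relative intensity (depth) of a single absorption feature from a reflectance spectrum using a straight-line continuum over a chosen wavelength window; the window and spectrum here are synthetic.

```python
# Illustration only (not the authors' scripts): extract the wavelength position
# and relative depth of one absorption feature from a reflectance spectrum,
# using a straight-line continuum across a chosen wavelength window.
import numpy as np

def absorption_feature(wavelengths, reflectance, window):
    """Return (wavelength_position, relative_depth) of the deepest absorption
    within window = (w_min, w_max), after removing a linear continuum."""
    wavelengths = np.asarray(wavelengths, dtype=float)
    reflectance = np.asarray(reflectance, dtype=float)
    mask = (wavelengths >= window[0]) & (wavelengths <= window[1])
    w, r = wavelengths[mask], reflectance[mask]
    # Straight-line continuum between the window end points.
    continuum = np.interp(w, [w[0], w[-1]], [r[0], r[-1]])
    ratio = r / continuum
    idx = np.argmin(ratio)
    return w[idx], 1.0 - ratio[idx]   # position (input units), depth (0-1)

if __name__ == "__main__":
    # Synthetic spectrum with an absorption feature near 2200 nm (illustrative).
    wl = np.linspace(2100, 2300, 201)
    refl = 0.8 - 0.2 * np.exp(-((wl - 2200) ** 2) / (2 * 15 ** 2))
    print(absorption_feature(wl, refl, (2150, 2250)))
```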

