How to draw the line – Raman spectroscopy as a tool for the assessment of biomedicines

2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Christel Kamp ◽  
Björn Becker ◽  
Walter Matheis ◽  
Volker Öppling ◽  
Isabelle Bekeredjian-Ding

Abstract: Biomedicines are complex biochemical formulations with multiple components that require extensive quality control during manufacturing and in subsequent batch testing. A proof-of-concept study has shown that Raman spectroscopy can be a beneficial tool for the classification of vaccines. However, the complexity of biomedicines introduces new challenges to spectroscopic methodology that require advanced experimental protocols. We further show the impact of analytical protocols on vaccine classification, using R as an open-source data analysis platform. In conclusion, we advocate standardized and transparent experimental and analytical procedures, and discuss current findings and open challenges.
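The classification step described above can be sketched in a few lines. This is a minimal illustration only: the cosine-similarity classifier, the reference spectra, and the class labels are hypothetical stand-ins, not the study's actual method or data.

```python
import math

def classify_spectrum(spectrum, references):
    """Assign a spectrum to the reference class with the highest
    cosine similarity (a toy stand-in for a spectral classifier)."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b)
    return max(references, key=lambda label: cosine(spectrum, references[label]))

# Hypothetical reference spectra: intensities at a few wavenumbers
refs = {"vaccine_A": [1.0, 0.2, 0.1], "vaccine_B": [0.1, 0.9, 0.8]}
print(classify_spectrum([0.9, 0.3, 0.2], refs))  # vaccine_A
```

In practice, preprocessing (baseline correction, normalization) and the choice of similarity measure dominate the result, which is the abstract's point about standardized analytical protocols.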

2021 ◽  
Vol 24 ◽  
pp. 256-266
Author(s):  
Nihayatul Karimah ◽  
Gijs Schaftenaar

Purpose: Structurally similar molecules are likely to have similar biological activity. In this study, similarity searching based on molecular 2D fingerprints was performed to analyze the off-target effects of drugs. The purpose of this study is to determine the correlation between adverse effects and drug off-targets. Methods: A workflow was built in KNIME to prepare datasets for twenty-nine targets from ChEMBL, generate molecular 2D fingerprints of the ligands, calculate the similarity between ligand sets, and compute statistical significance using the similarity ensemble approach (SEA). Tanimoto coefficients (Tc) were used as the measure of chemical similarity; values between 0.2 and 0.4 are the most common among ligand pairs and are considered insignificantly similar. Results: The majority of ligand sets are unrelated, as evidenced by their intrinsic chemical differences and the classification of statistical significance based on expectation value. The rank-ordered expectation values of inter-target similarity showed a correlation with the off-target effects of known drugs. Conclusion: Similarity searching using molecular 2D fingerprints can be applied to predict off-targets and correlate them with the adverse effects of drugs. KNIME, as an open-source data analytics platform, is suitable for building a workflow for mining the ChEMBL database and generating the SEA statistical model.
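The Tanimoto coefficient at the heart of this abstract is straightforward once a fingerprint is represented as the set of its on-bit indices: Tc = |A ∩ B| / |A ∪ B|. A minimal sketch follows; the bit positions are toy values, not real ECFP or ChEMBL fingerprints.

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two fingerprints given as
    sets of on-bit indices: |A & B| / |A | B|."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 0.0

# Toy fingerprints (illustrative bit positions only)
lig1 = {1, 4, 7, 9, 12}
lig2 = {1, 4, 8, 12, 15}
print(round(tanimoto(lig1, lig2), 2))  # 0.43
```

A pair scoring 0.43 here would sit just above the 0.2–0.4 band the abstract describes as insignificantly similar; SEA then aggregates many such pairwise scores into an expectation value per target pair.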


2020 ◽  
Author(s):  
Louise Darroch ◽  
Juan Ward ◽  
Alexander Tate ◽  
Justin Buck

More than 40% of the human population live within 100 km of the sea. Many of these communities rely intimately on the oceans for their food, climate and economy. However, the oceans are increasingly being adversely affected by human-driven activities such as climate change and pollution. Many targeted marine monitoring programmes (e.g. GOSHIP, OceanSITES) and pioneering observing technologies (e.g. autonomous underwater vehicles, Argo floats) are being used to assess the impact humans are having on our oceans. Such activities and platforms are deployed, calibrated and serviced by state-of-the-art research ships, multimillion-pound floating laboratories which operate diverse arrays of high-powered, high-resolution sensors around the clock (e.g. sea-floor depth, weather, ocean current velocity and hydrography). These sensors, coupled with event and environmental metadata provided by the ships' logs and crew, are essential for understanding the wider context of the science they support, as well as directly contributing to crucial scientific understanding of the marine environment and key strategic policies (e.g. the United Nations' Sustainable Development Goal 14). However, despite their high scientific value and cost, these data streams are not routinely brought together from UK large research vessels in the coordinated, reliable and accessible ways that are fundamental to ensuring user trust in the data and any products generated from them.

The National Oceanography Centre (NOC) and British Antarctic Survey (BAS) are currently working together to improve the integrity of the data management workflow from sensor systems to end-users across the UK Natural Environment Research Council (NERC) large research vessel fleet, making cost-effective use of vessel time while improving the FAIRness of data from these sensor arrays. The solution is based upon an Application Programming Interface (API) framework with endpoints tailored towards different end-users, such as scientists on board the vessels as well as the public on land. Key features include:
- Sensor triage using real-time automated monitoring systems, assuring sensors are working correctly and only the best data are output;
- Standardised digital event logging systems, allowing data quality issues to be identified and resolved quickly;
- Novel open-source data transport formats embedded with well-structured metadata, common standards and provenance information (such as controlled vocabularies and persistent identifiers), reducing ambiguity and enhancing interoperability across platforms;
- An open-source data processing application that applies quality control to international standards (SAMOS or IOOS QARTOD);
- Digital notebooks that manage and capture the processing applied to data, putting data into context;
- Democratisation and brokering of data through open data APIs (e.g. ERDDAP, Sensor Web Enablement), allowing end-users to discover and access data, layer their own tools or generate products to meet their own needs;
- Unambiguous provenance maintained throughout the data management workflow using instrument persistent identifiers, part of the latest recommendations by the Research Data Alliance (RDA).

Access to universally interoperable oceanic data, with known quality and provenance, will empower a broad range of stakeholder communities, creating opportunities for innovation and impact through data use, re-use and exploitation.
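The automated quality control mentioned in this abstract can be illustrated with a QARTOD-style gross range test, which assigns the standard pass/suspect/fail flags. This is a sketch only: the temperature limits are illustrative, not operational thresholds from NOC or BAS.

```python
def gross_range_flag(value, fail_min, fail_max, suspect_min, suspect_max):
    """QARTOD-style gross range test.
    Flag convention: 1 = pass, 3 = suspect, 4 = fail."""
    if value < fail_min or value > fail_max:
        return 4  # outside physically plausible sensor range
    if value < suspect_min or value > suspect_max:
        return 3  # plausible but outside expected regional range
    return 1

# Sea-surface temperature readings in degrees C, with hypothetical limits
readings = [12.4, 13.1, 48.0, 13.0, -5.0]
flags = [gross_range_flag(v, -2.5, 40.0, 5.0, 30.0) for v in readings]
print(flags)  # [1, 1, 4, 1, 4]
```

Flagging rather than deleting bad values preserves provenance: downstream users see the full record together with the quality assessment.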


2021 ◽  
Vol 13 (10) ◽  
pp. 1892
Author(s):  
Sébastien Rapinel ◽  
Laurence Hubert-Moy

Advances in remote sensing (RS) technology in recent years have increased interest in integrating RS data into one-class classifiers (OCCs). However, this integration is complex given the interdisciplinary issues involved. In this context, this review highlights the advances and current challenges in integrating RS data into OCCs to map vegetation classes. A systematic review was performed for the period 2013–2020. A total of 136 articles were analyzed based on 11 topics and 30 attributes that address the ecological issues, the properties of RS data, and the tools and parameters used to classify natural vegetation. The results highlight several advances in the use of RS data in OCCs: (i) mapping of potential and actual vegetation areas, (ii) long-term monitoring of vegetation classes, (iii) generation of multiple ecological variables, (iv) availability of open-source data, (v) reduction in plotting effort, and (vi) quantification of over-detection. Recommendations related to interdisciplinary issues were also suggested: (i) increasing the visibility and use of available RS variables, (ii) following good classification practices, (iii) bridging the gap between spatial resolution and site extent, and (iv) classifying plant communities.
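The defining property of an OCC is that it is trained only on samples of the target class and must decide membership without counter-examples. A toy centroid-plus-radius classifier illustrates the idea; the feature values and the method itself are hypothetical, not drawn from any of the reviewed articles.

```python
import math

def fit_occ(positives):
    """Fit a toy one-class classifier: centroid of the target class
    plus a distance threshold covering the training samples."""
    dim = len(positives[0])
    centroid = [sum(p[i] for p in positives) / len(positives) for i in range(dim)]
    def dist(p):
        return math.sqrt(sum((p[i] - centroid[i]) ** 2 for i in range(dim)))
    radius = max(dist(p) for p in positives)
    return lambda p: dist(p) <= radius  # True = member of target class

# Hypothetical per-pixel spectral indices for one vegetation class
target_samples = [(0.80, 0.10), (0.75, 0.15), (0.82, 0.12)]
is_member = fit_occ(target_samples)
print(is_member((0.79, 0.12)), is_member((0.20, 0.60)))  # True False
```

Real OCCs (e.g. one-class SVMs or presence-only ecological models) replace the hard radius with a learned boundary, but the train-on-one-class workflow is the same.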


2021 ◽  
Vol 2 (2) ◽  
pp. 33-39
Author(s):  
Sarthak Maggu ◽  
Vijander Singh

The world was struck by a deadly virus last year, which brought it to a halt and changed it completely. The virus was named "Corona." It has had a lasting effect on human lives and has prompted many people to study it; following this trend, we have chosen "Prediction and analysis of Covid-19 with different ML algorithms and comparative analysis" as the title of this paper. The title is self-explanatory about what the paper covers and what technology it uses. The paper uses data cleaning and plotting techniques to analyse the impact of Covid-19 on human lives, and various algorithms to predict how vulnerable a person is to the virus, or whether a person has suffered from Covid-19, based on various parameters. The main ingredient of the paper is the data on which the models are built, collected through Google forms or from open-source data websites such as Kaggle. The paper is divided into several parts, including data collection, data cleaning and data plotting. After cleaning the data and building models using various ML algorithms, the findings are reported using various plots to support the conclusions of the models.
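The data-cleaning step this abstract describes can be sketched for survey-style records. The field names and cleaning rules below are hypothetical examples, not the paper's actual dataset or pipeline.

```python
# Toy records as they might arrive from a form export (hypothetical fields)
raw = [
    {"age": "34", "fever": "yes", "tested_positive": "1"},
    {"age": "",   "fever": "no",  "tested_positive": "0"},  # missing age
    {"age": "58", "fever": "yes", "tested_positive": "1"},
]

def clean(rows):
    """Basic cleaning pass: drop incomplete rows, coerce types,
    map yes/no answers to booleans."""
    out = []
    for r in rows:
        if not all(r.values()):
            continue  # discard rows with missing answers
        out.append({
            "age": int(r["age"]),
            "fever": r["fever"] == "yes",
            "tested_positive": r["tested_positive"] == "1",
        })
    return out

print(len(clean(raw)))  # 2
```

Only after such a pass do the model-building and plotting stages the abstract mentions become meaningful, since most ML algorithms cannot ingest mixed or missing string fields directly.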


2019 ◽  
pp. 27-35
Author(s):  
Alexandr Neznamov

Digital technologies are no longer the future but the present of civil proceedings. That is why any research in this direction seems relevant. At the same time, some of the fundamental problems remain unattended by the scientific community. One of these is the problem of classifying digital technologies in civil proceedings. On the basis of instrumental and genetic approaches to the understanding of digital technologies, it is concluded that their most significant feature is the ability to mediate the interaction of participants in legal proceedings with information, and their differentiating feature is the function performed by a particular technology in that interaction. On this basis, it is proposed to distinguish the following groups of digital technologies in civil proceedings: a) technologies for recording, storing and displaying (reproducing) information, b) technologies for transferring information, and c) technologies for processing information. A brief description is given of each of the groups. The presented classification could serve as a basis for a more systematic discussion of the impact of digital technologies on the essence of civil proceedings. In particular, it is pointed out that issues of recording, storing, reproducing and transferring information are traditionally more «technological» for the civil process, while issues of information processing are more conceptual.

