The future of metabolomics in ELIXIR

F1000Research ◽  
2017 ◽  
Vol 6 ◽  
pp. 1649 ◽  
Author(s):  
Merlijn van Rijswijk ◽  
Charlie Beirnaert ◽  
Christophe Caron ◽  
Marta Cascante ◽  
Victoria Dominguez ◽  
...  

Metabolomics, the youngest of the major omics technologies, is supported by an active community of researchers and infrastructure developers across Europe. To coordinate and focus efforts around infrastructure building for metabolomics within Europe, a workshop on the “Future of metabolomics in ELIXIR” was organised at Frankfurt Airport in Germany. This one-day strategic workshop involved representatives of ELIXIR Nodes, members of the PhenoMeNal consortium developing an e-infrastructure that supports workflow-based metabolomics analysis pipelines, and experts from the international metabolomics community. The workshop established metabolite identification as the critical area where computational metabolomics and data management could have the greatest impact on other fields. In particular, the existing four ELIXIR Use Cases from which the metabolomics community - both industry and academia - would benefit most, and which could be exhaustively mapped onto the current five ELIXIR Platforms, were discussed. This opinion article is a call for support for a new ELIXIR metabolomics Use Case, which aligns with and complements the existing and planned ELIXIR Platforms and Use Cases.


2020 ◽  
Vol 77 ◽  
pp. 03006
Author(s):  
Christina Salwitzek ◽  
Christina Steuer

Today’s users no longer expect a classic manual, but short, clearly structured pieces of information that fit their application context, use case and role. Instead of conventional documentation, “intelligent information” is required that is modular, format-neutral and can be found via metadata and full-text search. The information is often created in a content management system (CMS) and provided via content delivery portals (CDPs). Compatible interfaces between these systems do not always exist, especially between systems from different software manufacturers, so the information created cannot always be processed further. The purpose of this paper is to show that data transformations can make information from a CMS accessible to different CDPs. On this basis, data transformations were developed, enriched with semantics and implemented within the project. For the semantic enrichment, metadata were used, as well as a further metadata-based approach called “microDocs”. This approach combines and aggregates different pieces of topic-based information that are connected by defined use cases and a logical context. Some CDP manufacturers already support microDocs, and it is expected that further extensions will be implemented in the future. Accordingly, it is highly likely that microDocs will play an important role in the field of information delivery.
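
As an illustration of the kind of transformation the paper describes, the Python sketch below converts a small, hypothetical CMS topic export (XML) into a metadata-enriched JSON unit that a CDP-style delivery system could ingest, and then groups such units into a "microDoc". The element names, metadata keys and the microDoc structure are assumptions made for illustration, not the project's actual formats.

```python
# Illustrative only: hypothetical CMS topic XML -> metadata-enriched JSON for a CDP.
import json
import xml.etree.ElementTree as ET

cms_export = """
<topic id="t-042" type="task">
  <title>Replacing the filter cartridge</title>
  <body>Switch off the device before opening the housing.</body>
</topic>
"""

def transform(topic_xml: str, use_case: str, role: str) -> dict:
    """Turn one CMS topic into a delivery-ready unit enriched with metadata."""
    topic = ET.fromstring(topic_xml)
    return {
        "id": topic.get("id"),
        "title": topic.findtext("title"),
        "content": topic.findtext("body"),
        "metadata": {  # metadata enabling faceted and full-text search in the portal
            "informationType": topic.get("type"),
            "useCase": use_case,
            "targetRole": role,
        },
    }

# A "microDoc": several topic-based units aggregated for one use case and context
micro_doc = {
    "useCase": "maintenance",
    "units": [transform(cms_export, use_case="maintenance", role="service technician")],
}
print(json.dumps(micro_doc, indent=2))
```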


Electronics ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 592
Author(s):  
Radek Silhavy ◽  
Petr Silhavy ◽  
Zdenka Prokopova

Software size estimation is a complex, nontrivial task, based either on data analysis or on an algorithmic estimation approach, and it is important for software project planning and management. In this paper, a new method called Actors and Use Cases Size Estimation is proposed. The method uses only the number of actors and use cases as inputs and builds its estimate with stepwise regression, which led to a very significant reduction in errors when estimating the size of software systems compared to Use Case Points-based methods. Because the proposed method is independent of Use Case Points and does not use its components, the effect of inaccurately determined Use Case Points components is eliminated.
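
To make the idea concrete, here is a minimal Python sketch of estimating size from only actor and use-case counts. It uses plain ordinary least squares rather than the stepwise regression procedure described in the paper, and the historical project data and size unit are invented for illustration.

```python
# Minimal sketch: predict software size from actor and use-case counts only (illustrative data).
import numpy as np

# Hypothetical historical projects: (actors, use cases, delivered size)
history = np.array([
    [4, 18, 12_500],
    [7, 32, 21_000],
    [3, 11,  8_400],
    [9, 45, 30_200],
    [5, 24, 15_800],
])

X = np.column_stack([np.ones(len(history)), history[:, 0], history[:, 1]])  # intercept, actors, use cases
y = history[:, 2]

coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # fit: size ~ b0 + b1*actors + b2*use_cases

def estimate_size(actors: int, use_cases: int) -> float:
    """Predict size for a new project from its actor and use-case counts."""
    return float(coef @ np.array([1.0, actors, use_cases]))

print(f"Estimated size for 6 actors / 28 use cases: {estimate_size(6, 28):,.0f}")
```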


Author(s):  
Marvin Drewel ◽  
Leon Özcan ◽  
Jürgen Gausemeier ◽  
Roman Dumitrescu

Hardly any other area has as much disruptive potential as digital platforms in the course of digitalization. After serious changes have already taken place in the B2C sector with platforms such as Amazon and Airbnb, the B2B sector is on the threshold of the so-called platform economy. In mechanical engineering, pioneers like GE (PREDIX) and Claas (365FarmNet) are trying to get in on the act. This is hardly a promising option for small and medium-sized companies, as only a few large companies will survive. Small and medium-sized enterprises (SMEs) are already facing the threat of losing direct consumer contact and becoming exchangeable executors. In order to prevent this, it is important to anticipate at an early stage which strategic options exist for the future platform economy and which adjustments to the product program should already be initiated today. Medium-sized companies in particular lack a strategy for an advantageous entry into the future platform economy.

The paper presents different approaches to mastering the challenges of participating in the platform economy by using platform patterns. Platform patterns represent proven principles of already existing platforms. We show how we derived a catalogue of 37 identified platform patterns. The catalogue has a generic design and can be customized for a specific use case. The versatility of the catalogue is underlined by three possible applications: (1) platform ideation, (2) platform development, and (3) platform characterization.


Author(s):  
Elizabeth M. Borycki ◽  
Andre W. Kushniruk ◽  
Ryan Kletke ◽  
Vivian Vimarlund ◽  
Yalini Senathirajah ◽  
...  

Objectives: This paper describes a methodology for gathering requirements and early design of remote monitoring technology (RMT) for enhancing patient safety during pandemics using virtual care technologies. As pandemics such as coronavirus disease (COVID-19) progress, there is an increasing need for effective virtual care and RMT to support patient care while patients are at home. Methods: The authors describe their work in conducting literature reviews by searching PubMed.gov, the grey literature, and government websites for articles and guidelines describing the signs, symptoms and progression of COVID-19. The reviews focused on identifying gaps where RMT could be applied in novel ways and formed the basis for the subsequent modelling of use cases for applying RMT described in this paper. Results: The work was conducted in the context of a new Home of the Future laboratory that has been set up at the University of Victoria. The literature review led to the development of a number of object-oriented models for deploying RMT. This modelling is being used for a number of purposes, including the education of students in health informatics as well as the testing of new use cases for RMT with industrial collaborators and projects within the smart home of the future laboratory. Conclusions: Object-oriented modelling, based on analysis of gaps in the literature, was found to be a useful approach for describing, communicating and teaching about potential new uses of RMT.
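
For illustration, a minimal Python sketch of the kind of object-oriented model the paper describes for RMT is shown below. The class names, attributes and the follow-up thresholds are hypothetical assumptions, not the authors' published models.

```python
# Hypothetical object-oriented model for remote monitoring technology (RMT); illustrative only.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Patient:
    patient_id: str
    name: str

@dataclass
class Observation:
    kind: str          # e.g. "temperature", "oxygen_saturation"
    value: float
    taken_at: datetime

@dataclass
class MonitoringDevice:
    device_id: str
    patient: Patient
    observations: List[Observation] = field(default_factory=list)

    def record(self, obs: Observation) -> None:
        self.observations.append(obs)

    def needs_follow_up(self) -> bool:
        """Flag the patient if any reading crosses an illustrative threshold."""
        return any(
            (o.kind == "temperature" and o.value >= 38.0)
            or (o.kind == "oxygen_saturation" and o.value < 94.0)
            for o in self.observations
        )

device = MonitoringDevice("dev-01", Patient("p-01", "Jane Doe"))
device.record(Observation("temperature", 38.4, datetime.now()))
print(device.needs_follow_up())  # True -> clinician follow-up suggested
```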


2021 ◽  
Vol 20 ◽  
pp. 100071
Author(s):  
Nuno Bandeira ◽  
Eric W. Deutsch ◽  
Oliver Kohlbacher ◽  
Lennart Martens ◽  
Juan Antonio Vizcaíno

2017 ◽  
Vol 107 (04) ◽  
pp. 273-279
Author(s):  
T. Knothe ◽  
A. Ullrich ◽  
N. Weinert

The transformation towards the "intelligent", interconnected factory of the future follows a stepwise, iterative process. Particular emphasis is placed on the rapid realization of prototypes and individual measures in order to achieve results quickly. This approach also fosters the understanding and willingness to participate of the employees involved, who are integrated into concrete developments at an earlier stage and can help shape them. The project "MetamoFAB" has developed methods and tools that support the planning and implementation of this transformation. Their applicability has been demonstrated in exemplary use cases.


2014 ◽  
Vol 23 (01) ◽  
pp. 27-35 ◽  
Author(s):  
S. de Lusignan ◽  
S-T. Liaw ◽  
C. Kuziemsky ◽  
F. Mold ◽  
P. Krause ◽  
...  

Background: Generally, the benefits and risks of vaccines can be determined from studies carried out as part of regulatory compliance, followed by surveillance of routine data; however, some rarer and longer-term events require new methods. Big data, generated by increasingly affordable personalised computing and by pervasive computing devices, is growing rapidly, and low-cost, high-volume cloud computing makes the processing of these data inexpensive. Objective: To describe how big data and related analytical methods might be applied to assess the benefits and risks of vaccines. Method: We reviewed the literature on the use of big data to improve health, applied to generic vaccine use cases that illustrate the benefits and risks of vaccination. We defined a use case as the interaction between a user and an information system to achieve a goal. We used flu vaccination and pre-school childhood immunisation as exemplars. Results: We reviewed three big data use cases relevant to assessing vaccine benefits and risks: (i) big data processing using crowd-sourcing, distributed big data processing, and predictive analytics; (ii) data integration from heterogeneous big data sources, e.g. the increasing range of devices in the “internet of things”; and (iii) real-time monitoring for the direct monitoring of epidemics as well as vaccine effects via social media and other data sources. Conclusions: Big data raises new ethical dilemmas, though its analysis methods can bring complementary real-time capabilities for monitoring epidemics and assessing vaccine benefit-risk balance.
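
As a toy illustration of use case (iii), real-time monitoring, the Python sketch below counts adverse-event keyword mentions in a rolling window of short posts and flags an unusual signal. The keyword list, window size, baseline and sample posts are all assumptions for illustration, not a validated surveillance method.

```python
# Toy real-time monitoring sketch: flag when adverse-event mentions exceed a baseline.
from collections import deque

KEYWORDS = {"fever", "rash", "swelling", "headache"}
WINDOW = 5     # number of most recent posts considered
BASELINE = 2   # expected mentions per window (assumed)

recent_hits = deque(maxlen=WINDOW)

def monitor(post: str) -> bool:
    """Return True if the rolling window suggests an unusual signal."""
    words = set(post.lower().split())
    recent_hits.append(1 if words & KEYWORDS else 0)
    return sum(recent_hits) > BASELINE

stream = [
    "mild headache after flu jab",
    "great day at the park",
    "slight fever since yesterday's vaccination",
    "rash near the injection site",
    "arm swelling this morning",
]
for post in stream:
    print(monitor(post), "-", post)
```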


Author(s):  
Matt Woodburn ◽  
Gabriele Droege ◽  
Sharon Grant ◽  
Quentin Groom ◽  
Janeen Jones ◽  
...  

The utopian vision is of a future where a digital representation of each object in our collections is accessible through the internet and sustainably linked to other digital resources. This is a long term goal however, and in the meantime there is an urgent need to share data about our collections at a higher level with a range of stakeholders (Woodburn et al. 2020). To sustainably achieve this, and to aggregate this information across all natural science collections, the data need to be standardised (Johnston and Robinson 2002). To this end, the Biodiversity Information Standards (TDWG) Collection Descriptions (CD) Interest Group has developed a data standard for describing collections, which is approaching formal review for ratification as a new TDWG standard. It proposes 20 classes (Suppl. material 1) and over 100 properties that can be used to describe, categorise, quantify, link and track digital representations of natural science collections, from high-level approximations to detailed breakdowns depending on the purpose of a particular implementation. The wide range of use cases identified for representing collection description data means that a flexible approach to the standard and the underlying modelling concepts is essential. These are centered around the ‘ObjectGroup’ (Fig. 1), a class that may represent any group (of any size) of physical collection objects, which have one or more common characteristics. This generic definition of the ‘collection’ in ‘collection descriptions’ is an important factor in making the standard flexible enough to support the breadth of use cases. For any use case or implementation, only a subset of classes and properties within the standard are likely to be relevant. In some cases, this subset may have little overlap with those selected for other use cases. This additional need for flexibility means that very few classes and properties, representing the core concepts, are proposed to be mandatory. Metrics, facts and narratives are represented in a normalised structure using an extended MeasurementOrFact class, so that these can be user-defined rather than constrained to a set identified by the standard. Finally, rather than a rigid underlying data model as part of the normative standard, documentation will be developed to provide guidance on how the classes in the standard may be related and quantified according to relational, dimensional and graph-like models. So, in summary, the standard has, by design, been made flexible enough to be used in a number of different ways. The corresponding risk is that it could be used in ways that may not deliver what is needed in terms of outputs, manageability and interoperability with other resources of collection-level or object-level data. To mitigate this, it is key for any new implementer of the standard to establish how it should be used in that particular instance, and define any necessary constraints within the wider scope of the standard and model. This is the concept of the ‘collection description scheme,’ a profile that defines elements such as: which classes and properties should be included, which should be mandatory, and which should be repeatable; which controlled vocabularies and hierarchies should be used to make the data interoperable; how the collections should be broken down into individual ObjectGroups and interlinked, and how the various classes should be related to each other. 
Various factors might influence these decisions, including the types of information that are relevant to the use case, whether quantitative metrics need to be captured and aggregated across collection descriptions, and how many resources can be dedicated to amassing and maintaining the data. This process has particular relevance to the Distributed System of Scientific Collections (DiSSCo) consortium, the design of which incorporates use cases for storing, interlinking and reporting on the collections of its member institutions. These include helping users of the European Loans and Visits System (ELViS) (Islam 2020) to discover specimens for physical and digital loans by providing descriptions and breakdowns of the collections of holding institutions, and monitoring digitisation progress across European collections through a dynamic Collections Digitisation Dashboard. In addition, DiSSCo will be part of a global collections data ecosystem requiring interoperation with other infrastructures such as the GBIF (Global Biodiversity Information Facility) Registry of Scientific Collections, the CETAF (Consortium of European Taxonomic Facilities) Registry of Collections and Index Herbariorum. In this presentation, we will introduce the draft standard and discuss the process of defining new collection description schemes using the standard and data model, focusing on DiSSCo requirements as examples of real-world collection description use cases.
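
To give a feel for the flexibility described above, here is a rough Python sketch of how a single ObjectGroup with extended MeasurementOrFact entries might be represented and aggregated for a dashboard-style report. The property names and values are illustrative assumptions, not the normative terms of the draft standard.

```python
# Illustrative ObjectGroup record with MeasurementOrFact-style entries (assumed field names).
object_group = {
    "objectGroupId": "og-entomology-pinned-01",
    "name": "Pinned insect collection, Lepidoptera",
    "description": "Dried, pinned Lepidoptera specimens held in the entomology department.",
    "commonCharacteristics": {
        "preservationMethod": "pinned",
        "taxonomicScope": "Lepidoptera",
    },
    "measurementsOrFacts": [
        {"type": "objectCount", "value": 125000, "unit": "specimens"},
        {"type": "digitisationLevel", "value": 0.32, "unit": "proportion digitised"},
    ],
    "relatedObjectGroups": ["og-entomology-pinned-02"],
}

# Aggregate a metric across several ObjectGroups, as a digitisation dashboard might do.
groups = [object_group]
total_specimens = sum(
    m["value"]
    for g in groups
    for m in g["measurementsOrFacts"]
    if m["type"] == "objectCount"
)
print(f"Total specimens described: {total_specimens:,}")
```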


Author(s):  
Mathias Uslar ◽  
Fabian Grüning ◽  
Sebastian Rohjans

Within this chapter, the authors provide two use cases on semantic interoperability in the electric utility industry based on the IEC TR 62357 seamless integration architecture. The first use case, on semantic integration based on ontologies, deals with the integration of the two heterogeneous standards families IEC 61970 and IEC 61850. Based on a quantitative analysis, the need for integration is outlined and a solution based on the authors' framework, COLIN, is provided. The second use case points out the need for better metadata semantics in the utility sector and is based solely on the IEC 61970 standard. The authors provide a solution that uses the CIM as a domain ontology and taxonomy for improving data quality. Finally, this chapter outlines open questions and argues that proper semantics and domain models based on international standards can improve the systems within a utility.
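
As a small illustration of treating the CIM as a domain ontology, the Python sketch below (using rdflib) declares a CIM class and a roughly corresponding IEC 61850 concept and links them with an equivalence assertion. The namespaces and the specific class mapping are assumptions made for illustration, not the mappings produced by the COLIN framework.

```python
# Illustrative only: representing a CIM class and an IEC 61850 concept as ontology terms.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

CIM = Namespace("http://example.org/cim#")        # placeholder for an IEC 61970 CIM namespace
SCL = Namespace("http://example.org/iec61850#")   # placeholder for an IEC 61850 namespace

g = Graph()
g.bind("cim", CIM)
g.bind("scl", SCL)

# Declare a CIM class and a roughly corresponding IEC 61850 logical-node concept
g.add((CIM.PowerTransformer, RDF.type, OWL.Class))
g.add((CIM.PowerTransformer, RDFS.label, Literal("Power transformer (CIM)")))
g.add((SCL.YPTR, RDF.type, OWL.Class))
g.add((SCL.YPTR, RDFS.label, Literal("Power transformer logical node (61850)")))

# Record the cross-standard correspondence as an equivalence assertion
g.add((CIM.PowerTransformer, OWL.equivalentClass, SCL.YPTR))

print(g.serialize(format="turtle"))
```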

