ARPHA Conference Abstracts
Latest Publications


Published by Pensoft Publishers
ISSN: 2603-3925

2021, Vol 4
Author(s): Kyrre Kausrud, Karin Lagesen, Ryan Easterday, Jason Whittington, Wendy Turner, ...

Here we present a developing probabilistic simulation model and tool to assess likely lead times from emergence to detection and arrival for new emerging infectious diseases (EIDs). Key aspects include combining real-world data available on multiple scales with a flexible underlying disease model. As demonstrated by the SARS-CoV-2 pandemic and other emerging infectious diseases, there is a need for scenario exploration for mitigation, surveillance and preparedness strategies. Existing simulation engines were assessed but found to offer an insufficient set of features with regard to flexibility and control over processes, disease model structure and incorporated data sets for a wide enough range of diseases, circumstances, cofactors and scenarios (Heslop et al. 2017) to suit our aims. We are therefore developing the first version of a simulation model designed to incorporate a diverse range of disease models and data sources, including multiple transmission and infectivity stages, multiple host species, varying and evolving virulence, socioeconomic differences, climate events and public health countermeasures. It is designed to be flexible with respect to implementing improvements in both model structure and data as they become available. It is based on a discrete-time (daily) structure in which spatial movement, transition between categories and detection are stochastic rates dependent on spatial data and past states in the model, informed by the most suitable data available (Fig. 1). Detection is itself treated as a probabilistic process, dependent on socioeconomic factors and parameterized by past performance, yet open to manipulation in scenario exploration regarding surveillance and reporting effectiveness. Pathogen hotspot data are sourced from the literature and included both as a probabilistic assessment of emergence and as a source of cofactor data (Allen et al. 2017); population data are assessed for utility (Leyk et al. 2019) and combined with data on local connectivity (Nelson et al. 2019) and transnational movement patterns (Recchi et al. 2019, Fig. 1), as well as an increasing set of ecological and socioeconomic candidate variables. Model parameterization relies on a machine learning framework that uses the often partial data available for known, relevant disease cases as training data and assesses them for plausible input ranges for new, hypothetical EIDs. As parameterizations improve, the range of scenarios to explore will incorporate effects of climate change and multiple stressors. When a suitable version becomes available it will be shared under an MIT license.
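As a concrete illustration of the discrete-time, stochastic structure described above, the following minimal sketch in base R simulates daily spread across a handful of regions and treats detection as its own stochastic process scaled by a surveillance-capacity covariate. All parameters, region counts and the covariate are hypothetical illustrations, not values from the actual model.

```r
# Minimal sketch (base R) of a discrete-time stochastic spread-and-detection
# process. All rates and covariates below are assumptions for illustration.
set.seed(1)

n_regions  <- 5
days       <- 120
beta       <- 0.3                          # within-region transmission rate (assumed)
move_prob  <- 0.01                         # daily prob. an infected case moves region (assumed)
detect_cov <- runif(n_regions, 0.2, 0.9)   # hypothetical surveillance-capacity covariate

S <- rep(1e5, n_regions); I <- c(1, rep(0, n_regions - 1)); detected_day <- NA

for (t in seq_len(days)) {
  # stochastic infections within each region
  p_inf   <- 1 - exp(-beta * I / (S + I))
  new_inf <- rbinom(n_regions, S, p_inf)
  S <- S - new_inf; I <- I + new_inf

  # stochastic movement of infected individuals between regions
  movers <- rbinom(n_regions, I, move_prob)
  I <- I - movers + as.vector(rmultinom(1, sum(movers), rep(1 / n_regions, n_regions)))

  # detection is itself a stochastic process, scaled by the capacity covariate
  p_det <- 1 - exp(-0.001 * I * detect_cov)
  if (is.na(detected_day) && any(rbinom(n_regions, 1, p_det) == 1)) detected_day <- t
}

detected_day  # simulated lead time from emergence to first detection (days)
```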


2021, Vol 4
Author(s): Jakub Fusiak, Kyrre Kausrud, Marion Gottschald, Dominic Tölle, Marco Rügen, ...

Identifying the specific product causing a foodborne disease outbreak can be difficult, especially when dealing with a large number of suspect food items and weak epidemiological evidence. A previously described likelihood model (Norström et al. 2015), improved within the OHEJP NOVA project, helps to prioritize food products that should be sampled for laboratory analysis. The aim of our study is to integrate this approach into the state-of-the-art tracing software FoodChain-Lab (FCL; https://foodrisklabs.bfr.bund.de/foodchain-lab) developed at BfR to facilitate outbreak investigations. The model, improved by Kausrud et al. in R (Ihaka and Gentleman 1996), uses wholesale data, the distribution of disease cases and census data to sort food items by their estimated likelihood of being the source of an outbreak. We developed a fast, secure and intuitive software module using WebAssembly technology (Haas et al. 2017), allowing professionals to embed the module easily into other applications. We integrated the module into the FCL web application for tracing (FCL Web; https://fcl-portal.bfr.berlin) to provide an intuitive and user-friendly solution. This solution combines simple data input with extensive data wrangling to make the calculation of the NOVA model as easy as possible. Since the model is executed directly inside the web browser and therefore does not rely on any server environment, the risk of data leakage is greatly reduced. The implementation of the advanced likelihood model in FCL Web increases the availability of this model and provides investigators with easy, fast and reliable access, improving outbreak investigation workflows.
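The following base R sketch illustrates the general idea of likelihood-based prioritization of food products from their regional distribution shares and the observed case counts. The data and the simple binomial likelihood are invented for illustration and do not reproduce the actual NOVA model.

```r
# Illustrative sketch (base R): rank candidate products by how well their
# regional distribution explains the observed case pattern. All numbers are
# made up; the real NOVA model is more elaborate.

regions <- c("A", "B", "C", "D")
cases   <- c(12, 3, 0, 7)                 # outbreak cases per region (made up)
pop     <- c(5e5, 2e5, 3e5, 4e5)          # census population per region (made up)

# share of each product's wholesale volume delivered to each region (rows sum to 1)
dist_share <- rbind(
  product_1 = c(0.50, 0.10, 0.05, 0.35),
  product_2 = c(0.25, 0.25, 0.25, 0.25),
  product_3 = c(0.05, 0.05, 0.80, 0.10)
)

loglik <- apply(dist_share, 1, function(share) {
  # expected per-person case probability in each region if this product were the source
  p <- share * sum(cases) / pop
  sum(dbinom(cases, size = pop, prob = pmin(p, 1), log = TRUE))
})

sort(loglik, decreasing = TRUE)  # products ordered by estimated likelihood
```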


2021, Vol 4
Author(s): Jakub Fusiak, Annemarie Käsbohrer

The lack of a harmonized model exchange format among modelling tools impedes communication between researchers, since the exchange and usage of existing models in various software environments can be very difficult. The RaDAR model inventory aims to provide a platform to exchange models among professionals utilizing the Food Safety Knowledge Exchange (FSKX) format (de Alba Aparicio et al. 2018) as a harmonized model exchange format. FSKX defines a framework that encodes all relevant data, metadata and model scripts in an exchangeable file format. However, the creation of such a file can be a time-consuming and difficult process. To increase the usage of the FSK standard, we developed the RaDAR model inventory web application, which streamlines the creation of an FSKX file for the end user. Our inventory aims to be a user-friendly tool that allows users to create, read, edit, write, execute and compile FSKX files within the web browser. The possibility of sharing models with the public or a specific group of people facilitates collaboration and the exchange of information. Since the RaDAR model inventory is based on the open-source technology of Project Jupyter (Granger and Perez 2021), it can support nearly all relevant programming languages, executed within a reproducible cloud-computing environment. The intuitive nature of the RaDAR model inventory, along with its wide range of features, lowers the threshold for contributing to a harmonized model exchange format and eases collaboration. The RaDAR model inventory can be accessed at http://ejp-radar.eu.
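To illustrate the general idea of a self-contained, exchangeable model file, the following R sketch bundles a model script and a metadata description into a single archive. The field names and file layout are placeholders for illustration only and do not follow the official FSKX schema; the jsonlite package and a system zip utility are assumed to be available.

```r
# Hypothetical sketch of bundling a model script with its metadata into one
# archive, to illustrate the concept of an exchangeable model file.
# NOTE: field names are illustrative and do NOT follow the official FSKX schema.

library(jsonlite)  # widely used JSON package

meta <- list(
  name       = "Example growth model",
  language   = "R",
  author     = "Jane Doe",
  parameters = list(list(id = "temp", unit = "degC", value = 12)),
  created    = format(Sys.Date())
)

writeLines("predict_growth <- function(temp) 0.1 * temp", "model.r")
write_json(meta, "metaData.json", auto_unbox = TRUE, pretty = TRUE)

# bundle both files into a single archive (requires a system 'zip' utility)
utils::zip("example_model_bundle.zip", c("model.r", "metaData.json"))
```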


2021, Vol 4
Author(s): Taras Günther, Matthias Filter, Fernanda Dórea

In times of emerging diseases, data sharing and data integration are of particular relevance for One Health Surveillance (OHS) and decision support. Furthermore, there is an increasing demand to provide governmental data in compliance with the FAIR (Findable, Accessible, Interoperable, Reusable) data principles. Semantic web technologies are key facilitators of data interoperability, as they allow explicit annotation of data with their meaning, enabling reuse without loss of the data collection context. Among these, we highlight ontologies as a tool for modeling knowledge in a field, which simplify the interpretation and mapping of datasets in a computer-readable medium, and the Resource Description Framework (RDF), which allows data to be shared among human and computer agents following this knowledge model. Despite their potential for enabling cross-sectoral interoperability and data linkage, the use and application of these technologies is often hindered by their complexity and the lack of easy-to-use software applications. To overcome these challenges, the OHEJP project ORION developed the Health Surveillance Ontology (HSO). This knowledge model forms a foundation for semantic interoperability in the domain of One Health Surveillance. It provides a solution for adding data from the target sectors (public health, animal health and food safety) in compliance with the FAIR principles, supporting interdisciplinary data exchange and usage. To provide use cases and facilitate access to HSO, we developed the One Health Linked Data Toolbox (OHLDT), which consists of three new, custom-developed web applications with specific functionalities. The first web application allows users to convert surveillance data available in Excel files online into HSO-RDF and vice versa, demonstrating that data provided in well-established data formats can be automatically translated into the linked data format HSO-RDF. The second application demonstrates the usage of HSO-RDF in an HSO triplestore database: in its user interface, the user can select HSO concepts by which to search and filter surveillance datasets stored in the triplestore, and the service then provides automatically generated dashboards based on the context of the data. The third web application demonstrates data interoperability in the OHS context by using HSO-RDF to annotate metadata and in this way link datasets across sectors; it provides a dashboard to compare public data on zoonosis surveillance provided by EFSA and ECDC. The first solution enables linked data production, while the second and third provide examples of linked data consumption and their value in enabling data interoperability across sectors. All described solutions are based on the open-source software KNIME and are deployed as web services via a KNIME Server hosted at the German Federal Institute for Risk Assessment. The semantic web extension of KNIME, which is based on the Apache Jena Framework, allowed rapid and easy development within the project. The underlying open-source KNIME workflows are freely available and can be easily customized by interested end users. With our applications, we demonstrate that the use of linked data has great potential for strengthening the use of FAIR data in OHS and interdisciplinary data exchange.
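The following base R sketch illustrates what "linked data production" means in practice: one row of tabular surveillance data is rewritten as RDF triples in Turtle syntax. The namespace URI and predicate names are hypothetical placeholders, not the actual HSO vocabulary, and the real conversion is performed by KNIME workflows.

```r
# Minimal sketch (base R) of turning one tabular surveillance record into RDF
# triples in Turtle syntax. Namespace and predicate names are hypothetical
# placeholders, not the actual HSO vocabulary.

record <- data.frame(
  id       = "sample_0001",
  pathogen = "Salmonella",
  sector   = "food",
  country  = "DE",
  year     = 2020L
)

prefix <- "@prefix hso: <http://example.org/hso#> ."

triple <- sprintf(
  "hso:%s a hso:SurveillanceRecord ;\n  hso:pathogen \"%s\" ;\n  hso:sector \"%s\" ;\n  hso:country \"%s\" ;\n  hso:year %d .",
  record$id, record$pathogen, record$sector, record$country, record$year
)

writeLines(c(prefix, "", triple), "record.ttl")
cat(readLines("record.ttl"), sep = "\n")   # inspect the generated Turtle
```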


2021, Vol 4
Author(s): Marion Gottschald, Birgit Lewicki, Alexander Falenski, Marco Rügen, Isaak Gerber, ...

In times of globalised food and feed trade, powerful integrative software tools are essential to solve foodborne crises quickly and reliably. The FoodChain-Lab web application (FCL Web; https://fcl-portal.bfr.berlin/) is such a tool. FCL Web is free and open-source software that helps to trace food backward and forward along complex global supply chains during foodborne disease outbreaks or other food-related events. In the framework of One Health EJP COHESIVE, the efforts of several national and international tracing-related software projects are integrated within FCL Web to provide a modular tracing platform following the One Health approach. FCL Web unifies interactive tracing data visualisation, analysis and reporting, and in the future data collection, in one modular tracing platform (Fig. 1). The interactive analysis module was developed in a project with EFSA and offers automated visualisation of supply chains based on the needs of the user. A data table displays key information on the food business operators and food items involved and includes comprehensive filter functions to analyse the information given in the table. The analysis module also helps to run simulations of hypothetical cross-contamination or geographic clustering events during outbreaks via a scoring algorithm for deliveries and food business operators. A pilot version of a reporting module was also integrated into FCL Web to display tracing, sample and case information in a format suitable for publishing tracing results in outbreak reports. A web-based tracing data collection mask, offering a guided and structured data assessment with access to curated data, was developed in a national project and will soon be integrated into FCL Web. Its multi-language design allows for potential Europe-wide use. In the future, more modules are planned for FCL Web, e.g. for analysing genome sequencing data in the context of tracing. With its features and its integrative approach, FCL Web blends seamlessly into the set of crucial tracing tool projects in Europe. In the future, these tools will be strongly interconnected to serve several tracing purposes at the local, national or European level. Hence, there is a need to improve the interoperability of the tools, e.g. via a universal data exchange format.
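As a rough illustration of how a scoring algorithm over deliveries might work, the following base R sketch scores each food business operator by how many case-associated locations it can reach downstream in a toy delivery network. The network, the case locations and the scoring rule are invented for illustration and are not FCL Web's actual algorithm.

```r
# Toy sketch (base R): score operators in a delivery network by downstream
# reachability of outbreak locations. Data and scoring rule are illustrative.

deliveries <- data.frame(
  from = c("producer_1", "producer_1", "wholesaler", "wholesaler", "producer_2"),
  to   = c("wholesaler", "retailer_C", "retailer_A", "retailer_B", "retailer_B")
)
case_locations <- c("retailer_A", "retailer_B")   # operators linked to cases

# all operators reachable downstream from a starting operator
reachable <- function(start, edges) {
  seen <- start
  repeat {
    nxt <- unique(edges$to[edges$from %in% seen])
    new <- setdiff(nxt, seen)
    if (length(new) == 0) break
    seen <- c(seen, new)
  }
  setdiff(seen, start)
}

operators <- unique(c(deliveries$from, deliveries$to))
scores <- sapply(operators, function(op)
  length(intersect(reachable(op, deliveries), case_locations)))

sort(scores, decreasing = TRUE)   # higher score = reaches more case locations
```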


2021, Vol 4
Author(s): Estibaliz Lopez de Abechuco, Nazareno Scaccia, Taras Günther, Matthias Filter

Efficient communication and collaboration across sectors are an important precondition for true One Health Surveillance (OHS) activities. Despite the overall willingness to embrace the One Health paradigm, it is still challenging to accomplish this in day-to-day practice due to differences in terminology and interpretation of sector-specific terms. In this sense, simple interventions like the inclusion of integrative glossaries in OHS documents (e.g. reports, research papers and guidelines) would help to reduce misunderstandings and could significantly improve written communication in OHS. Here, we present the Glossaryfication Web Service, which generates a document-specific glossary for any text file provided by the user. The web service automatically adds the available definitions with their corresponding references for the words in the document that match terms in the user-selected glossaries. The Glossaryfication Web Service was developed to provide added value to the OHEJP Glossary, which was developed within the OHEJP project ORION. The OHEJP Glossary improves communication and collaboration among OH sectors by providing an online resource that lists relevant OH terms and sector-specific definitions. The Glossaryfication Web Service supports the practical use of the curated OHEJP Glossary and can also source information from other glossaries relevant to OH professionals (currently the online CDC, WHO and EFSA glossaries). The Glossaryfication Web Service was created using the open-source software KNIME and the KNIME Text Processing extension (https://www.knime.com/knime-text-processing). The Glossaryfication KNIME workflow is deployed on BfR's KNIME Server infrastructure, providing an easy-to-use web interface where users can upload their documents (any text-type file, e.g. PDF, Word or Excel) and select the desired glossary to compare against. The workflow reads the document provided via the web interface and applies natural language processing (e.g. text cleaning, stemming), transformation (bag-of-words generation) and information retrieval methods to identify the matching terms in the selected glossaries. The Glossaryfication Web Service generates as output a table containing all the terms that match the selected glossaries. It also provides the available definitions, corresponding references and additional meta-information, e.g. the term frequency (how often each term appears in the given text) and the sectoral classification (only for OHEJP Glossary terms). Furthermore, the workflow generates a tag cloud in which terms are categorized as: (i) exact match, when the term in the text matches exactly an entry in the glossary; (ii) inexact match, when the term appears in the text slightly modified (e.g. plural forms or suffixes); and (iii) non-matching, corresponding to all other words in the text that do not match any glossary term. Through the user interface, users can then choose whether to download the whole list of terms, select only the exact or inexact matching terms, or choose only those terms and definitions that match the meaning intended in the user-provided document. The resulting table of terms can be downloaded as an Excel file and added to the user's document as a document-specific glossary. The Glossaryfication Web Service provides an easy-to-adopt solution to enrich documents and reports with more comprehensive and unambiguous glossaries. Furthermore, it improves the referencing of terms and definitions from different OH sectors. An additional feature of the Glossaryfication Web Service is the possibility of extending its use to glossaries from other national or international institutions, allowing users to customize this glossary creation service.
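The matching step can be illustrated with a short base R sketch that tokenizes a document and classifies each glossary term as an exact match, an inexact (crudely stemmed) match or non-matching, together with its term frequency. The production workflow instead uses KNIME Text Processing nodes; the glossary entries and document below are made up.

```r
# Simplified sketch (base R) of the glossary-matching idea: tokenize a document,
# then classify each glossary term as exact, inexact (crude singular/plural) or
# non-matching, and count term frequency. Glossary and document are made up.

glossary <- c("surveillance", "zoonosis", "hazard")

doc    <- "Surveillance of zoonotic hazards requires shared definitions. Surveillance data ..."
tokens <- tolower(unlist(strsplit(doc, "[^[:alpha:]]+")))
stem   <- function(w) sub("(es|s)$", "", w)          # very crude stemmer

result <- do.call(rbind, lapply(glossary, function(term) {
  exact   <- sum(tokens == term)
  inexact <- sum(stem(tokens) == stem(term)) - exact
  data.frame(term = term, exact = exact, inexact = inexact,
             match = if (exact > 0) "exact" else if (inexact > 0) "inexact" else "none")
}))

result   # one row per glossary term with its match category and frequencies
```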


2021, Vol 4
Author(s): Miguel Pinheiro, Ricardo Pais, Joana Isidro, Miguel Pinto, Carlijn Bogaardt, ...

A new era of virus surveillance is emerging based on the real-time monitoring of virus evolution at whole-genome scale (World Health Organization 2021). Although national and international health authorities have strongly recommended this technological transition, especially for influenza and SARS-CoV-2 (World Health Organization 2021, Revez et al. 2017), the implementation of genomic surveillance can be particularly challenging due to the lack of bioinformatics infrastructures and/or expertise to process and interpret next-generation sequencing (NGS) data (Oakeson et al. 2017). We developed and implemented the INSaFLU-TELE-Vir platform (https://insaflu.insa.pt/) (Borges et al. 2018), a free, influenza- and SARS-CoV-2-oriented, web-based bioinformatics suite that handles primary NGS data (reads) through to the automatic generation of the main "genetic requests" for effective and timely laboratory surveillance. By handling NGS data collected from any amplicon-based schema (making it applicable to other pathogens), INSaFLU-TELE-Vir enables any laboratory to perform multi-step and intensive bioinformatics analyses in a user-oriented manner without requiring advanced training. INSaFLU-TELE-Vir handles NGS data collected from distinct sequencing technologies (Illumina, Ion Torrent and Oxford Nanopore Technologies), with the possibility of constructing comparative analyses across technologies. It gives access to user-restricted sample databases and project management, being a transparent and flexible tool specifically designed to automatically update project outputs as more samples are uploaded. Data integration is thus cumulative and scalable, fitting the needs of both routine surveillance and outbreak investigation activities. The bioinformatics pipeline consists of six core steps: read quality analysis and improvement; human betacoronavirus (including SARS-CoV-2 Pango lineage) and influenza type/subtype classification; mutation detection and consensus generation; coverage analysis; alignment/phylogeny; and intra-host minor variant detection (with automatic detection of putative mixed infections). The multiple outputs are provided in nomenclature-stable and standardized formats that can be visualized and explored in situ or through multiple compatible downstream applications for fine-tuned data analysis. Novel features are being implemented in the INSaFLU-TELE-Vir bioinformatics toolkit as part of the OHEJP TELE-Vir project (https://onehealthejp.eu/jrp-tele-vir/), including rapid detection of selected genotype-phenotype associations and enhanced geotemporal data visualization. All the code is available on GitHub (https://github.com/INSaFLU), with the possibility of a local Docker installation (https://github.com/INSaFLU/docker). Detailed documentation and a tutorial are also available (https://insaflu.readthedocs.io/en/latest/). In summary, INSaFLU supplies public health laboratories and researchers with an open and user-friendly framework, enabling strengthened and timely multi-country genome-based virus surveillance.
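As an illustration of one of the core steps, coverage analysis, the following base R sketch summarises per-position depth and flags low-coverage stretches that would typically be masked in a consensus. The depth values, cut-off and masking rule are assumptions for the example, not INSaFLU's exact implementation.

```r
# Illustrative sketch (base R) of coverage analysis: summarise per-position
# sequencing depth and flag low-coverage stretches to mask in the consensus.
# Depths are simulated here; in practice they would come from the mapping step
# (e.g. a 'samtools depth' table).

set.seed(7)
genome_length <- 1000
depth <- rpois(genome_length, lambda = 80)
depth[400:450] <- rpois(51, lambda = 5)       # simulate a poorly covered amplicon

cutoff <- 30                                  # assumed minimum depth for consensus calling
low    <- depth < cutoff

c(mean_depth = mean(depth), pct_above_cutoff = 100 * mean(!low))

# contiguous low-coverage stretches (candidate regions to mask with N)
runs   <- rle(low)
ends   <- cumsum(runs$lengths)
starts <- ends - runs$lengths + 1
data.frame(start = starts[runs$values], end = ends[runs$values])
```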


2021, Vol 4
Author(s): Clemence Koren, David Swanson, Gry Grøneng, Gunnar Rø, Petter Hopp, ...

Sykdomspulsen is a real-time surveillance system developed by the Norwegian Institute of Public Health (NIPH) for One Health surveillance and the surveillance of other infectious diseases in humans, such as respiratory diseases and, lately, COVID-19. The One Health surveillance comprises Campylobacter data from humans and chicken farms, and also includes diagnosis codes from doctor appointments and weather data, with analyses forecasting outbreaks in Norway. It is a joint project between the Norwegian Institute of Public Health (NIPH) and the Norwegian Veterinary Institute (NVI), under the framework of the OHEJP NOVA (Novel approaches for design and evaluation of cost-effective surveillance across the food chain) and MATRIX (Connecting dimensions in One-Health surveillance) projects. The system relies on two pillars. The first is an analytics infrastructure which, in real time, retrieves data from tens of sources, cleans and harmonizes them, then runs over half a million analyses and produces over 20,000,000 rows of results each day. The analytics infrastructure is based on R. Results are notably used by NIPH for monitoring COVID-19 developments and for the surveillance of other transmittable diseases such as influenza and gastro-intestinal illness. The analytics framework also generates hundreds of reports every day, aimed at dissemination to municipal health authorities. This framework is not currently publicly available, but an open-source release is expected by the end of 2021. The second pillar is an interactive R Shiny dashboard platform, which is used for communicating the data and the model results to partner organisations. It allows for the easy creation of a website where public and animal health researchers and food safety experts can view real-time analyses. This dashboard combines the powerful data visualisation and analysis strengths of R with the accessibility, flexibility, structure and interactivity of web-based platforms. The result is a real-time interactive surveillance system that is supported by a solid infrastructure and streamlined data flow and is shared with actors through an attractive, user-friendly website, based entirely on R.
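A minimal sketch of the second pillar could look like the following R Shiny app, which lets a user pick a disease and view daily case counts. The data frame here is simulated, whereas the real dashboards read pre-computed results from the analytics infrastructure.

```r
# Minimal sketch of an R Shiny surveillance dashboard: select a disease and
# plot daily case counts. The data are simulated for illustration only.

library(shiny)

surveillance <- data.frame(
  date    = rep(seq(as.Date("2021-01-01"), by = "day", length.out = 90), 2),
  disease = rep(c("campylobacteriosis", "influenza"), each = 90),
  cases   = rpois(180, lambda = 20)
)

ui <- fluidPage(
  titlePanel("Surveillance dashboard (illustrative)"),
  selectInput("disease", "Disease", unique(surveillance$disease)),
  plotOutput("trend")
)

server <- function(input, output, session) {
  output$trend <- renderPlot({
    d <- surveillance[surveillance$disease == input$disease, ]
    plot(d$date, d$cases, type = "l",
         xlab = "Date", ylab = "Reported cases", main = input$disease)
  })
}

shinyApp(ui, server)   # run locally to open the interactive dashboard
```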


2021, Vol 4
Author(s): Jeevan Karloss Antony-Samy, Georgios Marselis, Eve Fiskebeck, Taran Skjerdal, Camilla Sekse, ...

Managing sequence data, associated metadata, bioinformatics analyses and results can be challenging. In a One Health context, the challenge is even larger, as there are many actors involved, many diverse types of results need to be produced, and the ensuing process data, such as software versions and options, have to be tracked for auditing purposes. In addition, results must often be produced rapidly to be actionable, and non-bioinformaticians should be able to perform the analyses. Therefore, a graphical user interface (preferably a web system) with pipelines and visualization tools is needed for these analyses. The Public Health Agency of Canada has, together with other actors, developed the web-based system IRIDA (https://www.irida.ca), which uses Galaxy for analyses. IRIDA comes with a set of pipelines, visualization tools and a project-based data management system that allows for fine-grained data access control, which satisfies many of the requirements that a One Health bioinformatics platform dictates. However, as is often the case with a system meant to satisfy high demands, the platform is not trivial to set up and adapt for local use. In our setup, we are using two web servers, two database servers and one file server. The IRIDA web server provides the user interface. The Galaxy web server receives commands from IRIDA, executes them and returns results. Each web server has a database that keeps its respective metadata: user information, file locations and results. The actual files are stored on the file server. This hub-and-spoke infrastructure was implemented to ensure minimum disruption of service if a component should go down. To get the necessary compute resources for this system, we are contracting with the Norwegian Research and Education Cloud (NREC), which offers Infrastructure-as-a-Service (IaaS) for Norwegian institutions and universities. NREC utilizes template VM images which can be instantiated according to need. The automated configuration and orchestration of images ensures that we have dynamic access to resources according to need. This dynamic scaling is accomplished through collaboration with Elixir Norway, who have implemented the Pulse software, which checks usage and instantiates or takes down virtual machines as needed. At the Institute, we have spent close to two years exploring and setting up this system. We have learned that it is important not to underestimate the amount of compute resources needed for a solid setup. However, having enough compute is irrelevant without knowledgeable staff. IRIDA comes with many features, which require considerable prior knowledge to adapt and set up in a local infrastructure. This includes knowledge of web servers, database systems, Linux administration and Galaxy systems administration. The complexity dictates that these systems need to be set up and managed by in-house, IT-trained staff who will be able to maintain the system along the way. It is also very important to maintain interactions with the users of the system, to ensure that the setup produces results that are useful to them. To accomplish this, bioinformaticians are needed to develop pipelines and visualizations whose results are, on their own, easy for users to interpret in a biologically correct manner. Last but not least, such systems require a significant investment from the institution; it is therefore important to showcase the benefits that the system will provide.


2021, Vol 4
Author(s): Antonio Rodríguez, Ana de la Torre

The widespread use of antibiotics undermines their therapeutic effectiveness and drives the emergence of antimicrobial resistance, a major threat to both animal and human health. Since most veterinary antibiotics employed in livestock production are excreted essentially unaltered, they have been identified as major contributors to environmental contamination. However, efforts to monitor antimicrobial effects are focused on humans and livestock, neglecting the environment. European Union institutions have recognized this gap and adopted an approach that includes prioritizing environmental tracking and building the tools to make it economically accessible. This work has three main aims. Firstly, to fill this gap by applying the methodological IT approach (the soil vulnerability map for antibiotic contamination) developed by De La Torre et al. (2012). Secondly, to identify the main livestock species and scenarios (agriculture and pasture) to be prioritized in surveillance efforts. Finally, to inform the code of agricultural practices and the stocking rates of grazing animals based on areas of high vulnerability to antibiotic contamination. To facilitate the implementation of this risk evaluation procedure, we developed an interactive tool that allows users to obtain downloadable maps of soil vulnerability to contamination for several land-use (agriculture and pasture) and livestock (cattle, pig and chicken) scenarios for any veterinary antibiotic. Additionally, the tool allows users to obtain a plot of the mean vulnerability of each administrative unit considered. We implemented the tool for the European Union countries as an example, but it could be applied to individual countries or even at regional or sub-national scales.
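The kind of gridded index behind such maps can be sketched in a few lines of base R: each cell combines a livestock-derived load with a soil retention factor into a relative vulnerability score. The weighting below is purely illustrative and is not the method of De La Torre et al. (2012).

```r
# Toy sketch (base R) of a gridded vulnerability index: combine livestock
# density and a soil retention factor into a relative score per cell.
# The weighting is hypothetical, NOT the De La Torre et al. (2012) method.

set.seed(42)
n <- 25                                                    # 25 x 25 grid cells
livestock_density <- matrix(runif(n * n, 0, 500), n, n)    # animals per km2 (simulated)
soil_retention    <- matrix(runif(n * n, 0, 1),   n, n)    # 0 = antibiotic leaches freely

# higher excretion load and lower retention -> higher vulnerability
vulnerability <- scale(as.vector(livestock_density * (1 - soil_retention)))
vuln_map      <- matrix(vulnerability, n, n)

image(vuln_map, main = "Relative soil vulnerability (illustrative)",
      xlab = "", ylab = "", axes = FALSE)

mean(vuln_map)   # e.g. mean vulnerability for an administrative unit
```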

