Mapping the impact of papers on various status groups in excellencemapping.net: a new release of the excellence mapping tool based on citation and reader scores

2021 ◽  
Author(s):  
Lutz Bornmann ◽  
Rüdiger Mutz ◽  
Robin Haunschild ◽  
Felix de Moya-Anegon ◽  
Mirko de Almeida Madeira Clemente ◽  
...  

Abstract
Over more than five years, Bornmann, Stefaner, de Moya Anegón, and Mutz (2014b, 2014c, 2015) have published several releases of the www.excellencemapping.net tool, revealing (clusters of) excellent institutions worldwide based on citation data. With the new release, a completely revised tool has been published. It is based not only on citation data (bibliometrics) but also on Mendeley data (altmetrics). The tool's measurement of institutional impact has thus been expanded to cover additional status groups besides researchers, such as students and librarians. Furthermore, the visualization of the data has been completely updated, improving operability for the user and adding new features such as institutional profile pages. In this paper, we describe the datasets behind the current excellencemapping.net tool and the indicators applied. Furthermore, the statistics underlying the tool and the use of the web application are explained.

2020 ◽  
Vol 13 (6) ◽  
pp. 94-109
Author(s):  
Rajeev Kumar ◽  
Mamdouh Alenezi ◽  
Md Ansari ◽  
Bineet Gupta ◽  
...  

Nowadays, most cyber-attacks are initiated by malicious programs known as malware. Malware is highly virulent and can penetrate the security of information and communication systems. While different techniques are available for malware analysis, selecting the most effective approach is challenging. In this context, a decision-making process can be an efficient means of empirically assessing the impact of different methods for securing web applications. In this research study, we use a methodology that integrates the Fuzzy AHP and Fuzzy TOPSIS techniques to evaluate the impact of different malware analysis techniques from a web application perspective. The study uses different versions of a university's web application to evaluate several existing malware analysis techniques. The findings show that Reverse Engineering is the most efficient technique for analyzing complex malware. The outcome of this study should aid future researchers and developers in selecting appropriate techniques for scanning web application code and enhancing security.
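
To illustrate the kind of multi-criteria ranking an integrated Fuzzy AHP–Fuzzy TOPSIS methodology produces, here is a minimal fuzzy-TOPSIS sketch in Python. The alternatives, criteria, ratings and weights are hypothetical placeholders, not the study's data; in the paper, the weights would come from the Fuzzy AHP step.

```python
# Minimal fuzzy-TOPSIS sketch with hypothetical data (not the study's).
import math

def vertex_dist(a, b):
    """Euclidean (vertex) distance between two triangular fuzzy numbers."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / 3)

alternatives = ["static analysis", "dynamic analysis", "reverse engineering"]
# Hypothetical fuzzy ratings (low, mid, high) on a 0-10 scale,
# for two benefit criteria (e.g. detection depth, coverage).
matrix = [
    [(3, 5, 7), (5, 7, 9)],
    [(5, 7, 9), (3, 5, 7)],
    [(7, 9, 10), (7, 9, 10)],
]
# Hypothetical fuzzy criterion weights, e.g. produced by a fuzzy-AHP step.
weights = [(0.4, 0.5, 0.6), (0.4, 0.5, 0.6)]

# Normalize each benefit criterion by its largest upper bound, then weight.
n_crit = len(weights)
u_max = [max(row[j][2] for row in matrix) for j in range(n_crit)]
weighted = [
    [tuple(v * w / u_max[j] for v, w in zip(cell, weights[j]))
     for j, cell in enumerate(row)]
    for row in matrix
]

# Closeness to the fuzzy positive ideal (1,1,1) vs. negative ideal (0,0,0).
fpis, fnis = (1.0, 1.0, 1.0), (0.0, 0.0, 0.0)
for name, row in zip(alternatives, weighted):
    d_plus = sum(vertex_dist(cell, fpis) for cell in row)
    d_minus = sum(vertex_dist(cell, fnis) for cell in row)
    cc = d_minus / (d_plus + d_minus)  # higher = closer to the ideal
    print(f"{name}: closeness = {cc:.3f}")
```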


Author(s):  
Nashat Mansour ◽  
Nabil Baba

The number of internet web applications is rapidly increasing in a variety of fields, yet little work has been done to ensure their quality, especially after modification. Modifying any part of a web application may affect other parts. If the stability of a web application is poor, the impact of modification will be costly in terms of maintenance and testing. Ripple effect is a measure of the structural stability of source code upon changing a part of it, providing an assessment of how much a local modification in the web application may affect other parts. Limited work has been published on computing the ripple effect for web applications. In this paper, the authors propose a technique for computing the ripple effect in web applications, based on direct-change impact analysis and dependence analysis for web applications developed in the .Net environment. A complexity metric is also proposed for inclusion in the ripple effect computation.
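
As a rough illustration of the idea (not the authors' technique, which adds direct-change impact analysis and a complexity metric for .Net applications), a ripple-effect score can be sketched as the fraction of components transitively reachable from a changed component in a dependence graph:

```python
# Toy ripple-effect sketch over a hypothetical component dependence graph.
from collections import defaultdict, deque

# "a -> b" means: a change in a may propagate to b (b depends on a).
impacts = defaultdict(list)
for src, dst in [("Login.aspx", "Session.cs"), ("Session.cs", "Cart.aspx"),
                 ("Session.cs", "Profile.aspx"), ("Cart.aspx", "Checkout.aspx")]:
    impacts[src].append(dst)

def ripple(changed: str) -> set[str]:
    """All components transitively affected by modifying `changed`."""
    seen, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for nxt in impacts[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

affected = ripple("Session.cs")
n_components = 5  # total components in this toy system
print(f"ripple set: {sorted(affected)}")
print(f"ripple effect of Session.cs: {len(affected) / (n_components - 1):.2f}")
```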


PeerJ ◽  
2020 ◽  
Vol 8 ◽  
pp. e9277
Author(s):  
Xinming Lin ◽  
Huiying Ren ◽  
Amy E. Goldman ◽  
James C. Stegen ◽  
Timothy D. Scheibe

Background
The Worldwide Hydrobiogeochemistry Observation Network for Dynamic River Systems (WHONDRS) is a consortium that aims to understand complex hydrologic, biogeochemical, and microbial connections within river corridors experiencing perturbations such as dam operations, floods, and droughts. For one ongoing WHONDRS sampling campaign, surface water metabolite and microbiome samples are collected through a global survey to generate knowledge across diverse river corridors. Metabolomics analysis and a suite of geochemical analyses have been performed for the collected samples through the Environmental Molecular Sciences Laboratory (EMSL). The resulting knowledge and data package inform mechanistic and data-driven models to enhance predictions of the outcomes of hydrologic perturbations and of watershed function, one of the most critical components in model-data integration. To support this multi-domain integration effort and make the ever-growing data package more accessible to researchers across the world, a Shiny/R Graphical User Interface (GUI) called WHONDRS-GUI was created.
Results
The web application can be run in any modern web browser without any programming or operating system requirements, thus providing an open, well-structured, discoverable dataset for WHONDRS. Together with a context-aware dynamic user interface, the WHONDRS-GUI has functionality for searching, compiling, integrating, visualizing and exporting different data types that can easily be used by the community. The web application and data package are available at https://data.ess-dive.lbl.gov/view/doi:10.15485/1484811, which enables users to obtain the data and code together and subsequently run the web app locally. The WHONDRS-GUI is also available for online use at Shiny Server (https://xmlin.shinyapps.io/whondrs/).


2019 ◽  
Author(s):  
Lukas Jendele ◽  
Radoslav Krivak ◽  
Petr Skoda ◽  
Marian Novotny ◽  
David Hoksza

Abstract
PrankWeb is an online resource providing an interface to P2Rank, a state-of-the-art ligand binding site prediction method. P2Rank is a template-free machine learning method based on predicting the ligandability of local chemical neighborhoods centered on points placed on the solvent-accessible surface of a protein. Points with a high ligandability score are then clustered to form the resulting ligand binding sites. On top of that, PrankWeb provides a web interface enabling users to easily carry out the prediction and visually inspect the predicted binding sites via an integrated sequence-structure view. Moreover, PrankWeb can determine sequence conservation for the input molecule and use it in both the prediction and result visualization steps. Alongside its online visualization options, PrankWeb also offers the possibility to export the results as a PyMOL script for offline visualization. The web frontend communicates with the server side via a REST API; in high-throughput scenarios, users can therefore utilize the server API directly, bypassing the need for a web-based frontend or installation of the P2Rank application. PrankWeb is available at http://prankweb.cz/. The source code of the web application and the P2Rank method can be accessed at https://github.com/jendelel/PrankWebApp and https://github.com/rdk/p2rank, respectively.
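
For the high-throughput route, a client script would talk to the server's REST API directly. The sketch below shows the general shape of such a client in Python; the endpoint paths, payload fields and response keys are assumptions for illustration only, so consult the PrankWeb documentation for the actual routes.

```python
# Hedged sketch of a REST client for a prediction server. All endpoint
# paths and JSON field names below are hypothetical placeholders; check
# the PrankWeb API documentation for the real ones.
import requests

BASE = "http://prankweb.cz"  # server from the paper
PDB_ID = "2src"              # example structure identifier (illustrative)

# Hypothetical: submit a prediction task for a PDB entry ...
resp = requests.post(f"{BASE}/api/predict", json={"pdbId": PDB_ID}, timeout=60)
resp.raise_for_status()
task = resp.json()

# ... then fetch the predicted binding sites (endpoint and schema assumed).
result = requests.get(f"{BASE}/api/result/{task['id']}", timeout=60).json()
for pocket in result.get("pockets", []):
    print(pocket.get("rank"), pocket.get("score"), pocket.get("residues"))
```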


2017 ◽  
Author(s):  
Richard Newton ◽  
Lorenz Wernisch

Abstract
Background
The outcome of the analysis of high-throughput genomics experiments is commonly a list of genes. The most basic measure of association is whether the genes in the list have ever been co-cited together.
Results
The web application gene-cocite accepts a list of genes and returns the papers that co-cite any two or more of them. It reports the proportion of genes co-cited with at least one other gene in the list, along with the p-value for the probability of this proportion of co-citations occurring by chance in a random gene list of the same length. An interactive graph with links to papers shows how the genes in the list are related to each other through publications.
Conclusions
gene-cocite (http://sysbio.mrc-bsu.cam.ac.uk/gene-cocite) is designed to be an easy-to-use first step for biological researchers investigating the background of their gene list.
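
The permutation logic behind such a p-value can be sketched as follows; the co-citation data here are a toy stand-in (gene-cocite itself queries real publication records), and the resampling scheme is one plausible reading of the test, not necessarily the exact computation the service performs.

```python
# Empirical p-value for co-citation enrichment via random gene lists.
import random

# Toy co-citation data: paper -> set of genes it mentions (hypothetical).
papers = [{"TP53", "MDM2"}, {"TP53", "BRCA1"}, {"EGFR", "KRAS"}, {"MYC"}]
universe = sorted({g for p in papers for g in p})

def cocited_proportion(genes: set[str]) -> float:
    """Fraction of `genes` co-cited with at least one other gene in the list."""
    hit = {g for p in papers for g in genes & p if len(genes & p) >= 2}
    return len(hit) / len(genes)

query = {"TP53", "MDM2", "MYC"}
observed = cocited_proportion(query)

# How often does a random list of the same length do at least as well?
trials = 10_000
wins = sum(
    cocited_proportion(set(random.sample(universe, len(query)))) >= observed
    for _ in range(trials)
)
print(f"observed proportion = {observed:.2f}, empirical p = {wins / trials:.4f}")
```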


2018 ◽  
Vol 7 (2.30) ◽  
pp. 6
Author(s):  
Daljit Kaur ◽  
Dr Parminder Kaur

With the growth of the web and the Internet, every area of human life has been affected. People want to make their own or their organization's presence globally visible through this medium. Web applications and mobile apps are used to build that recognition and to attract clients worldwide. Under pressure to put a business or service online faster than anyone else, developers build web applications in haste and often skip the few essential activities that would secure them against severe attacks, which can mean substantial losses for the business. This work is an effort to understand the complex distributed environment of web applications and to show the impact of rushing the web development process.


Author(s):  
David Parsons

This chapter explores how Web application software architecture has evolved from the simple beginnings of static content, through dynamic content, to adaptive content and the integrated client-server technologies of the Web 2.0. It reviews how various technologies and standards have developed in a repeating cycle of innovation, which tends to fragment the Web environment, followed by standardisation, which enables the wider reach of new technologies. It examines the impact of the Web 2.0, XML, Ajax and mobile Web clients on Web application architectures, and how server side processes can support increasingly rich, diverse and interactive clients. It provides an overview of a server-side Java-based architecture for contemporary Web applications that demonstrates some of the key concepts under discussion. By outlining the various forces that influence architectural decisions, this chapter should help developers to take advantage of the potential of innovative technologies without sacrificing the broad reach of standards based development.


2020 ◽  
Author(s):  
Moritz Langenstein ◽  
Henning Hermjakob ◽  
Manuel Bernal Llinares

Abstract
Motivation
Curation is essential for any data platform to maintain the quality of the data it provides. Existing databases, which require maintenance, and the amount of newly published information that needs to be surveyed are both growing rapidly. More efficient curation is often vital to keep up with this growth, requiring modern curation tools. However, curation interfaces are often complex and difficult to develop further. Furthermore, opportunities for experimentation with curation workflows may be lost due to a lack of development resources, or a reluctance to change sensitive production systems.
Results
We propose a decoupled, modular and scriptable architecture for building curation tools on top of existing platforms. Instead of modifying the existing infrastructure, our architecture treats the existing platform as a black box and relies only on its public APIs and web application. As a decoupled program, the tool gives more freedom to developers and curators. This added flexibility allows new curation workflows to be prototyped quickly, as well as all kinds of analysis to be added around the data platform. The tool can also streamline and enhance the curator's interaction with the web interface of the platform. We have implemented this design in cmd-iaso, a command-line curation tool for the identifiers.org registry.
Availability
The cmd-iaso curation tool is implemented in Python 3.7+ and supports Linux, macOS and Windows. Its source code and documentation are freely available from https://github.com/identifiers-org/cmd-iaso. It is also published as a Docker container at https://hub.docker.com/r/identifiersorg/cmd-iaso.
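
The decoupled pattern is easy to picture: a standalone script that never touches the platform's internals and works only through its public API. The sketch below checks whether resolved identifiers.org URLs still respond, a typical curation-style task; the resolver endpoint is the public identifiers.org resolution service, but the exact response fields used here should be treated as assumptions to verify against the current API documentation.

```python
# Curation-style check against a platform treated as a black box.
# Response field names are assumptions; verify against the API docs.
import requests

def resolve(compact_id: str) -> list[str]:
    """Ask the public identifiers.org resolver where a compact ID points."""
    url = f"https://resolver.api.identifiers.org/{compact_id}"
    payload = requests.get(url, timeout=30).json()
    resources = payload.get("payload", {}).get("resolvedResources", [])
    return [r.get("compactIdentifierResolvedUrl", "") for r in resources]

# External health check: do the resolved target URLs still respond?
for target in resolve("GO:0006915"):
    status = requests.head(target, allow_redirects=True, timeout=30).status_code
    print(status, target)
```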


Author(s):  
Dinara Abiyeva ◽  
Roza Karagulova ◽  
Aiman Nysanbaeva ◽  
Nurlan Abayev ◽  
Gulzhamila Urazbayeva ◽  
...  

Climate change modelling data come as large datasets that require particular expertise and computational resources to transform and adjust to user needs. This article considers geospatial web applications and geoportals as a solution to this problem. Global web resources do not provide geoinformation services for research on climate change in Kazakhstan, owing to aggregation or low resolution of the source data coupled with limited functionality for interactive geo-visualization and data analysis. The article describes the web application "Kazakhstan Climate Change" developed by the authors, whose purpose is to support research on spatial-temporal patterns of climate change in Kazakhstan. Data derived from CMIP5 models served as the source data. From the initial indicators, temperature and precipitation, additional indicators such as evapotranspiration, drought indices, heat supply indices and indices of the length of the growing season were calculated using Python scripts developed by the authors and the R Climpact climate script packages, in order to determine the impact of climate change on water resources and agriculture. The key advantages of the web application include time-series geo-visualization and interactive generation of diagrams and tables for analysis, in particular for selected units of water management zoning. The geospatial web application "Kazakhstan Climate Change" responds to the challenge of presenting large climate datasets in an easy-to-perceive style suited to geospatial analysis. Its functionality allows users without GIS skills to explore climate change scenarios on their own, which is of practical value for the scientific and educational community and for policymakers in the fields of climate change and water resources management.
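
As a flavor of the index-derivation step, the sketch below computes two simple indicators (days above a growing-season temperature threshold, and the longest dry spell) from synthetic daily series. The authors' pipeline instead runs their own Python scripts and the R Climpact package over CMIP5 outputs, so this is only a schematic stand-in.

```python
# Schematic derivation of simple climate indices from synthetic daily data.
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(365)
# Synthetic daily mean temperature (degC) with a seasonal cycle plus noise.
tas = 8 + 18 * np.sin((days - 105) / 365 * 2 * np.pi) + rng.normal(0, 2, 365)

THRESHOLD = 5.0  # degC, a common growing-season threshold
growing_days = int((tas > THRESHOLD).sum())
print(f"days above {THRESHOLD} degC: {growing_days}")

# A crude dryness indicator: longest run of days with < 1 mm precipitation.
pr = rng.gamma(shape=0.6, scale=3.0, size=365)  # synthetic daily precip (mm)
dry_spell = max(
    len(run) for run in "".join("d" if p < 1.0 else "w" for p in pr).split("w")
)
print(f"longest dry spell (<1 mm/day): {dry_spell} days")
```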


2017 ◽  
Vol 2 (1) ◽  
pp. 28-47 ◽  
Author(s):  
Valérie Guillard ◽  
Olivier Couvert ◽  
Valérie Stahl ◽  
Patrice Buche ◽  
Aurélie Hanin ◽  
...  

Abstract
In this paper, we present the implementation of a dedicated software tool, MAP-OPT, for optimising the design of Modified Atmosphere Packaging (MAP) of refrigerated, fresh, non-respiring food products. The core principle of this software is to simulate the impact of gas (O2/CO2) exchanges on the growth of gas-sensitive microorganisms in the packed food system. In its simplest use, this tool, associated with a data warehouse storing food, bacteria and packaging properties, allows the user to explore his/her system in a user-friendly manner by adjusting or changing the pack geometry, packaging material and gas composition (mixture of O2/CO2/N2). Via the @Web application, the data warehouse associated with MAP-OPT is structured by an ontology, which allows data to be collected and stored with a standardized format and vocabulary so that they can easily be retrieved using a standard querying methodology. In an optimisation approach, the MAP-OPT software makes it possible to determine the packaging characteristics (e.g. gas permeability) suitable for a target application (e.g. maximal bacterial population at the best-before date). These target permeabilities are then used to query the packaging data warehouse via the @Web application, which proposes a ranking of the most satisfying materials for the target application (i.e. packaging materials whose characteristics are closest to the targets identified by MAP-OPT). This approach allows a more rational dimensioning of MAP for non-respiring food products by selecting the packaging material fitted to "just necessary" barrier properties, rather than defaulting to the material with the greatest ones. A working example of MAP dimensioning for a strictly aerobic, CO2-sensitive microorganism, Pseudomonas fluorescens, is given to highlight the usefulness of the software.
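
The gas-exchange core of such a simulation reduces to a mass balance on the pack headspace. The sketch below integrates steady-state O2 permeation through the film with illustrative, assumed parameter values; MAP-OPT's actual model additionally couples the gas balance with microbial growth.

```python
# Toy headspace mass balance for O2 permeation through a package film.
# Film flux: J = Perm * A * (p_out - p_in) / L  (steady-state permeation)
# All parameter values are illustrative assumptions, not MAP-OPT's data.

perm = 1e-16       # O2 permeability, mol.m/(m2.s.Pa)
area = 0.05        # film area, m2
thickness = 50e-6  # film thickness, m
volume = 5e-4      # headspace volume, m3
T, R = 277.0, 8.314  # storage at 4 degC; gas constant, J/(mol.K)

p_out = 21_000.0   # atmospheric O2 partial pressure, Pa
p_in = 0.0         # pack flushed with N2/CO2: no O2 at t = 0

dt, t_end = 3600.0, 10 * 24 * 3600  # 1 h steps over 10 days
t = 0.0
while t < t_end:
    flux = perm * area * (p_out - p_in) / thickness  # mol/s entering the pack
    p_in += flux * R * T / volume * dt               # ideal-gas pressure rise
    t += dt

print(f"headspace O2 after 10 days: {p_in / 1000:.2f} kPa")
```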

