An open framework for biodiversity databases

2014 ◽  
Author(s):  
Miklós Bán ◽  
Zsolt Végvári ◽  
Sándor Bérces

OpenBioMaps is a recently developed, open and free web application built on a distributed database back end and operated by several universities and national parks. The system provides free web map services and open database access using standard OGC protocols. One of its main features is that users can create new, independent database projects; the databases involved are maintained by the data providers themselves. Standard tools supplied to users include repeatable and referable data queries, data export, evaluation, and tracking of data changes. The system also provides a programmable data service to support further data processing.
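As a rough illustration of the standard OGC access described above, the sketch below issues a WFS GetFeature request with Python's requests library; the endpoint URL and layer name are placeholders, not documented OpenBioMaps identifiers.

```python
# Minimal sketch: fetching occurrence records from a hypothetical project
# exposed through a standard OGC WFS endpoint. The base URL and layer name
# are placeholders, not documented OpenBioMaps identifiers.
import requests

WFS_URL = "https://example.openbiomaps.org/geoserver/wfs"  # hypothetical endpoint
params = {
    "service": "WFS",
    "version": "2.0.0",
    "request": "GetFeature",
    "typeNames": "project:occurrences",   # hypothetical layer name
    "outputFormat": "application/json",   # GeoJSON, if the server supports it
    "count": 100,                         # limit the number of returned features
}

response = requests.get(WFS_URL, params=params, timeout=30)
response.raise_for_status()
features = response.json()["features"]
print(f"Fetched {len(features)} occurrence records")
```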

2005 ◽  
Vol 20 (16) ◽  
pp. 3877-3879 ◽  
Author(s):  
ALEXANDRE VANIACHINE ◽  
DAVID MALON ◽  
MATTHEW VRANICAR

HEP collaborations are deploying grid technologies to address petabyte-scale data processing challenges. In addition to file-based event data, HEP data processing requires access to terabytes of non-event data (detector conditions, calibrations, etc.) stored in relational databases. Existing database access control technologies for grid computing are limited to encrypted message transfers, which is inadequate for delivering non-event data in these amounts. To overcome these database access limitations one must go beyond the existing grid infrastructure. A proposed hyperinfrastructure of distributed database services implements efficient, secure data access methods. We introduce several technologies that lay the foundation of this new hyperinfrastructure, presenting efficient secure data transfer methods and secure grid query engine technologies that federate heterogeneous databases. Lessons learned in the production environment of the ATLAS Data Challenges are presented.
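The following is a conceptual sketch only of the federation idea (running one parameterized query across several independent relational backends and merging the rows); in-memory SQLite stands in for the heterogeneous databases, and the table layout is invented, not the ATLAS conditions schema or the grid query engine itself.

```python
# Conceptual sketch: federate a query over several independent "conditions"
# databases and merge the results. SQLite is a stand-in backend; the schema
# is invented for illustration only.
import sqlite3

def make_site_db(rows):
    """Create an in-memory database simulating one site's conditions data."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE conditions (channel INTEGER, run INTEGER, value REAL)")
    conn.executemany("INSERT INTO conditions VALUES (?, ?, ?)", rows)
    return conn

sites = [
    make_site_db([(1, 100, 0.97), (2, 100, 1.02)]),
    make_site_db([(3, 100, 0.99), (4, 100, 1.01)]),
]

def federated_query(connections, run):
    """Run the same parameterized query on every backend and merge the rows."""
    merged = []
    for conn in connections:
        merged.extend(conn.execute(
            "SELECT channel, value FROM conditions WHERE run = ?", (run,)))
    return sorted(merged)

print(federated_query(sites, run=100))
```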


2018 ◽  
Vol 7 (2.9) ◽  
pp. 24
Author(s):  
Natarajan M ◽  
Manimegalai R

A distributed database is a collection of multiple databases that can be stored at different network sites. It plays an important role today in storing and retrieving huge volumes of data. A distributed database offers advantages such as data replication, low operating costs, and faster data transactions and processing, but security remains a significant problem. This paper explains the security issues of distributed databases and gives suggestions for improving their security. Subsequently, a secure distributed database design based on a trusted node is proposed. The design designates a special node at each site, called the trusted node, through which every other node accesses the database. The trusted node processes client requests, joins the results from the concerned distributed databases, and forwards them to the verified client. The mechanism adopted by the trusted nodes to provide authentication is the Key Agreement based Secure Kerberos Authentication Protocol (KASKAP), so only authenticated users can access the database.
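A toy sketch of the trusted-node flow described above, assuming a simple token check in place of the KASKAP handshake (which the abstract does not detail); the databases, table, and query are invented for illustration.

```python
# Toy sketch of the trusted-node idea: one node authenticates the client,
# queries each site's database, joins the partial results, and returns them.
# The token check is a placeholder for the Kerberos-style KASKAP handshake,
# which is not reproduced here.
import sqlite3

VALID_TOKENS = {"ticket-123"}  # stand-in for issued Kerberos-style tickets

def make_site(rows):
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (order_id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)
    return conn

sites = [make_site([(1, 10.0), (2, 20.0)]), make_site([(3, 5.5)])]

def trusted_node(token, min_amount):
    """Authenticate the client, then fan the query out to every site."""
    if token not in VALID_TOKENS:
        raise PermissionError("client is not authenticated")
    results = []
    for site in sites:
        results.extend(site.execute(
            "SELECT order_id, amount FROM orders WHERE amount >= ?", (min_amount,)))
    return sorted(results)  # combined answer forwarded to the verified client

print(trusted_node("ticket-123", min_amount=6.0))
```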


2000 ◽  
Vol 12 (5) ◽  
pp. 802-820 ◽  
Author(s):  
S. Papastavrou ◽  
G. Samaras ◽  
E. Pitoura


Author(s):  
Y. K. Zhou

Accurate extraction of vegetation phenology information plays an important role in exploring the effects of climate change on vegetation. Repeated photographs from digital cameras are a useful and abundant data source for phenological analysis, but processing and mining such data remain a major challenge, and there is no single tool or universal solution for big data processing and visualization in the field of phenology extraction. In this paper, we propose an R Shiny based web application for extracting and analysing vegetation phenological parameters. Its main functions include visualization of phenological site distributions, ROI (region of interest) selection, vegetation index calculation and visualization, data filtering, growth trajectory fitting, and phenology parameter extraction. As an example, the long-term observational photography data from the Freemanwood site in 2013 were processed with this system. The results show that (1) the system is capable of analysing large datasets using a distributed framework, and (2) combining multiple parameter extraction and growth curve fitting methods can effectively extract the key phenology parameters, although different combinations yield discrepancies in particular study areas. Vegetation with a single growth peak is best fitted with the double logistic model, while vegetation with multiple growth peaks is better handled with the spline method.
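As a minimal sketch of the growth-trajectory-fitting step (the described tool itself is built with R Shiny), the Python example below fits a double logistic curve to a synthetic green chromatic coordinate (GCC) series with SciPy; the data and parameter values are invented.

```python
# Minimal sketch: fit a double logistic curve to a synthetic GCC time series,
# the kind of step the described web application automates. Data and
# parameters are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def double_logistic(t, base, amp, k1, t1, k2, t2):
    """base + amp * (rising sigmoid - falling sigmoid)."""
    return base + amp * (1.0 / (1.0 + np.exp(-k1 * (t - t1)))
                         - 1.0 / (1.0 + np.exp(-k2 * (t - t2))))

doy = np.arange(1, 366)  # day of year
true = double_logistic(doy, 0.33, 0.12, 0.10, 120, 0.08, 280)
gcc = true + np.random.default_rng(0).normal(0, 0.005, doy.size)  # noisy "observations"

p0 = [0.3, 0.1, 0.1, 100, 0.1, 250]  # rough initial guesses
params, _ = curve_fit(double_logistic, doy, gcc, p0=p0, maxfev=10000)
print(f"estimated start of season ~ day {params[3]:.0f}, "
      f"end of season ~ day {params[5]:.0f}")
```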


2020 ◽  
Author(s):  
Stevenn Volant ◽  
Pierre Lechat ◽  
Perrine Woringer ◽  
Laurence Motreff ◽  
Christophe Malabat ◽  
...  

Abstract. Background: Comparing the composition of microbial communities among groups of interest (e.g., patients vs healthy individuals) is a central aspect of microbiome research. It typically involves sequencing, data processing, statistical analysis and graphical representation of the detected signatures. Such an analysis is normally carried out with a set of different applications, each requiring specific expertise for installation and data processing and, in some cases, programming skills.

Results: Here we present SHAMAN, an interactive web application developed to combine (i) a bioinformatic workflow for metataxonomic analysis, (ii) reliable statistical modelling and (iii) one of the largest panels of interactive visualizations among the options currently available. SHAMAN is specifically designed for non-expert users, who benefit from an integrated version of the different analytic steps underlying a proper metagenomic analysis. The application is freely accessible at http://shaman.pasteur.fr/, and may also be run as a standalone application with a Docker container (aghozlane/shaman), conda and R. The source code is written in R and is available at https://github.com/aghozlane/shaman. Using two datasets (a mock-community sequencing run and published 16S rRNA metagenomic data), we illustrate the strengths of SHAMAN in quickly performing a complete metataxonomic analysis.

Conclusions: With SHAMAN we aim to provide the scientific community with a platform that simplifies reproducible quantitative analysis of metagenomic data.
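The toy example below only illustrates the kind of group contrast SHAMAN automates, comparing per-taxon relative abundances between two groups with a simple rank test; it is not SHAMAN's actual statistical model, and the count table is made up.

```python
# Toy illustration of comparing taxon abundances between two groups. This is
# NOT SHAMAN's statistical model; a plain rank test on relative abundances
# stands in for it, and the counts are invented.
import numpy as np
from scipy.stats import mannwhitneyu

taxa = ["Bacteroides", "Prevotella", "Faecalibacterium"]
patients = np.array([[120, 30, 10], [150, 25, 12], [130, 40, 8]])  # counts per sample
controls = np.array([[60, 80, 40], [70, 90, 35], [65, 85, 50]])

def relative(counts):
    """Convert raw counts to per-sample relative abundances."""
    return counts / counts.sum(axis=1, keepdims=True)

rel_p, rel_c = relative(patients), relative(controls)
for i, taxon in enumerate(taxa):
    stat, pval = mannwhitneyu(rel_p[:, i], rel_c[:, i])
    print(f"{taxon}: p = {pval:.3f}")  # with 3 samples per group, purely illustrative
```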


Author(s):  
A. V. Vo ◽  
D. F. Laefer ◽  
M. Trifkovic ◽  
C. N. L. Hewage ◽  
M. Bertolotto ◽  
...  

Abstract. The massive amounts of spatio-temporal information often present in LiDAR datasets make their storage, processing, and visualisation computationally demanding. There is an increasing need for systems and tools that support all the spatial and temporal components and the three-dimensional nature of these datasets for effortless retrieval and visualisation. In response to these needs, this paper presents a scalable, distributed database system designed explicitly for retrieving and viewing large LiDAR datasets on the web. The ultimate goal of the system is to provide rapid and convenient access to a large repository of LiDAR data hosted on a distributed computing platform. The system is composed of multiple shared-nothing nodes operating in parallel: each node is autonomous, has a dedicated set of processors and memory, and communicates with the other nodes via an interconnected network. The data management system presented in this paper is implemented on top of Apache HBase, a distributed key-value datastore within the Hadoop ecosystem. HBase is extended with new data encoding and indexing mechanisms to accommodate both the point cloud and the full-waveform components of LiDAR data. The data can be consumed by any desktop or web application that communicates with the data repository using the HTTP protocol; the communication is enabled by a web servlet. In addition to the command line tool used for administration tasks, two web applications are presented to illustrate the types of user-facing applications that can be coupled with the data system.
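The abstract does not detail the new encoding, so the sketch below shows one common way to build spatially clustered row keys for a key-value store such as HBase: quantize X/Y and interleave the bits into a Z-order (Morton) code. The cell size and coordinates are illustrative assumptions, not the authors' scheme.

```python
# Sketch of a generic spatially clustered row key for a key-value store:
# quantize X/Y and interleave the bits (Z-order / Morton code). This is an
# illustration, not the specific encoding developed by the authors.
def interleave_bits(x, y, bits=32):
    """Interleave the low `bits` bits of x and y into a single Morton code."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)
        code |= ((y >> i) & 1) << (2 * i + 1)
    return code

def point_row_key(x_m, y_m, origin=(0.0, 0.0), cell=0.001):
    """Quantize metric coordinates to `cell`-sized bins and return a hex key."""
    xi = int((x_m - origin[0]) / cell)
    yi = int((y_m - origin[1]) / cell)
    return format(interleave_bits(xi, yi), "016x")

# Nearby points get lexicographically close keys, so range scans stay local.
print(point_row_key(315421.337, 234514.221))
print(point_row_key(315421.339, 234514.224))
```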


Author(s):  
K. Yalova ◽  
K. Yashyna ◽  
O. Tarasiyk

The use of automated information systems for geolocation data processing increases the efficiency of controlling and managing freight and passenger traffic. The article presents the results of designing and implementing an automated information system that monitors GPS tracking data in real time, builds routes and sets control points along them, generates system messages about the status of vehicles on a route, and produces reports in response to user requests. The system architecture and interface were designed on the basis of object and functional domain models that take the structural and functional features of the subject area into account. Microservice principles were applied in developing the system architecture: the software is a set of independent services, each running in its own process, implementing a specific piece of business logic and communicating with the other services over the HTTP protocol. The set of services consists of a service for working with GPS data, a service implementing the geolocation data processing functions, and a web application service. The main algorithms of the developed services and their functional features are described in the article, and its figures present the site map and typical web forms of the system, showing the composition of the web pages, the paths between them, and the user interface. The user interface was designed with the quality requirements for graphical web interfaces in mind.
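As a hedged sketch of one processing step such a service needs, the example below decides whether a vehicle's latest GPS fix lies within a control point's radius using the haversine distance; the function names, coordinates, and radius are illustrative, since the article does not publish its code.

```python
# Sketch: check whether a GPS fix falls within a control point's radius.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS-84 points, in metres."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def at_control_point(fix, control_point, radius_m=150.0):
    """True if the GPS fix lies within `radius_m` of the control point."""
    return haversine_m(fix["lat"], fix["lon"],
                       control_point["lat"], control_point["lon"]) <= radius_m

fix = {"lat": 48.4647, "lon": 35.0462}         # vehicle position (illustrative)
checkpoint = {"lat": 48.4650, "lon": 35.0470}  # control point on the route
print(at_control_point(fix, checkpoint))       # -> True (roughly 68 m apart)
```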


2021 ◽  
Author(s):  
Wenxi Gao ◽  
Ishmael Rico ◽  
Yu Sun

People now prefer to follow trends: as time moves on, they can only avoid being left behind by keeping up with its pace. There are many websites for exploring the world, but websites that help creators show the public something new are uncommon. This paper proposes a web application that helps YouTubers by recommending trending video content, since they sometimes have trouble coming up with video topics. Our method consists of four steps: YouTube scraping, data processing, prediction by SVM, and the web page. Users enter their interests in the web app, and the system scrapes the YouTube trending page and processes the data to make a prediction. We ran experiments on different data and evaluated the accuracy of our method. The results show that the method is feasible, so people can use it to obtain their own recommendations.
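A minimal sketch of the "prediction by SVM" step using scikit-learn: a linear SVM over TF-IDF features of video titles. The tiny training set, labels, and categories are invented; the paper's actual features and scraped data are not reproduced here.

```python
# Sketch: classify video titles into topic categories with a linear SVM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

titles = [
    "I tried the viral pasta recipe",         # cooking
    "Ranking every new phone of the year",    # tech
    "Speedrunning the new indie game",        # gaming
    "Budget smartphone camera test",          # tech
    "One-pan dinner ideas for the week",      # cooking
    "Beating the final boss without damage",  # gaming
]
labels = ["cooking", "tech", "gaming", "tech", "cooking", "gaming"]

model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(titles, labels)

user_interest = ["cheap phone review and comparison"]
print(model.predict(user_interest))  # expected: ['tech']
```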


2019 ◽  
Vol 1 ◽  
pp. 1-1
Author(s):  
Mátyás Gede

Abstract. Maps created before the 17th century often have distortions so large that forcing them into any modern map projection by georeferencing is pointless. At a local scale, however, they preserve the spatial relationships between map objects, so with an appropriate set of control points and local interpolation it is possible to define a fairly accurate correspondence between an old map and a modern one.

Additionally, researchers in recent decades have compiled long lists of the settlements shown on these maps, often without any geometric information but matching most places to present-day settlements.

The author developed a web application to help geocode these lists and, at the same time, to create an accurate georeferencing of the corresponding old maps. The tool displays the old map and a recent web map side by side, without forcing the projection of the web map onto the old one. The user can load settlement lists and perform bulk geocoding based on present-day names. The geocoded places appear on the modern map, and any of these points can also be placed on the old map, defining a control point pair. After enough control points have been set, all the other place names can be placed automatically by local interpolation based on the control points. The positions can be refined manually by the user, which also improves the accuracy of the automatic placement.
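A small sketch of the local-interpolation idea under stated assumptions: control point pairs map modern coordinates to old-map image positions, and a thin-plate-spline interpolator places the remaining geocoded settlements. The coordinates are made up, and the web tool's own interpolation method may differ.

```python
# Sketch: place geocoded settlements on an old map using control point pairs
# and thin-plate-spline interpolation. All coordinates are invented.
import numpy as np
from scipy.interpolate import RBFInterpolator

# control points: modern coordinates (lon, lat) -> old-map image coordinates (x, y)
modern = np.array([[19.04, 47.50], [17.91, 47.10], [21.63, 47.53], [20.15, 46.25]])
old_map = np.array([[612.0, 388.0], [498.0, 432.0], [867.0, 375.0], [720.0, 540.0]])

to_old_map = RBFInterpolator(modern, old_map, kernel="thin_plate_spline")

# a geocoded settlement that has not been placed manually yet
settlement = np.array([[20.37, 47.18]])
print(to_old_map(settlement))  # interpolated (x, y) position on the old map
```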


2021 ◽  
Author(s):  
Muneeb Shahid ◽  
Yusuf Sermet ◽  
Ibrahim Demir

Geographic Information Systems (GIS) are available as stand-alone desktop applications as well as web platforms for vector- and raster-based geospatial data processing and visualization. While each approach offers certain advantages, their limitations motivate the development of hybrid systems that increase the productivity of users performing interactive data analytics on multidimensional gridded data. Web-based applications are platform-independent but require the internet to communicate with servers for data management and processing, which raises issues of performance, data integrity, and the handling and transfer of massive multidimensional raster data. Stand-alone desktop applications, on the other hand, can usually function without relying on the internet, but they are platform-dependent, making their distribution and maintenance difficult. This paper presents RasterJS, a hybrid client-side web library for geospatial data processing built on the Progressive Web Application (PWA) architecture to operate seamlessly in both Online and Offline modes. A packaged version of the system, built with the Web Bundles API, is also presented for offline access and distribution. RasterJS uses the latest web technologies supported by modern web browsers, including the Service Workers API, Cache API, IndexedDB API, Notifications API, Push API, and Web Workers API, to bring geospatial analytics capabilities for large-scale raster data to client-side processing. Each of these technologies acts as a component of RasterJS, and together they provide users with a similar experience in both Online and Offline modes when performing geospatial analysis activities such as flow direction calculation with hydro-conditioning, raindrop flow tracking, and watershed delineation. A large-scale watershed analysis case study is included to demonstrate the capabilities and limitations of the library. The framework also has the potential to be used for other raster-processing use cases, including land use, agriculture, soil erosion, transportation, and population studies.
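As an illustration of the flow direction step (RasterJS itself is a JavaScript library; this is a language-agnostic sketch, not its API), the example below computes D8 flow directions on a tiny invented elevation grid.

```python
# Sketch: D8 flow direction on a small elevation grid. Each cell points to
# the neighbour with the steepest downhill drop; grid values are invented.
import numpy as np

# 8 neighbours as (row offset, col offset, direction code)
D8 = [(-1, 0, 64), (-1, 1, 128), (0, 1, 1), (1, 1, 2),
      (1, 0, 4), (1, -1, 8), (0, -1, 16), (-1, -1, 32)]

def d8_flow_direction(dem):
    """Return an array of D8 direction codes (0 where no downhill neighbour)."""
    rows, cols = dem.shape
    out = np.zeros_like(dem, dtype=int)
    for r in range(rows):
        for c in range(cols):
            best_drop, best_code = 0.0, 0
            for dr, dc, code in D8:
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    dist = 1.41421356 if dr and dc else 1.0
                    drop = (dem[r, c] - dem[rr, cc]) / dist
                    if drop > best_drop:
                        best_drop, best_code = drop, code
            out[r, c] = best_code
    return out

dem = np.array([[9.0, 8.0, 7.0],
                [8.0, 6.0, 5.0],
                [7.0, 5.0, 3.0]])
print(d8_flow_direction(dem))  # cells drain toward the low corner (code 0 there)
```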

