Supporting Multi-cloud Model Execution with VLab

Author(s):  
Mattia Santoro ◽  
Paolo Mazzetti ◽  
Nicholas Spadaro ◽  
Stefano Nativi

<p>The VLab (Virtual Laboratory), developed in the context of the European projects ECOPOTENTIAL and ERA-PLANET, is a cloud-based platform that supports environmental scientists in sharing their models. The main challenges addressed by VLab are: (i) minimizing interoperability requirements in the model-porting process (i.e. simplifying as much as possible the publication and sharing of a model for model developers) and (ii) supporting multiple programming languages and environments (it must be possible to port models developed in different programming languages and using an arbitrary set of libraries).</p><p>In this presentation we describe the VLab architecture and, in particular, how it enables a multi-cloud deployment approach and the benefits this brings.</p><p>Deploying VLab on different cloud environments allows model execution where it is most convenient, e.g. depending on the availability of required data (move code to data).</p><p>This was implemented in the web application for Protected Areas, developed by the Joint Research Centre of the European Commission (EC JRC) in the context of the EuroGEOSS Sprint to Ministerial activity and demonstrated at the last GEO-XVI Plenary meeting in Canberra. The web application demonstrates the use of Copernicus Sentinel data to calculate Land Cover and Land Cover change in a set of Protected Areas belonging to different ecosystems. Based on the user's selection of satellite products, the available cloud platforms on which the model can run are presented, along with their data availability for the selected products. 
After the platform selection, the web application uses the VLab APIs to launch the EODESM (Earth Observation Data for Ecosystem Monitoring) model (Lucas and Mitchell, 2017), monitor the execution status, and retrieve the output.</p><p>To date, VLab has been tested on the following cloud platforms: Amazon Web Services, three of the 4+1 Copernicus DIAS platforms (namely ONDA, CREODIAS and Sobloo), and the European Open Science Cloud (EOSC).</p><p>Another scenario enabled by this multi-platform deployment feature is letting users choose the computational platform and use their own credentials to request the needed computational resources. Finally, it is also possible to exploit this feature to benchmark different cloud platforms with respect to their performance.</p><p> </p><p>References</p><p>Lucas, R. and A. Mitchell (2017). "Integrated Land Cover and Change Classifications". The Roles of Remote Sensing in Nature Conservation, pp. 295–308.</p><p> </p>
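The "move code to data" selection described above can be sketched as a ranking of candidate platforms by how much of the user-selected data they already host. This is an illustrative assumption, not the actual VLab logic; the availability sets and scoring rule below are invented for the example.

```python
# Illustrative sketch of "move code to data" platform selection.
# Platform names are real providers mentioned in the abstract, but the
# product availability and scoring rule are invented assumptions,
# NOT the actual VLab implementation.

def rank_platforms(selected_products, platforms):
    """Rank cloud platforms by how many of the selected satellite
    products they already host locally (higher coverage means less
    data movement before running the model)."""
    def coverage(platform):
        return sum(1 for p in selected_products if p in platforms[platform])
    return sorted(platforms, key=coverage, reverse=True)

# Hypothetical availability of Sentinel products per platform.
platforms = {
    "AWS":      {"S2-L1C"},
    "CREODIAS": {"S1-GRD", "S2-L1C", "S2-L2A"},
    "ONDA":     {"S2-L1C", "S2-L2A"},
}

ranking = rank_platforms({"S2-L2A", "S1-GRD"}, platforms)
print(ranking[0])  # platform with the best local data coverage
```

A real client would then call the VLab execution API on the chosen platform and poll the run status, as the web application does.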

2005 ◽  
Vol 5 ◽  
pp. 65-73 ◽  
Author(s):  
D. Haase ◽  
K. Frotscher

Abstract. Only a few studies have attempted to quantify topography-dependent water fluxes or to evaluate retention and reservoir capacities and surface run-off paths within large river basins, because data availability and data quality are critical obstacles to this objective. This becomes most relevant when the water balance has to be calculated in large or transboundary river basins. The advance of space-based earth observation data offers a solution to this information problem. This paper therefore focuses on the weaknesses and strengths of analyzing topography with SRTM (Shuttle Radar Topography Mission) digital height data, and provides techniques for their improved application in river network derivation, floodplain analysis and watershed hydrology in large river basins (>1000 km²). In the analysis, different types of digital elevation models (DEM), digital terrain models (DTM) and land cover classification data (biotope map, Corine Land Cover 1994) have been used. The DHMs are generated from airborne laser scanning (0.5 m), topographic maps (10.0/50.0 m) and SRTM at 30.0 m and 90.0 m spatial resolution. SRTM digital height models are generated by Synthetic Aperture Radar (SAR) and show a high spatial variance in urban areas, regions of dense vegetation canopy, floodplains and water bodies. The study areas are the Elbe basin (Czech Republic, Germany) with its sub-basins and the Saale river basin (Germany; federal states of Saxony-Anhalt, Saxony and Thuringia).


Author(s):  
Humberto Cortés ◽  
Antonio Navarro

With the advent of multitier and service-oriented architectures, the presentation tier is more detached from the rest of the web application than ever. Moreover, complex web applications can have thousands of linked web pages built using different technologies. As a result, the description of navigation maps has become more complex in recent years. This paper presents NMMp, a UML extension that: (i) provides an abstract vision of the navigation structure of the presentation tier of web applications, independently of architectural details or programming languages; (ii) can be automatically transformed into UML-WAE class diagrams, which can be easily integrated with the design of the other tiers of the web application; (iii) encourages the use of architectural and multitier design patterns; and (iv) has been developed according to OMG standards, thus facilitating its use with general purpose UML CASE tools in industry.


Author(s):  
Annisa Dwi Oktavianita ◽  
Hendra Dea Arifin ◽  
Muhammad Dzulfikar Fauzi ◽  
Aulia Faqih Rifa'i

RAM, formerly known simply as memory, is primary memory that allows swift data access without waiting for all data to be processed by the hard disk. Memory is used by every installed application, including web browsers, but browsers' memory usage has often been disappointing. The researchers use a descriptive quantitative approach with observation, central tendency and dispersion methods. Fifteen browsers were chosen at random and tested under low, medium and high loads to obtain their memory usage logs. The researchers then analyze the logs using descriptive statistics to measure the central tendency and dispersion of the data. A standard reference value for web browser memory usage was found: 393.38 MB. The web browser with the lowest memory usage is Flock with 134.67 MB, and the one with the highest memory usage is Baidu with 699.66 MB.
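The central-tendency and dispersion step described above can be sketched with Python's standard `statistics` module. The sample values below reuse figures quoted in the abstract purely as stand-ins; they are not the study's full measurement log.

```python
# Sketch of the descriptive-statistics step: central tendency (mean,
# median) and dispersion (population standard deviation) over
# memory-usage samples. The sample list is illustrative, not the
# study's actual log of 15 browsers under three load levels.
import statistics

def summarize(memory_mb):
    """Return central tendency and dispersion of memory samples (MB)."""
    return {
        "mean": statistics.mean(memory_mb),
        "median": statistics.median(memory_mb),
        "stdev": statistics.pstdev(memory_mb),
    }

samples = [134.67, 393.38, 699.66]  # hypothetical per-browser values
stats = summarize(samples)
print(round(stats["mean"], 2))  # 409.24
```

In the study, such summaries over all browsers and load levels yield the 393.38 MB reference value reported above.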


2021 ◽  
Vol 13 (1) ◽  
pp. 47-52
Author(s):  
Biljana Risteska-Stojkoska ◽  
Hristijan Gjorshevski ◽  
Elizabeta Mitreva

The aim of this paper is to develop a Web application where scholars of the Faculty of Computer Science and Engineering (FINKI) at the University of Ss. Cyril and Methodius can display and share their projects and publications. Visitors can view, search through, and filter the authors, projects and publications, which can be added and edited by the administrators via the administrator panel. In this paper, we first explain the type of system we are building and review similar existing systems, explaining how they work and what they offer. Then, we go through the programming languages and technologies we decided to use to develop this Web application. After that, the development phase follows, where we describe each of the features we implemented. In the Final Product section we show screenshots of how the Web application works and what it looks like. We finish the paper with a conclusion briefly summarizing what we have achieved.


2020 ◽  
Vol 4 (3) ◽  
pp. 582
Author(s):  
Wahyu Mahatma Kurniawan ◽  
Fauziah Fauziah ◽  
Aris Gunaryati

The digital world has had a new impact on all human activities. It is not surprising that everything is moving to digital, since it can make things more efficient and practical. The purpose of this study is to develop a student data administration application to be deployed at a university on an Android-based platform. The design addresses the limitations of the previously developed web application: when users accessed the online academic menu in the web application, they were required to deactivate pop-ups first, which was felt to be inefficient. Based on these problems, the author designs an application for the Android platform that uses a sequential searching algorithm in each program function of the application. System development follows the Waterfall methodology, with the stages Analysis, Design, Coding and Testing. Data collection techniques comprise observation and literature study. MySQL serves as the database for data storage, with PHP and Java as the programming languages for the application interface. The application was tested with White-box testing; the results show 6 valid outcomes across the total decisions, indicating that the algorithm used is quite good and meets the standard.
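The sequential (linear) search the abstract refers to can be sketched as below. The record fields (`nim`, `name`) are assumed for illustration; the actual student-data schema is not given in the abstract.

```python
# Minimal sketch of the sequential (linear) search applied in each
# program function of the app. Field names are hypothetical; the real
# student-data schema is not specified in the abstract.

def sequential_search(records, key, value):
    """Scan records in order and return the first match, or None."""
    for record in records:
        if record.get(key) == value:
            return record
    return None

students = [
    {"nim": "1001", "name": "Andi"},
    {"nim": "1002", "name": "Budi"},
]
match = sequential_search(students, "nim", "1002")
print(match["name"])  # Budi
```

Sequential search is O(n) per lookup, which is adequate for the modest record counts of a per-student administration app.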


Author(s):  
Donatus I. Bayem ◽  
Henry O. Osuagwu ◽  
Chimezie F. Ugwu

A Web portal aggregates an array of information for a target audience and affords a variety of services, including search engines, directories, news, e-mail and chat rooms; portals have evolved to provide a customized gateway to Web information, and a high level of personalization and customization has become possible. The portal concept can be further extended into a sophisticated Web interface that supports a variety of user tasks, serving the information needs of users on the web. A Web-based portal enables marketing to users broadly across a wide variety of interests. The most popular usage of a Web-based aggregate information portal probably refers to the visual and user interface (UI) design of a Web site. This is a crucial aspect, since a visitor is often more impressed by a website's looks and ease of use than by the technologies and techniques used behind the scenes or the operating system that runs on the web server. In other words, it does not matter what technologies are involved if the site is hard to use and easy to forget. This paper explores the factors that must be considered during the design and development of a Web-based aggregate information portal; design, in the context of a Web application, can mean many things. A working Web-based aggregate information portal, kaseremulticoncept, was developed to support various users' task performances. A number of technologies were studied and implemented in this research, including multi-tier architecture, server- and client-side scripting techniques, and technologies such as the PHP programming language and relational databases such as MySQL, Structured Query Language (SQL) and the XAMPP server.


2018 ◽  
Vol 48 (3) ◽  
pp. 84-90 ◽  
Author(s):  
E. A. Lapchenko ◽  
S. P. Isakova ◽  
T. N. Bobrova ◽  
L. A. Kolpakova

It is shown that the application of Internet technologies is relevant to the selection of crop production technologies and the formation of a rational composition of the machine-and-tractor fleet, taking into account the conditions and production resources of a particular agricultural enterprise. The work gives a short description of the web applications “ExactFarming”, “Agrivi” and “AgCommand”, which provide a possibility to select technologies and technical means of soil treatment, and of their functions. “ExactFarming” allows collecting and storing information about temperature, precipitation and weather forecasts in certain areas, keeping records of information about crops, and making technological maps using expert templates. “Agrivi” allows storing and providing access to weather information in the fields with certain crops. It has algorithms to detect and warn about risks related to diseases and pests, and provides economic calculations of crop profitability and crop planning. “AgCommand” allows tracking the position of machinery and equipment in the fields and provides data on the weather situation in order to plan the use of agricultural machinery in the fields. The web applications presented above do not show the relation between the technologies applied and the agro-climatic features of the farm location zone. They do not take into account the phytosanitary conditions of previous years, or the relief and contours of the fields, while drawing up technological maps or selecting the machine-and-tractor fleet.
The Siberian Physical-Technical Institute of Agrarian Problems of the Siberian Federal Scientific Center of AgroBioTechnologies of the Russian Academy of Sciences has developed a software complex, PIKAT, for supporting machine agrotechnologies for the production of spring wheat grain at an agricultural enterprise, on the basis of which there is a plan to develop a web application that will consider all the main factors limiting the yield of cultivated crops.


2020 ◽  
Author(s):  
Darshak Mota ◽  
Neel Zadafiya ◽  
Jinan Fiaidhi

Java Spring is an application development framework for enterprise Java. It is an open source platform used to develop robust Java applications easily. Spring applications can also follow the MVC structure. The MVC architecture is based on the Model, View and Controller pattern, where the project structure or code is divided into three sections, which helps categorize code files and other files in an organized form. Model, View and Controller code are interrelated and often pass and fetch information from each other without having to put all the code in a single file, which makes testing the program easier. Testing an application during and after development is an integral part of the Software Development Life Cycle (SDLC). Different techniques have been used to test a web application developed using the Java Spring MVC architecture, and this paper compares the results of the three different techniques used to test the web application.


2020 ◽  
Vol 3 (1) ◽  
pp. 78
Author(s):  
Francis Oloo ◽  
Godwin Murithi ◽  
Charlynne Jepkosgei

Urban forests contribute significantly to the ecological integrity of urban areas and the quality of life of urban dwellers through air quality control, energy conservation, improvement of urban hydrology, and regulation of land surface temperatures (LST). However, urban forests are under threat from human activities, natural calamities, and bioinvasion, which continually decimate forest cover. Few studies have used fine-scaled Earth observation data to understand the dynamics of tree cover loss in urban forests and the sustainability of such forests in the face of an increasing urban population. The aim of this work was to quantify the spatial and temporal changes in urban forest characteristics and to assess the potential drivers of such changes. We used data on tree cover, the normalized difference vegetation index (NDVI), and land cover change to quantify tree cover loss and changes in vegetation health in urban forests within the Nairobi metropolitan area in Kenya. We also used land cover data to visualize the potential link between tree cover loss and changes in land use characteristics. Of approximately 6600 hectares (ha) of forest land, 720 ha were lost between 2000 and 2019, representing about an 11% loss in 20 years. In six of the urban forests, the trend of loss was positive, indicating a continuing disturbance of urban forests around Nairobi. Conversely, there was a negative trend in the annual mean NDVI values for each of the forests, indicating a potential deterioration of vegetation health in the forests. A preliminary visual inspection of high-resolution imagery in sample areas of tree cover loss showed that the main drivers of loss are the conversion of forest lands to residential areas and farmlands, the implementation of big infrastructure projects that pass through the forests, and the extraction of timber and other resources to support urban developments. 
The outcome of this study reveals the value of Earth observation data in monitoring urban forest resources.
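The two quantities the study tracks, per-pixel NDVI and the sign of its annual trend, can be sketched as below. The band values and annual means are invented stand-ins; the study used actual tree-cover and NDVI products for 2000-2019.

```python
# Sketch of NDVI and its annual trend. All numbers below are
# illustrative assumptions, not the study's measurements.

def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel."""
    return (nir - red) / (nir + red)

def trend_slope(years, values):
    """Ordinary least-squares slope of annual means.
    A negative slope indicates declining vegetation health."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, values))
    den = sum((x - mean_x) ** 2 for x in years)
    return num / den

annual_ndvi = [0.72, 0.70, 0.69, 0.65]  # hypothetical forest means
slope = trend_slope([2000, 2006, 2012, 2019], annual_ndvi)
print(slope < 0)  # True: vegetation health declining
```

The negative slope corresponds to the "negative trend in the annual mean NDVI values" reported for each forest.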


2021 ◽  
Vol 13 (2) ◽  
pp. 50
Author(s):  
Hamed Z. Jahromi ◽  
Declan Delaney ◽  
Andrew Hines

Content is a key influencing factor in Web Quality of Experience (QoE) estimation. A web user's satisfaction can be influenced by how long it takes to render and visualize the visible parts of the web page in the browser. This is referred to as the Above-the-fold (ATF) time. SpeedIndex (SI) has been widely used to estimate the perceived loading speed of ATF content and as a proxy metric for Web QoE estimation. Web application developers have been actively introducing innovative interactive features, such as animated and multimedia content, aiming to capture users' attention and improve the functionality and utility of web applications. However, the literature shows that, for websites with animated content, the ATF time estimated using state-of-the-art metrics may not accurately match the completed ATF time as perceived by users. This study introduces a new metric, Plausibly Complete Time (PCT), that estimates ATF time as perceived by users for websites with and without animations. PCT can be integrated with SI and web QoE models. The accuracy of the proposed metric is evaluated on two publicly available datasets. The proposed metric holds a high positive Spearman's correlation (rs=0.89) with the perceived ATF reported by users for websites with and without animated content. This study demonstrates that using PCT as a KPI in QoE estimation models can improve the robustness of QoE estimation in comparison to using the state-of-the-art ATF time metric. Furthermore, experimental results showed that estimating SI using PCT improves the robustness of SI for websites with animated content. PCT estimation allows web application designers to identify where poor design has significantly increased ATF time and to refactor their implementation before it impacts the end-user experience.
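The Spearman rank correlation used to evaluate PCT against user-perceived ATF time can be sketched as below. PCT itself is not reproduced here; the paired values are invented stand-ins for (estimated, perceived) samples, and ties are not handled for brevity.

```python
# Sketch of Spearman's rank correlation, the evaluation statistic
# quoted above (rs = 0.89). The sample pairs are invented; tie
# handling is omitted for brevity.

def ranks(values):
    """Ranks starting at 1 (assumes no ties)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman's rs via the rank-difference formula (no ties)."""
    n = len(x)
    rx, ry = ranks(x), ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

estimated = [1.2, 2.5, 0.8, 3.1]  # hypothetical PCT estimates (s)
perceived = [1.0, 2.8, 0.9, 3.0]  # hypothetical user-reported ATF (s)
print(spearman(estimated, perceived))  # 1.0: identical rank ordering
```

A value near 1, like the study's rs = 0.89, indicates the metric orders pages almost exactly as users do.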

