Technical Considerations for Building a Landslide Tracker Mobile App

Author(s):  
Ramesh Guntha
Maneesha Vinodini Ramesh

<p>Substantially complete landslide inventories aid accurate modelling of a region&rsquo;s landslide susceptibility and landslide forecasting. Recording landslides soon after they occur is important, as their traces can be quickly erased (e.g., the landslide may be removed by people or obscured by erosion and vegetation). In this paper, we present the technical software considerations that went into building the Landslide Tracker app, which supports the collection of landslide information by non-technical local citizens, trained volunteers, and experts to create more complete inventories on a real-time basis through a crowdsourcing model. The tracked landslide information is available for anyone across the world to view. This app is available on the Google Play Store for free, and at http://landslides.amrita.edu, with software conceived and developed by Amrita University in the context of the UK NERC/FCDO funded LANDSLIP research project (http://www.landslip.org/).</p><p>The three technical themes we discuss in this paper are (i) security, (ii) performance, and (iii) network resilience. (i) Security considerations include authentication, authorization, and client/server-side enforcement. Authentication allows only registered users to record and view landslides, whereas authorization protects the data from unauthorized access. For example, landslides created by one user are not editable by others, and no user should be able to delete landslides. This validation is enforced in the client-side software (mobile and web apps) and also in the server-side software, to prevent both unintentional and intentional illegitimate access. (ii) Performance considerations include designing high-performance data structures, the mobile database, client-side caching, server-side caching, cache synchronization, and push notifications. The database is designed to ensure the best performance without sacrificing data integrity. Read-heavy data is cached in memory on the server so that it can be served with very low latency. Similarly, data, once fetched, is cached in memory on the app so that it can be reused without a repeated call to the server each time the user visits a screen. The data also persists in the mobile database so the app can load faster when reopened. A cache-synchronization mechanism, consisting of push notifications and incremental data pulls, prevents the cached data from becoming stale as new data arrives in the database. (iii) Network resilience is achieved with the help of local storage on the app, which allows landslides to be recorded even when there is no internet connection; the app automatically pushes the updates to the server as soon as connectivity resumes. During performance testing, we observed the time taken to load 2000 landslides drop by more than a factor of three between the no-cache and cache modes.</p><p>The Landslide Tracker app was released during the 2020 monsoon season, and more than 250 landslides were recorded through the app across India and the world.</p>
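To make the caching and network-resilience ideas concrete, the following is a minimal Java sketch. It is an illustration only, not the app's actual code: the Server interface and names such as fetchSince and flushPending are assumptions introduced for the example. It shows an in-memory cache with watermark-based incremental pulls and an offline queue that is flushed once connectivity resumes:

```java
import java.time.Instant;
import java.util.ArrayDeque;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;

/** Minimal sketch: client-side cache + incremental sync + offline queue. */
public class LandslideCache {

    /** Simplified record; the real app stores many more fields. */
    record Landslide(String id, double lat, double lon, Instant updatedAt) {}

    /** Hypothetical server API, introduced only for this example. */
    interface Server {
        List<Landslide> fetchSince(Instant watermark); // incremental pull
        void upload(Landslide l);                      // throws when offline
    }

    private final Map<String, Landslide> cache = new ConcurrentHashMap<>();
    private final Queue<Landslide> pending = new ArrayDeque<>(); // offline queue
    private Instant watermark = Instant.EPOCH;                   // last sync point
    private final Server server;

    LandslideCache(Server server) { this.server = server; }

    /** Screens read from memory; no server call on every visit. */
    Landslide get(String id) { return cache.get(id); }

    /** Incremental pull, e.g. triggered by a push notification:
     *  only records newer than the watermark are fetched. */
    void syncDown() {
        for (Landslide l : server.fetchSince(watermark)) {
            cache.put(l.id(), l);
            if (l.updatedAt().isAfter(watermark)) watermark = l.updatedAt();
        }
    }

    /** Record locally first; queue the upload if there is no connection. */
    void record(Landslide l) {
        cache.put(l.id(), l);
        try {
            server.upload(l);
        } catch (RuntimeException offline) {
            pending.add(l); // retried by flushPending() when connectivity resumes
        }
    }

    /** Called when network connectivity is restored. */
    void flushPending() {
        while (!pending.isEmpty()) {
            try {
                server.upload(pending.peek());
                pending.poll();
            } catch (RuntimeException stillOffline) {
                return; // try again on the next connectivity event
            }
        }
    }
}
```

A persistent variant would back both the cache and the queue with the mobile database (e.g., SQLite) so queued records survive app restarts, matching the behaviour the abstract describes.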

2021
Author(s):  
Balaji Hariharan
Ramesh Guntha

<p>With the <em>Landslide Tracker</em> mobile app's launch to track landslides through a crowdsourcing model during the 2020 monsoon season, we learned several important lessons that may help us improve data quality, volunteer participation, and participation from institutions. The <em>Landslide Tracker</em> mobile application allows users to track landslides and record details such as GPS location, date and time of occurrence, images, type, material, size, impact, area, geology, geomorphology, and comments. This app is available on the Google Play Store for free, and at http://landslides.amrita.edu, with software conceived and developed by Amrita University in the context of the UK NERC/FCDO funded LANDSLIP research project (http://www.landslip.org/). The <em>Landslide Tracker</em> app was released during the 2020 monsoon season, and more than 250 landslides were recorded through the app across India and the world.</p><p>Due to the nature of crowdsourcing, we have seen test entries, duplicate entries, and entries with apparent mistakes, such as an incorrect location. In many cases, these entries were deleted by the administrator through proactive verification. To sustain the removal of invalid entries as usage grows, we can allow users to flag a landslide for verification; the administrator can then remove invalid entries, or approach the original contributor to update the data, with minimal effort. Currently, it takes under three minutes to record a landslide. To reduce this time further, one proposal is a single-page form that records only the date, location, images, and a few questions. To improve volunteer participation in contributing and validating landslide entries, we can implement digital rewards such as points, badges, titles, and leaderboards, and additionally allow users to like, comment on, and share landslide entries to improve engagement. To improve the participation of universities, disaster management authorities, district authorities, and other governmental and non-governmental agencies in contributing and using landslide information, we can implement institutional management functionality. This would allow an institution to configure staff and manager users; the manager could review, update, and delete entries from the team, get reports on the contributions of the staff, and download and share the landslides contributed by the whole institution.</p>


2020
Author(s):  
Stephan Kindermann
Maria Moreno

<p>We will present a new service designed to assist users of model data in running their analyses on world-class supercomputers. Growing data volumes and model complexity can be challenging for data users with limited access to high-performance computers or low network bandwidth. To avoid heavy data transfers, demanding memory requirements, and slow sequential processing, the data science community is rapidly moving from classical client-side frameworks to new server-side ones. Three simple steps enable server-side users to compute in parallel and near the data: (1) discover the data you are interested in, (2) perform your analyses and visualizations on the supercomputer, and (3) download the outcome. A server-side service is especially beneficial for exploiting the high-volume data collections produced in the framework of internationally coordinated model intercomparison projects like CMIP5/6 and CORDEX and disseminated via the Earth System Grid Federation (ESGF) infrastructure. To facilitate the adoption of server-side capabilities by ESGF users, the infrastructure project of the European Network for Earth System Modelling (IS-ENES3) is now opening its high-performance resources and data pools at the CMCC (Italy), JASMIN (UK), IPSL (France), and DKRZ (Germany) supercomputing centers. The data pools give access to results from several models on the same site, and the data and resources are maintained locally by the hosts. Moreover, our server-side framework not only speeds up the workload but also reduces errors arising from file-format conversions, standardization, and software dependencies and upgrades. The service is funded by the EU Commission and is free of charge. Find more information here: https://portal.enes.org/data/data-metadata-service/analysis-platforms. Demos and tutorials have been created by a dedicated user support team. We will present several use cases showing how easy and flexible it is to use our analysis platforms for multi-model comparisons of CMIP5/6 and CORDEX data.</p>


2019
Author(s):  
Ram P Rustagi
Viraj Kumar

In the 21st century, the internet has become an essential part of everyday tasks, including banking, interacting with government services, education, entertainment, and text/voice/video communication. Individuals access the internet using client-side applications such as a browser or an app on their mobile phone or laptop/desktop. This client-side application communicates with a server-side application, typically running on a web server, which in turn may interact with other business applications. The underlying protocol is typically HTTP [1] running on top of the TCP/IP protocol [2][3]. A typical web server supports a large number (hundreds or thousands) of concurrent TCP connections. The most commonly deployed web servers today are Apache server [4], Nginx [5], and Microsoft Internet Information Server (IIS) [6]. Nginx is mostly used on Linux, and IIS runs only on Windows. In contrast, the Apache web server (which is almost as old as the web itself) is supported on all platforms (Linux, Windows, MacOS, etc.). In its initial release in 1995, Apache server could serve only a few concurrent clients, but its current release (2.4.41) can support a huge number of concurrent clients. In this article (as well as Part II, which will follow), we present a simplified view of this evolution that nevertheless explains how current web servers manage such high levels of concurrency. To do so, we delve into socket programming, which is at the heart of managing TCP connections, and examine the key role it plays in delivering high performance. We have studied both transport-layer protocols, i.e., TCP [2] and UDP [7], in detail in the last few articles, and have developed a basic understanding of the working of the transport layer. This is a communication-enabling layer used by applications to exchange application-level data. Simple working examples of applications using TCP (providing reliable delivery) and UDP (providing best-effort delivery) socket programming are provided in [8]. In this article, however, we discuss increasingly complex levels of socket programming, from simple socket connections to the complex connection management that is necessary to attain high TCP performance. We focus on TCP socket programming only: UDP provides only best-effort delivery, and the socket implementation does not significantly affect application communication performance.
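As a minimal illustration of the accept-and-serve pattern at the heart of such servers, here is a sketch in Java (rather than the C sockets API, purely for brevity, and far simpler than the process-, thread-, and event-driven models production servers like Apache and Nginx actually use). The server accepts TCP connections in a loop and hands each one to a fixed worker pool so that many clients are served concurrently:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/** Minimal TCP server: accept() loop hands each connection to a worker pool. */
public class PooledServer {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(64); // bounds concurrency
        try (ServerSocket listener = new ServerSocket(8080)) {   // bind + listen
            while (true) {
                Socket conn = listener.accept();                 // one TCP connection
                pool.execute(() -> handle(conn));                // serve concurrently
            }
        }
    }

    /** Echo one line back to the client, then close the connection. */
    private static void handle(Socket conn) {
        try (conn;
             BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
             PrintWriter out = new PrintWriter(conn.getOutputStream(), true)) {
            String line = in.readLine();
            if (line != null) out.println("echo: " + line);
        } catch (Exception e) {
            // connection-level errors should not kill the server
        }
    }
}
```

The fixed pool keeps the server responsive under load, but a thread per active connection does not scale to thousands of mostly idle connections; that pressure is one motivation for the more advanced connection-management designs this article series works towards.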


2010
Vol 9 (2)
pp. 47-52
Author(s):  
Samiksha Shukla
D. K. Mishra
Kapil Tiwari

Due to the complex infrastructure of web applications, the response time for different client service requests can be significantly large. Simple Object Access Protocol (SOAP) is a recent and emerging technology in the field of web services, which aims to replace traditional methods of remote communication. The basic aim in designing SOAP was to increase interoperability among a broad range of programs and environments: SOAP allows applications written in different languages and installed on different platforms to communicate with each other over the network. Web services demand security, high performance, and extensibility. SOAP provides various benefits for interoperability, but we pay a price for them in performance and security. This makes SOAP a poor choice for high-performance web services. In this paper, we present a new approach that enables multi-level caching at the client side as well as the server side. We describe an implementation based on the Apache Java SOAP client, which gives radically enhanced performance.
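The core idea of client-side response caching can be sketched in a few lines of Java. This is an illustration under assumptions, not the paper's implementation: the cache is keyed on the serialized request envelope, and the transport function stands in for the actual Apache SOAP round trip.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

/** Sketch of client-side response caching for idempotent SOAP calls.
 *  Names and structure are illustrative assumptions, not the paper's code. */
public class CachingSoapClient {
    private final Map<String, String> responseCache = new ConcurrentHashMap<>();
    private final Function<String, String> transport; // performs the real SOAP round trip

    public CachingSoapClient(Function<String, String> transport) {
        this.transport = transport;
    }

    /** Key the cache on the full request envelope: identical requests
     *  (same operation and parameters) reuse the cached response, skipping
     *  XML serialization and the network round trip entirely. */
    public String call(String requestEnvelope) {
        return responseCache.computeIfAbsent(requestEnvelope, transport);
    }

    /** Stale entries must be evicted when the underlying data changes. */
    public void invalidate(String requestEnvelope) {
        responseCache.remove(requestEnvelope);
    }
}
```

The same memoization idea applies on the server side, where parsed requests can be mapped to prepared responses; in both levels the hard part is invalidation when the underlying data changes.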


Author(s):  
Shankar Chaudhary

Despite being in a nascent stage, m-commerce is gaining momentum in India. The explosive growth of smartphone users has made India a much-loved business destination for the whole world. India's internet user base is becoming the second largest in the world after China, surpassing the US, which opens up plenty of e-commerce opportunities, not only for Indian players but for offshore players as well. Mobile commerce is likely to overtake e-commerce in the next few years, spurred by the continued uptrend in online shopping and the increasing use of mobile apps. The optimism comes from the fact that the number of people accessing the Internet through their mobiles jumped 33 per cent in 2014 to 173 million and is expected to grow 21 per cent year-on-year till 2019 to touch 457 million. e-Commerce brands are eyeing the mobile app segment, developing user-friendly and secure mobile apps that offer a risk-free and easy shopping experience to their users. Budget 4G smartphones, coupled with affordable plans, can very well drive 4G growth in India.


Author(s):  
Kostyantyn Kharchenko

An approach to organizing the execution of automated calculations using web services (in particular, REST services) is reviewed. The proposed solution simplifies the introduction of new functionality into applied systems built according to service-oriented and microservice architecture principles. Its main idea is the maximal separation of server-side and client-side logic: clients state abstract computation goals without any dependencies on existing applied services. The computations are organized using a centralized scheme (known as orchestration), and a knowledge base holds the set of rules used to build, in multiple steps, a concrete computational scenario from the abstract goal. A task-execution subsystem is included in the software architecture of the applied system. This subsystem is composed of a service that processes incoming execution requests, a service registry, and an orchestration service. Clients send requests to the execution subsystem without any references to the real-world services to be called. The service registry searches the knowledge base for the matching input request template and then looks up the abstract operation description for that template. Each abstract operation may already have an implementation in the form of a workflow composed of invocations of real applied services' operations. If no corresponding workflow exists in the database, one can be synthesized dynamically from the input and output data and the functionality descriptions of the abstract operation and the registered applied services. The workflows are executed by the orchestrator service. Thus, new functions can be added on the client side without any changes on the server side; and vice versa, newly added services can affect the execution of the calculations without updating the clients.
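The request flow can be sketched compactly. In this illustrative Java sketch (the interfaces and names are assumptions, not an API prescribed by the paper), the knowledge base maps an abstract goal to a workflow, and the orchestrator runs the steps in sequence, each step standing in for an invocation of a real applied service:

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;

/** Sketch of the execution subsystem: an abstract goal is matched against
 *  a knowledge base and resolved to a workflow of applied-service calls. */
public class Orchestrator {

    /** One step of a concrete workflow: an operation of a real applied service. */
    interface Step extends Function<Map<String, Object>, Map<String, Object>> {}

    /** Knowledge base: request template -> workflow implementing the abstract operation. */
    private final Map<String, List<Step>> knowledgeBase;

    Orchestrator(Map<String, List<Step>> knowledgeBase) {
        this.knowledgeBase = knowledgeBase;
    }

    /** Clients send only the abstract goal plus inputs; no service references. */
    Map<String, Object> execute(String abstractGoal, Map<String, Object> input) {
        List<Step> workflow = knowledgeBase.get(abstractGoal);
        if (workflow == null) {
            // In the full system, a missing workflow would be synthesized
            // dynamically from operation descriptions; out of scope here.
            throw new IllegalArgumentException("no workflow for: " + abstractGoal);
        }
        Map<String, Object> data = input;
        for (Step step : workflow) {  // centralized execution (orchestration)
            data = step.apply(data);  // each step invokes an applied service
        }
        return data;
    }
}
```

Because clients name only the abstract goal, new client-side functions need no server changes, and newly registered services change the executed workflow without any client update, which is the decoupling the approach is after.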


2020
Vol 16
Author(s):  
Kirubanandam Grace Pavithra
Vasudevan Jaikumar
Ponnusamy Senthil Kumar
PanneerSelvam SundarRajan

Background: Many antibiotics are widely used as medication based on their distinctive features. Among them, sulphonamides are commonly used; however, their recalcitrant nature makes them difficult to dispose of. Hence, their interaction with the environment and the analytical techniques for detecting them require considerable attention globally. Objective: This review aims to provide a detailed discussion of the environmental and human-health behaviour of sulphonamides and the corresponding analytical techniques. Methods: Results and discussion were extracted from technical journals and books published by researchers from all over the world, and the cited bibliographic references were systematically investigated to extract information relevant to the proposed work. Results: In this review, determination techniques for sulphonamides, such as UV spectroscopy, enthalpimetry, immunosensors, chromatography, chemiluminescence, photoinduced fluorometric determination, and capillary electrophoresis, are discussed in detail. Among them, high-performance liquid chromatography (HPLC) and UV spectroscopy are effective and extensively used for screening sulphonamides. Conclusion: Knowing the quantity and behaviour of sulphonamides in aqueous solution is essential for choosing the suitable wastewater treatment; hence, appropriate, high-precision, and feasible screening techniques are necessary, a choice this review aims to support.


2018
Vol 199
pp. 09001
Author(s):  
Renaud Franssen
Serhan Guner
Luc Courard
Boyan Mihaylov

The maintenance of large, aging infrastructure across the world creates serious technical, environmental, and economic challenges. Ultra-high-performance fibre-reinforced concretes (UHPFRC) are a new generation of materials with outstanding mechanical properties as well as very high durability due to their extremely low permeability. These properties open new horizons for the sustainable rehabilitation of aging concrete structures. Since UHPFRC is a young and evolving material, codes are still either lacking or incomplete, with recent design provisions proposed in France, Switzerland, Japan, and Australia. However, engineers and public agencies around the world need resources to study, model, and rehabilitate structures using UHPFRC. In an effort to contribute to the efficient use of this promising material, this paper presents a new numerical modelling approach for UHPFRC-strengthened concrete members. The approach is based on the Diverse Embedment Model (DEM) within the global framework of the Disturbed Stress Field Model, a smeared rotating-crack formulation for 2D modelling of reinforced concrete structures. This study presents an adapted version of the DEM that captures the behaviour of UHPFRC using a small number of input parameters. The model is validated against tension tests from the literature and is then used to model UHPFRC-strengthened elements. The paper discusses the formulation of the model and provides validation studies against various tests of beams, columns, and walls from the literature, demonstrating the effectiveness of the proposed modelling approach.


2003
Vol 3 (2)
pp. 170-173
Author(s):
Karthik Ramani
Abhishek Agrawal
Mahendra Babu
Christoph Hoffmann

New and efficient paradigms for web-based collaborative product design in a global economy will be driven by increased outsourcing, increased competition, and pressure to reduce product development time. We have developed a collaborative shape design system based on a three-tier (client-server-database) architecture: Computer Aided Distributed Design and Collaboration (CADDAC). CADDAC has a centralized geometry kernel and constraint solver. The server side provides support for solid modeling, constraint-solving operations, data management, and synchronization of clients. The client side performs real-time creation, modification, and deletion of geometry over the network. In order to keep the clients thin, many computationally intensive operations are performed at the server; only the graphics rendering pipeline operations are performed on the client side. A key contribution of this work is a flexible architecture that decouples Application Data (Model), Controllers, Viewers, and Collaboration. This decoupling makes new features modular and easy to develop and manage.
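That decoupling can be pictured as a set of narrow interfaces. The following Java sketch is purely illustrative (the paper does not publish this API; all names are assumptions): the thin client sees only tessellated geometry to render, while model edits and client synchronization live behind separate interfaces.

```java
/** Sketch of the decoupling described above: Model, Controller, Viewer, and
 *  Collaboration as independent interfaces. Names mirror the paper's concepts,
 *  but the interfaces themselves are illustrative assumptions. */
public interface ShapeModel {                 // Application Data (server side)
    void applyOperation(String featureId, double[] params);
    byte[] tessellate(String featureId);      // mesh for client-side rendering only
}

interface Controller {                        // translates user actions into model edits
    void onUserEdit(ShapeModel model, String featureId, double[] params);
}

interface Viewer {                            // thin client: rendering pipeline only
    void render(byte[] tessellation);
}

interface Collaboration {                     // synchronizes connected clients
    void broadcastChange(String featureId);
}
```

Keeping the Viewer dependent only on tessellated output is what keeps the clients thin, and adding a new feature touches the Model and Controller without disturbing rendering or collaboration code.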

