remote computation
Recently Published Documents


TOTAL DOCUMENTS: 13 (FIVE YEARS: 3)

H-INDEX: 5 (FIVE YEARS: 0)

2020 ◽  
Vol 10 (19) ◽  
pp. 6866
Author(s):  
Arnauld Nzegha Fountsop ◽  
Jean Louis Ebongue Kedieng Fendji ◽  
Marcellin Atemkeng

Deep learning has shown promising results in plant disease detection, fruit counting, and yield estimation, and is gaining increasing interest in agriculture. Deep learning models are generally based on several millions of parameters that generate exceptionally large weight matrices. These matrices require large memory and computational power for training, testing, and deployment. Unfortunately, such requirements make it difficult to deploy the models on low-cost devices with limited resources that are available in the field. In addition, poor or absent connectivity on farms does not allow remote computation. One approach to saving memory and speeding up processing is to compress the models. In this work, we tackle these resource limitations by compressing state-of-the-art models frequently used in image classification. To this end, we apply model pruning and quantization to LeNet5, VGG16, and AlexNet. The original and compressed models were applied to the plant seedling classification benchmark (V2 Plant Seedlings Dataset) and the Flavia database. Results reveal that it is possible to compress the size of these models by a factor of 38 and to reduce the FLOPs of VGG16 by a factor of 99 without considerable loss of accuracy.
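The two compression techniques named in the abstract can be illustrated with a minimal, framework-free sketch: magnitude pruning zeroes out small weights, and uniform quantization maps floats onto 8-bit integer levels. The weight values below are illustrative placeholders, not taken from the paper, whose experiments used full LeNet5/VGG16/AlexNet models in a deep learning framework.

```python
# Toy sketch of magnitude pruning and uniform 8-bit quantization.
# Weight values are illustrative, not from the paper.

def prune(weights, threshold):
    """Set weights with small magnitude to zero."""
    return [0.0 if abs(w) < threshold else w for w in weights]

def quantize(weights, bits=8):
    """Uniformly map float weights onto 2**bits integer levels."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (2 ** bits - 1) or 1.0  # avoid zero scale
    return [round((w - lo) / scale) for w in weights], lo, scale

def dequantize(q, lo, scale):
    """Recover approximate float weights from the integer levels."""
    return [lo + v * scale for v in q]

weights = [0.91, -0.02, 0.44, 0.003, -0.78, 0.05]
pruned = prune(weights, threshold=0.1)  # small weights become exactly zero
q, lo, scale = quantize(pruned)         # each weight now fits in one byte
restored = dequantize(q, lo, scale)     # close to the pruned values
```

Pruned weights compress well because zeros can be stored sparsely, and 8-bit storage alone cuts memory fourfold versus 32-bit floats; the reported factor of 38 combines both effects across whole models.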


2020 ◽  
Author(s):  
Carsten Ehbrecht ◽  
Stephan Kindermann ◽  
Ag Stephens ◽  
David Huard

The Web Processing Service (WPS) is an OGC interface standard for providing processing tools as a web service. The WPS interface standardizes the way processes and their inputs/outputs are described, how a client can request the execution of a process, and how the output from a process is handled.

Birdhouse tools enable you to build your own customised WPS compute service in support of remote climate data analysis.

Birdhouse offers you:

- A Cookiecutter template to create your own WPS compute service.
- An Ansible script to deploy a full-stack WPS service.
- A Python library, Birdy, suitable for Jupyter notebooks to interact with WPS compute services.
- An OWS security proxy, Twitcher, to provide access control to WPS compute services.

Birdhouse uses the PyWPS Python implementation of the Web Processing Service standard. PyWPS is part of the OSGeo project.

The Birdhouse tools are used by several partners and projects. A Web Processing Service will be used in the Copernicus Climate Change Service (C3S) to provide subsetting operations on climate model data (CMIP5, CORDEX) as a service to the Climate Data Store (CDS). The Canadian non-profit organization Ouranos is using a Web Processing Service to provide climate index calculations that can be run remotely from Jupyter notebooks.

In this session we want to show how a Web Processing Service can be used with the Freva evaluation system. Freva plugins can be made available as processes in a Web Processing Service. These plugins can be run with a standard WPS client from a terminal or from Jupyter notebooks with remote access to the Freva system.

We want to emphasise the integration aspects of the Birdhouse tools: supporting existing processing frameworks by adding a standardized web service for remote computation.

Links:

- http://bird-house.github.io
- http://pywps.org
- https://www.osgeo.org/
- http://climate.copernicus.eu
- https://www.ouranos.ca/en
- https://freva.met.fu-berlin.de/
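To make the standardization concrete, here is a stdlib-only sketch of the XML payload a WPS 1.0.0 Execute request carries. In practice a client such as Birdy builds and sends this for you; the process identifier "subset" and its inputs below are hypothetical examples, not actual C3S or Freva process names.

```python
# Sketch of a WPS 1.0.0 Execute request body built with the standard
# library. The process id "subset" and its inputs are hypothetical.
import xml.etree.ElementTree as ET

WPS = "http://www.opengis.net/wps/1.0.0"
OWS = "http://www.opengis.net/ows/1.1"
ET.register_namespace("wps", WPS)
ET.register_namespace("ows", OWS)

def execute_request(process_id, inputs):
    """Serialize a minimal wps:Execute document for the given process."""
    root = ET.Element(f"{{{WPS}}}Execute",
                      {"service": "WPS", "version": "1.0.0"})
    ET.SubElement(root, f"{{{OWS}}}Identifier").text = process_id
    data_inputs = ET.SubElement(root, f"{{{WPS}}}DataInputs")
    for key, value in inputs.items():
        inp = ET.SubElement(data_inputs, f"{{{WPS}}}Input")
        ET.SubElement(inp, f"{{{OWS}}}Identifier").text = key
        data = ET.SubElement(inp, f"{{{WPS}}}Data")
        ET.SubElement(data, f"{{{WPS}}}LiteralData").text = value
    return ET.tostring(root, encoding="unicode")

xml_payload = execute_request("subset",
                              {"variable": "tas", "time": "2000/2010"})
```

Because every WPS server accepts this same document shape, one client library can drive PyWPS-based Birdhouse services and Freva-backed processes alike.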


Author(s):  
Dr. Bhalaji N. ◽  
Shanmuga Skandh Vinayak E

Ever since parallel processing and remote computation became feasible, cloud computing has been at the peak of its popularity. Although cloud computing is effective and feasible, using the cloud for frequent operations may not be the most optimal solution. Hence the concept of FOG computing proves to be more optimal and efficient. In this paper, we propose a solution that improves the FOG computing concept of decentralization by implementing a secure distributed file system utilizing IPFS and the Ethereum blockchain technology. Our proposed system has proved to be efficient by successfully distributing the data in a Raspberry Pi network. The outcome of this work will assist FOG architects in implementing this system in their infrastructure and also prove effective for IoT developers in implementing a decentralized Raspberry Pi network while providing more security to the data.
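The core idea IPFS brings to such a fog network is content addressing: a block is keyed by the hash of its own bytes, so any node holding a block can serve it and the receiver can verify integrity. The sketch below, with illustrative node names and data, shows only that mechanism; the paper's actual system layers Ethereum on top for further security.

```python
# Toy sketch of IPFS-style content addressing in a small fog network.
# Node names and data are illustrative placeholders.
import hashlib

class Node:
    """One storage node in a fog network (e.g. a Raspberry Pi)."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}  # content id -> raw bytes

    def put(self, data: bytes) -> str:
        cid = hashlib.sha256(data).hexdigest()  # key is the content hash
        self.blocks[cid] = data
        return cid

    def get(self, cid: str) -> bytes:
        data = self.blocks[cid]
        # any receiver can re-hash to verify the block was not tampered with
        assert hashlib.sha256(data).hexdigest() == cid
        return data

pi1, pi2 = Node("pi-1"), Node("pi-2")
cid = pi1.put(b"sensor reading 42")
pi2.blocks[cid] = pi1.blocks[cid]  # replicate the block to a second node
```

Because the key is derived from the content, replication needs no central index: whichever node answers, the hash check proves the data is authentic.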


Solid Earth ◽  
2017 ◽  
Vol 8 (5) ◽  
pp. 1047-1070 ◽  
Author(s):  
Kasra Hosseini ◽  
Karin Sigloch

Abstract. We present obspyDMT, a free, open-source software toolbox for the query, retrieval, processing and management of seismological data sets, including very large, heterogeneous and/or dynamically growing ones. ObspyDMT simplifies and speeds up user interaction with data centers, in more versatile ways than existing tools. The user is shielded from the complexities of interacting with different data centers and data exchange protocols and is provided with powerful diagnostic and plotting tools to check the retrieved data and metadata. While primarily a productivity tool for research seismologists and observatories, easy-to-use syntax and plotting functionality also make obspyDMT an effective teaching aid. Written in the Python programming language, it can be used as a stand-alone command-line tool (requiring no knowledge of Python) or can be integrated as a module with other Python codes. It facilitates data archiving, preprocessing, instrument correction and quality control – routine but nontrivial tasks that can consume much user time. We describe obspyDMT's functionality, design and technical implementation, accompanied by an overview of its use cases. As an example of a typical problem encountered in seismogram preprocessing, we show how to check for inconsistencies in response files of two example stations. We also demonstrate the fully automated request, remote computation and retrieval of synthetic seismograms from the Synthetics Engine (Syngine) web service of the Data Management Center (DMC) at the Incorporated Research Institutions for Seismology (IRIS).
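The Syngine retrieval mentioned at the end of the abstract boils down to HTTP queries against the IRIS web service, which obspyDMT assembles automatically. As an assumed illustration (the station, event ID, and parameter values below are placeholders, not from the paper), such a query URL can be sketched as:

```python
# Sketch of a query URL for IRIS DMC's Syngine synthetic-seismogram
# service; obspyDMT builds and issues such requests automatically.
# Station, event, and model values here are illustrative placeholders.
from urllib.parse import urlencode

SYNGINE = "http://service.iris.edu/irisws/syngine/1/query"

def syngine_url(model, network, station, eventid, fmt="miniseed"):
    """Assemble a Syngine query for one station and one event."""
    params = {"model": model, "network": network, "station": station,
              "eventid": eventid, "format": fmt}
    return SYNGINE + "?" + urlencode(params)

url = syngine_url("ak135f_5s", "IU", "ANMO", "GCMT:C201002270634A")
```

The response is a synthetic waveform computed remotely from precalculated Green's functions for the chosen Earth model, so the user never runs the wave-propagation code locally.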


2017 ◽  
Author(s):  
Kasra Hosseini ◽  
Karin Sigloch

Abstract. We present obspyDMT, a free, open-source software toolbox for the query, retrieval, processing and management of seismological data sets, including very large, heterogeneous, and/or dynamically growing ones. obspyDMT simplifies and speeds up user interaction with data centres, in more versatile ways than existing tools. The user is shielded from the complexities of interacting with different data centres and data exchange protocols, and is provided with powerful diagnostic and plotting tools to check the retrieved data and metadata. While primarily a productivity tool for research seismologists and observatories, easy-to-use syntax and plotting functionality also make obspyDMT an effective teaching aid. Written in the Python programming language, it can be used as a stand-alone command-line tool (requiring no knowledge of Python) or can be integrated as a module with other Python codes. It facilitates data archival, pre-processing, instrument correction, and quality control – routine but non-trivial tasks that can consume much user time. We describe obspyDMT's functionality, design and technical implementation, accompanied by an overview of its use cases. As an example of a typical problem encountered in seismogram preprocessing, we show how to check for inconsistencies in response files of two example stations. We also demonstrate the fully automated request, remote computation, and retrieval of synthetic seismograms from IRIS DMC's Syngine web service.


2016 ◽  
Vol 16 (1) ◽  
pp. 80-88 ◽  
Author(s):  
Todor Balabanov ◽  
Iliyan Zankinski ◽  
Maria Barova

Abstract. One of the strongest advantages of Distributed Evolutionary Algorithms (DEAs) is that they can be implemented in a distributed environment of heterogeneous computing nodes. Such computing nodes usually differ in hardware and operating systems, and distributed systems are limited by network latency. Some Evolutionary Algorithms (EAs) are quite suitable for distributed implementation because of their high level of parallelism and relatively low network communication demands. One of the most widely used topologies for distributed computing is the star topology. In a star topology there is a central node holding the global EA population and many remote computation nodes working on local populations (usually sub-populations of the global population). This model of distributed computing is also known as the island model. Common to DEAs is an operation called migration, which transfers some individuals between local populations. In this paper, the term 'distribution' is used instead of 'migration', because it is more accurate for the proposed model. This research proposes a strategy for the distribution of EA individuals in a star topology based on incident node participation (INP). Solving the Rubik's cube with a Genetic Algorithm (GA) is used as a benchmark. It is a combinatorial problem, and experiments are done with a C++ program that uses OpenMPI.
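The star-topology island model described above can be sketched in a few lines: a central node splits the global population into sub-populations, remote nodes evolve them locally, and a distribution step sends each island's best individual back. This is a generic sketch with a stand-in objective (maximize the gene sum), not the paper's Rubik's cube GA, INP strategy, or OpenMPI code.

```python
# Toy sketch of the star-topology island model: a central node holds the
# global population, islands evolve sub-populations, and "distribution"
# returns each island's best. Fitness is a stand-in (maximize gene sum).
import random

random.seed(1)
fitness = sum  # placeholder objective, not the Rubik's cube scoring

def evolve(pop, steps=20):
    """Tiny local search: mutate the best, replace the worst if better."""
    for _ in range(steps):
        parent = max(pop, key=fitness)
        child = [g + random.choice([-1, 0, 1]) for g in parent]
        worst = min(range(len(pop)), key=lambda i: fitness(pop[i]))
        if fitness(child) > fitness(pop[worst]):
            pop[worst] = child
    return pop

# central node creates the global population and splits it into islands
global_pop = [[random.randint(0, 9) for _ in range(5)] for _ in range(12)]
islands = [global_pop[i::3] for i in range(3)]

for island in islands:          # remote nodes evolve their sub-populations
    evolve(island)

# distribution step: each island sends its best back to the central node
best_of_islands = [max(isl, key=fitness) for isl in islands]
global_best = max(best_of_islands, key=fitness)
```

Only `best_of_islands` crosses the network in this scheme, which is why the communication demands stay low relative to the computation each island performs.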

