Reliable service chain orchestration for scalable data-intensive computing at infrastructure edges

2019
Author(s):
Dmitrii Yurievich Chemodanov

In the event of natural or man-made disasters, geospatial video analytics is valuable for providing situational awareness that can be extremely helpful for first responders. However, geospatial video analytics demands massive imagery/video data 'collection' from Internet-of-Things (IoT) devices and its seamless 'computation/consumption' within a geo-distributed (edge/core) cloud infrastructure in order to meet user Quality of Experience (QoE) expectations. Edge computing therefore needs to be designed for reliable performance while interfacing with the core cloud to run computer vision algorithms, because infrastructure edges near locations generating imagery/video content are rarely equipped with high-performance computation capabilities. This thesis addresses the challenges of interfacing edge and core cloud computing within the geo-distributed infrastructure through a novel 'function-centric computing' paradigm that brings new insights to the computer vision, edge routing, and network virtualization areas. Specifically, we detail state-of-the-art techniques and illustrate our new/improved solution approaches based on function-centric computing for two problems: (i) high-throughput data collection from IoT devices at the wireless edge, and (ii) seamless data computation/consumption within the geo-distributed (edge/core) cloud infrastructure. To address (i), we present a novel deep learning-augmented geographic edge routing scheme that relies on physical area knowledge obtained from satellite imagery. To address (ii), we describe a novel reliable service chain orchestration framework that builds upon microservices and utilizes a novel 'metapath composite variable' approach supported by a constrained-shortest-path finder. Finally, we show both analytically and empirically how our geographic routing, constrained-shortest-path finder, and reliable service chain orchestration approaches, which compose our function-centric computing framework, are superior to many traditional and state-of-the-art techniques. As a result, we can significantly speed up (by up to 4 times) data-intensive computing at infrastructure edges, fostering effective disaster relief coordination to save lives.
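The abstract does not reproduce the thesis's 'metapath composite variable' algorithm; purely as a hedged illustration of what a constrained-shortest-path finder does in this setting, the sketch below finds the cheapest path through a toy edge/core topology whose cumulative delay stays within a bound. The graph, costs, delays, and delay bound are made-up example values, not the author's method or data.

```python
import heapq

def constrained_shortest_path(graph, src, dst, max_delay):
    """Cheapest src->dst path whose cumulative delay stays within max_delay.

    graph: {node: [(neighbor, cost, delay), ...]} -- a toy adjacency list.
    Returns (cost, path), or (None, None) if no feasible path exists.
    """
    # Labels are popped in non-decreasing cost order, so the first time the
    # destination is popped we have the cheapest feasible path.
    frontier = [(0, 0, src, [src])]
    best_delay = {}  # lowest delay with which each node has been settled
    while frontier:
        cost, delay, node, path = heapq.heappop(frontier)
        if node == dst:
            return cost, path
        # A label with both higher cost (popped later) and no better delay
        # is dominated and can be discarded.
        if delay >= best_delay.get(node, float("inf")):
            continue
        best_delay[node] = delay
        for nxt, edge_cost, edge_delay in graph.get(node, []):
            if delay + edge_delay <= max_delay:
                heapq.heappush(
                    frontier,
                    (cost + edge_cost, delay + edge_delay, nxt, path + [nxt]),
                )
    return None, None

# Toy edge/core topology: (neighbor, cost, delay); all numbers are made up.
toy_graph = {
    "edge": [("relay", 1, 5), ("core", 4, 2)],
    "relay": [("core", 1, 5)],
    "core": [],
}
print(constrained_shortest_path(toy_graph, "edge", "core", max_delay=8))
# -> (4, ['edge', 'core']): the cheaper relay route is infeasible (delay 10 > 8)
```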

2021
Author(s):
Moritz D Luerig

Digital images are a ubiquitous way to represent phenotypes. More and more ecologists and evolutionary biologists are using images to capture and analyze high-dimensional phenotypic data to understand complex developmental and evolutionary processes. As a consequence, images are being collected at ever increasing rates, already outpacing our ability to process and analyze the phenotypic information they contain. phenopype is a high-throughput phenotyping package for the programming language Python that supports ecologists and evolutionary biologists in extracting high-dimensional phenotypic data from digital images. phenopype integrates existing state-of-the-art computer vision functions (using the OpenCV library as a backend), GUI-based interactions, and a project management ecosystem to facilitate rapid data collection and reproducibility. phenopype offers three workflow types that support users during different stages of scientific image analysis (prototyping, low-throughput, and high-throughput). In the high-throughput workflow, users interact with human-readable YAML configuration files to efficiently modify settings for different images. These settings are stored along with the processed images and results, so that the acquired phenotypic information becomes highly reproducible. phenopype combines the advantages of the Python environment, with its state-of-the-art computer vision, array manipulation, and data handling libraries, with basic GUI capabilities that allow users to step into the automatic workflow when necessary. Overall, phenopype aims to augment, rather than replace, the utility of existing Python CV libraries, allowing biologists to focus on rapid and reproducible data collection.
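phenopype's own API is not shown in the abstract; as a minimal sketch of the kind of OpenCV-backed operation such a high-throughput pipeline automates (segment a specimen, then measure a simple trait), the snippet below thresholds a hypothetical image and reports the area of its largest contour. The file name and threshold value are assumptions; in phenopype, such settings would live in a YAML configuration file rather than in code.

```python
import cv2  # OpenCV 4.x, the backend library phenopype builds on

# Hypothetical input: one specimen photographed on a plain light background.
image = cv2.imread("specimen.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Segment the specimen from the background (threshold value is an assumption).
_, mask = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV)

# Extract contours and measure a simple phenotypic trait: projected body area.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)
print("area in pixels:", cv2.contourArea(largest))
```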


Author(s):
Muhammad Yousaf
Petr Bris

This paper carries out a systematic literature review (SLR), covering 1991 to 2019, of the EFQM (European Foundation for Quality Management) excellence model. The aim of the paper is to present the state of the art in quantitative research on the EFQM excellence model and thereby guide future research lines in this field. Articles were retrieved using six search strings executed in three popular databases: Scopus, Web of Science, and Science Direct. Around 584 peer-reviewed articles directly linked to quantitative research on the EFQM excellence model were examined. About 108 papers were finally chosen; their purpose, data collection, conclusions, contributions, and type of quantitative analysis are then briefly discussed and analyzed in this study. Thus, this study identifies the researchers' focus areas and the knowledge gaps in the empirical quantitative literature on the EFQM excellence model. This article also outlines lines for future research.


1999
Vol 18 (3-4)
pp. 265-273
Author(s):
Giovanni B. Garibotto

The paper is intended to provide an overview of advanced robotic technologies within the context of Postal Automation services. The main functional requirements of the application are briefly reviewed, as well as the state of the art and new emerging solutions. Image Processing and Pattern Recognition have always played a fundamental role in Address Interpretation and Mail sorting, and the new challenging objective is now off-line handwritten cursive recognition, in order to be able to handle all kinds of addresses in a uniform way. On the other hand, advanced electromechanical and robotic solutions are extremely important for solving the problems of mail storage, transportation, and distribution, as well as for material handling and logistics. Finally, new Postal Automation services are briefly described, focusing on the emerging services of hybrid mail and paper-to-electronic conversion.


Agronomy
2021
Vol 11 (6)
pp. 1069
Author(s):
Shibbir Ahmed
Baijing Qiu
Fiaz Ahmad
Chun-Wei Kong
Huang Xin

Over the last decade, Unmanned Aerial Vehicles (UAVs), also known as drones, have been broadly utilized in various agricultural fields, such as crop management, crop monitoring, seed sowing, and pesticide spraying. Nonetheless, autonomy is still a crucial limitation faced by Internet of Things (IoT) UAV systems, especially when used as sprayer UAVs, where data needs to be captured and preprocessed for robust real-time obstacle detection and collision avoidance. Moreover, because of the differences in objectives and operation between general UAVs and sprayer UAVs, not every obstacle detection and collision avoidance method will be sufficient for sprayer UAVs. In this regard, this article reviews the most relevant developments across all branches of obstacle avoidance for agricultural sprayer UAVs, including the structural details of UAV sprayers. Furthermore, the most relevant open challenges for current UAV sprayer solutions are enumerated, thus paving the way for future researchers to define a roadmap for devising new-generation, affordable autonomous sprayer UAV solutions. Agricultural UAV sprayers require data-intensive algorithms to process the images they acquire, and expertise in autonomous flight is usually needed. The present study concludes that UAV sprayers still face obstacle detection challenges due to their dynamic operating and loading conditions.
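No specific detection algorithm from the review is reproduced here; as a deliberately simplified, hypothetical illustration of the kind of real-time check a sprayer UAV's avoidance logic might perform, the sketch below flags an obstacle when enough pixels of a forward-facing depth image fall inside a safety distance. All thresholds and the synthetic depth map are assumptions.

```python
import numpy as np

def obstacle_ahead(depth_map, safety_distance_m=5.0, min_pixels=200):
    """Very simplified obstacle check on a forward-facing depth image.

    depth_map: 2D array of distances in metres (e.g. from a stereo or ToF sensor).
    Returns True when enough pixels in the central region fall inside the
    safety distance, which could trigger an avoidance manoeuvre or a
    hover-and-replan behaviour.
    """
    h, w = depth_map.shape
    # Only inspect the central window, roughly where the flight path leads.
    window = depth_map[h // 4: 3 * h // 4, w // 4: 3 * w // 4]
    close = np.count_nonzero((window > 0) & (window < safety_distance_m))
    return close >= min_pixels

# Toy example: a synthetic depth map with a nearby object in the centre.
depth = np.full((120, 160), 30.0)
depth[50:70, 70:90] = 2.5
print(obstacle_ahead(depth))  # True
```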


Author(s):
Sebastian Hoppe Nesgaard Jensen
Mads Emil Brix Doest
Henrik Aanæs
Alessio Del Bue

Non-rigid structure from motion (nrsfm) is a long-standing and central problem in computer vision, and its solution is necessary for obtaining 3D information from multiple images when the scene is dynamic. A main issue for the further development of this important computer vision topic is the lack of high-quality data sets. We address this issue by presenting a data set created for this purpose, which is made publicly available and is considerably larger than the previous state of the art. To validate the applicability of this data set, and to provide an investigation into the state of the art of nrsfm, including potential directions forward, we present a benchmark and a scrupulous evaluation using this data set. The benchmark evaluates 18 different methods with available code that reasonably span the state of the art in sparse nrsfm. This new public data set and evaluation protocol will provide benchmark tools for further development in this challenging field.
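The benchmarked methods themselves are not reproduced here; as a hedged illustration of the low-rank shape-basis assumption that most sparse nrsfm factorization methods share, the sketch below generates synthetic orthographic 2D tracks from K shape bases and confirms that the stacked measurement matrix has rank at most 3K. All dimensions and the random data are arbitrary example values.

```python
import numpy as np

rng = np.random.default_rng(0)
F, P, K = 30, 50, 3            # frames, points, shape bases (arbitrary choices)

# Random shape bases and per-frame mixing coefficients.
bases = rng.standard_normal((K, 3, P))
coeffs = rng.standard_normal((F, K))

rows = []
for f in range(F):
    # Non-rigid shape at frame f: a linear combination of the bases (3 x P).
    shape = np.tensordot(coeffs[f], bases, axes=1)
    # Orthographic camera: the top two rows of a random rotation matrix.
    q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    rows.append(q[:2] @ shape)                  # 2 x P projected tracks
W = np.vstack(rows)                             # 2F x P measurement matrix

# Under the low-rank shape-basis model (no camera translation in this toy
# setup) the measurement matrix has rank at most 3K, which factorization-based
# nrsfm methods exploit.
print(np.linalg.matrix_rank(W))                 # expected: 3K = 9
```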


IEEE Micro
2014
Vol 34 (5)
pp. 52-63
Author(s):
Laurent Schares
Benjamin G. Lee
Fabio Checconi
Russell Budd
Alexander Rylyakov
...
