A Performance Analysis of Internet of Things Networking Protocols: Evaluating MQTT, CoAP, OPC UA

2021 ◽  
Vol 11 (11) ◽  
pp. 4879
Author(s):  
Daniel Silva ◽  
Liliana I. Carvalho ◽  
José Soares ◽  
Rute C. Sofia

IoT data exchange is supported today by different communication protocols and protocol frameworks, each with its own advantages and disadvantages, and often co-existing in a way that is mandated by vendor policies. Although different protocols are relevant in different domains, no single protocol provides the best performance (jitter, latency, energy consumption) across all scenarios. The focus of this work is two-fold. First, to compare the available solutions in terms of protocol features such as type of transport, supported communication patterns, and security aspects, including Named Data Networking as a relevant example of an Information-Centric Networking architecture. Second, to evaluate three of the most popular protocols used in both Consumer and Industrial IoT environments: MQTT, CoAP, and OPC UA. The experiments were first carried out on a local testbed for MQTT, CoAP, and OPC UA. Larger experiments were then carried out for MQTT and CoAP on the large-scale FIT-IoT testbed. Results show that CoAP achieves the lowest time-to-completion across all scenarios, while OPC UA, albeit exhibiting less variability, resulted in higher time-to-completion than CoAP or MQTT.
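
One factor behind CoAP's time-to-completion behavior is its lightweight UDP-based reliability scheme: confirmable messages are retransmitted with exponential backoff. The sketch below computes the retransmission timeout schedule using the default transmission parameters from RFC 7252 (ACK_TIMEOUT = 2 s, MAX_RETRANSMIT = 4), fixing the initial timeout at its lower bound for determinism; in a real stack it is drawn randomly from [ACK_TIMEOUT, ACK_TIMEOUT * ACK_RANDOM_FACTOR].

```python
# Sketch of CoAP confirmable-message retransmission timing (RFC 7252 defaults).
# Fixing the initial timeout at ACK_TIMEOUT keeps the schedule deterministic.
ACK_TIMEOUT = 2.0        # seconds
MAX_RETRANSMIT = 4

def retransmission_schedule(initial_timeout=ACK_TIMEOUT, max_retransmit=MAX_RETRANSMIT):
    """Return the timeout used for each attempt: 1 first transmission
    plus max_retransmit retransmissions, doubling the timeout each time."""
    timeouts = []
    t = initial_timeout
    for _ in range(max_retransmit + 1):
        timeouts.append(t)
        t *= 2  # exponential backoff doubles the timeout each attempt
    return timeouts

schedule = retransmission_schedule()
print(schedule)       # [2.0, 4.0, 8.0, 16.0, 32.0]
print(sum(schedule))  # 62.0 -- total wait before the exchange is abandoned
```

With a random factor of 1, the total of 62 s corresponds to RFC 7252's MAX_TRANSMIT_WAIT-style bound on how long a sender keeps trying.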

2020 ◽  
Vol 39 (4) ◽  
pp. 5449-5458
Author(s):  
A. Arokiaraj Jovith ◽  
S.V. Kasmir Raja ◽  
A. Razia Sulthana

Interference in a Wireless Sensor Network (WSN) predominantly affects the performance of the WSN, and energy consumption in WSNs is one of the greatest concerns in the current generation. This work presents an approach for interference measurement and mitigation in a point-to-point network. The nodes are distributed in the network, and interference is measured by grouping the nodes into regions of a specific diameter; hence the approach is scalable and extends to large-scale WSNs. Interference is handled in two stages. In the first stage, interference is overcome by allocating time slots to the node stations in Time Division Multiple Access (TDMA) fashion. The node area is split into larger regions and smaller regions, and time slots are allocated to the smaller regions in TDMA fashion. A TDMA-based time-slot allocation algorithm is proposed in this paper to enable reuse of time slots with minimal interference between smaller regions. In the second stage, a network density and control parameter is introduced to further reduce interference within the smaller node regions. The algorithm is simulated, and the system is tested with a varying control parameter. Node-level interference and energy dissipation at the nodes are captured while varying the node density of the network. The results indicate that the proposed approach measures interference and mitigates it with minimal energy consumption at the nodes and little transmission overhead.
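
The core idea of slot reuse between smaller regions can be illustrated with a toy spatial-reuse pattern. The sketch below is a hypothetical stand-in for the paper's algorithm: it tiles a grid of regions with a 2x2 reuse pattern of four slots so that no two edge-adjacent regions ever transmit in the same slot, while distant regions share slots.

```python
# Hypothetical sketch of TDMA slot reuse between smaller regions: regions far
# enough apart may share a time slot; adjacent regions never do.

def assign_slots(rows, cols):
    """Map each (row, col) region to one of 4 slots using a 2x2 reuse tile."""
    return {(r, c): 2 * (r % 2) + (c % 2) for r in range(rows) for c in range(cols)}

def interference_free(slots):
    """Check that no two horizontally/vertically adjacent regions share a slot."""
    for (r, c), s in slots.items():
        for nb in ((r + 1, c), (r, c + 1)):
            if nb in slots and slots[nb] == s:
                return False
    return True

slots = assign_slots(4, 4)
print(interference_free(slots))  # True: adjacent regions never share a slot
```

Only four slots are needed regardless of grid size, which is the payoff of reuse: frame length stays constant as the network scales.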


Author(s):  
Stefano Vassanelli

Establishing direct communication with the brain through physical interfaces is a fundamental strategy for investigating brain function. Starting with the patch-clamp technique in the seventies, neuroscience has moved from detailed characterization of ionic channels to the analysis of single neurons and, more recently, microcircuits in brain neuronal networks. The development of new biohybrid probes with electrodes for recording and stimulating neurons in the living animal is a natural consequence of this trend, as demonstrated by the recent introduction of optogenetic stimulation and advanced high-resolution large-scale electrical recording approaches. Brain implants for real-time neurophysiology are also opening new avenues for neuroprosthetics to restore brain function after injury or in neurological disorders. This chapter provides an overview of existing and emerging neurophysiology technologies, with particular focus on those intended to interface with neuronal microcircuits in vivo. Chemical, electrical, and optogenetic-based interfaces are presented, with an analysis of the advantages and disadvantages of the different technical approaches.


Author(s):  
Lichao Xu ◽  
Szu-Yun Lin ◽  
Andrew W. Hlynka ◽  
Hao Lu ◽  
Vineet R. Kamat ◽  
...  

There has been a strong need for simulation environments capable of modeling the deep interdependencies between complex systems encountered during natural hazards, such as the interactions and coupled effects between civil infrastructure response, human behavior, and social policies, in order to improve community resilience. Coupling such complex components in an integrated simulation requires continuous data exchange between the different simulators running separate models throughout the simulation. This can be implemented by means of distributed simulation platforms or data passing tools. To provide a systematic reference for simulation tool choice and to facilitate the development of compatible distributed simulators for studying deep interdependencies in the context of natural hazards, this article focuses on generic tools suitable for integrating simulators from different fields rather than on platforms used mainly in specific fields. With this aim, the article provides a comprehensive review of the most commonly used generic distributed simulation platforms (Distributed Interactive Simulation (DIS), High Level Architecture (HLA), Test and Training Enabling Architecture (TENA), and the Data Distribution Service (DDS)) and data passing tools (the Robot Operating System (ROS) and Lightweight Communications and Marshalling (LCM)) and compares their advantages and disadvantages. Three specific limitations of existing platforms are identified from the perspective of natural hazard simulation. To mitigate these limitations, two platform design recommendations are provided, namely message exchange wrappers and hybrid communication, to help improve data passing capabilities in existing solutions and to guide the design of a new domain-specific distributed simulation framework.
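
The topic-based data passing that tools such as ROS and LCM provide between coupled simulators can be sketched with a minimal in-process publish-subscribe bus. This is an illustrative sketch of the pattern only, not the API of either tool, and the topic name and message fields are invented for the example.

```python
# Minimal in-process publish-subscribe bus: simulators exchange messages by
# topic, decoupling the producer of data from its consumers.
from collections import defaultdict

class MessageBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a callback to receive every message on a topic."""
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        """Deliver a message to all subscribers of a topic."""
        for callback in self._subscribers[topic]:
            callback(message)

bus = MessageBus()
received = []
# A "hazard simulator" publishes ground motion; an "infrastructure simulator"
# consumes it -- the kind of continuous exchange the review describes.
bus.subscribe("ground_motion", received.append)
bus.publish("ground_motion", {"pga_g": 0.35})
print(received)  # [{'pga_g': 0.35}]
```

A message exchange wrapper, in the article's sense, would sit between such a bus and each simulator, translating each model's native data format to and from the shared message schema.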


Author(s):  
Clemens M. Lechner ◽  
Nivedita Bhaktha ◽  
Katharina Groskurth ◽  
Matthias Bluemke

Measures of cognitive or socio-emotional skills from large-scale assessment surveys (LSAS) are often based on advanced statistical models and scoring techniques unfamiliar to applied researchers. Consequently, applied researchers working with data from LSAS may be uncertain about the assumptions and computational details of these statistical models and scoring techniques, and about how best to incorporate the resulting skill measures into secondary analyses. The present paper is intended as a primer for applied researchers. After a brief introduction to the key properties of skill assessments, we give an overview of the three principal methods with which secondary analysts can incorporate skill measures from LSAS in their analyses: (1) as test scores (i.e., point estimates of individual ability), (2) through structural equation modeling (SEM), and (3) in the form of plausible values (PVs). We discuss the advantages and disadvantages of each method based on three criteria: fallibility (i.e., control for measurement error and unbiasedness), usability (i.e., ease of use in secondary analyses), and immutability (i.e., consistency of test scores, PVs, or measurement model parameters across different analyses and analysts). We show that although none of the methods is optimal under all criteria, methods that result in a single point estimate of each respondent's ability (i.e., all types of "test scores") are rarely optimal for research purposes. Instead, approaches that avoid or correct for measurement error, especially PV methodology, stand out as the methods of choice. We conclude with practical recommendations for secondary analysts and data-producing organizations.
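
In practice, PV methodology means running the secondary analysis once per plausible-value set and pooling the results with Rubin's rules. The sketch below shows that pooling step; the estimates and variances are made-up numbers for illustration, not values from any LSAS.

```python
# Sketch of combining analyses across plausible values (PVs) via Rubin's
# rules: pool the point estimates, then add within- and between-imputation
# variance so that measurement uncertainty propagates into standard errors.
from statistics import mean, variance

def pool_pv_estimates(estimates, variances):
    """Return (pooled estimate, total sampling variance) over m PV analyses."""
    m = len(estimates)
    q_bar = mean(estimates)              # pooled point estimate
    u_bar = mean(variances)              # average within-imputation variance
    b = variance(estimates)              # between-imputation variance
    total_var = u_bar + (1 + 1 / m) * b  # Rubin's total variance
    return q_bar, total_var

# Five analyses, one per plausible value, each yielding (estimate, variance):
estimates = [0.42, 0.45, 0.40, 0.44, 0.43]
variances = [0.010, 0.011, 0.009, 0.010, 0.012]
est, var = pool_pv_estimates(estimates, variances)
print(round(est, 3))  # 0.428
```

Note that the pooled variance exceeds the average within-analysis variance: the spread across PV sets is exactly the measurement uncertainty that single test scores discard.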


npj Vaccines ◽  
2021 ◽  
Vol 6 (1) ◽  
Author(s):  
Nikolaos C. Kyriakidis ◽  
Andrés López-Cortés ◽  
Eduardo Vásconez González ◽  
Alejandra Barreto Grimaldos ◽  
Esteban Ortiz Prado

The new SARS-CoV-2 virus is an RNA virus that belongs to the Coronaviridae family and causes COVID-19 disease. The newly sequenced virus appears to have originated in China and rapidly spread throughout the world, becoming a pandemic that, as of January 5th, 2021, had caused more than 1,866,000 deaths. Hence, laboratories worldwide are developing an effective vaccine against this disease, which will be essential to reduce morbidity and mortality. Currently, there are more than 64 vaccine candidates, most of them aiming to induce neutralizing antibodies against the spike protein (S). These antibodies would prevent uptake through the human ACE-2 receptor, thereby limiting viral entry. Different vaccine platforms are being used for vaccine development, each presenting several advantages and disadvantages. Thus far, thirteen vaccine candidates are being tested in Phase 3 clinical trials and are therefore closest to receiving approval or authorization for large-scale immunization.


2014 ◽  
Vol 26 (4) ◽  
pp. 781-817 ◽  
Author(s):  
Ching-Pei Lee ◽  
Chih-Jen Lin

Linear rankSVM is one of the widely used methods for learning to rank. Although its performance may be inferior to nonlinear methods such as kernel rankSVM and gradient boosting decision trees, linear rankSVM is useful for quickly producing a baseline model. Furthermore, following its recent development for classification, linear rankSVM may give competitive performance on large and sparse data. A great deal of work has studied linear rankSVM, with a focus on computational efficiency when the number of preference pairs is large. In this letter, we systematically study existing works, discuss their advantages and disadvantages, and propose an efficient algorithm. We discuss different implementation issues and extensions with detailed experiments. Finally, we develop a robust linear rankSVM tool for public use.
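
The objective that rankSVM minimizes is a pairwise hinge loss: for each preference pair (i preferred over j), the model is penalized whenever the score of item i does not exceed that of item j by a margin of 1. The sketch below is a minimal, dependency-free evaluation of that objective; the feature vectors and pairs are invented for illustration, and the number of such pairs growing quadratically with the data is exactly the efficiency concern the letter addresses.

```python
# L2-regularized linear rankSVM objective over preference pairs (i, j):
#   0.5 * ||w||^2 + C * sum over pairs of max(0, 1 - (w.x_i - w.x_j))

def pairwise_hinge_loss(w, X, pairs, C=1.0):
    """Evaluate the rankSVM objective for weight vector w."""
    dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))
    reg = 0.5 * dot(w, w)
    loss = sum(max(0.0, 1.0 - (dot(w, X[i]) - dot(w, X[j]))) for i, j in pairs)
    return reg + C * loss

X = [[2.0, 0.0], [1.0, 1.0], [0.0, 1.0]]   # feature vectors, one per item
pairs = [(0, 1), (1, 2)]                   # item 0 preferred over 1, 1 over 2
print(pairwise_hinge_loss([1.0, 0.0], X, pairs))  # 0.5: both margins satisfied
```

With w = [1, 0] both pairs are separated with margin exactly 1, so only the regularization term remains; a zero weight vector instead pays the full hinge penalty on every pair.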


Author(s):  
John A. Stankovic ◽  
Tian He

This paper presents a holistic view of energy management in sensor networks. We first discuss hardware designs that support the life cycle of energy, namely: (i) energy harvesting, (ii) energy storage and (iii) energy consumption and control. Then, we discuss individual software designs that manage energy consumption in sensor networks. These energy-aware designs include media access control, routing, localization and time-synchronization. At the end of this paper, we present a case study of the VigilNet system to explain how to integrate various types of energy management techniques to achieve collaborative energy savings in a large-scale deployed military surveillance system.
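
The leverage of the software-side techniques above comes largely from duty cycling the radio. The back-of-the-envelope sketch below shows why: the current draws and battery capacity are illustrative mote-class assumptions, not VigilNet's measured values.

```python
# Back-of-the-envelope node-lifetime model under duty cycling: the average
# current is a weighted mix of active and sleep draw, and lifetime is
# battery capacity divided by that average.

def lifetime_days(capacity_mah, active_ma, sleep_ma, duty_cycle):
    """Estimated lifetime in days for a given radio duty cycle."""
    avg_ma = duty_cycle * active_ma + (1 - duty_cycle) * sleep_ma
    return capacity_mah / avg_ma / 24  # hours -> days

# Assumed numbers: 20 mA active radio, 0.02 mA deep sleep, ~2500 mAh (2x AA).
always_on = lifetime_days(2500, 20.0, 0.02, 1.0)
one_percent = lifetime_days(2500, 20.0, 0.02, 0.01)
print(round(always_on, 1), round(one_percent, 1))  # 5.2 473.9
```

An always-on radio exhausts the node in days, while a 1% duty cycle stretches the same battery to over a year, which is why MAC, routing, and time-synchronization designs all revolve around keeping the radio asleep.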


2017 ◽  
Vol 2017 ◽  
pp. 1-12 ◽  
Author(s):  
Shanghong Zhang ◽  
Wenda Li ◽  
Zhu Jing ◽  
Yujun Yi ◽  
Yong Zhao

Three parallel methods (OpenMP, MPI, and OpenACC) are evaluated for the computation of a two-dimensional dam-break model using the explicit finite volume method. A dam-break event in the Pangtoupao flood storage area in China is selected as a case study to demonstrate the key technologies for implementing parallel computation, and the resulting acceleration of each method is evaluated. The simulation results show that the OpenMP and MPI parallel methods achieve speedup factors of 9.8 and 5.1, respectively, on a 32-core computer, whereas the OpenACC parallel method achieves a speedup factor of 20.7 on an NVIDIA Tesla K20c graphics card. The results show that if the memory required by the dam-break simulation does not exceed the memory capacity of a single computer, the OpenMP parallel method is a good choice. If GPU acceleration is available, the OpenACC parallel method offers the best acceleration. Finally, the MPI parallel method is suitable for models that require little data exchange but large-scale calculation. This study compares the efficiency and methodology of accelerating algorithms for a dam-break model and can serve as a reference for selecting the best acceleration method for similar hydrodynamic models.
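
The reported speedups can be put in perspective with two standard quantities: parallel efficiency (speedup divided by the number of processing units) and Amdahl's law, which bounds the speedup attainable when a fraction of the work stays serial. The 5% serial fraction below is an illustrative assumption, not a figure from the study.

```python
# Parallel efficiency and Amdahl's-law bound for the reported 32-core runs.

def efficiency(speedup, units):
    """Fraction of ideal linear speedup actually achieved."""
    return speedup / units

def amdahl_speedup(serial_fraction, units):
    """Upper bound on speedup when serial_fraction of the work cannot scale."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / units)

print(round(efficiency(9.8, 32), 2))        # 0.31 -- OpenMP on 32 cores
print(round(efficiency(5.1, 32), 2))        # 0.16 -- MPI on 32 cores
print(round(amdahl_speedup(0.05, 32), 1))   # 12.5 -- cap with 5% serial work
```

Even a modest serial fraction caps 32-core speedup well below 32, which is consistent with the sub-linear factors the study reports and with its advice that MPI pays off mainly when data exchange is small relative to computation.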


Author(s):  
Zhixin Tie ◽  
David Ko ◽  
Harry H. Cheng

Mobile agent technology has become an important approach for the design and development of distributed systems. However, there is little research on monitoring computer resources and usage at large-scale distributed computer centers. This paper presents a mobile agent-based system called the Mobile Agent Based Computer Monitoring System (MABCMS) that supports the dynamic sending and execution of control commands, dynamic data exchange, and dynamic deployment of mobile code in C/C++. Based on the Mobile-C library, agents can call low-level functions in binary dynamic or static libraries and can thus monitor computer resources and usage conveniently and efficiently. Two experimental applications were designed using the MABCMS and evaluated in a university computer center with hundreds of workstations and 15 server machines. The first uses the MABCMS to detect improper usage of the workstations, such as playing computer games. The second uses the MABCMS to report system resources such as available hard disk space. The experiments show that the mobile agent-based monitoring system is an effective method for detecting and interacting with students playing computer games and a practical way to monitor computer resources in large-scale distributed computer centers.
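
The resource check in the second application, reporting available disk space per workstation, can be sketched as follows. This uses Python's standard library purely for illustration; the MABCMS itself deploys C/C++ mobile code via Mobile-C, and the function name and report format here are invented.

```python
# Illustrative stand-in for a per-workstation disk-space check, the kind of
# probe a monitoring agent would run and report back to the center.
import shutil

def disk_report(path="/"):
    """Return total/used/free disk space in GiB for the given mount point."""
    usage = shutil.disk_usage(path)
    gib = 1024 ** 3
    return {k: round(getattr(usage, k) / gib, 1) for k in ("total", "used", "free")}

report = disk_report("/")
print(report)  # e.g. {'total': 457.0, 'used': 301.2, 'free': 155.8}
```

In the mobile-agent setting the interesting part is not the probe itself but its delivery: the agent carrying this logic migrates to each workstation, executes locally, and returns only the summary, avoiding a central poll of hundreds of machines.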


2016 ◽  
Vol 723 ◽  
pp. 572-578
Author(s):  
Li Fu ◽  
Qi Chi Le ◽  
Xi Bo Wang ◽  
Xuan Liu ◽  
Wei Tao Jia

In recent years, the development and utilization of renewable generation have attracted increasing attention, and the grid places higher requirements on energy storage technology, especially regarding security, stability, and reliability. The liquid metal battery (LMB) consists of two liquid metal electrodes and a molten salt electrolyte, which naturally segregate into three liquid layers. Being low-cost and long-lived, it is regarded as the best choice for grid-level large-scale energy storage. This paper describes the main structure and working principle of the LMB, analyzes its advantages and disadvantages compared with traditional batteries, and explores its feasibility and economics when used for large-scale energy storage in the power grid. The paper also comprehensively compares the performance of several LMBs and points out directions for the LMB's future research and development.

