The Stream Exchange Protocol: A Secure and Lightweight Tool for Decentralized Connection Establishment

Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 4969
Author(s):  
Stefan Tatschner ◽  
Ferdinand Jarisch ◽  
Alexander Giehl ◽  
Sven Plaga ◽  
Thomas Newe

With the growing availability and prevalence of internet-capable devices, the complexity of networks and associated connection management increases. Depending on the use case, different approaches to handling connectivity have emerged over the years, tackling diverse challenges in each distinct area. Exposing centralized web services facilitates reachability; distributing information in a peer-to-peer fashion offers availability; and segregating virtual private sub-networks promotes confidentiality. A common challenge herein lies in connection establishment, particularly in discovering and securely connecting to peers. However, unifying different aspects, including the usability, scalability, and security of this process in a single framework, remains a challenge. In this paper, we present the Stream Exchange Protocol (SEP) collection, which provides a set of building blocks for secure, lightweight, and decentralized connection establishment. These building blocks use unique identities that enable both the identification and authentication of single communication partners. By utilizing federated directories as decentralized databases, peers are able to reliably share authentic data, such as current network locations and available endpoints. Overall, this collection of building blocks is universally applicable, easy to use, and protected by state-of-the-art security mechanisms by design. We demonstrate the capabilities and versatility of the SEP collection by providing three tools that utilize our building blocks: a decentralized file sharing application, a point-to-point network tunnel using the SEP trust model, and an application that utilizes our decentralized discovery mechanism for authentic and asynchronous data distribution.
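The core idea of identity-bound directory records can be sketched in a few lines. The following is an illustrative model only, not the SEP wire format: identities are key fingerprints, a directory maps identity to an authenticated location record, and lookups reject tampered entries. A real implementation would use public-key signatures; HMAC with a shared key is a stand-in here for brevity, and all names are hypothetical.

```python
import hashlib
import hmac
import json

def identity(key: bytes) -> str:
    """A peer's unique identity: a fingerprint of its key material."""
    return hashlib.sha256(key).hexdigest()[:16]

def publish(directory: dict, key: bytes, location: str) -> None:
    """Publish an authenticated (identity -> network location) record."""
    record = json.dumps({"id": identity(key), "loc": location}).encode()
    tag = hmac.new(key, record, hashlib.sha256).hexdigest()
    directory[identity(key)] = (record, tag)

def lookup(directory: dict, key: bytes) -> str:
    """Resolve a peer's current location, rejecting tampered records."""
    record, tag = directory[identity(key)]
    expected = hmac.new(key, record, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("directory record failed authentication")
    return json.loads(record)["loc"]
```

The point of the sketch is the shape of the trust model: the directory only stores and serves records, while authenticity is checked end-to-end by the peers themselves.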

Electronics ◽  
2021 ◽  
Vol 10 (13) ◽  
pp. 1517
Author(s):  
Xinsheng Wang ◽  
Xiyue Wang

True random number generators (TRNGs) have been a research hotspot due to the requirements of secure encryption algorithms, and such circuits are necessary building blocks in state-of-the-art security controllers. In this paper, a TRNG based on random telegraph noise (RTN) with a controllable rate is proposed. A novel noise-array circuit is presented, which consists of digital decoder circuits and RTN noise circuits. The frequency at which random numbers are generated is controlled by the speed of selecting among different gating signals. Simulation results show that the array circuit, consisting of 64 noise source circuits, can generate random numbers at frequencies from 1 kHz to 16 kHz.
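The decoder-plus-noise-array idea can be modeled in software. The sketch below is an illustrative abstraction, not the paper's circuit: a decoder steps through 64 two-state telegraph-noise sources, and each cycle samples the gated source as one output bit; in hardware, the stepping speed of the select signal is what sets the 1 kHz to 16 kHz output rate.

```python
import random

def rtn_trng(n_bits: int, n_sources: int = 64, seed=None) -> list:
    """Software model of a decoder-selected RTN noise array.

    Each "source" is an independent two-state telegraph signal that
    randomly flips over time. A decoder gates one source per cycle,
    and the gated source's state is sampled as one random bit.
    """
    rng = random.Random(seed)
    states = [rng.randint(0, 1) for _ in range(n_sources)]
    bits = []
    for cycle in range(n_bits):
        sel = cycle % n_sources      # decoder output: which source is gated
        if rng.random() < 0.5:       # RTN behavior: the state may flip
            states[sel] ^= 1
        bits.append(states[sel])
    return bits
```

In the hardware design the randomness comes from device-level RTN rather than a pseudorandom generator; the model only mirrors the selection structure.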


Author(s):  
Nasir Saeed ◽  
Heba Almorad ◽  
Hayssam Dahrouj ◽  
Tareq Y. Al-Naffouri ◽  
Jeff S. Shamma ◽  
...  

Author(s):  
Michał R. Nowicki ◽  
Dominik Belter ◽  
Aleksander Kostusiak ◽  
Petr Cížek ◽  
Jan Faigl ◽  
...  

Purpose: This paper aims to evaluate four different simultaneous localization and mapping (SLAM) systems in the context of localization of multi-legged walking robots equipped with compact RGB-D sensors. This paper identifies problems related to in-motion data acquisition in a legged robot and evaluates the particular building blocks and concepts applied in contemporary SLAM systems against these problems. The SLAM systems are evaluated on two independent experimental set-ups, applying a well-established methodology and performance metrics.
Design/methodology/approach: Four feature-based SLAM architectures are evaluated with respect to their suitability for localization of multi-legged walking robots. The evaluation methodology is based on the computation of the absolute trajectory error (ATE) and relative pose error (RPE), which are performance metrics well-established in the robotics community. Four sequences of RGB-D frames acquired in two independent experiments using two different six-legged walking robots are used in the evaluation process.
Findings: The experiments revealed that the predominant problem characteristics of the legged robots as platforms for SLAM are the abrupt and unpredictable sensor motions, as well as oscillations and vibrations, which corrupt the images captured in-motion. The tested adaptive gait allowed the evaluated SLAM systems to reconstruct proper trajectories. The bundle adjustment-based SLAM systems produced the best results, thanks to the use of a map, which enables a large number of constraints to be established for the estimated trajectory.
Research limitations/implications: The evaluation was performed using indoor mockups of terrain. Experiments in more natural and challenging environments are envisioned as part of future research.
Practical implications: The lack of accurate self-localization methods is considered one of the most important limitations of walking robots. Thus, the evaluation of the state-of-the-art SLAM methods on legged platforms may be useful for all researchers working on walking robots’ autonomy and their use in various applications, such as search, security, agriculture and mining.
Originality/value: The main contribution lies in the integration of the state-of-the-art SLAM methods on walking robots and their thorough experimental evaluation using a well-established methodology. Moreover, a SLAM system designed especially for RGB-D sensors and real-world applications is presented in detail.
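The ATE and RPE metrics used in this evaluation are standard in the robotics community. A minimal sketch of their translational parts, assuming the ground-truth and estimated trajectories are already time-synchronized and aligned to a common frame (full definitions also include a rigid-body alignment step and rotational terms):

```python
import numpy as np

def ate_rmse(gt: np.ndarray, est: np.ndarray) -> float:
    """Absolute trajectory error: RMSE of per-frame position differences.

    gt, est: (N, 3) arrays of ground-truth and estimated positions,
    assumed time-synchronized and expressed in the same frame.
    """
    diff = gt - est
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

def rpe_rmse(gt: np.ndarray, est: np.ndarray, delta: int = 1) -> float:
    """Relative pose error over a fixed frame offset (translation only):
    compares the motion increments rather than absolute positions,
    so it measures local drift independent of accumulated error."""
    gt_rel = gt[delta:] - gt[:-delta]
    est_rel = est[delta:] - est[:-delta]
    diff = gt_rel - est_rel
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))
```

ATE captures global trajectory consistency, while RPE isolates local drift per step, which is why both are reported together in SLAM benchmarks.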


2021 ◽  
Vol 7 (2) ◽  
pp. 19
Author(s):  
Tirivangani Magadza ◽  
Serestina Viriri

Quantitative analysis of brain tumors provides valuable information for better understanding tumor characteristics and for treatment planning. Accurate segmentation of lesions requires more than one imaging modality with varying contrasts. As a result, manual segmentation, which is arguably the most accurate segmentation method, would be impractical for larger studies. Deep learning has recently emerged as a solution for quantitative analysis due to its record-shattering performance. However, medical image analysis has its own unique challenges. This paper presents a review of state-of-the-art deep learning methods for brain tumor segmentation, clearly highlighting their building blocks and various strategies. We end with a critical discussion of open challenges in medical image analysis.


Author(s):  
Jewel Okyere-Benya ◽  
Georgios Exarchakos ◽  
Vlado Menkovski ◽  
Antonio Liotta ◽  
Paolo Giaccone

Evolving paradigms of parallel transport mechanisms are necessary to satisfy the ever-increasing need for high-performing communication systems. Parallel transport mechanisms can be described as techniques for sending several pieces of data simultaneously over several parallel channels. The authors’ survey captures all the building blocks in designing next-generation parallel transport mechanisms by firstly analyzing the basic structure of a transport mechanism in a point-to-point scenario. They then segment parallel transport into four categories and describe some of the most sophisticated technologies in each: Multipath under Point-to-Point, Multicast under Point-to-Multipoint, Parallel downloading under Multipoint-to-Point, and Peer-to-Peer streaming under Multipoint-to-Multipoint. The survey enables the authors to stipulate that high-performing parallel transport mechanisms can be achieved by integrating the most efficient technologies under these categories, while using the most efficient underlying point-to-point transport protocols.


Author(s):  
Andreas U. Schmidt ◽  
Nicolai Kuntze

Security in the value creation chain hinges on many single components and their interrelations. Trusted platforms open ways to fulfil the pertinent requirements. This chapter gives a systematic approach to the utilisation of trusted computing platforms over the whole lifecycle of multimedia products. This spans production, aggregation, (re)distribution, consumption, and charging. Trusted Computing technology as specified by the Trusted Computing Group provides modular building blocks which can be utilized at many points in the multimedia lifecycle. We propose a corresponding research roadmap that goes beyond the conventional Digital Rights Management use case. Selected technical concepts illustrate the principles of Trusted Computing applications in the multimedia context.


2020 ◽  
Vol 10 (20) ◽  
pp. 7201
Author(s):  
Xiao-Xia Yin ◽  
Lihua Yin ◽  
Sillas Hadjiloucas

Mining algorithms for Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) of breast tissue are discussed. The algorithms are based on recent advances in multi-dimensional signal processing and aim to advance current state-of-the-art computer-aided detection and analysis of breast tumours when these are observed at various states of development. The topics discussed include image feature extraction, information fusion using radiomics, multi-parametric computer-aided classification and diagnosis using information fusion of tensorial datasets, as well as Clifford algebra based classification approaches and convolutional neural network deep learning methodologies. The discussion also extends to semi-supervised deep learning and self-supervised strategies as well as generative adversarial networks and algorithms using generated confrontational learning approaches. In order to address the problem of weakly labelled tumour images, generative adversarial deep learning strategies are considered for the classification of different tumour types. The proposed data fusion approaches provide a novel Artificial Intelligence (AI) based framework for more robust image registration that can potentially advance the early identification of heterogeneous tumour types, even when the associated imaged organs are registered as separate entities embedded in more complex geometric spaces. Finally, the general structure of a high-dimensional medical imaging analysis platform that is based on multi-task detection and learning is proposed as a way forward. The proposed algorithm makes use of novel loss functions that form the building blocks for a generated confrontation learning methodology that can be used for tensorial DCE-MRI. Since some of the approaches discussed are also based on time-lapse imaging, conclusions can be drawn about the rate of disease proliferation.
The proposed framework can potentially reduce the costs associated with the interpretation of medical images by providing automated, faster and more consistent diagnosis.


2014 ◽  
Vol 05 (03) ◽  
pp. 660-669 ◽  
Author(s):  
S. Schulz ◽  
C. Martínez-Costa

Summary
Objective: Semantic interoperability of the Electronic Health Record (EHR) requires a rigorous and precise modelling of clinical information. Our objective is to facilitate the representation of clinical facts based on formal principles.
Methods: We here explore the potential of ontology content patterns, which are grounded on a formal and semantically rich ontology model and can be specialised and composed.
Results: We describe and apply two content patterns for the representation of data on tobacco use, rendered according to two heterogeneous models, represented in openEHR and in HL7 CDA. Finally, we provide some query exemplars that demonstrate a data interoperability use case.
Conclusion: The use of ontology content patterns facilitates the semantic representation of clinical information and therefore improves its semantic interoperability. Open issues remain, such as the scalability and performance of the approach if a logic-based language is used. Implementation decisions might determine the final degree of semantic interoperability, influenced by the state of the art of the semantic technologies.
Citation: Martínez-Costa C, Schulz S. Ontology content patterns as bridge for the semantic representation of clinical information. Appl Clin Inf 2014; 5: 660–669. http://dx.doi.org/10.4338/ACI-2014-04-RA-0031


2020 ◽  
Vol 2020 ◽  
pp. 1-7 ◽  
Author(s):  
Aboubakar Nasser Samatin Njikam ◽  
Huan Zhao

This paper introduces an extremely lightweight (just over two hundred thousand parameters) and computationally efficient CNN architecture, named CharTeC-Net (Character-based Text Classification Network), for character-based text classification problems. This new architecture is composed of four building blocks for feature extraction. Each of these building blocks, except the last one, uses 1 × 1 pointwise convolutional layers to add more nonlinearity to the network and to increase the dimensions within each building block. In addition, shortcut connections are used in each building block to facilitate the flow of gradients over the network and, more importantly, to ensure that the original signal present in the training data is shared across each building block. Experiments on eight standard large-scale text classification and sentiment analysis datasets demonstrate that CharTeC-Net outperforms baseline methods and achieves accuracy competitive with state-of-the-art methods, although it has only between 181,427 and 225,323 parameters and weighs less than 1 megabyte.
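The two ingredients named above, 1 × 1 pointwise convolutions and shortcut connections, can be sketched with plain NumPy. This is a generic illustration of the technique, not the paper's exact layer configuration: over a (length, channels) sequence, a 1 × 1 convolution is a per-position linear projection that mixes channels only, and the shortcut adds the block's input back to its output.

```python
import numpy as np

def pointwise_conv(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """1x1 convolution over a (length, channels) sequence:
    a per-position linear projection mixing channels only."""
    return x @ w  # (L, C_in) @ (C_in, C_out) -> (L, C_out)

def building_block(x: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """A residual building block in the spirit of CharTeC-Net:
    two pointwise layers with a ReLU nonlinearity, plus a shortcut
    that carries the original signal through unchanged."""
    h = np.maximum(pointwise_conv(x, w1), 0.0)  # expand channels + nonlinearity
    h = pointwise_conv(h, w2)                   # project back to input width
    return x + h                                # shortcut connection
```

Because the shortcut is a plain addition, gradients (and the original character-level signal) can flow through every block even when the convolutional path contributes little, which is the property the abstract highlights.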

