LoRaWAN Security

Author(s):  
Olivier Seller

The LoRaWAN security design adheres to state-of-the-art principles: use of standard, well-vetted algorithms and end-to-end security. The fundamental properties supported in LoRaWAN security are mutual end-point authentication, data origin authentication, integrity and replay protection, and confidentiality. The use of symmetric cryptography and prior secret key sharing between a device and a server enables an extremely power-efficient and network-efficient activation procedure.
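As an illustration of how a pre-shared symmetric key can provide data origin authentication, integrity, and replay protection, the sketch below computes a LoRaWAN-1.0-style uplink message integrity code (MIC) with AES-CMAC over a B0 block and the frame. It is a simplified illustration, not the reference implementation; the key, DevAddr, and frame bytes are made up for the example.

    # Illustrative sketch (not the reference implementation): a LoRaWAN-style
    # uplink MIC computed with AES-CMAC and the shared network session key.
    import struct
    from cryptography.hazmat.primitives.cmac import CMAC
    from cryptography.hazmat.primitives.ciphers import algorithms

    def uplink_mic(nwk_skey: bytes, dev_addr: int, fcnt: int, frame: bytes) -> bytes:
        # The B0 block binds direction, device address, frame counter, and length,
        # which is what provides data origin authentication and replay protection.
        b0 = (b"\x49" + b"\x00" * 4 + b"\x00"            # uplink direction = 0
              + struct.pack("<I", dev_addr)
              + struct.pack("<I", fcnt)
              + b"\x00" + bytes([len(frame)]))
        c = CMAC(algorithms.AES(nwk_skey))
        c.update(b0 + frame)
        return c.finalize()[:4]                          # MIC = first 4 bytes of the CMAC

    # Example: device and network server derive the same MIC from the pre-shared
    # key, so any mismatch reveals tampering or a replayed frame counter.
    mic = uplink_mic(bytes(16), dev_addr=0x26011F00, fcnt=1, frame=b"\x40payload")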

2021, Vol. 11 (15), pp. 6975
Author(s):  
Tao Zhang ◽  
Lun He ◽  
Xudong Li ◽  
Guoqing Feng

Lipreading aims to recognize the sentences being spoken by a talking face. In recent years, lipreading methods have achieved high accuracy on large datasets and made breakthrough progress. However, lipreading is still far from solved: existing methods tend to have high error rates on in-the-wild data and suffer from vanishing gradients and slow convergence during training. To overcome these problems, we propose an efficient end-to-end sentence-level lipreading model that uses an encoder built from a 3D convolutional network, ResNet50, and a Temporal Convolutional Network (TCN), with a CTC objective function as the decoder. More importantly, the proposed architecture incorporates the TCN as a feature learner to decode features. It partly eliminates the vanishing-gradient and performance limitations of RNNs (LSTM, GRU), which yields a notable performance improvement as well as faster convergence. Experiments show that training and convergence are 50% faster than the state-of-the-art method, and accuracy improves by 2.4% on the GRID dataset.
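A minimal PyTorch sketch of the described pipeline (3D convolutional front-end, ResNet50 per-frame features, TCN temporal decoder, CTC objective) is given below. Layer sizes, the vocabulary, and the TCN depth are illustrative assumptions, not the authors' exact configuration.

    # Hedged sketch of a 3D-conv + ResNet50 + TCN encoder trained with CTC.
    import torch
    import torch.nn as nn
    from torchvision.models import resnet50

    class LipreadingNet(nn.Module):
        def __init__(self, vocab_size=28):               # 26 letters + space + CTC blank (assumed)
            super().__init__()
            # 3D convolution captures short-range spatiotemporal motion of the lips.
            self.front3d = nn.Sequential(
                nn.Conv3d(1, 64, kernel_size=(5, 7, 7), stride=(1, 2, 2), padding=(2, 3, 3)),
                nn.BatchNorm3d(64), nn.ReLU(),
                nn.MaxPool3d((1, 3, 3), stride=(1, 2, 2), padding=(0, 1, 1)),
            )
            trunk = resnet50(weights=None)
            trunk.conv1 = nn.Conv2d(64, 64, kernel_size=7, stride=2, padding=3, bias=False)
            trunk.fc = nn.Identity()                      # 2048-d feature per frame
            self.resnet = trunk
            # Temporal Convolutional Network replaces the usual RNN decoder.
            self.tcn = nn.Sequential(
                nn.Conv1d(2048, 512, 3, padding=1, dilation=1), nn.ReLU(),
                nn.Conv1d(512, 512, 3, padding=2, dilation=2), nn.ReLU(),
            )
            self.classifier = nn.Linear(512, vocab_size)

        def forward(self, video):                         # video: (B, 1, T, H, W)
            x = self.front3d(video)                       # (B, 64, T, H', W')
            b, c, t, h, w = x.shape
            x = x.transpose(1, 2).reshape(b * t, c, h, w)
            x = self.resnet(x).reshape(b, t, -1)          # (B, T, 2048)
            x = self.tcn(x.transpose(1, 2)).transpose(1, 2)
            return self.classifier(x).log_softmax(-1)     # CTC expects log-probabilities

    # CTC aligns frame-level predictions with the transcript without frame labels.
    model = LipreadingNet()
    logp = model(torch.randn(2, 1, 16, 64, 128))          # (B, T, vocab)
    targets = torch.randint(1, 28, (2, 10))
    loss = nn.CTCLoss(blank=0)(logp.transpose(0, 1), targets,
                               input_lengths=torch.full((2,), logp.size(1)),
                               target_lengths=torch.full((2,), 10))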


Author(s):  
Yuheng Hu ◽  
Yili Hong

Residents often rely on newspapers and television to gather hyperlocal news for community awareness and engagement. More recently, social media have emerged as an increasingly important source of hyperlocal news. Thus far, the literature on using social media to create desirable societal benefits, such as civic awareness and engagement, is still in its infancy. One key challenge in this research stream is to timely and accurately distill information from noisy social media data streams to community members. In this work, we develop SHEDR (social media–based hyperlocal event detection and recommendation), an end-to-end neural event detection and recommendation framework with a particular use case for Twitter to facilitate residents' information seeking about hyperlocal events. The key model innovation in SHEDR lies in the design of the hyperlocal event detector and the event recommender. First, we harness the power of two popular deep neural network models, the convolutional neural network (CNN) and long short-term memory (LSTM), in a novel joint CNN-LSTM model to characterize spatiotemporal dependencies for capturing unusualness in a region of interest, which is classified as a hyperlocal event. Next, we develop a neural pairwise ranking algorithm for recommending detected hyperlocal events to residents based on their interests. To alleviate the sparsity issue and improve personalization, our algorithm incorporates several types of contextual information covering topic, social, and geographical proximities. We perform comprehensive evaluations based on two large-scale data sets comprising geotagged tweets covering Seattle and Chicago. We demonstrate the effectiveness of our framework in comparison with several state-of-the-art approaches. We show that our hyperlocal event detection and recommendation models consistently and significantly outperform other approaches in terms of precision, recall, and F1 scores. Summary of Contribution: In this paper, we focus on a novel and important, yet largely underexplored, application of computing: how to improve civic engagement in local neighborhoods via local news sharing and consumption based on social media feeds. To address this question, we propose two new computational and data-driven methods: (1) a deep learning–based hyperlocal event detection algorithm that scans spatially and temporally to detect hyperlocal events from geotagged Twitter feeds; and (2) a personalized deep learning–based hyperlocal event recommender system that systematically integrates several contextual cues, such as topical, geographical, and social proximity, to recommend the detected hyperlocal events to potential users. We conduct a series of experiments to examine our proposed models. The outcomes demonstrate that our algorithms are significantly better than the state-of-the-art models and can provide users with more relevant information about the local neighborhoods that they live in, which in turn may boost their community engagement.
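To make the joint CNN-LSTM idea concrete, the sketch below shows one plausible shape for such a detector: a CNN reads a spatial grid of tweet activity for each time step and an LSTM models the temporal dependencies across steps before classifying whether the window contains an unusual (hyperlocal) event. The channel layout and grid size are assumptions for illustration, not the paper's exact SHEDR configuration.

    # Hedged sketch of a joint CNN-LSTM hyperlocal event detector.
    import torch
    import torch.nn as nn

    class JointCNNLSTM(nn.Module):
        def __init__(self, in_channels=8, hidden=128):
            super().__init__()
            # CNN captures spatial patterns within each time step's activity grid.
            self.cnn = nn.Sequential(
                nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),    # -> 64-d feature per step
            )
            # LSTM captures temporal dependencies across the time steps.
            self.lstm = nn.LSTM(64, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)              # event vs. no event

        def forward(self, grids):                         # grids: (B, T, C, H, W)
            b, t = grids.shape[:2]
            feats = self.cnn(grids.flatten(0, 1)).view(b, t, -1)
            out, _ = self.lstm(feats)
            return torch.sigmoid(self.head(out[:, -1]))   # P(event) for the window

    # Example: 12 time steps of 8-channel 16x16 grids (e.g., tweet counts and
    # topic intensities) for a neighbourhood of interest.
    detector = JointCNNLSTM()
    p_event = detector(torch.randn(4, 12, 8, 16, 16))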


2022, Vol. 18 (1), pp. 1-41
Author(s):  
Pamela Bezerra ◽  
Po-Yu Chen ◽  
Julie A. McCann ◽  
Weiren Yu

As sensor-based networks become more prevalent, scaling to unmanageable sizes or being deployed in difficult-to-reach areas, real-time failure localisation is becoming essential for continued operation. Network tomography, a system- and application-independent approach, has been successful in localising complex failures (i.e., those observable by end-to-end global analysis) in traditional networks. Applying network tomography to wireless sensor networks (WSNs), however, is challenging. First, WSN topology changes due to environmental interactions (e.g., interference). Additionally, the selection of devices for running network monitoring processes (monitors) is an NP-hard problem. Monitors observe end-to-end in-network properties to identify failures, and their placement determines the number of identifiable failures. Since monitoring consumes additional in-node resources, it is essential to minimise the number of monitors while maintaining network tomography's effectiveness. Unfortunately, state-of-the-art solutions solve this optimisation problem using time-consuming greedy heuristics. In this article, we propose two solutions for efficiently applying network tomography in WSNs: a graph compression scheme that enables faster monitor placement by reducing the number of edges in the network, and an adaptive monitor placement algorithm that recovers the monitor placement after topology changes. Experiments show that our solution is at least 1,000× faster than state-of-the-art approaches and efficiently copes with topology variations in large-scale WSNs.
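One common way to shrink a topology before monitor placement, used here purely as an illustration (the article's exact compression scheme may differ), is to contract chains of degree-2 relay nodes into single weighted edges, which reduces the edge set the placement heuristic has to search over.

    # Hedged sketch of a chain-contraction graph compression pass.
    import networkx as nx

    def compress_chains(g: nx.Graph) -> nx.Graph:
        h = g.copy()
        changed = True
        while changed:
            changed = False
            for v in list(h.nodes):
                if h.degree(v) == 2:
                    u, w = list(h.neighbors(v))
                    if u != w and not h.has_edge(u, w):
                        # Replace u-v-w with a single edge whose weight is the
                        # summed hop count of the contracted path.
                        weight = h[u][v].get("weight", 1) + h[v][w].get("weight", 1)
                        h.remove_node(v)
                        h.add_edge(u, w, weight=weight)
                        changed = True
        return h

    # Example: a 6-node chain with one branch compresses from 6 edges to 3.
    g = nx.path_graph(6)
    g.add_edge(2, 6)
    print(compress_chains(g).number_of_edges())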


Author(s):  
Nicolas Bougie ◽  
Ryutaro Ichise

Deep reinforcement learning (DRL) methods traditionally struggle with tasks where environment rewards are sparse or delayed, so exploration remains one of the key challenges of DRL. Instead of relying solely on extrinsic rewards, many state-of-the-art methods use intrinsic curiosity as an exploration signal. While these hold the promise of better local exploration, discovering global exploration strategies is beyond the reach of current methods. We propose a novel end-to-end intrinsic reward formulation that introduces high-level exploration in reinforcement learning. Our curiosity signal is driven by a fast reward that handles local exploration and a slow reward that incentivizes long-horizon exploration strategies. We formulate curiosity as the error in an agent's ability to reconstruct observations given their contexts. Experimental results show that this high-level exploration enables our agents to outperform prior work in several Atari games.
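A rough sketch of the fast/slow idea, under stated assumptions (network sizes, update schedule, and mixing coefficient are invented here and are not the paper's formulation): two context-reconstruction models provide the curiosity signal, with the fast one updated every step so its error tracks local novelty, and the slow one updated rarely so its error stays high for regions not revisited over long horizons.

    # Hedged sketch of a combined fast/slow intrinsic reward from reconstruction error.
    import torch
    import torch.nn as nn

    class ContextReconstructor(nn.Module):
        def __init__(self, obs_dim=64, ctx_dim=64):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(ctx_dim, 128), nn.ReLU(),
                                     nn.Linear(128, obs_dim))

        def forward(self, context):
            return self.net(context)

    class FastSlowCuriosity:
        def __init__(self, obs_dim=64, ctx_dim=64, slow_every=100, beta=0.5):
            self.fast = ContextReconstructor(obs_dim, ctx_dim)
            self.slow = ContextReconstructor(obs_dim, ctx_dim)
            self.opt_fast = torch.optim.Adam(self.fast.parameters(), lr=1e-3)
            self.opt_slow = torch.optim.Adam(self.slow.parameters(), lr=1e-4)
            self.slow_every, self.beta, self.step = slow_every, beta, 0

        def intrinsic_reward(self, obs, context):
            self.step += 1
            err_fast = ((self.fast(context) - obs) ** 2).mean()
            err_slow = ((self.slow(context) - obs) ** 2).mean()
            # Fast model: updated every step, so its error reflects local novelty.
            self.opt_fast.zero_grad()
            err_fast.backward()
            self.opt_fast.step()
            # Slow model: updated rarely, so its error rewards long-horizon exploration.
            if self.step % self.slow_every == 0:
                self.opt_slow.zero_grad()
                err_slow.backward()
                self.opt_slow.step()
            return (self.beta * err_fast + (1 - self.beta) * err_slow).item()

    # Example: add the intrinsic reward to the extrinsic reward during training.
    curiosity = FastSlowCuriosity()
    r_int = curiosity.intrinsic_reward(torch.randn(64), torch.randn(64))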


2020, Vol. 34 (07), pp. 10778-10785
Author(s):  
Linpu Fang ◽  
Hang Xu ◽  
Zhili Liu ◽  
Sarah Parisot ◽  
Zhenguo Li

Object detectors trained on fully annotated data currently yield state-of-the-art performance but require expensive manual annotations. On the other hand, weakly supervised detectors have much lower performance and cannot be used reliably in a realistic setting. In this paper, we study the hybrid-supervised object detection problem, aiming to train a high-quality detector with only a limited amount of fully annotated data while fully exploiting cheap data with image-level labels. State-of-the-art methods typically adopt an iterative approach, alternating between generating pseudo-labels and updating a detector. This paradigm requires careful manual hyper-parameter tuning to mine good pseudo-labels at each round and is quite time-consuming. To address these issues, we present EHSOD, an end-to-end hybrid-supervised object detection system that can be trained in one shot on both fully and weakly annotated data. Specifically, based on a two-stage detector, we propose two modules to fully utilize the information from both kinds of labels: 1) a CAM-RPN module that finds foreground proposals guided by a class activation heat-map; 2) a hybrid-supervised cascade module that further refines the bounding-box position and classification with the help of an auxiliary head compatible with image-level data. Extensive experiments demonstrate the effectiveness of the proposed method: it achieves comparable results on multiple object detection benchmarks with only 30% fully annotated data, e.g., 37.5% mAP on COCO. We will release the code and the trained models.
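The sketch below illustrates the general idea behind CAM-guided proposal scoring (not the EHSOD implementation): a class activation map derived from an image-level classifier highlights likely foreground regions, and region proposals can be scored by how much activation they cover. Tensor shapes and the 80-class head are assumptions for the example.

    # Hedged sketch: scoring region proposals with a class activation map (CAM).
    import torch
    import torch.nn.functional as F

    def class_activation_map(features, fc_weight, class_id):
        # features: (C, H, W) backbone output; fc_weight: (num_classes, C) from a
        # global-average-pooling classifier trained with image-level labels only.
        cam = torch.einsum("c,chw->hw", fc_weight[class_id], features)
        cam = F.relu(cam)
        return cam / (cam.max() + 1e-6)                  # normalise to [0, 1]

    def score_proposals(cam, boxes):
        # boxes: (N, 4) as (x1, y1, x2, y2) in CAM coordinates. Proposals covering
        # high-activation regions are more likely to be foreground.
        scores = []
        for x1, y1, x2, y2 in boxes.round().long():
            region = cam[y1:y2 + 1, x1:x2 + 1]
            scores.append(region.mean() if region.numel() else cam.new_zeros(()))
        return torch.stack(scores)

    # Example with random tensors standing in for real backbone features.
    feats = torch.randn(256, 32, 32)
    w_fc = torch.randn(80, 256)                           # e.g., 80 COCO classes
    cam = class_activation_map(feats, w_fc, class_id=3)
    print(score_proposals(cam, torch.tensor([[0., 0., 15., 15.], [10., 10., 31., 31.]])))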


2020, Vol. 2020, pp. 1-29
Author(s):  
Rehan Tariq ◽  
Zeshan Iqbal ◽  
Farhan Aadil

Technological advances in vehicular ad hoc networks (VANETs) are improving smart transportation along with its many other applications. Routing in VANETs is more difficult than in mobile ad hoc networks (MANETs): topological constraints such as high mobility, node density, and frequent path failures make VANET routing especially challenging. For complex routing problems where static and dynamic routing do not work well, AI-based clustering techniques have been introduced. Evolutionary algorithm-based clustering techniques are used to solve such routing problems; moth flame optimization is one of them. In this work, an intelligent moth flame optimization-based clustering (IMOC) scheme for a drone-assisted vehicular network is proposed. The technique provides maximum coverage of the vehicular nodes with the minimum number of cluster heads (CHs) required for routing. Delivering optimal routes by providing end-to-end connectivity with minimum overhead is the core issue addressed in this article. Node density, grid size, and transmission range are the performance metrics used for comparative analysis. These parameters were varied during simulations for each algorithm, and the results were recorded. A comparison was made with state-of-the-art clustering algorithms for routing, such as Ant Colony Optimization (ACO), Comprehensive Learning Particle Swarm Optimization (CLPSO), and Gray Wolf Optimization (GWO); IMOC consistently outperformed these techniques in every scenario. A framework is also proposed, with the support of a commercial Unmanned Aerial Vehicle (UAV), to improve routing by minimizing path-creation overhead in VANETs. UAV support for clustering improved end-to-end connectivity by keeping the routing cost constant for intercluster communication within the same grid.
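As a rough illustration of how moth flame optimization can be applied to cluster-head selection (a simplified toy, not the IMOC formulation; the fitness function, grid, and parameters are assumptions): each moth encodes a candidate CH set, the fitness penalizes uncovered vehicles and rewards a small number of CHs, and moths spiral toward the best solutions (flames) found so far.

    # Hedged sketch: moth flame optimisation for cluster-head (CH) selection.
    import numpy as np

    rng = np.random.default_rng(0)
    positions = rng.uniform(0, 1000, size=(50, 2))       # 50 vehicles in a 1 km x 1 km grid
    tx_range = 200.0                                     # assumed transmission range (m)

    def fitness(mask):
        heads = np.flatnonzero(mask)
        if heads.size == 0:
            return 1e6
        d = np.linalg.norm(positions[:, None] - positions[heads][None], axis=2)
        uncovered = (d.min(axis=1) > tx_range).sum()
        return heads.size + 100 * uncovered              # few CHs, full coverage

    def mfo_cluster_heads(n_moths=30, iters=200, b=1.0):
        moths = rng.uniform(-1, 1, size=(n_moths, len(positions)))
        for it in range(iters):
            scores = np.array([fitness(m > 0) for m in moths])
            flames = moths[np.argsort(scores)]           # best candidate solutions
            n_flames = max(1, round(n_moths - it * (n_moths - 1) / iters))
            t = rng.uniform(-1, 1, size=moths.shape)
            for i in range(n_moths):
                f = flames[min(i, n_flames - 1)]         # flame assigned to this moth
                d = np.abs(f - moths[i])
                # Logarithmic spiral flight of the moth around its flame.
                moths[i] = d * np.exp(b * t[i]) * np.cos(2 * np.pi * t[i]) + f
        best = min(moths, key=lambda m: fitness(m > 0))
        return np.flatnonzero(best > 0)                  # indices of selected CHs

    print("cluster heads:", mfo_cluster_heads())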


2020, Vol. 34 (01), pp. 303-311
Author(s):  
Sicheng Zhao ◽  
Yunsheng Ma ◽  
Yang Gu ◽  
Jufeng Yang ◽  
Tengfei Xing ◽  
...  

Emotion recognition in user-generated videos plays an important role in human-centered computing. Existing methods mainly employ a traditional two-stage shallow pipeline, i.e., extracting visual and/or audio features and then training classifiers. In this paper, we propose to recognize video emotions in an end-to-end manner based on convolutional neural networks (CNNs). Specifically, we develop a deep Visual-Audio Attention Network (VAANet), a novel architecture that integrates spatial, channel-wise, and temporal attention into a visual 3D CNN and temporal attention into an audio 2D CNN. Further, we design a special classification loss, i.e., a polarity-consistent cross-entropy loss, based on the polarity-emotion hierarchy constraint, to guide the attention generation. Extensive experiments on the challenging VideoEmotion-8 and Ekman-6 datasets demonstrate that the proposed VAANet outperforms state-of-the-art approaches for video emotion recognition. Our source code is released at: https://github.com/maysonma/VAANet.
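One plausible way to encode the polarity-emotion hierarchy in a loss (an assumed form for illustration, not the paper's exact definition) is to add to the standard cross-entropy a term that penalizes probability mass placed on emotions of the wrong polarity. The emotion-to-polarity map and the weighting below are assumptions.

    # Hedged sketch of a polarity-consistent cross-entropy loss.
    import torch
    import torch.nn.functional as F

    # Illustrative emotion-to-polarity map for 8 classes: 1 = positive, 0 = negative.
    POLARITY = torch.tensor([1, 1, 1, 0, 0, 0, 0, 0])

    def polarity_consistent_ce(logits, target, lam=0.5):
        ce = F.cross_entropy(logits, target)
        probs = logits.softmax(dim=1)
        # Probability mass assigned to emotions sharing the target's polarity;
        # penalise placing mass on the opposite polarity group.
        same_polarity = (POLARITY[None, :] == POLARITY[target][:, None]).float()
        polarity_nll = -torch.log((probs * same_polarity).sum(dim=1) + 1e-8).mean()
        return ce + lam * polarity_nll

    loss = polarity_consistent_ce(torch.randn(4, 8), torch.tensor([0, 3, 5, 1]))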

