Fostering a highly efficient waveguide diplexer to improve the 5G network frequency spectrum for IoT devices in the cloud

2018 ◽  
Vol 7 (2.12) ◽  
pp. 228
Author(s):  
Gautami Alagarsamy ◽  
J. Shanthini

In recent times, the Internet of Things and 5G network connectivity complement each other. 5G networks will surpass 4G LTE, 4G, 3G, and the other networks in use today. 5G has become a boon for end users and corporations because its architecture can handle the heavy data traffic of connected smart devices and the large number of smartphone users worldwide. 5G devices should offer long battery life, be available at low cost, and consume little energy. Some smartphones can charge wirelessly through inductive coupling between a base and the phone, and advanced charging options for IoT devices integrate wireless power transfer technology. To curb network congestion in denser areas and to operate at higher data rates in Ka-band applications, this research paper analyzes the impact of a WR-28 waveguide diplexer on improving the network frequency spectrum around 30 GHz for IoT devices in cloud services. The design, simulation, and modeling are implemented using Antenna Magus version 5.5 software.
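To see why the WR-28 waveguide suits Ka-band operation around 30 GHz, a minimal sketch of the standard dominant-mode cutoff calculation follows; the WR-28 broad-wall width (7.112 mm) is the standard dimension, and the calculation is the textbook TE10 formula rather than anything specific to this paper's diplexer design.

```python
# Sketch: TE10 cutoff frequency of a WR-28 rectangular waveguide,
# illustrating why it suits Ka-band operation around 30 GHz.
C = 299_792_458.0        # speed of light in vacuum, m/s
A_WR28 = 7.112e-3        # WR-28 broad-wall width, m (standard dimension)

def te10_cutoff_hz(width_m: float) -> float:
    """Cutoff frequency of the dominant TE10 mode: f_c = c / (2a)."""
    return C / (2.0 * width_m)

fc = te10_cutoff_hz(A_WR28)
print(f"TE10 cutoff: {fc / 1e9:.2f} GHz")   # ~21.08 GHz
# The recommended single-mode band for WR-28 is roughly 26.5-40 GHz,
# so a diplexer passband near 30 GHz sits comfortably above cutoff.
```

A 30 GHz design frequency is about 1.4 times the cutoff, which keeps the guide in its well-behaved single-mode region.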

2021 ◽  
Vol 10 (2) ◽  
pp. 34
Author(s):  
Alessio Botta ◽  
Jonathan Cacace ◽  
Riccardo De Vivo ◽  
Bruno Siciliano ◽  
Giorgio Ventre

With the advances in networking technologies, robots can use the almost unlimited resources of large data centers, overcoming the severe limitations imposed by onboard resources: this is the vision of Cloud Robotics. In this context, we present DewROS, a framework based on the Robot Operating System (ROS) which embodies the three-layer Dew-Robotics architecture, where computation and storage can be distributed among the robot, the network devices close to it, and the Cloud. After presenting the design and implementation of DewROS, we show its application in a real use case called SHERPA, which foresees a mixed ground and aerial robotic platform for search and rescue in an alpine environment. We used DewROS to analyze the video acquired by the drones in the Cloud and quickly spot signs of human beings in danger. We performed a wide experimental evaluation using different network technologies and Cloud services from Google and Amazon, evaluating the impact of several variables on the performance of the system. Our results show, for example, that the video length has a minimal impact on the response time compared with the video size. In addition, we show that the response time depends on the Round Trip Time (RTT) of the network connection when the video is already loaded on the Cloud provider side. Finally, we present a model of the annotation time that considers the RTT of the connection used to reach the Cloud, discussing results and insights into how to improve current Cloud Robotics applications.
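The shape of such an annotation-time model can be sketched as below. This is a hedged illustration, not the paper's actual model: the function name, the decomposition into upload, per-exchange RTT, and processing terms, and all parameter values are assumptions chosen to mirror the stated observation that RTT dominates once the video is already on the Cloud side.

```python
# Hedged sketch (not the paper's exact model): annotation time as a sum of
# upload time, a fixed number of request/response exchanges (each costing
# one RTT), and Cloud-side processing. All names and values are illustrative.

def annotation_time_s(video_bytes: float, uplink_bps: float,
                      rtt_s: float, exchanges: int,
                      processing_s: float,
                      already_uploaded: bool = False) -> float:
    """Estimate end-to-end annotation time for a Cloud Robotics request."""
    upload = 0.0 if already_uploaded else (8.0 * video_bytes) / uplink_bps
    return upload + exchanges * rtt_s + processing_s

# When the video is already on the Cloud side, only RTT and processing
# remain, so the response time tracks the RTT of the connection.
t = annotation_time_s(50e6, 10e6, 0.040, 3, 1.5, already_uploaded=True)
print(f"{t:.3f} s")  # 3 * 0.040 + 1.5 = 1.620 s
```

With the upload included, the transfer term (proportional to video size, not length) dominates, consistent with the reported result that size matters more than duration.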


2018 ◽  
Vol 10 (3) ◽  
pp. 61-83 ◽  
Author(s):  
Deepali Chaudhary ◽  
Kriti Bhushan ◽  
B.B. Gupta

This article describes how cloud computing has emerged as a strong competitor to traditional IT platforms by offering low-cost, “pay-as-you-go” computing and on-demand provisioning of services. Governments as well as organizations have migrated all or most of their IT infrastructure to the cloud. With the emergence of IoT devices and big data, the amount of data forwarded to the cloud has increased enormously, and the paradigm of cloud computing is no longer sufficient. Furthermore, with the growing demand for IoT solutions in organizations, it has become essential to process data quickly and on-site. Hence, fog computing was introduced to overcome these drawbacks of cloud computing by bringing intelligence to the edge of the network using smart devices. One major security issue related to the cloud is the DDoS attack. This article discusses in detail the DDoS attack, cloud computing, fog computing, how DDoS attacks affect the cloud environment, and how fog computing can be used in a cloud environment to solve a variety of problems.




2018 ◽  
Vol 1 (1) ◽  
pp. 35-49 ◽  
Author(s):  
Akashdeep Bhardwaj

This article describes how the rise of fog computing to improve cloud computing performance, together with the acceptance of smart devices, is slowly but surely changing our future and shaping the computing environment around us. IoT, integrated with advances in low-cost computing, storage, and power, along with high-speed networks and big data, supports distributed computing. However, much like cloud computing, which is under constant security attack, distributed computing faces similar challenges and security threats. These can be mitigated to a great extent using fog computing, which extends the limits of Cloud services to the last-mile edge near the nodes and networks, thereby increasing performance and security levels. Fog computing also extends reach and is a viable solution for distributed computing. This article presents a review of the academic literature on fog computing, discusses the challenges in the fog environment, and proposes a new taxonomy.


Electronics ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 600
Author(s):  
Gianluca Cornetta ◽  
Abdellah Touhafi

Low-cost, high-performance embedded devices are proliferating, and a plethora of new platforms are available on the market. Some of them either have embedded GPUs or can be connected to external Machine Learning (ML) hardware accelerators. These enhanced hardware features enable new applications in which AI-powered smart objects can effectively and pervasively run distributed ML algorithms in real time, shifting part of the raw data analysis and processing from the cloud or edge to the device itself. In this context, Artificial Intelligence (AI) can be considered the backbone of the next generation of Internet of Things (IoT) devices, which will no longer be mere data collectors and forwarders but truly “smart” devices with built-in data-wrangling and data-analysis features that leverage lightweight machine learning algorithms to make autonomous decisions in the field. This work thoroughly reviews and analyses the most popular ML algorithms, with particular emphasis on those most suitable for resource-constrained embedded devices. In addition, several machine learning algorithms have been built on top of a custom multi-dimensional array library. The designed framework has been evaluated and its performance stress-tested on Raspberry Pi 3 and 4 embedded computers.
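The flavor of ML that fits such resource-constrained devices can be illustrated with a minimal sketch: a nearest-centroid classifier over plain Python lists. This is not the authors' framework or their array library; it is an assumed stand-in showing how a lightweight algorithm can run with no external dependencies.

```python
# Illustrative sketch of lightweight on-device ML: a nearest-centroid
# classifier using only built-in types, standing in for algorithms built
# on a custom multi-dimensional array library.

def centroid(rows):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def fit(samples, labels):
    """Compute one centroid per class label."""
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)
    return {y: centroid(rows) for y, rows in by_class.items()}

def predict(model, x):
    """Assign x to the class whose centroid is nearest (squared Euclidean)."""
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(x, c))
    return min(model, key=lambda y: dist(model[y]))

model = fit([[0.1, 0.2], [0.2, 0.1], [0.9, 1.0], [1.0, 0.9]], [0, 0, 1, 1])
print(predict(model, [0.15, 0.15]))  # 0
print(predict(model, [0.95, 0.95]))  # 1
```

Training reduces to a single pass over the data and prediction to a handful of multiply-adds per class, the kind of budget a Raspberry Pi-class device handles comfortably.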


Author(s):  
Sumaiya Mushroor ◽  
Shammin Haque ◽  
Riyadh A. Amir

Background: Overuse of smart devices brings both comfort and problems, physical and mental. The aim of this study was to assess the impact of smartphones and mobile devices on human health and life. Methods: This descriptive cross-sectional study was conducted over three months in Dhaka city among the general population aged 18 to 70 years. Four hundred and forty respondents were selected by a non-probability convenience sampling technique. Data were collected through face-to-face interviews with a semi-structured, pre-tested questionnaire. Results: Among the 440 respondents, the majority (76.6%) were below 25 years of age and 72.0% were students. A large proportion (90.5%) used smartphones for communication, and 53.4% used them for less than 5 hours daily. The majority (65.7%) had other electronic devices; the most common, 197 (68.1%), were laptop users, of whom 118 (40.8%) used them for studying. More than half, 322 (73.2%), used earphones; 91 (20.7%) had ear problems and 223 (50.7%) lacked concentration. Many, 299 (68.0%), had a good relationship with family members; 208 (47.3%) stated that increased use of mobile devices hampered family life, and 88 (42.3%) thought it reduced quality family time. The majority of users, 253 (57.5%), experienced physical discomfort after prolonged use, and 95 (37.7%) suffered from headaches. The association between respondents' age and time spent on smart devices was statistically significant (p<0.05), as was the association between ear problems and earphone usage (p<0.05). Conclusions: Excessive use of smartphones should be avoided and social awareness increased through health programmes. The potential risks of cell phones and smart devices can be avoided by limiting their use.
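The significance tests behind such associations are typically Pearson chi-square tests of independence on a contingency table. The sketch below shows the calculation; the 80/11 split of ear problems between earphone users and non-users is an illustrative assumption (only the margins, 322 earphone users and 91 ear problems out of 440, come from the abstract).

```python
# Hedged sketch (illustrative cell counts, not the study's raw data): a
# chi-square test of independence on a 2x2 table, the kind of analysis
# behind the reported earphone-use vs ear-problem association (p < 0.05).

def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row = [a + b, c + d]                 # row totals
    col = [a + c, b + d]                 # column totals
    stat = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            exp = row[i] * col[j] / n    # expected count under independence
            stat += (obs - exp) ** 2 / exp
    return stat

# Rows: earphone users / non-users; columns: ear problem yes / no.
stat = chi_square_2x2([(80, 242), (11, 107)])
print(f"chi2 = {stat:.2f}")  # exceeds 3.841, the df=1 critical value at p=0.05
```

A statistic above 3.841 (the chi-square critical value for one degree of freedom) corresponds to p < 0.05, matching the significance the study reports.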


2020 ◽  
Vol 12 (3) ◽  
pp. 48
Author(s):  
Dimitrios Myridakis ◽  
Georgios Spathoulas ◽  
Athanasios Kakarountas ◽  
Dimitrios Schinianakis

The continuous growth in the number of Internet of Things (IoT) devices and their inclusion in public and private infrastructures has introduced new applications to the market and to our day-to-day life. At the same time, these devices create a potential threat to personal and public security, easily understood given either the sensitivity of the collected data or our dependence on the devices' operation. Considering that most IoT devices are low-cost and are used for various tasks, such as monitoring people or controlling indoor environmental conditions, the security factor should be enhanced. This paper presents the exploitation of a side-channel attack technique to protect low-cost smart devices in an intuitive way. The work aims to extend the dataset provided to an Intrusion Detection System (IDS) in order to achieve higher accuracy in anomaly detection. Thus, along with typical data provided to an IDS, such as network traffic, transmitted packets, and CPU usage, it is proposed to include information regarding the device's physical state and behaviour, such as its power consumption, supply current, or emitted heat. Awareness of the typical operation of a smart device in terms of operation and functionality may prove valuable, since any deviation may warn of an operational or functional anomaly. In this paper, the deviation (either increase or decrease) of the supply current is exploited for this purpose. This work aims to improve the intrusion detection process for IoT and proposes new inputs of interest for consideration, with a collateral interest for study. In parallel, malfunction of the device is also detected, extending this work's application to issues of reliability and maintainability. The results show 100% attack detection, and this is the first time that a low-cost security solution suitable for every type of target device has been presented.
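The core idea of flagging supply-current deviations in either direction can be sketched as a simple baseline-plus-threshold detector. This is an assumed minimal illustration, not the paper's detector: the 3-sigma rule, the current readings, and the function names are all hypothetical.

```python
# Minimal sketch of the idea: flag an anomaly when the device's supply
# current deviates from its learned baseline by more than k standard
# deviations. Thresholds and readings are illustrative, not from the paper.
from statistics import mean, stdev

def learn_baseline(samples_ma):
    """Learn mean/stdev of supply current (mA) during known-good operation."""
    return mean(samples_ma), stdev(samples_ma)

def is_anomalous(reading_ma, baseline, k=3.0):
    """True if the reading deviates (increase OR decrease) beyond k sigma."""
    mu, sigma = baseline
    return abs(reading_ma - mu) > k * sigma

normal = [120.1, 119.8, 120.4, 120.0, 119.9, 120.2, 120.1, 119.7]
base = learn_baseline(normal)
print(is_anomalous(120.3, base))   # False: within normal variation
print(is_anomalous(135.0, base))   # True: e.g., an extra malicious workload
print(is_anomalous(95.0, base))    # True: e.g., a disabled peripheral
```

Because the test is two-sided, the same signal covers both intrusions that raise consumption and malfunctions that lower it, matching the paper's dual security/reliability use.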


Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3966 ◽  
Author(s):  
Safdar Marwat ◽  
Yasir Mehmood ◽  
Ahmad Khan ◽  
Salman Ahmed ◽  
Abdul Hafeez ◽  
...  

The ever-growing Internet of Things (IoT) data traffic is one of the primary research focuses of future mobile networks. 3rd Generation Partnership Project (3GPP) standards like Long Term Evolution-Advanced (LTE-A) have been designed for broadband services, whereas IoT devices are mainly based on narrowband applications, so standards like LTE-A might not provide efficient spectrum utilization when serving IoT applications. The aggregation of IoT data at an intermediate node before transmission can address the issue of spectral efficiency. The objective of this work is to use the low-cost 3GPP fixed, inband, layer-3 Relay Node (RN) to integrate IoT traffic into the 5G network by multiplexing data packets at the RN into large multiplexed packets before transmission to the Base Station (BS). With this method, frequency resource blocks can be shared among several devices. An analytical model for this scheme, developed as an r-stage Coxian process, determines the radio resource utilization and the system gain achieved. The model is validated by comparing its results with simulation results.
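The aggregation step can be sketched as a relay that buffers small uplink packets and emits one large multiplexed packet when a size or time threshold is reached. This is a hedged illustration of the general idea only: the class name, the length-prefix framing, and the thresholds are assumptions, not taken from the paper or from 3GPP specifications.

```python
# Hedged sketch of relay-side aggregation: buffer small IoT packets, then
# forward one large multiplexed packet once a size or time threshold is hit.
# Framing and thresholds are illustrative, not from 3GPP specs.

class RelayMultiplexer:
    def __init__(self, max_bytes=1200, max_wait_ms=50):
        self.max_bytes = max_bytes
        self.max_wait_ms = max_wait_ms
        self.buffer = []
        self.buffered_bytes = 0
        self.first_arrival_ms = None

    def on_packet(self, payload: bytes, now_ms: int):
        """Buffer a small uplink packet; return a multiplexed packet if due."""
        if self.first_arrival_ms is None:
            self.first_arrival_ms = now_ms
        self.buffer.append(payload)
        self.buffered_bytes += len(payload)
        if (self.buffered_bytes >= self.max_bytes
                or now_ms - self.first_arrival_ms >= self.max_wait_ms):
            return self.flush()
        return None

    def flush(self):
        """Concatenate buffered payloads with 2-byte length prefixes."""
        out = b"".join(len(p).to_bytes(2, "big") + p for p in self.buffer)
        self.buffer, self.buffered_bytes, self.first_arrival_ms = [], 0, None
        return out

mux = RelayMultiplexer(max_bytes=100, max_wait_ms=50)
first = mux.on_packet(b"x" * 40, now_ms=0)   # None: still buffering
big = mux.on_packet(b"y" * 70, now_ms=10)    # 110 bytes >= 100: flushed
print(len(big))  # 114 = 2 + 40 + 2 + 70
```

Sending one 114-byte packet instead of two small ones lets the devices share the per-transmission resource-block overhead, which is the source of the spectral-efficiency gain the paper models.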


2021 ◽  
Vol 17 (3) ◽  
pp. 1-25
Author(s):  
Guangrong Zhao ◽  
Bowen Du ◽  
Yiran Shen ◽  
Zhenyu Lao ◽  
Lizhen Cui ◽  
...  

In this article, we propose LeaD, a new vibration-based communication protocol to Learn the unique patterns of vibration to Decode the short messages transmitted to smart IoT devices. Unlike existing vibration-based communication protocols that decode short messages symbol-wise, either in binary or multi-ary, the message recipient in LeaD receives vibration signals corresponding to bit groups. Each group consists of multiple symbols sent in a burst, and the receiver decodes the group of symbols as a whole via a machine learning-based approach. The insight behind LeaD is that different combinations of symbols (1s or 0s) in a group produce unique and reproducible patterns of vibration; decoding in vibration-based communication can therefore be modeled as a pattern classification problem. We design and implement a number of different machine learning models as the core engine of LeaD's decoding algorithm to learn and recognize the vibration patterns. Through intensive evaluations on a large amount of collected data, the Convolutional Neural Network (CNN)-based model achieves the highest decoding accuracy (i.e., the lowest error rate), up to 97% at a relatively high bit rate of 40 bits/s, while competing vibration-based communication protocols achieve only 10 bits/s and 20 bits/s with similar decoding accuracy. Furthermore, we evaluate its performance under different challenging practical settings, and the results show that LeaD with the CNN engine is robust to poses, distances (within a valid range), and types of devices; therefore, a CNN model can be trained beforehand and applied widely across different IoT devices under different circumstances. Finally, we implement LeaD on both an off-the-shelf smartphone and a smartwatch to measure its detailed resource consumption on smart devices. The computation time and energy consumption of its different components show that LeaD is lightweight and can run in situ on low-cost smart IoT devices, e.g., smartwatches, without accumulated delay, introducing only marginal system overhead.

