Enabling continuous connectivity services for Ambrosus blockchain application by incorporating 5G-multilevel machine learning orchestrations

2022 ◽  
pp. 1-16
Author(s):  
Nagaraj Varatharaj ◽  
Sumithira Thulasimani Ramalingam

The most revolutionary applications of future-generation wireless networks extend far beyond smartphones and highly configured mobile devices. 5G is one such advanced wireless and mobile technology, offering high speed, better reliability, and increased capacity. 5G provides broad coverage that accommodates any IoT device, connectivity, and intelligent edge algorithms, and is therefore in high demand across a wide range of commercial applications. Ambrosus is a commercial company that integrates blockchain security, IoT networks, and supply chain management for medical and food enterprises. This paper proposes a novel framework that integrates 5G technology, Machine Learning (ML) algorithms, and blockchain security. The main idea of this work is to incorporate 5G technology into ML architectures for the Ambrosus application. 5G provides continuous connectivity among the network users/nodes, while the ML architecture selects the right user, base station, and controller. The proposed framework comprises the incorporated 5G technology, a novel network orchestration, a Radio Access Network, a centralized distributor, and a radio unit layer; the radio unit layer integrates all the components of the framework. The ML algorithm evaluates the dynamic conditions of the base station, such as IoT nodes, Ambrosus users, channels, and routes, to enhance communication efficiency. The performance of the proposed framework is evaluated in terms of prediction by simulating the model in MATLAB. The performance comparison shows that the proposed unified architecture achieves 98.6% accuracy, higher than the 97.1% accuracy of the existing decision tree algorithm.
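The abstract does not specify the ML model's inputs, so the following is a hypothetical sketch of the base-station selection step: a decision tree (the baseline the paper compares against) classifies whether a station's dynamic conditions make it suitable for a node. The feature names (signal strength, channel load, route latency) and thresholds are illustrative assumptions, not the paper's actual feature set.

```python
# Hypothetical sketch: classify whether a base station is suitable for an
# Ambrosus node from dynamic condition features. Features and the synthetic
# labelling rule below are illustrative only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
signal = rng.uniform(-110, -50, n)   # received signal strength, dBm
load = rng.uniform(0, 1, n)          # channel utilisation fraction
latency = rng.uniform(1, 50, n)      # route latency, ms
X = np.column_stack([signal, load, latency])
# Synthetic ground truth: a station is "suitable" when the signal is strong
# and neither the channel nor the route is congested.
y = ((signal > -90) & (load < 0.7) & (latency < 30)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")
```

A depth-4 tree suffices here because the synthetic rule is an AND of three thresholds, which a tree expresses as a single root-to-leaf path.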

Author(s):  
Shler Farhad Khorshid ◽  
Adnan Mohsin Abdulazeez ◽  
Amira Bibo Sallow

Breast cancer is one of the most common diseases among women, accounting for many deaths each year. Even though cancer can be treated and cured in its early stages, many patients are diagnosed at a late stage. Data mining, the process of finding or extracting information from massive databases or datasets, is a field of computer science with great potential. It covers a wide range of areas, one of which is classification, and classification itself may be accomplished using a variety of methods or algorithms. With the aid of MATLAB, five classification algorithms were compared. This paper presents a performance comparison among the classifiers Support Vector Machine (SVM), Logistic Regression (LR), K-Nearest Neighbors (K-NN), Weighted K-Nearest Neighbors (Weighted K-NN), and Gaussian Naïve Bayes (Gaussian NB). The dataset was taken from the UCI Machine Learning Repository. The main objective of this study is to classify breast cancer in women by comparing machine learning algorithms on their accuracy. The results reveal that Weighted K-NN (96.7%) has the highest accuracy among all the classifiers.
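The same five-way comparison can be re-created in scikit-learn as a rough sketch (the paper used MATLAB, and its exact preprocessing and split are not stated, so scores will differ). The `load_breast_cancer` loader ships the UCI Wisconsin Diagnostic Breast Cancer dataset:

```python
# Illustrative re-creation of the five-classifier comparison on the UCI
# breast cancer (WDBC) dataset; not the paper's MATLAB pipeline.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

models = {
    "SVM": SVC(),
    "LR": LogisticRegression(max_iter=5000),
    "K-NN": KNeighborsClassifier(n_neighbors=5),
    "Weighted K-NN": KNeighborsClassifier(n_neighbors=5, weights="distance"),
    "Gaussian NB": GaussianNB(),
}
scores = {}
for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model)  # scale features first
    scores[name] = pipe.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name:>14}: {scores[name]:.3f}")
```

Standardizing the features matters most for the SVM and the two K-NN variants, which are distance-based and otherwise dominated by the large-valued features.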


2022 ◽  
Vol 355 ◽  
pp. 03052
Author(s):  
Xiaobei Yan ◽  
Maode Ma

Machine Type Communication (MTC) has been emerging for a wide range of applications and services in the Internet of Things (IoT). In some scenarios, a large group of MTC devices (MTCDs) may enter the communication coverage of a new target base station simultaneously. However, the current handover mechanism specified by the Third Generation Partnership Project (3GPP) incurs high signalling overhead over the access network and the core network in such scenarios. Moreover, other existing solutions have security problems, including failure of key forward secrecy (KFS) and lack of mutual authentication. In this paper, we propose an efficient authentication protocol for a group of MTCDs in all handover scenarios. In the proposal, the messages of two MTCDs are concatenated and sent by an authenticated group member to reduce the signalling cost. The security functionality of the proposed protocol has been analysed to show its ability to preserve user privacy and resist typical malicious attacks. The proposed scheme is expected to be applicable to all kinds of group mobility scenarios, such as a platoon of vehicles or a high-speed train. The performance evaluation demonstrates that the proposed protocol is efficient in terms of computational and signalling cost.
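The message-concatenation idea can be illustrated with a toy sketch: an authenticated group member bundles two MTCDs' handover requests into one packet and protects it with a single HMAC under a shared group key, so the base station verifies one tag instead of two separate exchanges. Key derivation, nonces, and identities are simplified away here; this is an assumption-laden illustration, not the actual protocol.

```python
# Toy illustration of concatenating two MTCD handover messages under one
# HMAC tag. The group-key model and packet layout are hypothetical.
import hmac, hashlib, os

GROUP_KEY = os.urandom(32)  # shared between the group and the target base station

def bundle(msg_a: bytes, msg_b: bytes) -> bytes:
    """Group member side: length-prefix both messages, append one HMAC tag."""
    body = (len(msg_a).to_bytes(2, "big") + msg_a
            + len(msg_b).to_bytes(2, "big") + msg_b)
    tag = hmac.new(GROUP_KEY, body, hashlib.sha256).digest()
    return body + tag

def verify(packet: bytes):
    """Base station side: check the tag, then split out the two messages."""
    body, tag = packet[:-32], packet[-32:]
    expected = hmac.new(GROUP_KEY, body, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return None  # authentication failure
    la = int.from_bytes(body[:2], "big")
    msg_a = body[2:2 + la]
    lb = int.from_bytes(body[2 + la:4 + la], "big")
    msg_b = body[4 + la:4 + la + lb]
    return msg_a, msg_b

packet = bundle(b"MTCD-1 handover req", b"MTCD-2 handover req")
print(verify(packet))
```

`hmac.compare_digest` is used for the tag check to avoid timing side channels; any bit flipped in the packet makes verification fail.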


Electronics ◽  
2020 ◽  
Vol 9 (3) ◽  
pp. 386 ◽  
Author(s):  
Raya Majid Alsharfa ◽  
Saleem Latteef Mohammed ◽  
Sadik Kamel Gharghan ◽  
Imran Khan ◽  
Bong Jun Choi

As more and more mobile multimedia services are produced, end users increasingly demand access to high-speed, low-latency mobile communication networks. Among the enabling technologies, device-to-device (D2D) communication does not require data to be relayed through the base station; instead, two adjacent mobile devices establish a direct local link under the control of the base station. This flexible communication method reduces the processing bottlenecks and blind spots of the base station and can be widely used in dense user communication scenarios such as transportation systems. Addressing the D2D users' high energy consumption and growing quality-of-service demands, this paper proposes a new scheme that effectively improves user fairness and satisfaction by grouping users into clusters. The main idea is to create an interference graph between the D2D users based on graph coloring theory and to construct color lists for the D2D users while the cellular users' requirements are guaranteed. Finally, D2D users who can share the same channel are grouped into the same cluster. Simulation results show that the proposed scheme outperforms existing schemes and effectively improves system performance.
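The clustering step can be sketched with a plain greedy coloring: build an interference graph over D2D pairs, color it so that adjacent (interfering) pairs get different colors, and let each color class share one channel. The five-node graph below is an invented example; the paper's actual coloring and admission rules for cellular users are more involved.

```python
# Sketch of the clustering idea: greedily colour an interference graph so
# that D2D pairs with the same colour (no edge between them) share a channel.
def greedy_coloring(adjacency: dict) -> dict:
    """Assign each node the smallest colour unused by its neighbours,
    visiting high-degree nodes first (a common greedy ordering)."""
    colors = {}
    for node in sorted(adjacency, key=lambda n: -len(adjacency[n])):
        taken = {colors[nb] for nb in adjacency[node] if nb in colors}
        colors[node] = next(c for c in range(len(adjacency)) if c not in taken)
    return colors

# Illustrative interference graph: an edge means two D2D pairs would
# interfere if they reused the same channel.
interference = {
    "D1": {"D2", "D3"},
    "D2": {"D1"},
    "D3": {"D1", "D4"},
    "D4": {"D3"},
    "D5": set(),  # interferes with no one
}
channels = greedy_coloring(interference)
clusters = {}
for pair, ch in channels.items():
    clusters.setdefault(ch, []).append(pair)
print(clusters)  # D2D pairs grouped per shared channel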


Author(s):  
E.D. Wolf

Most microelectronic devices and circuits operate faster, consume less power, execute more functions, and cost less per circuit function when the feature sizes internal to the devices and circuits are made smaller. This is part of the stimulus for the Very High-Speed Integrated Circuits (VHSIC) program. There is also a need for smaller, more sensitive sensors in a wide range of disciplines that includes electrochemistry, neurophysiology, and ultra-high-pressure solid state research. There is often fundamental new science (and sometimes new technology) to be revealed (and used) when a basic parameter such as size is extended to new dimensions, as is evident at the two extremes of smallness and largeness: high-energy particle physics and cosmology, respectively. However, there is also a very important intermediate domain of size, spanning from the diameter of a small cluster of atoms up to near one micrometer, which may have just as profound effects on society as "big" physics.


2018 ◽  
Author(s):  
Sherif Tawfik ◽  
Olexandr Isayev ◽  
Catherine Stampfl ◽  
Joseph Shapter ◽  
David Winkler ◽  
...  

Materials constructed from different van der Waals two-dimensional (2D) heterostructures offer a wide range of benefits, but these systems have been little studied because of their experimental and computational complexity, and because of the very large number of possible combinations of 2D building blocks. The simulation of the interface between two different 2D materials is computationally challenging due to the lattice mismatch problem, which sometimes necessitates the creation of very large simulation cells for performing density-functional theory (DFT) calculations. Here we use a combination of DFT, linear regression, and machine learning techniques to rapidly determine the interlayer distance between two different 2D materials stacked in a bilayer heterostructure, as well as the band gap of the bilayer. Our work provides an excellent proof of concept by quickly and accurately predicting a structural property (the interlayer distance) and an electronic property (the band gap) for a large number of hybrid 2D materials. This work paves the way for rapid computational screening of the vast parameter space of van der Waals heterostructures to identify new hybrid materials with useful and interesting properties.
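The regression idea can be sketched on synthetic data: predict a bilayer's interlayer distance from simple per-layer descriptors. The descriptors below (mean covalent radius, lattice mismatch) and the linear ground-truth relation are invented stand-ins; the paper's actual feature set and trained models are not reproduced here.

```python
# Hedged sketch: linear regression from hypothetical monolayer descriptors
# to a synthetic interlayer distance. Descriptors and coefficients are
# illustrative, not the paper's.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 500
# Hypothetical per-layer descriptors (angstrom).
radius_a, radius_b = rng.uniform(0.7, 1.4, (2, n))
lattice_a, lattice_b = rng.uniform(2.5, 4.5, (2, n))
X = np.column_stack([radius_a, radius_b, np.abs(lattice_a - lattice_b)])
# Synthetic target: distance grows with atom size, shrinks with mismatch.
d = 3.0 + 0.8 * (radius_a + radius_b) - 0.1 * X[:, 2] + rng.normal(0, 0.02, n)

X_tr, X_te, d_tr, d_te = train_test_split(X, d, random_state=1)
reg = LinearRegression().fit(X_tr, d_tr)
print(f"R^2 on held-out bilayers: {reg.score(X_te, d_te):.3f}")
```

The appeal of this workflow is cost: once trained on a modest set of DFT-relaxed structures, evaluating the regressor for a new material pair is essentially free compared to a large-cell DFT relaxation.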


2020 ◽  
Author(s):  
Sina Faizollahzadeh Ardabili ◽  
Amir Mosavi ◽  
Pedram Ghamisi ◽  
Filip Ferdinand ◽  
Annamaria R. Varkonyi-Koczy ◽  
...  

Several outbreak prediction models for COVID-19 are being used by officials around the world to make informed decisions and enforce relevant control measures. Among the standard models for COVID-19 global pandemic prediction, simple epidemiological and statistical models have received more attention from authorities, and they are popular in the media. Due to a high level of uncertainty and a lack of essential data, standard models have shown low accuracy for long-term prediction. Although the literature includes several attempts to address this issue, the generalization and robustness of existing models need to be improved. This paper presents a comparative analysis of machine learning and soft computing models to predict the COVID-19 outbreak as an alternative to SIR and SEIR models. Among a wide range of machine learning models investigated, two showed promising results: the multi-layered perceptron (MLP) and the adaptive network-based fuzzy inference system (ANFIS). Based on the results reported here, and due to the highly complex nature of the COVID-19 outbreak and the variation in its behavior from nation to nation, this study suggests machine learning as an effective tool to model the outbreak. This paper provides an initial benchmarking to demonstrate the potential of machine learning for future research. The paper further suggests that real novelty in outbreak prediction can be realized by integrating machine learning and SEIR models.
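The MLP approach can be illustrated minimally on a synthetic logistic outbreak curve: train a multi-layer perceptron to map a window of recent (normalized) case counts to the next day's count. The curve, window length, and network size are illustrative assumptions, not the study's data or configuration.

```python
# Minimal sketch of one-step-ahead outbreak prediction with an MLP on a
# synthetic logistic curve (values are fractions of the final epidemic size).
import numpy as np
from sklearn.neural_network import MLPRegressor

t = np.arange(120)
cases = 1.0 / (1.0 + np.exp(-0.1 * (t - 60)))  # synthetic cumulative curve

window = 7  # predict tomorrow from the last week of counts
X = np.array([cases[i:i + window] for i in range(len(cases) - window)])
y = cases[window:]

split = 90  # fit on early days, predict one step ahead on later days
mlp = MLPRegressor(hidden_layer_sizes=(16,), solver="lbfgs",
                   max_iter=5000, random_state=0).fit(X[:split], y[:split])
pred = mlp.predict(X[split:])
mae = np.abs(pred - y[split:]).mean()
print(f"one-step-ahead MAE on held-out days: {mae:.4f}")
```

One-step-ahead fitting is the easy case; the long-term forecasts the abstract discusses require rolling the model's own predictions forward, where errors compound, which is exactly why the paper argues for combining ML with SEIR-style structure.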


2021 ◽  
Vol 15 ◽  
Author(s):  
Alhassan Alkuhlani ◽  
Walaa Gad ◽  
Mohamed Roushdy ◽  
Abdel-Badeeh M. Salem

Background: Glycosylation is one of the most common post-translational modifications (PTMs) in organism cells. It plays important roles in several biological processes, including cell-cell interaction, protein folding, antigen recognition, and immune response. In addition, glycosylation is associated with many human diseases such as cancer, diabetes, and coronaviruses. The experimental techniques for identifying glycosylation sites are time-consuming, labor-intensive, and expensive. Therefore, computational intelligence techniques are becoming very important for glycosylation site prediction. Objective: This paper is a theoretical discussion of the technical aspects of applying biotechnological methods (e.g., artificial intelligence and machine learning) to digital bioinformatics research and intelligent biocomputing. Computational intelligence techniques have shown efficient results for predicting N-linked, O-linked, and C-linked glycosylation sites, and many studies over the last two decades have applied them to glycosylation site prediction. In this paper, we analyze and compare a wide range of intelligent techniques from these studies across multiple aspects. The current challenges and difficulties facing software developers and knowledge engineers in predicting glycosylation sites are also included. Method: The different studies are compared against many criteria, including databases, feature extraction and selection, machine learning classification methods, evaluation measures, and performance results. Results and conclusions: Many challenges and problems are presented. Consequently, more effort is needed to obtain more accurate prediction models for the three basic types of glycosylation sites.


2021 ◽  
Author(s):  
Eric J Snider ◽  
Lauren E Cornell ◽  
Brandon M Gross ◽  
David O Zamora ◽  
Emily N Boice

ABSTRACT Introduction Open-globe ocular injuries have increased in frequency in recent combat operations due to the increased use of explosive weaponry. Unfortunately, open-globe injuries have among the worst visual outcomes for the injured warfighter, often resulting in permanent loss of vision. To improve visual recovery, injuries need to be stabilized quickly following trauma in order to restore intraocular pressure and create a watertight seal. Here, we assess four off-the-shelf (OTS), commercially available tissue adhesives for their ability to seal military-relevant corneal perforation injuries (CPIs). Materials and Methods Adhesives were assessed using an anterior segment inflation platform and a previously developed high-speed benchtop corneal puncture model to create injuries in porcine eyes. After injury, adhesives were applied, and injury stabilization was assessed by measuring outflow rate, ocular compliance, and burst pressure, followed by histological analysis. Results Tegaderm dressings and Dermabond skin adhesive most successfully sealed injuries in preliminary testing. Across a range of injury sizes and shapes, Tegaderm performed well on smaller injuries, less than 2 mm in diameter, but inadequately sealed large or complex injuries. Dermabond created a watertight seal capable of maintaining ocular tissue at physiological intraocular pressure for almost all injury shapes and sizes. However, application of the adhesive was inconsistent. Histologically, after removal of the Dermabond skin adhesive, the corneal epithelium was removed, and oftentimes the epithelial surface penetrated into the wound and adhered to inner stromal tissue. Conclusions Dermabond can stabilize a wide range of CPIs; however, application is variable, which may adversely impact the corneal tissue. Without addressing these limitations, no OTS adhesive tested herein can be directly translated to CPIs. This highlights the need for the development of a biomaterial product that stabilizes these injuries without causing ocular damage upon removal, thus improving the poor vision prognosis for the injured warfighter.


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Sungmin O. ◽  
Rene Orth

While soil moisture information is essential for a wide range of hydrologic and climate applications, spatially continuous soil moisture data are only available from satellite observations or model simulations. Here we present SoMo.ml, a global, long-term dataset of soil moisture derived through machine learning trained with in-situ measurements. We train a Long Short-Term Memory (LSTM) model to extrapolate daily soil moisture dynamics in space and in time, based on in-situ data collected from more than 1,000 stations across the globe. SoMo.ml provides multi-layer soil moisture data (0–10 cm, 10–30 cm, and 30–50 cm) at 0.25° spatial and daily temporal resolution over the period 2000–2019. The performance of the resulting dataset is evaluated through cross-validation and inter-comparison with existing soil moisture datasets. SoMo.ml performs especially well in terms of temporal dynamics, making it particularly useful for applications requiring time-varying soil moisture, such as anomaly detection and memory analyses. Given its distinct derivation, SoMo.ml complements the existing suite of modelled and satellite-based datasets, supporting large-scale hydrological, meteorological, and ecological analyses.
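The recurrence that lets an LSTM carry soil-moisture "memory" through time can be shown with a single cell written out in NumPy. The weights below are random and the input dimensions (e.g. daily meteorological forcing) are assumptions; the real SoMo.ml model is trained on in-situ station data.

```python
# Illustrative single LSTM cell in NumPy, showing how the cell state c
# carries memory across daily time steps. Random weights; not a trained model.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step; the four gates are slices of one stacked affine map."""
    z = W @ x + U @ h + b              # shape (4 * hidden,)
    H = h.shape[0]
    i = sigmoid(z[:H])                 # input gate
    f = sigmoid(z[H:2 * H])            # forget gate: how much memory to keep
    o = sigmoid(z[2 * H:3 * H])        # output gate
    g = np.tanh(z[3 * H:])             # candidate cell update
    c = f * c + i * g                  # new cell state (the "memory")
    h = o * np.tanh(c)                 # new hidden state
    return h, c

rng = np.random.default_rng(0)
n_in, n_hidden, n_steps = 3, 8, 30     # e.g. daily precip/temperature/radiation
W = rng.normal(0, 0.3, (4 * n_hidden, n_in))
U = rng.normal(0, 0.3, (4 * n_hidden, n_hidden))
b = np.zeros(4 * n_hidden)

h = np.zeros(n_hidden)
c = np.zeros(n_hidden)
for x in rng.normal(0, 1, (n_steps, n_in)):  # a month of daily forcing
    h, c = lstm_step(x, h, c, W, U, b)
print("final hidden state:", np.round(h, 3))
```

The forget gate is what makes the architecture a good fit for soil moisture: it lets the cell state decay slowly, mimicking the persistence of wet or dry anomalies over weeks.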

