An IOTA-Based Service Discovery Framework for Fog Computing

Electronics ◽  
2021 ◽  
Vol 10 (7) ◽  
pp. 844
Author(s):  
Tsung-Yi Tang ◽  
Li-Yuan Hou ◽  
Tyng-Yeu Liang

With the rise of fog computing, users are no longer restricted to accessing resources located in central and distant clouds and can request services from neighboring fog nodes distributed over networks. This can effectively reduce the network latency of service responses and the load on data centers. Furthermore, it can prevent the Internet’s bandwidth from being used up by massive data flows from end users to clouds. However, fog-computing resources are distributed over multiple levels of networks and are managed by different owners, so the problem of service discovery becomes quite complicated. Resolving it requires a decentralized service discovery method. Accordingly, this research proposes a service discovery framework based on the distributed ledger technology of IOTA. The proposed framework enables clients to search for service nodes directly through any node in the IOTA Mainnet, achieving public access and high availability while avoiding the network attacks that commonly target the distributed hash tables widely used for service discovery. Moreover, clients can obtain more comprehensive information by visiting known nodes and can select the fog node able to provide the requested service with the shortest latency. Our experimental results show that the proposed framework is cost-effective for distributed service discovery thanks to the advantages of IOTA, and that its automatic node selection enables clients to obtain higher service quality.
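The abstract describes clients fetching published service records through any IOTA node and then choosing the fog node with the lowest latency. The following is a minimal sketch of that selection step only, assuming the service records have already been retrieved from the Tangle and expose a host/port pair; the record format and the TCP-probe latency measurement are illustrative assumptions, not the authors' implementation.

```python
import socket
import time
from typing import Iterable, Optional


def probe_latency(host: str, port: int, timeout: float = 2.0) -> Optional[float]:
    """Measure TCP connect latency to a candidate fog node in seconds (None if unreachable)."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None


def pick_fastest_node(records: Iterable[dict]) -> Optional[dict]:
    """Pick the service record whose fog node answers with the lowest latency.

    Each record is assumed to look like {"service": "...", "host": "...", "port": 8080},
    e.g. as decoded from messages previously fetched via an IOTA node.
    """
    best, best_latency = None, float("inf")
    for record in records:
        latency = probe_latency(record["host"], record["port"])
        if latency is not None and latency < best_latency:
            best, best_latency = record, latency
    return best


if __name__ == "__main__":
    # Hypothetical records; in the framework these would be discovered through the Tangle.
    candidates = [
        {"service": "image-resize", "host": "fog-a.example.net", "port": 8080},
        {"service": "image-resize", "host": "fog-b.example.net", "port": 8080},
    ]
    print(pick_fastest_node(candidates))
```

In this sketch the client probes every candidate, which matches the abstract's "automatic node selection" at a high level; the real framework may rank nodes using richer information carried in the ledger records.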

2021 ◽  
Vol 13 (4) ◽  
pp. 572
Author(s):  
Gintautas Mozgeris ◽  
Ivan Balenović

The prerequisite for sustainable management of natural resources is the availability of timely, cost-effective, and comprehensive information on the status and development trends of the management object [...]


Circulation ◽  
2008 ◽  
Vol 118 (suppl_18) ◽  
Author(s):  
Hiroshi Kaneko ◽  
Tetsuo Hatanaka ◽  
Aki Nagase ◽  
Hiroko Noguchi ◽  
Tetsuya Sakamoto ◽  
...  

Previous reports have described several facility categories as important locations for public access defibrillation (PAD) programs. However, in the decision-making process as to which specific facilities deserve PAD programs, each facility needs to estimate its expected number of shockable cardiac arrests (CA). Such an estimate depends not only on the expected number of visitors/workers but also on their profile, which may vary among types of facilities. [Methods] Nagoya City has a population of 2.2 million with ~2,000 CA annually. Locations of CA were abstracted from data collected by the Nagoya FD from 2003 through 2007. Types of public facilities were tallied and grouped into 36 categories consistent with the Ordinance for Enforcement of the Fire Service Act, and the number of CA within each category was determined. Data on the number of visitors/workers in each category were collected from governmental databases on metropolitan statistics. The incidence rate was calculated as the number of all CA and of CA with VF per 1 billion person-years. [Results] In 8 location categories, the CA database and the database of visitor/worker numbers coincided in their categorization rules for locations. The incidence rates are shown in the table. [Discussion] The incidence rate of shockable CA varies substantially among the location categories, presumably reflecting variations in the age distribution, average length of stay, and intensity of physical activity of the people who gather there. Schools (teachers) and factories have the highest incidence of shockable CA. However, our data may indicate that a school with a small number of workers (teachers) may be a low priority for a PAD program. Conversely, a station with a low incidence rate of 0.05 may be a preferred target for a PAD program if a large number of travelers are expected to pass through. Thus, our study provides fundamental data for identifying important and cost-effective target locations for a PAD program. (Table: Number of CA per 1 billion person-years.)
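The estimation step the abstract calls for, combining a category-level incidence rate (per billion person-years) with a facility's visitor volume and average length of stay, can be written out directly. The sketch below is a generic illustration under that assumption; the facility numbers and the assumed rate are hypothetical, not values from the study.

```python
def incidence_per_billion_person_years(event_count: float, person_years: float) -> float:
    """Incidence rate expressed per 1 billion person-years, as in the study."""
    return event_count / person_years * 1e9


def facility_person_years_per_year(daily_visitors: float, mean_stay_hours: float) -> float:
    """Person-years accumulated at one facility over a calendar year.

    person-days per year = daily visitors x (stay in days) x 365;
    dividing by 365 days/year gives person-years, so the 365s cancel.
    """
    return daily_visitors * (mean_stay_hours / 24.0)


def expected_annual_shockable_ca(rate_per_billion: float, person_years: float) -> float:
    """Expected shockable CA per year for a facility with the given exposure."""
    return rate_per_billion / 1e9 * person_years


if __name__ == "__main__":
    # Hypothetical facility: 50,000 visitors/day staying ~0.5 h each,
    # with an assumed rate of 100 shockable CA per billion person-years.
    exposure = facility_person_years_per_year(daily_visitors=50_000, mean_stay_hours=0.5)
    print(expected_annual_shockable_ca(rate_per_billion=100.0, person_years=exposure))
```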


Biotechnology ◽  
2019 ◽  
pp. 1910-1943
Author(s):  
Veena Gayathri Krishnaswamy

Environmental pollution has been an irrefutable fact of life for many centuries, but it has become a real problem since the start of the industrial revolution. Discharge of these toxic compounds without treatment results in serious health risks to humans and the marine ecosystem. Several physical, chemical, and biological methods have been employed for the remediation of phenolics. Bioremediation is identified as the most efficient, cost-effective, and eco-friendly way to treat phenolic compounds. This article is a comprehensive review of the sources of phenolic compounds, their hazards, and their fate once released into the environment; the treatment technologies employed; and the bioremediation of these compounds using both non-extremophilic and extremophilic organisms. The review throws light on the enzymes involved in the remediation of phenolic compounds and highlights the importance of extremophilic organisms and the biological treatment of phenol-containing industrial wastewaters. Such comprehensive information on the research performed on the remediation of phenolic compounds provides ways to explore the role played by microorganisms in remediating phenolic compounds, which could be applied to the remediation of phenol-contaminated sites even under extreme conditions.


2014 ◽  
Vol 2014 ◽  
pp. 1-27 ◽  
Author(s):  
Suleman Khan ◽  
Muhammad Shiraz ◽  
Ainuddin Wahid Abdul Wahab ◽  
Abdullah Gani ◽  
Qi Han ◽  
...  

Network forensics enables investigation and identification of network attacks through retrieved digital content. The proliferation of smartphones and cost-effective universal data access through the cloud have made Mobile Cloud Computing (MCC) an attractive target for network attacks. However, the constraints on carrying out forensics in MCC stem from the autonomous cloud hosting companies and their policies restricting access to the digital content on back-end cloud platforms. This implies that existing Network Forensic Frameworks (NFFs) have limited impact in the MCC paradigm. To this end, we qualitatively analyze the adaptability of existing NFFs when applied to MCC. Explicitly, the fundamental mechanisms of NFFs are highlighted and then analyzed using the most relevant parameters. A classification is proposed to help understand the anatomy of existing NFFs. Subsequently, a comparison is given that explores the functional similarities and deviations among NFFs. The paper concludes by discussing research challenges for progressive network forensics in MCC.


2019 ◽  
Vol 93 ◽  
pp. 101893
Author(s):  
Konstantinos Skiadopoulos ◽  
Konstantinos Oikonomou ◽  
Markos Avlonitis ◽  
Konstantinos Giannakis ◽  
Dimitrios Kogias ◽  
...  

Author(s):  
Lungisani Ndlovu ◽  
Okuthe P. Kogeda ◽  
Manoj Lall

Wireless mesh networks (WMNs) are cost-effective networks that support seamless connectivity, wide area network (WAN) coverage, and mobility features. However, the rapid increase in the number of users on these networks has brought an upsurge in competition for available resources and services. Consequently, factors such as link congestion, data collisions, and link interference are likely to occur during service discovery on these networks, further degrading their quality of service (QoS). Therefore, quick and timely discovery of services becomes an essential factor in optimizing service discovery performance on WMNs. In this paper, we present the design and implementation of an enhanced service discovery model that resolves the performance bottleneck incurred by service discovery on WMNs. The proposed model integrates the particle swarm optimization (PSO) and ant colony optimization (ACO) algorithms to improve QoS. We use the PSO algorithm to assign different priorities to services on the network, and the ACO algorithm to establish the most cost-effective path whenever each transmitter has to be searched to identify whether it possesses the requested service(s). Furthermore, we design and implement the link congestion reduction (LCR) algorithm to define the number of service receivers granted access to services simultaneously. We simulate, test, and evaluate the proposed model in Network Simulator 2 (NS2) against the ant colony-based multi-constraint QoS-aware service selection (QSS) and FLEXIble Mesh Service Discovery (FLEXI-MSD) models. The results show an average service discovery throughput of 80%, service availability of 96%, service discovery delay of 1.8 s, and success probability of service selection of 89%.
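The abstract's core path-establishment mechanism is ACO: ants probabilistically explore routes toward transmitters and reinforce low-cost ones. As a rough illustration of that mechanism only, here is a textbook ant-colony shortest-path sketch over a generic weighted graph; the graph, parameters, and reinforcement rule are generic assumptions and not the paper's PSO/LCR-integrated algorithm.

```python
import random


def aco_shortest_path(graph, src, dst, n_ants=20, n_iters=30,
                      evaporation=0.5, alpha=1.0, beta=2.0):
    """Generic ant colony optimization over a weighted graph {node: {neighbor: cost}}.

    Returns the lowest-cost path found from src to dst and its cost.
    """
    pheromone = {(u, v): 1.0 for u in graph for v in graph[u]}
    best_path, best_cost = None, float("inf")

    for _ in range(n_iters):
        completed = []
        for _ in range(n_ants):
            path, node, visited = [src], src, {src}
            while node != dst:
                choices = [v for v in graph[node] if v not in visited]
                if not choices:
                    break
                # Prefer edges with more pheromone (alpha) and lower cost (beta).
                weights = [(pheromone[(node, v)] ** alpha) * ((1.0 / graph[node][v]) ** beta)
                           for v in choices]
                node = random.choices(choices, weights=weights)[0]
                visited.add(node)
                path.append(node)
            if path[-1] == dst:
                cost = sum(graph[a][b] for a, b in zip(path, path[1:]))
                completed.append((path, cost))
                if cost < best_cost:
                    best_path, best_cost = path, cost
        # Evaporate pheromone, then reinforce edges on completed paths by path quality.
        for edge in pheromone:
            pheromone[edge] *= (1.0 - evaporation)
        for path, cost in completed:
            for a, b in zip(path, path[1:]):
                pheromone[(a, b)] += 1.0 / cost

    return best_path, best_cost


if __name__ == "__main__":
    # Hypothetical mesh: edge weights stand in for link cost (e.g. congestion, interference).
    mesh = {
        "client": {"r1": 2.0, "r2": 4.0},
        "r1": {"r3": 3.0, "transmitter": 6.0},
        "r2": {"r3": 1.0},
        "r3": {"transmitter": 1.0},
        "transmitter": {},
    }
    print(aco_shortest_path(mesh, "client", "transmitter"))
```

In the proposed model this path search would be combined with PSO-derived service priorities and the LCR limit on how many receivers are admitted at once; the sketch shows only the ACO routing idea.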

