Fog Integrated Secured and Distributed Environment for Healthcare Industry with Software Defined Networking

2021 ◽  
Vol 1 (1) ◽  
pp. 17-30
Author(s):  
Jamal Kh-madhloom

Fog computing is a segment of cloud computing in which a vast number of peripheral devices connect to the internet. The term "fog" indicates the edges of a cloud, where high performance can be achieved. Many of these devices generate voluminous raw data, for example from sensors, and rather than forwarding all of this data to cloud-based servers for processing, the idea behind fog computing is to do as much processing as possible on computing units co-located with the data-generating devices, so that processed rather than raw data is forwarded and bandwidth requirements are reduced. A major advantage of processing locally is that data is often consumed on the same machine that produced it; the latency between data production and data consumption is also reduced. The idea is not entirely new, since specially programmed hardware has long been used for signal processing. This work presents the integration of software-defined networking with a fog environment to obtain deep implementation patterns in the health-care industry with a higher degree of accuracy.
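As a rough illustration of the idea described above (not code from the paper), a fog node can reduce a batch of raw sensor samples to a compact summary and forward only that, cutting bandwidth compared with shipping every raw reading to the cloud. All names and values here are hypothetical.

```python
# Minimal sketch: a fog node aggregates raw sensor readings locally
# and forwards only a small summary record to the cloud tier.
from statistics import mean

def summarize_readings(raw_readings):
    """Reduce a batch of raw sensor samples to a small summary record."""
    return {
        "count": len(raw_readings),
        "min": min(raw_readings),
        "max": max(raw_readings),
        "mean": mean(raw_readings),
    }

# Hypothetical patient-temperature samples collected at the edge.
raw = [36.5, 36.6, 36.7, 38.9, 36.5]
summary = summarize_readings(raw)
# Forward `summary` (four fields) instead of every raw sample.
```

The same pattern scales: the larger the raw batch, the greater the bandwidth saving from forwarding only the summary.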

2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Dan Jia ◽  
Haitao Duan ◽  
Shengpeng Zhan ◽  
Yongliang Jin ◽  
Bingxue Cheng ◽  
...  

Abstract The long development period and cumbersome performance evaluation of lubricating materials seriously jeopardize the successful development and application of any database system in the tribological field. This major setback can be addressed effectively by high-throughput calculation approaches. However, these often involve a vast number of output files, computed on the basis of first-principles calculation, whose data format differs from that of their experimental counterparts. The input, storage, and management of first-principles calculation files and their individual test counterparts, fast query and display in the database, and the use of physical parameters predicted by the first-principles approach can together resolve these setbacks. An investigation is thus performed to establish a database website specifically for lubricating materials that accommodates both kinds of data: (i) calculated on the basis of first principles and (ii) obtained by practical experiment. It further explores, preliminarily, the likely relationship between the calculated physical parameters of a lubricating oil and its tribological and anti-oxidative performance as predicted by a lubricant machine-learning model. Success of the method helps guide the optimal design, preparation, and application of any new lubricating material so that high performance can be achieved.


Author(s):  
Yanish Pradhananga ◽  
Pothuraju Rajarajeswari

The evolution of the Internet of Things (IoT) has brought about several challenges for existing hardware, network, and application development. Some of these are handling real-time streaming and batch big data, real-time event handling, dynamic cluster resource allocation for computation, and wired and wireless networks of things. To combat these technicalities, many new technologies and strategies are being developed. Tiarrah Computing integrates the concepts of Cloud Computing, Fog Computing, and Edge Computing. Its main objectives are to decouple application deployment and achieve high performance, flexible application development, high availability, ease of development, and ease of maintenance. Tiarrah Computing focuses on using existing open-source technologies to overcome the challenges that evolve along with IoT. This paper gives an overview of the technologies, shows how to design such an application, and elaborates on how to overcome most of the existing challenges.


1972 ◽  
Vol 18 (9) ◽  
pp. 1013-1018
Author(s):  
M A Evenson ◽  
M A Olson

Abstract A high-speed, high-performance, continuous-flow analyzer is described that operates at two to three times the usual analysis rate without requiring corrections of the raw data and with no decrease in accuracy or precision. At faster speeds (180-300 samples/h), inductive sample interaction (%Ii), opposite in direction to carry-over, is quantitatively measured for the first time. A correction equation for %Ii was developed; when it is applied to raw data, the accuracy of the results is significantly improved. Operating characteristics of the high-speed analyzer are described, and the desirability of automatic computer corrections for the high-speed system is discussed.
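The abstract does not give the form of the %Ii correction equation, but the general shape of such corrections can be sketched. Assuming a simple linear interaction model (an assumption for illustration only, not the authors' equation), a fraction f of a neighbouring sample's signal leaks into the current measurement, and the correction inverts that model:

```python
# Hypothetical linear model of inductive sample interaction:
#   measured = (1 - f) * true + f * neighbour
# Solving for the true value gives the correction below.
def correct_inductive_interaction(measured, neighbour, f):
    """Invert the assumed linear interaction model for one sample."""
    return (measured - f * neighbour) / (1.0 - f)

# Example: 2% interaction, a true value of 100 next to a neighbour of 50.
f = 0.02
measured = (1 - f) * 100.0 + f * 50.0  # simulated biased raw reading
corrected = correct_inductive_interaction(measured, 50.0, f)
```

Applied sample-by-sample to raw data, a correction of this kind recovers the unbiased values without slowing the analysis rate.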


Author(s):  
Sachin Chavhan ◽  
Rahul Chavhan

In recent years, India has suffered from various natural disasters, which have a great effect on people's social lives and place a large burden on disaster-management authorities. Frequent disasters cause serious loss of property and life and produce increasingly complex damage. Disaster management involves the detailed process of disaster response. A high-performance intelligent disaster-management system can realize complete disaster avoidance and reduction, from satellite mission planning, data production, and data acquisition, through the application of remote sensing, to managing an integrated rapid service. The main objective of the proposed work is to overcome the limitations of disaster management through the novel design and development of an IoT-based platform for disaster-management applications.


2019 ◽  
Vol 17 (2) ◽  
pp. 207-214
Author(s):  
Raju Bhukya ◽  
Sumit Deshmuk

The indispensable knowledge of Deoxyribonucleic Acid (DNA) sequences and the sharply falling cost of DNA sequencing techniques have attracted numerous researchers to the field of genetics. These sequences are becoming available at an exponential rate, leading to bulging molecular-biology databases and making large disk arrays and compute clusters inevitable for analysis. In this paper, we propose referential DNA data compression using the Hadoop MapReduce framework to process a humongous amount of genetic data in a distributed environment on high-performance compute clusters. Our method achieves a better balance between compression ratio and the time required for DNA data compression than other referential DNA data compression methods.
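The core idea of referential compression can be sketched independently of the paper's MapReduce pipeline: a target sequence is stored only as its differences from a shared reference sequence, which is tiny when the two are similar. This single-node sketch is illustrative only; the paper distributes such work across a Hadoop cluster.

```python
# Sketch of referential compression: encode a target DNA sequence
# as (position, base) differences from a same-length reference.
def compress_referential(target, reference):
    """Return the positions where `target` differs from `reference`."""
    return [(i, t) for i, (t, r) in enumerate(zip(target, reference)) if t != r]

def decompress_referential(diffs, reference):
    """Rebuild the target by patching the reference with the differences."""
    seq = list(reference)
    for i, base in diffs:
        seq[i] = base
    return "".join(seq)

reference = "ACGTACGT"
target = "ACGAACGT"       # differs from the reference at one position
diffs = compress_referential(target, reference)
restored = decompress_referential(diffs, reference)
```

For closely related genomes the diff list is orders of magnitude smaller than the raw sequence, which is what makes the compression-ratio/time trade-off attractive.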


Author(s):  
Anju Shukla ◽  
Shishir Kumar ◽  
Harikesh Singh

Computational approaches play a significant role in various fields, such as medical applications, astronomy, and weather science, by performing complex calculations speedily. Today, personal computers are very powerful but underutilized: most computer resources are idle 75% of the time, and servers are often unproductive. This motivates distributed computing, in which the idea is to use geographically distributed resources to meet the demand for high-performance computing. The Internet facilitates users in accessing heterogeneous services and running applications over a distributed environment. Due to the openness and heterogeneous nature of distributed computing, the developer must deal with several issues, such as load balancing, interoperability, fault occurrence, resource selection, and task scheduling. Load balancing is the mechanism for distributing the load among resources optimally. The objective of this chapter is to discuss the need for load balancing and the issues that shape the research scope. Various load-balancing algorithms and scheduling methods used for performance optimization of web resources are analyzed. A systematic literature review, with solutions and limitations, is presented. The chapter provides a concise narrative of the problems encountered and dimensions for future extension.
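To make the load-balancing mechanism concrete, here is a sketch of one classic heuristic the chapter's surveyed algorithms build on: greedy least-loaded assignment, where each incoming task goes to the server with the smallest current load. This is a generic illustration, not a specific algorithm from the chapter.

```python
# Greedy least-loaded load balancing: a min-heap keeps the server
# with the lightest load on top; each task is assigned there.
import heapq

def balance(task_costs, n_servers):
    """Assign each task (by cost) to the currently least-loaded server."""
    heap = [(0.0, s) for s in range(n_servers)]  # (load, server_id)
    heapq.heapify(heap)
    assignment = {s: [] for s in range(n_servers)}
    for cost in task_costs:
        load, s = heapq.heappop(heap)     # least-loaded server
        assignment[s].append(cost)
        heapq.heappush(heap, (load + cost, s))
    return assignment

# Five tasks spread over two servers.
assignment = balance([5, 4, 3, 2, 1], 2)
```

Even this simple greedy rule keeps the final loads within a small factor of optimal, which is why it is a common baseline when comparing more elaborate schedulers.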


Author(s):  
Manoj Himmatrao Devare

Scientists, engineers, and researchers need high-performance computing (HPC) services for executing energy, engineering, environmental-science, weather, and life-science simulations. Virtual machine (VM) or Docker-enabled HPC Cloud services provide the advantages of consolidation and support for multiple users in a public cloud environment. Adding a hypervisor on top of bare-metal hardware brings challenges such as computational overhead due to virtualization, especially in an HPC environment. This chapter discusses the challenges, solutions, and opportunities arising from input-output and VMM overheads, interconnection overheads, VM migration problems, and scalability problems in the HPC Cloud. It portrays the HPC Cloud as a highly complex distributed environment comprising heterogeneous architectures: different processor architectures, interconnection techniques, and the problems of shared-memory, distributed-memory, and hybrid architectures in distributed computing, such as resilience, scalability, check-pointing, and fault tolerance.


