Implementation Of Docker Container On Local Network By Applying Reverse Proxy

2021 ◽  
Vol 3 (1) ◽  
pp. 34-39
Author(s):  
Henni Endah Wahanani ◽  
Mohammad Idhom ◽  
Kiki Yuniar Kristiawan

Virtualization is a software-based abstraction of computing resources. Virtualization technology has changed the direction of the computer industry by reducing capital and operating costs, and its availability also improves service availability and data protection mechanisms. In this work, Docker is configured to create multiple containers on a local network, each container running one image. Three interconnected containers are created in each compose file for the WordPress configuration, and two such compose files are deployed. A reverse proxy is then configured in front of each compose deployment so that each is served under a different domain address. This approach lightens the load on the host and reduces the required storage, making hosting more effective and efficient. Containers also provide a security advantage, since each service runs under full management control in a separate, isolated container.
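
The abstract does not reproduce the configuration itself, so the following is a minimal sketch of the architecture it describes, written with the Docker SDK for Python (an assumed tool; the authors may have written compose files directly). Traefik is assumed as the reverse proxy, and the wp1.local/wp2.local domains are placeholders: two isolated WordPress stacks are created, each on its own network, and the proxy routes requests to them by Host header, which is what lets two sites share one machine and one public port while their containers stay isolated.

```python
# Sketch (not the authors' published configuration): two isolated WordPress
# stacks behind a Traefik reverse proxy, via the Docker SDK for Python.
import docker

client = docker.from_env()

# Shared network on which the reverse proxy reaches the web containers.
proxy_net = client.networks.create("proxy-net", driver="bridge")
client.containers.run(
    "traefik:v2.10", detach=True, name="reverse-proxy",
    command=["--providers.docker=true",
             "--providers.docker.exposedbydefault=false",
             "--entrypoints.web.address=:80"],
    ports={"80/tcp": 80},
    volumes={"/var/run/docker.sock": {"bind": "/var/run/docker.sock",
                                      "mode": "ro"}},
    network="proxy-net",
)

def wordpress_stack(name: str, domain: str) -> None:
    """One isolated WordPress stack (app + database) on its own network."""
    client.networks.create(f"{name}-net", driver="bridge")
    client.containers.run(
        "mariadb:10", detach=True, name=f"{name}-db", network=f"{name}-net",
        environment={"MARIADB_ROOT_PASSWORD": "secret",
                     "MARIADB_DATABASE": "wordpress"},
    )
    app = client.containers.run(
        "wordpress:latest", detach=True, name=f"{name}-app",
        network=f"{name}-net",
        environment={"WORDPRESS_DB_HOST": f"{name}-db",
                     "WORDPRESS_DB_USER": "root",
                     "WORDPRESS_DB_PASSWORD": "secret",
                     "WORDPRESS_DB_NAME": "wordpress"},
        labels={"traefik.enable": "true",
                "traefik.docker.network": "proxy-net",
                f"traefik.http.routers.{name}.rule": f"Host(`{domain}`)",
                f"traefik.http.routers.{name}.entrypoints": "web"},
    )
    proxy_net.connect(app)  # only the app is reachable through the proxy

wordpress_stack("wp1", "wp1.local")  # placeholder domains
wordpress_stack("wp2", "wp2.local")
```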

2017 ◽  
pp. 144-150
Author(s):  
Peter W. Rein ◽  
M. Getaz ◽  
A. Raghunandan ◽  
N. du Pleissis ◽  
H. Saleh ◽  
...  

A new design for syrup and juice clarifiers is presented. The design takes advantage of the considerably improved performance of clarifiers incorporating lamella plates, and the reasons for the improvement are outlined. Computational fluid dynamics (CFD) work done to simulate the performance is summarised. This design enables the residence time to be dramatically reduced, and the simplified design leads to cheaper and better clarifiers. Practical experience with factory scale units is described, confirming the good flow characteristics. The results of preliminary test work on a factory syrup clarifier are presented, which is also shown to operate efficiently as a phosphatation clarifier. In addition, the performance of a full-scale juice clarifier has been evaluated and compared with the performance of a Rapidorr clarifier. This work confirms the considerable advantages that this type of design provides, in realising substantial reductions in residence time, capital costs and operating costs.


Author(s):  
Thamer Al-rousan

Along with the introduction of HTML5, new user-side storage technologies have emerged, particularly Web SQL Database, Web Storage, and the Indexed Database API. The common goal of these storage technologies is to overcome the limitations of legacy user-side storage mechanisms. All of these technologies raise privacy and security concerns, the main threat being user tracking. In this context, this study investigates the usage of these technologies to find out which of them is primarily used by user trackers, and to calculate their frequency in third-party tracking code. The results expose that Web Storage is the most commonly used of the three storage technologies. Motivated by these findings, this study examines the degree of protection that popular web browsers supply to prevent privacy violations. The results reveal that the protection mechanisms provided by web browsers are much the same, and that on many occasions privacy violations still occur.
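
The study counts how often third-party tracking code touches each storage API. Below is a minimal sketch of that style of measurement (not the authors' actual crawler; the URL is a placeholder): fetch a page, download its third-party scripts, and count references to each of the three APIs.

```python
# Illustrative measurement sketch: count HTML5 storage-API references in
# third-party scripts loaded by a page.
import re
from urllib.parse import urljoin, urlparse

import requests

STORAGE_APIS = {
    "Web Storage":      r"\b(localStorage|sessionStorage)\b",
    "Indexed Database": r"\bindexedDB\b",
    "Web SQL Database": r"\bopenDatabase\b",
}

def count_storage_usage(page_url: str) -> dict[str, int]:
    html = requests.get(page_url, timeout=10).text
    first_party = urlparse(page_url).netloc
    counts = {name: 0 for name in STORAGE_APIS}
    # Scripts served from another origin are treated as third party.
    for src in re.findall(r'<script[^>]+src=["\']([^"\']+)', html):
        url = urljoin(page_url, src)
        if urlparse(url).netloc == first_party:
            continue
        try:
            body = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue
        for name, pattern in STORAGE_APIS.items():
            counts[name] += len(re.findall(pattern, body))
    return counts

print(count_storage_usage("https://example.com/"))  # placeholder URL
```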


2021 ◽  
Vol 11 (22) ◽  
pp. 10996
Author(s):  
Jongbeom Lim

As Internet of Things (IoT) and Industrial Internet of Things (IIoT) devices are becoming increasingly popular in the era of the Fourth Industrial Revolution, the orchestration and management of numerous fog devices encounter a scalability problem. In fog computing environments, to embrace various types of computation, cloud virtualization technology is widely used. With virtualization technology, IoT and IIoT tasks can be run on virtual machines or containers, which are able to migrate from one machine to another. However, efficient and scalable orchestration of migrations for mobile users and devices in fog computing environments is not an easy task. Naïve or unmanaged migrations may impinge on the reliability of cloud tasks. In this paper, we propose a scalable fog computing orchestration mechanism for reliable cloud task scheduling. The proposed scalable orchestration mechanism considers live migrations of virtual machines and containers for the edge servers to reduce both cloud task failures and suspended time when a device is disconnected due to mobility. The performance evaluation shows that our proposed fog computing orchestration is scalable while preserving the reliability of cloud tasks.
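
As a rough illustration of the orchestration idea (the paper's mechanism is more elaborate and also covers virtual machines), the sketch below live-migrates a device's container to another edge server when the device re-attaches elsewhere, instead of leaving its tasks to fail or sit suspended; the placement policy here is a least-loaded placeholder, not the paper's algorithm.

```python
# Toy orchestration sketch: migrate a mobile user's container between edge
# servers so its cloud task keeps running while the device roams.
from dataclasses import dataclass

@dataclass
class EdgeServer:
    name: str
    load: int = 0          # containers currently hosted

@dataclass
class Container:
    task_id: str
    host: EdgeServer

def on_device_moved(c: Container, servers: list[EdgeServer],
                    capacity: int = 8) -> None:
    # Placeholder policy: least-loaded server with spare capacity
    # (the current host always remains a valid fallback).
    candidates = [s for s in servers if s.load < capacity or s is c.host]
    target = min(candidates, key=lambda s: s.load)
    if target is not c.host:
        # Live migration: the task keeps running, only placement changes.
        c.host.load -= 1
        target.load += 1
        c.host = target
        print(f"migrated {c.task_id} -> {target.name}")

servers = [EdgeServer("edge-a", load=3), EdgeServer("edge-b", load=1)]
job = Container("task-42", host=servers[0])
servers[0].load += 1
on_device_moved(job, servers)   # device roamed toward edge-b
```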


2019 ◽  
pp. 1393-1407
Author(s):  
R. Deepthi Crestose Rebekah ◽  
Dhanaraj Cheelu ◽  
M. Rajasekhara Babu

Cloud computing is one of the most exciting technologies due to its ability to increase flexibility and scalability for computing processes while reducing the associated costs. It is important to share data in cloud storage securely, efficiently, and flexibly. Existing data protection mechanisms such as symmetric encryption techniques are unsuccessful at supporting secure data sharing. This article suggests a key-aggregate cryptosystem that produces constant-size ciphertexts, so that decryption rights for any set of ciphertexts can be delegated. Its uniqueness is that one can aggregate any number of secret keys and make them as compact as a single key. This compact aggregate key can easily be sent to others using very limited secure storage.
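
The article's scheme is pairing-based and supports delegation for arbitrary sets of ciphertext classes; it is not reproduced here. The sketch below instead illustrates the underlying idea of a constant-size delegated key with a deliberately simpler hash-chain construction, which only supports contiguous ranges of classes: handing out the single key for class i lets the recipient derive the keys for classes i and above, but nothing below.

```python
# Simplified delegation sketch (hash chain, NOT the article's pairing-based
# key-aggregate cryptosystem): one constant-size key unlocks a whole range
# of ciphertext classes.
import base64
import hashlib

from cryptography.fernet import Fernet  # pip install cryptography

def class_key(master: bytes, i: int) -> bytes:
    """Key for class i: hash the master secret forward i times."""
    k = master
    for _ in range(i):
        k = hashlib.sha256(k).digest()
    return k

def fernet_for(k: bytes) -> Fernet:
    return Fernet(base64.urlsafe_b64encode(k))

master = hashlib.sha256(b"owner master secret").digest()

# Encrypt documents under per-class keys.
ct3 = fernet_for(class_key(master, 3)).encrypt(b"doc in class 3")
ct5 = fernet_for(class_key(master, 5)).encrypt(b"doc in class 5")

# Delegation: the single key for class 3 also yields classes 4, 5, ...
delegated = class_key(master, 3)
assert fernet_for(delegated).decrypt(ct3) == b"doc in class 3"
k5 = hashlib.sha256(hashlib.sha256(delegated).digest()).digest()
assert fernet_for(k5).decrypt(ct5) == b"doc in class 5"
```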


2008 ◽  
Vol 57 (10) ◽  
pp. 1487-1493 ◽  
Author(s):  
S. Lindtner ◽  
H. Schaar ◽  
H. Kroiss

During a six-year period the Austrian Benchmarking System was developed. The main objectives of this benchmarking system are the development of process indicators, identification of best performance and determination of cost reduction potentials. Since 2004 the system has been operated via an internet platform and has been automated to a large extent. Every year twenty to thirty treatment plants use the web-based access to this benchmarking platform. The benchmarking procedure comprises data acquisition, data evaluation including reporting, and an organised exchange of experience for the treatment plant managers. The process benchmarking method links the real costs with four defined main processes and two support processes. For wastewater treatment plants with a design capacity >100,000 PE these processes are further split up into sub-processes. For each (sub-)process the operating costs are attributed to six cost elements. The specific total yearly costs and the yearly operating costs of all (sub-)processes are related to the measured mean yearly pollution load of the plant expressed in population equivalents (PE110: 110 g COD/d, corresponding to 60 g BOD5/d). The specific capital costs are related to the design capacity (PE). The paper shows the benchmarking results of 6 Austrian plants with a design capacity >100,000 PE, representing approximately 30% of the Austrian municipal wastewater treatment plant capacity.
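
A worked sketch of the indicator arithmetic (all figures and process names below are invented placeholders, not benchmark data): yearly operating costs are related to the measured mean load expressed in PE110, while capital costs are related to the design capacity in PE.

```python
# Specific-cost indicators in the style described above; numbers invented.
mean_cod_load_kg_per_d = 13_200               # measured mean yearly COD load
pe110 = mean_cod_load_kg_per_d * 1000 / 110   # -> 120,000 PE110

operating_costs_eur = {     # yearly operating costs per process (illustrative)
    "mechanical treatment": 310_000,
    "biological treatment": 780_000,
    "sludge treatment":     520_000,
    "disposal":             260_000,
}

for process, cost in operating_costs_eur.items():
    print(f"{process:22s} {cost / pe110:6.2f} EUR/PE110/a")

# Specific capital costs are related to design capacity (PE), not PE110.
capital_cost_eur_per_a = 1_450_000            # hypothetical yearly capital cost
design_capacity_pe = 150_000
print(f"capital: {capital_cost_eur_per_a / design_capacity_pe:.2f} EUR/PE/a")
```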


1994 ◽  
Vol 34 (2) ◽  
pp. 130
Author(s):  
G.S. Foley

In 1987 a total of 163 exploration wells were drilled in Australia's onshore basins. In 1993 the number was 47 and, although the forecast for 1994 is slightly higher, activity levels over the next few years are expected to stay low. During the 1987-93 period over 60 per cent of all exploration wells were drilled in the Cooper/Eromanga and Bowen/Surat basins. Not a single exploration well was drilled in a number of basins during the period. There is a general perception amongst industry and investors that the majority of Australia's onshore basins are not prospective. A review of past exploration programs in the frontier and emerging basins suggests that this perception is valid. As a result, the smaller companies, which are responsible for the majority of wells drilled in such basins, have found it difficult to attract risk capital and, consequently, activity levels have fallen to the current levels. Notwithstanding the results of past exploration efforts, detailed financial analysis of the best oil plays in the Canning, Perth and Surat basins suggests that the potential returns from exploration and development activities are extremely attractive. Forecast internal rates of return exceed 50 per cent. Each play was subjected to sensitivity analysis to determine the break-even point for exploration and development success rates, field sizes, well volumes, initial production rates, exploration and development capital costs, fixed and variable operating costs and corporate tax rates. The results suggest that the economics are considerably more robust than generally believed. The task confronting industry is to convince the stock market that attractive returns can be generated from at least three onshore basins so capital can be raised to exploit available opportunities.
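
A sketch of the kind of screening economics the article describes, with hypothetical cash flows rather than the Canning, Perth or Surat play models: an internal rate of return computed by bisection, plus a crude break-even style sensitivity on revenue.

```python
# IRR screening sketch; all cash flows are hypothetical ($ million).
def npv(rate: float, cashflows: list[float]) -> float:
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows: list[float], lo: float = 0.0, hi: float = 10.0) -> float:
    """Internal rate of return by bisection (assumes one sign change)."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return mid

# Year 0: exploration + development capital; years 1..6: net field revenue.
base_case = [-10.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0]
print(f"base-case IRR: {irr(base_case):.0%}")        # roughly 51%

# Sensitivity: scale revenue down to find where returns break even.
for factor in (1.0, 0.8, 0.6, 0.4):
    flows = [base_case[0]] + [cf * factor for cf in base_case[1:]]
    print(f"revenue x{factor:.1f}: IRR {irr(flows):.0%}")
```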


2014 ◽  
Vol 54 (2) ◽  
pp. 514
Author(s):  
Steven Page ◽  
Lyvonne Ly ◽  
Ryan Edge ◽  
Scott Campbell ◽  
Dominic Dowling

In its journey from well to beneficial use, CSG-produced water passes through multiple systems and processes. Understanding and managing these inter- and intra-system interfaces is vital to a successful outcome for capital and operating costs, water quality, brine management, and overall asset integrity. This extended abstract discusses a number of case studies and outcomes as described below.
- Optimising the gathering system: wells and trunk lines
- Whole lifecycle: downhole pressure operating costs (OPEX) versus gathering line size capital costs (CAPEX); see the sketch after this list
- Pipe size standardisation: trunk line OPEX versus trunk line size/cost; pressure versus materials
- Reliability and availability: node-to-node system analysis, influence, and conjunctive use
- Optimum network architecture and water treatment facility (WTF) location
- WTFs and water storages: protecting the core
- What's in the water: water blending, pipeline corrosion management, and well/drilling products
- Reliability and maintenance: bigger WTF and water storages versus spares strategy and reliability management
- Manage inter-plant streams: recovery costs less than waste management
- Treated water end use and brine management: a product people want
- Know your end user: getting it right early is a win for everyone
- Guidelines, regulation, and best practice: a potentially volatile mix?
- Optimising the number and location of WTFs based on end use: value versus risk
- Brine: commodity versus waste management; an understanding of the product, market, and risks is vital
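
A sketch of the lifecycle trade-off behind the first two items above (every coefficient is an invented placeholder, not case-study data): larger gathering lines cost more up front but reduce friction head, pumping power and hence OPEX, so lifecycle cost is minimised at an intermediate diameter.

```python
# CAPEX vs OPEX pipe-sizing sketch; all parameters are placeholders.
import math

Q = 0.05                  # produced-water flow, m^3/s
L = 10_000.0              # trunk line length, m
F = 0.02                  # assumed Darcy friction factor
PUMP_EFF = 0.7            # pump efficiency
POWER_PRICE = 0.12        # $/kWh
LIFE_Y, DISC = 20, 0.10   # evaluation life and discount rate

def annual_pump_cost(d: float) -> float:
    v = Q / (math.pi * d ** 2 / 4)            # mean velocity, m/s
    head = F * (L / d) * v ** 2 / (2 * 9.81)  # Darcy-Weisbach head loss, m
    power_kw = 9810 * Q * head / PUMP_EFF / 1000
    return power_kw * 8760 * POWER_PRICE

def capex(d: float) -> float:
    return (150 + 2200 * d ** 1.4) * L        # invented installed-cost curve

pv = sum(1 / (1 + DISC) ** t for t in range(1, LIFE_Y + 1))
for d in (0.15, 0.20, 0.25, 0.30, 0.40):
    total = capex(d) + pv * annual_pump_cost(d)
    print(f"d={d:.2f} m: capex ${capex(d)/1e6:.2f}M, "
          f"lifecycle ${total/1e6:.2f}M")
```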


Author(s):  
Ronald Baker ◽  
Robert Peters ◽  
Edul Chikhliwala

Multicomponent infrared gas analyzers have been the workhorse of Continuous Emissions Monitoring Systems (CEMS) in the waste-to-energy (WTE) application for the past two decades, and they remain the technique of choice for many facilities. With obsolescence for electronics, instrumentation and data acquisition systems (DAS) averaging less than 10 years, the earlier multicomponent CEMS are being upgraded to what is now a third generation of that technology. This paper describes the evolution of the three generations of multicomponent CEMS. The evaluation of this technology in the WTE application encompasses the operating histories of nearly two dozen facilities demonstrating compliance with this type of CEMS. Specific details explaining the sampling systems, analyzer optics and controls, interface and communication with plant distributed control systems, and DAS systems are presented. Relative accuracy test audit (RATA) results, CEMS availability histories and annual maintenance costs are reviewed, presenting a unique insight into both initial capital costs and operating costs. Actual annual man-hour totals for preventive maintenance (PM), unscheduled maintenance, and annual consumable parts costs are provided. Advances in computer capabilities have provided an opportunity for CEMS functions to become not only more comprehensive but also more robust. Key among these advances is the ability for factory-support services to be provided not only for the software platform but now even down to the basic auditing parameters of the analyzers themselves. Third-generation CEMS now feature remote access to the analyzers from the instrumentation repair shop, the vendor's factory or the company's technical service center.
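
As context for the RATA results mentioned above, the sketch below computes relative accuracy in the usual performance-specification form, (|mean difference| + confidence coefficient) divided by the reference-method mean; the nine paired runs are invented, not data from the paper.

```python
# Relative accuracy calculation for a RATA; paired runs are invented.
import math
import statistics

cems = [101.2, 98.7, 102.5, 99.8, 100.9, 97.6, 103.1, 100.2, 99.5]  # ppm
ref  = [100.0, 97.9, 101.0, 98.5, 100.1, 96.8, 101.9, 99.0, 98.2]   # ppm

diffs = [c - r for c, r in zip(cems, ref)]
n = len(diffs)
d_bar = statistics.mean(diffs)
s_d = statistics.stdev(diffs)
t_975 = 2.306                     # t-value for n - 1 = 8 degrees of freedom
cc = t_975 * s_d / math.sqrt(n)   # confidence coefficient
ra = (abs(d_bar) + abs(cc)) / statistics.mean(ref) * 100
print(f"relative accuracy: {ra:.1f} % of reference method mean")
```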


2018 ◽  
Vol 245 ◽  
pp. 06006 ◽  
Author(s):  
Olga Gamayunova ◽  
Mikhail Petrichenko ◽  
Tatyana Musorina ◽  
Eliza Gumerova

Using the example of a typical residential multi-apartment building, a feasibility study was carried out on the choice of energy-saving measures for the thermal insulation of facades. The decision to increase the energy efficiency of the building was made on the basis of calculating the loss of thermal energy through the external walls. Based on the parameters of the heating period, the capital costs of additional thermal insulation of the facades and the calculated operating costs for heating, the optimum thickness of the additional insulation layer is determined, at which the payback period reaches its minimum.
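
A worked sketch of that optimisation (parameter values are illustrative, not the paper's): total cost over the evaluation period = insulation capital cost + undiscounted heating cost, scanned over thickness to find the minimum.

```python
# Optimum insulation thickness sketch; all parameters are illustrative.
HDD = 4800                 # heating degree-days, K*d
AREA = 1200.0              # facade area, m^2
R0 = 0.8                   # existing wall thermal resistance, m^2*K/W
LAM = 0.04                 # insulation conductivity, W/(m*K)
C_FIX, C_VAR = 25.0, 180.0 # capital cost: $/m^2 fixed + $/m^2 per metre
HEAT = 0.09                # heat price, $/kWh
YEARS = 25                 # evaluation period

def total_cost(x: float) -> float:
    """Capital plus heating cost for insulation thickness x (metres)."""
    capital = (C_FIX + C_VAR * x) * AREA
    q_kwh = AREA * HDD * 24 / (R0 + x / LAM) / 1000   # yearly heat loss
    return capital + YEARS * HEAT * q_kwh

best = min((x / 100 for x in range(0, 31)), key=total_cost)
print(f"optimum thickness ~ {best*100:.0f} cm, "
      f"total cost ${total_cost(best)/1e3:.0f}k over {YEARS} years")
```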

