Implementasi Virtual Interface Menggunakan Metode EOIP Tunnel Pada Jaringan WAN PT. Indo Matra Lestari

2020 ◽  
Vol 6 (1) ◽  
pp. 103-110
Author(s):  
Sidik Sidik ◽  
Ade Sudaryana ◽  
Rame Santoso

Computer networks have become an important point for companies with many branch offices that must coordinate data transfer. PT Indo Matra Lestari's connection uses a VPN based on the PPTP method: the Data Center acts as the VPN server, while the Head Office and the Citereup Branch Office are clients. There is no direct connection between the Head Office and the Citereup Branch Office, so data access between them is slow, because traffic must pass through the Data Center before reaching its destination. Moreover, the data accessed is private to the company and is only available on the local network. The solution used to create a direct, secure network path between the Head Office and the Branch Office is the EoIP Tunnel on the MikroTik router. The tunneling method in EoIP can bridge networks between MikroTik devices: the EoIP Tunnel appears as a virtual interface on the MikroTik router, so the routers behave as if they were connected locally. The Tunnel ID on the EoIP Tunnel serves as security for the tunneling path. Applying the EoIP Tunnel makes the point-to-point connection between MikroTik devices faster for data access, because traffic is routed directly to its destination. For this EoIP Tunnel connection to run optimally and well, network management is needed to manage internet bandwidth usage.
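The EoIP setup the abstract describes can be sketched in RouterOS commands. Note that the peer addresses, tunnel ID, and LAN interface names below are hypothetical examples, not values taken from the paper:

```
# On the Head Office router (peer address and tunnel-id are examples)
/interface eoip add name=eoip-branch remote-address=203.0.113.2 tunnel-id=101
/interface bridge add name=br-lan
/interface bridge port add bridge=br-lan interface=eoip-branch
/interface bridge port add bridge=br-lan interface=ether2

# On the Citereup Branch router (mirror configuration, same tunnel-id)
/interface eoip add name=eoip-ho remote-address=203.0.113.1 tunnel-id=101
/interface bridge add name=br-lan
/interface bridge port add bridge=br-lan interface=eoip-ho
/interface bridge port add bridge=br-lan interface=ether2
```

Bridging the EoIP interface with the LAN port is what makes the two sites appear as one local layer-2 segment; both ends must use the same tunnel ID.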

1999 ◽  
Vol 33 (3) ◽  
pp. 55-66 ◽  
Author(s):  
L. Charles Sun

An interactive data access and retrieval system, developed at the U.S. National Oceanographic Data Center (NODC) and available at <ext-link ext-link-type="uri" href="http://www.nodc.noaa.gov">http://www.nodc.noaa.gov</ext-link>, is presented in this paper. The purposes of this paper are: (1) to illustrate the procedures for quality control and for loading oceanographic data into the NODC ocean databases, and (2) to describe the development of a system to manage, visualize, and disseminate the NODC data holdings over the Internet. The objective of the system is to provide easy access to the data that data assimilation models will require. With advances in the scientific understanding of ocean dynamics, data assimilation models require the synthesis of data from a variety of sources. Modern intelligent data systems usually involve integrating distributed, heterogeneous data and information sources. As the repository for oceanographic data, NOAA's National Oceanographic Data Center (NODC) is in a unique position to develop such a data system. In support of data assimilation needs, NODC has developed a system that facilitates browsing of the oceanographic environmental data and information available online at NODC. Users may select oceanographic data by geographic area, time period, and measured parameters. Once the selection is complete, users may produce a station location plot, produce plots of the parameters, or retrieve the data.
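The selection workflow described (geographic area, time period, measured parameters) amounts to a simple filter over station records. A minimal sketch follows; the record field names are hypothetical, not taken from the NODC system:

```python
def select_stations(records, bbox, t_start, t_end, parameter):
    """Filter station records by bounding box, time window, and parameter.

    records: list of dicts with hypothetical keys 'lat', 'lon', 'time',
             and 'params' (a set of measured-parameter names).
    bbox:    (lat_min, lat_max, lon_min, lon_max).
    """
    lat_min, lat_max, lon_min, lon_max = bbox
    return [r for r in records
            if lat_min <= r['lat'] <= lat_max
            and lon_min <= r['lon'] <= lon_max
            and t_start <= r['time'] <= t_end
            and parameter in r['params']]
```

The matched records could then feed a station-location plot or a data download, as the abstract describes.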


2018 ◽  
Vol 8 (10) ◽  
pp. 1914 ◽  
Author(s):  
Lincheng Jiang ◽  
Yumei Jing ◽  
Shengze Hu ◽  
Bin Ge ◽  
Weidong Xiao

Identifying node importance in complex networks is of great significance for improving network damage resistance and robustness. In the era of big data, networks are huge and their structure tends to change dynamically over time. Due to their high complexity, algorithms based on the global information of the network are not suitable for the analysis of large-scale networks. Taking into account the bridging feature of nodes in the local network, this paper proposes a simple and efficient ranking algorithm to identify node importance in complex networks. In the algorithm, if more node pairs have shortest paths passing through the target node and there are fewer alternative shortest paths within its neighborhood, the node's bridging role between its neighbors is more pronounced, and its ranking score is higher. The algorithm uses only local information about the target node, thereby greatly improving its efficiency. Experiments performed on real and synthetic networks show that the proposed algorithm is more effective than benchmark algorithms on the evaluation criteria of the maximum connectivity coefficient and the decline rate of network efficiency, under both static and dynamic attack. The advantage is especially pronounced in the initial stage of an attack, which makes the proposed algorithm applicable when network attack cost is limited.
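One plausible reading of the ranking rule (more neighbor pairs forced through the node, fewer alternative paths among its neighbors) can be sketched as follows. This is an illustrative approximation, not the paper's exact formula:

```python
from itertools import combinations

def local_bridging_score(adj, v):
    """Score node v by how strongly it bridges its own neighborhood.

    adj: dict mapping node -> set of neighbors (undirected graph).
    A neighbor pair (u, w) with no direct edge must route through v
    locally; many such pairs and few bypassing edges -> higher score.
    """
    bridged = 0       # neighbor pairs whose local shortest path uses v
    alternatives = 0  # direct edges among neighbors (paths bypassing v)
    for u, w in combinations(adj[v], 2):
        if w in adj[u]:
            alternatives += 1
        else:
            bridged += 1
    return bridged / (alternatives + 1)
```

Because the score only inspects the target node's neighborhood, it scales to large networks where global betweenness centrality is too expensive, which is the trade-off the abstract emphasizes.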


2018 ◽  
Vol 2 (02) ◽  
Author(s):  
Natasia Iroth ◽  
Greyshella Sesdi Mamangkey ◽  
Lidia M Mawikere

Insurance is an institution that plays an important role in the economy. Companies are required to manage and compile their financial statements, because these strongly affect the company's productivity. For this reason, every company must apply the prevailing accounting standards so that the information in its financial statements can be understood by their users. Within the financial statements, the income statement is a very important component, especially for calculating the company's expenses and income, so these must be recognized and recorded properly and in accordance with the prevailing standards. The aim of this research is to analyze the conformity of the company's recognition of income and expenses with PSAK No. 28 concerning accounting for loss insurance contracts. The method used is an analysis of the profit and loss statements, and the income and expense components within them, of the Manado branch of PT Asuransi Adira Dinamika against PSAK No. 28. The study concludes that the Manado branch of PT Asuransi Adira Dinamika has recognized revenue and expenses appropriately according to the standard: revenues are recognized at the time a policy (contract) is issued, and claims expenses are recognized when a work order (SPK) is issued by headquarters. The company applies the accrual basis in recognizing revenues and expenses, whereby transactions are recorded and reported at the time of the event, not when cash or cash equivalents are received (or paid). Each company transaction is input systemically from the branch office to the head office according to the detailed classification.

Keywords: Revenue, Expense, Insurance, PSAK No. 28
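Accrual-basis recognition as described (revenue at policy issuance, not at cash receipt) can be sketched as a pair of journal entries. The account names and data shapes below are illustrative, not taken from the company's chart of accounts:

```python
def accrual_entries(policy_issue_date, premium, cash_receipt_date):
    """Sketch of accrual-basis premium recognition.

    Revenue is recognized on the policy issue date (the event),
    while cash settlement is a later, separate entry.
    """
    return [
        {"date": policy_issue_date, "debit": "Premium receivable",
         "credit": "Premium revenue", "amount": premium},
        {"date": cash_receipt_date, "debit": "Cash",
         "credit": "Premium receivable", "amount": premium},
    ]
```

Under a cash basis, by contrast, the revenue entry would only appear on the cash receipt date; the two entries above are what separates the event from the settlement.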


Author(s):  
Shirley Wong ◽  
Victoria Schuckel ◽  
Simon Thompson ◽  
David Ford ◽  
Ronan Lyons ◽  
...  

Introduction: There is no power for change greater than a community discovering what it cares about.[1] The Health Data Platform (HDP) will democratize British Columbia's (population approximately 4.6 million) health-sector data by creating common enabling infrastructure that supports cross-organization analytics and research used by both decision makers and academics. HDP will provide streamlined, proportionate processes that give timelier access to data with increased transparency for the data consumer, and will provide shared data-related services that elevate best practices by enabling consistency across data contributors, while maintaining their continued stewardship of their data. HDP will be built in collaboration with Swansea University, following an agile, pragmatic approach starting with a minimum viable product.
Objectives and Approach: Build a data-sharing environment that harnesses the data, understanding, and expertise about health data across academia, decision makers, and clinicians in the province by enabling a common harmonized approach across the sector on data stewardship, data access, data security and privacy, data management, and data standards, in order to: enhance the data consumer's data-access experience; increase process consistency and transparency; reduce the burden of liberating data from a data source; build trust in the data and what it is telling us, and therefore in the decisions made; and increase data accessibility safely and responsibly. Working within the jurisdiction's existing legislation, the Five Safes privacy and security framework will be implemented, tailored to address the requirements of data contributors.
Results: The minimum viable product will provide the necessary enabling infrastructure, including governance, to enable timelier, safe access to administrative data for a limited set of data consumers. The MVP will be expanded, with another release planned for early 2021.
Conclusion / Implications: Collaboration with Swansea University has enabled BC to accelerate its journey toward timelier, safe access to data and greater analytics maturity by creating the enabling infrastructure that promotes collaboration and the sharing of data and data approaches.
[1] Margaret Wheatley


Author(s):  
Денис Валерьевич Сикулер

This paper reviews 10 Internet resources that can be used to find data for various tasks related to machine learning and artificial intelligence. Both widely known sites (e.g., Kaggle, Registry of Open Data on AWS) and less popular or highly specialized resources (e.g., The Big Bad NLP Database, Common Crawl) are examined. All of the resources provide free access to data; in most cases, registration is not even required. For each resource, the characteristics and features concerning searching for and obtaining data sets are described. The following sites are covered: Kaggle, Google Research, Microsoft Research Open Data, Registry of Open Data on AWS, Harvard Dataverse Repository, Zenodo, the Open Data portal of the Russian Federation, World Bank, The Big Bad NLP Database, Common Crawl.


Author(s):  
Rogério Aparecido Sá Ramalho ◽  
Ricardo César Gonçalves Sant'Ana ◽  
Francisco Carlos Paletta

The accelerating development of digital technologies and the growing reach of their effects present new challenges to the practices surrounding informational treatment and flows, which are the object of study of information science. This chapter is based on a theoretical study that analyzes the contributions of information science in the data science era, using the Cynefin Framework to examine the new contemporary informational demands generated by the increasing predominance of data access and use. To establish the relationship between the skills expected of the information science professional and access to data, the Cynefin Framework was used as a basis for analyzing the skills involved in each phase of the data life cycle.


2019 ◽  
Vol 2 (1) ◽  
pp. 45-54 ◽  
Author(s):  
Kimberly M. Scott ◽  
Melissa Kline

As more researchers make their data sets openly available, the potential of secondary data analysis to address new questions increases. However, the distinction between primary and secondary data analysis is unnecessarily confounded with the distinction between confirmatory and exploratory research. We propose a framework, akin to library-book checkout records, for logging access to data sets in order to support confirmatory analysis when appropriate. This system would support a standard form of preregistration for secondary data analysis, allowing authors to demonstrate that their plans were registered prior to data access. We discuss the critical elements of such a system, its strengths and limitations, and potential extensions.
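The library-checkout idea can be sketched as a minimal access log plus a preregistration check. The class and function names are hypothetical; a real registry would need authenticated, tamper-evident records:

```python
import datetime as dt

class DataCheckoutLog:
    """Minimal sketch of a library-style checkout log for data sets."""

    def __init__(self):
        self._log = []  # (dataset_id, researcher, timestamp) tuples

    def record_access(self, dataset_id, researcher, when=None):
        """Log one data access; `when` defaults to the current time."""
        when = when or dt.datetime.utcnow()
        self._log.append((dataset_id, researcher, when))
        return when

    def first_access(self, dataset_id, researcher):
        """Earliest logged access by this researcher, or None."""
        times = [t for d, r, t in self._log
                 if d == dataset_id and r == researcher]
        return min(times) if times else None

def preregistered_before_access(prereg_time, log, dataset_id, researcher):
    """True if the analysis plan predates any logged data access,
    i.e. the analysis can credibly be called confirmatory."""
    first = log.first_access(dataset_id, researcher)
    return first is None or prereg_time < first
```

The point of the check is exactly the one the abstract makes: the log lets authors demonstrate that their registered plan predates their first contact with the data.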


2018 ◽  
Vol 113 ◽  
pp. 109-118
Author(s):  
Yang Qin ◽  
Weihong Yang ◽  
Xiao Ai ◽  
Lingjian Chen

2020 ◽  
Vol 114 (1) ◽  
pp. 124-128

On October 3, 2019, the United States and the United Kingdom reached a bilateral agreement to facilitate more efficient data access between the two countries for law enforcement purposes. The Agreement on Access to Electronic Data for the Purpose of Countering Serious Crime (U.S.-UK Data Access Agreement) was signed by U.S. Attorney General William Barr and UK Home Secretary Priti Patel. This is the first such agreement made by the United States after the passage of the 2018 Clarifying Lawful Overseas Use of Data (CLOUD) Act, which authorizes and structures future bilateral agreements on data sharing. Pursuant to the CLOUD Act, Congress has 180 days following receipt of a notification regarding the U.S.-UK Data Access Agreement to block its entry into force via a joint resolution, which would require a majority vote in both houses of Congress and either presidential signature or a subsequent congressional override of a presidential veto.


Author(s):  
Kuo-Chi Fang ◽  
Husnu S. Narman ◽  
Ibrahim Hussein Mwinyi ◽  
Wook-Sung Yoo

Due to the growth of internet-connected devices and extensive data analysis applications in recent years, cloud computing systems are heavily utilized. Because of the high utilization of cloud storage systems, the demand for data center management has increased. Data center management has several crucial requirements, such as increased data availability, enhanced durability, and decreased latency. In previous works, replication techniques are mostly used to meet those needs, subject to consistency requirements. However, most of the works consider full-data, popular-data, or geo-distance-based replication, taking storage and replication cost into account, and previous data-popularity-based techniques rely on historical and current data-access frequencies. In this article, the authors approach the problem from a distinct angle, developing replication techniques for a multimedia data center management system that can dynamically adapt the servers of a data center by predicting popularity at each data-access location. They first label data objects from one to ten to track their access frequencies, then use the access frequencies from each location to predict future access frequencies, determine the replication levels and locations for the data objects, and store related data objects on nearby storage servers. To show the efficiency of the proposed methods, the authors conduct an extensive simulation using real data. The results show that the proposed method has an advantage over previous works and increases data availability by up to 50%. The proposed method and related analysis can assist multimedia service providers in enhancing their service quality.
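The popularity-prediction step can be sketched with a simple moving-average predictor and threshold-based replication levels. The predictor, thresholds, and level mapping below are stand-ins for the authors' method, chosen only to illustrate the pipeline:

```python
def predict_access(history, window=3):
    """Predict next-period access count as the mean of the last
    `window` observed counts (a simple stand-in predictor)."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def replication_level(predicted, thresholds=(10, 50, 200)):
    """Map a predicted access frequency to a replication level:
    each threshold crossed adds one replica (illustrative values)."""
    level = 1
    for t in thresholds:
        if predicted >= t:
            level += 1
    return level

def plan_replicas(histories):
    """histories: {location: [per-period access counts]} ->
    {location: number of replicas to keep near that location}."""
    return {loc: replication_level(predict_access(h))
            for loc, h in histories.items()}
```

Planning per location is what lets hot objects gain replicas close to where they are actually requested, which is the availability/latency gain the abstract reports.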

