global computing
Recently Published Documents


TOTAL DOCUMENTS

134
(FIVE YEARS 17)

H-INDEX

15
(FIVE YEARS 2)

2022 ◽  
Vol 65 (1) ◽  
pp. 5-5
Author(s):  
Andrew A. Chien

2021 ◽  
Author(s):  
Emre Erturk

The current global computing curriculum guidelines, including MSIS2016, IT2017, and IS2020, are built to promote and facilitate the development of competency-based higher education programs and to enhance graduate employability. Their application, however, faces challenges in understanding, interpretation, and operationalization. Taking data analytics and data engineering as examples, this study shows how these guidelines can be used to discover and analyze competencies, the boundaries between typical IT and IS programs and between IS undergraduate and postgraduate programs, and, further, the gaps these programs need to fill to incorporate professional practice competencies. Global skills frameworks are also invoked, with SFIA 7 used to assist the analysis.


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Mian Wang

Mobile edge computing is now a very popular technology, proposed to alleviate the shortage of global computing resources. This article studies how the latest mobile edge computing technology can support a mobile information system for appreciation, exchange, and management in the traditional ceramic industry. The system uses mobile edge computing throughout: it joins the network wirelessly and provides nearby users with the required services and cloud computing functions, allowing them to easily query the information and data they want. Combined with the mobile information system, people can use mobile phones, tablets, and other mobile terminals to query information on the ceramic industry and perform functions such as appreciation, communication, and management. From 2016 to 2020, China's ceramic exports increased from US$3.067 billion to US$6.826 billion, and traditional Chinese ceramics have been embraced by many industries at home and abroad. The number of employees in the ceramic industry has also grown to 5 million, a year-on-year increase of 30%. The ceramic industry therefore has promising long-term prospects.
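The abstract gives no implementation details, but the core edge-computing idea it relies on, serving a mobile query from whichever nearby edge node answers fastest instead of a distant cloud, can be sketched as follows. The node addresses and the /ceramics/search endpoint are hypothetical, chosen only to match the ceramic-industry scenario.

# Toy illustration of the edge-computing idea in the abstract: a mobile client
# queries whichever nearby edge node responds fastest instead of a distant cloud.
# The node addresses and the /ceramics/search endpoint are hypothetical.

import time
import urllib.request

EDGE_NODES = ["http://edge-node-a.local:8080", "http://edge-node-b.local:8080"]


def pick_fastest(nodes: list[str]) -> str:
    """Probe each edge node and return the one with the lowest round-trip time."""
    best, best_rtt = nodes[0], float("inf")
    for url in nodes:
        start = time.monotonic()
        try:
            urllib.request.urlopen(url + "/ping", timeout=1).read()
            rtt = time.monotonic() - start
        except OSError:
            continue                     # unreachable node: skip it
        if rtt < best_rtt:
            best, best_rtt = url, rtt
    return best


def search_ceramics(keyword: str) -> bytes:
    """Send the query to the selected edge node rather than a central server."""
    node = pick_fastest(EDGE_NODES)
    return urllib.request.urlopen(f"{node}/ceramics/search?q={keyword}", timeout=2).read()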


Author(s):  
David C. Lane ◽  
Claire Goode

This paper describes the functionality, scalability, and cost of implementing and maintaining a suite of open source technologies, which have supported hundreds of thousands of learners in the past year, on an information technology infrastructure budget of less than US$10,000 per year. In addition, it reviews pedagogical opportunities offered by a fully open digital learning ecosystem, as well as benefits for learners and educators alike. The Open Education Resource universitas (OERu) is an international consortium made up of 36 publicly funded institutions and the OER Foundation. The OERu currently offers first-year postsecondary courses through OER-based micro-courses with pathways to gain stackable micro-credentials, convertible to academic credit toward recognised university qualifications. The OERu, adhering to open principles (Wiley, 2014b), has created an open source Next Generation Digital Learning Ecosystem (NGDLE) to meet the needs of learners, consortium partners, and OERu collaborators. The NGDLE—a distributed, loosely coupled component model, consisting entirely of free and open source software (FOSS)—is a global computing infrastructure created to reach learners wherever they are. All OERu services are hosted on commodity FOSS infrastructure, conferring significant advantages and creating opportunities for institutions adopting any of these services to enhance education opportunities at minimal cost. The NGDLE can also increase technological autonomy and resilience while providing exceptional learning opportunities and agency for learners and educators alike.


Information ◽  
2020 ◽  
Vol 11 (12) ◽  
pp. 581
Author(s):  
Henry Zárate Ceballos ◽  
Jorge Eduardo Ortiz Triviño

Due to the growth of users and connected devices in networks, there is an emerging need for dynamic solutions to control and manage computing and network resources. This document proposes a Distributed Wireless Operative System on a Mobile Ad-hoc Network (MANET) to manage and control computing resources across the virtual resources linked in a wireless network. The prototype has two elements: a local agent that runs on each physical node to manage its computing resources (e.g., virtual resources and distributed applications), and an orchestrator agent that monitors, manages, and deploys policies on each physical node. Together, these elements arrange the local and global computing resources to provide a quality service to the users of the ad-hoc cluster. The proposed S.O.V.O.R.A. model (Operating Virtualized System oriented to Ad-hoc networks) defines primitives, commands, virtual structures, and modules to operate as a distributed wireless operating system.
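The abstract does not include code, but the division of labour it describes, a local agent on every physical node plus an orchestrator agent that monitors them and applies placement policies, can be sketched roughly as below. All class names, fields, and the placement policy are hypothetical illustrations rather than the actual S.O.V.O.R.A. interfaces.

# Illustrative sketch only: LocalAgent, Orchestrator, and the "most free memory"
# policy are invented for this example; they merely mirror the roles the abstract
# describes (per-node resource management plus a cluster-wide orchestrator).

from dataclasses import dataclass, field


@dataclass
class LocalAgent:
    """Runs on each physical node and tracks its local virtual resources."""
    node_id: str
    cpu_free: float          # fraction of CPU still available
    mem_free_mb: int
    virtual_resources: list = field(default_factory=list)

    def report(self) -> dict:
        # Status message the agent would send over the MANET to the orchestrator.
        return {"node": self.node_id, "cpu_free": self.cpu_free,
                "mem_free_mb": self.mem_free_mb,
                "vms": len(self.virtual_resources)}

    def deploy(self, vm_name: str, mem_mb: int) -> None:
        # Accept a virtual resource pushed down by the orchestrator's policy.
        self.virtual_resources.append(vm_name)
        self.mem_free_mb -= mem_mb


class Orchestrator:
    """Monitors all agents and places new virtual resources by a simple policy."""
    def __init__(self, agents: list[LocalAgent]):
        self.agents = agents

    def place(self, vm_name: str, mem_mb: int) -> str:
        # Toy placement policy: choose the node with the most free memory.
        candidates = [a for a in self.agents if a.mem_free_mb >= mem_mb]
        if not candidates:
            raise RuntimeError("no node can host " + vm_name)
        target = max(candidates, key=lambda a: a.mem_free_mb)
        target.deploy(vm_name, mem_mb)
        return target.node_id


if __name__ == "__main__":
    cluster = [LocalAgent("n1", 0.7, 2048), LocalAgent("n2", 0.4, 4096)]
    orch = Orchestrator(cluster)
    print(orch.place("analytics-vm", 1024))   # -> "n2"
    print([a.report() for a in cluster])

In a real MANET deployment the report and deploy calls would travel over the wireless links rather than in-process, but the split between local management and cluster-wide policy is the same.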


Author(s):  
Sara Diani

As a complex system, our body acts as a whole, connected to environmental stimuli. It is ordered and coherent, and tries to maintain the least possible entropy, saving the greatest amount of energy. In order to explain the dynamics of the systemic regulative network, a theoretical and speculative model is proposed, with a comprehensive approach that allows the entire regulative system to be seen as a continuous unicuum. This paper covers two themes: 1) the connections between the quantum level and the classical one, through some principles of QFT and through Coherence Domains. The system is modeled as a field described by the wave function, with synchronous and consistent events, driven in a global computation by the quantum potential Q. The quantum potential implies non-locality and needs only ultra-weak waves to occur, so it may explain how the rapid and global activation of the organism in response to perturbation/punctiform information works. The initial hypothesis is that some consistent quantum phenomena are amplified through the systemic regulative network until they become macroscopically observable; this is possible because of Coherence Domains. 2) The reactions of the different systemic networks to perturbations/punctiform information, with an attempt to model and measure information in biology, going beyond the Shannon and Turing theories. Hopfield networks and an informational point of view are used to address the crucial informational and organizational role of proteins and nucleic acids.
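For readers unfamiliar with the associative-memory model the abstract invokes, a minimal textbook Hopfield network is sketched below. It is a generic illustration, not the authors' biological model, and the stored patterns merely stand in for some binary encoding of molecular states.

# Minimal Hopfield network sketch (standard Hebbian formulation), included only to
# illustrate the associative-memory model named in the abstract.

import numpy as np


def train_hopfield(patterns: np.ndarray) -> np.ndarray:
    """Hebbian learning: W is the averaged outer product of the stored +/-1 patterns."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)        # no self-connections
    return w / patterns.shape[0]


def recall(w: np.ndarray, state: np.ndarray, sweeps: int = 5) -> np.ndarray:
    """Sequential updates drive a noisy state toward the nearest stored pattern."""
    s = state.copy()
    for _ in range(sweeps):
        for i in range(len(s)):
            s[i] = 1 if w[i] @ s >= 0 else -1
    return s


if __name__ == "__main__":
    stored = np.array([[1, -1, 1, -1, 1, -1],
                       [1, 1, 1, -1, -1, -1]])
    w = train_hopfield(stored)
    noisy = np.array([1, -1, 1, -1, 1, 1])   # corrupted copy of the first pattern
    print(recall(w, noisy))                  # recovers the first stored pattern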


2020 ◽  
Vol 15 ◽  

There is no doubt that the economic and computing activity related to the digital sector will ramp up faster in the present decade than in the last. Moreover, computing infrastructure is one of three major drivers of new electricity use, alongside current and future hydrogen production and battery electric vehicle charging. A trajectory for this decade is proposed for the CO2 emissions associated with this digitalization and for its share of electricity and energy generation as a whole. The roadmap for major sources of primary energy and electricity, and the associated CO2 emissions, is projected and connected to the probable power use of the digital industry. The truncation error for manufacturing-related CO2 emissions may be 0.8 Gt or more, indicating a larger manufacturing share and larger absolute digital CO2 emissions. While remaining at a moderate share of global CO2 emissions (4-5%), digital CO2 emissions will likely rise from 2020 to 2030. The opposite may happen only if the electricity used to run data centers and production plants in particular is produced locally (next to the data centers and plants) from renewable sources, and if data intensity metrics grow more slowly than expected.


Author(s):  
Sara Diani

As a complex system, our body acts as a whole, connected to environmental stimuli. It is ordered and coherent, and tries to maintain the least possible entropy, saving the greatest amount of energy. We can observe its active systemic response to environmental information both when it is healthy and when it is ill. To explain the dynamics of the systemic regulative network, a theoretical model is proposed, with a comprehensive approach that allows the entire regulative system to be seen as a continuous unicuum. The paper analyzes two points of view: 1) the connections between the quantum level and the classical one, through some principles of QFT and through Coherence Domains. The system is modeled as a field described by the wave function, with synchronous and consistent events, driven in a global computation by the quantum potential Q. The quantum potential implies non-locality and needs only ultra-weak waves to occur, so it explains how the rapid and global activation of the organism in response to punctiform information works. The initial hypothesis is that some consistent quantum phenomena are amplified through the systemic regulative network until they become macroscopically observable; this is possible because of Coherence Domains. 2) The reactions of the different systemic networks to perturbations/punctiform information, with a first attempt to model and measure information in biology, going beyond the Shannon and Turing theories. Hopfield networks and an informational point of view are used to address the crucial informational and organizational role of proteins and nucleic acids. With this new frame, we could develop innovative therapeutic strategies and also evolve new experimental ways to make our clinical observations more precise.


2020 ◽  
Vol 245 ◽  
pp. 03029
Author(s):  
Julia Andreeva ◽  
Alexey Anisenkov ◽  
Alessandro Di Girolamo ◽  
Alessandra Forti ◽  
Stephen Jones ◽  
...  

The WLCG project aims to develop, build, and maintain a global computing facility for storage and analysis of the LHC data. While most of the LHC computing resources are currently provided by classical grid sites, over recent years the LHC experiments have been making increasing use of public clouds and HPCs, and this trend will certainly continue. The heterogeneity of the LHC computing resources is not limited to the procurement mode: it also implies a variety of storage solutions and types of computer architecture, which represent new challenges for describing the topology and configuration of the LHC computing resources. The WLCG information infrastructure has to evolve in order to meet these challenges and to be flexible enough to follow technology innovation. It should provide a complete and reliable description of all types of storage and computing resources to ensure their effective use. This implies changes at all levels, from the primary information providers, through the data publishing and transport mechanisms, to the central aggregators. This paper describes the proposed changes in the WLCG information infrastructure, their implementation, and deployment.
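The paper concerns the real WLCG information system, whose schema is not reproduced here; the sketch below only illustrates, with an invented record layout, the kind of uniform resource description that lets a central aggregator treat grid sites, clouds, and HPCs together.

# Illustrative only: the record layout is invented for this sketch and is not the
# actual WLCG schema. It shows how heterogeneous resources can be described
# uniformly so that a central aggregator can reason about them together.

from dataclasses import dataclass


@dataclass
class ComputeResource:
    site: str
    kind: str          # "grid", "cloud", or "hpc"
    architecture: str  # e.g. "x86_64", "aarch64", "gpu"
    cores: int


@dataclass
class StorageResource:
    site: str
    protocol: str      # e.g. "xrootd", "https", "s3"
    capacity_tb: float


def aggregate(compute: list[ComputeResource]) -> dict:
    """Central aggregation step: total cores available per resource kind."""
    totals: dict[str, int] = {}
    for r in compute:
        totals[r.kind] = totals.get(r.kind, 0) + r.cores
    return totals


if __name__ == "__main__":
    resources = [
        ComputeResource("SITE-A", "grid", "x86_64", 12000),
        ComputeResource("SITE-B", "cloud", "x86_64", 4000),
        ComputeResource("SITE-C", "hpc", "gpu", 8000),
    ]
    print(aggregate(resources))   # {'grid': 12000, 'cloud': 4000, 'hpc': 8000}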


2020 ◽  
Vol 245 ◽  
pp. 07024
Author(s):  
Robert Gardner ◽  
Lincoln Bryant ◽  
Shawn McKee ◽  
Judith Stephen ◽  
Ilija Vukotic ◽  
...  

One of the most costly factors in providing a global computing infrastructure such as the WLCG is the human effort in deployment, integration, and operation of the distributed services supporting collaborative computing, data sharing and delivery, and analysis of extreme-scale datasets. Furthermore, the time required to roll out global software updates, introduce new service components, or prototype novel systems requiring coordinated deployments across multiple facilities is often increased by communication latencies, staff availability, and, in many cases, the expertise required to operate bespoke services. While the WLCG (and distributed systems implemented throughout HEP) is a global service platform, it lacks the capability and flexibility of a modern platform-as-a-service, including continuous integration/continuous delivery (CI/CD) methods, development-operations capabilities (DevOps, where developers assume a more direct role in the actual production infrastructure), and automation. Most importantly, tooling which reduces required training, bespoke service expertise, and the operational effort throughout the infrastructure, most notably at the resource endpoints (sites), is entirely absent in the current model. In this paper, we explore ideas and questions around potential NoOps models in this context: what is realistic given organizational policies and constraints? How should operational responsibility be organized across teams and facilities? What are the technical gaps? What are the social and cybersecurity challenges? Conversely, what advantages does a NoOps model deliver for innovation and for accelerating the pace of delivery of new services needed for the HL-LHC era? We describe initial work along these lines in the context of providing a data delivery network supporting IRIS-HEP DOMA R&D.
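As a rough illustration of the automation the paper argues is missing, the sketch below shows a GitOps-style reconciliation loop in which each site converges on a centrally declared service state without manual rollout. The service names, versions, and the deploy step are hypothetical and not taken from the paper.

# Conceptual sketch of NoOps-style automation: each site runs a reconciliation loop
# that pulls a centrally declared service state and converges to it, removing the
# need for per-site manual rollouts. Names, versions, and deploy() are hypothetical.

DESIRED_STATE = {            # centrally declared, e.g. in a version-controlled repo
    "xcache": "5.4.2",
    "data-delivery-frontend": "1.9.0",
}


def deploy(service: str, version: str) -> None:
    # Placeholder for a container or orchestrator rollout (e.g. pulling a new image).
    print(f"rolling out {service}=={version}")


def reconcile(running: dict) -> dict:
    """Compare what a site runs against the declared state and fix any drift."""
    for service, wanted in DESIRED_STATE.items():
        if running.get(service) != wanted:
            deploy(service, wanted)
            running[service] = wanted
    return running


if __name__ == "__main__":
    site_state = {"xcache": "5.3.0"}          # this site has drifted
    print(reconcile(site_state))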

