computing power
Recently Published Documents


TOTAL DOCUMENTS

1049
(FIVE YEARS 445)

H-INDEX

33
(FIVE YEARS 8)

2022 ◽  
Vol 21 (1) ◽  
pp. 1-29
Author(s):  
Lanshun Nie ◽  
Chenghao Fan ◽  
Shuang Lin ◽  
Li Zhang ◽  
Yajuan Li ◽  
...  

With the technology trend of hardware and workload consolidation for embedded systems and the rapid development of edge computing, there has been increasing interest in supporting parallel real-time tasks to better utilize multi-core platforms while meeting stringent real-time constraints. For parallel real-time tasks, the federated scheduling paradigm, which assigns each parallel task a set of dedicated cores, achieves good theoretical bounds by ensuring exclusive use of processing resources to reduce interference. However, because cores share the last-level cache and memory bandwidth, in practice tasks may still interfere with each other despite executing on dedicated cores. Such resource interference due to concurrent accesses can be even more severe on embedded platforms or edge servers, where computing power and cache/memory space are limited. To tackle this issue, in this work we present a holistic resource allocation framework for parallel real-time tasks under federated scheduling. Under our proposed framework, in addition to dedicated cores, each parallel task is also assigned dedicated cache and memory bandwidth resources. Further, we propose a holistic resource allocation algorithm that balances the allocation across the different resources to achieve good schedulability. Additionally, we provide a full implementation of our framework by extending the federated scheduling system with Intel’s Cache Allocation Technology and MemGuard. Finally, we demonstrate the practicality of our proposed framework via extensive numerical evaluations and empirical experiments using real benchmark programs.
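The classic core-allocation bound used by federated scheduling can be sketched as follows. This is only the well-known baseline computation (minimum dedicated cores from a task's total work, critical-path length, and deadline), not the paper's holistic cache/bandwidth co-allocation algorithm, and the task parameters below are hypothetical:

```python
import math

def federated_cores(work, span, deadline):
    """Minimum dedicated cores for a parallel task under federated
    scheduling, via the classic bound n_i = ceil((C_i - L_i) / (D_i - L_i)),
    where C_i is total work, L_i the critical-path length (span), and
    D_i the deadline."""
    if span >= deadline:
        raise ValueError("infeasible task: span exceeds deadline")
    return math.ceil((work - span) / (deadline - span))

# hypothetical task set: (total work C, span L, deadline D), in ms
tasks = [(12.0, 2.0, 4.0), (20.0, 5.0, 10.0)]
cores = [federated_cores(c, l, d) for c, l, d in tasks]
```

Each task then runs exclusively on its assigned cores; the paper's contribution is to co-allocate cache ways and memory bandwidth on top of this per-task core count.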


Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 640
Author(s):  
Dhananjay Singh ◽  
Mario Divan ◽  
Madhusudan Singh

The term IoT (Internet of Things) encompasses rapidly developing advanced devices that deliver high computing power within a constrained VLSI design space [...]


Author(s):  
Carmen Isensee ◽  
Kai-Michael Griese ◽  
Frank Teuteberg

In recent years, various studies have highlighted the opportunities of artificial intelligence (AI) for our society. For example, AI solutions can help reduce pollution, waste, or carbon footprints. On the other hand, there are also risks associated with the use of AI, such as increasing inequality in society or the high resource consumption of computing power. This paper explores the question of how corporate culture influences the use of artificial intelligence in terms of sustainable development. This type of use includes a normative element and is referred to in the paper as sustainable artificial intelligence (SAI). Based on a bibliometric literature analysis, we identify features of a sustainability-oriented corporate culture. We offer six propositions examining the influence of specific cultural manifestations on the handling of AI in the sense of SAI. Thus, if companies want to ensure that SAI is realized, corporate culture appears to be both an important indicator and an influencing factor.


2022 ◽  
pp. 146906672110733
Author(s):  
Sean Sebastian Hughes ◽  
Marcus M. K. Hughes ◽  
Rasmus Voersaa Jonsbo ◽  
Carsten Uhd Nielsen ◽  
Frants Roager Lauritsen ◽  
...  

Beer is a complex mix of more than 7700 compounds, around 800 of which are volatile. While GC-MS has been actively employed in the analysis of the volatome of beer, this method is challenged by the complex nature of the sample. Herein, we explored the possibility of using membrane-inlet mass spectrometry (MIMS) coupled to KNIME to characterize local Danish beers. KNIME (the Konstanz Information Miner) is a free, open-source data processing software that comes with several prebuilt nodes that, when organized, result in data processing workflows allowing swift analysis of data, with outputs that can be visualized in the desired format. KNIME has been shown to be promising for automating the processing of large datasets and requires very little computing power; in fact, most of the computations can be carried out on a regular PC. Herein, we have utilized a KNIME workflow for visualization of MIMS data to understand the global volatome of beers. Feature identification was not yet possible, but with a combination of MIMS and a KNIME workflow we were able to distinguish beers from different micro-breweries located in Denmark, laying the foundation for the use of MIMS in future analyses of the beer volatome.
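Although the paper's KNIME workflow is graphical, the core idea of separating breweries by their volatome fingerprints can be illustrated with a minimal Python analogue. The m/z intensity vectors below are invented, and cosine similarity is our choice of measure, not necessarily what the workflow uses:

```python
import math

def cosine(u, v):
    """Cosine similarity between two intensity vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# hypothetical m/z intensity fingerprints for two breweries
fingerprints = {
    "brewery_A": [0.9, 0.1, 0.4, 0.0],
    "brewery_B": [0.1, 0.8, 0.0, 0.5],
}

# an unknown sample is assigned to the most similar fingerprint
sample = [0.85, 0.15, 0.35, 0.05]
best = max(fingerprints, key=lambda k: cosine(sample, fingerprints[k]))
```

Even without identifying individual compounds, such whole-spectrum comparison is enough to tell the sources apart, which mirrors the paper's finding.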


2022 ◽  
Vol 6 (1) ◽  
pp. 5
Author(s):  
Giuseppe Di Modica ◽  
Orazio Tomarchio

In the past twenty years, we have witnessed an unprecedented production of data worldwide that has generated a growing demand for computing resources and has stimulated the design of computing paradigms and software tools to efficiently and quickly obtain insights from such Big Data. State-of-the-art parallel computing techniques such as MapReduce guarantee high performance in scenarios where the involved computing nodes are equally sized and clustered via broadband network links, and the data are co-located with the cluster of nodes. Unfortunately, the mentioned techniques have proven ineffective in geographically distributed scenarios, i.e., computing contexts where nodes and data are geographically distributed across multiple distant data centers. In the literature, researchers have proposed variants of the MapReduce paradigm that are aware of the constraints imposed in those scenarios (such as the imbalance of the nodes' computing power and of the interconnecting links) and enforce smart task scheduling strategies. We have designed a hierarchical computing framework in which a context-aware scheduler orchestrates computing tasks that leverage the potential of the vanilla Hadoop framework within each data center taking part in the computation. In this work, after presenting the features of the developed framework, we advocate the opportunity of fragmenting the data in a smart way so that the scheduler produces a fairer distribution of the workload among the computing tasks. To prove the concept, we implemented a software prototype of the framework and ran several experiments on a small-scale testbed. Test results are discussed in the last part of the paper.
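A minimal sketch of the kind of power-aware data fragmentation advocated above: splitting records across data centers in proportion to their computing power so that each site's map phase finishes at roughly the same time. The data-center names and power ratings are hypothetical, and the actual scheduler is more elaborate:

```python
def fragment(total_records, node_power):
    """Split a dataset across data centers proportionally to their
    computing power (higher power -> larger fragment)."""
    total_power = sum(node_power.values())
    shares = {name: int(total_records * p / total_power)
              for name, p in node_power.items()}
    # hand any rounding remainder to the most powerful data center
    remainder = total_records - sum(shares.values())
    strongest = max(node_power, key=node_power.get)
    shares[strongest] += remainder
    return shares

# hypothetical sites with relative compute power 4 : 4 : 2
shares = fragment(1000, {"dc_eu": 4, "dc_us": 4, "dc_asia": 2})
```

With equally sized fragments, the slowest site would dominate the job's completion time; proportional fragments equalize the per-site finishing times instead.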


2022 ◽  
Vol 12 (2) ◽  
pp. 531
Author(s):  
Camilo Denis González ◽  
Daniel Frias Mena ◽  
Alexi Massó Muñoz ◽  
Omar Rojas ◽  
Guillermo Sosa-Gómez

Conventional electronic voting systems use a centralized scheme. A central administration of these systems manages the entire voting process and has partial or total control over the database and the system itself. This creates some problems, accidental or intentional, such as possible manipulation of the database and double voting. Many of these problems have been solved by permissionless blockchain technologies in new voting systems; however, the classic consensus methods of such blockchains require significant computing power during each voting operation. This has a considerable impact on power consumption, compromises efficiency, and increases system latency. Using a permissioned blockchain instead improves efficiency and reduces the system's energy consumption, mainly due to the elimination of the typical consensus protocols used by public blockchains. The use of smart contracts provides a secure mechanism to guarantee the accuracy of the voting result, makes the counting procedure public and protected against fraudulent actions, and contributes to preserving the anonymity of the votes. Their adoption in electronic voting systems can help mitigate some of these problems. Therefore, this paper proposes a system that ensures high reliability by applying enterprise blockchain technology to electronic voting, securing the secret ballot. In addition, a flexible network configuration is presented, discussing how the solution addresses some of the security and reliability issues commonly faced by electronic voting system solutions.
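The double-vote prevention and public tallying that a smart contract would enforce can be sketched as a toy in-memory ledger. In the proposed system this logic would run as chaincode on a permissioned blockchain (e.g., a Hyperledger-style network); here it is only a Python illustration with hypothetical voter IDs:

```python
import hashlib

class VotingLedger:
    """Toy append-only vote ledger: voter IDs are stored only as hashes
    (preserving anonymity), a second vote by the same ID is rejected,
    and the running tally is publicly readable."""

    def __init__(self):
        self.seen = set()   # SHA-256 hashes of voter IDs already counted
        self.tally = {}     # candidate -> vote count

    def cast(self, voter_id, candidate):
        h = hashlib.sha256(voter_id.encode()).hexdigest()
        if h in self.seen:
            return False    # double voting rejected
        self.seen.add(h)
        self.tally[candidate] = self.tally.get(candidate, 0) + 1
        return True

ledger = VotingLedger()
ledger.cast("voter-1", "A")
ok = ledger.cast("voter-1", "B")   # second attempt by the same voter
```

On a real permissioned chain, the `seen` set and `tally` would live in the shared world state, so every peer validates the same double-vote check.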


Sensors ◽  
2022 ◽  
Vol 22 (1) ◽  
pp. 379
Author(s):  
Grzegorz Piecuch ◽  
Rafał Żyła

The article presents an extensive analysis of the literature related to the diagnosis of the extrusion process and proposes a new, unique method. This method is based on observing the displacement signal of the punch relative to the die and then approximating this signal with a polynomial. It is difficult to find in the literature even an attempt to solve the problem of diagnosing the extrusion process by means of a simple distance measurement; the dominant approach relies on strain gauges, force sensors, or even accelerometers. The authors, however, managed to use the displacement signal, and it is a key element of the method presented in the article. The authors' aim was to propose an effective method that is simple to implement, does not require high computing power, and can act and make decisions in real time. As input to the classifier, the authors provided the determined polynomial coefficients and the SSE (Sum of Squared Errors) value. Based on the SSE value alone, the decision tree algorithm performed anomaly detection with an accuracy of 98.36%. With regard to the duration of the experiment (a single extrusion process), the decision was made after 0.44 s, which is on average 26.7% of the extrusion experiment's duration. The article describes the method and the achieved results in detail.
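The SSE-based decision step can be sketched as follows. The reference polynomial, displacement samples, and threshold are all hypothetical, and a single-threshold stump stands in for the paper's decision tree:

```python
def poly_eval(coeffs, x):
    """Evaluate a polynomial with coefficients [c0, c1, c2, ...] at x."""
    return sum(c * x**i for i, c in enumerate(coeffs))

def sse(coeffs, xs, ys):
    """Sum of squared errors between the fitted polynomial and the
    measured punch-displacement samples."""
    return sum((poly_eval(coeffs, x) - y) ** 2 for x, y in zip(xs, ys))

def is_anomalous(sse_value, threshold=0.5):
    """Single-threshold stump on SSE, standing in for the decision tree."""
    return sse_value > threshold

# hypothetical reference polynomial (displacement ~ x on a healthy run)
coeffs = [0.0, 1.0]
xs = [0.0, 0.1, 0.2, 0.3]
healthy = [0.0, 0.1, 0.2, 0.3]   # follows the reference closely
faulty = [0.0, 0.5, 1.0, 1.5]    # deviates strongly -> large SSE
```

Because only a polynomial evaluation and one sum are needed per decision, the check is cheap enough to run online during the extrusion stroke, which matches the paper's real-time goal.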


2022 ◽  
Vol 14 (1) ◽  
pp. 21
Author(s):  
Weiwei Zhang ◽  
Xin Ma ◽  
Yuzhao Zhang ◽  
Ming Ji ◽  
Chenghui Zhen

Due to the arbitrariness of the drone's shooting angle and camera movement, and the limited computing power of the drone platform, pedestrian detection in drone scenes poses a great challenge. This paper proposes a new convolutional neural network structure, SMYOLO, which achieves a balance of accuracy and speed in three ways: (1) by combining depthwise separable convolution and pointwise convolution and replacing the activation function, it reduces the computation and parameters of the original network; (2) by adding a batch normalization (BN) layer, SMYOLO accelerates convergence and improves generalization ability; and (3) through scale matching, it reduces the feature loss of the original network. Compared with the original network model, SMYOLO reduces the accuracy of the model by only 4.36%, while the model size is reduced by 76.90%, the inference speed is increased by 43.29%, and target detection is accelerated by 33.33%, minimizing the network model's volume while preserving its detection accuracy.
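The parameter saving from replacing a standard convolution with a depthwise separable plus pointwise pair, as in point (1), can be verified with a quick count; the layer sizes below are illustrative, not taken from SMYOLO:

```python
def standard_conv_params(k, c_in, c_out):
    """Parameters of a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    """Depthwise k x k convolution (one filter per input channel)
    followed by a 1x1 pointwise convolution (bias ignored)."""
    return k * k * c_in + c_in * c_out

# illustrative layer: 3x3 kernel, 64 input channels, 128 output channels
std = standard_conv_params(3, 64, 128)
sep = separable_conv_params(3, 64, 128)
reduction = 1 - sep / std   # fraction of parameters saved
```

For this layer the separable pair needs roughly an eighth of the parameters, which is the mechanism behind SMYOLO's 76.90% model-size reduction.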


Sensors ◽  
2022 ◽  
Vol 22 (1) ◽  
pp. 329
Author(s):  
Congming Tan ◽  
Shuli Cheng ◽  
Liejun Wang

Recently, many super-resolution reconstruction (SR) feedforward networks based on deep learning have been proposed. These networks enable the reconstructed images to achieve convincing results. However, due to their large amount of computation and parameters, SR technology is greatly limited on devices with limited computing power. To trade off network performance against network parameters, in this paper we propose an efficient image super-resolution network via self-calibrated feature fuse, named SCFFN, by constructing a self-calibrated feature fuse block (SCFFB). Specifically, to recover as much of the image's high-frequency detail as possible, we propose the SCFFB via self-transformation and self-fusion of features. In addition, to accelerate network training while reducing the computational complexity of the network, we employ an attention mechanism in the reconstruction part of the network, called U-SCA. Compared with the existing transposed convolution, it can greatly reduce the computational burden of the network without reducing the reconstruction effect. We have conducted full quantitative and qualitative experiments on public datasets, and the experimental results show that the network achieves performance comparable to other networks while requiring fewer parameters and computational resources.
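The general idea behind attention-gated reconstruction can be illustrated with a minimal channel-attention sketch. This is not the actual U-SCA design; the feature maps and weights below are hypothetical, and the sketch only shows how a per-channel gate reweights features more cheaply than a full transposed convolution would:

```python
import math

def channel_attention(features, weights):
    """Minimal channel-attention sketch: global average pooling yields one
    descriptor per channel, a (hypothetical) learned weight scales it, and
    a sigmoid gate reweights the whole feature map."""
    gated = []
    for channel, w in zip(features, weights):
        pooled = sum(channel) / len(channel)        # global average pooling
        gate = 1 / (1 + math.exp(-w * pooled))      # sigmoid gate in (0, 1)
        gated.append([v * gate for v in channel])
    return gated

# two toy (flattened) feature maps with hypothetical attention weights
out = channel_attention([[1.0, 1.0], [0.0, 0.0]], [10.0, 10.0])
```

The gate costs one multiply per activation plus a per-channel scalar, which is why attention-style reweighting can be far lighter than learning a dense transposed-convolution kernel.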


2022 ◽  
pp. 59-73
Author(s):  
Saurabh Tiwari ◽  
Prakash Chandra Bahuguna ◽  
Jason Walker

There will be a revolution in industry and society as a result of Industry 5.0. Human-robot co-working, enabled by collaborative robots (cobots), is a key component of Industry 5.0, which will overcome the limitations of the previous industrial revolution. Humans and machines will work together in this revolution to increase the efficiency of processes by utilising human brainpower and creativity. To solve complex problems more efficiently and with less human intervention, Industry 5.0 provides a strong foundation for advanced digital manufacturing systems built on interconnected networks and powerful computing power and designed to communicate with other systems. To enhance customer satisfaction, Industry 5.0 involves a shift from mass customization to mass personalization, along with a shift from the digital use of data to the intelligent use of data for sustainable development. On the basis of comparative analysis, this chapter outlines Industry 5.0's definition, its elements and components, and its application and future scope.

