software failures
Recently Published Documents


TOTAL DOCUMENTS

177
(FIVE YEARS 41)

H-INDEX

16
(FIVE YEARS 3)

2022 ◽  
pp. 31-49
Author(s):  
Jorge Barbosa

Computers, and personal computers in particular, can be used for harmful actions affecting global computer systems as a whole for two main reasons: (1) hardware and/or software failures, which stem from manufacturing problems and must be addressed by the respective manufacturers, and (2) failures due to the actions or inactions of users, in particular people with low computer skills, very young users such as children, elderly users, or others without a minimum of computer literacy. The problem is aggravated by the continuous proliferation of equipment, notably mobile devices, IoT devices and other hardware with Internet connectivity, typically through a browser. This work develops possible approaches in the area of cyber education that can contribute to the cyber resilience of society.


2021 ◽  
Author(s):  
Danyang Zheng ◽  
Gangxiang Shen ◽  
Yongcheng Li ◽  
Xiaojun Cao ◽  
Biswanath Mukherjee

In the upcoming 5G-and-beyond era, ultra-reliable low-latency communication (URLLC) services will be ubiquitous in edge networks. To improve network performance and quality of service (QoS), URLLC services can be delivered via a sequence of software-based network functions, also known as service function chains (SFCs). Towards reliable SFC delivery, it is imperative to incorporate deterministic fault tolerance during SFC deployment. However, deploying an SFC with deterministic fault tolerance is challenging because the protection mechanism must jointly guard against physical/virtual network failures and hardware/software failures. Against multiple and diverse failures, this work investigates how to effectively deliver an SFC in optical edge networks with deterministic fault tolerance while minimizing wavelength resource consumption. We introduce a protection-augmented graph, called the k-connected service function slices layered graph (KC-SLG), which protects against k-1 fiber link failures and k-1 server failures. We formulate a novel problem called deterministic-fault-tolerant SFC embedding and propose an effective algorithm, called most-candidate-first SF slices layered graph embedding (MCF-SE). MCF-SE employs two proposed techniques: k-connected network slicing (KC-NS) and k-connected function slicing (KC-FS). Through mathematical proof, we show that KC-NS is 2-approximate. For KC-FS, we demonstrate that k = 3 provides the best cost-efficiency. Our experimental results also show that the proposed MCF-SE achieves deterministic-fault-tolerant service delivery and outperforms schemes directly extended from existing work in terms of survivability and average cost-efficiency.
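As a minimal illustration of the survivability requirement behind KC-SLG, and not the authors' construction, the check below verifies that a candidate placement of consecutive service functions leaves at least k edge-disjoint substrate paths between the hosting nodes, so that the chain can survive up to k-1 fiber link failures. The topology, node names and the value of k are assumptions made for the example; networkx computes the disjoint paths.

# Sketch only: check that consecutive SFC hosts are connected by at least k
# edge-disjoint paths, i.e. the chain tolerates k-1 fiber link failures.
# This is not the KC-SLG / MCF-SE algorithm from the paper.
import networkx as nx

def survives_link_failures(g, placement, k):
    # placement: ordered list of substrate nodes hosting the SFC's functions.
    for src, dst in zip(placement, placement[1:]):
        disjoint = list(nx.edge_disjoint_paths(g, src, dst))
        if len(disjoint) < k:
            return False
    return True

if __name__ == "__main__":
    # Hypothetical 4-node optical edge topology (node names are assumptions).
    g = nx.Graph()
    g.add_edges_from([("a", "b"), ("b", "c"), ("c", "d"),
                      ("d", "a"), ("a", "c"), ("b", "d")])
    placement = ["a", "c", "d"]   # hosts for a three-function chain
    print(survives_link_failures(g, placement, k=2))   # tolerate one link failure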


Electronics ◽  
2021 ◽  
Vol 10 (23) ◽  
pp. 3047
Author(s):  
Kolade Olorunnife ◽  
Kevin Lee ◽  
Jonathan Kua

Recent years have seen the rapid adoption of Internet of Things (IoT) technologies, where billions of physical devices are interconnected to provide data sensing, computing and actuating capabilities. IoT-based systems have been extensively deployed across various sectors, such as smart homes, smart cities, smart transport, smart logistics and so forth. Newer paradigms such as edge computing have been developed so that computation and data intelligence can be performed closer to IoT devices, hence reducing latency for time-sensitive tasks. However, IoT applications are increasingly being deployed in remote and difficult-to-reach areas for edge computing scenarios, and these deployment locations make upgrading applications and dealing with software failures difficult. IoT applications are also increasingly being deployed as containers, which offer better remote management but are more complex to configure. This paper proposes an approach for effectively managing, updating and re-configuring container-based IoT software as efficiently, scalably and reliably as possible, with minimal downtime upon the detection of software failures. The approach is evaluated using Docker container-based IoT application deployments in an edge computing scenario.
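As a rough sketch of the kind of recovery loop such an approach implies, and not the authors' implementation, the Docker SDK for Python can detect containers that have exited and re-create them from a freshly pulled image. The label name and restart policy below are assumptions for the example.

# Sketch: detect failed (exited) containers and re-create them from a freshly
# pulled image using the Docker SDK for Python. Illustrative only; the label
# name and policy are assumptions, not the paper's implementation.
import docker

MANAGED_LABEL = "iot.managed"   # hypothetical label marking managed IoT apps

def recover_failed_containers(client):
    for container in client.containers.list(all=True,
                                            filters={"label": MANAGED_LABEL}):
        if container.status != "exited":
            continue
        image_tag = container.image.tags[0] if container.image.tags else None
        name = container.name
        container.remove(force=True)                  # drop the failed instance
        if image_tag:
            client.images.pull(image_tag)             # fetch the latest build
            client.containers.run(image_tag,
                                  name=name,
                                  detach=True,
                                  labels={MANAGED_LABEL: "true"},
                                  restart_policy={"Name": "on-failure"})

if __name__ == "__main__":
    recover_failed_containers(docker.from_env())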


Author(s):  
M. Shaheda Begum

Abstract: The exponential growth and huge success of cloud data services have made the cloud a common place for data to be not only stored but also shared across multiple users. Our scheme also has the added feature of access control, in which only valid users are able to decrypt the stored information. Unfortunately, the integrity of cloud data is subject to skepticism due to the existence of hardware/software failures and human errors. Several mechanisms have been designed to allow both data owners and public verifiers to efficiently audit cloud data integrity without retrieving the entire data from the cloud server. However, public auditing of the integrity of shared data with these existing mechanisms inevitably reveals confidential information, namely the identity of the signer, to public verifiers. In this paper, we propose a novel privacy-preserving mechanism that supports public auditing of shared data stored in the cloud. In particular, we exploit ring signatures to compute the verification metadata needed to audit the correctness of shared data. With our mechanism, the identity of the signer of each block of shared data is kept private from public verifiers, who are able to efficiently verify shared-data integrity without retrieving the entire file. In addition, our mechanism is able to perform multiple auditing tasks simultaneously instead of verifying them one by one. Our experimental results demonstrate the effectiveness and efficiency of our mechanism when auditing shared data integrity. Keywords: Public auditing, privacy-preserving, shared data, cloud computing
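The ring-signature construction itself is too involved for a short example, but the basic idea of auditing without retrieving the entire file can be sketched as a hash-based spot check: the verifier keeps one tag per block and challenges a random sample of blocks. This is a simplified baseline, not the paper's privacy-preserving mechanism; the block size and sample size are assumptions.

# Simplified spot-check audit: the verifier keeps per-block hashes as
# verification metadata and challenges only a random sample of blocks, so the
# whole file is never downloaded. NOT the paper's ring-signature mechanism.
import hashlib
import random

BLOCK_SIZE = 4096   # assumed block size

def split_blocks(data):
    return [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]

def make_tags(blocks):
    # Metadata kept by the verifier: one digest per block.
    return [hashlib.sha256(b).hexdigest() for b in blocks]

def audit(tags, fetch_block, sample_size=10):
    # fetch_block(i) asks the cloud server for block i on demand.
    challenged = random.sample(range(len(tags)), min(sample_size, len(tags)))
    return all(hashlib.sha256(fetch_block(i)).hexdigest() == tags[i]
               for i in challenged)

if __name__ == "__main__":
    data = b"example shared file contents " * 2000
    blocks = split_blocks(data)                # what the cloud stores
    tags = make_tags(blocks)                   # what the verifier stores
    print(audit(tags, lambda i: blocks[i]))    # True while the data is intact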


2021 ◽  
Vol 8 (2) ◽  
pp. 023-031
Author(s):  
Monday Eze ◽  
Charles Okunbor

Software Engineering is a branch of Computer Science that evolved in response to the urgent need to deal with decades of software crisis, characterized by low theoretical knowledge and poor practice in the construction of error-free and efficient software. The introduction of well-organized scientific, engineering and management strategies into the software development process has led to major breakthroughs and to solutions to software failures. One of the obvious game-changers in this regard is the evolution of the Software Development Life Cycle, also known as the Software Process Model, for driving the different phases of software construction. A sound understanding of the process model is therefore indispensable, not just for software developers, but also for users and researchers. Such theoretical and practical understanding informs decisions on which process model is best for a particular job or perspective, which in turn contributes immensely to the probability of success or failure of the project in question; hence the necessity for this research. This work presents an unambiguous exposition of selected software development model variants. A total of four process model variants were studied in a theoretical, visual and analytical manner, and analyzed using a strength-versus-weakness (SVW) tabular comparison. The work concludes by presenting guides towards the choice of these models. This research is expected to be a useful reference for software practitioners and researchers.


2021 ◽  
Vol 1 (2) ◽  
pp. 82-93
Author(s):  
I.V. Kovalev ◽  
M.V. Saramud ◽  
V.V. Losev ◽  
A.A. Koltashev

The paper presents a method and tools for the verification and validation of onboard software that guarantee its compliance with all established functional and non-functional requirements throughout the entire life cycle of cross-platform onboard software. The approach not only increases the fault tolerance of the control-system software during operation, but also allows statistics to be collected on the behaviour of software components during the real operation of all subsystems. This information helps identify the situations in which software failures appear, which in turn supports the development of more reliable software components. The results of operating the version control function of the onboard software in a simulation environment are presented, and the process of collecting statistics for identifying faulty versions is described.
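A minimal sketch, under assumed component versions, of how running several versions of the same onboard component, voting on their outputs and recording which version disagrees with the majority yields the kind of failure statistics described above (this is an illustration, not the authors' execution environment):

# Sketch: run several versions of one onboard component, vote on the result,
# and count how often each version disagrees with the majority. The versions
# below are assumptions for illustration, not the authors' code.
from collections import Counter

def run_versions(versions, inputs):
    disagreements = Counter()
    for x in inputs:
        outputs = [v(x) for v in versions]
        majority, _ = Counter(outputs).most_common(1)[0]
        for i, out in enumerate(outputs):
            if out != majority:
                disagreements[f"version_{i}"] += 1
    return disagreements

if __name__ == "__main__":
    # Three hypothetical versions of the same computation; version 2 is
    # faulty for negative inputs.
    versions = [lambda x: x * x,
                lambda x: x ** 2,
                lambda x: x * abs(x)]
    print(run_versions(versions, range(-5, 6)))   # Counter({'version_2': 5})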


2021 ◽  
Vol 1 (2) ◽  
pp. 22-33
Author(s):  
I.V. Kovalev ◽  
M.V. Saramud ◽  
V.V. Losev ◽  
A.A. Koltashev

The paper presents a method and tools for the verification and validation of onboard software that guarantee its compliance with all established functional and non-functional requirements throughout the entire life cycle of cross-platform onboard software. The approach not only increases the fault tolerance of the control-system software during operation, but also allows statistics to be collected on the behaviour of software components during the real operation of all subsystems. This information helps identify the situations in which software failures appear, which in turn supports the development of more reliable software components. The results of operating the version control function of the onboard software in a simulation environment are presented, and the process of collecting statistics for identifying faulty versions is described.


2021 ◽  
Vol 11 (14) ◽  
pp. 6335
Author(s):  
Yifan Li ◽  
Hong-Zhong Huang ◽  
Tingyu Zhang

Hardware-and-software integrated systems, such as command and control (C4ISR) systems, are typical systems comprised of both software and hardware. The failures of such systems result from complicated common-cause failures and common (or shared) signals, which make classical reliability analysis methods inapplicable. To this end, this paper applies the Goal-Oriented (GO) methodology to analyze the reliability of a C4ISR system in detail. The reliability, as well as the failure probability, of the C4ISR system is obtained from the constructed GO model. At the component level, the reliability of the units of the C4ISR system is computed. An importance analysis of the failures of such a system is completed using the qualitative analysis capability of the GO model, by which critical hardware failures, such as communication module and motherboard module failures, as well as software failures, such as network module application software failures and decompression module software failures, are identified. The method of this paper contributes to the reliability analysis of all hardware-and-software integrated systems.
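The GO model itself is not reproduced here, but the component-level figures it combines follow standard reliability algebra. A minimal sketch with assumed unit reliabilities shows how series sub-structures (all units must work) and parallel sub-structures (at least one must work) roll up to a system-level reliability and failure probability:

# Sketch of standard series/parallel reliability algebra with assumed unit
# reliabilities; it illustrates how component figures roll up to a system
# value and is not the GO-methodology model from the paper.
from math import prod

def series(reliabilities):
    # All units must work.
    return prod(reliabilities)

def parallel(reliabilities):
    # At least one unit must work.
    return 1 - prod(1 - r for r in reliabilities)

if __name__ == "__main__":
    # Hypothetical values: redundant communication modules in parallel, then
    # motherboard hardware and application software in series with them.
    comms = parallel([0.95, 0.95])          # 0.9975
    system = series([comms, 0.99, 0.98])    # comms, motherboard, software
    print(f"reliability = {system:.4f}, failure probability = {1 - system:.4f}")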


2021 ◽  
pp. 111043
Author(s):  
Domenico Cotroneo ◽  
Luigi De Simone ◽  
Pietro Liguori ◽  
Roberto Natella
