software upgrades
Recently Published Documents

TOTAL DOCUMENTS: 79 (five years: 20)
H-INDEX: 10 (five years: 1)

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Adarsh Anand ◽  
Subhrata Das ◽  
Mohini Agarwal ◽  
Shinji Inoue

Purpose
In the current market scenario, software upgrades and updates have proved very handy in improving the reliability of software in its operational phase. Software upgrades reinvent working software through major changes, such as functionality addition, feature enhancement and structural changes. Software updates involve minor changes that improve software performance by fixing bugs and security issues in the current version. Through the current proposal, the authors wish to highlight the economic benefits of the combined use of upgrade and update services; a cost analysis model is proposed for the same.

Design/methodology/approach
The article discusses a cost analysis model highlighting the distinction between launch time and the time to end the testing process. The number of bugs to be addressed in each release is determined, including the count of latent bugs from the previous version. Convolution theory is utilized to incorporate the joint role of tester and user in bug detection into the model, and the cost incurred in the debugging process is determined. An optimization model is designed that minimizes the total debugging cost subject to reliability and budget constraints; this optimization is used to determine the release time and the testing stop time.

Findings
The proposal is backed by a real-life software bug dataset consisting of four releases. The model successfully determined the ideal software release time and testing stop time. Increased profit is generated by releasing the software earlier and continuing to test long after its release.

Originality/value
The work contributes positively to the field by providing an effective optimization model that determines the economic benefit of the combined use of upgrade and update services. The model can be used by management to determine timelines and the cost that will be incurred, depending on their product and available resources.
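The abstract does not give the model's equations, but the release-time versus testing-stop-time trade-off it describes can be sketched with a standard exponential software-reliability growth model (a Goel-Okumoto mean value function) and a simple grid search. All parameter values, cost coefficients and the reliability threshold below are illustrative assumptions, not the authors' fitted model.

```python
import math

# Illustrative parameters (assumptions, not the paper's fitted values)
A, B = 100.0, 0.05    # Goel-Okumoto: expected total bugs, detection rate
C_TEST = 1.0          # cost of fixing a bug found before release
C_FIELD = 5.0         # cost of fixing a bug found after release
C_DELAY = 2.0         # opportunity cost per unit time of a late launch
C_TIME = 0.5          # cost per unit time of keeping the test team active
R_MIN = 0.95          # required fraction of bugs removed by testing stop

def m(t):
    """Expected number of bugs detected by time t (Goel-Okumoto)."""
    return A * (1 - math.exp(-B * t))

def total_cost(t_release, t_stop):
    """Total debugging cost for a given release time and testing stop time."""
    pre = C_TEST * m(t_release)                  # bugs fixed before launch
    post = C_FIELD * (m(t_stop) - m(t_release))  # bugs fixed in the field
    delay = C_DELAY * t_release                  # revenue lost by delaying
    effort = C_TIME * t_stop                     # sustained testing effort
    return pre + post + delay + effort

def optimize(horizon=200):
    """Grid search for (release time, testing stop time) minimizing cost
    subject to the reliability constraint m(t_stop)/A >= R_MIN."""
    best = None
    for t_stop in range(1, horizon + 1):
        if m(t_stop) / A < R_MIN:        # reliability constraint
            continue
        for t_release in range(1, t_stop + 1):
            c = total_cost(t_release, t_stop)
            if best is None or c < best[0]:
                best = (c, t_release, t_stop)
    return best

cost, t_release, t_stop = optimize()
print(f"release at t={t_release}, stop testing at t={t_stop}, cost={cost:.1f}")
```

With these invented coefficients the optimum releases the software before testing stops, reproducing the abstract's qualitative finding that early release with continued post-release testing is economical.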


2021 ◽  
Vol 11 (22) ◽  
pp. 10581
Author(s):  
Costel-Ciprian Raicu ◽  
George-Călin Seriţan ◽  
Bogdan-Adrian Enache ◽  
Marilena Stănculescu

Headlight development for the automotive industry is subject to high volatility due to frequent changes in features, styling and design, hardware interfaces, and software upgrades required by the OEM, the supplier, or new regulatory trends. Standard development models based on the V-cycle and compliant with CMMI do not react well to constant change. The article proposes an approach based on mixed development strategies across the core domains (Lean, Scrum, Feature-Driven Development, and VDI) to satisfy the APQP milestones, and a canvas-type model portraying the rapid delivery of headlights is proposed. The efficiency and effectiveness of the model are assessed against the assumed number of changes for new high-end headlights, based on experience and real cases. A delivery-baseline chart for LED-based headlight development (planned versus actual) is presented and explained.


2021 ◽  
pp. 165-171
Author(s):  
Abirami S.K ◽  
Keerthika J

This article examines Cloud-Based Design and Manufacturing (CBDM) from a critical standpoint. Cloud technology has lately found its way into the realm of computer-assisted product creation. As a first step of implementation, corporations could explore substituting their existing CAD software licences with design software delivered as a cloud-based service. Installing a CAD program via the cloud on a carrier's server and paying a small fraction of the initial licensing price on a pay-per-use basis is certainly tempting. Furthermore, time- and money-consuming software upgrades and operational issues cease to be a concern. We provide an introduction to cloud technology and the intrinsic features that drive its usage in both the business and education domains for distributed and interactive design and production. Cloud technology is a hotly debated information technology (IT) model that is expected to have a major effect on how businesses are run in the future. While cloud technology was first proposed in the late 1960s, only recently has it become a viable part of day-to-day IT systems, thanks to the Internet's increasing prevalence and other modern improvements in information and communication technology (ICT).
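The pay-per-use argument in the abstract can be made concrete with a simple break-even calculation. The prices, maintenance fees and usage figures below are invented purely for illustration; real licence and cloud pricing varies widely.

```python
def owned_cost(years, license_price, annual_maintenance):
    """Total cost of a perpetual licence plus yearly maintenance fees."""
    return license_price + annual_maintenance * years

def cloud_cost(years, hours_per_year, hourly_rate):
    """Total pay-per-use cost; upgrades and operations are assumed to be
    folded into the hourly rate, as the cloud model promises."""
    return hours_per_year * hourly_rate * years

# Invented figures: a 6,000-unit perpetual CAD licence with 1,000/year
# maintenance, versus 800 hours/year of cloud CAD at 4.0 per hour.
crossover = next(
    y for y in range(1, 21)
    if cloud_cost(y, 800, 4.0) > owned_cost(y, 6000, 1000)
)
print(f"Pay-per-use stays cheaper until year {crossover}")
```

For light or short-term usage the pay-per-use model wins; sustained heavy usage eventually crosses over, which is why the abstract frames cloud CAD as attractive for an initial, exploratory adoption step.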


2021 ◽  
Vol 2 (6) ◽  
pp. 1-31
Author(s):  
Riccardo Bassoli ◽  
Frank H.P. Fitzek ◽  
Emilio Calvanese Strinati

The study and design of 5G seem to have reached their end, and 5G communication systems are currently under deployment. In parallel, 5G standardization is at Release 16, which will complete the definition and the design guidelines of the 5G radio access network. Because of that, the interest of the scientific and industrial communities has already begun shifting to future 6G communication networks. The preliminary definitions of technology trends towards 2030 given by major standardization bodies, together with the flagship 6G projects worldwide, have started proposing various visions of what 6G will be. Side by side, various scientific articles addressing an initial characterisation of 6G have also been published. However, considering the promises of 5G, can 6G represent a technological advancement significant enough to justify a so-called new generation? In fact, 5G softwarized networks may now simply imply continuous network software upgrades (as happens for the Internet) instead of new generations every ten years. This article starts by describing the main characteristics that made 5G a breakthrough in telecommunications, briefly introducing the network virtualisation and computing paradigms that have reformed telecommunications. Next, by providing a rigorous definition of the terminology and a survey of the principal 6G visions proposed, the paper tries to establish the motivations and characteristics that can really justify the need for, and the novelty of, future 6G communication networks.


Author(s):  
D. Ricci ◽  
L. Cabona ◽  
C. Righi ◽  
A. La Camera ◽  
F. Nicolosi ◽  
...  

We present technical, instrumental, and software upgrades completed and planned at the astronomical observatory "Osservatorio Astronomico Regionale Parco Antola, Fascia" (OARPAF), which hosts an 80 cm alt-az Cassegrain-Nasmyth telescope. The observatory, located in the Ligurian Apennines, can currently be operated either for scientific (photometric camera) or amateur (ocular) observations by switching the tertiary mirror between the two Nasmyth foci with a manual handle. The main scientific observational topics to date concern exoplanetary transits, QSOs, and gravitationally lensed quasars, and results have recently been published. A strategy for remote and robotic operation of the entire structure (telescope, dome, instruments, sensors and monitoring) has been set up and is in progress. We report the current upgrades: on the hardware side, they mainly concern the robotization of the dome. On the instrumentation side, a new modular support for instruments with spectrophotometric capabilities is in a preliminary design phase; it will improve the telescope's performance and broaden the potential science fields. In this framework, the procurement of spectrophotometric material has started. On the software side, an innovative web-based application relying on WebSockets and Node.js can already be used to control the camera, and it will be extended to manage the other components of the instrument, the observatory, and the image database storage.


2021 ◽  
Vol 7 ◽  
pp. 65-70
Author(s):  
Tsvetelina Simeonova

The aim of this work is to consider and compare the features of distributed systems, SCADA, and the IIoT. Both SCADA and the IoT include sensors and data collection; although they differ in many respects, they share a common goal, and the idea of the smart grid leads to their integration. SCADA is useful for monitoring and managing installations or industrial equipment. The Internet of Things is a collection of physical devices with different implementations, software upgrades, sensors, actuators, and network connectivity, all of which work together to enable objects to connect and exchange data. The focus of the article is a comparative consideration of the development of the industrial technologies IoT and SCADA as distributed systems. The presentation includes a summary of the characteristics of these two technologies and a structural-functional analysis of the efficiency of integrating the latest generations of SCADA systems into the functionality of the IoT. The possibilities for integration are shown, as well as the prerequisites for it. The results can be used as recommendations in the areas of design and operation.


2021 ◽  
Vol 23 (06) ◽  
pp. 767-774
Author(s):  
Niveditha. V.K ◽  
Dr. Kiran. V ◽  
Avinash Pathak ◽  
...  

The fast pace of evolution of technologies such as the Internet of Things (IoT) and cloud computing, together with the world's move towards digitalization, has created a greater need for data centers than ever before. Data centers support a wide range of internet services, including web hosting, e-commerce, and social networking. In recent years, huge data centers have been owned and run by tech giants like Google, Facebook, and Microsoft; these firms are known as hyper-scalers. Hyper-scalers are the next big thing, ready to fundamentally alter the internet world for data storage through the variety of services they supply across all technological domains. The tool for automatic software upgrade presented here focuses on providing a seamless upgrade for the devices in data centers, mainly the huge data centers owned by hyper-scalers. This paper focuses on the technologies used in developing the tool, an overview of how the tool is developed, and its features. Deploying this tool in data centers helps them deliver more efficient services.
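The abstract does not describe the tool's internals, but a seamless fleet upgrade of the kind it targets is typically orchestrated as a rolling sequence of drain, upgrade, verify, and restore steps, touching one device at a time. The sketch below illustrates that pattern; the `Device` API (`drain`, `install`, `health_ok`, `restore`) is entirely hypothetical and stands in for whatever management interface the real data-center devices expose.

```python
from dataclasses import dataclass, field

@dataclass
class Device:
    """Hypothetical stand-in for a data-center switch or server."""
    name: str
    version: str
    log: list = field(default_factory=list)

    def drain(self):
        self.log.append("drain")            # shift traffic away first
    def install(self, version):
        self.log.append(f"install {version}")
        self.version = version
    def health_ok(self):
        self.log.append("check")            # post-upgrade verification
        return True
    def restore(self):
        self.log.append("restore")          # readmit traffic

def rolling_upgrade(fleet, target):
    """Upgrade one device at a time so the service never loses more
    than a single device's capacity; abort the rollout on a failed check."""
    for dev in fleet:
        if dev.version == target:
            continue                        # already current, skip
        dev.drain()
        dev.install(target)
        if not dev.health_ok():
            return False                    # stop before touching more devices
        dev.restore()
    return True

fleet = [Device("sw1", "1.0"), Device("sw2", "1.0"), Device("sw3", "2.0")]
assert rolling_upgrade(fleet, "2.0")
print([d.version for d in fleet])  # every device now reports the target version
```

Skipping already-current devices makes the rollout idempotent, so a partially completed run can simply be re-executed after an interruption.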


2021 ◽  
Author(s):  
Jason Grealey ◽  
Loïc Lannelongue ◽  
Woei-Yuh Saw ◽  
Jonathan Marten ◽  
Guillaume Meric ◽  
...  

Abstract
Bioinformatic research relies on large-scale computational infrastructures which have a non-zero carbon footprint. So far, no study has quantified the environmental costs of bioinformatic tools and commonly run analyses. In this study, we estimate the bioinformatic carbon footprint (in kilograms of CO2 equivalent units, kgCO2e) using the freely available Green Algorithms calculator (www.green-algorithms.org). We assess (i) bioinformatic approaches in genome-wide association studies (GWAS), RNA sequencing, genome assembly, metagenomics, phylogenetics and molecular simulations, as well as (ii) computation strategies, such as parallelisation, CPU (central processing unit) vs GPU (graphics processing unit), cloud vs. local computing infrastructure and geography. In particular, for GWAS, we found that biobank-scale analyses emitted substantial kgCO2e and that simple software upgrades could make GWAS greener; e.g. upgrading from BOLT-LMM v1 to v2.3 reduced the carbon footprint by 73%. Switching from the average data centre to a more efficient one can reduce the carbon footprint by ~34%. Memory over-allocation can be a substantial contributor to an algorithm's carbon footprint. The use of faster processors or greater parallelisation reduces run time but can lead to a, sometimes substantially, greater carbon footprint. Finally, we provide guidance on how researchers can reduce power consumption and minimise kgCO2e. Overall, this work elucidates the carbon footprint of common analyses in bioinformatics and provides solutions which empower a move toward greener research.
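The kind of estimate the Green Algorithms calculator produces combines runtime, hardware power draw, data-centre efficiency (PUE) and grid carbon intensity. The simplified sketch below shows that structure; the numeric constants are illustrative assumptions, not the calculator's calibrated values.

```python
def footprint_kgco2e(runtime_h, n_cores, core_power_w, usage,
                     memory_gb, mem_power_w_per_gb, pue, carbon_intensity):
    """Estimate the carbon footprint of a computation.

    energy (kWh) = time x (core draw + memory draw) x data-centre PUE
    carbon (kgCO2e) = energy x grid carbon intensity (kgCO2e per kWh)
    """
    power_w = n_cores * core_power_w * usage + memory_gb * mem_power_w_per_gb
    energy_kwh = runtime_h * power_w * pue / 1000.0
    return energy_kwh * carbon_intensity

# Illustrative figures: a 12 h job on 8 cores drawing 12 W each at 80%
# usage, with 64 GB of memory at 0.37 W/GB, in a data centre with
# PUE 1.67, on a grid emitting 0.45 kgCO2e per kWh.
kg = footprint_kgco2e(12, 8, 12.0, 0.8, 64, 0.37, 1.67, 0.45)
print(f"{kg:.2f} kgCO2e")
```

The formula makes the abstract's levers visible: a more efficient data centre lowers `pue`, a greener grid lowers `carbon_intensity`, right-sizing memory lowers the memory term, and a software upgrade that shortens `runtime_h` scales everything down proportionally.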


2021 ◽  
Vol 2 (2) ◽  
pp. 1-25
Author(s):  
Emekcan Aras ◽  
Stéphane Delbruel ◽  
Fan Yang ◽  
Wouter Joosen ◽  
Danny Hughes

The Internet of Things (IoT) is being deployed in an ever-growing range of applications, from industrial monitoring to smart buildings to wearable devices. Each of these applications has specific computational requirements arising from its networking, system security, and edge analytics functionality. This diversity in requirements motivates the need for adaptable end-devices, which can be re-configured and re-used throughout their lifetime to handle computation-intensive tasks without sacrificing battery lifetime. To tackle this problem, this article presents Chimera, a low-power platform for research and experimentation with reconfigurable hardware for IoT end-devices. Chimera achieves flexibility and re-usability through an architecture based on a Flash Field Programmable Gate Array (FPGA) with a reconfigurable software stack that enables over-the-air hardware and software evolution at runtime. This adaptability enables low-cost hardware/software upgrades on the end-devices and an increased ability to handle computationally intensive tasks. This article describes the design of the Chimera hardware platform and software stack, evaluates it through three application scenarios, and reviews the factors that have thus far prevented FPGAs from being utilized in IoT end-devices.

