Collaborative Prognostics for Machine Fleets Using a Novel Federated Baseline Learner

2021 ◽  
Vol 13 (1) ◽  
Author(s):  
Vibhor Pandhare ◽  
Xiaodong Jia ◽  
Jay Lee

Difficulty in obtaining enough run-to-fail datasets is a major barrier that impedes the widespread acceptance of Prognostics and Health Management (PHM) technology in many applications. Recent progress in federated learning demonstrates great potential to overcome this difficulty because it allows PHM models to be trained on distributed databases without direct data sharing. This technology can therefore overcome local data scarcity by training the PHM model on multi-party databases. To demonstrate the ability of federated learning to enhance the robustness and reliability of PHM models, this paper proposes a novel federated Gaussian Mixture Model (GMM) algorithm to build universal baseline models from distributed databases. A systematic methodology for collaborative prognostics using the proposed federated GMM algorithm is further presented. The usefulness and performance are validated on a simulated dataset and the NASA Turbofan Engine Dataset. The proposed federated approach with parameter sharing is shown to perform on par with the traditional approach with data sharing. The proposed model further demonstrates improved robustness of predictions made collaboratively, while keeping the data private, compared to predictions made locally. Federated collaborative learning can serve as a catalyst for the adaptation of business models based on the servitization of assets in the era of Industry 4.0. The methodology facilitates effective learning of asset health conditions for data-scarce organizations by collaborating with other organizations while preserving data privacy. This is most suitable for a servitization model for Original Equipment Manufacturers (OEMs) who sell to multiple organizations.
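The sketch below illustrates the general idea of federating a GMM baseline under the assumption that each site fits a local mixture and shares only component parameters and a sample count; the index-based component matching and the weighted-averaging rule are simplifications for illustration, not the paper's exact aggregation algorithm.

```python
# Hypothetical sketch of one federated aggregation round for a GMM baseline.
# Each client fits a local GMM and shares only parameters (weights, means,
# covariances) plus its sample count -- never the raw data. The server forms
# a global model by sample-weighted averaging of matched components. Matching
# components by index is a simplification; the paper's rule may differ.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_local_gmm(data, n_components=3, seed=0):
    """Fit a GMM on one client's private data and return shareable parameters."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          random_state=seed).fit(data)
    return {"weights": gmm.weights_, "means": gmm.means_,
            "covariances": gmm.covariances_, "n_samples": len(data)}

def aggregate(local_params):
    """Server-side aggregation: sample-weighted average of component parameters."""
    total = sum(p["n_samples"] for p in local_params)
    coeffs = [p["n_samples"] / total for p in local_params]
    weights = sum(c * p["weights"] for c, p in zip(coeffs, local_params))
    means = sum(c * p["means"] for c, p in zip(coeffs, local_params))
    covs = sum(c * p["covariances"] for c, p in zip(coeffs, local_params))
    return weights / weights.sum(), means, covs

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    # Two clients with locally scarce, slightly shifted healthy-baseline data.
    clients = [rng.normal(loc=mu, scale=1.0, size=(200, 4)) for mu in (0.0, 0.3)]
    global_w, global_mu, global_cov = aggregate([fit_local_gmm(d) for d in clients])
    print(global_w, global_mu.shape, global_cov.shape)
```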

Electronics ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 560
Author(s):  
Andrea Bonci ◽  
Simone Fiori ◽  
Hiroshi Higashi ◽  
Toshihisa Tanaka ◽  
Federica Verdini

The prospect and potential of interfacing minds with machines has long captured human imagination. Recent advances in biomedical engineering, computer science, and neuroscience are making brain–computer interfaces a reality, paving the way to restoring and potentially augmenting human physical and mental capabilities. Brain–computer interfaces are being explored in applications as diverse as security, lie detection, alertness monitoring, gaming, education, art, and human cognition augmentation. The present tutorial aims to survey the principal features and challenges of brain–computer interfaces (such as reliable acquisition of brain signals, filtering and processing of the acquired brainwaves, ethical and legal issues related to brain–computer interfaces (BCIs), data privacy, and performance assessment), with special emphasis on biomedical engineering and automation engineering applications. The content of this paper is aimed at helping students, researchers, and practitioners glimpse the multifaceted world of brain–computer interfacing.
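As a small illustration of the brainwave filtering step mentioned above, the following sketch applies a zero-phase band-pass filter to a toy EEG trace; the sampling rate, band edges, and synthetic signal are assumptions for illustration, not values taken from the tutorial.

```python
# Illustrative preprocessing of acquired brainwaves: a zero-phase Butterworth
# band-pass filter isolating the alpha band (8-13 Hz). All numbers here are
# assumed, not drawn from the surveyed tutorial.
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(eeg, fs, low=8.0, high=13.0, order=4):
    """Return the band-passed EEG signal (zero-phase to avoid latency distortion)."""
    b, a = butter(order, [low, high], btype="bandpass", fs=fs)
    return filtfilt(b, a, eeg)

fs = 250.0                                    # assumed sampling rate in Hz
t = np.arange(0, 2.0, 1.0 / fs)
raw = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # toy EEG trace
alpha = bandpass(raw, fs)                     # alpha-band component of the trace
```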


2015 ◽  
Vol 19 (06) ◽  
pp. 1540009 ◽  
Author(s):  
SARAH MAHDJOUR

What do growth-oriented business models look like? While several economic theories, such as the theory of the firm, are based on the assumption that firms aim to maximise their profits, past research has shown that growth intention is heterogeneous among firms and that many business owners prefer to keep their firm at a size that they can manage with few resources. This paper explores the relationship between growth intention and business models, based on a sample of 135 German ICT businesses. Following an exploratory approach, Mann–Whitney U tests are applied to analyse how different business model designs correspond with different levels of growth intention. The results indicate that growth intention relates to business owners’ decisions regarding the provision of consulting services, the level of standardisation in offered products and services, the choice of addressed markets, and the implementation of competitive strategies based on cost efficiency and of revenue streams based on one-time and performance-based payments. Furthermore, the results show that growth-oriented firms are no more likely than non-growth-oriented firms to adapt their business models dynamically to changed internal or external conditions.
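For readers unfamiliar with the statistical test used, the following minimal sketch shows the style of comparison the paper applies: a Mann–Whitney U test on growth-intention scores for firms with versus without a given business model design choice. The scores and group labels below are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of the exploratory comparison: Mann-Whitney U test on
# growth-intention scores for two groups of firms. Data are made-up
# placeholders purely for illustration.
from scipy.stats import mannwhitneyu

with_consulting = [3, 4, 2, 5, 3, 4, 2]       # hypothetical growth-intention scores
without_consulting = [5, 6, 4, 6, 5, 7, 6]

stat, p_value = mannwhitneyu(with_consulting, without_consulting,
                             alternative="two-sided")
print(f"U = {stat}, p = {p_value:.3f}")       # a small p suggests the groups differ
```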


2021 ◽  
Vol 73 (05) ◽  
pp. 52-53
Author(s):  
Judy Feder

This article, written by JPT Technology Editor Judy Feder, contains highlights of paper OTC 30794, “Digitalization Deployed: Lessons Learned From Early Adopters,” by John Nixon, Siemens, prepared for the 2020 Offshore Technology Conference, originally scheduled to be held in Houston, 4–7 May. The paper has not been peer reviewed. Copyright 2020 Offshore Technology Conference. Reproduced by permission.

With full-scale digital transformation of oil and gas an inevitability, the industry can benefit by examining the strategies of industries such as automotive, manufacturing, marine, and aerospace that have been early adopters. This paper discusses how digital technologies are being applied in other verticals and how they can be leveraged to optimize life-cycle performance, drive down costs, and decouple market volatility from profitability for offshore oil and gas facilities.

Barriers to Digital Adoption

Despite the recent dramatic growth in the use of digital tools to harness the power of data, the industry as a whole has remained conservative in its pace of digital adoption. Most organizations continue to leverage technology in disaggregated fashion. This has resulted in an operating environment in which companies can capture incremental efficiencies and cost savings on a local level but have been largely unable to cause any discernible effect on operating or business models. Although the recent market downturn constrained capital budgets significantly, an ingrained risk-averse culture is also to blame. Other often-cited reasons for the industry’s reluctance to digitally transform include cost of downtime, cybersecurity and data privacy, and limited human capital. A single offshore oil and gas facility failure or plant trip can result in millions of dollars in production losses. Therefore, any solution that has the potential to affect a process or its safety negatively must be proved before being implemented. Throughout its history, the industry has taken a conservative approach when adopting new technologies, even those designed to prevent unplanned downtime. Although many current technologies promise increases of 1 to 2% in production efficiency, these gains become insignificant in the offshore industry if risk exists that deployment of the technology could in any way disrupt operations.

Cybersecurity and data privacy are perhaps the most significant concerns related to adoption of digital solutions by the industry, and they are well-founded. Much of today’s offshore infrastructure was not designed with connectivity or the Internet of Things in mind. Digital capabilities have simply been bolted on. In a recent survey of oil and gas executives, more than 60% of respondents said their organization’s industrial control systems’ protection and security were inadequate, and over two-thirds said they had experienced at least one cybersecurity attack in the previous year. Given this reality, it is no surprise that offshore operators have been reluctant to connect their critical assets. They are also cautious about sharing performance data with vendors and suppliers. This lack of collaboration and connectivity has inevitably slowed the pace of digital transformation, the extent to which it can be leveraged, and the value it can generate.


10.2196/13046 ◽  
2020 ◽  
Vol 8 (2) ◽  
pp. e13046 ◽  
Author(s):  
Mengchun Gong ◽  
Shuang Wang ◽  
Lezi Wang ◽  
Chao Liu ◽  
Jianyang Wang ◽  
...  

Background: Patient privacy is a ubiquitous problem around the world. Many existing studies have demonstrated the potential privacy risks associated with sharing of biomedical data. Owing to the increasing need for data sharing and analysis, health care data privacy is drawing more attention. However, to better protect biomedical data privacy, it is essential to assess the privacy risk in the first place.

Objective: In China, there is no clear regulation for health systems to deidentify data. It is also not known whether a mechanism such as the Health Insurance Portability and Accountability Act (HIPAA) Safe Harbor policy would achieve sufficient protection. This study aimed to conduct a pilot study using patient data from Chinese hospitals to understand and quantify the privacy risks of Chinese patients.

Methods: We used g-distinct analysis to evaluate the reidentification risks of the HIPAA Safe Harbor approach when applied to Chinese patients’ data. More specifically, we estimated the risks under the HIPAA Safe Harbor and Limited Dataset policies by assuming an attacker has background knowledge of the patient from the public domain.

Results: The experiments were conducted on 0.83 million patients (with data fields of date of birth, gender, and surrogate ZIP codes generated from home addresses) across 33 provincial-level administrative divisions in China. Under the Limited Dataset policy, 19.58% (163,262/833,235) of the population could be uniquely identified under the g-distinct metric (ie, 1-distinct). In contrast, the Safe Harbor policy significantly reduces privacy risk: only 0.072% (601/833,235) of individuals are uniquely identifiable, and the majority of the population is 3000-indistinguishable (ie, each individual is expected to share common attributes with 3000 or fewer people).

Conclusions: Through experiments based on real-world patient data, this work illustrates that the results of g-distinct analysis of Chinese patient privacy risk are similar to those of a previous US study, in which data from different organizations/regions might be vulnerable to different reidentification risks under different policies. This work provides a reference for Chinese health care entities estimating patients’ privacy risk during data sharing and lays the foundation for future studies of privacy risk in Chinese patients’ data.
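To make the g-distinct metric concrete, the sketch below groups records by their quasi-identifiers and reports the share of individuals whose combination is shared by at most g people; the field names, toy records, and threshold are illustrative assumptions rather than the study's implementation.

```python
# Minimal sketch of g-distinct analysis: group records by quasi-identifiers and
# count individuals who fall into groups of size <= g (g = 1 means uniquely
# identifiable). Field names and records are illustrative only.
from collections import Counter

def g_distinct_share(records, quasi_identifiers, g):
    """Fraction of individuals whose quasi-identifier combination is shared
    by at most g people in the dataset."""
    keys = [tuple(r[q] for q in quasi_identifiers) for r in records]
    group_sizes = Counter(keys)
    at_risk = sum(1 for k in keys if group_sizes[k] <= g)
    return at_risk / len(records)

# Toy example with the kinds of fields used in the study (birth data, gender, ZIP).
records = [
    {"birth_year": 1980, "gender": "F", "zip": "100"},
    {"birth_year": 1980, "gender": "F", "zip": "100"},
    {"birth_year": 1975, "gender": "M", "zip": "200"},
]
print(g_distinct_share(records, ["birth_year", "gender", "zip"], g=1))  # 1/3 unique
```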


Author(s):  
Jing Yang ◽  
Quan Zhang ◽  
Kunpeng Liu ◽  
Peng Jin ◽  
Guoyi Zhao

In recent years, electricity big data has seen extensive application in grid companies across the provinces. However, certain problems are encountered, including the inability to build an ideal model from the isolated data possessed by each company, and pressing concerns over data privacy and safety during big data application and sharing. In this pursuit, the present research envisaged the application of federated learning to protect local data and to build a uniform model for the different companies affiliated with the State Grid. Federated learning can serve as an essential means of realizing the grid-wide promotion of big data application achievements while ensuring data safety.
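For illustration only, a generic federated-averaging step of the kind this approach relies on is sketched below; the weight vectors and sample counts are invented placeholders, and the actual system described in the paper may use a different aggregation scheme.

```python
# Generic federated-averaging (FedAvg-style) sketch: each company trains on its
# own data and shares only model weights; a coordinator averages them, weighted
# by local sample counts. Values below are placeholders.
import numpy as np

def federated_average(client_weights, client_sizes):
    """Combine per-client weight vectors into one global model."""
    total = sum(client_sizes)
    return sum((n / total) * w for w, n in zip(client_weights, client_sizes))

# Toy example: three companies with differently sized private datasets.
weights = [np.array([0.9, 1.1]), np.array([1.0, 1.0]), np.array([1.2, 0.8])]
sizes = [1000, 4000, 2500]
print(federated_average(weights, sizes))
```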


2021 ◽  
Author(s):  
Yahia Zakaria ◽  
Mayada Hadhoud ◽  
Magda Fayek

Deep learning for procedural level generation has been explored in many recent works; however, experimental comparisons with previous works are rare and usually limited to the work they extend. This paper's goal is to conduct an experimental study of four recent deep learning procedural level generators for Sokoban to explore their strengths and weaknesses. The methods are bootstrapping conditional generative models, controllable and uncontrollable procedural content generation via reinforcement learning (PCGRL), and generative playing networks. We propose some modifications to either adapt the methods to the task or improve their efficiency and performance. For the bootstrapping method, we propose using diversity sampling to improve solution diversity, auxiliary targets to enhance model quality, and Gaussian mixture models to improve sample quality. The results show that diversity sampling at least doubles the unique plan count in the generated levels. On average, auxiliary targets increase quality by 24% and sampling conditions from Gaussian mixture models increases sample quality by 13%. Overall, PCGRL shows superior quality and diversity, while generative adversarial networks exhibit the least control confusion when trained with diversity sampling and auxiliary targets.
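The following sketch conveys the diversity-sampling idea in its simplest form: generate many candidate levels, solve each one, and keep only levels whose solution plan has not been seen before. The generator and solver here are placeholders standing in for the trained models, not the paper's implementation.

```python
# Hedged sketch of diversity sampling for level generation: keep only levels
# whose solution plans (action sequences) are unique among candidates.
import random

def diversity_sample(generate_level, solve, n_candidates=100):
    """Return generated levels with previously unseen solution plans."""
    kept, seen_plans = [], set()
    for _ in range(n_candidates):
        level = generate_level()
        plan = solve(level)                   # assumed to return None if unsolvable
        if plan is not None and tuple(plan) not in seen_plans:
            seen_plans.add(tuple(plan))
            kept.append(level)
    return kept

# Placeholder generator/solver: levels "A" and "B" share a plan, "C" differs.
toy_levels = {"A": ["up", "left"], "B": ["up", "left"], "C": ["down"]}
levels = diversity_sample(lambda: random.choice(list(toy_levels)),
                          lambda lv: toy_levels[lv])
print(levels)                                 # only one of A/B survives, plus C
```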


2021 ◽  
Author(s):  
Dominik Mann

Designing and strategically developing viable business models is vital for value creation and capture and, in turn, for the survival and performance of entrepreneurial ventures. However, the widely held firm-centric and static business model perspective appears inadequate to reflect the realities of increasingly blurred industry boundaries, interconnected economies, and the resulting collapse of incumbent value chains. This PhD thesis adds to the understanding of the dynamic business model development process from an ecosystem perspective. The evolution of ten entrepreneurial ventures’ business models was documented and investigated through longitudinal in-depth case studies over twelve months. Analysing and comparing the cases revealed strategies that resulted in the development of effective interactive structures and robust value co-creation and capture mechanisms. The development of interactive structures, i.e. firm-ecosystem fits, was supported by either a focused or a diversified ecosystem integration approach, underpinned by heterogeneous interdependencies of value proposition and business model components across ecosystems. The insights obtained allowed the derivation of sets of capabilities that supported the business model development process and enhanced entrepreneurial ventures’ chances of survival. The findings have several implications for advancing business model theory. In particular, they indicate which integration strategies can inform entrepreneurs’ and managers’ business model design and execution strategies for operating in increasingly complex ecosystems.


2016 ◽  
Vol 13 (1) ◽  
pp. 204-211
Author(s):  
Baghdad Science Journal

The internet is a basic source of information for many specialities and uses. Such information includes sensitive data whose retrieval has been one of the basic functions of the internet. In order to protect this information from falling into the hands of an intruder, a VPN can be established. Through a VPN, data privacy and security can be provided. Two main VPN technologies are discussed: IPSec and OpenVPN. The complexity of IPSec makes OpenVPN the better choice, owing to the latter’s portability and flexibility of use across many operating systems. In the LAN, a VPN can be implemented through OpenVPN to establish a double privacy layer (privacy inside privacy). A specific subnet is used in this paper. The key and certificate are generated by the server. Authentication and key exchange are based on the standard SSL/TLS protocol. Various operating systems, both open source and Windows, are used, each with a different hardware specification. Tools such as tcpdump and jperf are used to verify and measure connectivity and performance. OpenVPN in the LAN is chosen for its broad operating system support, portability, and straightforward implementation. The bandwidth captured in this experiment is influenced by the operating system rather than by the memory and capacity of the hard disk. The relationship and interoperability between each peer and the server are discussed. At the same time, privacy for the user in the LAN can be provided with a minimum specification.
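As a rough stand-in for the jperf-style bandwidth measurement mentioned above, the sketch below pushes a fixed amount of data over a TCP socket across the tunnel and reports throughput; the tunnel address, port, and listening peer are assumptions, not details from the paper.

```python
# Hedged throughput probe across an established OpenVPN tunnel: send a fixed
# volume of data to a listening peer and report megabits per second. The
# address 10.8.0.1 and port 5001 are assumed values.
import socket, time

def measure_throughput(host="10.8.0.1", port=5001, total_mb=16):
    """Send total_mb of data to a listening peer and return achieved Mbit/s."""
    payload = b"\x00" * (1024 * 1024)         # 1 MiB chunk
    with socket.create_connection((host, port)) as sock:
        start = time.perf_counter()
        for _ in range(total_mb):
            sock.sendall(payload)
        elapsed = time.perf_counter() - start
    return (total_mb * 8) / elapsed           # megabits per second

# The peer inside the VPN could run a simple sink, e.g. `nc -l 5001 > /dev/null`.
```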


Author(s):  
Poovizhi. M ◽  
Raja. G

Using cloud storage, users can remotely store their data and enjoy on-demand, high-quality applications and services from a shared pool of configurable computing resources, without the burden of local data storage and maintenance. However, the fact that users no longer have physical possession of the outsourced data makes data integrity protection in cloud computing a formidable task, especially for users with constrained computing resources. From users’ perspective, including both individuals and IT systems, storing data remotely in the cloud in a flexible, on-demand manner brings tempting benefits: relief from the burden of storage management, universal data access independent of geographical location, and avoidance of capital expenditure on hardware, software, and personnel maintenance. To securely introduce an effective sanitizer and third-party auditor (TPA), two fundamental requirements have to be met: 1) the TPA should be able to efficiently audit the cloud data storage without demanding a local copy of the data and should introduce no additional online burden to the cloud user; 2) the third-party auditing process should introduce no new vulnerabilities to user data privacy. In this project, we utilize and uniquely combine public auditing protocols with a double encryption approach to achieve a privacy-preserving public cloud data auditing system that meets the integrity-checking requirements without any leakage of data. To support efficient handling of multiple auditing tasks, we further explore the technique of online signatures to extend our main result to a multi-user setting, where the TPA can perform multiple auditing tasks simultaneously. We implement a double encryption algorithm to encrypt the data twice before it is stored on the cloud server, in Electronic Health Record applications.
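A minimal sketch of the double-encryption idea is shown below, assuming two independent symmetric keys (for example, one held by the data owner and one by the sanitizer); it is an illustration of encrypting twice before upload, not the paper's exact scheme or key management.

```python
# Hedged sketch of double encryption: a record is encrypted with the owner's
# key and then again with the sanitizer's key before being sent to the cloud.
# Key roles and the sample record are assumptions for illustration.
from cryptography.fernet import Fernet

owner_key, sanitizer_key = Fernet.generate_key(), Fernet.generate_key()

def double_encrypt(plaintext: bytes) -> bytes:
    """Encrypt with the owner's key, then again with the sanitizer's key."""
    return Fernet(sanitizer_key).encrypt(Fernet(owner_key).encrypt(plaintext))

def double_decrypt(ciphertext: bytes) -> bytes:
    """Decrypt in reverse order: sanitizer's key first, then the owner's key."""
    return Fernet(owner_key).decrypt(Fernet(sanitizer_key).decrypt(ciphertext))

record = b"patient-id:123; bp:120/80"         # toy electronic health record field
assert double_decrypt(double_encrypt(record)) == record
```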

