Implementing Risk Management in Pervasive and IoT Environments

The pervasive nature of networked things exposes various risks, as these digital devices generate high volumes of data of a variable nature. Technological growth is also a product of highly interrelated, complex data, which makes a strong argument for risk management in pervasive and Internet of Things (IoT) environments. This research analyzes the principal risks present in pervasive and IoT environments and elaborates risk analysis strategies for such environments, which are highly configurable in nature. The paper implements risk management in pervasive applications through a software tool that provides insights into risky code: the risky regions of the software are identified and managed on a priority basis. The state of the art constructs a strong case for establishing interrelationships between risk management and quality assurance in large-scale computing environments.
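
The abstract does not describe how the tool scores code, so the following is only a minimal sketch of the general idea of ranking risky code regions for priority handling. The metrics, weights, and region names are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: scoring and prioritizing "risky" code regions.
# Metric choices and weights are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class CodeRegion:
    name: str
    complexity: int      # e.g., cyclomatic complexity
    churn: int           # recent commits touching the region
    past_defects: int    # defects previously traced to the region

def risk_score(r: CodeRegion, w_cx=0.4, w_ch=0.3, w_df=0.3) -> float:
    """Weighted risk score; weights are illustrative, not calibrated."""
    return w_cx * r.complexity + w_ch * r.churn + w_df * r.past_defects

def prioritize(regions: list[CodeRegion]) -> list[CodeRegion]:
    """Return regions ordered from most to least risky."""
    return sorted(regions, key=risk_score, reverse=True)

regions = [
    CodeRegion("auth.validate_token", complexity=14, churn=9, past_defects=3),
    CodeRegion("ui.render_dashboard", complexity=6, churn=2, past_defects=0),
]
for r in prioritize(regions):
    print(f"{r.name}: score={risk_score(r):.1f}")
```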

Author(s):  
Ana Paula Ferreira Martins Pignaton ◽  
Marcelo Oliveira ◽  
Gabriela Veroneze ◽  
Marcelo Pereira

The process of research, development and innovation (RD&I) is very important for the technological growth of a nation. Because these are activities with a considerable level of uncertainty, they require careful planning, disciplined project execution, and robust risk management. The Western Amazon hosts one of the largest industrial parks in Brazil, subsidized by tax benefits from Law No. 8,387/1991, which also obliges companies that manufacture computer goods to allocate part of their revenue to research and development projects. Studies show that the results of RD&I activities carried out in the Western Amazon, as in Brazil generally, are deficient. This work therefore aimed to present a risk management model that can contribute to better management of RD&I-related processes in the Western Amazon. The methodology consisted of applied scientific research, with data collected through bibliographic, documentary, and case-study research and subsequently analyzed qualitatively. It was concluded that the risk management model presented here is a practical manual, composed of a detailed procedural guide, of great value for managers who want to implement a risk management program in their organizations; specifically in the case of SUFRAMA, it represents a support instrument to guide the management of risks inherent to RD&I activities developed in the Western Amazon, enhancing their results and contributing to expand the technological development of the region.


2019 ◽  
Vol 4 (2) ◽  
Author(s):  
Rizki Andre Handika ◽  
Solikhati Indah Purwaningrum ◽  
Resti Ayu Lestari

PM10 pollutant is an air particulate that cannot be filtered by nasal hair. It contains carcinogenic and non-carcinogenic chemical components. This study, therefore, aims to quantify the concentration of PM10 and assess the non-carcinogenic health risks of exposure to the public in the commercial area of the Pasar Jambi sub-district. Measurement of PM10 concentration was performed on a Sunday (weekend) and a Monday (weekday) using a high volume air sampler (HVAS). Furthermore, questionnaires and interviews were administered to 95 people, amounting to 12% of the total population. The results show that PM10 concentrations exceeded ambient air quality standards, at 196.9 µg/m³ on the weekend and 2.094 µg/m³ on the weekday. Furthermore, the average concentrations of Al and Mn in PM10 were 1.69384 µg/m³ and 0.04191 µg/m³, respectively. Although public activity in the commercial district was already at risk from non-carcinogenic PM10 exposure (i.e., RQ > 1), there was not yet any environmental health risk to the population from the non-carcinogenic metals (Al and Mn). Risk management is therefore carried out to protect the population from PM10 risks; it comprises calculating the safe concentration, duration, frequency, and time of exposure for the weekend and the weekday.
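
The abstract does not spell out the risk-quotient computation; the sketch below shows the standard non-carcinogenic intake and RQ formulation that such studies typically use. All default exposure parameters and the reference dose here are illustrative assumptions, not values from the study.

```python
# Minimal sketch of a non-carcinogenic risk-quotient (RQ) calculation.
# Default exposure parameters and the reference dose are assumptions.

def intake_mg_per_kg_day(C, R=0.83, tE=24, fE=350, Dt=30, Wb=55):
    """Intake (mg/kg/day) = (C*R*tE*fE*Dt) / (Wb*tavg), tavg = Dt*365 days.
    C: concentration (mg/m^3), R: inhalation rate (m^3/h), tE: hours/day,
    fE: days/year, Dt: exposure duration (years), Wb: body weight (kg)."""
    tavg = Dt * 365
    return (C * R * tE * fE * Dt) / (Wb * tavg)

def risk_quotient(C, RfC):
    """RQ > 1 indicates a potential non-carcinogenic health risk."""
    return intake_mg_per_kg_day(C) / RfC

def safe_concentration(RfC, R=0.83, tE=24, fE=350, Dt=30, Wb=55):
    """Concentration at which RQ == 1 for the assumed exposure scenario."""
    tavg = Dt * 365
    return (RfC * Wb * tavg) / (R * tE * fE * Dt)

# Example: weekend PM10 level from the abstract, converted to mg/m^3.
C_pm10 = 196.9 / 1000.0   # ug/m^3 -> mg/m^3
RfC_assumed = 0.03        # hypothetical reference dose (mg/kg/day)
print(f"RQ = {risk_quotient(C_pm10, RfC_assumed):.2f}")
print(f"safe C = {safe_concentration(RfC_assumed) * 1000:.1f} ug/m^3")
```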


2020 ◽  
Vol 26 (4) ◽  
Author(s):  
Marty Lipa

Quality Risk Management (QRM) and Knowledge Management (KM) have been positioned as co-enablers of the Pharmaceutical Quality System since the 2010 issuance of ICH Q10. Yet in practice these disciplines have remained largely distinct and disconnected. This paper presents a two-part literature review on the topic. First is a review of how other industries have connected risk management and knowledge management. This is followed by a review of relevant biopharmaceutical industry regulatory guidance, exploring expectations for how risk, risk management, knowledge, and knowledge management are interdependent. The results suggest a strong argument in favor of linking risk management and knowledge management, and other industries have demonstrated benefits in doing so. Furthermore, the review of biopharmaceutical regulatory guidance shows clear and persistent benefits in connecting the expectations for managing risk and knowledge. A key conclusion is that risk varies inversely with knowledge application: a lower level of risk to quality (and ultimately to the patient) can be achieved through risk management practices when a thoughtful, programmatic approach to knowledge management is in place, providing the best possible knowledge with which to assess and control risk.


2017 ◽  
Vol 12 (1) ◽  
pp. 39-53
Author(s):  
Stefan Schwerd ◽  
Richard Mayr

Nowadays, computer-mediated communication (CMC) and the high volume of computed and stored information are becoming a business of their own. Information is collected, aggregated, analyzed, and used to create real business advantage and value, but it also creates risks in high volume, both within companies and outside on the markets. At the same time, individuals still need to handle and interpret this ever-increasing mass of information. The change in information management and handling drives ongoing changes in decision making at both the operational and strategic levels. Information has become a good sold in its own right and has spawned an industry of information brokerage. This raises questions of trust in and correctness of the information itself, as well as of its source, and opens a completely new, not-yet-modeled discipline of Information Risk Management (IRM). No model currently exists in science to measure IRM, whereas there is a rapidly increasing demand to measure the case-based applicability and success of IRM activities in a broader context. The authors propose a new model for IRM and derive a qualitative proof of its variables/measures and a quantitative empirical norm as a basis for further perception comparison with specifically targeted groups. Keywords: information risk management, management theory, decision making, enterprise risk management.


2011 ◽  
Vol 16 (3) ◽  
pp. 338-347 ◽  
Author(s):  
Anne Kümmel ◽  
Paul Selzer ◽  
Martin Beibel ◽  
Hanspeter Gubler ◽  
Christian N. Parker ◽  
...  

High-content screening (HCS) is increasingly used in biomedical research, generating multivariate, single-cell data sets. Before a treatment is scored, these complex data sets are processed (e.g., normalized, reduced to a lower dimensionality) to help extract valuable information. However, there has been no published comparison of the performance of these methods. This study comparatively evaluates unbiased approaches to reducing dimensionality as well as to summarizing cell populations. To evaluate the different data-processing strategies, the prediction accuracies and the Z′ factors of control compounds in an HCS cell-cycle data set were monitored. As expected, dimension reduction led to a lower degree of discrimination between control samples. A high degree of classification accuracy was achieved when the cell population was summarized at the well level using percentile values. In conclusion, the generic data analysis pipeline described here enables a systematic review of alternative strategies for analyzing multiparametric results from biological systems.
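
As a concrete illustration of two ingredients named above, percentile-based well summarization and the Z′ factor, here is a short sketch with synthetic data; it is not the authors' pipeline.

```python
# Illustrative sketch: summarize single-cell HCS readouts per well with
# percentiles, and score assay quality with the standard Z'-factor
# computed from positive vs. negative control wells.
import numpy as np

def summarize_well(cell_values, percentiles=(10, 25, 50, 75, 90)):
    """Collapse a well's single-cell measurements into percentile features."""
    return np.percentile(cell_values, percentiles)

def z_prime(pos, neg):
    """Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|."""
    return 1 - 3 * (np.std(pos) + np.std(neg)) / abs(np.mean(pos) - np.mean(neg))

rng = np.random.default_rng(0)
pos_wells = rng.normal(100, 8, size=32)   # per-well scores, positive controls
neg_wells = rng.normal(40, 6, size=32)    # per-well scores, negative controls
print(f"Z' = {z_prime(pos_wells, neg_wells):.2f}")  # > 0.5 indicates a good assay
```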


2019 ◽  
Vol 21 (4) ◽  
pp. 1224-1237
Author(s):  
Dimitri Guala ◽  
Christoph Ogris ◽  
Nikola Müller ◽  
Erik L. L. Sonnhammer

The vast amount of experimental data from recent advances in high-throughput biology begs for integration into more complex data structures such as genome-wide functional association networks. Such networks have been used to elucidate the interplay of intracellular molecules, enabling advances ranging from a basic understanding of evolutionary processes to the more translational field of precision medicine. The allure of the field has resulted in rapid growth in the number of available network resources, each with unique attributes that can be exploited to answer different biological questions. Unfortunately, the sheer volume of network resources makes it impossible for the intended user to select an appropriate tool for their particular research question. The aim of this paper is to provide an overview of the underlying data and representative network resources, as well as to describe methods of integration, allowing a customized approach to resource selection. Additionally, this report provides a primer for researchers venturing into the field of network integration.
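
The review surveys integration methods at a high level; as one simple, generic scheme (an assumption for illustration, not a method endorsed by the review), edge confidences from several networks can be combined with a noisy-OR, treating each resource as independent evidence for a link:

```python
# Sketch: noisy-OR integration of edge confidences across several
# functional association networks. Gene names are placeholders.
from collections import defaultdict

def integrate(networks):
    """networks: list of dicts mapping (gene_a, gene_b) -> confidence in [0, 1].
    Returns a combined network with a noisy-OR confidence per edge."""
    combined = defaultdict(lambda: 1.0)  # running product of (1 - p)
    for net in networks:
        for edge, p in net.items():
            key = tuple(sorted(edge))    # undirected: normalize edge key
            combined[key] *= (1.0 - p)
    return {edge: 1.0 - q for edge, q in combined.items()}

net_a = {("TP53", "MDM2"): 0.9, ("BRCA1", "BARD1"): 0.6}
net_b = {("MDM2", "TP53"): 0.7, ("BRCA1", "RAD51"): 0.5}
print(integrate([net_a, net_b]))
# The MDM2-TP53 edge combines to 1 - (0.1 * 0.3) = 0.97.
```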


2016 ◽  
Vol 81 (2) ◽  
pp. 273-293 ◽  
Author(s):  
Seth Quintus ◽  
Melinda S. Allen ◽  
Thegn N. Ladefoged

Much attention has been paid to the role of increased food production in the development of social complexity. However, increased food production is only one kind of agricultural process, and some changes in agronomic practices were geared toward stabilizing production or counteracting periodic shortfalls. The intersection between these latter strategies and sociopolitical development is poorly understood, and the long-term value of risk management strategies, while often hypothesized, has not been well demonstrated empirically. We address these issues using recent archaeological data from the Samoan Archipelago, Polynesia. We investigate variability in, and the development of, one type of agricultural infrastructure: ditch-and-parcel complexes. In the context of Samoa's high-volume rainfall, recurrent cyclones, and steep topography, these novel risk management facilities offered production stability and, by extension, long-term selective benefits to both emergent elites and the general populace. Their effectiveness against known hazards is demonstrated by hydrological modeling, while their long-term success is indicated by their increased distribution and size over time. Additionally, based on their morphologies, functional properties, chronology, and spatial patterning, we argue that this infrastructure could have been used effectively by emergent elites to gain political advantage, particularly in conjunction with environmental perturbations that created production bottlenecks or shortfalls.


Author(s):  
Qi Wang ◽  
Zequn Qin ◽  
Feiping Nie ◽  
Yuan Yuan

Representing high-volume, high-order data is an essential problem, especially in the field of machine learning. Although existing two-dimensional (2D) discriminant analysis achieves promising performance, its single, linear projection makes it difficult to analyze more complex data. In this paper, we propose a novel convolutional two-dimensional linear discriminant analysis (2D LDA) method for data representation. To deal with nonlinear data, a specially designed convolutional neural network (CNN) is presented, which can be proven to have an objective function equivalent to that of common 2D LDA. In this way, the discriminant ability benefits not only from the nonlinearity of the CNN but also from its powerful learning process. Experimental results on several datasets show that the proposed method outperforms other state-of-the-art methods in terms of classification accuracy.
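
For readers unfamiliar with the baseline the paper builds on, here is a sketch of classical (non-convolutional) 2D LDA for matrix-form samples. The convolutional variant and its equivalence argument are the paper's contribution; this code is not the authors'.

```python
# Sketch of classical 2D LDA: find a right-projection W maximizing the
# between-class vs. within-class scatter of matrix-form samples.
import numpy as np

def lda_2d(X, y, k):
    """X: (N, m, n) samples; y: (N,) labels; returns an n x k projection W."""
    classes = np.unique(y)
    M = X.mean(axis=0)                                   # overall mean matrix
    n = X.shape[2]
    Sb = np.zeros((n, n)); Sw = np.zeros((n, n))
    for c in classes:
        Xc = X[y == c]
        Mc = Xc.mean(axis=0)
        Sb += len(Xc) * (Mc - M).T @ (Mc - M)            # between-class scatter
        Sw += sum((Xi - Mc).T @ (Xi - Mc) for Xi in Xc)  # within-class scatter
    # Leading eigenvectors of Sw^{-1} Sb give the projection directions.
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs.real[:, order[:k]]

# A sample Xi is then represented by the reduced m x k matrix Xi @ W.
```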


Symmetry ◽  
2018 ◽  
Vol 10 (12) ◽  
pp. 698 ◽  
Author(s):  
Shabana Ramzan ◽  
Imran Bajwa ◽  
Rafaqut Kazmi

Handling complexity in the data of information systems has emerged as a serious challenge in recent times. Typical relational databases have limited ability to manage the discrete and heterogeneous nature of modern data. Additionally, the complexity of data in relational databases is so high that efficient retrieval of information has become a bottleneck in traditional information systems. Meanwhile, Big Data has emerged as a viable solution for heterogeneous and complex data (structured, semi-structured, and unstructured) by providing architectural support for handling complex data and a tool-kit for its efficient analysis. Organizations that are sticking to relational databases while facing the challenge of handling complex data need to migrate their data to a Big Data solution to gain benefits such as horizontal scalability, real-time interaction, and the ability to handle high-volume data. However, such migration from relational databases to Big Data is itself a challenge due to the complexity of the data. In this paper, we introduce a novel approach that handles the complexity of automatically transforming an existing relational database (MySQL) into a Big Data solution (Oracle NoSQL). The approach supports a bi-fold transformation (schema-to-schema and data-to-data) to minimize the complexity of the data and to allow improved analysis. A software prototype for this transformation has also been developed as a proof of concept. The results of the experiments show the correctness of our transformations, which outperform other similar approaches.
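
The abstract only names the bi-fold transformation; the sketch below illustrates the general idea on in-memory structures, without live MySQL or Oracle NoSQL connections. The table, columns, and target document layout are illustrative assumptions, not the paper's mapping rules.

```python
# Sketch of the bi-fold idea: schema-to-schema, then data-to-data.
# Names and the target document layout are hypothetical.
import json

# Relational side, as it might be read from MySQL's information_schema
# and a SELECT over the table.
schema = {"table": "orders", "pk": "order_id",
          "columns": ["order_id", "customer", "total"]}
rows = [(1, "Alice", 120.50), (2, "Bob", 80.00)]

def to_document_schema(schema):
    """Schema-to-schema: derive a key-value layout keyed by the primary key."""
    return {"store": schema["table"],
            "key": schema["pk"],
            "value_fields": [c for c in schema["columns"] if c != schema["pk"]]}

def to_documents(schema, rows):
    """Data-to-data: turn each row into a (key, JSON document) pair."""
    pk_idx = schema["columns"].index(schema["pk"])
    for row in rows:
        doc = {c: v for c, v in zip(schema["columns"], row) if c != schema["pk"]}
        yield str(row[pk_idx]), json.dumps(doc)

print(to_document_schema(schema))
for key, doc in to_documents(schema, rows):
    print(key, doc)   # each pair is ready to put() into a key-value store
```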

