Semantification of Large Corpora of Technical Documentation

Author(s):  
Sebastian Furth ◽  
Joachim Baumeister

The complexity of machines has grown dramatically in recent years. Today, machines are built as complex functional networks of mechanics, electronics, and hydraulics. Technical documentation has therefore become a fundamental resource for service technicians in their daily work, and technicians need fast, focused access methods to handle the massive volumes of documentation. For this reason, semantic search has emerged as the new system paradigm for presenting technical documentation. However, the existing large corpora of legacy documentation are usually not semantically prepared. This creates a seemingly unbridgeable gap between new technological opportunities and the actual data quality at companies. This chapter presents a novel and comprehensive approach for the semantification of large volumes of legacy technical documents. The approach especially tackles the veracity and variety present in technical documentation and makes explicit use of its typical characteristics. Experiences with the implementation and the lessons learned are discussed in industrial case studies.
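The abstract stays at a high level, but one core semantification step is linking mentions in legacy text to concepts of a domain model. Purely as an illustration (the ontology terms, concept IDs, and matching strategy below are invented, not taken from the chapter), a minimal dictionary-based annotator might look like:

```python
import re

# Hypothetical mini-ontology: surface forms mapped to concept IDs.
# Both the terms and the IDs are illustrative only.
ONTOLOGY = {
    "hydraulic pump": "COMP:HydraulicPump",
    "pressure valve": "COMP:PressureValve",
    "error code": "DIAG:ErrorCode",
}

def annotate(text):
    """Return (start, end, concept) spans for ontology terms found in text."""
    spans = []
    for term, concept in ONTOLOGY.items():
        for m in re.finditer(re.escape(term), text, flags=re.IGNORECASE):
            spans.append((m.start(), m.end(), concept))
    return sorted(spans)

doc = "Check the hydraulic pump before replacing the pressure valve."
print(annotate(doc))
```

A production system would of course need tokenization, morphological variants, and disambiguation; the sketch only shows the shape of the annotation output that downstream semantic search can index.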

2014 ◽  
Vol 23 (1) ◽  
pp. 42-54 ◽  
Author(s):  
Tanya Rose Curtis

As the field of telepractice grows, perceived barriers to service delivery must be anticipated and addressed in order to provide appropriate service delivery to individuals who will benefit from this model. When applying telepractice to the field of AAC, additional barriers are encountered when clients with complex communication needs are unable to speak, often present with severe quadriplegia and are unable to position themselves or access the computer independently, and/or may have cognitive impairments and limited computer experience. Some access methods, such as eye gaze, can also present technological challenges in the telepractice environment. These barriers can be overcome, and telepractice is not only practical and effective, but often a preferred means of service delivery for persons with complex communication needs.


Author(s):  
Vishnu Sharma ◽  
Vijay Singh Rathore ◽  
Chandikaditya Kumawat

Software reuse can improve software quality while reducing cost and development time. A systematic reuse plan enhances cohesion and reduces coupling, yielding better testability and maintainability. The reuse approach can be adopted to its fullest extent only if relevant software components can be easily searched, adapted, and integrated into new systems. Large software companies maintain their own well-managed component libraries containing well-tested components classified by project category, but access to these repositories is very limited. Software reuse therefore still faces many problems and remains unpopular, largely due to issues of general access, efficient search, and adoption of software components. This paper proposes a framework that resolves these issues by providing easy access to components, efficient incremental semantics-based search, repository management, and versioning of components.
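The abstract does not specify how the semantics-based search ranks components. As a hedged illustration only (component names, tags, and the similarity measure are assumptions, not the paper's method), a tag-overlap ranking over a component index could be sketched as:

```python
# Hypothetical component index: component name -> descriptive tags.
COMPONENTS = {
    "CsvParser": {"parsing", "csv", "io"},
    "JsonParser": {"parsing", "json", "io"},
    "RetryPolicy": {"resilience", "retry", "network"},
}

def search(query_tags, index=COMPONENTS):
    """Rank components by Jaccard overlap between query tags and component tags."""
    def jaccard(a, b):
        return len(a & b) / len(a | b)
    ranked = sorted(index.items(),
                    key=lambda kv: jaccard(query_tags, kv[1]),
                    reverse=True)
    return [name for name, tags in ranked if jaccard(query_tags, tags) > 0]

print(search({"parsing", "csv"}))
```

An incremental search, as the paper envisions, could narrow this result set as the user adds tags; the sketch shows only the basic ranking step.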


2017 ◽  
Author(s):  
Sayamindu Dasgupta ◽  
Benjamin Mako Hill

In this paper, we present Scratch Community Blocks, a new system that enables children to programmatically access, analyze, and visualize data about their participation in Scratch, an online community for learning computer programming. At its core, our approach involves a shift in who analyzes data: from adult data scientists to young learners themselves. We first introduce the goals and design of the system and then demonstrate it by describing example projects that illustrate its functionality. Next, we show through a series of case studies how the system engages children in not only representing data and answering questions with data but also in self-reflection about their own learning and participation.


2021 ◽  
Author(s):  
Aurore Lafond ◽  
Maurice Ringer ◽  
Florian Le Blay ◽  
Jiaxu Liu ◽  
Ekaterina Millan ◽  
...  

Abstract Abnormal surface pressure is typically the first indicator of a number of problematic events, including kicks, losses, washouts and stuck pipe. These events account for 60–70% of all drilling-related nonproductive time, so their early and accurate detection has the potential to save the industry billions of dollars. Detecting these events today requires an expert user watching multiple curves, which is costly and subject to human error. The solution presented in this paper aims to augment traditional models with new machine learning techniques that detect these events automatically and support monitoring of the well while drilling. Today’s real-time monitoring systems employ complex physical models to estimate surface standpipe pressure while drilling. These require many inputs and are difficult to calibrate. Machine learning is an alternative method to predict pump pressure, but on its own it needs significant labelled training data, which is often lacking in the drilling world. The new system combines these approaches: a machine learning framework enables automated learning while the physical models compensate for any gaps in the training data. The system uses only standard surface measurements, is fully automated, and is continuously retrained while drilling to ensure the most accurate pressure prediction. In addition, a stochastic (Bayesian) machine learning technique is used, which yields not only a prediction of the pressure but also the uncertainty and confidence of this prediction. Finally, the new system includes a data quality control workflow. It discards periods of low data quality for the pressure anomaly detection and enables smarter real-time event analysis. The new system has been tested on historical wells using a new test and validation framework.
The framework runs the system automatically on large volumes of both historical and simulated data, enabling the results to be cross-referenced with observations. In this paper, we show the results of the automated test framework as well as the capabilities of the new system in two specific case studies, one on land and another offshore. Moreover, large-scale statistics demonstrate the reliability and efficiency of the new detection workflow. The new system builds on the trend in our industry to better capture and utilize digital data for optimizing drilling.
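The abstract describes combining a physical model with Bayesian machine learning that quantifies its own uncertainty, but gives no formulas. As a toy illustration (the physical model, priors, and all numbers below are invented, and the real system is far richer), one minimal realization is to learn a Gaussian posterior over the physical model's bias from observed residuals:

```python
def physical_model(flow_rate, depth):
    """Toy stand-in for a physical standpipe-pressure model (illustrative only)."""
    return 0.05 * flow_rate + 0.01 * depth

def posterior_bias(residuals, prior_mean=0.0, prior_var=25.0, noise_var=4.0):
    """Conjugate Gaussian update for the mean model bias: returns (mean, variance)."""
    n = len(residuals)
    var = 1.0 / (1.0 / prior_var + n / noise_var)
    mean = var * (prior_mean / prior_var + sum(residuals) / noise_var)
    return mean, var

# Observed pressures vs. physical-model predictions on labelled samples
# (flow_rate, depth, observed_pressure) -- all values hypothetical.
obs = [(1000, 2000, 72.0), (1200, 2000, 82.5), (900, 2500, 71.5)]
residuals = [p - physical_model(q, d) for q, d, p in obs]

bias, bias_var = posterior_bias(residuals)
# Corrected prediction, with posterior variance as an uncertainty estimate:
pred = physical_model(1100, 2200) + bias
print(round(pred, 2), round(bias_var, 3))
```

The key property the sketch shares with the paper's approach is that with few labelled residuals the prediction falls back toward the physical model and the reported uncertainty stays wide, shrinking as more data arrives while drilling.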


Author(s):  
Philip Rocco ◽  
Jessica A. J. Rich ◽  
Katarzyna Klasa ◽  
Kenneth A. Dubin ◽  
Daniel Béland

Abstract Context: While the World Health Organization (WHO) has established guidance on COVID-19 surveillance, little is known about implementation of these guidelines in federations, which fragment authority across multiple levels of government. This study examines how subnational governments in federal democracies collect and report data on COVID-19 cases and mortality associated with COVID-19. Methods: We collected data from subnational government websites in 15 federal democracies to construct indices of COVID-19 data quality. Using bivariate and multivariate regression, we analyzed the relationship between these indices and indicators of state capacity, the decentralization of resources and authority, and the quality of democratic institutions. We supplement these quantitative analyses with qualitative case studies of subnational COVID-19 data in Brazil, Spain, and the United States. Findings: Subnational governments in federations vary in their collection of data on COVID-19 mortality, testing, hospitalization, and demographics. There are statistically significant associations (p<0.05) between subnational data quality and key indicators of public health system capacity, fiscal decentralization, and the quality of democratic institutions. Case studies illustrate the importance of both governmental and civil-society institutions that foster accountability. Conclusions: The quality of subnational COVID-19 surveillance data in federations depends in part on public health system capacity, fiscal decentralization, and the quality of democracy.
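The study's indices and regressions are not specified in the abstract. Purely to illustrate the shape of such an analysis (the jurisdictions, scores, and dimension list below are invented, not the study's data), a reporting-completeness index and a bivariate OLS slope could be computed as:

```python
# Illustrative only: which of four data dimensions each jurisdiction publishes.
DIMENSIONS = ("mortality", "testing", "hospitalization", "demographics")

reports = {
    "A": {"mortality", "testing", "hospitalization", "demographics"},
    "B": {"mortality", "testing"},
    "C": {"mortality"},
}
capacity = {"A": 0.9, "B": 0.5, "C": 0.2}  # hypothetical capacity indicator

def dq_index(published):
    """Share of the four dimensions a jurisdiction actually reports."""
    return len(published & set(DIMENSIONS)) / len(DIMENSIONS)

xs = [capacity[j] for j in sorted(reports)]
ys = [dq_index(reports[j]) for j in sorted(reports)]

# Ordinary-least-squares slope for the bivariate association.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
print(round(slope, 3))
```

A positive slope in this toy setup mirrors the direction of the study's finding that data quality rises with public health system capacity; the multivariate version would simply add further regressors.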


2021 ◽  
Vol 3 (1) ◽  
pp. 16-27
Author(s):  
Daniel Maxwell ◽  
Peter Hailey

Famine means destitution, increased severe malnutrition, disease, excess death and the breakdown of institutions and social norms. Politically, it means a failure of governance – a failure to provide the most basic of protections. Because of both its human and political meanings, ‘famine’ can be a shocking term. This in turn makes the analysis – and especially the declaration – of famine a very sensitive subject. This paper synthesises the findings from six case studies of the analysis of extreme food insecurity and famine to identify the political constraints to data collection and analysis, the ways in which these are manifested, and emergent good practice to manage these influences. The politics of information and analysis are most fraught where technical capacity and data quality are weakest. Politics will not be eradicated from analysis but can and must be better managed.


2020 ◽  
pp. 60-85
Author(s):  
Irene Bernhard

In this chapter, the focus is on incentives for inclusive e-government. Five case studies of the implementation of contact centers in Swedish municipalities are described and discussed. The research methods used are mainly qualitative interviews with different categories of municipal personnel and with citizens. The main conclusion is that the implementation seems to contribute to increased accessibility of municipal services, even for those citizens who might have problems using Internet services. The study indicates a development towards increased equal treatment of citizens and a contribution to reducing problems related to the “digital divide.” Municipal services became more adapted to citizens' needs by using citizen-centric methods during the development process and in the daily work of the contact centers. The implementation of municipal contact centers can thus be seen as indicating incentives for local e-democracy and a step towards inclusive e-government, although there is still a need to go further in this direction.


Author(s):  
Latif Al-Hakim ◽  
Hongjiang Xu

Organisational decision-makers have experienced the adverse effects of decisions based on information of inferior quality. Millions of dollars have been spent on information systems to improve data quality (DQ) as well as the skills and capacity of IT professionals. It is important that IT professionals align their work with the expectations of the organisation’s vision. This chapter provides some theoretical background to DQ and establishes a link between DQ, performance-importance analysis and work alignment. Four case studies are presented to support the theory developed in this chapter and to answer the question of whether IT professionals consider DQ issues differently from other information users.
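Performance-importance analysis, which the chapter links to DQ, classically plots each attribute on an importance axis and a performance axis and classifies it into one of four action quadrants. As a generic sketch (the DQ dimensions, scores, and cut-offs below are invented, not the chapter's data), that classification can be written as:

```python
# Hypothetical DQ dimensions scored on importance and performance (1-10 scales).
scores = {
    "accuracy":     (9.0, 5.0),
    "timeliness":   (8.5, 8.0),
    "completeness": (4.0, 7.5),
    "consistency":  (3.5, 3.0),
}

def quadrant(importance, performance, imp_cut=6.0, perf_cut=6.0):
    """Classic importance-performance grid: split on the two cut-off values."""
    if importance >= imp_cut:
        return "concentrate here" if performance < perf_cut else "keep up the good work"
    return "low priority" if performance < perf_cut else "possible overkill"

for dim, (imp, perf) in scores.items():
    print(dim, "->", quadrant(imp, perf))
```

The "concentrate here" quadrant (high importance, low performance) is where DQ improvement effort pays off most, which is the intuition behind linking the technique to work alignment.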


Author(s):  
Jan Bosch ◽  
Helena Holmström Olsson ◽  
Ivica Crnkovic

Artificial intelligence (AI) and machine learning (ML) are increasingly broadly adopted in industry. However, based on well over a dozen case studies, we have learned that deploying industry-strength, production quality ML models in systems proves to be challenging. Companies experience challenges related to data quality, design methods and processes, performance of models as well as deployment and compliance. We learned that a new, structured engineering approach is required to construct and evolve systems that contain ML/DL components. In this chapter, the authors provide a conceptualization of the typical evolution patterns that companies experience when employing ML as well as an overview of the key problems experienced by the companies that they have studied. The main contribution of the chapter is a research agenda for AI engineering that provides an overview of the key engineering challenges surrounding ML solutions and an overview of open items that need to be addressed by the research community at large.


2015 ◽  
Vol 16 (1) ◽  
pp. 44-53
Author(s):  
Laura Ferguson

Purpose – The purpose of this paper is to highlight the actions and organisations needed to make a difference to the problem of loneliness in old age. Design/methodology/approach – Draws on the work of the Campaign to End Loneliness in collaboration with hundreds of organisations worldwide to document what has been done so far and to provide exemplars and imagined case studies based on collected experience to identify potential relevant actions. Findings – Many hundreds of organisations worldwide are recognising the need to support older peoples’ connections and abilities to engage with their communities. However, these need to be better mapped and coordinated. Practical implications – Innovative work already being done to tackle loneliness needs to be more systematically supported and promoted. Originality/value – Identifies how much has already changed in terms of recognising and addressing loneliness but that a more comprehensive approach to support is needed.

