domain expertise
Recently Published Documents

TOTAL DOCUMENTS: 206 (FIVE YEARS: 52)
H-INDEX: 13 (FIVE YEARS: 0)



2021 ◽ Author(s): Iraj Ershaghi, Milad A. Ershaghi, Fatimah Al-Ruwai

Abstract A serious issue facing many oil and gas companies is the reluctance of traditional engineering talent to learn and adapt to the changes brought about by digital transformation. The transformation has been expected, as humans are limited in analyzing multidimensional problems and in performing analysis at large scale. Yet many companies face human-factor issues in preparing traditional staff to realize the potential of AI (artificial intelligence) based decision making. As decision making in the oil and gas industry grows in complexity, acceptance of digital solutions remains low. One reason can be the lack of adequate interpretability: the data scientist and the end users should be able to verify that a prediction is based on a correct set of assumptions and conforms to accepted domain expertise. A proper set of questions to the experts can include where the information comes from, why certain information is pertinent, how the components relate to one another, and whether several experts would agree on such an assignment. Among many concerns, one of the main ones is the trustworthiness of applying AI technologies. There are limitations in current continuing-education approaches, and we suggest improvements that can help in such a transformation. It takes an intersection of human judgment and the power of computer technology to make a step change in accepting predictions by machine learning (ML). A deep understanding of the problem, coupled with an awareness of the key data, is always the starting point. The best strategy for adopting digital technologies in petroleum engineering requires effective participation of the domain experts in algorithmic preprocessing of data. Various digital solutions and technologies can then be tested to select the best solution strategies. For illustration, we examine a few examples where digital technologies have significant potential. In all of them, domain expertise and data preprocessing are essential for quality control.
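As a concrete illustration of the kind of interpretability check the authors call for, one can ask a trained model which inputs drive its predictions and let domain experts judge whether those inputs are physically plausible. The sketch below is a generic, hypothetical example, not from the paper: the feature names and synthetic data are placeholders, and scikit-learn's permutation importance stands in for whatever explanation tool a team actually uses.

```python
# Generic interpretability check (not from the paper): rank input features so a
# domain expert can confirm the model leans on physically plausible quantities.
# Feature names and data are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["porosity", "permeability", "water_saturation", "net_pay"]  # assumed inputs
X = rng.random((200, len(features)))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0.0, 0.1, 200)  # synthetic target

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# "Where does the information come from?" -- permutation importance gives a
# first answer that the expert can sanity-check against domain knowledge.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(features, result.importances_mean), key=lambda p: -p[1]):
    print(f"{name:18s} {score:.3f}")
```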



Author(s): J Michael Barton

The Department of Defense High Performance Computing Modernization Program celebrates its 30th birthday in 2021. It was created to modernize the supercomputer capability of Department of Defense laboratories and test centers and continues to excel in that mission, providing hardware, software, networks and domain expertise. We describe the Program, the environment in which it was created, the people who helped bring it into existence, and future directions.



Author(s): Weijian Ni, Tong Liu, Qingtian Zeng, Nengfu Xie

Domain terminologies are a basic resource for various natural language processing tasks. To automatically discover terminologies for a domain of interest, most traditional approaches rely on a domain-specific corpus given in advance; their performance can therefore only be guaranteed by collecting a high-quality domain-specific corpus, which requires extensive human involvement and domain expertise. In this article, we propose a novel approach that automatically mines domain terminologies from a search engine's query log, a type of domain-independent corpus with higher availability, coverage, and timeliness than a manually collected domain-specific corpus. In particular, we represent the query log as a heterogeneous network and formulate the task of mining domain terminology as transductive learning on that network. In the proposed approach, the manifold structure of domain specificity inherent in the query log is captured by a novel network embedding algorithm and further exploited to reduce the manual annotation effort needed for domain terminology classification. We select Agriculture and Healthcare as the target domains and experiment on a real query log from a commercial search engine. Experimental results show that the proposed approach outperforms several state-of-the-art approaches.
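The sketch below is a much-simplified, hypothetical illustration of the transductive idea only; it does not reproduce the paper's heterogeneous-network embedding. Candidate terms are represented by their co-occurrence with queries, and a handful of seed labels are spread to the unlabeled terms with scikit-learn's LabelSpreading. The queries, terms, and seed labels are toy assumptions.

```python
# Toy transductive classification of candidate domain terms from a query log.
# This is NOT the paper's network-embedding algorithm, just the general idea:
# a few labeled seed terms, many unlabeled ones, labels spread over similarity.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

queries = [
    "wheat rust resistant varieties", "soil nitrogen for wheat yield",
    "diabetes insulin dosage", "blood pressure and diabetes medication",
    "wheat planting season", "insulin pump settings",
]
terms = ["wheat", "soil", "nitrogen", "insulin", "diabetes", "blood pressure"]

# Crude stand-in for a heterogeneous-network embedding: each term is described
# by its co-occurrence profile across queries.
X = np.array([[1.0 if t in q else 0.0 for q in queries] for t in terms])

# Seed labels: 0 = agriculture, 1 = healthcare, -1 = unlabeled (to be inferred).
y = np.array([0, -1, -1, 1, -1, -1])

model = LabelSpreading(kernel="rbf", gamma=1.0).fit(X, y)
for term, label in zip(terms, model.transduction_):
    print(f"{term:15s} -> {['agriculture', 'healthcare'][label]}")
```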



2021 ◽ Vol ahead-of-print (ahead-of-print) ◽ Author(s): Chris Nichols

Purpose: This paper aims to clarify the relationship between professional services companies and changing customer expectations. It follows the digital transformation process and outlines how companies can adopt agile, digital-first ways of doing business to tackle major long-term pain points. Design/methodology/approach: The author draws on deep domain expertise in delivering digital transformation projects to businesses and organisations in a variety of industries, including professional services, and explains the crucial applications of technology that help industry leaders address key business pain points. Findings: This paper provides insights into how companies have an opportune moment to build long-term digital foundations for better management, process efficiency and collaboration, with data-driven reporting, end-to-end business management solutions, dedicated HR modules and greater connectivity capabilities. It demonstrates that a digital-first approach can help companies achieve higher levels of customer engagement and secure their place in a highly competitive market. Originality/value: This paper fulfils an identified need to explain how professional services companies can embark on digital transformation journeys to tackle outdated and manual ways of doing business.



PLoS ONE ◽ 2021 ◽ Vol 16 (11) ◽ pp. e0259734 ◽ Author(s): Benjamin Schiek

In research portfolio planning contexts, an estimate of research policy and project synergies/tradeoffs (i.e. covariances) is essential to the optimal leveraging of institution resources. The data by which to make such estimates generally do not exist. Research institutions may often draw on domain expertise to fill this gap, but it is not clear how such ad hoc information can be quantified and fed into an optimal resource allocation workflow. Drawing on principal components analysis, I propose a method for “reverse engineering” synergies/tradeoffs from domain expertise at both the policy and project level. I discuss extensions to other problems and detail how the method can be fed into a research portfolio optimization workflow. I also briefly discuss the relevance of the proposed method in the context of the currently toxic relations between research communities and the donors that fund them.
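As a rough illustration of how expert judgment about shared modes of variation can be turned into a covariance matrix, the numpy sketch below assembles Sigma = V diag(lambda) V^T from elicited directions and variances. The directions, variances, and project interpretations are invented for illustration; the paper's elicitation procedure is not reproduced here.

```python
# Hedged numpy sketch: build a covariance matrix from expert-elicited component
# directions and variances, Sigma = V diag(lambda) V^T. All numbers are invented.
import numpy as np

# Experts describe two dominant modes across three hypothetical research projects,
# e.g. "projects 1 and 2 move together; project 3 trades off against them".
V = np.array([
    [ 0.70, 0.10],
    [ 0.70, 0.10],
    [-0.14, 0.99],
])
V, _ = np.linalg.qr(V)            # orthonormalize the elicited directions
lam = np.array([2.0, 0.5])        # expert-judged variance along each mode

Sigma = V @ np.diag(lam) @ V.T    # "reverse-engineered" covariance
Sigma += 1e-3 * np.eye(3)         # small ridge so Sigma is positive definite

print(np.round(Sigma, 3))         # ready to feed a portfolio optimizer
```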



2021 ◽ Vol 11 (1) ◽ Author(s): Ghulam Mustafa, Muhammad Usman, Lisu Yu, Muhammad Tanvir afzal, Muhammad Sulaiman, ...

Abstract Every year, around 28,100 journals publish 2.5 million research publications. Search engines, digital libraries, and citation indexes are used extensively to search these publications. When a user submits a query, it returns a large number of documents, of which only a few are relevant. Due to inadequate indexing, the resulting documents are largely unstructured. Publicly known systems mostly index research papers by keywords rather than by subject hierarchy. Numerous methods reported for single-label classification (SLC) or multi-label classification (MLC) are based on content and metadata features. Content-based techniques offer better results owing to the richness of available features, but their drawback is that the full text is unavailable in most cases. Metadata-based parameters, such as title, keywords, and general terms, act as an alternative to content. However, existing metadata-based techniques show low accuracy because they express textual properties with traditional statistical measures such as bag-of-words (BOW), TF, and TF-IDF, which may not capture the semantic context of words. Existing MLC techniques also require a specified threshold value to map articles into predetermined categories, for which domain knowledge is necessary. The objective of this paper is to overcome these limitations of SLC and MLC techniques. To capture the semantic and contextual information of words, the suggested approach leverages the Word2Vec paradigm for textual representation. The suggested model determines threshold values through rigorous data analysis, obviating the need for domain expertise. Experimentation is carried out on two datasets from the field of computer science (JUCS and ACM). In comparison with current state-of-the-art methodologies, the proposed model performed well. Experiments yielded average accuracies of 0.86 (JUCS) and 0.84 (ACM) for SLC, and 0.81 (JUCS) and 0.80 (ACM) for MLC. On both datasets, the proposed SLC model improved accuracy by up to 4%, and the proposed MLC model by up to 3%.
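A minimal sketch of the core representation idea, under the assumption that metadata (title and keywords) is tokenized and that averaged Word2Vec vectors feed a downstream classifier. The tiny corpus, labels, and classifier choice below are illustrative placeholders, not the authors' full model or their threshold-selection procedure.

```python
# Represent a paper's metadata as the mean of its Word2Vec word vectors and
# train a classifier over subject categories. Corpus and labels are toy data.
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

papers = [
    ("convolutional networks for image classification".split(), "machine_learning"),
    ("query optimization in relational databases".split(), "databases"),
    ("deep learning for object detection".split(), "machine_learning"),
    ("transaction processing and indexing structures".split(), "databases"),
]

# Train word embeddings on the metadata corpus to capture word context,
# which BOW/TF/TF-IDF representations miss.
w2v = Word2Vec([tokens for tokens, _ in papers], vector_size=50, window=3,
               min_count=1, seed=1, epochs=50)

def embed(tokens):
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0)

X = np.stack([embed(tokens) for tokens, _ in papers])
y = [label for _, label in papers]

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([embed("indexing large relational tables".split())]))
```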



2021 ◽ Author(s): Angela Zhang, S Shailja, Cezar Borba, Yishen Miao, Michael Goebel, ...

This paper presents a deep-learning based workflow to detect synapses and predict their neurotransmitter type in electron microscopy (EM) images of the primitive chordate Ciona intestinalis (Ciona). Identifying synapses from EM images to build a full map of connections between neurons is a labor-intensive process that requires significant domain expertise. Automating synapse detection and classification would hasten the generation and analysis of connectomes. Furthermore, inferences about neuron type and function from synapse features are in many cases difficult to make. Finding the connection between synapse structure and function is an important step toward fully understanding a connectome. Activation maps derived from the convolutional neural network provide insight into the synapse features that matter for cell type and function. The main contribution of this work is the differentiation of synapses by neurotransmitter type using the structural information in their EM images, which enables prediction of neurotransmitter types for Ciona neurons that were previously unknown. The prediction model and code are available on GitHub.
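A hypothetical PyTorch sketch of the classification-plus-activation-map step: a small CNN assigns a neurotransmitter class to an EM patch, and a forward hook captures an intermediate feature map for visual inspection. The architecture, patch size, and class count are assumptions, not the paper's published model.

```python
# Classify an EM patch and grab an intermediate activation map for inspection.
# Architecture and class count are illustrative assumptions.
import torch
import torch.nn as nn

class SynapseNet(nn.Module):
    def __init__(self, n_classes=3):  # number of neurotransmitter classes (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(32, n_classes))

    def forward(self, x):
        return self.head(self.features(x))

model = SynapseNet().eval()
activations = {}
# Forward hook: store the last convolutional feature map so it can be
# visualised as a coarse "where did the network look" activation map.
model.features.register_forward_hook(lambda m, i, o: activations.update(fmap=o))

patch = torch.randn(1, 1, 64, 64)          # placeholder EM patch
with torch.no_grad():
    logits = model(patch)
print(logits.softmax(dim=1), activations["fmap"].shape)
```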



2021 ◽ Author(s): Christian Haudenschild, Louis Vaickus, Joshua Levy

Vast quantities of electronic patient medical data are currently being collated and processed in large federated data repositories. For instance, TriNetX, Inc., a global health research network, has access to more than 300 million patients, sourced from healthcare organizations, biopharmaceutical companies, and contract research organizations. Pipelines that can algorithmically extract huge quantities of patient data across multiple modalities therefore present opportunities to leverage machine learning and deep learning approaches, with the possibility of generating actionable insights. In this work, we present a modular, semi-automated, end-to-end machine and deep learning pipeline designed to interface with a federated network of structured patient data. This proof-of-concept pipeline is disease-agnostic, scalable, and requires little domain expertise or manual feature engineering to quickly produce results for a user-defined binary outcome event. We demonstrate the pipeline's efficacy with three different disease workflows, achieving high discriminatory power in all cases.
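A schematic sketch of such a modular, low-touch workflow, not the authors' actual pipeline: a scikit-learn ColumnTransformer handles imputation, scaling, and encoding of structured patient records, and cross-validated AUC is reported as one common measure of discriminatory power. The column names, toy records, and outcome definition are illustrative placeholders.

```python
# Modular, mostly automated workflow for a user-defined binary outcome over
# structured patient data. Records and columns below are invented placeholders.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Assume a flat extract of structured patient records with a binary outcome column.
df = pd.DataFrame({
    "age": [54, 61, 47, 70, 38, 66],
    "sex": ["F", "M", "F", "M", "F", "M"],
    "lab_a1c": [6.1, 8.4, 5.6, None, 5.9, 9.2],
    "outcome": [0, 1, 0, 1, 0, 1],
})

numeric = ["age", "lab_a1c"]
categorical = ["sex"]

preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer()), ("scale", StandardScaler())]), numeric),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
])

pipeline = Pipeline([("prep", preprocess), ("clf", LogisticRegression(max_iter=1000))])

# Cross-validated AUC as a generic measure of discriminatory power.
scores = cross_val_score(pipeline, df.drop(columns="outcome"), df["outcome"],
                         cv=3, scoring="roc_auc")
print(scores.mean())
```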



2021 ◽ Author(s): Fatai Adesina Anifowose

Abstract The petroleum industry continues to show growing interest in the application of artificial intelligence (AI). Most professional gatherings now have sub-themes highlighting AI applications, and the number of publications featuring them has increased. The industry, however, faces the challenge of scaling these applications up to practical and impactful levels; most end up as technical publications and narrow proofs of concept. For the industry's digital transformation objective to be fully achieved, efforts are required to overcome the current limitations. This paper discusses possible causes of the prevailing challenges and prescribes a number of recommendations to overcome them. The recommendations include ways to handle data shortage and unavailability, and how AI projects can be designed to provide more impactful solutions, regenerate missing or incomplete logs, and offer alternative workflows for estimating certain reservoir properties. The results of three successful applications are presented to demonstrate the efficacy of the recommendations. The first application estimates a log of reservoir rock cementation factors from wireline data, overcoming the limitation of the conventional approach of using a constant value. The second application uses machine learning to regenerate logs that are missing, possibly due to tool failure or bad hole conditions. The third application provides an alternative approach to estimating reservoir rock grain size, overcoming the challenges of conventional core description. Tips on how these applications can be integrated to create a bigger impact on exploration and production (E&P) workflows are shared. It is hoped that this paper will enrich current AI implementation strategy and practice, and encourage greater synergy and collaborative integration of domain expertise and AI methods to achieve the digital transformation of E&P business goals.
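As an illustration of the second application's general idea, the sketch below regenerates a "missing" well log from other wireline measurements with an off-the-shelf regression model. The log names, the synthetic relationships among them, and the model choice are assumptions, not the paper's field data or workflow.

```python
# Regenerate a missing sonic log from other wireline logs with a regression model.
# All data here are synthetic placeholders for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
gamma_ray = rng.uniform(20, 150, n)
resistivity = rng.lognormal(1.0, 0.6, n)
density = 2.0 + 0.004 * gamma_ray + rng.normal(0, 0.02, n)
# Pretend the sonic log is missing over an interval and must be regenerated.
sonic = 180 - 30 * density + 0.05 * gamma_ray + rng.normal(0, 2, n)

X = np.column_stack([gamma_ray, resistivity, density])
X_train, X_test, y_train, y_test = train_test_split(X, sonic, random_state=0)

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_train, y_train)
print("R^2 on held-out depths:", round(r2_score(y_test, model.predict(X_test)), 3))
```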


