ranked list
Recently Published Documents

Total documents: 118 (five years: 72)
H-index: 12 (five years: 5)

2022 ◽  
Vol 40 (2) ◽  
pp. 1-29
Author(s):  
Yaoxin Pan ◽  
Shangsong Liang ◽  
Jiaxin Ren ◽  
Zaiqiao Meng ◽  
Qiang Zhang

The task of personalized product search aims at retrieving a ranked list of products given a user’s input query and his/her purchase history. To address this task, we propose the Personalized, Sequential, Attentive and Metric-aware (PSAM) model, which learns the semantic representations of three different categories of entities, i.e., users, queries, and products, based on users’ sequential purchase histories and the corresponding sequential queries. Specifically, a query-based attentive LSTM (QA-LSTM) model and an attention mechanism are designed to infer users’ dynamic embeddings, which are able to capture their short-term and long-term preferences. To obtain more fine-grained embeddings of the three categories of entities, a metric-aware objective is deployed in our model to force the inferred embeddings to obey the triangle inequality, which is a more realistic distance measurement for product search. Experiments conducted on four benchmark datasets show that our PSAM model significantly outperforms state-of-the-art product search baselines in terms of effectiveness, by up to a 50.9% improvement in NDCG@20. Our visualization experiments further illustrate that the learned product embeddings are able to distinguish different types of products.
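As an illustration of what a metric-aware objective looks like, the sketch below is a minimal, hypothetical example (the function name, the margin value, and the user-plus-query translation are assumptions, not the authors' exact formulation). It scores products by Euclidean distance from a personalized query vector, so scores satisfy the triangle inequality, and trains with a margin-based hinge loss that pulls purchased products closer than sampled negatives.

```python
import torch
import torch.nn.functional as F

def metric_ranking_loss(user_emb, query_emb, pos_item_emb, neg_item_emb, margin=1.0):
    """Hypothetical metric-aware ranking loss.

    Scores are Euclidean distances between a (user + query) vector and a
    product vector, so the scoring function is a true metric and therefore
    satisfies the triangle inequality (unlike an inner-product score).
    """
    anchor = user_emb + query_emb                      # personalized query representation
    d_pos = torch.norm(anchor - pos_item_emb, dim=-1)  # distance to purchased product
    d_neg = torch.norm(anchor - neg_item_emb, dim=-1)  # distance to a sampled negative
    return F.relu(margin + d_pos - d_neg).mean()

# toy usage with random embeddings
u, q, p, n = (torch.randn(8, 64, requires_grad=True) for _ in range(4))
loss = metric_ranking_loss(u, q, p, n)
loss.backward()
print(float(loss))
```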


2021 ◽  
Vol 923 (2) ◽  
pp. 144
Author(s):  
Caprice L. Phillips ◽  
Ji Wang ◽  
Sarah Kendrew ◽  
Thomas P. Greene ◽  
Renyu Hu ◽  
...  

Abstract Exoplanets with radii between those of Earth and Neptune have stronger surface gravity than Earth and can retain a sizable hydrogen-dominated atmosphere. In contrast to gas giant planets, we call these planets gas dwarf planets. The James Webb Space Telescope (JWST) will offer unprecedented insight into these planets. Here, we investigate the detectability of ammonia (NH3, a potential biosignature) in the atmospheres of seven temperate gas dwarf planets using various JWST instruments. We use petitRADTRANS and PandExo to model planet atmospheres and simulate JWST observations under different scenarios by varying cloud conditions, mean molecular weights (MMWs), and NH3 mixing ratios. A metric is defined to quantify detection significance and provide a ranked list for JWST observations in search of biosignatures in gas dwarf planets. Searching for the 10.3–10.8 μm NH3 feature using eclipse spectroscopy with the Mid-Infrared Instrument (MIRI) is very challenging in the presence of photon noise and a systematic noise floor of 12.6 ppm for 10 eclipses. Transmission spectroscopy with NIRISS, NIRSpec, and MIRI is feasible for detecting NH3 features from 1.5–6.1 μm under optimal conditions, such as a clear atmosphere and low MMWs, for a number of gas dwarf planets. We provide examples of retrieval analyses to further support the detection metric that we use. Our study shows that searching for potential biosignatures such as NH3 is feasible with a reasonable investment of JWST time for gas dwarf planets, given optimal atmospheric conditions.
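For intuition, a detection significance of this kind can be framed as a chi-square comparison between simulated spectra with and without the target molecule, given the per-bin noise and a systematic floor. The sketch below is a minimal, hypothetical illustration of that idea (variable names, units, and the simple quadrature noise model are assumptions; the paper's actual metric may be defined differently).

```python
import numpy as np

def detection_significance(spec_with_nh3, spec_without_nh3, noise_per_bin, noise_floor=12.6e-6):
    """Hypothetical detection metric: chi-square distance between simulated
    spectra with and without NH3, expressed in units of sigma.

    spec_*        : transit/eclipse depths per spectral bin (dimensionless)
    noise_per_bin : simulated 1-sigma uncertainty per bin
    noise_floor   : additional systematic floor (12.6 ppm here), added in quadrature
    """
    sigma = np.sqrt(noise_per_bin**2 + noise_floor**2)
    chi2 = np.sum(((spec_with_nh3 - spec_without_nh3) / sigma) ** 2)
    return np.sqrt(chi2)  # rough significance in sigma

# toy example: a 20 ppm NH3 feature across 5 bins with 15 ppm per-bin noise
with_nh3 = np.full(5, 1.0e-2) + 20e-6
without_nh3 = np.full(5, 1.0e-2)
print(detection_significance(with_nh3, without_nh3, np.full(5, 15e-6)))
```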


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jinzen Ikebe ◽  
Munenori Suzuki ◽  
Aya Komori ◽  
Kaito Kobayashi ◽  
Tomoshi Kameda

Abstract Enzymes with low regioselectivity of substrate reaction sites may produce multiple products from a single substrate. When a target product is produced industrially using such enzymes, the production of non-target products (byproducts) has adverse effects, such as increased purification costs and increased raw-material requirements. It is therefore necessary to develop modified enzymes that reduce byproduct production. In this paper, we report a method called mutation site prediction for enhancing the regioselectivity of substrate reaction sites (MSPER). MSPER takes conformational data for docking poses of an enzyme and a substrate as input and automatically generates a ranked list of mutation sites that destabilize docking poses for byproducts while maintaining those for target products in silico. We applied MSPER to the enzyme cytochrome P450 CYP102A1 (BM3) and two substrates to enhance the regioselectivity for four target products with different reaction sites. Thirteen of the 14 top-ranked mutation sites predicted by MSPER for the four target products succeeded in selectively enhancing regioselectivity, by up to 6.4-fold. The results indicate that MSPER can distinguish differences in substrate structures and reaction sites, and can accurately predict mutation sites that enhance regioselectivity without selection by directed-evolution screening.
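To make the ranking idea concrete, the following is a minimal sketch of one way mutation sites could be ordered from docking-pose contacts: residues that frequently contact byproduct poses but rarely contact target-product poses rank highest. This contact-count heuristic, the function name, and the residue numbers are all hypothetical and are not MSPER's actual scoring function.

```python
from collections import defaultdict

def rank_mutation_sites(byproduct_pose_contacts, target_pose_contacts):
    """Hypothetical ranking heuristic in the spirit of MSPER.

    Each argument is a list of docking poses, where a pose is the set of
    residue IDs contacting the substrate. Residues scoring high contact
    byproduct poses often and target-product poses rarely, so mutating them
    should destabilize byproduct poses while leaving target poses intact.
    """
    score = defaultdict(float)
    for pose in byproduct_pose_contacts:
        for res in pose:
            score[res] += 1.0 / len(byproduct_pose_contacts)
    for pose in target_pose_contacts:
        for res in pose:
            score[res] -= 1.0 / len(target_pose_contacts)
    return sorted(score.items(), key=lambda kv: kv[1], reverse=True)

# toy usage with hypothetical residue numbers
byproduct = [{78, 82, 181}, {78, 181, 263}]
target = [{82, 87, 328}, {87, 328}]
print(rank_mutation_sites(byproduct, target))
```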


2021 ◽  
Author(s):  
Manu Ujjwal ◽  
Gaurav Modi ◽  
Srungeer Simha

Abstract A key to successful Well, Reservoir and Facilities Management (WRFM) is an up-to-date opportunity funnel. In large mature fields, WRFM opportunity identification depends heavily on effective exploitation of measured and interpreted data. This paper presents a suite of data-driven workflows, collectively called the WRFM Opportunity Finder (WOF), that generates ranked lists of opportunities across the WRFM opportunity spectrum. The WOF was developed for a mature waterflooded asset with over 500 active wells and over 30 years of production history. The first step included data collection and cleanup using Python routines and integration of the data into an interactive visualization dashboard. The WOF used this data to generate ranked lists of the following opportunity types: (a) bean-up/bean-down candidates, (b) water shut-off candidates, (c) add-perf candidates, (d) PLT/ILT data-gathering candidates, and (e) well stimulation candidates. The WOF algorithms, implemented in Python, largely comprise rule-based workflows, with occasional use of machine learning in intermediate steps. In a large mature asset, field/reservoir/well reviews are typically conducted area by area or reservoir by reservoir and are therefore slow. It is challenging to maintain an up-to-date, holistic overview of opportunities across the field that would allow the best opportunities to be prioritized. Although the opportunity screening logic may be linked to clear physics-based rules, maturing opportunities is often difficult because it requires processing and integrating large volumes of multi-disciplinary data through laborious manual review. The WOF addressed these issues with data processing algorithms that gathered data directly from databases and applied customized processing routines, reducing data preparation and integration time by 90%. The WOF used workflows grounded in petroleum engineering principles to arrive at ranked lists of opportunities with the potential to add a 1-2% increment in oil production. The integrated visualization dashboard allowed quick and transparent validation of the identified opportunities and their ranking basis using a variety of independent checks. The results from the WOF will inform a range of business delivery elements, such as the workover and data-gathering plan, exception-based surveillance, and the facilities debottlenecking plan. The WOF exploits the best of both worlds: physics-based solutions and data-driven techniques. It offers transparent logic that is scalable and replicable across a variety of settings and hence has an edge over pure machine-learning approaches. The WOF accelerates identification of low-capex/no-capex opportunities using existing data. It promotes maximization of returns on investments already made and hence lends resilience to the business in a low oil price environment.
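As an illustration of what such rule-based screening can look like in Python, the sketch below flags two hypothetical candidate types from well-level surveillance data. The column names, thresholds, and ranking rule are illustrative assumptions only, not the asset's actual WOF logic.

```python
import pandas as pd

# Hypothetical well-level surveillance data; columns and thresholds are
# illustrative assumptions, not the asset's actual screening rules.
wells = pd.DataFrame({
    "well": ["W-101", "W-102", "W-103"],
    "oil_rate_bopd": [450.0, 120.0, 900.0],
    "water_cut_frac": [0.55, 0.92, 0.30],
    "drawdown_psi": [350.0, 800.0, 150.0],
    "max_drawdown_psi": [900.0, 900.0, 900.0],
})

# Rule 1: water shut-off candidates - high water cut with a meaningful oil rate.
wells["ws_off_candidate"] = (wells["water_cut_frac"] > 0.85) & (wells["oil_rate_bopd"] > 50)

# Rule 2: bean-up candidates - wells producing well below their allowable drawdown.
wells["bean_up_candidate"] = wells["drawdown_psi"] < 0.5 * wells["max_drawdown_psi"]

# Simple ranking: prioritize flagged wells by current oil rate as a proxy for impact.
ranked = (wells[wells["ws_off_candidate"] | wells["bean_up_candidate"]]
          .sort_values("oil_rate_bopd", ascending=False))
print(ranked[["well", "ws_off_candidate", "bean_up_candidate", "oil_rate_bopd"]])
```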


2021 ◽  
Author(s):  
Kimia Zandbiglari ◽  
Farhad Ameri ◽  
Mohammad Javadi

Abstract The unstructured data available on the websites of manufacturing suppliers can provide useful insights into the technological and organizational capabilities of manufacturers. However, because the data are often represented as unstructured natural language text, it is difficult to efficiently search, analyze, and learn from the capability data. The objective of this work is to propose a set of text analytics techniques to enable automated classification and ranking of suppliers based on their capability narratives. The supervised classification and semantic similarity measurement methods used in this research are supported by a formal thesaurus that uses SKOS (Simple Knowledge Organization System) for its syntax and semantics. Normalized Google Distance (NGD) was used as the metric for measuring the relatedness of terms. The proposed framework was validated experimentally using a hypothetical search scenario. The results indicate that the generated ranked list shows a high correlation with human judgment, especially when the query concept vector and the supplier concept vector belong to the same class. However, the correlation decreases when multiple overlapping classes of suppliers are mixed together. The findings of this research can be used to improve the precision and reliability of Capability Language Processing (CLP) tools and methods.
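Normalized Google Distance has a standard closed form, so a minimal sketch of the term-relatedness computation is straightforward: f(x) is the number of documents containing term x, f(x, y) the number containing both terms, and N the total number of indexed documents. The document counts in the example are hypothetical placeholders.

```python
import math

def ngd(f_x, f_y, f_xy, n_docs):
    """Normalized Google Distance between two terms.

    f_x, f_y : document counts containing each term alone
    f_xy     : document count containing both terms
    n_docs   : total number of indexed documents
    Smaller values mean more closely related terms; 0 means the terms
    always co-occur.
    """
    if f_xy == 0:
        return float("inf")
    log_fx, log_fy, log_fxy = math.log(f_x), math.log(f_y), math.log(f_xy)
    return (max(log_fx, log_fy) - log_fxy) / (math.log(n_docs) - min(log_fx, log_fy))

# hypothetical counts for "cnc machining" vs. "milling" in a web-scale index
print(ngd(f_x=2_500_000, f_y=8_000_000, f_xy=900_000, n_docs=25_000_000_000))
```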


Author(s):  
Sazid Zaman Khan ◽  
Alan Colman ◽  
Iqbal H. Sarker

With the rapid development of the Internet of Things (IOT), a large number of smart devices (things) are being deployed. These devices, owned by different organizations, have a wide variety of services to offer over the web. During a natural disaster or emergency (i.e., a situation), for example, relevant IOT services can be found and put to use. However, appropriate service matching methods are required to find the relevant services. Organizations that manage situation responses and organizations that provide IOT services are likely to be independent of each other, and it is therefore difficult for them to adopt a common ontological model to facilitate service matching. Moreover, there exists a large conceptual gap between the domain of discourse for situations and the domain of discourse for services, which cannot be adequately bridged by existing techniques. In this paper, we address these issues and propose a new method, WikiServe, to identify IOT services that are functionally relevant to a given situation. Using concepts (terms) from situation and service descriptions, WikiServe employs Wikipedia as a knowledge source to bridge the conceptual gap between situation and service descriptions and to match functionally relevant IOT services to a situation. It uses situation terms to retrieve situation-related articles from Wikipedia. It then creates a ranked list of services for the situation using the weighted occurrences of service terms in the weighted situation articles. WikiServe performs better than a commonly used baseline method in terms of precision, recall, and F-measure for service matching.
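The ranking step can be illustrated with a short sketch: each service is scored by the weighted count of its description terms across the weighted situation articles retrieved from Wikipedia. The function name, weighting scheme, and toy inputs below are assumptions for illustration, not WikiServe's exact implementation.

```python
from collections import Counter

def score_services(situation_articles, services):
    """Hypothetical scoring in the spirit of WikiServe.

    situation_articles : list of (article_text, article_weight) pairs retrieved
                         from Wikipedia using terms from the situation description
    services           : dict mapping a service name to its description terms
    A service's score is the weighted count of its terms across the weighted
    situation articles; services are returned as a ranked list.
    """
    scores = {}
    for name, terms in services.items():
        total = 0.0
        for text, weight in situation_articles:
            counts = Counter(text.lower().split())
            total += weight * sum(counts[t.lower()] for t in terms)
        scores[name] = total
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# toy usage with made-up article snippets and service terms
articles = [("flood water rescue boat evacuation", 0.7), ("river flood sensor warning", 0.3)]
services = {"water-level sensor": ["water", "sensor"], "traffic camera": ["traffic", "camera"]}
print(score_services(articles, services))
```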



2021 ◽  
Author(s):  
Neil R. Smalheiser ◽  
Arthur W. Holt

Objective. Evidence synthesis teams, physicians, policy makers, and patients and their families all have an interest in following the outcomes of clinical trials and would benefit from being able to evaluate both the results posted in trial registries and the publications that arise from them. Manually searching for publications arising from a given trial is a laborious and uncertain process. We sought to create a statistical model to automatically identify PubMed articles likely to report clinical outcome results from each registered trial in ClinicalTrials.gov. Materials and Methods. A machine learning-based model was trained on known pairs of registered trials and the publications linked to them. Multiple features were constructed based on the degree of matching between the PubMed article metadata and specific fields of the trial registry, as well as on matching with the set of publications already known to be linked to that trial. Results. Evaluation of the model using NCT-linked articles as the gold standard showed that linked articles tend to be top ranked (median best rank = 1.0), and 91% of them are ranked in the top ten. Discussion. Based on this model, we have created a free, public, web-based tool at http://arrowsmith.psych.uic.edu/cgi-bin/arrowsmith_uic/TrialPubLinking/trial_pub_link_start.cgi that, given any registered trial in ClinicalTrials.gov, presents a ranked list of PubMed articles in order of the estimated probability that they report clinical outcome data from that trial. The tool should greatly facilitate studies of trial outcome results and their relation to the original trial designs.
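To give a sense of what "degree of matching" features can look like, the sketch below computes a few simple trial-article matching features. The field names, the specific features, and the toy records are illustrative assumptions, not the authors' actual feature set.

```python
def jaccard(a, b):
    """Jaccard overlap between two token sets (0 when both are empty)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def matching_features(trial, article):
    """Hypothetical matching features between a ClinicalTrials.gov record and
    a PubMed article, for use by a downstream machine-learning ranker."""
    return {
        "title_overlap": jaccard(trial["title"].lower().split(),
                                 article["title"].lower().split()),
        "nct_in_abstract": float(trial["nct_id"] in article.get("abstract", "")),
        "investigator_is_author": float(any(
            pi.lower() in (a.lower() for a in article["authors"])
            for pi in trial["investigators"])),
    }

# toy records
trial = {"nct_id": "NCT01234567", "title": "Aspirin for secondary stroke prevention",
         "investigators": ["Smith"]}
article = {"title": "Secondary stroke prevention with aspirin: a randomized trial",
           "abstract": "Registered as NCT01234567.", "authors": ["Smith", "Jones"]}
print(matching_features(trial, article))
```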


2021 ◽  
Author(s):  
Yue Zhao ◽  
Ajay Anand ◽  
Gaurav Sharma

We develop and evaluate an automated, data-driven framework for providing reviewer recommendations for submitted manuscripts. Given inputs comprising a set of manuscripts for review and a listing of a pool of prospective reviewers, our system uses a publisher database to extract papers authored by the reviewers, from which a Paragraph Vector (doc2vec) neural network model is learned and used to obtain vector space embeddings of documents. Similarities between the embeddings of an individual reviewer’s papers and a manuscript are then used to compute manuscript-reviewer match scores and to generate a ranked list of recommended reviewers for each manuscript. Our mainline proposed system uses full-text versions of the reviewers’ papers, which we demonstrate performs significantly better than models developed from abstracts alone, which has been the predominant paradigm in prior work. Direct retrieval of reviewers’ manuscripts from a publisher database reduces reviewer burden, ensures up-to-date data, and eliminates the potential for misuse through data manipulation. We also propose a useful evaluation methodology that addresses hyperparameter selection and enables indirect comparisons with alternative approaches and on prior datasets. Finally, the work also contributes a large-scale retrospective reviewer matching dataset and evaluation that we hope will be useful for further research in this field. Our system is quite effective: for the mainline approach, expert judges rated 38% of the recommendations as Very Relevant, 33% as Relevant, 24% as Slightly Relevant, and only 5% as Irrelevant.
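A minimal sketch of the doc2vec embedding and scoring step, using gensim's Doc2Vec, is shown below. The toy corpus, the hyperparameters, and the mean-of-papers scoring rule are illustrative assumptions rather than the authors' exact configuration.

```python
import numpy as np
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Hypothetical corpus: texts of papers authored by prospective reviewers.
reviewer_papers = {
    "reviewer_a": ["sparse signal recovery with convex optimization methods"],
    "reviewer_b": ["deep neural networks for large scale image classification"],
}

corpus = [TaggedDocument(words=text.lower().split(), tags=[f"{r}_{i}"])
          for r, texts in reviewer_papers.items() for i, text in enumerate(texts)]

model = Doc2Vec(vector_size=100, min_count=1, epochs=40)  # small toy settings
model.build_vocab(corpus)
model.train(corpus, total_examples=model.corpus_count, epochs=model.epochs)

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def rank_reviewers(manuscript_text):
    """Score each reviewer by the mean similarity between the manuscript
    embedding and the embeddings of that reviewer's papers."""
    m_vec = model.infer_vector(manuscript_text.lower().split())
    scores = {}
    for r, texts in reviewer_papers.items():
        paper_vecs = [model.infer_vector(t.lower().split()) for t in texts]
        scores[r] = float(np.mean([cosine(m_vec, p) for p in paper_vecs]))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_reviewers("convex optimization methods for compressed sensing"))
```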


