Information Needs
Recently Published Documents

Total documents: 5739 (five years: 1815)
H-index: 79 (five years: 10)

2022 ◽  
Vol 40 (4) ◽  
pp. 1-24
Author(s):  
Yongqi Li ◽  
Wenjie Li ◽  
Liqiang Nie

In recent years, conversational agents have provided natural and convenient access to useful information in people's daily lives, along with a broad new research topic: conversational question answering (QA). Building on conversational QA, we study the conversational open-domain QA problem, where users' information needs are presented in a conversation and exact answers must be extracted from the Web. Despite its significance and value, building an effective conversational open-domain QA system is non-trivial due to the following challenges: (1) precisely understanding conversational questions based on the conversation context; (2) extracting exact answers by capturing the answer dependency and transition flow in a conversation; and (3) deeply integrating question understanding and answer extraction. To address these issues, we propose an end-to-end Dynamic Graph Reasoning approach to Conversational open-domain QA (DGRCoQA for short). DGRCoQA comprises three components: a dynamic question interpreter (DQI), a graph reasoning enhanced retriever (GRR), and a typical Reader. The first is developed to understand and formulate conversational questions, while the other two are responsible for extracting an exact answer from the Web. In particular, the DQI understands conversational questions by utilizing the QA context, sourced from predicted answers returned by the Reader, to dynamically attend to the most relevant information in the conversation context. Afterwards, the GRR attempts to capture the answer flow and select the passage most likely to contain the answer by reasoning over answer paths in a dynamically constructed context graph. Finally, the Reader, a reading comprehension model, predicts a text span from the selected passage as the answer. DGRCoQA demonstrates its strength in extensive experiments conducted on a benchmark dataset, significantly outperforming existing methods and achieving state-of-the-art performance.
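To make the pipeline concrete, the sketch below traces the dataflow the abstract describes: the DQI reformulates each question using earlier predicted answers, the GRR selects a passage, and the Reader extracts a span. The class and method names are hypothetical placeholders, not the authors' released code.

```python
# Hypothetical sketch of the DGRCoQA dataflow. The component objects
# (interpreter, retriever, reader) are placeholders for the paper's modules.

class DGRCoQAPipeline:
    def __init__(self, interpreter, retriever, reader):
        self.interpreter = interpreter  # DQI: reformulates the current question
        self.retriever = retriever      # GRR: reasons over a dynamic context graph
        self.reader = reader            # span-extraction reading comprehension model

    def answer_conversation(self, questions):
        history = []  # (question, predicted answer) pairs from earlier turns
        answers = []
        for question in questions:
            # DQI attends over the conversation context, conditioned on the
            # answers the Reader predicted for earlier turns.
            query = self.interpreter.reformulate(question, history)
            # GRR scores candidate passages by reasoning over answer paths
            # in a context graph built for this turn, and picks the best one.
            passage = self.retriever.select_passage(query, history)
            # The Reader extracts a text span from the selected passage.
            answer = self.reader.extract_span(query, passage)
            history.append((question, answer))
            answers.append(answer)
        return answers
```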


Semantic Web technology is not as new as most of us assume; it has evolved over the years. Linked Data is the term recently given to the Semantic Web. The Semantic Web is a continuation of Web 2.0 and is intended to replace existing technologies. It is built on natural language processing and provides solutions to many prevailing issues. Web 3.0 is the version of the Semantic Web that caters to the information needs of half of the population on Earth. This paper links two important current concerns, information security and the online education enforced by COVID-19, with the Semantic Web. The steganography requirement for the Semantic Web is discussed in detail, since encryption alone, even where applied, is inadequate to provide protection. Web 2.0 issues concerning online education and Semantic Web solutions are discussed. An extensive literature survey has been conducted on the architecture of Web 3.0, the history of online education, and security architecture. Finally, the Semantic Web is here to stay, and data hiding combined with encryption makes it robust.
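As a minimal illustration of the encrypt-then-hide idea the paper argues for, the sketch below encrypts a message and embeds the ciphertext in the least significant bits of an image array. This is the generic textbook technique, not the paper's scheme; it assumes the third-party numpy and cryptography packages, and the cover image and message are invented.

```python
# Encrypt-then-hide sketch: encrypt a payload, then overwrite the least
# significant bit of each cover-image byte with one ciphertext bit.
import numpy as np
from cryptography.fernet import Fernet  # assumes the 'cryptography' package

def embed_lsb(cover: np.ndarray, payload: bytes) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = cover.flatten()
    if bits.size > flat.size:
        raise ValueError("cover image too small for payload")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(cover.shape)

key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"hidden course material")     # toy message
cover = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # toy cover image
stego = embed_lsb(cover, ciphertext)
```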


2022 ◽  
Vol 40 (3) ◽  
pp. 1-47
Author(s):  
Ameer Albahem ◽  
Damiano Spina ◽  
Falk Scholer ◽  
Lawrence Cavedon

In many search scenarios, such as exploratory, comparative, or survey-oriented search, users interact with dynamic search systems to satisfy multi-aspect information needs. These systems utilize different dynamic approaches that exploit various types of user feedback granularity. Although studies have provided insights about the role of many components of these systems, they used black-box and isolated experimental setups; therefore, the effects of these components and their interactions are still not well understood. We address this by following a methodology based on Analysis of Variance (ANOVA). We built a Grid of Points consisting of systems based on different ways to instantiate three components: initial rankers, dynamic rerankers, and user feedback granularity. Using evaluation scores based on the TREC Dynamic Domain collections, we built several ANOVA models to estimate the effects. We found that (i) although all components significantly affect search effectiveness, the initial ranker has the largest effect size; (ii) the effect sizes of these components vary based on the length of the search session and the effectiveness metric used; and (iii) initial rankers and dynamic rerankers have more prominent effects than user feedback granularity. To improve effectiveness, we recommend improving the quality of initial rankers and dynamic rerankers. This does not require eliciting detailed user feedback, which can be expensive or invasive.
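A minimal sketch of the kind of ANOVA analysis described above, fitting a factorial model over the Grid of Points with the statsmodels formula API. The CSV file and column names are hypothetical; each row is assumed to be one system configuration evaluated on one topic.

```python
# Sketch of an ANOVA over Grid-of-Points evaluation scores.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Assumed columns: 'score' (e.g., nDCG), 'ranker' (initial ranker),
# 'reranker' (dynamic reranker), 'feedback' (granularity), 'topic'.
df = pd.read_csv("grid_of_points_scores.csv")  # hypothetical file

# Factorial model: main effects, two-way interactions, topic as a blocking factor.
model = smf.ols(
    "score ~ C(ranker) * C(reranker) + C(ranker) * C(feedback) "
    "+ C(reranker) * C(feedback) + C(topic)",
    data=df,
).fit()
anova_table = sm.stats.anova_lm(model, typ=2)  # Type II sums of squares
print(anova_table)  # effect sizes (e.g., omega squared) can be derived from this
```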


2022 ◽  
Vol 40 (4) ◽  
pp. 1-32
Author(s):  
Alexander Frummet ◽  
David Elsweiler ◽  
Bernd Ludwig

As conversational search becomes more pervasive, it becomes increasingly important to understand users' underlying information needs when they converse with such systems in diverse domains. We conduct an in situ study to understand the information needs that arise in a home cooking context, as well as how they are verbally communicated to an assistant; a human experimenter plays this role in our study. Based on the transcriptions of utterances, we derive a detailed hierarchical taxonomy of the diverse information needs occurring in this context, which require different levels of assistance to be solved. The taxonomy shows that needs can be communicated through different linguistic means and require different amounts of context to be understood. As a second contribution, we perform classification experiments to determine the feasibility of predicting the type of information need a user has at a given point in a dialogue, using the dialogue turn as input. For this multi-label classification problem, we achieve average F1 measures of 40% using BERT-based models. We demonstrate with examples which types of needs are difficult to predict and show why, concluding that models need to incorporate more context information to improve both information need classification and assistance, and thereby make such systems usable.
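A hedged sketch of a BERT-based multi-label classifier of the kind the experiments use. The label set and example turn are illustrative inventions, and the model here is untrained; it would need fine-tuning on the study's annotated dialogues before its predictions mean anything.

```python
# Multi-label classification of information-need types from a dialogue turn.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

NEED_TYPES = ["amount", "temperature", "equipment", "technique"]  # hypothetical labels

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(NEED_TYPES),
    problem_type="multi_label_classification",  # sigmoid outputs + BCE loss
)

turn = "How long should I knead the dough before it goes in the oven?"
inputs = tokenizer(turn, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.sigmoid(logits)[0]          # independent probability per label
predicted = [t for t, p in zip(NEED_TYPES, probs) if p > 0.5]
print(predicted)  # meaningless until the model is fine-tuned
```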


2022 ◽  
Vol 40 (2) ◽  
pp. 1-29
Author(s):  
Jun Yang ◽  
Weizhi Ma ◽  
Min Zhang ◽  
Xin Zhou ◽  
Yiqun Liu ◽  
...  

Recommendation in the legal scenario (Legal-Rec) is a specialized recommendation task that aims to provide potentially helpful legal documents to users. There are three main differences compared with traditional recommendation: (1) Both the structural connections and the textual contents of legal information are important in the Legal-Rec scenario, so feature fusion is crucial here. (2) Legal-Rec users prefer the newest legal cases (the latest legal interpretations and legal practice), which leads to a severe new-item problem. (3) Unlike users in other scenarios, most Legal-Rec users are domain experts. They often concentrate on a few topics and have more stable information needs, so it is important to model user interests accurately. To the best of our knowledge, existing recommendation work cannot handle these challenges simultaneously. To address them, we propose a legal information enhanced graph neural network-based recommendation framework (LegalGNN). First, a unified legal content and structure representation model is designed for feature fusion, where a Heterogeneous Legal Information Network (HLIN) is constructed to connect the structural features (e.g., the knowledge graph) and contextual features (e.g., the content of legal documents) for training. Second, to model user interests, we incorporate the queries users issued in legal systems into the HLIN and link them with both the retrieved documents and the users who issued them. This extra information is not only helpful for estimating user preferences but also valuable for cold users/items (those with little interaction history) in this scenario. Third, a graph neural network with a relational attention mechanism is applied to exploit high-order connections in the HLIN for Legal-Rec. Experimental results on a real-world legal dataset verify that LegalGNN significantly outperforms several state-of-the-art methods. As far as we know, LegalGNN is the first graph neural model for legal recommendation.
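The sketch below illustrates one plausible form of relational attention aggregation over a heterogeneous graph such as the HLIN, where each edge carries a relation type (e.g., user-query, query-document). It is an assumption-laden illustration of the general technique, not the authors' implementation; dimensions and the relation set are made up.

```python
# Relation-aware attention aggregation over typed edges (PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationalAttentionLayer(nn.Module):
    def __init__(self, dim: int, num_relations: int):
        super().__init__()
        # One linear transform per relation type.
        self.rel_transform = nn.Parameter(torch.randn(num_relations, dim, dim) * 0.02)
        self.attn = nn.Linear(2 * dim, 1)

    def forward(self, h, edge_index, edge_type):
        # h: [num_nodes, dim]; edge_index: [2, num_edges] (src, dst) node ids;
        # edge_type: [num_edges] relation id per edge.
        src, dst = edge_index
        # Transform each source embedding with its edge's relation matrix.
        msg = torch.einsum("ed,edk->ek", h[src], self.rel_transform[edge_type])
        scores = F.leaky_relu(self.attn(torch.cat([h[dst], msg], dim=-1))).squeeze(-1)
        # Softmax over the incoming edges of each destination node.
        alpha = torch.zeros_like(scores)
        for node in dst.unique():
            mask = dst == node
            alpha[mask] = F.softmax(scores[mask], dim=0)
        out = torch.zeros_like(h)
        out.index_add_(0, dst, alpha.unsqueeze(-1) * msg)  # weighted message sum
        return F.relu(out)
```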


2022 ◽  
Vol 14 (2) ◽  
pp. 407
Author(s):  
Jongjin Seo ◽  
Haklim Choi ◽  
Young-Suk Oh

Aerosols in the atmosphere play an essential role in the radiative transfer process due to their scattering, absorption, and emission, and they interfere with the retrieval of atmospheric properties from ground-based and satellite remote sensing. Accurate aerosol information therefore needs to be obtained. Herein, we developed an optimal-estimation-based aerosol optical depth (AOD) retrieval algorithm using the hyperspectral infrared downwelling emitted radiance of the Atmospheric Emitted Radiance Interferometer (AERI). The proposed algorithm is based on the fact that the thermal infrared radiance measured by a ground-based remote sensor is sensitive to the thermodynamic profile and the aerosol load of the atmosphere. To assess the performance of the algorithm, AERI observations measured throughout the day on 21 October 2010 at Anmyeon, South Korea, were used. The derived thermodynamic profiles and AODs were compared with those of the European Centre for Medium-Range Weather Forecasts reanalysis version 5 (ERA5) and the Global Atmosphere Watch precision-filter radiometer (GAW-PFR), respectively. The radiances simulated with aerosol information matched the AERI-observed radiances better than those simulated without aerosol (i.e., clear sky). The temporal variation of the retrieved AOD matched that of GAW-PFR well, although small discrepancies were present at high aerosol concentrations. This demonstrates the potential of the method for retrieving nighttime AOD.
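For readers unfamiliar with optimal estimation, the sketch below shows one Gauss-Newton step of the standard Rodgers (2000) update that such retrievals typically iterate to convergence. The forward model F and Jacobian K are placeholders for a radiative-transfer code; this is the textbook formulation, not the paper's specific implementation.

```python
# One Gauss-Newton iteration of the optimal-estimation (n-form) update.
import numpy as np

def oem_step(x, x_a, y, F, K, S_a, S_e):
    """x_{i+1} = x_a + (S_a^-1 + K^T S_e^-1 K)^-1 K^T S_e^-1 (y - F(x_i) + K (x_i - x_a))

    x   : current state vector (e.g., AOD plus thermodynamic profile)
    x_a : a priori state;  y : observed radiances (e.g., AERI spectrum)
    F   : forward model, F(x) -> simulated radiances
    K   : Jacobian dF/dx evaluated at x, shape [n_obs, n_state]
    S_a : a priori covariance;  S_e : observation-error covariance
    """
    Se_inv = np.linalg.inv(S_e)
    Sa_inv = np.linalg.inv(S_a)
    # Gain matrix maps radiance-space residuals back to state space.
    gain = np.linalg.solve(Sa_inv + K.T @ Se_inv @ K, K.T @ Se_inv)
    return x_a + gain @ (y - F(x) + K @ (x - x_a))
```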


Author(s):  
أحمد ماهر خفاجة شحاتة

Despite the availability of millions of information resources on the internet, Arabic digital content represents a relatively small percentage compared with the information available in other languages. The limited size of Arabic content, the lack of an adequate number of Arabic databases that organize this content and make it available to Arab readers, and the lack of novelty and originality are the main issues characterizing Arabic content on the internet. The aim of the current study is to clarify Arab scholars' perceptions of the quality, reliability, and suitability of the Arabic digital content available on the internet. A quantitative approach was adopted to answer the research questions. A questionnaire was distributed online to a sample of Arab scholars to assess the quality and reliability of Arabic digital content. The questionnaire also sought to identify the extent to which current Arabic digital content meets growing information needs, to identify Arab scholars' uses of Arabic content, and to discover the criteria that determine digital content suitability. The findings revealed that Arab scholars believe Arabic digital content is weak and lacks originality. In addition, the results indicated that Arabic digital content on the internet does not satisfy scholars' needs, which forces them to use English information resources to compensate for the lack of Arabic resources. The study recommends establishing mechanisms to support Arabic digital content and increasing academic institutions' role in enhancing it by encouraging and supporting scholarly research in the Arabic language.


2022 ◽  
Vol In Press (In Press) ◽  
Author(s):  
Azam Sabahi ◽  
Farkhondeh Asadi ◽  
Shahin Shadnia ◽  
Reza Rabiei ◽  
Azamossadat Hosseini

Background: The prevalence of poisoning is on the rise in Iran. A poisoning registry is a key source of information about poisoning patterns used for decision-making and healthcare provision, and a minimum dataset (MDS) is a prerequisite for developing a registry. Objectives: This study aimed to design an MDS for a poisoning registry. Methods: This applied study was conducted in 2021. The poisoning MDS was developed through a four-stage process: (1) conducting a systematic review of the Web of Science, Scopus, PubMed, and EMBASE; (2) examining poisoning-related websites and online forms; (3) classifying data elements in separate meetings with three toxicology specialists; and (4) validating data elements using a two-stage Delphi technique. A researcher-made checklist was employed for this purpose. The content validity of the checklist was examined based on the opinions of five experts in health information management and medical informatics familiar with the topic of the study. Its test-retest reliability was also confirmed with 25 experts (r = 0.8). Results: Overall, 368 data elements were identified from the articles and forms, of which 358 were confirmed via the two-stage Delphi technique and classified into administrative (n = 88) and clinical (n = 270) data elements. Conclusions: The creation of a poisoning registry requires identifying the information needs of healthcare centers, and an integrated, comprehensive framework should be developed to meet these needs. To this end, an MDS contains the essential data elements that form a framework for integrated and standardized data collection.
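As a small illustration of the test-retest reliability check reported above (r = 0.8), the sketch below computes a Pearson correlation between two administrations of the same checklist. The ratings are invented for illustration, not the study's data.

```python
# Test-retest reliability as the Pearson correlation between two rounds
# of the same checklist administered to the same raters.
import numpy as np

round1 = np.array([4, 5, 3, 4, 5, 2, 4, 3])  # expert ratings, first round (toy data)
round2 = np.array([4, 4, 3, 5, 5, 2, 4, 4])  # same experts, second round (toy data)
r = np.corrcoef(round1, round2)[0, 1]
print(f"test-retest reliability r = {r:.2f}")
```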

