simple query
Recently Published Documents


Total documents: 33 (last five years: 5)
H-index: 3 (last five years: 0)

Author(s): A. Zamzuri, I. Hassan, A. Abdul Rahman

Abstract. A new version of the Land Administration Domain Model (LADM) has been discussed and is under further development in ISO/TC 211 on Geographic Information. One of the extensions allows the model to accommodate complex and advanced marine properties and cadastral objects. The fundamentals of this new version (LADM Edition II) have been examined by the committee, and a few elements still need to be considered, especially for marine space georegulation. Since several researchers agree that marine cadastre can be embedded in LADM, a marine cadastre data model within the land administration context has been anticipated in many countries (e.g., Canada, Greece, Turkey, Australia, and Malaysia). Part of that research focuses on constructing and developing appropriate data models to manage marine spaces and resources effectively. Several studies have attempted to establish a conceptual model for marine cadastre in Malaysia, but there is still no generally accepted marine data model. This paper therefore proposes a marine data model for Malaysia based on the international standard, LADM. The approach can, by design, be applied to the marine environment to control and model a variety of rights, responsibilities, and restrictions. The Unified Modelling Language (UML) was used to construct the conceptual and technical models in Enterprise Architect as part of the validation process. The data model was constructed around the Malaysian marine context while conforming to the international standard. Its features were also discussed at the 9th LADM International Workshop 2021 (an FIG workshop). Experiments with the data model also include 3D visualization and a simple query.
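The paper specifies its model in UML via Enterprise Architect, which is not reproduced here. Purely as an illustration, the following sketch shows how core LADM classes (LA_Party, LA_RRR, LA_SpatialUnit from ISO 19152) might be extended with a marine spatial unit; the marine attribute names are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass, field
from typing import List

# Core LADM (ISO 19152) classes, heavily simplified for illustration.
@dataclass
class LAParty:                  # LA_Party: a person or organisation holding RRRs
    party_id: str
    name: str

@dataclass
class LARRR:                    # LA_RRR: a right, restriction, or responsibility
    rrr_type: str               # "right", "restriction", or "responsibility"
    description: str
    party: LAParty

@dataclass
class LASpatialUnit:            # LA_SpatialUnit: spatial extent of a unit
    su_id: str
    geometry_wkt: str           # geometry as WKT, e.g. a 3D volume

# Hypothetical marine extension, illustrating the kind of subclassing involved:
@dataclass
class MarineSpatialUnit(LASpatialUnit):
    depth_min_m: float = 0.0    # vertical extent of the marine parcel
    depth_max_m: float = 0.0
    resource_use: str = ""      # e.g. "aquaculture", "cable corridor"
    rrrs: List[LARRR] = field(default_factory=list)
```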


2021, Vol. 17, Issue 4
Author(s): Nils Vortmeier, Thomas Zeume

Given a graph whose nodes may be coloured red, the parity of the number of red nodes can easily be maintained with first-order update rules in the dynamic complexity framework DynFO of Patnaik and Immerman. Can this be generalised to other, or even all, queries that are definable in first-order logic extended by parity quantifiers? We consider the query that asks whether the number of nodes that have an edge to a red node is odd. Already this simple query, with quantifier structure parity-exists, is a major roadblock for dynamically capturing extensions of first-order logic. We show that this query cannot be maintained with quantifier-free first-order update rules, and that its variants induce a hierarchy for such update rules with respect to the arity of the maintained auxiliary relations. Towards maintaining the query with full first-order update rules, we show that degree-restricted variants can be maintained.
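For the easy case mentioned at the start, the auxiliary state is essentially a single parity bit. A minimal procedural sketch follows; it only illustrates the bookkeeping, not the first-order update rules the paper actually works with.

```python
# Dynamically maintain the parity of the number of red nodes:
# the auxiliary state is one bit, updated in O(1) per colour change.
class RedParity:
    def __init__(self):
        self.red = set()        # current set of red nodes
        self.parity = 0         # parity of |red|

    def set_color(self, node, is_red):
        if is_red and node not in self.red:
            self.red.add(node)
            self.parity ^= 1    # one more red node flips the parity
        elif not is_red and node in self.red:
            self.red.remove(node)
            self.parity ^= 1    # one fewer red node flips the parity

    def query(self):
        return self.parity      # 1 iff the number of red nodes is odd
```

The query studied in the paper is harder precisely because recolouring a single node can change, for many nodes at once, whether they have an edge to a red node.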


2021
Author(s): Srihari Vemuru, Eric John, Shrisha Rao

Humans can easily parse and answer complex queries such as "What was the capital of the country of the discoverer of the element which has atomic number 1?" by breaking them into small pieces, querying each appropriately, and assembling a final answer. Contemporary search engines, however, lack this capability and fail on even slightly complex queries. Search engines process a query by identifying keywords and searching against them in knowledge bases or indexed web pages; the results therefore depend on the keywords and on how well the search engine handles them. In our work, we propose a three-step approach called parsing, tree generation, and querying (PTGQ) for effectively answering larger and more expressive queries of potentially unbounded complexity. PTGQ parses a complex query and constructs a query tree in which each node represents a simple query. It then processes the complex query by recursively querying a back-end search engine, traversing the corresponding query tree in postorder. This ensures that the back-end search engine always handles a simple query containing only a few keywords. Results demonstrate that PTGQ can handle queries of much higher complexity than standalone search engines.
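As a sketch of the idea (all names here are illustrative, not PTGQ's actual implementation): the complex query becomes a tree of simple-query templates, and a postorder traversal feeds each child's answer into its parent before querying the back end.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class QueryNode:
    template: str                   # simple query with {} slots for child answers
    children: List["QueryNode"] = field(default_factory=list)

def backend_search(simple_query: str) -> str:
    """Placeholder for the back-end search engine call."""
    raise NotImplementedError

def evaluate(node: QueryNode) -> str:
    answers = [evaluate(child) for child in node.children]     # postorder
    return backend_search(node.template.format(*answers))

# The example query, decomposed bottom-up:
tree = QueryNode(
    "capital of {}",
    [QueryNode("country of {}",
               [QueryNode("discoverer of {}",
                          [QueryNode("element with atomic number 1")])])])
```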


Author(s): A. Hairuddin, S. Azri, U. Ujang, M. G. Cuétara, G. M. Retortillo, ...

Abstract. A 3D city model is a digital representation of an urban area that contains buildings and other information. Current approaches use photogrammetry and laser scanning to develop 3D city models, but these techniques are time-consuming and quite costly. Moreover, laser scanning and photogrammetry require professional skills and expertise to handle the hardware and tools. In this study, videogrammetry is proposed as a technique for developing a 3D city model. This technique uses video frame sequences to generate a point cloud. Videos are processed using EyesCloud3D by eCapture, which allows users to upload raw video and generate point clouds. The study comprises five main phases: calibration, video recording, point cloud extraction, 3D modeling, and 3D city model representation. A 3D city model with Level of Detail 2 (LoD2) is produced, and a simple query is performed against the database to retrieve the attributes of the 3D city model.
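As an illustration of that last step, a simple attribute query might look like the following; the database schema (table and column names) is hypothetical, since the paper does not list it.

```python
import sqlite3

# Retrieve attributes of LoD2 buildings from the 3D city model database.
conn = sqlite3.connect("city_model.db")
rows = conn.execute(
    """
    SELECT building_id, name, height_m, num_floors
    FROM buildings
    WHERE lod = 2 AND height_m > ?
    """,
    (20.0,),
).fetchall()

for building_id, name, height_m, num_floors in rows:
    print(f"{building_id}: {name}, {height_m} m, {num_floors} floors")
```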


2018, Vol. 14(2), pp. 124-146
Author(s): Kento Goto, Misato Kotani, Motomichi Toyama

Purpose: Database query results can be presented in many ways, and users' understanding may be improved by presenting some results, such as product images on shopping sites, in three dimensions rather than two. This paper therefore proposes a system that automatically generates a 3D virtual museum, arranging 3D objects in various layouts, from the result of querying a relational database with SuperSQL.
Design/methodology/approach: The study extends SuperSQL to generate a 3D virtual reality museum using declarative queries on relational data stored in a database.
Findings: The system makes it possible to generate varied three-dimensional virtual spaces with different layouts through simple queries.
Originality/value: The system is useful in that a complex three-dimensional virtual space can be generated by writing a simple query, and a different virtual space can be generated by slightly changing the query or the database content. When creating a virtual museum with many exhibits, or when changing its layout, the burden on the user is normally high; this system can automatically generate varied virtual museums with ease, reducing that burden.
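SuperSQL's actual query syntax is not reproduced here; the following sketch only illustrates the underlying idea of mapping relational query results to positions in a 3D exhibition space, using a simple grid layout as one example.

```python
def grid_layout(rows, per_row=5, spacing=2.0):
    """Assign each record (e.g. an exhibit image) an (x, y, z) position."""
    placed = []
    for i, record in enumerate(rows):
        x = (i % per_row) * spacing         # position within the row
        z = (i // per_row) * spacing        # one row of exhibits per aisle
        placed.append({"record": record, "pos": (x, 1.5, z)})  # y = eye height
    return placed

exhibits = [{"title": f"item {n}"} for n in range(12)]
for e in grid_layout(exhibits):
    print(e["pos"], e["record"]["title"])
```

Changing the layout function, or the query that produces the rows, yields a different museum, which is the flexibility the abstract describes.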


2018, Vol. 2, e25589
Author(s): Scott Chamberlain

There is a large amount of publicly available biodiversity data from many different sources. When doing research, one ideally interacts with biodiversity data programmatically so that the work is reproducible. The entry point to biodiversity data records is largely through taxonomic names, or common names in some cases (e.g., birds). However, many researchers have a phylogeny-focused project, meaning taxonomic names are not the ideal interface to biodiversity data. Ideally, it would be simple to programmatically go from a phylogeny to biodiversity records through a phylogeny-based query. I'll discuss a new project `phylodiv` (https://github.com/ropensci/phylodiv/) that attempts to facilitate phylogeny-based biodiversity data collection (see Fig. 1). The project takes the form of an R software package. The idea is to make the user interface take essentially two inputs: a phylogeny and a phylogeny-based question. Behind the scenes we'll do many things, including gathering taxonomic names and hierarchies for the taxa in the phylogeny, sending queries to GBIF (or other data sources), and mapping the results. The user will of course have control over the behind-the-scenes parts, but I imagine the majority use case will be to input a phylogeny and a question and expect an answer back. We already have R tools for nearly all parts of the workflow shown above: there are many phylogeny tools, `taxize`/`taxizedb` can handle taxonomic name collection, `rgbif` can handle interaction with GBIF, and there are many mapping options in R. However, a few areas still need work. First, there is not yet a clear way to express a phylogeny-based query. Ideally a user would be able to express a simple query like "taxon A vs. its sister group"; that is simple to imagine, but implementing it in software is another matter. Second, users would ideally like answers back, in this case a map of occurrences, relatively quickly, to be able to iterate on their research workflow. The most likely solution is to use GBIF's map tile service to visualize binned occurrence data, but we'll need to explore this in detail to make sure it works.
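The intended workflow can be summarised as a pipeline. phylodiv itself is an R package, and every function in this sketch is a hypothetical placeholder for a step the abstract assigns to existing R tools (phylogeny utilities, `taxize`/`taxizedb`, `rgbif`, and R mapping options).

```python
def tips_in_clade(tree, clade):
    """Return the tip taxa of the phylogeny under the given clade."""
    ...

def sister_group(tree, clade):
    """Return the clade's sister group in the phylogeny."""
    ...

def resolve_names(taxa):
    """Map taxon names to taxonomic keys (the taxize/taxizedb step)."""
    ...

def fetch_occurrences(keys):
    """Fetch occurrence records from GBIF (the rgbif step)."""
    ...

def map_occurrences(records):
    """Visualise occurrences, e.g. via GBIF's map tile service."""
    ...

# The phylogeny-based question "taxon A vs. its sister group" then becomes:
def compare_clades(tree, clade):
    focal = fetch_occurrences(resolve_names(tips_in_clade(tree, clade)))
    other = fetch_occurrences(resolve_names(tips_in_clade(tree, sister_group(tree, clade))))
    return map_occurrences(focal), map_occurrences(other)
```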


CJEM, 2017, Vol. 20(1), pp. 21-27
Author(s): Jean-Marc Chauny, Martin Marquis, Jean Paquet, Gilles Lavigne, Alexis Cournoyer, ...

Abstract
Objective: The management of acute pain constitutes an essential skill of emergency department (ED) physicians. However, the accurate assessment of pain intensity and relief represents a clinically challenging undertaking. Some studies have proposed to define effective pain relief as the patient's refusal of additional analgesic administration. The aim of this study was to verify whether such a refusal is effectively indicative of pain relief.
Methods: This prospective cohort study included ED patients who received single or multiple doses of pain medication for an acute pain problem. Patients were evaluated for pain relief using one Likert scale and two dichotomous questions: Is your pain relieved? and Do you want more analgesics? Non-relieved patients were further analysed using a checklist as to the reasons behind their refusal of supplemental pain medication.
Results: We recruited 378 adult patients with a mean age of 50.3 years (±19.1); 60% were women, and the initial mean pain level was 7.3 (±2.0) out of 10. We observed that 68 of the 244 patients who were adequately relieved of pain asked for more analgesics (28%), whereas 51 of the 134 patients who were not relieved of pain refused supplemental drugs (38%). Reasons for refusal included wanting to avoid side effects, feeling sufficiently relieved, and disliking the medication's effects.
Conclusion: Over a third of ED patients in acute pain were not relieved but refused supplemental pain medication. Patients reported legitimate reasons to decline further analgesics, and this refusal cannot be used as an indication of pain relief.


Author(s): Ellysa Tjandra, Monica Widiasri

Abstract— The Department of Informatics at University 'X' currently requires students who have completed their final project to submit the work both as a softcopy (CD) containing the application software and documentation, and as a hardcopy containing the final project report and a journal paper. The hardcopies are stored physically in the university library, and some of the data are stored in the Digital Library of University 'X'. However, the limitations of the current system make it difficult to find final project works: searches are implemented as simple queries with limited criteria and without any ranking of results. In addition, the heads of the department's labs have difficulty mapping the areas of expertise covered by the final projects carried out in each lab. These problems motivated this research, which builds a repository system that helps the department store, search, display, and manage students' final project works. Searching in this system is based on queries entered by the user and uses the Okapi BM25 ranking function, so that results are displayed in order of relevance.

Keywords— repository, final project, Okapi, BM25
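Okapi BM25 itself is a standard ranking function; a minimal sketch with the usual parameter defaults (k1 = 1.2, b = 0.75) is shown below. The tokenisation and indexing details of the repository system are not specified in the abstract, so documents are taken here simply as lists of tokens.

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.2, b=0.75):
    """Score each document (a list of tokens) against the query with Okapi BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N   # average document length
    df = Counter()                          # document frequency of each term
    for d in docs:
        df.update(set(d))
    scores = []
    for d in docs:
        tf = Counter(d)                     # term frequencies in this document
        score = 0.0
        for q in query_terms:
            if tf[q] == 0:
                continue
            idf = math.log((N - df[q] + 0.5) / (df[q] + 0.5) + 1)
            score += idf * tf[q] * (k1 + 1) / (tf[q] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(score)
    return scores
```

Ranking the repository's documents then reduces to sorting them by these scores in descending order.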

