Efficient retrieval of ontology fragments using an interval labeling scheme

2009 ◽  
Vol 179 (24) ◽  
pp. 4151-4173 ◽  
Author(s):  
Victoria Nebot ◽  
Rafael Berlanga

2019 ◽
Vol 8 (3) ◽  
pp. 4602-4611

In recent studies, ontology construction has played an important role in translating raw text into useful knowledge. The proposed methodology supports efficient retrieval using multidimensional theory and applies integrated data training techniques before entering the trial process. The approach uses a Semantic and Thematic Graph Generation Process to extract useful knowledge, and employs data mining techniques and web solutions to present that knowledge as well as to improve search speed and information retrieval accuracy. An established ontology can help clarify what different ideas and relationships mean. Because ontology repositories have grown, the matching process can take a long time. To avoid this, the method produces a hierarchical structure with an in-depth interpretation of the data. The system is designed to remove domain dependencies using a dynamic labeling scheme based on a basic theorem, and the results show that a domain-independent ontology can be constructed automatically.
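As a minimal illustrative sketch of an interval labeling scheme for hierarchies (not the authors' implementation; the function names and the toy ontology below are hypothetical), each tree node can be assigned a [start, end] interval during a depth-first traversal, so that ancestor/descendant tests reduce to interval containment:

```python
# Minimal sketch of an interval (pre/post) labeling scheme for a tree-shaped
# ontology fragment. Hypothetical example, not the paper's implementation:
# each node gets an interval [start, end] from a depth-first traversal, and
# node A is an ancestor of node B iff A's interval strictly contains B's.

def label_intervals(tree, root):
    """tree: dict mapping node -> list of children; returns node -> (start, end)."""
    labels, counter = {}, 0

    def visit(node):
        nonlocal counter
        start = counter
        counter += 1
        for child in tree.get(node, []):
            visit(child)
        labels[node] = (start, counter)
        counter += 1

    visit(root)
    return labels

def is_ancestor(labels, a, b):
    """True iff a is a proper ancestor of b under the interval labeling."""
    (sa, ea), (sb, eb) = labels[a], labels[b]
    return sa < sb and eb < ea

# Toy ontology: Thing -> {Agent -> {Person}, Event}
tree = {"Thing": ["Agent", "Event"], "Agent": ["Person"]}
labels = label_intervals(tree, "Thing")
assert is_ancestor(labels, "Thing", "Person")
assert not is_ancestor(labels, "Event", "Person")
```

Because the labels are computed once up front, an ancestor check costs two integer comparisons instead of a path traversal, which is what makes interval schemes attractive for retrieving ontology fragments.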


2014 ◽  
Vol 1 (1) ◽  
pp. 89 ◽  
Author(s):  
Dong-ling Xu ◽  
Chris Foster ◽  
Ying Hu ◽  
Jian-bo Yang

Author(s):  
Kyungkoo Jun

Background & Objective: This paper proposes a Fourier-transform-inspired method to classify human activities from time series sensor data. Methods: Our method begins by decomposing the 1D input signal into 2D patterns, a step motivated by the Fourier transform. The decomposition is aided by a Long Short-Term Memory (LSTM) network, which captures the temporal dependencies in the signal and produces encoded sequences. These sequences, once arranged into a 2D array, can represent fingerprints of the signals. The benefit of this transformation is that we can exploit recent advances in deep learning models for image classification, such as the Convolutional Neural Network (CNN). Results: The proposed model is therefore a combination of an LSTM and a CNN. We evaluate the model on two data sets. On the first data set, which is more standardized than the other, our model outperforms or at least equals previous works. For the second data set, we devise schemes to generate training and testing data by varying the window size, the sliding size, and the labeling scheme. Conclusion: The evaluation results show that accuracy exceeds 95% in some cases. We also analyze the effect of these parameters on performance.
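As a hedged sketch of the LSTM-to-CNN pipeline described above (the layer sizes, class count, and module names are illustrative assumptions, not the paper's configuration):

```python
# Hypothetical sketch of the LSTM -> 2D arrangement -> CNN pipeline (PyTorch);
# dimensions and names are illustrative, not the paper's actual architecture.
import torch
import torch.nn as nn

class LstmCnnClassifier(nn.Module):
    def __init__(self, n_features=3, hidden=32, n_classes=6):
        super().__init__()
        # The LSTM encodes the 1D sensor signal into a sequence of hidden states.
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        # The (seq_len x hidden) state matrix is treated as a 1-channel image.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.fc = nn.Linear(16 * 4 * 4, n_classes)

    def forward(self, x):                 # x: (batch, seq_len, n_features)
        seq, _ = self.lstm(x)             # (batch, seq_len, hidden)
        img = seq.unsqueeze(1)            # (batch, 1, seq_len, hidden)
        feats = self.cnn(img).flatten(1)  # (batch, 16 * 4 * 4)
        return self.fc(feats)             # class logits

# Example: a batch of 8 windows of 128 samples from a 3-axis accelerometer.
logits = LstmCnnClassifier()(torch.randn(8, 128, 3))
print(logits.shape)  # torch.Size([8, 6])
```

The key step is treating the LSTM's hidden-state matrix as a one-channel image, which lets standard 2D convolutions serve as the classification backbone.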


Author(s):  
D. Sangeetha ◽  
S. Sibi Chakkaravarthy ◽  
Suresh Chandra Satapathy ◽  
V. Vaidehi ◽  
Meenalosini Vimal Cruz

2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Ross M. Lawrence ◽  
Eric W. Bridgeford ◽  
Patrick E. Myers ◽  
Ganesh C. Arvapalli ◽  
Sandhya C. Ramachandran ◽  
...  

Using brain atlases to localize regions of interest is a requirement for making neuroscientifically valid statistical inferences. These atlases, represented in volumetric or surface coordinate spaces, can describe brain topology from a variety of perspectives. Although many human brain atlases have circulated the field over the past fifty years, limited effort has been devoted to their standardization. Standardization can facilitate consistency and transparency with respect to orientation, resolution, labeling scheme, file storage format, and coordinate space designation. Our group has worked to consolidate an extensive selection of popular human brain atlases into a single, curated, open-source library, where they are stored following a standardized protocol with accompanying metadata, which can serve as the basis for future atlases. The repository containing the atlases, the specification, as well as relevant transformation functions is available in the neuroparc OSF registered repository or https://github.com/neurodata/neuroparc.
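As a hedged illustration of inspecting such a standardized volumetric atlas (the file name below is a hypothetical placeholder; the actual layout is defined by the neuroparc specification):

```python
# Hypothetical example of inspecting a volumetric atlas stored as NIfTI;
# the file path is a placeholder, not part of the neuroparc specification.
import numpy as np
import nibabel as nib

img = nib.load("atlas.nii.gz")  # hypothetical atlas file
data = np.asarray(img.get_fdata(), dtype=int)

# Voxel resolution and orientation come from the header/affine.
print("shape:", data.shape)
print("voxel sizes (mm):", img.header.get_zooms())

# Region labels are the distinct non-zero voxel values.
labels = np.unique(data[data > 0])
print(f"{labels.size} labeled regions, e.g. {labels[:5]}")
```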


2021 ◽  
Vol 13 (15) ◽  
pp. 8433 ◽
Author(s):  
Fatima Lambarraa-Lehnhardt ◽  
Rico Ihle ◽  
Hajar Elyoubi

The Green Moroccan Plan (GMP) is a long-term national strategy launched by the Moroccan government to support the agricultural sector as the main driver of social and economic development. The GMP involves a labeling strategy based on geographical indications, aimed at protecting and promoting the marketing of locally produced food specialties and at linking their specific qualities and reputations to their domestic production region. We evaluated the success of this policy by comparing consumers' attitudes and preferences toward a local product carrying a geographical indication label with those toward one without. We conducted a survey of 500 consumers in major Moroccan cities. The potential consumer base for the local product was found to be segmented, indicating a potential domestic niche of environmentally aware consumers who prefer organically and sustainably produced food. We applied the analytic hierarchy process to prioritize the attributes of the commodities of interest; the results underscore the importance of origin when choosing a local product without origin labeling, whereas for the labeled product, intrinsic quality attributes are considered more important. These findings demonstrate the limited promotion of the established origin labeling in the domestic market. Hence, we recommend that the Moroccan government reinforce the labeling scheme with an organic label to capture the market potential of environmentally aware consumers by ensuring sustainable production of local products.
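The analytic hierarchy process mentioned above derives attribute weights as the principal eigenvector of a pairwise-comparison matrix. A minimal sketch, with an invented comparison matrix rather than the survey data:

```python
# Minimal sketch of the analytic hierarchy process (AHP) weight derivation:
# attribute priorities are the principal eigenvector of a pairwise-comparison
# matrix. The attribute names and matrix values are invented for illustration.
import numpy as np

attributes = ["origin", "price", "intrinsic quality"]

# A[i, j] encodes how much more important attribute i is than attribute j
# on Saaty's 1-9 scale; the matrix must be reciprocal (A[j, i] = 1 / A[i, j]).
A = np.array([
    [1.0, 3.0, 0.5],
    [1/3, 1.0, 1/4],
    [2.0, 4.0, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()

# Consistency ratio: CR = ((lambda_max - n) / (n - 1)) / RI, with RI = 0.58 for n = 3.
lam = np.max(np.real(eigvals))
n = A.shape[0]
cr = (lam - n) / (n - 1) / 0.58

for name, w in zip(attributes, weights):
    print(f"{name}: {w:.3f}")
print(f"consistency ratio: {cr:.3f}  (commonly deemed acceptable if < 0.1)")
```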


1997 ◽  
Vol 3 (4) ◽  
pp. 317-345 ◽  
Author(s):  
JOSÉ M. GOÑI ◽  
JOSÉ C. GONZÁLEZ ◽  
ANTONIO MORENO

We present a lexical platform that has been developed for the Spanish language. It achieves portability across computer systems and efficiency in terms of speed and lexical coverage. A model for the full treatment of Spanish inflectional morphology for verbs, nouns and adjectives is presented. This model permits word formation based solely on morpheme concatenation, driven by a feature-based unification grammar. The run-time lexicon is a collection of allomorphs for both stems and endings. Although not tested, the model should also be suitable for other Romance and other highly inflected languages. A formalism is also described for encoding a lemma-based lexical source, well suited to expressing linguistic generalizations: inheritance classes, lemma encoding, morpho-graphemic allomorphy rules and limited type-checking. From this source base, we can automatically generate an allomorph-indexed dictionary adequate for efficient retrieval and processing. A set of software tools has been implemented around this formalism: lexical base augmenting aids, lexical compilers to build run-time dictionaries and access libraries for them, feature manipulation libraries, unification and pseudo-unification modules, morphological processors, a parsing system, etc. Software interfaces among the different modules and tools are cleanly defined to ease software integration and tool combination in a flexible way. Directions for accessing our e-mail and web demonstration prototypes are also provided. Some figures are given, showing the lexical coverage of our platform compared to some popular spelling checkers.
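As a toy sketch of how feature-based unification can drive morpheme concatenation in the spirit of this model (the feature names and lexical entries below are invented for illustration, not the platform's formalism):

```python
# Toy sketch of feature-based unification licensing stem + ending
# concatenation. Feature names and entries are invented, not the
# platform's actual formalism.

def unify(f1, f2):
    """Unify two flat feature dicts; return None on a feature clash."""
    result = dict(f1)
    for key, value in f2.items():
        if key in result and result[key] != value:
            return None  # clash, e.g. -ar stem with an -er ending
        result[key] = value
    return result

# Allomorph lexicon: stems and endings with the features they impose.
stems = [("habl", {"conj": "ar", "lemma": "hablar"})]
endings = [
    ("o",  {"conj": "ar", "person": 1, "number": "sg", "tense": "pres"}),
    ("as", {"conj": "ar", "person": 2, "number": "sg", "tense": "pres"}),
    ("es", {"conj": "er", "person": 2, "number": "sg", "tense": "pres"}),
]

# Word formation = concatenation licensed by successful unification.
for stem, stem_feats in stems:
    for ending, ending_feats in endings:
        feats = unify(stem_feats, ending_feats)
        if feats is not None:
            print(stem + ending, feats)
# Prints hablo and hablas; habl + es is blocked by the conjugation-class clash.
```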

