Organizational Efficiency through Intelligent Information Technologies
Latest Publications


Total documents: 17 (five years: 0)
H-index: 1 (five years: 0)

Published by IGI Global
ISBN 9781466620476, 9781466620483

Author(s): Bilel Elayeb, Ibrahim Bounhas, Oussama Ben Khiroun, Fabrice Evrard, Narjès Bellamine-BenSaoud

This paper presents a new possibilistic information retrieval system that uses semantic query expansion. The work focuses on query expansion strategies based on external linguistic resources; here, the authors exploit the French dictionary “Le Grand Robert”. First, they model the dictionary as a graph and compute similarities between query terms by exploiting circuits in the graph. Second, possibility theory is applied through a double relevance measure (possibility and necessity) between dictionary articles and query terms. Third, these two approaches are combined using two different aggregation methods. The authors also draw on an existing approach for reweighting query terms in the possibilistic matching model to improve the expansion process. To assess and compare the approaches, they performed experiments on the standard ‘LeMonde94’ test collection.
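The double relevance measure described above can be illustrated with a minimal sketch. This is a simplification under stated assumptions: the term weights, the max/min reading of possibility and necessity, and the linear aggregation are illustrative, not the paper's exact formulas.

```python
# Illustrative sketch of a possibility/necessity double relevance
# measure. Term weights and the aggregation scheme are assumptions,
# not taken from the paper.

def possibility(doc_weights, query_terms):
    """Degree to which the document *could* be relevant:
    its best-matching query term (max)."""
    return max((doc_weights.get(t, 0.0) for t in query_terms), default=0.0)

def necessity(doc_weights, query_terms):
    """Degree to which the document is *necessarily* relevant:
    even its worst-matching query term scores well (min)."""
    return min((doc_weights.get(t, 0.0) for t in query_terms), default=0.0)

def aggregate(doc_weights, query_terms, alpha=0.5):
    """One of many possible ways to combine the two measures."""
    p = possibility(doc_weights, query_terms)
    n = necessity(doc_weights, query_terms)
    return alpha * n + (1 - alpha) * p

doc = {"graph": 0.9, "circuit": 0.4}
print(aggregate(doc, ["graph", "circuit"]))
```

A document that matches every query term at least moderately scores high on necessity, while a document that matches only one term well scores high on possibility alone; the aggregation trades these off.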



Author(s): Hokyin Lai, Minhong Wang, Huaiqing Wang

Adaptive learning approaches help learners achieve the intended learning outcomes in a personalized way. Previous studies have mistakenly equated adaptive e-Learning with personalizing the presentation style of the learning materials. The main idea of adaptive learning, however, is to personalize the learning content in a way that copes with individual differences in aptitude. In this study, an adaptive learning model is designed based on Aptitude-Treatment Interaction theory and the Constructive Alignment Model. The model aims to improve students’ learning outcomes by enhancing their intrinsic motivation to learn. It is operationalized with a multi-agent framework and validated in a controlled laboratory setting. The results are promising: individual differences among students, especially in the experimental group, narrowed significantly, and students who had difficulties in learning showed significant improvement after the test. However, the longitudinal effect of this model was not tested in this study and will be examined in future work.



Author(s): Toly Chen

This paper presents a dynamically optimized fluctuation smoothing rule to improve job scheduling performance in a wafer fabrication factory. The rule modifies the four-factor bi-criteria nonlinear fluctuation smoothing (4f-biNFS) rule by adjusting its factors dynamically. Some properties of the dynamically optimized rule are also discussed theoretically. In addition, production simulation was used to generate test data for evaluating the effectiveness of the proposed methodology. According to the experimental results, the proposed methodology outperformed several existing approaches in reducing both the average cycle time and the cycle time standard deviation. The results also showed that it is possible to improve one of these metrics without sacrificing the other.
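The general shape of a fluctuation smoothing dispatching rule can be sketched as follows. Note this is not the paper's 4f-biNFS formula, whose four factors and bi-criteria form are not reproduced here; the slack expression, field names, and the single tunable `factor` are illustrative assumptions only.

```python
# Hypothetical sketch of a fluctuation-smoothing-style dispatching rule.
# The real 4f-biNFS rule uses four factors and two criteria; here a
# single `factor` stands in for the dynamically adjusted weighting.

def slack(release_time, now, remaining_cycle_time, factor):
    """Smaller slack -> more urgent. A job that has waited long and has
    little estimated remaining cycle time gets a very small slack."""
    waited = now - release_time
    return factor * remaining_cycle_time - waited

def dispatch(queue, now, factor):
    """Pick the job with the smallest slack from the queue."""
    return min(queue, key=lambda j: slack(j["release"], now, j["remaining"], factor))

queue = [
    {"id": "A", "release": 0.0, "remaining": 10.0},
    {"id": "B", "release": 5.0, "remaining": 2.0},
]
print(dispatch(queue, now=20.0, factor=1.0)["id"])
```

Dynamically optimizing such a rule amounts to re-tuning `factor` (in the paper, four factors) as shop-floor conditions change, rather than fixing it once.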



Author(s): M. Thangamani, P. Thangaraj

The growing number of documents has aggravated the difficulty of classifying them according to specific needs. Clustering analysis in a distributed environment is an active area of artificial intelligence and data mining. Its fundamental task is to use object features to compute the degree of relatedness between objects and to accomplish automatic classification without prior knowledge. Document clustering groups highly similar documents together by computing document similarity. Recent studies have shown that ontologies are useful for improving the performance of document clustering. An ontology is a conceptualization of a domain into an individually identifiable, machine-readable format containing entities, attributes, relationships, and axioms. By analyzing existing document clustering techniques, a better clustering technique based on a Genetic Algorithm (GA) is determined. This paper uses the Non-Dominated Ranked Genetic Algorithm (NRGA) for clustering, which can provide a better classification result. Experiments are conducted on the 20 Newsgroups data set to evaluate the proposed technique. The results show that the proposed approach is effective at clustering documents in a distributed environment.
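The GA-based clustering idea can be sketched in simplified form. This is a single-objective toy version: the paper's NRGA uses non-dominated ranking over multiple objectives, which is not reproduced here, and the document vectors, fitness function, and GA parameters below are illustrative assumptions.

```python
import random

# Simplified single-objective sketch of GA-based document clustering.
# Chromosome = one cluster label per document; fitness = cohesion.

def cohesion(labels, vectors, k):
    """Sum of dot products between documents and their cluster centroid."""
    total = 0.0
    for c in range(k):
        members = [v for v, l in zip(vectors, labels) if l == c]
        if not members:
            continue
        centroid = [sum(col) / len(members) for col in zip(*members)]
        for v in members:
            total += sum(a * b for a, b in zip(v, centroid))
    return total

def evolve(vectors, k, pop_size=30, generations=60, seed=0):
    rng = random.Random(seed)
    n = len(vectors)
    pop = [[rng.randrange(k) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: cohesion(c, vectors, k), reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)             # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                # mutation
                child[rng.randrange(n)] = rng.randrange(k)
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda c: cohesion(c, vectors, k))

docs = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
best = evolve(docs, k=2)
print(best)
```

With these toy vectors, a good solution assigns the first two documents to one cluster and the last two to the other.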



Author(s): Thorsten J. Dollmann, Peter Loos, Michael Fellmann, Oliver Thomas, Andreas Hoheisel, ...

This article describes a collaboration methodology for virtual organizations (VOs) in which processes can be executed automatically using a hybrid of web service, grid, and cloud resources. Typically, deriving executable workflows from process models is cumbersome and can be automated only in part, or only for a particular distributed system. The approach introduced in this paper, exemplified in the construction industry, integrates existing technology within a process-centric framework. The solution, based on a hybrid system architecture combined with semantic methods for preserving consistency, and the framework for modeling VO processes and their automated transformation and execution, are discussed in detail.



Author(s): C. Rani, S. N. Deepa

This paper proposes a modified operator based on Particle Swarm Optimization (PSO) for designing a Genetic Fuzzy Rule Based System (GFRBS). The usual velocity update of PSO is modified by calculating the velocity from the chromosome’s individual best value and the global best value according to an updating probability, without the inertia weight, old velocity, or constriction factors. This calculation brings an intelligent information-sharing mechanism and memory capability to the Genetic Algorithm (GA) and can easily be implemented alongside other genetic operators. The performance of the proposed operator is evaluated on ten publicly available benchmark data sets. Simulation results show that the operator introduces new material into the population, thereby allowing faster and more accurate convergence without getting stuck in a local optimum. Statistical analysis of the experimental results shows that the proposed operator produces a classifier model with fewer rules and higher classification accuracy.
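The modified velocity update can be sketched from the description above: each gene moves toward the personal best and the global best with some updating probability, with no inertia weight, old velocity, or constriction factor. The exact update probability, the random scaling, and the real-valued encoding below are assumptions, not the paper's precise operator.

```python
import random

# Sketch of a PSO-inspired genetic operator: gene-wise movement toward
# the personal best (pbest) and global best (gbest), gated by an
# updating probability. No inertia weight, old velocity, or
# constriction factor is involved.

def pso_operator(chromosome, pbest, gbest, p_update=0.5, rng=random):
    child = []
    for x, pb, gb in zip(chromosome, pbest, gbest):
        if rng.random() < p_update:
            # velocity built only from the two best-known positions
            velocity = rng.random() * (pb - x) + rng.random() * (gb - x)
            child.append(x + velocity)
        else:
            child.append(x)  # gene left unchanged
    return child

parent = [0.2, 0.8, 0.5]
print(pso_operator(parent, pbest=[0.3, 0.7, 0.5], gbest=[0.1, 0.9, 0.4]))
```

Because `pbest` and `gbest` persist across generations, the operator gives the GA a memory that ordinary crossover and mutation lack.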



Author(s): Sankaradass Veeramalai, Arputharaj Kannan

As the use of web applications grows, the number of web pages matching the keywords a user submits to a search engine increases at a tremendous rate, and it is not easy for a user to retrieve exactly the web page containing the information he or she requires. This paper introduces a web page retrieval system that hybridizes context-based and collaborative filtering using fuzzy association rule classification. The authors also propose an innovative clustering of user profiles that reduces the filtering space and achieves sub-linear filtering time. The approach quickly produces recommended web page links based on information strongly associated with users’ queries, thereby improving the recall and precision of a search engine.
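The profile-clustering idea behind the sub-linear filtering time can be illustrated with a small sketch: a new profile is matched only against a few cluster representatives instead of every user. The keyword-set profiles, Jaccard similarity, and cluster names are illustrative assumptions.

```python
# Sketch of user-profile clustering for sub-linear filtering: compare a
# profile to k cluster representatives, not to all n users. Profiles
# are modeled as keyword sets purely for illustration.

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def nearest_cluster(profile, clusters):
    """clusters: {cluster name: representative keyword set}."""
    return max(clusters, key=lambda c: jaccard(profile, clusters[c]))

clusters = {
    "sports": {"football", "scores", "league"},
    "tech": {"python", "compiler", "gpu"},
}
print(nearest_cluster({"gpu", "python"}, clusters))
```

Filtering then proceeds only within the matched cluster, which is what shrinks the filtering space.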



Author(s): Yanliang Qi, Min Song, Suk-Chung Yoon, Lori deVersterre

Key-phrase extraction plays a useful role in Information Systems (IS) research areas such as digital libraries. Short metadata such as key phrases help searchers understand the concepts found in documents. This paper evaluates the effectiveness of different supervised learning techniques on biomedical full text: Sequential Minimal Optimization (SMO) and K-Nearest Neighbor, both of which could be embedded inside an information system for document search. The authors use these techniques to extract key phrases from PubMed and evaluate the systems’ performance using the holdout validation method. The paper compares the classifier techniques and the performance differences between full text and its abstract. Compared with the authors’ previous work, which investigated the performance of Naïve Bayes, Linear Regression, and SVM(reg1/2), this paper finds that SVMreg-1 performs best in key-phrase extraction for full text, whereas Naïve Bayes performs best for abstracts. These techniques should be considered for use in information system search functionality. Additional research issues are also identified.
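A toy sketch of the K-Nearest Neighbor side of such a system: candidate phrases are turned into feature vectors and classified by majority vote among the k closest labeled examples. The two features (normalized frequency, relative first-occurrence position) and the training data are illustrative assumptions, not the paper's feature set.

```python
# Toy K-Nearest Neighbor key-phrase classifier. Features and training
# data are invented for illustration; a real system would derive them
# from the corpus.

def knn_predict(candidate, training, k=3):
    """training: list of (feature_vector, is_keyphrase) pairs."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    neighbors = sorted(training, key=lambda t: dist(t[0], candidate))[:k]
    votes = sum(1 for _, label in neighbors if label)
    return votes > k // 2

# features: (normalized term frequency, relative first-occurrence position)
training = [
    ((0.9, 0.1), True), ((0.8, 0.2), True), ((0.7, 0.1), True),
    ((0.1, 0.8), False), ((0.2, 0.9), False), ((0.1, 0.7), False),
]
print(knn_predict((0.85, 0.15), training))
```

Frequent phrases appearing early in a document land near the positive examples and are voted in as key phrases.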



Author(s): Sridevi U. K., Nagaveni N.

Clustering is important for finding relevant content in a document collection, and it also reduces the search space. Current clustering research emphasizes developing more efficient clustering methods without considering domain knowledge or the user’s needs. In recent years, the semantics of documents have been utilized in document clustering. This work focuses on a clustering model in which an ontology approach is applied. The major challenge is using background knowledge in the similarity measure. This paper presents an ontology-based system for document annotation and clustering. Semi-automatic document annotation and a concept weighting scheme are used to create an ontology-based knowledge base, and a Particle Swarm Optimization (PSO) clustering algorithm is applied to obtain the clustering solution. Clustering accuracy was computed before and after combining the ontology with the Vector Space Model (VSM). The proposed ontology-based framework gives improved performance and better clustering than the traditional vector space model, and the results using the ontology were significant and promising.
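How combining ontology concepts with the VSM changes the similarity measure can be sketched briefly: terms that map to the same ontology concept reinforce each other, so documents become similar even without shared surface terms. The concept map, the weighting constant, and the cosine measure below are illustrative assumptions, not the paper's scheme.

```python
# Sketch of concept-augmented vector space similarity: a weighted
# concept dimension is added on top of the plain term dimensions.
# The concept map and weight are illustrative only.

concept_of = {"car": "vehicle", "automobile": "vehicle", "train": "vehicle"}

def concept_vector(term_weights, concept_weight=0.5):
    vec = dict(term_weights)
    for term, w in term_weights.items():
        c = concept_of.get(term)
        if c:
            vec[c] = vec.get(c, 0.0) + concept_weight * w
    return vec

def cosine(a, b):
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in set(a) | set(b))
    na = sum(v * v for v in a.values()) ** 0.5
    nb = sum(v * v for v in b.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

d1 = concept_vector({"car": 1.0})
d2 = concept_vector({"automobile": 1.0})
print(cosine(d1, d2))  # > 0 even though the documents share no terms
```

Under the plain VSM these two documents have zero similarity; the shared "vehicle" concept is what lets a clustering algorithm group them.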



Author(s): Jeremiah D. Deng, Martin Purvis, Maryam Purvis

Software development effort estimation is important for quality management in the software industry, yet its automation remains a challenging issue, and applying machine learning algorithms alone often cannot achieve satisfactory results. This paper presents an integrated data mining framework that incorporates domain knowledge into a series of data analysis and modeling processes, including visualization, feature selection, and model validation. An empirical study of the software effort estimation problem on a benchmark dataset shows the necessity and effectiveness of the proposed approach.
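One step of such a pipeline, feature selection, can be sketched with a minimal example: rank candidate features by absolute correlation with the effort target before modeling. The toy project data and the choice of Pearson correlation are illustrative assumptions, not the paper's benchmark or method.

```python
# Minimal correlation-based feature ranking sketch for effort
# estimation. Data are invented for illustration.

def pearson(xs, ys):
    """Pearson correlation between two equal-length numeric lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

effort = [12, 21, 33, 41]          # person-months (target)
features = {
    "loc":  [10, 20, 30, 40],      # lines of code (thousands)
    "team": [3, 1, 4, 1],          # team size
}
ranked = sorted(features, key=lambda f: -abs(pearson(features[f], effort)))
print(ranked)
```

Domain knowledge enters exactly at this kind of step: an expert can veto a spuriously correlated feature or keep a weakly correlated one known to matter.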


