HPM: A Hybrid Model for User's Behavior Prediction Based on N-Gram Parsing and Access Logs

2020 ◽  
Vol 2020 ◽  
pp. 1-18
Author(s):  
Sonia Setia ◽  
Verma Jyoti ◽  
Neelam Duhan

The continuous growth of the World Wide Web has led to the problem of long access delays. To reduce this delay, prefetching techniques have been used to predict users' browsing behavior and fetch web pages before the user explicitly requests them. Making near-accurate predictions of users' search behavior is a complex task that researchers have faced for many years, and various web mining techniques have been applied to it; however, each of these methods has its own set of drawbacks. In this paper, a novel hybrid prediction model is proposed that integrates usage mining and content mining techniques to tackle the individual challenges of both approaches. The proposed method uses N-gram parsing along with the click counts of queries to capture more contextual information, in an effort to improve the prediction of web pages. Evaluation of the proposed hybrid approach on AOL search logs shows, on average, a 26% increase in prediction precision and a 10% increase in hit ratio compared to other mining techniques.
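As a rough illustration of the usage-mining side of such a model, the sketch below builds a click-count-weighted N-gram table over page sequences from access logs and uses it to rank prefetch candidates. The class, data structures, and toy sessions are hypothetical; the paper's HPM additionally combines this signal with content mining, which is not shown here.

```python
# Minimal sketch: N-gram next-page prediction weighted by click counts.
# Hypothetical data structures; not the paper's full HPM model.
from collections import defaultdict

class NGramPredictor:
    def __init__(self, n=3):
        self.n = n
        self.counts = defaultdict(lambda: defaultdict(float))

    def train(self, sessions):
        """sessions: list of (page_sequence, click_count) pairs from access logs."""
        for pages, clicks in sessions:
            for i in range(len(pages) - 1):
                context = tuple(pages[max(0, i - self.n + 2):i + 1])
                # Weight each observed transition by its click count so that
                # frequently clicked results contribute more to the model.
                self.counts[context][pages[i + 1]] += clicks

    def predict(self, recent_pages, k=3):
        """Return the k most likely next pages to prefetch."""
        context = tuple(recent_pages[-(self.n - 1):])
        candidates = self.counts.get(context, {})
        return sorted(candidates, key=candidates.get, reverse=True)[:k]

# Usage with toy data:
model = NGramPredictor(n=3)
model.train([(["home", "search", "laptops"], 5), (["home", "search", "phones"], 2)])
print(model.predict(["home", "search"]))  # ['laptops', 'phones']
```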

Author(s):  
Chai Wutiwiwatchai

This article briefly reports the development of the first Thai spoken dialogue system (SDS), a Thai Interactive Hotel Reservation Agent (TIRA). In its development, a multi-stage spoken language understanding (SLU) technique was proposed, combining a word-spotting technique for concept extraction with a pattern classification technique for goal identification. To improve system performance, a novel logical n-gram modeling approach was developed for concept extraction in order to enhance SLU robustness. Furthermore, dialogue contextual information was used to aid understanding, and an error detection mechanism was constructed to filter out unreliable SLU interpretation outputs. All of these algorithms were incorporated into the TIRA system and evaluated by real users.
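The sketch below illustrates only the general word-spotting idea behind the concept-extraction stage: scan a user utterance for keyword patterns that map to reservation concepts. The patterns and concept names are illustrative assumptions, not the TIRA lexicon, and the later stages (logical n-gram modeling, goal identification, error detection) are not modeled.

```python
# Word-spotting sketch for hotel-reservation concept extraction.
# Keyword patterns and concept names are illustrative only.
import re

CONCEPT_PATTERNS = {
    "room_type":  r"\b(single|double|twin|suite)\b",
    "num_guests": r"\b(\d+)\s+(?:guests?|people|persons?)\b",
    "check_in":   r"\b(?:from|on)\s+(\w+\s+\d{1,2})\b",
}

def extract_concepts(utterance):
    """Return concept/value pairs spotted in a user utterance."""
    found = {}
    for concept, pattern in CONCEPT_PATTERNS.items():
        match = re.search(pattern, utterance.lower())
        if match:
            found[concept] = match.group(1)
    return found

print(extract_concepts("I'd like a double room for 2 people from July 14"))
# {'room_type': 'double', 'num_guests': '2', 'check_in': 'july 14'}
```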


2020 ◽  
Vol 8 (6) ◽  
pp. 2619-2624

Nowadays, the web is the primary source of information in every field, and it keeps expanding exponentially. Retrieving relevant information is very tedious and far from a simple task. Most users turn to the various search engines to look for information, but search engines are sometimes unable to return useful results because most web documents are unstructured. Data mining is the extraction of information from a large database. This project can be useful in the diagnosis, treatment, and prevention of disease. There are large numbers of documents on the web about any specific biomedical term, so obtaining a relevant record is very difficult. The objective of this project is to apply content mining strategies to retrieve useful biomedical web records. A more productive tool is proposed that uses an advanced SVM algorithm together with a clustering algorithm so that similar documents can be grouped in one place. This paper proposes web mining algorithms designed to extract textual information from web pages and apply it to web applications, making the system useful across biomedical sectors. Search engines can then be used to categorize web pages into the biomedical structure. This methodology assists the client in getting all the important, relevant biomedical information in one place. Comparing this methodology with the original SVM algorithm and an improved k-means algorithm, we found that our approach gives 99.72% results on average.
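For orientation, the baseline sketch below shows a generic TF-IDF plus SVM pipeline for separating biomedical from non-biomedical documents; it stands in for, but does not reproduce, the advanced SVM and clustering combination described above, and the toy corpus and labels are invented.

```python
# Generic sketch of SVM-based biomedical document filtering with TF-IDF features.
# Baseline pipeline only; not the paper's "advanced SVM" or its clustering step.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy labelled corpus: 1 = relevant biomedical document, 0 = irrelevant.
docs = [
    "insulin therapy for type 2 diabetes patients",
    "gene expression in breast cancer tissue samples",
    "top ten travel destinations for summer holidays",
    "stock market trends and investment tips",
]
labels = [1, 1, 0, 0]

classifier = make_pipeline(TfidfVectorizer(stop_words="english"), LinearSVC())
classifier.fit(docs, labels)

print(classifier.predict(["clinical trial of a new cancer drug"]))  # [1]
```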


Nowadays, the internet has become the easiest way to obtain information, and millions of users search it to find what they need. The continuous growth of web pages and users' interest in searching for information on various topics increase the complexity of recommendation. Users' behavior is extracted using web mining techniques applied to web server logs. The main aim of this research is to identify users' navigation patterns from log files. The web mining process has three major steps: data pre-processing, pattern classification, and user discovery. In recent work, researchers classify web page articles before recommending the requested page to users. However, each category is too large, or manual labor is often needed for the classification task. Some existing clustering methods face high time-complexity issues, or their iterative computation, depending on the initial parameters, leads to insufficient results. To address these issues, a web page recommendation approach is developed that initializes the margin parameters of the classification technique while considering both effectiveness and efficiency. This work initializes the Random Forest's (RF) margin parameters using the Firefly Algorithm (FFA) to reduce processing time and speed up the process. A large volume of user-interest data is processed with these margin parameters, which yields better recommendations than existing techniques. The experimental results show that the RF-FFA method achieved 41.89% accuracy and recall values when compared with other heuristic algorithms.
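A toy illustration of the tuning idea is sketched below: candidate Random Forest parameter settings act as "fireflies" that move toward the brightest (best cross-validated) candidate. The parameter ranges, synthetic data, and movement rule are simplified assumptions, not the paper's FFA formulation or its margin parameters.

```python
# Simplified firefly-style tuning of Random Forest hyperparameters.
# Toy version on synthetic data; not the paper's RF-FFA method.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

def brightness(params):
    """Fitness of a candidate: cross-validated accuracy of the resulting RF."""
    n_estimators, max_depth = int(params[0]), int(params[1])
    model = RandomForestClassifier(n_estimators=n_estimators,
                                   max_depth=max_depth, random_state=0)
    return cross_val_score(model, X, y, cv=3).mean()

rng = np.random.default_rng(0)
# Each "firefly" is a candidate (n_estimators, max_depth) pair.
fireflies = rng.uniform([10, 2], [200, 20], size=(5, 2))

for _ in range(10):                       # a few firefly iterations
    scores = [brightness(f) for f in fireflies]
    best = fireflies[int(np.argmax(scores))].copy()
    # Move every firefly a step toward the brightest one, plus random jitter.
    fireflies += 0.5 * (best - fireflies) + rng.normal(0, 1, fireflies.shape)
    fireflies = np.clip(fireflies, [10, 2], [200, 20])

print("best parameters:", best.astype(int), "accuracy:", max(scores))
```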


2003 ◽  
Vol 25 (2) ◽  
pp. 165-169
Author(s):  
Paul R. J. Duffy ◽  
Olivia Lelong

Summary An archaeological excavation was carried out at Graham Street, Leith, Edinburgh by Glasgow University Archaeological Research Division (GUARD) as part of the Historic Scotland Human Remains Call-off Contract following the discovery of human remains during machine excavation of a foundation trench for a new housing development. Excavation demonstrated that the burial was that of a young adult male who had been interred in a supine position with his head orientated towards the north. Radiocarbon dates obtained from a right tibia suggest the individual died between the 15th and 17th centuries AD. Little contextual information exists in documentary or cartographic sources to supplement this scant physical evidence. Accordingly, it is difficult to further refine the context of burial, although a possible link with a historically attested siege or a plague cannot be discounted.


2021 ◽  
Vol 5 (EICS) ◽  
pp. 1-23
Author(s):  
Markku Laine ◽  
Yu Zhang ◽  
Simo Santala ◽  
Jussi P. P. Jokinen ◽  
Antti Oulasvirta

Over the past decade, responsive web design (RWD) has become the de facto standard for adapting web pages to the wide range of devices used for browsing. While RWD has improved the usability of web pages, it is not without drawbacks and limitations: designers and developers must manually design web layouts for multiple screen sizes and implement the associated adaptation rules, and its "one responsive design fits all" approach lacks support for personalization. This paper presents a novel approach for the automated generation of responsive and personalized web layouts. Given an existing web page design and preferences related to design objectives, our integer-programming-based optimizer generates a consistent set of web designs. Where relevant data is available, these can be further automatically personalized for the user and browsing device. The paper also presents techniques for runtime adaptation of the generated designs into a fully responsive grid layout for web browsing. Results from our ratings-based online studies with end users (N = 86) and designers (N = 64) show that the proposed approach can automatically create high-quality responsive web layouts for a variety of real-world websites.
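As a toy stand-in for such an optimizer, the sketch below enumerates the same kind of 0/1 "element goes in column" assignments an integer program would encode, for a two-column narrow-screen grid, and picks the assignment that best balances column heights. The element names, heights, and objective are illustrative; the paper's optimizer encodes richer design objectives and personalization terms and uses a proper solver rather than enumeration.

```python
# Brute-force stand-in for an integer-programming layout optimizer:
# assign each page element to one of two columns, minimizing height imbalance.
from itertools import product

ELEMENT_HEIGHTS = {"header": 1, "nav": 2, "article": 6, "sidebar": 3, "footer": 1}

def balanced_two_column_layout(heights):
    """Assign each element to column 0 or 1, minimizing the height difference."""
    elements = list(heights)
    best_assignment, best_cost = None, float("inf")
    for assignment in product((0, 1), repeat=len(elements)):
        col_height = [0, 0]
        for element, column in zip(elements, assignment):
            col_height[column] += heights[element]
        cost = abs(col_height[0] - col_height[1])
        if cost < best_cost:
            best_assignment, best_cost = dict(zip(elements, assignment)), cost
    return best_assignment, best_cost

layout, imbalance = balanced_two_column_layout(ELEMENT_HEIGHTS)
print(layout, "height imbalance:", imbalance)
```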


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Daniel Duncan

Abstract Advances in sociophonetic research have resulted in features once sorted into discrete bins now being measured continuously. This has implied a shift in what sociolinguists view as the abstract representation of the sociolinguistic variable. When measured discretely, variation is variation in selection: one variant is selected for production, and factors influencing language variation and change influence the frequency at which variants are selected. Measured continuously, variation is variation in execution: speakers have a single target for production, which they approximate with varying success. This paper suggests that both approaches can and should be considered in sociophonetic analysis. To that end, I offer the use of hidden Markov models (HMMs) as a novel approach to finding speakers' multiple targets within continuous data. Using the LOT vowel among whites in Greater St. Louis as a case study, I compare 2-state and 1-state HMMs constructed at the individual speaker level. Ten of fifty-two speakers' production is shown to involve the regular use of distinct fronted and backed variants of the vowel. This finding illustrates HMMs' capacity to allow us to consider variation as both variant selection and execution, making them a useful tool in the analysis of sociophonetic data.
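A hedged sketch of this kind of comparison is given below: fit 1-state and 2-state Gaussian HMMs to a single speaker's continuous vowel measurements (synthetic F2 values here) and compare them with an approximate BIC. It assumes the hmmlearn library and a diagonal-covariance parameterization; the paper's own model specification and model-comparison procedure may differ.

```python
# Fit 1-state vs 2-state Gaussian HMMs to one speaker's vowel measurements and
# compare them with an approximate BIC. Synthetic data; illustrative only.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(1)
# Synthetic speaker: tokens drawn from a fronted and a backed production target.
tokens = np.concatenate([rng.normal(1400, 60, 40),    # fronted F2 values (Hz)
                         rng.normal(1100, 60, 40)])   # backed F2 values (Hz)
X = tokens.reshape(-1, 1)

def bic(model, X):
    """Rough BIC for a diagonal-covariance GaussianHMM (parameter count is approximate)."""
    k, d = model.n_components, X.shape[1]
    n_params = k * d * 2 + k * (k - 1) + (k - 1)      # means, vars, transitions, start
    return n_params * np.log(len(X)) - 2 * model.score(X)

for n_states in (1, 2):
    hmm = GaussianHMM(n_components=n_states, covariance_type="diag",
                      n_iter=100, random_state=0).fit(X)
    print(n_states, "state(s): BIC =", round(bic(hmm, X), 1))
# A clearly lower BIC for the 2-state model suggests two distinct variants.
```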


2019 ◽  
Vol 15 (4) ◽  
pp. 41-56 ◽  
Author(s):  
Ibukun Tolulope Afolabi ◽  
Opeyemi Samuel Makinde ◽  
Olufunke Oyejoke Oladipupo

Currently, for content-based recommendations, semantic analysis of text from webpages remains a major problem. In this research, we present a semantic web content mining approach for recommender systems in online shopping. The methodology has two major phases. The first phase is the semantic preprocessing of textual data using a combination of a developed ontology and an existing ontology. The second phase uses the Naïve Bayes algorithm to make the recommendations. The output of the system is evaluated using precision, recall, and F-measure. The results show that the semantic preprocessing improved the recommendation accuracy of the recommender system by 5.2% over the existing approach. The developed system also provides a platform for content-based recommendation in online shopping. This system has an edge over existing recommender approaches because it can analyze the textual content of users' feedback on a product in order to provide the necessary product recommendation.
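The sketch below illustrates the second phase only: a standard Naïve Bayes text classifier (scikit-learn) trained on already-preprocessed feedback snippets to suggest a product category. The ontology-based semantic preprocessing of the first phase is not reproduced, and the snippets and categories are invented for illustration.

```python
# Naïve Bayes over (already preprocessed) user feedback to suggest a category.
# Illustrative data only; the ontology-based preprocessing phase is omitted.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

feedback = [
    "great battery life and sharp camera on this phone",
    "the laptop keyboard feels sturdy and the screen is bright",
    "these running shoes are light and comfortable",
]
recommended_category = ["smartphones", "laptops", "sportswear"]

recommender = make_pipeline(CountVectorizer(), MultinomialNB())
recommender.fit(feedback, recommended_category)

print(recommender.predict(["the camera quality on my phone is amazing"]))
# ['smartphones']
```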


Vascular ◽  
2021 ◽  
pp. 170853812110489
Author(s):  
Nathan W Kugler ◽  
Brian D Lewis ◽  
Michael Malinowski

Objectives Axillary pullout syndrome is a complex, potentially fatal complication following axillary-femoral bypass graft creation. The re-operative nature, in addition to ongoing hemorrhage, makes for a complicated and potentially morbid repair. Methods We present the case of a 57-year-old man with a history of a previous left axillary-femoral-femoral bypass who presented with acute limb-threatening ischemia as a result of bypass thrombosis, managed with a right axillary-femoral bypass for limb salvage. His postoperative course was complicated by an axillary anastomotic dehiscence while he was recovering in inpatient rehabilitation, resulting in acute, life-threatening hemorrhage. He was managed using a novel hybrid approach in which a retrograde stent graft was initially placed across the anastomotic dehiscence for control of hemorrhage. He then underwent exploration, decompression, and interposition graft repair, utilizing the newly placed stent graft to reinforce the redo axillary anastomosis. Results and Conclusion Compared with a traditional operative approach, the hybrid endovascular and open approach limited ongoing hemorrhage while providing a more stable platform for repair and graft revascularization. A hybrid approach to the management of axillary pullout syndrome provides a safe, effective means of managing axillary anastomotic dehiscence while minimizing the morbidity of ongoing hemorrhage.

