A novel approach for Web page modeling in personal information extraction

2018 ◽ Vol 22 (2) ◽ pp. 603-620
Author(s): Wei Yuliang ◽ Zhou Qi ◽ Lv Fang ◽ Han Xixian ◽ Xin Guodong ◽ ...
Author(s): J. Alamelu Mangai ◽ V. Santhosh Kumar ◽ S. Appavu Balamurugan

2018 ◽ Vol 11 (2) ◽ pp. 49-57
Author(s): Adrian Cristian MOISE

Starting from the provisions of Article 2 of the Council of Europe Convention on Cybercrime and of Article 3 of Directive 2013/40/EU on attacks against information systems, the present study analyses how these provisions have been transposed into Article 360 of the Romanian Criminal Code. Illegal access to a computer system is a criminal offence that targets the patrimony of individuals or legal entities. Such access is typically accomplished with the help of social engineering techniques, the best known of which is phishing. A phishing attack usually leads the recipient to a Web page designed to imitate the visual identity of a target organization and to gather personal information about the user, without the victim being aware of the attack.


2011 ◽ Vol 2 (4) ◽ pp. 149-161
Author(s): Geeta ◽ Omkar Mamillapalli ◽ Shasikumar G. Totad ◽ Prasad Reddy
Keyword(s): Web Page

2014 ◽ Vol 519-520 ◽ pp. 318-321
Author(s): Ning Lv ◽ Jing Li Zhou ◽ Lei Hua Qin

The precise context of user tasks helps to improve personal information management on the desktop. This paper introduces a novel approach to identifying user tasks from contextual information, which is divided into two categories: user-behavior-based context and text-based context. Using this contextual information, user tasks are identified with a support vector machine (SVM), as sketched below. Experimental results show the impact of different file attributes on the performance of user task identification.
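
A minimal sketch of the kind of pipeline this abstract describes, assuming Python with scikit-learn; the feature names (dwell_time, switch_count, window_title), the toy data and the linear SVM are illustrative assumptions, not the paper's actual setup.

    # Hypothetical sketch: classify desktop activity records into user tasks with an SVM,
    # combining user-behavior context (numeric features) and text context (TF-IDF of window titles).
    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Toy records; real context would come from desktop activity logging.
    records = pd.DataFrame({
        "dwell_time":   [120, 15, 300, 45],    # seconds spent in the file or window
        "switch_count": [3, 12, 1, 8],         # window switches while it was open
        "window_title": ["budget_2014.xls", "trip photos", "budget notes.doc", "holiday itinerary"],
        "task":         ["finance", "travel", "finance", "travel"],
    })

    # Two context categories: scaled behavior features plus TF-IDF text features.
    features = ColumnTransformer([
        ("behavior", StandardScaler(), ["dwell_time", "switch_count"]),
        ("text", TfidfVectorizer(), "window_title"),
    ])

    model = Pipeline([("features", features), ("svm", SVC(kernel="linear"))])
    model.fit(records.drop(columns="task"), records["task"])

    print(model.predict(pd.DataFrame({
        "dwell_time": [200],
        "switch_count": [2],
        "window_title": ["quarterly budget draft"],
    })))

The ColumnTransformer mirrors the two context categories from the abstract: the behavior-based and text-based features are concatenated into one vector before the SVM classifies the task.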


2012 ◽ Vol 12 (2) ◽ pp. 34-50
Author(s): Jagadish S. Kallimani ◽ K. G. Srinivasa ◽ B. Eswara Reddy

Abstract: The method of filtering information from large volumes of text is called Information Extraction. It is a more limited task than understanding the full text: in full-text understanding, all of the information in a given text is represented explicitly, whereas in Information Extraction the semantic range of the result is delimited in advance as part of the task specification. Only the extractive summarization method is considered and developed in this study. The article proposes a model for summarizing large documents in one of the South Indian regional languages (Kannada), using a novel approach; it deals with single-document summarization based on a statistical approach, as sketched below. The purpose of a summary is to facilitate the quick and accurate identification of the topic of the published document, saving prospective readers time and effort in finding the useful information in a large article. The results are also analysed by comparison with English.
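
A minimal sketch of single-document extractive summarization by statistical sentence scoring, assuming Python; the term-frequency scoring and the naive sentence/word splitting are illustrative assumptions, and a real Kannada system would need language-specific sentence segmentation, tokenization and stop-word handling.

    # Minimal sketch of statistical extractive summarization: score each sentence by the
    # average document frequency of its terms and keep the top-ranked sentences in document order.
    import re
    from collections import Counter

    def summarize(text, n_sentences=2):
        sentences = [s.strip() for s in re.split(r"[.!?]\s*", text) if s.strip()]
        words = re.findall(r"\w+", text.lower())
        freq = Counter(words)                      # document-level term frequencies

        def score(sentence):
            tokens = re.findall(r"\w+", sentence.lower())
            return sum(freq[t] for t in tokens) / max(len(tokens), 1)

        ranked = sorted(range(len(sentences)), key=lambda i: score(sentences[i]), reverse=True)
        keep = sorted(ranked[:n_sentences])        # restore original document order
        return ". ".join(sentences[i] for i in keep) + "."

    print(summarize(
        "Kannada is a Dravidian language spoken mainly in Karnataka. "
        "Newspapers publish long Kannada articles every day. "
        "A short summary helps readers identify the topic of an article quickly. "
        "Extractive summarization selects the most informative sentences from the original text."
    ))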


2014 ◽ Vol 539 ◽ pp. 464-468
Author(s): Zhi Min Wang

The paper introduces page segmentation into the preprocessing of web pages. A page segmentation technique first locates the region that contains the target information; that region is then processed according to ontology-based extraction rules to obtain the required information, as sketched below. Experiments on two real datasets and comparisons with related work show that the method achieves good extraction results.
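
A minimal sketch of the segment-then-extract idea, assuming Python with BeautifulSoup; the per-<div> segmentation, the sample page and the RULES mapping of properties to regular expressions are illustrative assumptions, not the paper's actual segmentation algorithm or ontology rules.

    # Hypothetical sketch of segment-then-extract: split a page into candidate blocks,
    # pick the block that matches the most extraction rules, then apply per-property rules to it.
    import re
    from bs4 import BeautifulSoup   # third-party: pip install beautifulsoup4

    HTML = """
    <html><body>
      <div id="nav">Home | Products | Contact</div>
      <div id="profile">Name: Jane Doe<br>Email: jane@example.org<br>Phone: 555-0100</div>
      <div id="footer">Copyright 2014</div>
    </body></html>
    """

    # Ontology-style rules: property name -> regular expression applied inside the chosen region.
    RULES = {
        "name":  re.compile(r"Name:\s*([^\n]+)"),
        "email": re.compile(r"Email:\s*([\w.+-]+@[\w.-]+)"),
        "phone": re.compile(r"Phone:\s*([\d-]+)"),
    }

    def extract(html):
        soup = BeautifulSoup(html, "html.parser")
        blocks = soup.find_all("div")              # crude segmentation: one block per <div>

        def relevance(block):                      # the block whose text matches most rules wins
            text = block.get_text("\n")
            return sum(bool(rule.search(text)) for rule in RULES.values())

        region_text = max(blocks, key=relevance).get_text("\n")
        result = {}
        for prop, rule in RULES.items():
            match = rule.search(region_text)
            result[prop] = match.group(1).strip() if match else None
        return result

    print(extract(HTML))   # {'name': 'Jane Doe', 'email': 'jane@example.org', 'phone': '555-0100'}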

