Application of VM-Based Computations to Speed Up the Web Crawling Process on Multi-core Processors

Author(s):  
Hussein Al-Bahadili ◽  
Hamzah Qtishat
Keyword(s):  
Speed Up ◽  
Physics World ◽  
1999 ◽  
Vol 12 (11) ◽  
pp. 20-20
Author(s):  
Eli Yablonovitch
Keyword(s):  
Speed Up

2021 ◽  
Vol 20 ◽  
pp. 225-229
Author(s):  
Andrii Berkovskyy ◽  
Kostiantyn Voskoboinik ◽  
Marcin Badurowicz

This article compares the compilation speed of SCSS and LESS preprocessor code. Each preprocessor has its own syntax, which is transpiled into the CSS stylesheet language during further development of a web page. These technologies serve the same purpose, namely to simplify and speed up the writing of page views, but are based on different programming languages: Ruby (SCSS) and JavaScript (LESS). For the purposes of the research, a test application was created and a series of tests was carried out on various sets of code; the collected results made it possible to identify the preprocessor with the faster compilation speed, which turned out to be LESS.


2019 ◽  
Vol 8 (2) ◽  
pp. 56-64
Author(s):  
Liza Safitri ◽  
Erin Bevidianka

SMK Negeri 3 Tanjungpinang is a vocational high school located in Kampung Bulang, Tanjungpinang. The school faces problems in managing its library so that it can serve lending to users more quickly and accurately, including providing users with direct access to library materials. At the moment, however, the library still does not have a circulation system that can provide such access. The web-based library system designed here is expected to make it easier for library members at SMK Negeri 3 Tanjungpinang to find digital books about the materials owned by the library, without limits of time and place. The web-based library circulation system, designed according to the needs and conditions of the library, is also expected to help and speed up officers in handling circulation services and producing reports. The system is implemented using the PHP programming language with a MySQL database server. From the results of implementing and testing the system, it was concluded that the web-based library information system answers the problems encountered, helping the library serve its users quickly and accurately.


Author(s):  
Deepak Mayal

The World Wide Web (WWW), also referred to as the web, acts as a vital source of information, and searching the web has become easy nowadays thanks to search engines such as Google and Yahoo. A search engine is basically a complex multi-program system that allows users to search for information available on the web, and for that purpose it uses web crawlers. A web crawler systematically browses the World Wide Web. Effective search helps avoid downloading and visiting irrelevant web pages; to that end, web crawlers use different search algorithms. This paper reviews the different web crawling algorithms that determine the fate of a search system.
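The "systematic browsing" the abstract describes is typically a breadth-first traversal of the link graph. As a minimal sketch (the page names and link graph below are invented for illustration, not taken from the paper):

```python
from collections import deque

# A toy link graph standing in for the web; all page names are
# hypothetical examples.
LINKS = {
    "home":  ["about", "blog", "shop"],
    "about": ["team"],
    "blog":  ["post1", "post2"],
    "shop":  ["cart"],
    "team":  [], "post1": [], "post2": [], "cart": [],
}

def bfs_crawl(seed, max_pages=10):
    """Systematically browse the graph breadth-first, as a basic crawler does."""
    visited, frontier, seen = [], deque([seed]), {seed}
    while frontier and len(visited) < max_pages:
        page = frontier.popleft()
        visited.append(page)          # stand-in for downloading the page
        for link in LINKS.get(page, []):
            if link not in seen:      # avoid revisiting pages
                seen.add(link)
                frontier.append(link)
    return visited

print(bfs_crawl("home"))
# pages are visited level by level, nearest to the seed first
```

Swapping the FIFO queue for a priority queue keyed on a relevance score turns the same skeleton into the best-first strategies that the reviewed algorithms refine.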


Distributed peer-to-peer (P2P) systems build loosely coupled application-level overlays on top of the Internet to enable cost-effective sharing of resources. They can be broadly classified as either structured or unstructured. Because they impose no constraints on topology, unstructured P2P networks can be built very quickly and are therefore considered well suited to the Internet environment. However, the random search strategies adopted by these networks often perform poorly as the network size grows. In this paper, we seek to improve search performance in unstructured P2P networks by exploiting users' common interest patterns, captured within a probability-theoretic framework named the user interest model (UIM). A search protocol and a routing-table update protocol are further designed to speed up the search process by self-organizing the P2P network into a small world. Both theoretical and experimental analyses were conducted and demonstrate the effectiveness and robustness of our approach.
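The core idea, forwarding a query toward neighbors with similar interests instead of a random neighbor, can be sketched in a few lines. This is a simplified illustration, not the paper's UIM: the peer names and interest lists are invented, and a plain overlap count stands in for the probability-theoretic model.

```python
# Hypothetical sketch of interest-based query forwarding in an
# unstructured P2P overlay.

def interest_overlap(a, b):
    """Score two interest lists by the number of shared topics."""
    return len(set(a) & set(b))

def route_query(query_topics, neighbors):
    """Forward the query to the neighbor whose interests best match it,
    instead of picking a neighbor at random."""
    return max(neighbors, key=lambda peer: interest_overlap(query_topics, neighbors[peer]))

neighbors = {
    "peer_a": ["music", "movies"],
    "peer_b": ["movies", "sports", "news"],
    "peer_c": ["cooking"],
}
print(route_query(["movies", "news"], neighbors))  # peer_b shares the most topics
```

Updating each routing table with the interests observed in passing queries is what gradually clusters like-minded peers into the "small world" the abstract mentions.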


2021 ◽  
Vol 4 (1) ◽  
pp. 26-42
Author(s):  
Khasan Asrori ◽  
Ely Nuryani

Along with the development of technology at PT. Barata Indonesia, support is needed to assist organizational activities such as booking meeting rooms. Currently, information on meeting-room availability and booking at PT. Barata Indonesia does not yet use technology, so a meeting room is booked by contacting the room admin to ask whether the desired room is available. This is less effective because the customer cannot directly see which rooms are available for a meeting and suit the number of attendees. Therefore, this application was built to make booking meeting rooms at the company easier. The development of this meeting-room reservation information system uses the SDLC Waterfall method, which aims to simplify and speed up access to information. The system is built with the CodeIgniter 3 framework and has two interfaces: the front end, the start page of the web application displayed to visitors, and the back end, the admin page used to manage the required information. The results of the system trial show that the Meeting Room Booking Information System can provide more flexible information.


2017 ◽  
Vol 1 (1) ◽  
pp. 1-12 ◽  
Author(s):  
Dani Gunawan ◽  
Amalia Amalia ◽  
Atras Najwan

Collecting or harvesting data from the Internet is often done using a web crawler. A general web crawler can be developed to focus on a certain topic; this type of crawler is called a focused crawler. To improve data-collection performance, creating a focused crawler alone is not enough, even though a focused crawler makes efficient use of network bandwidth and storage capacity. This research proposes a distributed focused crawler to improve web crawler performance while remaining efficient in network bandwidth and storage capacity. The distributed focused crawler implements crawl scheduling, site ordering to determine the URL queue, and focused crawling using Naïve Bayes. This research also tests web crawling performance under multithreading and observes CPU and memory utilization. The conclusion is that web crawling performance decreases when too many threads are used; as a consequence, CPU and memory utilization become very high while the performance of the distributed focused crawler drops.
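The Naïve Bayes step decides whether a fetched page is on-topic before its links join the URL queue. A minimal from-scratch sketch of such a relevance filter (the training snippets and labels below are invented examples, not the paper's dataset):

```python
import math
from collections import Counter

def train(docs):
    """docs: list of (word_list, label). Returns per-label word counts and priors."""
    counts, totals = {}, Counter()
    for words, label in docs:
        counts.setdefault(label, Counter()).update(words)
        totals[label] += 1
    return counts, totals

def classify(words, counts, totals):
    """Pick the label with the highest log posterior, using add-one smoothing."""
    vocab = {w for c in counts.values() for w in c}
    best_label, best_score = None, -math.inf
    for label, c in counts.items():
        n = sum(c.values())
        score = math.log(totals[label] / sum(totals.values()))  # log prior
        for w in words:
            score += math.log((c[w] + 1) / (n + len(vocab)))    # smoothed likelihood
        if score > best_score:
            best_label, best_score = label, score
    return best_label

docs = [
    (["football", "match", "goal"], "relevant"),
    (["league", "score", "match"], "relevant"),
    (["recipe", "oven", "flour"], "irrelevant"),
    (["stock", "market", "shares"], "irrelevant"),
]
counts, totals = train(docs)
print(classify(["match", "goal", "score"], counts, totals))  # relevant
```

In a distributed crawler each worker can apply this filter locally, so only links from pages classified as relevant are scheduled, which is what keeps bandwidth and storage usage efficient.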

