A Low-Cost Library Database Solution

2017 ◽  
Vol 19 (1) ◽  
pp. 46-49 ◽  
Author(s):  
Mark England ◽  
Lura Joseph ◽  
Nem W. Schlect

Two locally created databases are made available to the world via the Web using an inexpensive but highly functional search engine created in-house. The technology consists of a microcomputer running UNIX that serves the relational databases. CGI forms written in the Perl programming language offer flexible interface designs for both database users and database maintainers.
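As an illustration of the CGI search pattern described above, here is a minimal sketch in Python (the original system used Perl); the database file, table, and column names are hypothetical stand-ins, not the authors' schema.

```python
#!/usr/bin/env python3
# Minimal CGI search handler sketch. Hypothetical database "library.db"
# with a table records(title, author); run under any CGI-capable server.
import os
import sqlite3
from html import escape
from urllib.parse import parse_qs

# Read the search term from the query string (e.g. ?q=history).
params = parse_qs(os.environ.get("QUERY_STRING", ""))
term = params.get("q", [""])[0]

# A CGI response starts with a header block followed by a blank line.
print("Content-Type: text/html")
print()

conn = sqlite3.connect("library.db")  # hypothetical local database file
rows = conn.execute(
    "SELECT title, author FROM records WHERE title LIKE ?",
    (f"%{term}%",),  # parameterized query; avoids SQL injection
).fetchall()

print("<html><body><ul>")
for title, author in rows:
    print(f"<li>{escape(title)} ({escape(author)})</li>")
print("</ul></body></html>")
conn.close()
```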

2019 ◽  
Vol 12 (2) ◽  
pp. 110-119 ◽  
Author(s):  
Jayaraman Sethuraman ◽  
Jafar A. Alzubi ◽  
Ramachandran Manikandan ◽  
Mehdi Gheisari ◽  
Ambeshwar Kumar

Background: The World Wide Web houses an abundance of information that is used every day by billions of users across the world to find relevant data. Website owners employ webmasters to ensure their pages rank at the top of search engine result pages. However, understanding how a search engine ranks a website, which comprises numerous web pages, among the top ten or twenty results is a major challenge. Although systems have been developed to understand the ranking process, a specialized tool-based approach has not been tried. Objective: This paper develops a new framework and system that process website contents to determine search engine optimization factors. Methods: To analyze web pages dynamically by assessing site content against specific keywords, an elimination method was used in an attempt to reveal various search engine optimization techniques. Conclusion: Our results lead us to conclude that the developed system is able to perform a deeper analysis and identify the factors that play a role in bringing a site to the top of the list.
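The paper's actual tool is not reproduced here; purely as a sketch of the kind of on-page analysis such a system might perform, the following Python snippet checks whether a keyword appears in a page's title and h1 headings and estimates its density in the page text. The URL and keyword are placeholders.

```python
# Sketch (not the authors' system) of simple on-page SEO factor checks.
from html.parser import HTMLParser
from urllib.request import urlopen

class PageTextParser(HTMLParser):
    """Collects title text, h1 text, and all visible text."""

    def __init__(self):
        super().__init__()
        self.title = ""
        self.h1 = ""
        self.text = ""
        self._open = []  # stack of currently open tags

    def handle_starttag(self, tag, attrs):
        self._open.append(tag)

    def handle_endtag(self, tag):
        for i in range(len(self._open) - 1, -1, -1):
            if self._open[i] == tag:
                del self._open[i]
                break

    def handle_data(self, data):
        if "title" in self._open:
            self.title += data
        if "h1" in self._open:
            self.h1 += data
        self.text += data  # rough approximation of body text

def seo_factors(url: str, keyword: str) -> dict:
    html = urlopen(url).read().decode("utf-8", errors="replace")
    parser = PageTextParser()
    parser.feed(html)
    kw = keyword.lower()
    words = parser.text.lower().split()
    return {
        "keyword_in_title": kw in parser.title.lower(),
        "keyword_in_h1": kw in parser.h1.lower(),
        "keyword_density": words.count(kw) / max(1, len(words)),
    }

if __name__ == "__main__":
    print(seo_factors("https://example.com/", "example"))  # placeholders
```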


2017 ◽  
Vol 8 (1) ◽  
pp. 1-22 ◽  
Author(s):  
Sahar Maâlej Dammak ◽  
Anis Jedidi ◽  
Rafik Bouaziz

With the great mass of pages managed throughout the world, and especially with the advent of the Web, it has become more difficult to find relevant pages in response to a query. Furthermore, manually filtering the indexed Web pages is a laborious task. A new method for filtering both annotated Web pages (produced by our semantic annotation process) and non-annotated Web pages (retrieved from the search engine Google) is therefore necessary to group the relevant Web pages for the user. In this paper, the authors first synthesize their previous work on the semantic annotation of Web pages. Then, they define a new filtering method based on three activities. The authors also present their component for querying and filtering Web pages; its purpose is to demonstrate the feasibility of the filtering method. Finally, the authors present an evaluation of this component, which has demonstrated its performance across multiple domains.
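The abstract does not detail the three filtering activities, so the following is only a toy Python illustration of the general idea: annotated pages are matched against their semantic annotations, while non-annotated pages fall back to raw text matching. All page data and names here are invented.

```python
# Toy illustration (not the authors' method) of filtering a mixed set of
# annotated and non-annotated pages against a user's query terms.
from dataclasses import dataclass, field

@dataclass
class Page:
    url: str
    text: str
    annotations: set[str] = field(default_factory=set)  # empty if unannotated

def filter_pages(pages: list[Page], query_terms: set[str]) -> list[Page]:
    relevant = []
    for page in pages:
        if page.annotations:
            # Annotated page: match on its semantic annotations.
            match = bool(page.annotations & query_terms)
        else:
            # Non-annotated page: fall back to raw text matching.
            match = any(t in page.text.lower() for t in query_terms)
        if match:
            relevant.append(page)
    return relevant

pages = [
    Page("a.html", "heart surgery overview", {"medicine", "surgery"}),
    Page("b.html", "travel tips for summer"),
]
print([p.url for p in filter_pages(pages, {"surgery"})])  # ['a.html']
```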


2018 ◽  
pp. 742-748
Author(s):  
Viveka Vardhan Jumpala

The Internet, an information superhighway, has practically compressed the world into a cyber colony through various interconnected networks. The development of the Internet and the emergence of the World Wide Web (WWW) have provided a common vehicle for communication and instantaneous access to search engines and databases. A search engine is designed to facilitate searching for information on the WWW. Search engines are essentially tools that help users find required information on the Web quickly and in an organized manner. Different search engines do the same job in different ways, thus giving different results for the same query. Search strategies are the new trend on the Web.


Author(s):  
Lorna Uden ◽  
Kimmo Salmenjoki

The word portal comes from the Latin word porta, which translates to gate. Anything that acts as a gateway to anything else is a portal. A portal server acts as a gateway to the enterprise in a network. However, there are many different definitions of the word portal. A search for the word using the Google search engine yields many thousands of references. Some consider a portal to be a new name for a Web site. A portal is an entry point to the World Wide Web (WWW) and therefore does more than a Web site. According to Internet 101, a portal is a Web site linking to other Web sites. Sometimes search engines have been referred to as portals. Access companies, such as Microsoft Network (MSN) and America On-Line (AOL), have often been referred to as portals. Although the definition of the word portal is still evolving, the definition we will use is a gateway: a Web portal can thus be seen as a gateway to the information and services on the Web, more specifically to services on both the public Internet and on corporate intranets. This article takes a historical approach based on the development of the Web and examines the factors that have contributed to the evolution of portals. Portals originated from the need for information organisation: users need to be provided with coherent and understandable information.


Author(s):  
Rohit S ◽  
M N Nachappa

Metadata is defined as information that provides data about one or more facets of other data. It is used to summarize basic information about data, which can make finding and working with specific data easier. The idea of metadata is often extended to include words or phrases that stand for objects in the world, leading to the notion of entity extraction. In this paper, we propose extracting the metadata of the files a user inputs to the system; this can be achieved using Flask as the web platform and the Python programming language. Our goal is to build a free and lightweight metadata extractor that is efficient and user friendly.
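A minimal sketch of a Flask-based extractor along the lines described, assuming a single upload endpoint; the route, the form field name, and the choice to report only size and guessed MIME type are assumptions rather than the paper's design.

```python
# Sketch of a lightweight Flask metadata extractor. The /extract route
# and the "file" form field are assumed names, not the paper's API.
import mimetypes

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/extract", methods=["POST"])
def extract():
    uploaded = request.files["file"]  # multipart form field named "file"
    data = uploaded.read()
    mime, encoding = mimetypes.guess_type(uploaded.filename or "")
    return jsonify({
        "filename": uploaded.filename,
        "size_bytes": len(data),
        "guessed_mime_type": mime,
        "encoding": encoding,
    })

if __name__ == "__main__":
    app.run(debug=True)  # development server only
```

A client could exercise it with, for example, `curl -F "file=@report.pdf" http://localhost:5000/extract`.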


Author(s):  
Dika Saputri

The internet is an effective medium in the world of business (especially in the field of marketing) from the viewpoint of businesses seeking to market the products they produce. Various models of product offerings are conceptualized by business people to capture market segments. Pay per Click (PPC) is one of several programs on the internet built on the concept of rewarding internet users for opening advertisements submitted by advertising companies through certain sites. One dollar-earning program on the internet is Google Adsense. Google Adsense is a dollar-producing affiliate program issued by the Google search engine company in collaboration with web or blog owners for advertising purposes. With this kind of affiliate business model, publishers (web or blog owners) earn dollars from the advertisements displayed on their web sites or blogs. The ads displayed can be text or images. There are many names for the revenue generated from Google Adsense. To find out whether or not the Muslim community is permitted to take part in an advertising business such as Pay Per Click (PPC), a study is needed that examines this business from the perspective of Islamic law.


2020 ◽  
Vol 2 (2) ◽  
pp. 15-24
Author(s):  
Difia Agustin ◽  
Alexius Ulan Bani ◽  
Fauziyah

Technology and science have evolved in the world of work, and in this era of globalization people are required to participate in these developments. Work that was initially managed manually, by recording orders in a ledger, can now be managed using modern technology. The results of this study show that order recording at CV. Boganesia Jaya has not yet been computerized and still relies on manual methods. Accordingly, some suggestions can be given to the company for further development: an information system designed as a web-based application using the PHP programming language and MySQL, to solve this problem at CV. Boganesia Jaya more effectively and accurately.


Author(s):  
Vijay Kasi ◽  
Radhika Jain

In the context of the Internet, a search engine can be defined as a software program designed to help one access information, documents, and other content on the World Wide Web. The adoption and growth of the Internet in the last decade has been unprecedented. The World Wide Web has always been applauded for its simplicity and ease of use. This is evident in how little knowledge one requires to build a Web page. The flexible nature of the Internet has enabled its rapid growth and adoption, but it has also made it hard to search for relevant information on the Web. The number of Web pages has been increasing at an astronomical pace, from around 2 million registered domains in 1995 to 233 million registered domains in 2004 (Consortium, 2004). The Internet, considered a distributed database of information, has the CRUD (create, retrieve, update, and delete) rule applied to it. While the Internet has been effective at creating, updating, and deleting content, it has considerably lacked in enabling the retrieval of relevant information. After all, there is no point in having a Web page that has little or no visibility on the Web. Since the 1990s, when the first search program was released, we have come a long way in terms of searching for information. Although we are currently witnessing tremendous growth in search engine technology, the growth of the Internet has overtaken it, leading to a state in which existing search engine technology falls short. When we apply the metrics of relevance, rigor, efficiency, and effectiveness to the search domain, it becomes clear that we have progressed on the rigor and efficiency metrics by utilizing abundant computing power to produce faster searches over a lot of information. Rigor and efficiency are evident in the large number of pages indexed by the leading search engines (Barroso, Dean, & Holzle, 2003). However, more research needs to be done to address the relevance and effectiveness metrics. Users typically type in two to three keywords when searching, only to end up with a search result having thousands of Web pages! This has made it increasingly hard to effectively find any useful, relevant information. Search engines face a number of challenges today requiring them to perform rigorous searches with relevant results efficiently so that they are effective. These challenges include the following (“Search Engines,” 2004):

1. The Web is growing at a much faster rate than any present search engine technology can index.
2. Web pages are updated frequently, forcing search engines to revisit them periodically.
3. Dynamically generated Web sites may be slow or difficult to index, or may result in excessive results from a single Web site.
4. Many dynamically generated Web sites cannot be indexed by search engines.
5. The commercial interests of a search engine can interfere with the order of relevant results it shows.
6. Content that is behind a firewall or that is password protected is not accessible to search engines (such as content in several digital libraries).
7. Some Web sites have started using tricks such as spamdexing and cloaking to manipulate search engines into displaying them as the top results for a set of keywords. This can pollute the search results, with more relevant links being pushed down the result list. It is a consequence of the popularity of Web searches and the business potential search engines can generate today.
8. Search engines index all the content of the Web without any bounds on the sensitivity of information. This has raised a few security and privacy flags.

With the above background and challenges in mind, we lay out the article as follows. In the next section, we begin with a discussion of search engine evolution. To facilitate the examination and discussion of the progress of search engine development, we break this discussion down into three generations of search engines. Figure 1 depicts this evolution pictorially and highlights the need for better search engine technologies. Next, we present a brief discussion of the contemporary state of search engine technology and the various types of content searches available today. With this background, the following section documents various concerns about existing search engines, setting the stage for better search engine technology. These concerns include information overload, relevance, representation, and categorization. Finally, we briefly address the research efforts under way to alleviate these concerns and then present our conclusion.
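To make the relevance concern concrete, here is a compact TF-IDF scoring sketch, one classical way to rank documents against a two-or-three-keyword query (illustrative only, not a method attributed to this article); the toy corpus is invented.

```python
# TF-IDF ranking sketch: score each document against a short keyword
# query and print the documents in descending order of relevance.
import math
from collections import Counter

docs = {
    "d1": "search engine technology and web search relevance",
    "d2": "growth of the internet and registered domains",
    "d3": "search engines index web pages for relevant results",
}

tokenized = {d: text.split() for d, text in docs.items()}

def idf(term: str) -> float:
    # Smoothed inverse document frequency: rarer terms weigh more.
    n_containing = sum(term in toks for toks in tokenized.values())
    return math.log(len(docs) / (1 + n_containing)) + 1.0

def score(doc: str, query: list[str]) -> float:
    # Term frequency within the document, weighted by IDF.
    counts = Counter(tokenized[doc])
    return sum((counts[t] / len(tokenized[doc])) * idf(t) for t in query)

query = ["web", "search"]  # typical two-keyword query
for doc in sorted(docs, key=lambda d: score(d, query), reverse=True):
    print(doc, round(score(doc, query), 3))
```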

