Usage of Web Service in Mobile Application for Parents and Students in Binus School Serpong

Author(s):  
Karto Iskandar ◽  
Andrew Thejo Putranto

A web service is a service offered electronically by one device to communicate with other electronic devices over the World Wide Web. The smartphone is an electronic device that almost everyone owns, and students and parents in particular use it to get information about their school. In the BINUS School Serpong mobile application, web services are used to retrieve data, such as student and menu data, from the web server. The problem BINUS School Serpong faces today is the time-consuming update process of its native application, while application updates are very frequent. To resolve this problem, the BINUS School Serpong mobile application will use web services. This article shows the usage of web services with XML for retrieving student data. The result of this study is that, by using web services, smartphones can retrieve data consistently across multiple platforms.
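The retrieval pattern the abstract describes can be sketched as follows; the XML schema and endpoint here are assumptions for illustration, not the school's actual service:

```python
import xml.etree.ElementTree as ET
from urllib.request import urlopen

# Hypothetical XML payload, shaped like a student feed such a service might return.
SAMPLE_RESPONSE = """\
<students>
  <student id="1"><name>Alice</name><grade>7</grade></student>
  <student id="2"><name>Bob</name><grade>8</grade></student>
</students>"""

def parse_students(xml_text):
    """Parse an XML student list into plain dictionaries (platform-neutral)."""
    root = ET.fromstring(xml_text)
    return [
        {"id": s.get("id"),
         "name": s.findtext("name"),
         "grade": s.findtext("grade")}
        for s in root.findall("student")
    ]

def fetch_students(url):
    """Fetch and parse the student feed; the URL is an assumed endpoint."""
    with urlopen(url) as resp:
        return parse_students(resp.read().decode("utf-8"))

if __name__ == "__main__":
    for student in parse_students(SAMPLE_RESPONSE):
        print(student["id"], student["name"], student["grade"])
```

Because the same XML is parsed into plain dictionaries regardless of client, the same feed can back Android, iOS, and web front ends, which is the cross-platform consistency the study reports.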

Author(s):  
Bill Karakostas ◽  
Yannis Zorgios

Chapter II presented the main concepts underlying business services. Ultimately, as this book proposes, business services need to be decomposed into networks of executable Web services. Web services are the primary software technology available today that closely matches the characteristics of business services. To understand the mapping from business to Web services, we need to understand the fundamental characteristics of the latter. This chapter therefore will introduce the main Web services concepts and standards. It does not intend to be a comprehensive description of all standards applicable to Web services, as many of them are still in a state of flux. It focuses instead on the more important and stable standards. All such standards are fully and precisely defined and maintained by the organizations that have defined and endorsed them, such as the World Wide Web Consortium (http://w3c.org), the OASIS organization (http://www.oasis-open.org) and others. We advise readers to periodically visit the Web sites describing the various standards to obtain the up-to-date versions.


Author(s):  
Barbara Carminati ◽  
Elena Ferrari ◽  
Patrick C.K. Hung

A Web service is a software system that supports interoperable application-to-application interactions over a network. Web services are based on a set of XML standards such as Universal Description, Discovery and Integration (UDDI), Web Services Description Language (WSDL), and Simple Object Access Protocol (SOAP). Recently, there have been increasing demands and discussions about Web services privacy technologies in industry and the research community. To enable privacy protection for Web service consumers across multiple domains and services, the World Wide Web Consortium (W3C) published a document called “Web Services Architecture (WSA) Requirements” that defines some fundamental privacy requirements for Web services. However, no comprehensive solutions to the various privacy issues have so far been defined. For these reasons, this chapter will focus on privacy technologies by first discussing the main privacy issues in WSA and related protocols. Then, the chapter illustrates the standardization efforts going on in the context of privacy for Web services and proposes different technical approaches to tackle the privacy issues.
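Of the XML standards named above, SOAP defines the message envelope that actually travels between applications. A minimal sketch of building a SOAP 1.1 request, with an invented operation name and service namespace (neither comes from any real WSDL):

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"  # SOAP 1.1 envelope namespace

def build_soap_request(operation, params, service_ns="http://example.org/privacy-demo"):
    """Build a minimal SOAP 1.1 request envelope.

    The operation name and service namespace are illustrative assumptions;
    a real client would take them from the service's WSDL description.
    """
    ET.register_namespace("soap", SOAP_NS)
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{service_ns}}}{operation}")
    for name, value in params.items():
        ET.SubElement(op, f"{{{service_ns}}}{name}").text = str(value)
    return ET.tostring(envelope, encoding="unicode")

if __name__ == "__main__":
    print(build_soap_request("GetConsentStatus", {"consumerId": "42"}))
```

The Envelope/Body nesting is fixed by the SOAP specification; only the payload inside the Body varies per operation, which is what makes envelopes a natural place to attach cross-cutting concerns such as privacy headers.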


Author(s):  
Farhana H. Zulkernine ◽  
Pat Martin

The widespread use and expansion of the World Wide Web has revolutionized the discovery, access, and retrieval of information. The Internet has become the doorway to a vast information base and has leveraged the access to information through standard protocols and technologies like HyperText Markup Language (HTML), active server pages (ASP), Java server pages (JSP), Web databases, and Web services. Web services are software applications that are accessible over the World Wide Web through standard communication protocols. A Web service typically has a Web-accessible interface for its clients at the front end, and is connected to a database system and other related application suites at the back end. Thus, Web services can render efficient Web access to an information base in a secured and selective manner. The true success of this technology, however, largely depends on the efficient management of the various components forming the backbone of a Web service system. This chapter presents an overview and the state of the art of various management approaches, models, and architectures for Web services systems toward achieving quality of service (QoS) in Web data access. Finally, it discusses the importance of autonomic or self-managing systems and provides an outline of our current research on autonomic Web services.


Author(s):  
Kevin Curran ◽  
Padraig O’Kane

The term “Web services” was initially employed by Bill Gates, chairman of Microsoft, at the Microsoft Professional Developers Conference in Orlando, Florida on July 12, 2000. Fundamentally, the term refers to automated resources accessed via an Internet URL. However, a more comprehensive definition is that of the World Wide Web Consortium (W3C), which declares that Web services provide “a standard means of interoperating between different software applications, running on a variety of platforms and/or frameworks.” Accessed over an Internet connection, these software-powered resources or functional components are therefore regarded as an extension of the World Wide Web infrastructure. Web services represent the evolution of the Web from a human-oriented medium to a technology that is application driven. They attempt to replace human-centric searches for information with searches that are primarily application based (Staab, 2003).


2007 ◽  
pp. 268-290
Author(s):  
Farhana H. Zulkernine ◽  
Pat Martin



Author(s):  
Anthony D. Andre

This paper provides an overview of the various human factors and ergonomics (HF/E) resources on the World Wide Web (WWW). A list of the most popular and useful HF/E sites will be provided, along with several critical guidelines relevant to using the WWW. The reader will gain a clear understanding of how to find HF/E information on the Web and how to successfully use the Web for various HF/E professional consulting activities. Finally, we consider the ergonomic implications of surfing the Web.


2017 ◽  
Vol 4 (1) ◽  
pp. 95-110 ◽  
Author(s):  
Deepika Punj ◽  
Ashutosh Dixit

In order to manage the vast information available on the Web, the crawler plays a significant role. The working of a crawler should be optimized to get maximum and unique information from the World Wide Web. In this paper, an architecture for a migrating crawler is proposed which is based on URL ordering, URL scheduling, and a document-redundancy elimination mechanism. The proposed ordering technique is based on URL structure, which plays a crucial role in utilizing the web efficiently. Scheduling ensures that each URL goes to the optimum agent for downloading; to ensure this, the characteristics of both agents and URLs are taken into consideration. Duplicate documents are also removed to keep the database unique. To reduce matching time, documents are matched on the basis of their meta-information only. The agents of the proposed migrating crawler work more efficiently than a traditional single crawler by providing ordering and scheduling of URLs.
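The three mechanisms named in the abstract can be sketched with deliberately simple stand-in heuristics; the actual ordering and scheduling criteria of the proposed crawler are more elaborate, so everything below is an assumed approximation:

```python
import hashlib
import zlib
from urllib.parse import urlparse

def order_urls(urls):
    """Structural URL ordering: shallower paths first (an assumed heuristic
    standing in for the paper's URL-structure-based ordering)."""
    def depth(url):
        return len([seg for seg in urlparse(url).path.split("/") if seg])
    return sorted(urls, key=depth)

def schedule(urls, agents):
    """Assign each URL to an agent by host, so one agent keeps serving one
    host; a simple stand-in for matching agent and URL characteristics."""
    assignment = {agent: [] for agent in agents}
    for url in urls:
        host = urlparse(url).netloc
        agent = agents[zlib.crc32(host.encode()) % len(agents)]
        assignment[agent].append(url)
    return assignment

def dedup_by_meta(documents):
    """Drop documents whose meta-information digest was already seen,
    avoiding full-text comparison as the paper suggests."""
    seen, unique = set(), []
    for doc in documents:
        digest = hashlib.sha256(doc["meta"].encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique
```

Hashing only the meta-information keeps the duplicate check constant-time per document, which is where the claimed reduction in matching time comes from.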


Author(s):  
Dr. Manish L Jivtode

The broker architecture, involving a client and a server, became popular. Representational State Transfer (REST) is the architecture of the World Wide Web. REST uses the HTTP protocol, and web services based on Servlet and ASMX technology have been replaced by WCF web service technology. SOAP and REST are two kinds of WCF web services. REST is lightweight compared to SOAP and has hence emerged as the popular technology for building distributed applications in the cloud. In this paper, an experiment is conducted by exposing an HTTP endpoint address, an HTTP relay binding (webHttpRelayBinding), and a CRUD contract through an interface. The interface is decorated using the WebGet and WebInvoke attributes. A WCF configuration file is created using XML tags for use with the REST web service.
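The paper implements its CRUD contract in C# with WCF's WebGet and WebInvoke attributes; the following is only a language-neutral sketch of the same verb-to-operation mapping, with an invented in-memory store in place of a real service host:

```python
class RestResource:
    """In-memory resource store dispatching HTTP verbs to CRUD operations,
    mirroring the WebGet/WebInvoke mapping a WCF REST contract declares."""

    def __init__(self):
        self._items = {}
        self._next_id = 1

    def dispatch(self, verb, item_id=None, body=None):
        if verb == "POST":            # Create (WCF: WebInvoke Method="POST")
            new_id = self._next_id
            self._items[new_id] = body
            self._next_id += 1
            return new_id
        if verb == "GET":             # Read (WCF: WebGet)
            return self._items.get(item_id)
        if verb == "PUT":             # Update (WCF: WebInvoke Method="PUT")
            self._items[item_id] = body
            return item_id
        if verb == "DELETE":          # Delete (WCF: WebInvoke Method="DELETE")
            return self._items.pop(item_id, None)
        raise ValueError(f"unsupported verb: {verb}")
```

Because each operation is addressed by a URI plus a standard HTTP verb rather than a SOAP envelope, the wire format stays small, which is the lightweight quality the paper attributes to REST.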


2021 ◽  
Author(s):  
Michael Dick

Since it was first formally proposed in 1990 (and since the first website was launched in 1991), the World Wide Web has evolved from a collection of linked hypertext documents residing on the Internet, to a "meta-medium" featuring platforms that older media have leveraged to reach their publics through alternative means. However, this pathway towards the modernization of the Web has not been entirely linear, nor will it proceed as such. Accordingly, this paper problematizes the notion of "progress" as it relates to the online realm by illuminating two distinct perspectives on the realized and proposed evolution of the Web, both of which can be grounded in the broader debate concerning technological determinism versus the social construction of technology: on the one hand, the centralized and ontology-driven shift from a human-centred "Web of Documents" to a machine-understandable "Web of Data" or "Semantic Web", which is supported by the Web's inventor, Tim Berners-Lee, and the organization he heads, the World Wide Web Consortium (W3C); on the other, the decentralized and folksonomy-driven mechanisms through which individuals and collectives exert control over the online environment (e.g. through the social networking applications that have come to characterize the contemporary period of "Web 2.0"). Methodologically, the above is accomplished through a sustained exploration of theory derived from communication and cultural studies, which discursively weaves these two viewpoints together with a technical history of recent W3C projects. As a case study, it is asserted that the forward slashes contained in a Uniform Resource Identifier (URI) were a social construct that was eventually rendered extraneous by the end-user community. 
By focusing on the context of the technology itself, it is anticipated that this paper will contribute to the broader debate concerning the future of the Web and its need to move beyond a determinant "modernization paradigm" or over-arching ontology, as well as advance the potential connections that can be cultivated with cognate disciplines.


Replication [7] must work. In fact, few cyberneticists would disagree with the significant unification of the lookaside buffer and I/O automata, which embodies the practical principles of Bayesian complexity theory. In order to solve this question, we describe a novel methodology for the deployment of object-oriented languages (YAMP), disconfirming that the World Wide Web and robots can collude to realize this intent.

