Internet, Reengineering and Technology Applications in Retailing

Author(s):  
Dr. Rajagopal

The rapid growth in computer technology and commercial needs has brought significant changes to information management systems. In the early nineties, many commercial network backbones emerged to link with the NSFnet and provide market information to business firms. The Internet today is a combination of NSFnet and commercially available backbone services disseminating information across decentralized networks all over the world. It is estimated that there are over 30,000 computer networks connecting over 2 million computers with each other on the Web. In view of the increasing use of electronic information sources through these networks, the Transmission Control Protocol/Internet Protocol (TCP/IP) was designed, and each user network is required to abide by its standards, which enables data transfer and retrieval at source.

2010 ◽  
pp. 1324-1342


Author(s):  
Mu-Chun Su ◽  
Shao-Jui Wang ◽  
Chen-Ko Huang ◽  
Pa-Chun Wang ◽  
...  

Most of the dramatically increased amount of information available on the World Wide Web is provided via HTML and formatted for human browsing rather than for software programs. This situation calls for a tool that automatically extracts information from semistructured Web information sources, increasing the usefulness of value-added Web services. We present a signal-representation-based parser (SIRAP) that breaks Web pages up into logically coherent groups - groups of information related to an entity, for example. Templates for records with different tag structures are generated incrementally by a Histogram-Based Correlation Coefficient (HBCC) algorithm; records on a Web page are then detected efficiently by matching against the generated templates. Hundreds of Web pages from 17 state-of-the-art search engines were used to demonstrate the feasibility of our approach.
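The core idea behind a histogram-based correlation coefficient can be illustrated with a minimal sketch (this is not the authors' implementation; the tag vocabulary and record sequences below are hypothetical): each record's HTML tag sequence is reduced to a histogram of tag counts, and two candidate records are compared with Pearson's correlation coefficient - a high correlation suggests they share a template.

```python
# Minimal sketch, assuming records are already segmented into tag sequences.
from collections import Counter
import math

def tag_histogram(tags, vocab):
    """Count occurrences of each tag, in a fixed vocabulary order."""
    counts = Counter(tags)
    return [counts.get(t, 0) for t in vocab]

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

vocab = ["tr", "td", "a", "img", "br"]       # hypothetical tag vocabulary
record_a = ["tr", "td", "a", "td", "img"]    # tag sequence of one record
record_b = ["tr", "td", "a", "td", "br"]     # tag sequence of another
r = pearson(tag_histogram(record_a, vocab), tag_histogram(record_b, vocab))
# Highly correlated tag-count histograms suggest a shared record template.
```

In the paper's setting, templates would be refined incrementally as new records are matched; the sketch only shows the pairwise comparison step.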


2010 ◽  
Vol 1 (1) ◽  
pp. 23-28 ◽  
Author(s):  
S. Altan Erdem

As the Internet gained more significance in various dimensions of our lives and dealings with others, it was just a matter of time before the world of healthcare incorporated Internet use into its routines. We are now seeing tangible examples of this use in many facets of the healthcare industry. Both providers and patients have been pursuing Internet-related strategies, remedies, and routines for a while now. It has been stated that the majority of Americans who are online are looking for health information. Healthcare information is accessed more than sports, stocks, and shopping. Some believe that this growing use of online health information sources can bridge the gap between what patients know and what they are told. In other words, these patients can visit their physicians armed with knowledge obtained on the Web and pursue rather educated discussions with their physicians about their medical issues. Of course, this holds only on the assumptions that the websites these patients use provide accurate information and that the patients comprehend this information properly. The purpose of this paper is to briefly review some of the ongoing trends in this field and to examine the practicality of the two assumptions listed above. It is hoped that inquiries like this result in a better understanding of the components required for a proper use of online options to improve the efficiency of healthcare practices.


2011 ◽  
pp. 1417-1421
Author(s):  
John F. Clayton

The development of the Internet has a relatively brief and well-documented history (Cerf, 2001; Griffiths, 2001; Leiner et al., 2000; Tyson, 2002). The initial concept was first mooted in the early 1960s. American computer specialists visualized the creation of a globally interconnected set of computers through which everyone could quickly access data and programs from any node, or place, in the world. In the early 1970s, a research project initiated by the United States Department of Defense investigated techniques and technologies to interlink packet networks of various kinds. This was called the Internetting project, and the system of connected networks that emerged from the project was known as the Internet. The initial networks created were purpose-built (i.e., they were intended for and largely restricted to closed specialist communities of research scholars). However, other scholars, other government departments, and the commercial sector realized the system of protocols developed during this research (Transmission Control Protocol [TCP] and Internet Protocol [IP], collectively known as the TCP/IP Protocol Suite) had the potential to revolutionize data and program sharing in all parts of the community. A flurry of activity over the last two decades of the 20th century, beginning with the National Science Foundation (NSF) network NSFNET in 1986, created the Internet as we know it today. In essence, the Internet is a collection of computers joined together with cables and connectors following standard communication protocols.




2014 ◽  
Vol 03 (02) ◽  
pp. 41-44
Author(s):  
Marina Giampietro

In March 1989 at CERN, Tim Berners-Lee submitted his proposal to develop a radical new way of linking and sharing information over the internet. The document was entitled "Information Management: A Proposal" (CERN Courier May 2009 p24). And so the web was born. Now, Berners-Lee, the World Wide Web Consortium (W3C) and the World Wide Web Foundation are launching a series of initiatives to mark the 25th anniversary of the original proposal, and to raise awareness of themes linked to the web, such as freedom, accessibility and privacy.


2021 ◽  
Vol 32 (2) ◽  
pp. 162-172
Author(s):  
Marko Nel ◽  
Imke de Kock

There is a need in energy-poor sub-Saharan Africa for a system to manage energy more efficiently and effectively, both within and between countries. One approach proven in other parts of the world to facilitate this is the super grid. With their interconnection and information management systems, super grids can contribute to the increasingly effective and efficient management of energy, and they have the potential to increase sustainability. The applicability of super grids in the sub-Saharan African context is still uncertain and scientifically under-explored. In this article, the literature on super grids is analysed and contextualised from a bibliometric and content analysis perspective, in order to draw parallels between super grids and the sub-Saharan African context, and thus to investigate their applicability in that context.


Bioanalysis ◽  
2020 ◽  
Vol 12 (14) ◽  
pp. 1033-1038
Author(s):  
Cecilia Arfvidsson ◽  
David Van Bedaf ◽  
Susanne Globig ◽  
Magnus Knutsson ◽  
Mark Lewis ◽  
...  

In this paper, the European Bioanalysis Forum reports back on its discussions with software developers involved in regulated-bioanalysis software solutions about agreeing on a data transfer specification for the bioanalytical labs' LC–MS workflows, as part of today's Data Integrity (DI) challenges. The proposed specification aims to identify what constitutes a minimum dataset, that is, which pre-identified fields are to be included in a DI-proof bidirectional data transfer between LC–MS and information management systems. The proposal is an attempt by the European Bioanalysis Forum to help new software solutions become available that increase compliance related to DI in today's LC–MS workflows. The proposal may also serve as a template and inspiration for new data transfer solutions in other workflows.
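The notion of a minimum dataset for bidirectional transfer can be sketched as a typed record that survives a serialization round trip. The field names below are illustrative placeholders only - the actual field list is defined by the EBF proposal, not reproduced here:

```python
# Hypothetical sketch of a minimum-dataset record for LC-MS <-> LIMS
# transfer; field names are illustrative, not the EBF-specified fields.
from dataclasses import dataclass, asdict
import json

@dataclass
class TransferRecord:
    sample_id: str        # unique sample identifier
    analyte: str          # measured compound
    peak_area: float      # raw instrument response
    concentration: float  # back-calculated concentration
    units: str            # concentration units

record = TransferRecord("S-001", "compound_X", 12345.6, 25.1, "ng/mL")
payload = json.dumps(asdict(record))              # serialize for transfer
restored = TransferRecord(**json.loads(payload))  # round trip on receipt
```

A lossless round trip of pre-identified fields is one way to make the transfer auditable: the receiving system can verify that every mandated field arrived intact rather than re-keying values manually.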


2020 ◽  
pp. 151-156
Author(s):  
A. P. Korablev ◽  
N. S. Liksakova ◽  
D. M. Mirin ◽  
D. G. Oreshkin ◽  
P. G. Efimov

A new species list of plants and lichens of Russia and neighboring countries has been developed for Turboveg for Windows, a program for the storage and management of phytosociological data (relevés) that is widely used all around the world (Hennekens, Schaminée, 2001; Hennekens, 2015). The species list is built upon the database of the Russian website Plantarium (Plantarium…: [site]), which contains a species atlas and an illustrated online handbook of plants and lichens. The nomenclature used on Plantarium was originally based on the following sources: vascular plants — S. K. Cherepanov (1995) with additions; mosses — «Flora of mosses of Russia» (Proect...: [site]); liverworts and hornworts — A. D. Potemkin and E. V. Sofronova (2009); lichens — «Spisok…», G. P. Urbanavichyus ed. (2010); other sources (Plantarium...: [site]). The new species list, currently the most comprehensive in Turboveg format for Russia, has 89 501 entries, including 4627 genus taxa, compared to the old one with 32 020 entries (taxa) and only 253 synonyms. There are 84 805 species and subspecies taxa in the list, 37 760 (44.7 %) of which are accepted names, while the others are synonyms. Their distribution by groups of organisms and divisions is shown in the Table. The large number of synonyms in the new list and its adaptation to work with the Russian literature will greatly facilitate the entry of old relevé data. The process of compiling the new list, its structure, and the possibilities of checking taxonomic lists against Internet resources are considered. The files of the species list for Turboveg 2 and Turboveg 3, and the technique of associating existing databases with the new species list (in Russian), are available on the web page https://www.binran.ru/resursy/informatsionnyye-resursy/tekuschie-proekty/species_list_russia/.
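The value of carrying synonyms alongside accepted names can be sketched with a small lookup: old relevé data recorded under a synonym still resolves to the accepted taxon. The names below are hypothetical examples, not entries from the actual list:

```python
# Minimal sketch of synonym resolution in a species list; the mapping
# entries are hypothetical, not taken from the Turboveg list itself.
synonymy = {
    # accepted names map to themselves; synonyms map to the accepted name
    "Pinus sylvestris": "Pinus sylvestris",
    "Pinus rubra": "Pinus sylvestris",  # hypothetical synonym
}

def accepted_name(name):
    """Resolve a recorded name to its accepted name; None if unknown."""
    return synonymy.get(name)

# Entering old relevé data under a synonym still yields the accepted taxon.
resolved = accepted_name("Pinus rubra")
```

This is why the jump from 253 synonyms in the old list to tens of thousands in the new one matters for entering historical data: far more legacy names resolve automatically.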

