Evaluation of Online Disaster and Emergency Preparedness Resources

2008 ◽  
Vol 23 (5) ◽  
pp. 438-446 ◽  
Author(s):  
Daniela B. Friedman ◽  
Manju Tanwar ◽  
Jane V.E. Richter

Introduction: Increasingly, individuals are relying on the Internet as a major source of health information. When faced with sudden or pending disasters, people resort to the Internet in search of clear, current, and accurate instructions on how to prepare for and respond to such emergencies. Research about online health resources ascertained that information was written at the secondary education and college levels and extremely difficult for individuals with limited literacy to comprehend. This content analysis is the first to assess the reading difficulty level and format suitability of a large number of disaster and emergency preparedness Web pages intended for the general public. Objectives: The aims of this study were to: (1) assess the readability and suitability of disaster and emergency preparedness information on the Web; and (2) determine whether the reading difficulty level and suitability of online resources differ by the type of disaster or emergency and/or Website domain. Methods: Fifty Websites containing information on disaster and/or emergency preparedness were retrieved using the Google™ search engine. Readability testing was conducted on the first Web page, suggested by Google™, addressing preparedness for the general public. The reading level was assessed using Flesch-Kincaid (F-K) and Flesch Reading Ease (FRE) measures. The Suitability Assessment of Materials (SAM) instrument was used to evaluate additional factors such as graphics, layout, and cultural appropriateness. Results: The mean F-K readability score of the 50 Websites was Grade 10.74 (95% CI = 9.93, 11.55). The mean FRE score was 45.74 (95% CI = 41.38, 50.10), a score considered “difficult”. A Web page with content about both risk and preparedness supplies was the most difficult to read according to F-K (Grade level = 12.1). Web pages with general disaster and emergency information and preparedness supplies were considered most difficult according to the FRE (38.58, 95% CI = 30.09, 47.08). The average SAM score was 48% or 0.48 (95% CI = 0.45, 0.51), implying below average suitability of these Websites. Websites on pandemics and bioterrorism were the most difficult to read (F-K: p = 0.012; FRE: p = 0.014) and least suitable (SAM: p = 0.035) compared with other disasters and emergencies. Conclusions: The results suggest the need for readily accessible preparedness resources on the Web that are easy to read and visually appropriate. Interdisciplinary collaborations between public health educators, risk communication specialists, and Web page creators and writers are recommended to ensure the development and dissemination of disaster and emergency resources that consider literacy abilities of the general public.
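For readers unfamiliar with the measures used above, the Flesch Reading Ease and Flesch-Kincaid Grade Level scores are simple functions of average sentence length and average syllables per word. The sketch below shows the standard formulas in Python; the syllable counter is a rough vowel-group heuristic (an assumption of this illustration), whereas studies such as this one typically rely on dedicated readability software.

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count groups of consecutive vowels as syllables."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability_scores(text: str) -> tuple[float, float]:
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    words_per_sentence = len(words) / len(sentences)
    syllables_per_word = syllables / len(words)
    fre = 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word   # 30-49 reads as "difficult"
    fk_grade = 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59  # approximate U.S. grade level
    return fre, fk_grade

sample = ("Store at least one gallon of water per person per day. "
          "Keep a flashlight and extra batteries where you can find them quickly.")
print(readability_scores(sample))
```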

Author(s):  
John DiMarco

Web authoring is the process of developing Web pages. The Web development process requires you to use software to create functional pages that will work on the Internet. Adding Web functionality means creating specific components within a Web page that do something. Adding links, rollover graphics, and interactive multimedia items to a Web page are examples of enhanced functionality. This chapter demonstrates Web-based authoring techniques using Macromedia Dreamweaver. The focus is on adding Web functions to pages generated from Macromedia Fireworks and on providing an overview of creating Web pages from scratch using Dreamweaver. Dreamweaver and Fireworks are professional Web applications, and using professional Web software will benefit you tremendously. There are other ways to create Web pages using applications not specifically made for that purpose, including Microsoft Word and Microsoft PowerPoint. The use of Microsoft applications for Web page development is not covered in this chapter; however, I do provide steps on how to use these applications for Web page authoring in the appendix of this text. If you feel more comfortable using the Microsoft applications, or the Macromedia applications simply aren’t available to you yet, follow the same process for Web page conceptualization and content creation and use the programs available to you. You should try to gain Web page development skills using Macromedia Dreamweaver because it helps you expand your software skills beyond basic office applications. The ability to create a Web page using professional Web development software is important to building a high-end computer skill set. The main objective of this chapter is to get you involved in the technical processes you’ll need to create the Web portfolio. The focus will be on guiding you through opening your sliced pages, adding links, using tables, creating pop-up windows for content, and using layers and timelines for dynamic HTML. The coverage will not attempt to provide a complete tutorial set for Macromedia Dreamweaver, but will highlight essential techniques. Along the way you will get pieces of hand-coded ActionScript and JavaScript. You can decide which pieces you want to use in your own Web portfolio pages. The techniques provided are a concentrated workflow for creating Web pages. Let us begin to explore Web page authoring.


2019 ◽  
Vol 8 (2S11) ◽  
pp. 2011-2016

With the explosive growth in the number of web pages, it is very hard to find the desired information easily and quickly among the thousands of pages retrieved by a search engine. There is a growing need for automatic classification techniques with higher classification accuracy. There are situations today in which it is necessary to have an efficient and reliable classification of a web page based on the information contained in the URL (Uniform Resource Locator) alone, without having to visit the page itself. We want to know whether the URL can be used without needing to fetch and inspect the page, for a variety of reasons. Retrieving the page content and sorting through it to determine the genre of the web page is very time consuming and requires the user to understand the structure of the page being classified. To avoid this time-consuming process, we propose an alternative method that derives the genre of an entered URL from the URL itself together with the page metadata, i.e., the description, the keywords used in the website, and the title of the site. This approach therefore relies not only on the URL but also on content from the web application. The proposed system can be evaluated using several available datasets.
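As a concrete illustration of this kind of URL-plus-metadata classification, the sketch below tokenizes a URL, concatenates it with the page title, description, and keywords, and trains a TF-IDF text classifier. It assumes scikit-learn, and the training examples and genre labels are hypothetical placeholders rather than the datasets or model actually used by the authors.

```python
# Minimal sketch of URL + metadata genre classification (not the authors' exact pipeline).
# Assumes scikit-learn; the training examples below are hypothetical placeholders.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def url_and_metadata_to_text(url: str, title: str, description: str, keywords: str) -> str:
    # Split the URL on punctuation so tokens like "news" or "cart" become features,
    # then append the page metadata (title, description, keywords).
    url_tokens = " ".join(re.split(r"[/.\-_?=&:]+", url.lower()))
    return " ".join([url_tokens, title, description, keywords])

train_docs = [
    url_and_metadata_to_text("https://dailyherald.example/world/politics",
                             "World politics news", "Breaking political coverage", "news, politics"),
    url_and_metadata_to_text("https://budgetmart.example/cart/checkout",
                             "Checkout", "Buy electronics at low prices", "shop, deals, electronics"),
]
train_labels = ["news", "e-commerce"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_docs, train_labels)
print(model.predict([url_and_metadata_to_text(
    "https://gadgetbarn.example/product/phone-case", "Phone case", "Add to cart today", "shop")]))
```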


2005 ◽  
Vol 5 (3) ◽  
pp. 255-268 ◽  
Author(s):  
Russell Williams ◽  
Rulzion Rattray

Organisations increasingly use the internet and web to communicate with the marketplace. Indeed, the hotel industry seems particularly suited to the use of these technologies. Many sites are not accessible to large segments of the disabled community, however, or to individuals using particular hardware and software. Identifying the competitive and legal mandates for website accessibility, the study looks at the accessibility of UK-based hotel websites. Utilising the accessibility software Bobby, as well as making some additional manual accessibility checks, the study finds disappointingly low levels of website accessibility. If organisations want to make more effective use of the web, then they need to ensure that their web pages are designed from the outside in, that is, from the user's perspective.
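Bobby, the checker used in this study, has since been discontinued; the sketch below illustrates the kind of automated checks such tools run (missing image alt text, unlabeled form fields) using the beautifulsoup4 package, which is an assumption of this example. A real audit covers many more WCAG checkpoints and, as the study notes, still needs manual checks.

```python
# A minimal sketch of automated accessibility checks of the kind Bobby performed.
# Assumes the beautifulsoup4 package; real audits cover far more checkpoints.
from bs4 import BeautifulSoup

def basic_accessibility_issues(html: str) -> list[str]:
    soup = BeautifulSoup(html, "html.parser")
    issues = []
    for img in soup.find_all("img"):
        if not img.get("alt"):
            issues.append(f"Image without alt text: {img.get('src', '<no src>')}")
    labelled = {lab.get("for") for lab in soup.find_all("label")}
    for field in soup.find_all(["input", "select", "textarea"]):
        if field.get("type") in ("hidden", "submit", "button"):
            continue
        if field.get("id") not in labelled and not field.get("aria-label"):
            issues.append(f"Form field without label: {field.get('name', '<unnamed>')}")
    return issues

html = '<img src="pool.jpg"><form><input type="text" name="arrival"></form>'
print(basic_accessibility_issues(html))
```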


2009 ◽  
Vol 36 (1) ◽  
pp. 41-49 ◽  
Author(s):  
ANDREW E. THOMPSON ◽  
SARA L. GRAYDON

Objective: With continuing use of the Internet, rheumatologists are referring patients to various websites to gain information about medications and diseases. Our goal was to develop and evaluate a Medication Website Assessment Tool (MWAT) for use by health professionals, and to explore the overall quality of methotrexate information presented on common English-language websites. Methods: Identification of websites was performed using a search strategy on the search engine Google. The first 250 hits were screened. Inclusion criteria comprised English-language websites from authoritative sources and trusted medical, physicians’, and common health-related websites. Websites from pharmaceutical companies, online pharmacies, and websites whose purpose seemed to be primarily advertisement were also included. Product monographs or technical-based web pages and web pages where the information was clearly directed at patients with cancer were excluded. Two reviewers independently scored each included web page for completeness and accuracy, format, readability, reliability, and credibility. An overall ranking was provided for each methotrexate information page. Results: Twenty-eight web pages were included in the analysis. The average score for completeness and accuracy was 15.48 ± 3.70 (maximum 24) with 10 out of 28 pages scoring 18 (75%) or higher. The average format score was 6.00 ± 1.46 (maximum 8). The Flesch-Kincaid Grade Level revealed an average grade level of 10.07 ± 1.84, with 5 out of 28 websites written at a reading level less than grade 8; however, no web page scored at a grade 5 to 6 level. An overall ranking was calculated identifying 8 web pages as appropriate sources of accurate and reliable methotrexate information. Conclusion: With the enormous amount of information available on the Internet, it is important to direct patients to web pages that are complete, accurate, readable, and credible sources of information. We identified web pages that may serve the interests of both rheumatologists and patients.
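The abstract gives the maxima for completeness (24) and format (8) but not the weighting used to combine the dimensions into the overall ranking, so the sketch below is a purely hypothetical equal-weight aggregation with an illustrative readability cut-off, not the MWAT's actual scoring rule.

```python
# Hypothetical aggregation of MWAT-style dimension scores into an overall score.
# The maxima (completeness 24, format 8) come from the abstract; the equal-weight
# average and the readability thresholds below are illustrative assumptions only.
def overall_score(completeness: float, format_score: float, fk_grade: float) -> float:
    completeness_norm = completeness / 24            # 0..1
    format_norm = format_score / 8                   # 0..1
    # Readability credit: full marks at grade 6 or below, none at grade 14 or above.
    readability_norm = min(1.0, max(0.0, (14 - fk_grade) / 8))
    return round((completeness_norm + format_norm + readability_norm) / 3, 3)

# Example: a page scoring 18/24 for completeness, 6/8 for format, written at grade 10.07.
print(overall_score(18, 6, 10.07))
```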


Author(s):  
Vijay Kasi ◽  
Radhika Jain

In the context of the Internet, a search engine can be defined as a software program designed to help one access information, documents, and other content on the World Wide Web. The adoption and growth of the Internet in the last decade has been unprecedented. The World Wide Web has always been applauded for its simplicity and ease of use, which is evident in how little knowledge one requires to build a Web page. The flexible nature of the Internet has enabled its rapid growth and adoption, but it has also made it hard to search for relevant information on the Web. The number of Web pages has been increasing at an astronomical pace, from around 2 million registered domains in 1995 to 233 million registered domains in 2004 (Consortium, 2004). The Internet, considered a distributed database of information, has the CRUD (create, retrieve, update, and delete) rule applied to it. While the Internet has been effective at creating, updating, and deleting content, it has considerably lagged in enabling the retrieval of relevant information. After all, there is no point in having a Web page that has little or no visibility on the Web. Since the 1990s, when the first search program was released, we have come a long way in terms of searching for information. Although we are currently witnessing tremendous growth in search engine technology, the growth of the Internet has overtaken it, leaving the existing search engine technology falling short. When we apply the metrics of relevance, rigor, efficiency, and effectiveness to the search domain, it becomes very clear that we have progressed on the rigor and efficiency metrics by utilizing abundant computing power to produce faster searches over a lot of information. Rigor and efficiency are evident in the large number of pages indexed by the leading search engines (Barroso, Dean, & Holzle, 2003). However, more research needs to be done to address the relevance and effectiveness metrics. Users typically type in two to three keywords when searching, only to end up with a search result containing thousands of Web pages! This has made it increasingly hard to find useful, relevant information effectively. Search engines face a number of challenges today that require them to perform rigorous searches with relevant results efficiently so that they are effective. These challenges include the following (“Search Engines,” 2004):

1. The Web is growing at a much faster rate than any present search engine technology can index.
2. Web pages are updated frequently, forcing search engines to revisit them periodically.
3. Dynamically generated Web sites may be slow or difficult to index, or may result in excessive results from a single Web site.
4. Many dynamically generated Web sites are not able to be indexed by search engines.
5. The commercial interests of a search engine can interfere with the order of relevant results the search engine shows.
6. Content that is behind a firewall or that is password protected is not accessible to search engines (such as content found in several digital libraries).
7. Some Web sites have started using tricks such as spamdexing and cloaking to manipulate search engines into displaying them as the top results for a set of keywords. This can pollute the search results, with more relevant links being pushed down the result list. This is a consequence of the popularity of Web searches and the business potential search engines can generate today.
8. Search engines index all the content of the Web without any bounds on the sensitivity of the information, which has raised a few security and privacy flags.

With the above background and challenges in mind, we lay out the article as follows. In the next section, we begin with a discussion of search engine evolution. To facilitate the examination and discussion of the progress of search engine development, we break down this discussion into the three generations of search engines. Figure 1 depicts this evolution pictorially and highlights the need for better search engine technologies. Next, we present a brief discussion of the contemporary state of search engine technology and the various types of content searches available today. With this background, the following section documents various concerns about existing search engines, setting the stage for better search engine technology. These concerns include information overload, relevance, representation, and categorization. Finally, we briefly address the research efforts under way to alleviate these concerns and then present our conclusion.
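To make the indexing and retrieval split discussed above concrete, the toy inverted index below maps terms to the documents containing them and ranks results by summed term frequency. It is an illustration only; production engines layer link analysis, freshness handling, and spam defenses (the challenges listed above) on top of such a structure.

```python
# Toy inverted index illustrating the indexing/retrieval split discussed above.
from collections import defaultdict

docs = {
    "d1": "disaster preparedness checklist for families",
    "d2": "search engine indexing and crawling basics",
    "d3": "how web search engines rank relevant pages",
}

index = defaultdict(dict)               # term -> {doc_id: term frequency}
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term][doc_id] = index[term].get(doc_id, 0) + 1

def search(query: str):
    scores = defaultdict(int)
    for term in query.lower().split():
        for doc_id, tf in index.get(term, {}).items():
            scores[doc_id] += tf        # naive score: sum of term frequencies
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(search("search engines"))         # no stemming in this toy: "engine" != "engines"
```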


2010 ◽  
Vol 3 (2) ◽  
pp. 50-66 ◽  
Author(s):  
Mohamed El Louadi ◽  
Imen Ben Ali

The major complaint users have about using the Web is that they must wait for information to load onto their screen. This is more acute in countries where bandwidth is limited and fees are high. Given bandwidth limitations, Web pages are often hard to accelerate. Predictive feedback information is assumed to distort Internet users’ perception of time, making them more tolerant of low speed. This paper explores the relationship between actual Web page loading delay and perceived Web page loading delay and two aspects of user satisfaction: the Internet user’s satisfaction with the Web page loading delay and satisfaction with the Web page displayed. It also investigates whether predictive feedback information can alter Internet user’s perception of time. The results show that, though related, perceived time and actual time differ slightly in their effect on satisfaction. In this case, it is the perception of time that counts. The results also show that the predictive feedback information displayed on the Web page has an effect on the Internet user’s perception of time, especially in the case of slow Web pages.


2018 ◽  
Author(s):  
Joao Palotti ◽  
Guido Zuccon ◽  
Allan Hanbury

BACKGROUND Understandability plays a key role in ensuring that people accessing health information are capable of gaining insights that can assist them with their health concerns and choices. The access to unclear or misleading information has been shown to negatively impact the health decisions of the general public. OBJECTIVE The aim of this study was to investigate methods to estimate the understandability of health Web pages and use these to improve the retrieval of information for people seeking health advice on the Web. METHODS Our investigation considered methods to automatically estimate the understandability of health information in Web pages, and it provided a thorough evaluation of these methods using human assessments as well as an analysis of preprocessing factors affecting understandability estimations and associated pitfalls. Furthermore, lessons learned for estimating Web page understandability were applied to the construction of retrieval methods, with specific attention to retrieving information understandable by the general public. RESULTS We found that machine learning techniques were more suitable to estimate health Web page understandability than traditional readability formulae, which are often used as guidelines and benchmark by health information providers on the Web (larger difference found for Pearson correlation of .602 using gradient boosting regressor compared with .438 using Simple Measure of Gobbledygook Index with the Conference and Labs of the Evaluation Forum eHealth 2015 collection). CONCLUSIONS The findings reported in this paper are important for specialized search services tailored to support the general public in seeking health advice on the Web, as they document and empirically validate state-of-the-art techniques and settings for this domain application.
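A minimal sketch of the comparison reported above: a learned regressor and a readability-formula stand-in are both scored by Pearson correlation against human understandability assessments. It assumes scikit-learn and SciPy; the features and "human" labels are random placeholders, not the CLEF eHealth 2015 collection or the authors' feature set.

```python
# Sketch of the comparison described above: a learned regressor vs. a readability
# formula, both evaluated by Pearson correlation with human assessments.
# Assumes scikit-learn/scipy; the data below are random placeholders.
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.random((200, 5))                     # e.g. sentence length, word length, HTML features
human = X @ np.array([0.5, 0.2, 0.1, 0.1, 0.1]) + rng.normal(0, 0.05, 200)

X_train, X_test = X[:150], X[150:]
y_train, y_test = human[:150], human[150:]

gbr = GradientBoostingRegressor().fit(X_train, y_train)
r_ml, _ = pearsonr(gbr.predict(X_test), y_test)

formula_estimate = X_test[:, 0]              # stand-in for a single readability-formula score
r_formula, _ = pearsonr(formula_estimate, y_test)

print(f"ML regressor r = {r_ml:.3f}, readability-formula r = {r_formula:.3f}")
```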


Think India ◽  
2019 ◽  
Vol 22 (2) ◽  
pp. 174-187
Author(s):  
Harmandeep Singh ◽  
Arwinder Singh

Nowadays, the internet provides people with a variety of services across different fields. Both profit and non-profit organizations use the internet for various business purposes, a major one being the communication of financial as well as non-financial information on their websites. This study was conducted on the top 30 BSE-listed public sector companies to measure the extent of governance disclosure (non-financial information) on their web pages. The disclosure index approach was used to examine the extent of governance disclosure on the internet. The governance index was constructed and broadly categorized into three dimensions, i.e., organization and structure; strategy and planning; and accountability, compliance, philosophy and risk management. The empirical evidence of the study reveals that all the Indian public sector companies have a website and that, on average, 67% of companies disclose some kind of governance information directly on their websites. Further, we found extreme variation in web disclosure between the three categories, i.e., the Maharatnas, the Navratnas, and the Miniratnas. However, the result of the Kruskal-Wallis test indicates that there is no significant difference between the three categories. The study provides valuable insights into the Indian economy. It shows that Indian public sector companies use the internet for governance disclosure to some extent, but the disclosure lacks symmetry because there is no regulation for web disclosure. Thus, the study recommends a regulatory framework for web disclosure so that stakeholders can be assured of the transparency and reliability of the information.
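A small sketch of the two analytical steps mentioned above: computing a disclosure index per company and comparing the three ownership categories with the Kruskal-Wallis test. It assumes SciPy, and the checklist size and per-company counts are illustrative, not the study's data.

```python
# Sketch of a disclosure-index calculation and the Kruskal-Wallis comparison across
# company categories. Assumes scipy; the scores below are illustrative placeholders.
from scipy.stats import kruskal

def disclosure_index(disclosed_items: int, total_items: int) -> float:
    """Share of checklist items a company discloses on its website (0..1)."""
    return disclosed_items / total_items

# Hypothetical per-company indices (e.g., out of a 40-item governance checklist).
maharatna = [disclosure_index(n, 40) for n in (30, 28, 27, 25, 31)]
navratna  = [disclosure_index(n, 40) for n in (26, 24, 29, 22, 27)]
miniratna = [disclosure_index(n, 40) for n in (23, 25, 28, 21, 26)]

stat, p_value = kruskal(maharatna, navratna, miniratna)
print(f"H = {stat:.3f}, p = {p_value:.3f}")   # p > 0.05 would mirror the study's finding
```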


2020 ◽  
Vol 4 (3) ◽  
pp. 551-557
Author(s):  
Muhammad Zaky Ramadhan ◽ 
Kemas Muslim Lhaksmana

Hadiths have several levels of authenticity, among which are the weak (dhaif) and the fabricated (maudhu), which may not originate from the prophet Muhammad PBUH and thus should not be considered in deriving Islamic law (sharia). However, many such hadiths have been commonly mistaken for authentic hadiths among ordinary Muslims. To make such hadiths easy to distinguish, this paper proposes a method for checking the authenticity of a hadith by comparing it with a collection of fabricated hadiths in Indonesian. The proposed method applies the vector space model and also performs spelling correction using SymSpell, to check whether spelling correction can improve the accuracy of hadith retrieval; this has not been done in previous work, and typos are common in the raw text of Indonesian-translated hadiths on the Web and social media. The experimental results show that the use of spell checking improves mean average precision from 73% to 81% and recall from 80% to 89%. Therefore, the improvement in accuracy from implementing spelling correction makes the hadith retrieval system more feasible, and its implementation is encouraged in future work because it can correct the typos that are common in raw text on the Internet.
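A minimal sketch of the described pipeline: correct query spelling, then rank documents with a TF-IDF vector space model and cosine similarity. To keep the example self-contained it substitutes difflib's closest-match lookup for SymSpell (which needs a frequency dictionary), and the two document strings are placeholders, not the paper's hadith collection.

```python
# Minimal sketch of the pipeline described above: correct query spelling, then rank
# documents with a TF-IDF vector space model. difflib stands in for SymSpell here so
# the example needs no external dictionary; the documents are placeholders.
import difflib
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

fabricated_hadiths = [
    "menuntut ilmu sampai ke negeri cina",
    "kebersihan adalah sebagian dari iman",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(fabricated_hadiths)
vocabulary = list(vectorizer.get_feature_names_out())

def correct(query: str) -> str:
    corrected = []
    for word in query.lower().split():
        match = difflib.get_close_matches(word, vocabulary, n=1, cutoff=0.75)
        corrected.append(match[0] if match else word)
    return " ".join(corrected)

query = "kebersihan sebagian dari imann"        # typo: "imann"
scores = cosine_similarity(vectorizer.transform([correct(query)]), doc_matrix)[0]
ranking = sorted(enumerate(scores), key=lambda kv: kv[1], reverse=True)
print(correct(query), ranking)
```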


2002 ◽  
Vol 7 (1) ◽  
pp. 9-25 ◽  
Author(s):  
Moses Boudourides ◽  
Gerasimos Antypas

In this paper we present a simple simulation of the World Wide Web, in which one observes the appearance of web pages belonging to different web sites, covering a number of different thematic topics and possessing links to other web pages. The goal of our simulation is to reproduce the form of the observed World Wide Web and of its growth, using a small number of simple assumptions. In our simulation, existing web pages may generate new ones as follows: First, each web page is equipped with a topic concerning its contents. Second, links between web pages are established according to common topics. Next, new web pages may be randomly generated and subsequently equipped with a topic and assigned to web sites. By repeated iteration of these rules, our simulation appears to exhibit the observed structure of the World Wide Web and, in particular, a power-law type of growth. In order to visualise the network of web pages, we have followed N. Gilbert's (1997) methodology of scientometric simulation, assuming that web pages can be represented by points in the plane. Furthermore, the simulated graph is found to possess the small-world property, as is the case with a large number of other complex networks.
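A minimal sketch in the spirit of the simulation described above, assuming the networkx package: pages arrive with a random topic, link within their topic with a preference for already well-linked pages, and the resulting graph can be inspected for a heavy-tailed in-degree distribution and clustering. The page count, topic count, and link rule are illustrative choices, and the spatial (points-in-the-plane) visualisation used in the paper is omitted.

```python
# Illustrative topic-driven web-growth simulation (not the paper's exact model).
# Assumes networkx; parameters below are arbitrary.
import random
from collections import Counter
import networkx as nx

random.seed(42)
TOPICS = list(range(5))
G = nx.DiGraph()

for page in range(2000):
    topic = random.choice(TOPICS)
    G.add_node(page, topic=topic)
    same_topic = [n for n, d in G.nodes(data=True) if d["topic"] == topic and n != page]
    if same_topic:
        # Prefer already well-linked pages within the topic (rich-get-richer linking).
        weights = [G.in_degree(n) + 1 for n in same_topic]
        for target in random.choices(same_topic, weights=weights, k=min(3, len(same_topic))):
            G.add_edge(page, target)

in_degree_counts = Counter(d for _, d in G.in_degree())
print("in-degree distribution (degree: count):", dict(sorted(in_degree_counts.items())[:10]))
print("average clustering:", round(nx.average_clustering(G.to_undirected()), 3))
```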

