Snoring and the Internet: A Cartographic Exploration of Medicalisation

2021
Author(s): Christopher Sparks

The medicalisation of snoring has led to new industries of diagnosis, treatment, and transport regulation. Bryn Sparks’s research develops a novel mapping technique to model internet searches about snoring to help investigate medicalisation in the digital era. Bryn’s research explores the medicalisation of snoring across multiple levels: at the micro level of individuals, for whom internet searching influences conceptions of snoring; at the meso level of websites, where competition for attention interacts through search-engine feedback to amplify medicalisation; and at the macro level of the internet, in terms of how the shifting conception of snoring over time reflects a dynamic pattern of medicalisation.


Author(s): Rony Baskoro Lukito, Cahya Lukito, Deddy Arifin

The purpose of this research is to determine how to optimize a web design so that it increases the number of visitors. The number of Internet users in the world continues to grow in line with advances in information technology, and the marketing of products and services is no longer limited to printed and electronic media. Moreover, the cost of using the Internet as a marketing medium is relatively low compared with television, and the Internet reaches audiences in different parts of the world 24 hours a day. But for an internet site to be visited by many users, it is not enough for the site to look good on the surface. Web sites that serve as a marketing medium must be built according to the correct rules so that they become optimal marketing media. One of these rules is ensuring that the content of the web site is indexed well by search engines such as Google. The search engine optimization in this study focuses on Google, because 83% of internet users across the world use Google as their search engine. Search engine optimization, commonly known as SEO, is an important set of rules that makes an internet site easier for users to find with the desired keywords.
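To make the indexing rule concrete, here is a small illustrative sketch (not taken from the article) that fetches a page and checks a few basic on-page elements that search engines rely on when indexing: the title tag, the meta description, and the number of h1 headings. The URL is a placeholder assumption.

```python
# Hypothetical sketch: check a few basic on-page SEO elements
# (title, meta description, h1 headings). Not from the article;
# the URL below is only an example.
from html.parser import HTMLParser
from urllib.request import urlopen


class SeoAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.title = ""
        self.meta_description = ""
        self.h1_count = 0
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "h1":
            self.h1_count += 1
        elif tag == "meta" and attrs.get("name", "").lower() == "description":
            self.meta_description = attrs.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data


if __name__ == "__main__":
    url = "https://example.com/"  # assumed example URL
    html = urlopen(url).read().decode("utf-8", errors="replace")
    audit = SeoAudit()
    audit.feed(html)
    print("title:", audit.title.strip() or "MISSING")
    print("meta description:", audit.meta_description or "MISSING")
    print("h1 tags:", audit.h1_count)
```

A report like this only covers on-page basics; off-page factors such as inbound links and crawlability also affect how well a page is indexed.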


Author(s): Jos van Iwaarden, Ton van der Wiele, Roger Williams, Steve Eldridge

The Internet has come of age as a global source of information about every topic imaginable. A company like Google has become a household name in Western countries and making use of its internet search engine is so popular that “Googling” has even become a verb in many Western languages. Whether it is for business or private purposes, people worldwide rely on Google to present them relevant information. Even the scientific community is increasingly employing Google’s search engine to find academic articles and other sources of information about the topics they are studying. Yet, the vast amount of information that is available on the internet is gradually changing in nature. Initially, information would be uploaded by the administrators of the web site and would then be visible to all visitors of the site. This approach meant that web sites tended to be limited in the amount of content they provided, and that such content was strictly controlled by the administrators. Over time, web sites have granted their users the authority to add information to web pages, and sometimes even to alter existing information. Current examples of such web sites are eBay (auction), Wikipedia (encyclopedia), YouTube (video sharing), LinkedIn (social networking), Blogger (weblogs) and Delicious (social bookmarking).


Leonardo, 2000, Vol. 33 (5), pp. 347-350
Author(s): Andruid Kerne

CollageMachine builds interactive collages from the Web. First you choose a direction. Then CollageMachine will take you surfing out across the Internet as far as it can reach. It builds a collage from the most interesting media it can find for you. You don't have to click through links. You rearrange the collage to refine your exploration. CollageMachine is an agent of recombination. Aesthetics of musical composition and conceptual detournement underlie its development. The composer John Cage and Dada artists such as Marcel Duchamp and Max Ernst used structured chance procedures to create aesthetic assemblages. These works create new meaning by recontextualizing found objects. Instead of functioning as a single visual work, CollageMachine embodies the process of collage making. CollageMachine [1] deconstructs Web sites and re-presents them in collage form. The program crawls the Web, downloading sites. It breaks each page down into media elements—images and texts. Over time, these elements stream into a collage. Point, click, drag, and drop to rearrange the media. How you organize the elements shows CollageMachine what you're interested in. You can teach it to bring media of interest to you. On the basis of your interactions, CollageMachine reasons about your interests; the evolving model informs ongoing choices of selection and placement. CollageMachine has been developed through a process of freely combining disciplines according to the principles of “interface ecology.”
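As a rough, hypothetical sketch of the kind of interest model described above (not Kerne's actual CollageMachine code), the snippet below keeps a weight per media element, raises or lowers weights as the user keeps or removes elements, spreads part of that change to elements from the same source page, and picks the next element to stream by weighted chance.

```python
# Hypothetical sketch of an interest model for a collage agent:
# media elements carry weights; user interactions adjust them and
# bleed over to elements from the same source page. Not the actual
# CollageMachine implementation.
import random
from dataclasses import dataclass, field


@dataclass
class MediaElement:
    url: str          # source page the element was extracted from
    kind: str         # "image" or "text"
    content: str
    weight: float = 1.0


@dataclass
class InterestModel:
    elements: list = field(default_factory=list)

    def register_interaction(self, element: MediaElement, liked: bool):
        """Keeping/dragging an element raises its weight; removing it
        lowers it, and a fraction of the change spreads to elements
        from the same source page."""
        delta = 0.5 if liked else -0.5
        element.weight = max(0.1, element.weight + delta)
        for other in self.elements:
            if other is not element and other.url == element.url:
                other.weight = max(0.1, other.weight + 0.2 * delta)

    def pick_next(self) -> MediaElement:
        """Choose the next element to stream into the collage,
        favouring higher weights (weighted random choice)."""
        weights = [e.weight for e in self.elements]
        return random.choices(self.elements, weights=weights, k=1)[0]


if __name__ == "__main__":
    model = InterestModel(elements=[
        MediaElement("http://example.org/a", "image", "photo1.jpg"),
        MediaElement("http://example.org/a", "text", "caption about photo1"),
        MediaElement("http://example.org/b", "image", "photo2.jpg"),
    ])
    model.register_interaction(model.elements[0], liked=True)
    print(model.pick_next().content)
```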


Author(s): Vijay Kasi, Radhika Jain

In the context of the Internet, a search engine can be defined as a software program designed to help one access information, documents, and other content on the World Wide Web. The adoption and growth of the Internet in the last decade has been unprecedented. The World Wide Web has always been applauded for its simplicity and ease of use. This is evident looking at the extent of the knowledge one requires to build a Web page. The flexible nature of the Internet has enabled its rapid growth and adoption, making it hard to search for relevant information on the Web. The number of Web pages has been increasing at an astronomical pace, from around 2 million registered domains in 1995 to 233 million registered domains in 2004 (Consortium, 2004). The Internet, considered a distributed database of information, has the CRUD (create, retrieve, update, and delete) rule applied to it. While the Internet has been effective at creating, updating, and deleting content, it has considerably lacked in enabling the retrieval of relevant information. After all, there is no point in having a Web page that has little or no visibility on the Web. Since the 1990s, when the first search program was released, we have come a long way in terms of searching for information. Although we are currently witnessing a tremendous growth in search engine technology, the growth of the Internet has overtaken it, leading to a state in which the existing search engine technology is falling short.

When we apply the metrics of relevance, rigor, efficiency, and effectiveness to the search domain, it becomes very clear that we have progressed on the rigor and efficiency metrics by utilizing abundant computing power to produce faster searches with a lot of information. Rigor and efficiency are evident in the large number of pages indexed by the leading search engines (Barroso, Dean, & Holzle, 2003). However, more research needs to be done to address the relevance and effectiveness metrics. Users typically type in two to three keywords when searching, only to end up with a search result having thousands of Web pages! This has made it increasingly hard to effectively find any useful, relevant information. Search engines face a number of challenges today requiring them to perform rigorous searches with relevant results efficiently so that they are effective. These challenges include the following (“Search Engines,” 2004):

1. The Web is growing at a much faster rate than any present search engine technology can index.
2. Web pages are updated frequently, forcing search engines to revisit them periodically.
3. Dynamically generated Web sites may be slow or difficult to index, or may result in excessive results from a single Web site.
4. Many dynamically generated Web sites are not able to be indexed by search engines.
5. The commercial interests of a search engine can interfere with the order of relevant results the search engine shows.
6. Content that is behind a firewall or that is password protected is not accessible to search engines (such as those found in several digital libraries).
7. Some Web sites have started using tricks such as spamdexing and cloaking to manipulate search engines into displaying them as the top results for a set of keywords. This can pollute the search results, with more relevant links being pushed down in the result list. This is a result of the popularity of Web searches and the business potential search engines can generate today.
8. Search engines index all the content of the Web without any bounds on the sensitivity of information. This has raised a few security and privacy flags.

With the above background and challenges in mind, we lay out the article as follows. In the next section, we begin with a discussion of search engine evolution. To facilitate the examination and discussion of the progress of search engine development, we break this discussion down into the three generations of search engines. Figure 1 depicts this evolution pictorially and highlights the need for better search engine technologies. Next, we present a brief discussion of the contemporary state of search engine technology and the various types of content searches available today. With this background, the following section documents various concerns about existing search engines, setting the stage for better search engine technology. These concerns include information overload, relevance, representation, and categorization. Finally, we briefly address the research efforts under way to alleviate these concerns and then present our conclusion.
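To ground the relevance discussion, the following is a minimal sketch (not from the article) of the core data structure behind keyword search: an inverted index with simple term-frequency scoring. Real engines layer link analysis, freshness signals, and spam filtering on top of this, which is exactly where the challenges listed above arise.

```python
# Minimal illustrative inverted index with term-frequency scoring.
# A sketch only: real search engines add link analysis, freshness,
# anti-spam signals, and far more scalable data structures.
from collections import defaultdict


class TinyIndex:
    def __init__(self):
        # term -> {doc_id: term frequency in that document}
        self.postings = defaultdict(lambda: defaultdict(int))

    def add(self, doc_id: str, text: str):
        for term in text.lower().split():
            self.postings[term][doc_id] += 1

    def search(self, query: str, top_k: int = 5):
        scores = defaultdict(int)
        for term in query.lower().split():
            for doc_id, tf in self.postings.get(term, {}).items():
                scores[doc_id] += tf  # naive relevance score
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]


if __name__ == "__main__":
    index = TinyIndex()
    index.add("page1", "search engine technology and relevance of results")
    index.add("page2", "dynamically generated web sites are hard to index")
    print(index.search("search engine relevance"))
```

Even this toy example shows why pure term matching returns thousands of weakly relevant pages for a two- or three-keyword query: every page containing any query term receives a score.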


2018, Vol. 74, pp. 08015
Author(s): Gardito Qastalani, Kiki Fauziah

This research focuses on the strategy for fulfilling students’ information needs in the digital era. The purpose of this study is to identify strategies for meeting the information needs of students in the current digital era. The study uses a qualitative approach and a case study method. Data were collected by interviewing, observing, and analysing documents. The results show that students fulfil their information needs primarily through the internet, for example via e-journals, e-books, or search engines.


Author(s): Charle André Viljoen, Rob Scott Millar, Julian Hoevelmann, Elani Muller, Lina Hähnle, ...

Aims: Mobile learning refers to the acquisition of knowledge through accessing information on a mobile device. Although increasingly implemented in medical education, research on its utility in electrocardiography remains sparse. In this study, we explored the effect of mobile learning on the accuracy of ECG analysis and interpretation. Methods and results: The study comprised 181 participants (77 fourth- and 69 sixth-year medical students, and 35 residents). Participants were randomised to analyse ECGs with a mobile learning strategy (either searching the Internet or using an ECG reference app) or not. For each ECG, they provided their initial diagnosis, key supporting features, and final diagnosis consecutively. Two weeks later they analysed the same ECGs, without access to any mobile device. ECG interpretation was more accurate when participants used the ECG app (56%), as compared to searching the Internet (50.3%) or neither (43.5%, p = 0.001). Importantly, mobile learning supported participants in revising their initial incorrect ECG diagnosis (ECG app 18.7%, Internet search 13.6%, no mobile device 8.4%, p < 0.001). However, whilst this was true for students, there was no significant difference amongst residents. Internet searches were only useful if participants identified the correct ECG features. The app was beneficial when participants searched by ECG features, but not by diagnosis. Using the ECG reference app required less time than searching the Internet (7:44±4:13 vs 9:14±4:34, p < 0.001). Mobile learning gains were not sustained after two weeks. Conclusion: Whilst mobile learning contributes to increased ECG diagnostic accuracy, the benefits were not sustained over time.


Author(s): Jon T.S. Quah, Winnie C.H. Leow, K. L. Yong

This project experiments with designing a Web site that has the self-adaptive feature of generating and adapting the site contents dynamically to match visitors’ tastes based on their activities on the site. No explicit inputs are required from visitors. Instead, a visitor’s clickstream on the site is implicitly monitored, logged, and analyzed. Based on the information gathered, the Web site then generates Web contents containing items that have a certain relatedness to items the visitor previously browsed. The relatedness rules have multidimensional aspects in order to produce cross-mapping between items. The Internet has become a place where a vast amount of information can be deposited and retrieved by hundreds of millions of people scattered around the globe. With such an ability to reach this large pool of people, we have seen an explosion of companies plunging into conducting business over the Internet (e-commerce). This has made the competition for consumers’ dollars fierce. It is now insufficient to simply place product information on the Internet and expect customers to browse through the Web pages. Instead, e-commerce Web site design is undergoing a significant revolution: it has become an important strategy to design Web sites that can generate contents matched to the customer’s taste or preference. In fact, a survey done in 1998 (GVU, 1998) shows that around 23% of online shoppers reported a dissatisfying experience with Web sites that are confusing or disorganized. Personalization features on the Web would likely reverse this dissatisfaction and increase the likelihood of attracting and retaining visitors. Having personalization or an adaptive site can bring the following benefits:

1. Attract and maintain visitors with adaptive contents that are tailored to their taste.
2. Target Web contents to their respective audience, thus reducing information that is of no interest to the audience.
3. Advertise and promote products through marketing campaigns targeting the correct audience.
4. Enable the site to intelligently direct information to a selective or respective audience.

Currently, most Web personalization or adaptive features employ data mining or collaborative filtering techniques (Herlocker, Konstan, Borchers, & Riedl, 1999; Mobasher, Cooley, & Srivastava, 1999; Mobasher, Jain, Han, & Srivastava, 1997; Spiliopoulou, Faulstich, & Winkler, 1999), which often use past historical (static) data (e.g., previous purchases or server logs). Deploying data mining often involves significant resources (large storage space and computing power) and complicated rules or algorithms, and a vast amount of data is required to form recommendations that make sense and are meaningful in general (Claypool et al., 1999; Basu, Hirsh, & Cohen, 1998). While the main idea of Web personalization is to increase the ‘stickiness’ of a portal, on the presumption that the number of times a shopper returns to a shop is directly related to the likelihood of a business transaction, the methods of achieving this goal vary.

The methods range from user clustering and time-framed navigation session analysis (Kim et al., 2005; Wang & Shao, 2004), analyzing relationships between customers and products (Wang, Chuang, Hsu, & Keh, 2004), performing collaborative filtering and data mining on transaction data (Cho & Kim, 2002, 2004; Uchyigit & Clark, 2002; Jung, Jung, & Lee, 2003), deploying statistical methods for finding relationships (Kim & Yum, 2005), and making recommendations based on similarity with known user groups (Yu, Liu, & Li, 2005), to tracking shopping behavior over time as well as over the taxonomy of products. Our implementation works on the premise that each user has his own preferences and needs, and that these interests drift over time (Cho, Cho, & Kim, 2005). Therefore, besides identifying users’ needs, the system should also be sensitive to changes in taste. Finally, a truly useful system should not only recommend items in which a user has shown interest, but also related items that may be of relevance to the user (e.g., buying a pet => recommend suitable pet foods, as well as accessories that may be useful, such as a fur brush, nail clippers, etc.). In this aspect, we borrow the concept of ‘category management’ used in the retail industry to perform classification as well as to link categories through shared characteristics. These linkages provide the bridge for cross-category recommendations.
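A minimal sketch of the cross-category linkage idea (illustrative only; the categories, items, and links are invented and this is not the authors' implementation): categories are connected by shared characteristics, and items browsed in one category pull recommendations from the linked categories.

```python
# Illustrative sketch of cross-category recommendation via linked
# categories ("category management" style). Categories, items, and
# links are invented for the example; not the paper's system.
from collections import defaultdict

# Category -> items available in that category
CATALOG = {
    "pets":            ["goldfish", "hamster"],
    "pet_food":        ["fish flakes", "hamster pellets"],
    "pet_accessories": ["fur brush", "nail clipper", "aquarium pump"],
}

# Categories linked by shared characteristics (the cross-category bridge)
LINKS = {
    "pets": ["pet_food", "pet_accessories"],
}


def recommend(clickstream, top_n=3):
    """Given the categories a visitor has browsed (implicitly logged),
    suggest items from categories linked to them."""
    scores = defaultdict(int)
    for category in clickstream:
        for linked in LINKS.get(category, []):
            for item in CATALOG.get(linked, []):
                scores[item] += 1
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:top_n]


if __name__ == "__main__":
    # Visitor browsed the "pets" category twice in this session
    print(recommend(["pets", "pets"]))
```

In a real system the link table would be learned or curated per product taxonomy, and scores would also reflect recency of clicks so that recommendations track drifting interests.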


Blood, 2016, Vol. 128 (22), pp. 3565-3565
Author(s): Adeel M. Khan, Alok A. Khorana

Abstract Background: Digital and surveillance epidemiology via internet search engine analysis has allowed for new insights into patients' and laypersons' health concerns and awareness. Google Trends in particular has become an increasingly well-published data resource as it automatically compiles all Google searches from 2004 to the present from internet users worldwide into an aggregated, passively-collected, and publicly viewable data site. Over the past few years, cancer awareness campaigns in the United States have greatly increased awareness of specific malignancies, such as breast and prostate. However, the hematologic malignancies have not had the same level of awareness, despite rising incidence rates for leukemia, lymphoma, and myeloma from 2004 to 2015. This study sought to examine patients' and laypersons' internet searches for terms related to the three major hematologic malignancies. Methods: Google Trends (www.google.com/trends) was accessed to obtain the relative search engine traffic values for terms related to the three major blood cancers (leukemia, lymphoma, and myeloma) from January 2004 to December 2015. These values are defined as search volume indices (SVIs) and are directly obtainable from Google Trends. For comparison, SVIs for the term "cancer" were also collected during the same time frame. Using standard Boolean operators, searches for "cancer" were operationalized as CANCER + CANCERS + MALIGNANCY + MALIGNANCIES + MALIGNANT, searches for "leukemia" as LEUKEMIA + LEUKEMIAS, searches for "lymphoma" as LYMPHOMA + LYMPHOMAS + "NON HODGKIN" + NON HODGKIN + "HODGKIN DISEASE" + "HODGKIN'S DISEASE", and searches for "myeloma" as MYELOMA + MYELOMAS. Trends in the respective SVIs were analyzed with Mann-Kendall trend tests and Sen's slope estimators in R (V3.3.1), similar to previously published Google Trends analyses. Results: Individual inspection of each search term revealed the average SVI for "cancer" was 76.4, for "leukemia" was 68.0, for "lymphoma" was 75.5, and for "myeloma" was 30.4 during the time frame 2004 to 2015. Simultaneous inspection across search terms revealed the SVIs for "cancer" far outweighed searches for "leukemia," "lymphoma," and "myeloma" combined (mean SVIs 76 vs. 4, 4, and 1 respectively). Mann-Kendall trend tests showed a statistically significant decrease in searches for "leukemia" (S = -6460.0, p < 0.001) and "lymphoma" (S = -6338, p < 0.001) over time. "Myeloma" searches (S = -1321, p = 0.02) and "cancer" searches (S = -2389.0, p < 0.005) also showed a statistically significant decrease. Sen's slope estimators showed the greatest decline for "leukemia" (Q = -0.18, 95% CI: -0.19 to -0.17) and "lymphoma" (Q = -0.21, 95% CI: -0.22 to -0.20) and lowest decline for "cancer" (Q = -0.05, 95% CI: -0.06 to -0.04) and "myeloma" (Q = -0.01, 95% CI: -0.02 to 0.00). Discussion: Searches for "leukemia" and "lymphoma" have sharply declined over the time period 2004 to 2015 in the United States. Searches for "myeloma" have remained stably low over time with marginal decrease. Overall, internet searches for the hematologic malignancies represent a very small fraction of total searches for "cancer." These data suggest a declining awareness for the major hematologic malignancies despite their rising incidences in the United States. Patient awareness may be increased with greater efforts toward disease-specific advocacy campaigns and public health endeavors. 
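For readers unfamiliar with the trend statistics used here, the sketch below computes the Mann-Kendall S statistic and Sen's slope estimator on a toy monthly series. It is illustrative only: the study's analysis was performed in R on Google Trends search volume indices, and the numbers below are simulated.

```python
# Illustrative computation of the Mann-Kendall S statistic and Sen's
# slope estimator on a toy series. The study's analysis was done in R
# on Google Trends search volume indices; the data below are fake.
import numpy as np


def mann_kendall_s(series):
    """S = sum over all pairs (i < j) of sign(x_j - x_i).
    A negative S indicates a downward trend."""
    x = np.asarray(series, dtype=float)
    s = 0
    for i in range(len(x) - 1):
        s += np.sign(x[i + 1:] - x[i]).sum()
    return int(s)


def sens_slope(series):
    """Median of the pairwise slopes (x_j - x_i) / (j - i)."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    slopes = [(x[j] - x[i]) / (j - i)
              for i in range(n - 1) for j in range(i + 1, n)]
    return float(np.median(slopes))


if __name__ == "__main__":
    # Simulated monthly search-volume index drifting downward with noise
    rng = np.random.default_rng(0)
    svi = 70 - 0.2 * np.arange(60) + rng.normal(0, 3, 60)
    print("Mann-Kendall S:", mann_kendall_s(svi))
    print("Sen's slope:", round(sens_slope(svi), 3))
```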
Figure 1. Individual Google Trends searches for the terms (in clockwise order) "cancer," "leukemia," "lymphoma," and "myeloma" in the United States from 2004 to 2015.
Figure 2. Simultaneous Google Trends search for "cancer," "leukemia," "lymphoma," and "myeloma" in the United States from 2004 to 2015.
Disclosures: Khorana: Halozyme: Consultancy, Honoraria; Amgen: Consultancy, Honoraria, Research Funding; Bayer: Consultancy, Honoraria; Sanofi: Consultancy, Honoraria; Leo: Consultancy, Honoraria, Research Funding; Pfizer: Consultancy, Honoraria; Roche: Consultancy, Honoraria; Janssen Scientific Affairs, LLC: Consultancy, Honoraria, Research Funding.


2015, Vol. 3, pp. 506-510
Author(s): Jakub Zilincan

Search engine optimization techniques, often shortened to “SEO,” should lead to first positions in organic search results. Some optimization techniques do not change over time, yet still form the basis for SEO. However, as the Internet and web design evolve dynamically, new optimization techniques flourish and flop. Thus, we looked at the most important factors that can help to improve positioning in search results. It is important to emphasize that none of the techniques can guarantee high ranking, because search engines use sophisticated algorithms that measure the quality of webpages and derive their position in search results from it. Next, we introduce and examine the object of the optimization, which is a particular website. This web site was created for the sole purpose of implementing and testing all the main SEO techniques. The main objective of this article was to determine whether search engine optimization increases the ranking of a website in search results and subsequently leads to higher traffic. This research question is supported by testing and verification of results. The last part of the article summarizes the research results and proposes further recommendations.

