Ethics in Health Informatics

2020
Vol 29 (01)
pp. 026-031
Author(s):  
Kenneth W. Goodman

Summary: Contemporary bioethics was fledged and is sustained by challenges posed by new technologies. These technologies have affected many lives, yet health informatics affects more lives than any of them. The challenges include the development of machine learning software and its appropriate uses and users, the balancing of privacy rights against the needs of public health and clinical practice in a time of Big Data analytics, whether and how to use this technology at all, and the role of ethics and standards in health policy. Historical antecedents in statistics and evidence-based practice foreshadow some of the difficulties now faced, but the scope and scale of these challenges require that ethics, too, be brought to scale in parallel, especially given the size of contemporary data sets and the processing power of new computers. Fortunately, applied ethics affords a variety of tools to help identify and rank applicable values, support best practices, and contribute to standards. The bioethics community can, in partnership with the informatics community, arrive at policies that promote the health sciences while reaffirming the many and varied rights that patients expect will be honored.

2017
pp. 83-99
Author(s):  
Sivamathi Chokkalingam
Vijayarani S.

The term Big Data refers to large-scale information management and analysis technologies that exceed the capability of traditional data processing technologies. Big Data is differentiated from traditional technologies in three ways: the volume, velocity, and variety of data. Big Data analytics is the process of analyzing large data sets that contain a variety of data types to uncover hidden patterns, unknown correlations, market trends, customer preferences, and other useful business information. Since Big Data is a new and emerging field, new technologies and algorithms for handling it need to be developed. The main objective of this paper is to survey the research challenges of Big Data analytics. The paper gives a brief overview of the main types of Big Data analytics and, for each type, describes its process steps, tools, and a banking application. Some of the research challenges of Big Data analytics, and possible solutions to them, are also discussed.
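For a concrete flavor of the banking applications the paper surveys, here is a minimal descriptive-analytics sketch in Python (not taken from the paper; the transactions table, its column names, and the 95th-percentile cutoff are assumptions made for illustration):

```python
import pandas as pd

# Hypothetical banking transactions (columns and values are made up).
transactions = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2, 3],
    "amount":      [120.0, 80.0, 4500.0, 60.0, 75.0, 310.0],
    "channel":     ["atm", "online", "online", "atm", "online", "branch"],
})

# Descriptive analytics: summarize spending per customer and channel.
summary = (transactions
           .groupby(["customer_id", "channel"])["amount"]
           .agg(["count", "sum", "mean"]))
print(summary)

# A crude stand-in for "uncovering hidden patterns": flag transactions
# far above the overall 95th percentile as candidates for review.
cutoff = transactions["amount"].quantile(0.95)
print(transactions[transactions["amount"] > cutoff])
```

A real deployment would run such aggregations on a distributed engine rather than a single pandas process, which is precisely the volume challenge the paper highlights.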


2021
Author(s):  
Beau Coker
Cynthia Rudin
Gary King

Inference is the process of using facts we know to learn about facts we do not know. A theory of inference gives assumptions necessary to get from the former to the latter, along with a definition for and summary of the resulting uncertainty. Any one theory of inference is neither right nor wrong but merely an axiom that may or may not be useful. Each of the many diverse theories of inference can be valuable for certain applications. However, no existing theory of inference addresses the tendency to choose, from the range of plausible data analysis specifications consistent with prior evidence, those that inadvertently favor one’s own hypotheses. Because the biases from these choices are a growing concern across scientific fields, and in a sense the reason the scientific community was invented in the first place, we introduce a new theory of inference designed to address this critical problem. We introduce hacking intervals, which are the range of a summary statistic one may obtain given a class of possible endogenous manipulations of the data. Hacking intervals require no appeal to hypothetical data sets drawn from imaginary superpopulations. A scientific result with a small hacking interval is more robust to researcher manipulation than one with a larger interval and is often easier to interpret than a classical confidence interval. Some versions of hacking intervals turn out to be equivalent to classical confidence intervals, which means they may also provide a more intuitive and potentially more useful interpretation of classical confidence intervals. This paper was accepted by J. George Shanthikumar, big data analytics.
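The definition lends itself to a brute-force illustration. The sketch below is ours, not the authors' algorithm: it takes "dropping up to two points as outliers" as the class of endogenous manipulations and reports the range of means attainable, i.e., a hacking interval for the mean. The data and the manipulation class are assumptions chosen for clarity:

```python
from itertools import combinations
from statistics import mean

# Toy measurements (made up for illustration).
data = [2.1, 2.4, 2.2, 8.9, 2.3, 2.0, 2.6, 2.5]

# Manipulation class: the analyst may exclude up to MAX_DROP points,
# e.g. by branding them "outliers". Enumerate every allowed exclusion
# and record the summary statistic (here, the mean) each one yields.
MAX_DROP = 2
results = []
for k in range(MAX_DROP + 1):
    for dropped in combinations(range(len(data)), k):
        kept = [x for i, x in enumerate(data) if i not in dropped]
        results.append(mean(kept))

# The hacking interval is the range of attainable values.
print(f"hacking interval for the mean: [{min(results):.3f}, {max(results):.3f}]")
```

A narrow interval here would mean no permitted exclusion moves the mean much, i.e., the reported result is robust to this family of researcher choices.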


Neurosurgery
2013
Vol 72 (suppl_1)
pp. A54-A62
Author(s):  
Paolo Ferroli
Giovanni Tringali
Francesco Acerbi
Marco Schiariti
Morgan Broggi
...  

Abstract During the past decades, medical applications of virtual reality technology have developed rapidly, growing from a research curiosity into a commercially and clinically important area of medical informatics and technology. With the aid of new technologies, the user is able to process large data sets to create accurate and almost realistic reconstructions of anatomic structures and related pathologies. As a result, a 3-dimensional (3-D) representation is obtained, and surgeons can explore the brain for planning or training. Further improvements, such as feedback systems, increase the interaction between users and models by creating a virtual environment. Its use for advanced 3-D planning in neurosurgery is described here. Different medical image volume-rendering systems have been used and analyzed for advanced 3-D planning: one is a commercial "ready-to-go" system (Dextroscope, Bracco, Volume Interaction, Singapore), whereas the others are open-source software packages (3D Slicer, FSL, and FreeSurfer). Neurosurgeons at our institution found that advanced 3-D planning before surgery facilitated and deepened their understanding of the complex anatomic and pathologic relationships of the lesion. They all agreed that the preoperative experience of virtually planning the approach was helpful during the operative procedure. Virtual reality for advanced 3-D planning in neurosurgery has achieved considerable realism as a result of the processing power of modern computers. Although it has been found useful for understanding complex anatomic relationships, further effort is needed to increase the quality of the interaction between the user and the model.
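3D Slicer, one of the open-source packages named above, is built on the VTK toolkit. As a rough indication of the volume-rendering step that underlies such 3-D planning, here is a minimal VTK sketch (ours, not the authors' pipeline; the DICOM directory and the transfer-function breakpoints are placeholder assumptions, not clinically tuned values):

```python
import vtk

# Read a CT/MR DICOM series (directory path is a placeholder).
reader = vtk.vtkDICOMImageReader()
reader.SetDirectoryName("path/to/dicom_series")
reader.Update()

# Transfer functions map voxel intensity to color and opacity.
color = vtk.vtkColorTransferFunction()
color.AddRGBPoint(0, 0.0, 0.0, 0.0)
color.AddRGBPoint(500, 1.0, 0.5, 0.3)
color.AddRGBPoint(1150, 1.0, 1.0, 0.9)

opacity = vtk.vtkPiecewiseFunction()
opacity.AddPoint(0, 0.0)
opacity.AddPoint(500, 0.15)
opacity.AddPoint(1150, 0.85)

prop = vtk.vtkVolumeProperty()
prop.SetColor(color)
prop.SetScalarOpacity(opacity)
prop.ShadeOn()
prop.SetInterpolationTypeToLinear()

mapper = vtk.vtkSmartVolumeMapper()
mapper.SetInputConnection(reader.GetOutputPort())

volume = vtk.vtkVolume()
volume.SetMapper(mapper)
volume.SetProperty(prop)

# Standard VTK render/interact loop for exploring the reconstruction.
renderer = vtk.vtkRenderer()
renderer.AddVolume(volume)
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)
window.Render()
interactor.Start()
```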


10.28945/3907
2017
Vol 2
pp. 001-030
Author(s):  
Jay Hoecker

After 28 years of public service, the Information Technology Bureau Chief for the Southwest Florida Water Management District (SWFWMD), Dr. Steven Dicks, was still passionate about serving the public. By first focusing on the needs of people in the District, Dr. Dicks was able to objectively observe the many aspects of operations for which he was responsible. His observations led him to believe that the use of technology within his organization wasn't as effective and efficient as he knew it could be. The SWFWMD managed the water resources of 16 Florida counties in an area of close to 10,000 square miles with 4.7 million inhabitants. To manage water resources effectively, the organization needed to understand current water needs, prepare for future needs, and protect and preserve water resources within its boundaries. A significant part of managing water resources involved producing and using scientific computer models that help track, predict, and control a plethora of water-related challenges. At the SWFWMD, individuals used desktop computers to run most of the scientific models. Given the limited computing capacity of the average desktop computer, running a model was beginning to take too much time, and simple interruptions, such as a system reboot, could jeopardize the ability to complete long model runs. On top of that, the data these models used and produced was often managed ineffectively. Dr. Dicks began to wonder about the current system of producing and managing scientific models, and how it might be improved. He evaluated his options. Upgrading individual desktops could help, but only in the short term, because as the data sets and demand for processing power continued to grow, the desktop might always be a step behind. Installing powerful servers to house the data sets and run the models would be a significant improvement, but the cost to acquire and maintain the new system might challenge the budget. A cloud-based solution using an "infrastructure as a service" approach was a third option, but compatibility with the current system infrastructure, security, and access needed to be carefully evaluated. Dr. Dicks believed that data management was just as important as the processing power of an upgrade, and that, ultimately, the technology that allowed for the most effective EDM system needed to be identified and implemented to best serve the needs of the District into the future.
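One concrete mitigation for the interrupted-run problem the case describes is periodic checkpointing, so that a reboot costs at most the work done since the last checkpoint. The sketch below is a generic illustration under stated assumptions: the state dictionary, step function, and file path are placeholders, not the District's actual models.

```python
import os
import pickle

CHECKPOINT = "model_run.ckpt"  # placeholder path

def initial_state():
    return {"step_output": 0.0}  # placeholder model state

def advance(state):
    state["step_output"] += 0.001  # stand-in for one simulation time step
    return state

def run_model(total_steps=1_000_000, every=10_000):
    # Resume from the last checkpoint if a previous run was interrupted.
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            step, state = pickle.load(f)
    else:
        step, state = 0, initial_state()

    while step < total_steps:
        state = advance(state)
        step += 1
        if step % every == 0:
            # Write to a temp file and rename atomically, so a reboot
            # mid-write cannot corrupt the checkpoint.
            with open(CHECKPOINT + ".tmp", "wb") as f:
                pickle.dump((step, state), f)
            os.replace(CHECKPOINT + ".tmp", CHECKPOINT)
    return state

if __name__ == "__main__":
    run_model()
```

The pattern is orthogonal to where the compute lives: desktop, server, and cloud runs all benefit from it.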


Vamping the Stage is the first book-length historical and comparative examination of women, modernity, and popular music in Asia. This book documents the many ways that women performers have supported, challenged, and undermined representations of existing gendered norms in the entertainment industries of China, Japan, India, Indonesia, Iran, Korea, Malaysia, and the Philippines. The case studies in this volume address colonial, post-colonial, as well as late modern conditions of culture as they relate to women’s musical practices and their changing social and cultural identities throughout Asia. Female entertainers were artistic pioneers of new music, new cinema, new forms of dance and theater, and new behavior and morals. Their voices, mediated through new technologies of film, radio, and the phonograph, changed the soundscape of global popular music and resonate today in all spheres of modern life. These female performers were not merely symbols of times that were rapidly changing. They were active agents in the creation of local performance cultures and the rise of a region-wide and globally oriented entertainment industry. Placing women’s voices in social and historical contexts, the authors critically analyze salient discourses, representations, meanings, and politics of “voice” in Asian popular music of the 20th century to the present day.


Author(s):  
Yuancheng Li
Yaqi Cui
Xiaolong Zhang

Background: Advanced Metering Infrastructure (AMI) for the smart grid is growing rapidly, which results in exponential growth of the data collected and transmitted by these devices. Clustering these data can give the electricity company a better understanding of the personalized and differentiated needs of its users. Objective: Existing clustering algorithms for processing such data generally suffer from problems such as insufficient data utilization, high computational complexity, and low accuracy of behavior recognition. Methods: In order to improve clustering accuracy, this paper proposes a new clustering method based on users' electrical behavior. Starting from an analysis of user load characteristics, samples of user electricity data were constructed. Daily load characteristic curves were extracted through an improved extreme learning machine clustering algorithm and effective index criteria. Moreover, clustering analysis was carried out for different users from industrial, commercial, and residential areas. The improved algorithm, called Unsupervised Extreme Learning Machine (US-ELM), is an extension of the original Extreme Learning Machine (ELM) that performs unsupervised clustering on top of the original ELM. Results: Experiments on four different data sets, implemented in MATLAB, compared the method with other commonly used clustering algorithms. The results show that the US-ELM algorithm achieves higher accuracy on power-consumption data. Conclusion: The unsupervised ELM algorithm can greatly reduce time consumption and improve the effectiveness of clustering.
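To make the pipeline concrete, here is a simplified sketch of the ELM-then-cluster idea (ours, for illustration: synthetic load curves stand in for real AMI data, and the manifold-regularized embedding that distinguishes full US-ELM from plain ELM is omitted for brevity):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic daily load curves: 24 hourly readings per user.
X = np.vstack([
    rng.normal(1.0, 0.1, (30, 24)),   # residential-like profiles
    rng.normal(3.0, 0.3, (30, 24)),   # commercial-like profiles
])

# ELM-style hidden layer: weights are random and stay fixed.
n_hidden = 64
W = rng.normal(size=(X.shape[1], n_hidden))
b = rng.normal(size=n_hidden)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # sigmoid hidden representation

# Cluster in the hidden-feature space; full US-ELM would first apply
# a graph-Laplacian embedding of H before k-means.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(H)
print(labels)
```

Because the hidden weights are never trained, the expensive step is a single matrix product, which is where ELM methods get their speed.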


Author(s):  
Petra Molnar

This chapter focuses on how technologies used in the management of migration—such as automated decision-making in immigration and refugee applications and artificial intelligence (AI) lie detectors—impinge on human rights with little international regulation, arguing that this lack of regulation is deliberate, as states single out the migrant population as a viable testing ground for new technologies. Making migrants more trackable and intelligible justifies the use of more technology and data collection under the guise of national security, or even under tropes of humanitarianism and development. Technology is not inherently democratic, and human rights impacts are particularly important to consider in humanitarian and forced-migration contexts. An international human rights law framework is particularly useful for codifying and recognizing potential harms, because technology and its development are inherently global and transnational. Ultimately, more oversight and issue-specific accountability mechanisms are needed to safeguard the fundamental rights of migrants, such as freedom from discrimination and privacy rights, as well as procedural justice safeguards, such as the right to a fair decision maker and rights of appeal.


2021
Vol 29
pp. 115-124
Author(s):  
Xinlu Wang
Ahmed A.F. Saif
Dayou Liu
Yungang Zhu
Jon Atli Benediktsson

BACKGROUND: DNA sequence alignment is one of the most fundamental and important operations for identifying which gene family may contain a given sequence; pattern matching over DNA sequences has long been a fundamental problem in biomedical engineering, biotechnology, and health informatics. OBJECTIVE: To address this problem, this study proposes an optimal multi-pattern matching algorithm with wildcards for DNA sequences. METHODS: The proposed method packs the patterns and a sliding window of text into machine words; the window slides along the packed text and is matched against the stored packed patterns. RESULTS: Three data sets were used to test the performance of the proposed algorithm, which proved more efficient than competing methods because its operations map directly onto word-level machine instructions. CONCLUSIONS: Theoretical analysis and experimental results both demonstrate that the proposed method outperforms state-of-the-art methods and is especially effective for DNA sequences.
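The "close to machine language" claim refers to bit-parallel matching, where one machine word tracks many partial matches at once. Below is a standard Shift-And sketch with single-character wildcards ('?'), in the same spirit as the packed matching the abstract describes, though not the authors' exact algorithm:

```python
# Bit-parallel Shift-And matching with single-character wildcards.
def shift_and_wildcard(text, pattern, wildcard="?"):
    m = len(pattern)
    # One bitmask per symbol: bit i is set if pattern position i
    # accepts that symbol (a wildcard position accepts everything).
    masks = {c: 0 for c in "ACGT"}
    for i, p in enumerate(pattern):
        for c in masks:
            if p == c or p == wildcard:
                masks[c] |= 1 << i

    hits, state = [], 0
    for j, c in enumerate(text):
        # Shift in a fresh "match starts here" bit, then keep only the
        # prefixes that the current character extends.
        state = ((state << 1) | 1) & masks.get(c, 0)
        if state & (1 << (m - 1)):      # the last pattern position matched
            hits.append(j - m + 1)      # start index of this occurrence
    return hits

print(shift_and_wildcard("ACGTACGTGACGT", "AC?T"))  # -> [0, 4, 9]
```

Because the inner loop is a handful of shifts, ORs, and ANDs on machine words, throughput tracks the hardware word size, which is where the reported efficiency comes from.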

