Questions from a Contraceptive Pill Junkie: Applying Human Psychometrics to Investigate Gender Bias in Machine Learning

2021 ◽  
Author(s):  
◽  
Hazel Darney

With the rapid uptake of machine learning-based artificial intelligence in our daily lives, we are beginning to realise the risks involved in implementing this technology in high-stakes decision making. This risk arises because machine learning decisions are based on human-curated datasets, meaning these decisions are not bias-free. Machine learning datasets put women at a disadvantage due to factors including (but not limited to) the historical exclusion of women from data collection, research, and design, as well as the low participation of women in artificial intelligence fields. These factors mean that applications of machine learning may fail to treat the needs and experiences of women as equal to those of men.

Research into gender biases in machine learning has largely occurred within the computer science field. This has frequently resulted in research where bias is inconsistently defined and the proposed techniques do not engage with relevant literature outside the artificial intelligence field. This research proposes a novel, interdisciplinary approach to the measurement and validation of gender biases in machine learning: it translates methods of human-focused gender bias measurement from psychology, forming a gender bias questionnaire for use on a machine rather than a human.

The final system produced by this research, as a proof of concept, demonstrates the potential of this new approach to gender bias investigation. The system takes advantage of the qualitative nature of language to provide a new way of understanding gender biases in data by outputting both quantitative and qualitative results. These results can then be meaningfully translated into their real-world implications.
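A minimal, hypothetical sketch of the general idea (not the thesis's actual questionnaire or scoring system): questionnaire-style items are put to a masked language model via the Hugging Face transformers library, and both the numeric scores and the completed sentences are recorded, giving quantitative and qualitative outputs. The model choice, item templates, and pronoun targets below are illustrative assumptions.

```python
# Minimal sketch (not the thesis's actual system): administering
# questionnaire-style items to a masked language model and recording
# both quantitative scores and qualitative completions.
# Assumes the Hugging Face `transformers` library; the templates and
# scoring scheme are illustrative placeholders.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# Hypothetical questionnaire items adapted from human bias measures:
# each template asks the model to fill in a pronoun.
items = [
    "The doctor said [MASK] would review the test results.",
    "The nurse said [MASK] would bring the medication.",
    "The engineer explained that [MASK] designed the bridge.",
]

for item in items:
    # Quantitative output: probability assigned to each pronoun.
    scores = {r["token_str"]: r["score"] for r in fill(item, targets=["he", "she"])}
    ratio = scores.get("he", 0.0) / max(scores.get("she", 0.0), 1e-9)
    # Qualitative output: the completed sentences themselves can be read
    # and interpreted in context, as the abstract suggests.
    print(f"{item}\n  he={scores.get('he', 0.0):.3f} she={scores.get('she', 0.0):.3f} ratio={ratio:.2f}")
```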



2020 ◽  
Vol 6 ◽  
pp. 237802312096717
Author(s):  
Carsten Schwemmer ◽  
Carly Knight ◽  
Emily D. Bello-Pardo ◽  
Stan Oklobdzija ◽  
Martijn Schoonvelde ◽  
...  

Image recognition systems offer the promise of learning from images at scale without requiring expert knowledge. However, past research suggests that machine learning systems often produce biased output. In this article, we evaluate potential gender biases of commercial image recognition platforms using photographs of U.S. members of Congress and a large number of Twitter images posted by these politicians. Our crowdsourced validation shows that commercial image recognition systems can produce labels that are simultaneously correct and biased, because they selectively report a subset of the many possible true labels. We find that images of women received three times more annotations related to physical appearance. Moreover, women in images are recognized at substantially lower rates than men. We discuss how encoded biases such as these affect the visibility of women, reinforce harmful gender stereotypes, and limit the validity of the insights that can be gathered from such data.
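As an illustration of the kind of tally described above (not the authors' actual data or label taxonomy), the following sketch counts appearance-related annotations by the gender of the person depicted; the label records and keyword set are hypothetical placeholders.

```python
# Illustrative sketch only: tallying appearance-related labels by the
# depicted person's gender, in the spirit of the analysis described
# above. The label data and keyword list are invented placeholders.
from collections import Counter

# Hypothetical output of an image recognition API: gender of the person
# depicted (from ground-truth metadata) plus the labels returned.
labeled_images = [
    {"gender": "female", "labels": ["smile", "hairstyle", "official", "spokesperson"]},
    {"gender": "male",   "labels": ["official", "spokesperson", "suit"]},
    {"gender": "female", "labels": ["beauty", "fashion", "event"]},
]

APPEARANCE_TERMS = {"smile", "hairstyle", "beauty", "fashion", "suit"}

appearance = Counter()
totals = Counter()
for img in labeled_images:
    totals[img["gender"]] += len(img["labels"])
    appearance[img["gender"]] += sum(label in APPEARANCE_TERMS for label in img["labels"])

for gender in ("female", "male"):
    share = appearance[gender] / totals[gender] if totals[gender] else 0.0
    print(f"{gender}: {appearance[gender]}/{totals[gender]} appearance-related labels ({share:.0%})")
```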


2021 ◽  
Author(s):  
Bongs Lainjo

Background: Information technology has continued to shape contemporary thematic trends. Advances in communication have impacted almost all themes, ranging from education and engineering to healthcare and many other aspects of our daily lives.
Method: This paper reviews the different dynamics of thematic IoT platforms. A select number of themes are extensively analyzed, with emphasis on data mining (DM), personalized healthcare (PHC), and the thematic trends of a select number of subjectively identified IoT-related publications over four years. In this paper, the number of IoT-related publications is used as a proxy for the number of apps. DM remains the trailblazer, serving as a theme with crosscutting qualities that drive artificial intelligence (AI), machine learning (ML), and data transformation. A case study in PHC illustrates the importance, complexity, productivity optimization, and nuances contributing to a successful IoT platform. Among the initial 99 IoT themes, 18 are extensively analyzed using the number of IoT publications, to demonstrate a combination of different thematic dynamics, including the subtleties that drive the growth of particular publication themes.
Results: Across the 99 themes, the annual median number of IoT-related publications rose steadily over the four years, from 5510 in 2016 to 8930 in 2017, 11700 in 2018, and 14800 in 2019, indicating an upbeat prognosis for IoT dynamics.
Conclusion: The vulnerabilities that come with implementing IoT systems are highlighted, along with the successes currently achieved by institutions promoting the benefits of IoT-related systems, such as the case study. Security continues to be an issue of significant importance.
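A small sketch of the summary statistic reported in the results: the annual median of per-theme publication counts. The per-theme counts below are invented placeholders, not the paper's data.

```python
# Sketch of the reported summary statistic: the median of per-theme
# IoT publication counts for each year. The counts are hypothetical.
from statistics import median

# Hypothetical publication counts: {theme: {year: count}}
theme_counts = {
    "data mining":             {2016: 7200, 2017: 10100, 2018: 13500, 2019: 16800},
    "personalized healthcare": {2016: 5100, 2017:  8600, 2018: 11200, 2019: 14100},
    "smart cities":            {2016: 4300, 2017:  7900, 2018: 10400, 2019: 13600},
}

for year in (2016, 2017, 2018, 2019):
    annual_median = median(counts[year] for counts in theme_counts.values())
    print(year, annual_median)
```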


Author(s):  
Bistra Konstantinova Vassileva

In recent years, artificial intelligence (AI) has gained attention from policymakers, universities, researchers, companies and businesses, media, and the wider public. The growing importance and relevance of AI to humanity is undisputed: AI assistants and recommendations, for instance, are increasingly embedded in our daily lives. The chapter starts with a critical review of AI definitions, since terms such as “artificial intelligence,” “machine learning,” and “data science” are often used interchangeably, yet they are not the same. The first section begins with AI capabilities and AI research clusters, and a basic categorisation of AI is presented as well. The increasing societal relevance of AI and its rising, though sometimes controversial, presence in our daily lives are discussed in the second section. The chapter ends with conclusions and recommendations aimed at the future development of AI in a responsible manner.


2018 ◽  
Vol 42 (3) ◽  
pp. 343-354 ◽  
Author(s):  
Mike Thelwall

Purpose: The purpose of this paper is to investigate whether machine learning induces gender biases in the sense of results that are more accurate for male authors or for female authors. It also investigates whether training separate male and female variants could improve the accuracy of machine learning for sentiment analysis.
Design/methodology/approach: This paper uses three ratings-balanced sets of reviews of restaurants and hotels to train algorithms with and without gender selection.
Findings: Accuracy is higher on female-authored reviews than on male-authored reviews for all data sets, so applications of sentiment analysis using mixed-gender data sets will overrepresent the opinions of women. Training on same-gender data improves performance less than having additional data from both genders.
Practical implications: End users of sentiment analysis should be aware that its small gender biases can affect the conclusions drawn from it and apply correction factors when necessary. Users of systems that incorporate sentiment analysis should be aware that performance will vary by author gender. Developers do not need to create gender-specific algorithms unless they have more training data than their system can cope with.
Originality/value: This is the first demonstration of gender bias in machine learning sentiment analysis.
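A minimal sketch, not the paper's exact pipeline, of how accuracy can be compared across author gender for a lexical sentiment classifier. It assumes scikit-learn, and the toy reviews stand in for the ratings-balanced restaurant and hotel sets described above.

```python
# Minimal sketch (not the paper's exact setup): train a lexical sentiment
# classifier on a mixed-gender review set and compare accuracy on
# female- versus male-authored reviews. The toy data is a placeholder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical records: (review text, sentiment label, author gender).
reviews = [
    ("Lovely room and wonderful staff", 1, "f"),
    ("The food was cold and bland", 0, "f"),
    ("Great location, friendly service", 1, "m"),
    ("Noisy, dirty and overpriced", 0, "m"),
] * 50  # repeat the toy examples so the split has enough data

texts, labels, genders = map(list, zip(*reviews))
X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    texts, labels, genders, test_size=0.3, random_state=0)

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(X_train), y_train)
pred = clf.predict(vec.transform(X_test))

# Report accuracy separately for each author gender.
for gender in ("f", "m"):
    idx = [i for i, g in enumerate(g_test) if g == gender]
    acc = accuracy_score([y_test[i] for i in idx], [pred[i] for i in idx])
    print(f"accuracy on {gender}-authored reviews: {acc:.2f}")
```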


Author(s):  
Andrew Best ◽  
Samantha F. Warta ◽  
Katelynn A. Kapalo ◽  
Stephen M. Fiore

Using research in social cognition as a foundation, we studied rapid versus reflective mental state attributions and the degree to which machine learning classifiers can be trained to make such judgments. We observed differences in response times between conditions, but did not find significant differences in the accuracy of mental state attributions. We additionally demonstrate how to train machine classifiers to identify mental states. We discuss advantages of using an interdisciplinary approach to understand and improve human-robot interaction and to further the development of social cognition in artificial intelligence.
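As a rough illustration of training machine classifiers to identify mental states (the study's actual stimuli and feature set are not described here), the sketch below fits a classifier to hypothetical coded cue features and cross-validates it; every value in it is a placeholder assumption.

```python
# Illustrative sketch only: a classifier that labels mental states from
# numeric features, in the spirit of the machine classifiers mentioned
# above. Features and labels are invented placeholders.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature vectors (e.g., coded facial/contextual cues) and
# mental state labels for each observation.
X = [
    [0.90, 0.10, 0.30], [0.80, 0.20, 0.40], [0.20, 0.90, 0.70],
    [0.10, 0.80, 0.60], [0.85, 0.15, 0.35], [0.15, 0.85, 0.65],
] * 10
y = ["happy", "happy", "worried", "worried", "happy", "worried"] * 10

clf = RandomForestClassifier(random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("mean cross-validated accuracy:", scores.mean())
```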


PeerJ ◽  
2021 ◽  
Vol 9 ◽  
pp. e11451
Author(s):  
Ruiyang Ren ◽  
Haozhe Luo ◽  
Chongying Su ◽  
Yang Yao ◽  
Wen Liao

Artificial intelligence has been emerging as an increasingly important part of our daily lives and is widely applied in medical science. One major application of artificial intelligence in medical science is medical imaging. As a major component of artificial intelligence, many machine learning models are applied in medical diagnosis and treatment as technology and medical imaging facilities advance. The popularity of convolutional neural networks in dental, oral and craniofacial imaging is growing, as they are continually applied to a broader spectrum of scientific studies. Our manuscript reviews the fundamental principles and rationales behind machine learning, and summarizes its research progress and recent applications specifically in dental, oral and craniofacial imaging. It also reviews the problems that remain to be resolved and evaluates the prospects for future development of this field of scientific study.
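For readers unfamiliar with the architecture, here is a minimal sketch, assuming TensorFlow/Keras, of the kind of convolutional neural network applied to 2D imaging tasks such as radiograph classification. The layer sizes, input shape, and class labels are placeholders rather than a clinically validated model.

```python
# A minimal CNN sketch for 2D image classification (e.g., labelling a
# radiograph patch as normal vs. abnormal). Assumes TensorFlow/Keras;
# all shapes and hyperparameters are illustrative placeholders.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(128, 128, 1)),       # grayscale radiograph patch
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(2, activation="softmax"),   # e.g., normal vs. abnormal
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(images, labels, epochs=10)  # with a labelled imaging dataset
```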


Author(s):  
Hooi Kun Lee ◽  
Abdul Rafiez Abdul Raziff

The value of play has mainly stayed consistent throughout time. Playing is, without a doubt, one of the essential things we can do. In addition to supporting motor, neurological, and social development, playing improves adaptation by encouraging people to explore diverse perspectives on the world and assisting them in developing methods for dealing with problems in a safe setting. The way we play and what we play with have been heavily affected by the quickly evolving technology shaping our daily lives. Artificial intelligence (A.I.) is now found in many products, including vehicles, phones, and vacuum cleaners. This extends to children's items, with the creation of an "Internet of Toys": many learning, remote control, and app-integrated toys are innovative playthings that employ speech recognition and machine learning to communicate with users. This study examines the impact of technology adoption on the success and failure of two toy companies, Hasbro, Inc. and Toys R Us, Inc. The research methodology is based on case studies in which the two companies are compared across several areas. The findings of the study indicate that corporations that evolve in step with technological change will continue to grow in the market, whereas corporations that fail to adopt digital transformation will be forced out of the market.


2019 ◽  
Author(s):  
James A Smith ◽  
Roxanna E Abhari ◽  
Zain U Hussain ◽  
Carl Heneghan ◽  
Gary S Collins ◽  
...  

Objectives: To determine the extent and disclosure of financial ties to industry and the use of scientific evidence in comments on a US Food and Drug Administration (FDA) regulatory framework for modifications to artificial intelligence/machine learning (AI/ML)-based software as a medical device (SaMD).
Design: Cross-sectional study.
Setting: We searched all publicly available comments on the FDA "Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) - Discussion Paper and Request for Feedback" from April 2nd 2019 to August 8th 2019.
Main outcome measures: The proportion of comments submitted by parties with financial ties to industry, disclosing those ties, citing scientific articles, citing systematic reviews and meta-analyses, and using a systematic process to identify relevant literature.
Results: We analysed 125 comments submitted on the proposed framework. 79 (63%) comments came from parties with financial ties; for 36 (29%) comments it was not clear, and the absence of financial ties could only be confirmed for 10 (8%) comments. No financial ties were disclosed in any of the comments that were not from industry submitters. The vast majority of submitted comments (86%) did not cite any scientific literature, just 4% cited a systematic review or meta-analysis, and no comments indicated that a systematic process was used to identify relevant literature.
Conclusions: Financial ties to industry were common and undisclosed, and scientific evidence, including systematic reviews and meta-analyses, was rarely cited. To ensure regulatory frameworks best serve patient interests, the FDA should mandate disclosure of potential conflicts of interest (including financial ties) in comments, encourage the use of scientific evidence, and encourage engagement from non-conflicted parties.
Strengths and limitations of this study:
- We analysed the extent of financial ties to industry and the use of scientific evidence in comments on the proposed FDA framework.
- We used a comprehensive strategy to attempt to identify financial ties to industry.
- Readers may be able to contribute higher quality comments to subsequent drafts of this framework.
- There is heterogeneity in the degree of conflict with respect to the framework that the recorded financial ties may represent; some ties will be more likely than others to result in biased commenting.
- Because the framework could not be classified as pro-industry or not, we did not classify the direction of opinions expressed in comments with respect to the framework and their association with financial ties.
- We do not know how information submitted to the FDA is used internally in the rule-making process.


2021 ◽  
Vol 5 (12) ◽  
pp. 78
Author(s):  
Hebitz C. H. Lau ◽  
Jeffrey C. F. Ho

This study presents a co-design project that invites participants with little or no background in artificial intelligence (AI) and machine learning (ML) to design their ideal virtual assistants (VAs) for everyday use. VAs are designed and function differently when integrated into people’s daily lives (e.g., voice-controlled VAs are designed to blend in through their natural qualities). To further understand users’ ideas of their ideal VA designs, participants were invited to generate designs of personal VAs. However, end users may have unrealistic expectations of future technologies. Therefore, design fiction was adopted as a method of guiding the participants’ image of the future and carefully managing their realistic, as well as unrealistic, expectations of future technologies. The results suggest the need for a human–AI relationship based on controls with various dimensions (e.g., degree of vocalness and level of autonomy) rather than on specific features. The design insights are discussed in detail. Additionally, the co-design process offers insights into how users can participate in AI/ML designs.

