Artificial Intelligence and Machine Learning Derived Efficiencies for Large‐Scale Survey Estimation Efforts

2020
pp. 561-595
Author(s):  
Steven B. Cohen ◽  
Jamie Shorey
2020
Author(s):  
Jon Green ◽  
Matthew Baum ◽  
James Druckman ◽  
David Lazer ◽  
Katherine Ognyanova ◽  
...  

An individual's issue preferences are non-separable when they depend on other issue outcomes (Lacy 2001a), presenting measurement challenges for traditional survey research. We extend this logic to the broader case of conditional preferences, in which policy preferences depend on the status of conditions that carry inherent uncertainty and are not necessarily policies themselves. We demonstrate new approaches for measuring conditional preferences in two large-scale survey experiments regarding the conditions under which citizens would support reopening schools in their communities during the COVID-19 pandemic. Drawing on recently developed methods at the intersection of machine learning and causal inference, we identify which citizens are most likely to have school reopening preferences that depend on additional considerations. The results highlight the advantages of using such approaches to measure conditional preferences, which represent an underappreciated and general phenomenon in public opinion.
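The abstract does not name a specific estimator, so the sketch below is only a hedged illustration of the general approach it describes: using a machine-learning estimate of heterogeneous (conditional average) treatment effects from a survey experiment to flag respondents whose support appears most conditional. The T-learner, the random-forest models, and all variable names and simulated data are assumptions for illustration, not the authors' method.

```python
# Illustrative T-learner sketch for conditional average treatment effects (CATE).
# NOT the authors' estimator; it only illustrates combining machine learning with
# causal inference to find respondents whose support for reopening depends on an
# experimentally varied condition.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical survey-experiment data: X = respondent covariates,
# t = randomized condition (e.g., a stated infection-rate scenario),
# y = support for reopening schools (0-100 scale).
n, p = 5000, 8
X = rng.normal(size=(n, p))
t = rng.integers(0, 2, size=n)
y = 50 + 5 * X[:, 0] - 10 * t * (X[:, 1] > 0) + rng.normal(scale=5, size=n)

# Fit separate outcome models for the treated and control arms.
mu1 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[t == 1], y[t == 1])
mu0 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[t == 0], y[t == 0])

# Estimated CATE: how much each respondent's support shifts with the condition.
cate = mu1.predict(X) - mu0.predict(X)

# Respondents with large |CATE| are those whose preferences appear most conditional.
most_conditional = np.argsort(-np.abs(cate))[:10]
print(np.round(cate[most_conditional], 1))
```

In practice, honest estimators such as causal forests with cross-fitting would be preferable to this naive split-and-predict sketch; the snippet only conveys the intuition.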


Author(s):  
P. Alison Paprica ◽  
Frank Sullivan ◽  
Yin Aphinyanaphongs ◽  
Garth Gibson

Many health systems and research institutes are interested in supplementing their traditional analyses of linked data with machine learning (ML) and other artificial intelligence (AI) methods and tools. However, the availability of individuals who have the required skills to develop and/or implement ML/AI is a constraint, as there is high demand for ML/AI talent in many sectors. The three organizations presenting are all actively involved in training and capacity building for ML/AI broadly, and each has a focus on, and/or discrete initiatives for, particular trainees.

P. Alison Paprica, Vector Institute for Artificial Intelligence, Institute for Clinical Evaluative Sciences, University of Toronto, Canada. Alison is VP, Health Strategy and Partnerships at Vector, responsible for health strategy and also playing a lead role in "1000AIMs", a Vector-led initiative in support of the Province of Ontario's $30 million investment to increase the number of AI-related master's program graduates to 1,000 per year within five years.

Frank Sullivan, University of St Andrews, Scotland. Frank is a family physician and an associate director of HDRUK@Scotland. Health Data Research UK (https://hdruk.ac.uk/) has recently provided funding to six sites across the UK to address challenging healthcare issues through the use of data science. A doctoral training scheme in AI with 50 PhD students has also been announced. Each site works in close partnership with National Health Service bodies and the public to translate research findings into benefits for patients and populations.

Yin Aphinyanaphongs, INTREPID NYU clinical training program for incoming clinical fellows. Yin is the Director of the Clinical Informatics Training Program at NYU Langone Health. He is deeply interested in the intersection of computer science and health care, and as both a physician and a scientist he has a unique perspective on how to train medical professionals for a data-driven world. One version of this teaching process is demonstrated in the INTREPID clinical training program, in which Yin teaches clinicians to work with large-scale data within the R environment and to generate hypotheses and insights.

The session will begin with three brief presentations, followed by a facilitated session in which all participants share their insights about the essential skills and competencies required for different kinds of ML/AI applications and contributions. Live polling and voting will be used at the end of the session to capture participants' views on the key learnings and take-away points. The intended outputs and outcomes of the session are: (1) participants will have a better understanding of the skills and competencies required for individuals to contribute to AI applications in health in various ways; (2) participants will gain knowledge about different options for capacity building, from targeted enhancement of the skills of clinical fellows, to producing large numbers of applied master's graduates, to doctoral-level training; and (3) after the session, the co-leads will work together to create a resource that summarizes the learnings from the session and make it public (through publication in a peer-reviewed journal and/or the IPDLN website).


2022
Vol 14 (2)
pp. 1-15
Author(s):  
Lara Mauri ◽  
Ernesto Damiani

Large-scale adoption of Artificial Intelligence and Machine Learning (AI-ML) models fed by heterogeneous, possibly untrustworthy data sources has spurred interest in estimating the degradation of such models due to spurious, adversarial, or low-quality data assets. We propose a quantitative estimate of the severity of classifiers' training set degradation: an index expressing the deformation of the convex hulls of the classes, computed on a held-out dataset generated via an unsupervised technique. We show that our index is computationally light, can be calculated incrementally, and complements existing quality measures for ML data assets. As an experiment, we present the computation of our index on a benchmark convolutional image classifier.
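The exact definition of the index is given in the paper; the following is only a minimal sketch, under assumed choices (a PCA projection and the mean relative change in hull volume), of how a convex-hull-based degradation signal could be computed per class on a held-out set.

```python
# Sketch of a convex-hull-based degradation signal (assumed form, not the paper's index).
import numpy as np
from scipy.spatial import ConvexHull
from sklearn.decomposition import PCA

def fit_projector(features, dims=2):
    # Project to a low-dimensional space so convex hulls stay cheap to compute.
    return PCA(n_components=dims).fit(features)

def class_hull_volumes(projector, features, labels):
    # Per-class convex hull volume (area in 2D) in the projected space.
    Z = projector.transform(features)
    vols = {}
    for c in np.unique(labels):
        pts = Z[labels == c]
        if len(pts) > Z.shape[1]:  # a hull needs at least dims + 1 points
            vols[c] = ConvexHull(pts).volume
    return vols

def deformation_index(baseline, degraded):
    # Mean relative change in per-class hull volume across classes.
    common = set(baseline) & set(degraded)
    return float(np.mean([abs(degraded[c] - baseline[c]) / baseline[c] for c in common]))

# Tiny usage example on synthetic features standing in for a held-out dataset.
rng = np.random.default_rng(0)
feats_clean = rng.normal(size=(600, 16))
labels = rng.integers(0, 3, size=600)
feats_shifted = feats_clean + rng.normal(scale=0.5, size=feats_clean.shape)

proj = fit_projector(feats_clean)
idx = deformation_index(class_hull_volumes(proj, feats_clean, labels),
                        class_hull_volumes(proj, feats_shifted, labels))
print(f"deformation index: {idx:.3f}")
```

Because each class contributes one hull volume, the per-class terms can be updated independently, which is consistent with the incremental computation the abstract mentions.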


Author(s):  
M. Stashevskaya

The article studies existing views on the economic content of big data. From among these views, in which authors define big data, three approaches are formulated: descriptive-model, utility-digital, and complex-technological. Against the background of the large-scale spread of digital technologies (machine learning, cloud computing, artificial intelligence, augmented and virtual reality, etc.), which function thanks to big data, the study of its economic essence is becoming especially relevant. As a result, it was found that big data form the basis of economic activity in the digital economy. A definition of big data as a resource of the digital economy is proposed.


2020
pp. bjophthalmol-2020-316594
Author(s):  
Peter Heydon ◽  
Catherine Egan ◽  
Louis Bolter ◽  
Ryan Chambers ◽  
John Anderson ◽  
...  

Background/aims: Human grading of digital images from diabetic retinopathy (DR) screening programmes represents a significant challenge, due to the increasing prevalence of diabetes. We evaluate the performance of an automated artificial intelligence (AI) algorithm to triage retinal images from the English Diabetic Eye Screening Programme (DESP) into test-positive/technical failure versus test-negative, using human grading following a standard national protocol as the reference standard.
Methods: Retinal images from 30 405 consecutive screening episodes from three English DESPs were manually graded following a standard national protocol and by an automated process with machine-learning-enabled software, EyeArt v2.1. Screening performance (sensitivity, specificity) and diagnostic accuracy (95% CIs) were determined using human grades as the reference standard.
Results: Sensitivity (95% CIs) of EyeArt was 95.7% (94.8% to 96.5%) for referable retinopathy (human graded ungradable, referable maculopathy, moderate-to-severe non-proliferative or proliferative). This comprises sensitivities of 98.3% (97.3% to 98.9%) for mild-to-moderate non-proliferative retinopathy with referable maculopathy, 100% (98.7% to 100%) for moderate-to-severe non-proliferative retinopathy and 100% (97.9% to 100%) for proliferative disease. EyeArt agreed with the human grade of no retinopathy (specificity) in 68% (67% to 69%), with a specificity of 54.0% (53.4% to 54.5%) when combined with non-referable retinopathy.
Conclusion: The algorithm demonstrated safe levels of sensitivity for high-risk retinopathy in a real-world screening service, with specificity that could halve the workload for human graders. AI machine learning and deep learning algorithms such as this can provide clinically equivalent, rapid detection of retinopathy, particularly in settings where a trained workforce is unavailable or where large-scale and rapid results are needed.
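As a brief illustration of how screening metrics of this kind are derived (the counts below are placeholders, not the study's data, which come from human grading of 30 405 episodes), the following sketch computes sensitivity, specificity, and 95% Wilson score intervals from a hypothetical confusion matrix.

```python
# Illustrative computation of screening sensitivity/specificity with 95% Wilson CIs.
# All counts are hypothetical placeholders, not the study's results.
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    # 95% Wilson score interval for a proportion.
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

def rate_with_ci(hits, total):
    lo, hi = wilson_ci(hits, total)
    return hits / total, lo, hi

# Hypothetical confusion-matrix counts; reference standard = human grade.
tp, fn = 4800, 215     # referable retinopathy correctly / incorrectly triaged
tn, fp = 17200, 8190   # no retinopathy correctly retained / unnecessarily referred

sens, s_lo, s_hi = rate_with_ci(tp, tp + fn)
spec, c_lo, c_hi = rate_with_ci(tn, tn + fp)
print(f"sensitivity {sens:.1%} ({s_lo:.1%} to {s_hi:.1%})")
print(f"specificity {spec:.1%} ({c_lo:.1%} to {c_hi:.1%})")
```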


2020
Vol 9 (2)
pp. 119-128
Author(s):  
Mani Manavalan

The Internet of Things (IoT) has become one of the mainstream technological advancements, a major domain of research for the technical and scientific worlds, and a financially appealing prospect for the business world. It supports the interconnection of different devices and the connection of devices to people. IoT requires a distributed computing setup to handle rigorous data processing and training, and it simultaneously requires artificial intelligence (AI) and machine learning (ML) to analyze the information stored on various cloud frameworks and make fast, smart decisions about the data. Moreover, the continuous developments in these three areas of IT present a strong opportunity to collect real-time data about every activity of a business.

AI and ML are playing a supportive role in the applications and use cases offered by the Internet of Things, a shift evident in the behavior of enterprises around the world trying to adopt this paradigm. Small as well as large-scale organizations across the globe are leveraging these applications to develop the latest service and product offerings, which will open a new set of business opportunities and direct new developments in the technical landscape. This transformation will also present another opportunity for various industries to run their operations and connect with their users through the combined power of AI, ML, and IoT. There is still huge scope for those who can convert raw information into valuable business insights, and the way to do so lies in effective data analytics. Organizations are now looking deeper into their data streams to identify new and inventive approaches for improving efficiency and effectiveness in both the technical and business landscapes, and they are taking on bigger, more exhaustive research approaches with the assistance of continuous progress in science and technology, especially in machine learning and artificial intelligence.

If companies want to realize the full potential of this innovation, they need to integrate their IoT frameworks with effective AI and ML algorithms that allow "smart" devices to imitate human behavioral patterns and make sensible decisions with little human intervention. Integrating artificial intelligence and machine learning with IoT networks is proving to be a challenging task for today's IoT-based digital ecosystems, so organizations should chart the necessary course of action to identify how they will derive value from the intersection of AI, ML, and IoT in order to maintain a satisfactory market position in the years to come. This review also discusses the progress of IoT so far and the role AI and ML can play in reaching new heights for businesses in the future, before turning to the opportunities and challenges faced when implementing this hybrid model.
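As a toy illustration of the kind of IoT analytics the review describes (not a method from the paper), the sketch below flags anomalous sensor readings in simulated device telemetry with an unsupervised model; the data, feature names, and parameters are all assumptions.

```python
# Toy illustration of the IoT + ML analytics pattern discussed above:
# flagging unusual sensor readings with an unsupervised model.
# All data and parameters are assumptions for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated telemetry from a fleet of devices: temperature and vibration.
normal = rng.normal(loc=[40.0, 0.5], scale=[2.0, 0.1], size=(2000, 2))
faulty = rng.normal(loc=[70.0, 1.5], scale=[5.0, 0.3], size=(20, 2))
readings = np.vstack([normal, faulty])

# Fit on the stream and flag outliers (-1) for follow-up by downstream business logic.
detector = IsolationForest(contamination=0.01, random_state=0).fit(readings)
flags = detector.predict(readings)
print(f"{(flags == -1).sum()} readings flagged as anomalous out of {len(readings)}")
```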


2019
Vol 29 (Supplement_4)
Author(s):  
S Ram

Abstract: With rapid developments in big data technology and the prevalence of large-scale datasets from diverse sources, the healthcare predictive analytics (HPA) field is witnessing a dramatic surge in interest. In healthcare, it is not only important to provide accurate predictions but also critical to provide reliable explanations of the underlying black-box models making the predictions. Such explanations can play a crucial role not only in supporting clinical decision-making but also in facilitating user engagement and patient safety. If users and decision makers do not have faith in the HPA model, it is highly likely that they will reject its use. Furthermore, it is extremely risky to blindly accept and apply results derived from black-box models, which might lead to undesirable consequences or life-threatening outcomes in high-stakes domains such as healthcare. As machine learning and artificial intelligence systems become more capable and ubiquitous, explainable artificial intelligence and machine learning interpretability are garnering significant attention among practitioners and researchers. The introduction of policies such as the General Data Protection Regulation (GDPR) has amplified the need for ensuring human interpretability of prediction models. In this talk I will discuss methods and applications for developing local as well as global explanations from machine learning models, and the value they can provide for healthcare prediction.
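The abstract does not specify which explanation methods the talk covers; as a hedged illustration of the local/global distinction it draws, the sketch below computes a global permutation importance and a crude local perturbation attribution for a single prediction on a synthetic stand-in for a clinical risk model. All data, model choices, and names are assumptions.

```python
# Illustrative sketch of global and local model explanations (not the talk's specific methods).
# Global: permutation importance; local: a simple perturbation attribution for one case.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

# Hypothetical tabular prediction task standing in for a clinical risk model.
X, y = make_classification(n_samples=2000, n_features=10, n_informative=4, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Global explanation: how much does shuffling each feature hurt performance?
global_imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print("global importances:", np.round(global_imp.importances_mean, 3))

# Local explanation for one case: change in predicted risk when each feature
# is replaced by its population mean (a crude perturbation-based attribution).
case = X[0:1]
base_risk = model.predict_proba(case)[0, 1]
means = X.mean(axis=0)
local_attr = []
for j in range(X.shape[1]):
    perturbed = case.copy()
    perturbed[0, j] = means[j]
    local_attr.append(base_risk - model.predict_proba(perturbed)[0, 1])
print("local attributions:", np.round(local_attr, 3))
```

Established toolkits (e.g., SHAP-style attributions) offer more principled local explanations; this snippet only makes the local-versus-global contrast concrete.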


Author(s):  
Kathleen M. Bakarich ◽  
Patrick O'Brien

In this paper, we survey public accounting professionals to gauge the extent to which Artificial Intelligence (AI), specifically Robotic Process Automation (RPA) and Machine Learning (ML), is currently being utilized by the profession, as well as perceptions about the impact of, and receptiveness to, this technology. Quantitative and qualitative responses from ninety participants, representing various firms, service lines, and positions, indicate that neither RPA nor ML is currently being used extensively by public accountants or by clients of public accounting firms, and that firms are conducting some, but not extensive, training on these technologies for employees. However, respondents strongly indicated that AI will significantly impact their daily responsibilities in five years and that employees in the profession are very receptive to these changes. Additionally, we find that firm size appears to be the most significant factor driving differences in responses. These results indicate that while large-scale AI adoption has not yet come to public accounting, substantial changes are on the horizon.


Author(s):  
Stephen Grossberg

The book is the culmination of 50 years of intensive research by the author, who is broadly acknowledged as the most important pioneer and current research leader in modeling how brains give rise to minds, notably how neural circuits in multiple brain regions interact to generate psychological functions. The book provides a unified understanding of how, where, and why our brains can consciously see, hear, feel, and know about the world, and effectively plan and act within it. It thereby embodies a revolutionary Principia of Mind that clarifies how autonomous adaptive intelligence is achieved, providing mechanistic explanations of multiple mental disorders; biological bases of morality, religion, and the human condition; and solutions to large-scale problems in machine learning, technology, and Artificial Intelligence. Because brains embody a universal developmental code, unifying insights also emerge about all living cellular tissues and about how mental laws reflect laws of the physical world.

