Toward understanding the impact of artificial intelligence on labor

2019 ◽  
Vol 116 (14) ◽  
pp. 6531-6539 ◽  
Author(s):  
Morgan R. Frank ◽  
David Autor ◽  
James E. Bessen ◽  
Erik Brynjolfsson ◽  
Manuel Cebrian ◽  
...  

Rapid advances in artificial intelligence (AI) and automation technologies have the potential to significantly disrupt labor markets. While AI and automation can augment the productivity of some workers, they can replace the work done by others and will likely transform almost all occupations at least to some degree. Rising automation is happening in a period of growing economic inequality, raising fears of mass technological unemployment and a renewed call for policy efforts to address the consequences of technological change. In this paper we discuss the barriers that inhibit scientists from measuring the effects of AI and automation on the future of work. These barriers include the lack of high-quality data about the nature of work (e.g., the dynamic requirements of occupations), lack of empirically informed models of key microlevel processes (e.g., skill substitution and human–machine complementarity), and insufficient understanding of how cognitive technologies interact with broader economic dynamics and institutional mechanisms (e.g., urban migration and international trade policy). Overcoming these barriers requires improvements in the longitudinal and spatial resolution of data, as well as refinements to data on workplace skills. These improvements will enable multidisciplinary research to quantitatively monitor and predict the complex evolution of work in tandem with technological progress. Finally, given the fundamental uncertainty in predicting technological change, we recommend developing a decision framework that focuses on resilience to unexpected scenarios in addition to general equilibrium behavior.

2021 ◽  
pp. 1-62
Author(s):  
Rozenn Gazan ◽  
Florent Vieux ◽  
Ségolène Mora ◽  
Sabrina Havard ◽  
Carine Dubuisson

Abstract Objective: To describe existing online 24-hour dietary recall (24hDR) tools in terms of functionalities and ability to tackle challenges encountered during national dietary surveys, such as maximizing response rates and collecting high-quality data from a representative sample of the population, while minimizing the cost and response burden. Design: A search (from 2000 to 2019) was conducted in peer-reviewed and grey literature. For each tool, information on functionalities, validation and user usability studies, and potential adaptability for integration into a new context was collected. Setting: Not country-specific. Participants: General population. Results: Eighteen online 24hDR tools were identified. Most were developed in Europe, for children ≥10 years old and/or for adults. Eight followed the five multiple-pass steps, but used various methodologies and features. Almost all tools (except three) validated their nutrient intake estimates, but with high heterogeneity in methodologies. User usability was not always assessed, and rarely by applying real-time methods. For researchers, eight tools developed a web platform to manage the survey and five appeared to be easily adaptable to a new context. Conclusions: Among the eighteen online 24hDR tools identified, the best candidates for use in national dietary surveys should be those that were validated for their intake estimates, had confirmed user and researcher usability, and seemed sufficiently flexible to be adapted to new contexts. Regardless of the tool, adaptation to another context will still require time and funding, and this is probably the most challenging step.


2020 ◽  
Vol 2020 ◽  
pp. 1-15
Author(s):  
Yu Qiao ◽  
Jun Wu ◽  
Hao Cheng ◽  
Zilan Huang ◽  
Qiangqiang He ◽  
...  

In the age of artificial intelligence, we face the challenge of obtaining high-quality data sets for learning systems effectively and efficiently. Crowdsensing is a powerful new tool that divides tasks among data contributors to achieve an outcome cumulatively. However, it raises several new challenges, such as incentivization. Incentive mechanisms are significant to crowdsensing applications, since a good incentive mechanism will attract more workers to participate. However, existing mechanisms fail to consider situations where the crowdsourcer has to hire capacitated workers or workers from multiple regions. We design two objectives for the proposed multiregion scenario, namely, weighted mean and maximin. The proposed mechanisms approximately maximize the utility of services provided by the selected data contributors under both constraints. Extensive simulations are conducted to verify the effectiveness of our proposed methods.
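The abstract does not spell out the mechanisms themselves, but the maximin objective can be illustrated with a minimal greedy sketch: under a hypothetical hiring budget, always add the best remaining worker to the currently worst-served region. Worker names, utilities, and the budget below are assumptions for illustration, not the paper's algorithm.

```python
# Illustrative greedy selection for a maximin objective across regions.
# All inputs are hypothetical; the paper's actual mechanisms are not
# described in the abstract.

def select_maximin(workers, budget):
    """workers: list of (region, utility) pairs; choose `budget` workers
    so that the minimum total utility across regions is maximized."""
    regions = sorted({r for r, _ in workers})
    totals = {r: 0.0 for r in regions}
    # Per-region pools, highest-utility workers first.
    pools = {r: sorted((u for rr, u in workers if rr == r), reverse=True)
             for r in regions}
    chosen = []
    for _ in range(budget):
        candidates = [r for r in regions if pools[r]]
        if not candidates:
            break
        # Reinforce the currently worst-served region.
        worst = min(candidates, key=lambda r: totals[r])
        u = pools[worst].pop(0)
        totals[worst] += u
        chosen.append((worst, u))
    return chosen, min(totals.values())

workers = [("A", 5.0), ("A", 4.0), ("B", 3.0), ("B", 2.0), ("C", 6.0)]
picked, min_utility = select_maximin(workers, budget=3)
```

With a budget of three, the greedy rule spreads the picks across all three regions rather than concentrating utility in one, which is exactly what a maximin objective rewards.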


NeoReviews ◽  
2021 ◽  
Vol 22 (5) ◽  
pp. e284-e295
Author(s):  
Deepika Sankaran ◽  
Natasha Nakra ◽  
Ritu Cheema ◽  
Dean Blumberg ◽  
Satyan Lakshminrusimha

The coronavirus disease 2019 pandemic caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has swept across the world like an indiscriminate wildfire. Pregnant women and neonates are particularly vulnerable to this infection compared with older children and healthy young adults, and their management poses unique challenges. Unfamiliarity with the consequences of this novel virus and a lack of high-quality data led to considerable heterogeneity in obstetrical and neonatal management early in the pandemic. The aim of this review is to summarize the impact of SARS-CoV-2 infection on pregnancy and childbirth and to examine care and possible outcomes for neonates born to COVID-19-positive mothers. A brief review of vaccines currently approved by the United States Food and Drug Administration for emergency use, and their potential effects on pregnant and lactating women, is included.


2019 ◽  
Vol 11 (15) ◽  
pp. 4154
Author(s):  
Ming-Kuang Chung ◽  
Dau-Jye Lu ◽  
Bor-Wen Tsai ◽  
Kuei-Tien Chou

Based on the criterion of governance quality, this study used the case of community-based monitoring in the Taiwanese Wu-Wei-Kang Wildlife Refuge to evaluate the impact of a public participation geographic information system (PPGIS) on governance quality with respect to inclusiveness, respect, competence, visions and scopes, accountability, and equity. Our research drew on 31 informants and 75 records (25 from in-depth interviews and 50 from participant observation) collected in the field from 2009 to 2015. The results show several effects attributable to the application of PPGIS in substratum elevation monitoring: generating high-quality data; strengthening monitoring processes and extending the attributes of its outputs with lay knowledge; promoting stakeholders’ understanding of wetlands and their involvement in negotiations; increasing their capacity for, and degree of, participation in refuge management; amending the visions and scopes of the refuge; rearranging stakeholder divisions of labor; and establishing local communities as partners of the refuge. This study demonstrates that governance quality provides a useful concept for evaluating PPGIS effectiveness with respect to stakeholder participation, knowledge interpretation, capacity and consensus building, decision-making, and the distribution of rights. Because this is a single case examined with a qualitative approach, further case studies are needed to better understand the relationships between protected-area governance quality and PPGIS.


2020 ◽  
pp. 002215542095914
Author(s):  
A. Sally Davis ◽  
Mary Y. Chang ◽  
Jourdan E. Brune ◽  
Teal S. Hallstrand ◽  
Brian Johnson ◽  
...  

Advances in reagents, methodologies, analytic platforms, and tools have resulted in a dramatic transformation of the research pathology laboratory. These advances have increased our ability to efficiently generate substantial volumes of data on the expression and accumulation of mRNA, proteins, carbohydrates, signaling pathways, cells, and structures in healthy and diseased tissues that are objective, quantitative, reproducible, and suitable for statistical analysis. The goal of this review is to identify and present how to acquire the critical information required to measure changes in tissues. Included is a brief overview of two morphometric techniques, image analysis and stereology, and the use of artificial intelligence to classify cells and identify hidden patterns and relationships in digital images. In addition, we explore the importance of preanalytical factors in generating high-quality data. This review focuses on techniques we have used to measure proteoglycans, glycosaminoglycans, and immune cells in tissues using immunohistochemistry and in situ hybridization to demonstrate the various morphometric techniques. When performed correctly, quantitative digital pathology is a powerful tool that provides unbiased quantitative data that are difficult to obtain with other methods.
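One of the simplest quantitative morphometry measurements the review alludes to is an area fraction: the proportion of tissue whose staining intensity exceeds a threshold. The sketch below is a minimal illustration with a synthetic array, not the authors' pipeline; real workflows add color deconvolution, calibration, and quality-control steps.

```python
import numpy as np

# Minimal sketch of one image-analysis measurement: the fraction of
# tissue area exceeding a staining-intensity threshold. The threshold
# and the toy "image" are illustrative only.

def stained_area_fraction(intensity, tissue_mask, threshold):
    """intensity: 2D array of stain intensity; tissue_mask: boolean 2D
    array marking tissue pixels. Returns the stained fraction of tissue."""
    stained = (intensity >= threshold) & tissue_mask
    return stained.sum() / tissue_mask.sum()

# Toy 4x4 "image": tissue occupies the left 3 columns (12 pixels),
# 6 of which meet the 0.5 threshold.
img = np.array([[0.9, 0.8, 0.1, 0.0],
                [0.7, 0.2, 0.1, 0.0],
                [0.9, 0.6, 0.1, 0.0],
                [0.8, 0.2, 0.1, 0.0]])
mask = np.zeros((4, 4), dtype=bool)
mask[:, :3] = True
frac = stained_area_fraction(img, mask, threshold=0.5)  # 6 / 12 = 0.5
```

Restricting the denominator to the tissue mask, rather than the whole image, is what makes the measurement comparable across sections with different amounts of background.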


2020 ◽  
Author(s):  
Maryam Zolnoori ◽  
Mark D Williams ◽  
William B Leasure ◽  
Kurt B Angstman ◽  
Che Ngufor

BACKGROUND Patient-centered registries are essential in population-based clinical care for patient identification and monitoring of outcomes. Although registry data may be used in real time for patient care, the same data may further be used for secondary analysis to assess disease burden, evaluate disease management and health care services, and support research. The design of a registry has major implications for the ability to effectively use these clinical data in research. OBJECTIVE This study aims to develop a systematic framework to address the data and methodological issues involved in analyzing data in clinically designed patient-centered registries. METHODS The systematic framework was composed of 3 major components: visualizing the multifaceted and heterogeneous patient-centered registries using a data flow diagram, assessing and managing data quality issues, and identifying patient cohorts for addressing specific research questions. RESULTS Using a clinical registry designed as a part of a collaborative care program for adults with depression at Mayo Clinic, we were able to demonstrate the impact of the proposed framework on data integrity. By following the data cleaning and refining procedures of the framework, we were able to generate high-quality data that were available for research questions about the coordination and management of depression in a primary care setting. We describe the steps involved in converting clinically collected data into a viable research data set using registry cohorts of depressed adults to assess the impact on high-cost service use. CONCLUSIONS The systematic framework discussed in this study sheds light on the existing inconsistency and data quality issues in patient-centered registries. This study provided a step-by-step procedure for addressing these challenges and for generating high-quality data for both quality improvement and research that may enhance care and outcomes for patients.
INTERNATIONAL REGISTERED REPORT DERR1-10.2196/18366


10.2196/18366 ◽  
2020 ◽  
Vol 9 (10) ◽  
pp. e18366
Author(s):  
Maryam Zolnoori ◽  
Mark D Williams ◽  
William B Leasure ◽  
Kurt B Angstman ◽  
Che Ngufor

Background Patient-centered registries are essential in population-based clinical care for patient identification and monitoring of outcomes. Although registry data may be used in real time for patient care, the same data may further be used for secondary analysis to assess disease burden, evaluate disease management and health care services, and support research. The design of a registry has major implications for the ability to effectively use these clinical data in research. Objective This study aims to develop a systematic framework to address the data and methodological issues involved in analyzing data in clinically designed patient-centered registries. Methods The systematic framework was composed of 3 major components: visualizing the multifaceted and heterogeneous patient-centered registries using a data flow diagram, assessing and managing data quality issues, and identifying patient cohorts for addressing specific research questions. Results Using a clinical registry designed as a part of a collaborative care program for adults with depression at Mayo Clinic, we were able to demonstrate the impact of the proposed framework on data integrity. By following the data cleaning and refining procedures of the framework, we were able to generate high-quality data that were available for research questions about the coordination and management of depression in a primary care setting. We describe the steps involved in converting clinically collected data into a viable research data set using registry cohorts of depressed adults to assess the impact on high-cost service use. Conclusions The systematic framework discussed in this study sheds light on the existing inconsistency and data quality issues in patient-centered registries. This study provided a step-by-step procedure for addressing these challenges and for generating high-quality data for both quality improvement and research that may enhance care and outcomes for patients.
International Registered Report Identifier (IRRID) DERR1-10.2196/18366
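The framework's second component, assessing and managing data quality, can be sketched as a simple audit pass over registry records that flags duplicates, missing required fields, and out-of-range values before cohorts are built. The field names and the example records below are hypothetical; the abstract does not describe the registry schema (the PHQ-9 range of 0-27, however, is a standard fact).

```python
# Minimal sketch of a registry data-quality audit: flag duplicate
# records, missing required fields, and out-of-range clinical scores.
# Field names are assumptions, not the Mayo Clinic registry's schema.

REQUIRED = ("patient_id", "enrollment_date", "phq9_score")

def audit_registry(records):
    issues = {"duplicates": [], "missing": [], "out_of_range": []}
    seen = set()
    for i, rec in enumerate(records):
        pid = rec.get("patient_id")
        if pid in seen:
            issues["duplicates"].append(i)
        elif pid is not None:
            seen.add(pid)
        for field in REQUIRED:
            if rec.get(field) is None:
                issues["missing"].append((i, field))
        score = rec.get("phq9_score")
        # PHQ-9 depression scores range from 0 to 27.
        if score is not None and not (0 <= score <= 27):
            issues["out_of_range"].append(i)
    return issues

records = [
    {"patient_id": 1, "enrollment_date": "2018-03-02", "phq9_score": 14},
    {"patient_id": 1, "enrollment_date": "2018-03-02", "phq9_score": 14},
    {"patient_id": 2, "enrollment_date": None, "phq9_score": 31},
]
report = audit_registry(records)
```

In practice each flagged record would be routed to a cleaning or exclusion rule; the point of the audit pass is that those decisions are made explicitly and reproducibly rather than ad hoc during analysis.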


2019 ◽  
Vol 88 (1) ◽  
Author(s):  
Tomasz Henryk Szymura ◽  
Magdalena Szymura

Grasslands provide a wide range of ecosystem services; however, their area and quality are still diminishing in Europe. Nowadays, they often form isolated patches within a “sea” of other habitats. We examined basic structural landscape metrics of grasslands in Poland using the CORINE land-cover database. We examined characteristics of all individual patches as well as average values for a 10 × 10-km grid covering Poland. We also assessed the percentage of grasslands within protected areas and ecological corridors. We found that rather small patches (0.3–1 km²) dominate in Poland, usually located 200–500 m from each other. The grasslands had a clumped distribution; thus, large areas exist in Poland where grassland patches are separated by kilometers. Almost all indices calculated for the 10 × 10-km grid were correlated; i.e., in regions with a high percentage of grasslands, the patches were larger, more numerous, closer to each other, and more irregular in shape. Our results revealed that the percentage of grasslands within protected areas and ecological corridors did not differ from the average value for Poland. In contrast, forests were significantly over-represented in protected areas and ecological corridors. These findings suggest that there is no planned scheme for grassland protection at the landscape scale in Poland. Developing such a scheme is urgent and requires high-quality data on the distribution of seminatural grassland patches. In practice, nature conservationists and managers should consider spatial processes in their plans in order to maintain grassland biodiversity.
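One of the structural metrics behind the 200-500 m figure is the nearest-neighbor distance between patches. A minimal sketch, using illustrative centroid coordinates rather than CORINE data, looks like this:

```python
import math

# Sketch of a basic landscape metric: nearest-neighbor distance between
# patch centroids. Coordinates below are illustrative, not CORINE data.

def nearest_neighbor_distances(points):
    """For each patch centroid, the distance to its closest neighbor."""
    out = []
    for i, (x1, y1) in enumerate(points):
        d = min(math.hypot(x1 - x2, y1 - y2)
                for j, (x2, y2) in enumerate(points) if j != i)
        out.append(d)
    return out

# Three clumped patches plus one distant outlier (coordinates in meters).
patches = [(0, 0), (300, 0), (350, 0), (5000, 5000)]
dists = nearest_neighbor_distances(patches)
mean_nnd = sum(dists) / len(dists)
```

The example also shows why a clumped distribution matters for summary statistics: a single isolated patch inflates the mean nearest-neighbor distance far above the typical within-cluster spacing.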


2014 ◽  
Vol 92 (6) ◽  
pp. 515-526 ◽  
Author(s):  
Suresh Andrew Sethi ◽  
Geoffrey M. Cook ◽  
Patrick Lemons ◽  
John Wenburg

Molecular markers with inadequate power to discriminate among individuals can lead to false recaptures (shadows), and inaccurate genotyping can lead to missed recaptures (ghosts), potentially biasing genetic mark–recapture estimates. We used simulations to examine the impact of microsatellite (MSAT) and single nucleotide polymorphism (SNP) marker-set size, allelic frequency, multitubes approaches, and sample matching protocols on shadow and ghost events in genetic mark–recapture studies, presenting guidance on the specifications for MSAT and SNP marker panels, and the sample matching protocols, necessary to produce high-quality data. Shadow events are controllable by increasing the number of markers or by selecting markers with high discriminatory power; reasonably sized marker sets (e.g., ≥9 MSATs or ≥32 SNPs) of moderate allelic diversity lead to low probabilities of shadow errors. Ghost events are more challenging to control, and even low allelic-dropout or false-allele error rates produced high rates of erroneous mismatches in mark–recapture sampling. Fortunately, error-tolerant matching protocols, which use information from positively matching loci between comparisons of samples, and multitubes protocols to achieve consensus genotypes are effective at eliminating ghost events. We present a case study on Pacific walrus, Odobenus rosmarus divergens (Illiger, 1815), using simulation results to inform genetic marker choices.
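Why more loci shrink shadow risk can be illustrated with the probability of identity (PI): the chance that two random individuals share the same multilocus genotype, which is multiplicative across independent loci. The sketch below uses the standard single-locus PI formula (2(Σp²)² − Σp⁴) with illustrative allele frequencies; the paper's simulated frequency distributions are not given in the abstract.

```python
# Illustration of shadow (false-recapture) risk: the probability of
# identity (PI) shrinks geometrically as independent loci are added.
# Allele frequencies below are illustrative, not from the study.

def pi_single_locus(freqs):
    """Probability of identity at one locus:
    PI = 2*(sum p_i^2)^2 - sum p_i^4."""
    s2 = sum(p * p for p in freqs)
    s4 = sum(p ** 4 for p in freqs)
    return 2 * s2 * s2 - s4

def pi_multilocus(loci):
    """PI across independent loci is the product of per-locus PIs."""
    pi = 1.0
    for freqs in loci:
        pi *= pi_single_locus(freqs)
    return pi

# A moderately diverse locus with four equifrequent alleles:
locus = [0.25] * 4
one = pi_single_locus(locus)       # about 0.109 per locus
nine = pi_multilocus([locus] * 9)  # nine such loci: ~1e-9
```

A per-locus match probability around 0.11 is useless on its own, but nine such loci already push the multilocus PI to roughly one in a billion, which is the intuition behind the ≥9-MSAT guidance.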


2015 ◽  
Vol 1 (1) ◽  
pp. 91-100
Author(s):  
Banumathi A C ◽  
Chandra E

Speech recognition means converting speech into text. This emerging technology makes every field in which it is used more sophisticated, and its impact is visible across a wide range of tasks. Almost all technical devices now incorporate speech recognition as part of their design, in fields such as computing, artificial intelligence, medicine, healthcare, and smartphones. This paper provides a glimpse of the challenges faced by speech recognition systems in many applications and the approaches taken to address them.

