Automatic information retrieval for exporting services: First project findings from the development of an AI-based export decision support instrument

2021 ◽  
pp. 2-11
Author(s):  
David Aufreiter ◽  
Doris Ehrlinger ◽  
Christian Stadlmann ◽  
Margarethe Uberwimmer ◽  
Anna Biedersberger ◽  
...  

On the servitization journey, manufacturing companies complement their offerings with new industrial and knowledge-based services, which introduces uncertainty and risk. In addition to the required adjustment of internal factors, selling services internationally is a major challenge. This paper presents the initial results of an international research project aimed at assisting advanced manufacturers in making decisions about exporting their service offerings to foreign markets. Within this project, a tool is being developed to support managers in their service export decisions through the automated generation of market information based on Natural Language Processing and Machine Learning. The paper presents a roadmap for progressing towards an Artificial Intelligence-based market information solution. It describes the research process steps of analyzing the problem statements of relevant industry partners, selecting target countries and markets, defining parameters for the scope of the tool, classifying different service offerings and their components into categories, and developing an annotation scheme for generating reliable and focused training data for the Artificial Intelligence solution. The paper demonstrates good practices in these essential steps and highlights common pitfalls to avoid for researchers and managers working on future research projects supported by Artificial Intelligence. Finally, the paper aims to support and motivate researchers and managers to discover AI application and research opportunities within the servitization field.
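As an illustration of the kind of pipeline this abstract describes, the sketch below shows how annotated sentences could be turned into training data for a simple text classifier with scikit-learn. The service categories, example sentences, and model choice are assumptions made for illustration only, not the project's actual annotation scheme or tooling.

```python
# Illustrative sketch only: the paper does not publish code or data, so the
# categories, example sentences, and model below are assumptions for
# demonstration, not the project's actual annotation scheme or pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical annotated training data: market-related sentences labelled
# with a service-offering category, as an annotation scheme might produce.
texts = [
    "The plant offers remote condition monitoring for installed turbines.",
    "Spare-part logistics are handled by a local distribution partner.",
    "Operators receive on-site training for the new packaging line.",
    "Predictive maintenance contracts cover all exported machinery.",
]
labels = ["monitoring", "logistics", "training", "maintenance"]

# TF-IDF features feeding a linear classifier: a common, simple baseline for
# turning annotated text into an automated market-information signal.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

print(model.predict(["The supplier sells contracts for preventive maintenance."]))
```

In practice, a larger annotated corpus and a stronger language model would replace this baseline, but the annotation-to-training-data flow stays the same.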

Author(s):  
Christian Horn ◽  
Oscar Ivarsson ◽  
Cecilia Lindhé ◽  
Rich Potter ◽  
Ashely Green ◽  
...  

Rock art carvings, which are best described as petroglyphs, were produced by removing parts of the rock surface to create a negative relief. This tradition was particularly strong during the Nordic Bronze Age (1700–550 BC) in southern Scandinavia, with over 20,000 boats and thousands of humans, animals, wagons, etc. This vivid and highly engaging material provides quantitative data with high potential for understanding Bronze Age social structures and ideologies. The ability to provide the technically best possible documentation and to automate the identification and classification of images would help to take full advantage of the research potential of petroglyphs in southern Scandinavia and elsewhere. We, therefore, attempted to train a model that locates and classifies image objects using a faster region-based convolutional neural network (Faster-RCNN), based on data produced by a novel method for improved visualization of the content of 3D documentation. A newly created layer of 3D rock art documentation provides the best data currently available and has reduced inscribed bias compared to older methods. Several models were trained on input images annotated with bounding boxes produced with different parameters to find the best solution. The data included 4305 individual images in 408 scans of rock art sites. To enhance the models and enrich the training data, we used data augmentation and transfer learning. The successful models perform exceptionally well on boats and circles, as well as on human figures and wheels. This work was an interdisciplinary undertaking which led to important reflections about archaeology, digital humanities, and artificial intelligence. These reflections, and the success represented by the trained models, open novel avenues for future research on rock art.
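For readers unfamiliar with the method, the sketch below shows how a Faster-RCNN detector is commonly set up with transfer learning in torchvision. The petroglyph class list, hyperparameters, and data layout are assumptions for illustration; this is not the authors' actual training code.

```python
# Sketch of a Faster R-CNN setup with transfer learning, as commonly done with
# torchvision. Class list, dataset layout, and hyperparameters are assumptions
# for illustration; the study's actual code and data are not reproduced here.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Assumed petroglyph classes plus the background class (index 0).
CLASSES = ["__background__", "boat", "human", "circle", "wheel", "animal", "wagon"]

# Start from a COCO-pretrained detector (transfer learning) ...
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
# ... and replace the box-predictor head with one sized for our classes.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=len(CLASSES))

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9, weight_decay=5e-4)

def train_step(images, targets):
    """One training step: `images` is a list of CHW float tensors, `targets`
    a list of dicts with "boxes" (N x 4) and "labels" (N,) from the annotations."""
    model.train()
    loss_dict = model(images, targets)   # classification + box-regression losses
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```

Data augmentation would typically be applied in the dataset's transform pipeline before images reach `train_step`.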


2020 ◽  
Vol 9 (1) ◽  
pp. 121-127 ◽  
Author(s):  
Trevor D. Hadley ◽  
Rowland W. Pettit ◽  
Tahir Malik ◽  
Amelia A. Khoei ◽  
Hamisu M. Salihu

Artificial Intelligence (AI) applications in medicine have grown considerably in recent years. AI in the forms of Machine Learning, Natural Language Processing, Expert Systems, Planning and Logistics methods, and Image Processing networks provides great analytical aptitude. While AI methods were first conceptualized for radiology, investigations today are established across all medical specialties. The necessity for proper infrastructure, skilled labor, and access to large, well-organized data sets has kept the majority of medical AI applications in higher-income countries. However, critical technological improvements, such as cloud computing and the near-ubiquity of smartphones, have paved the way for use of medical AI applications in resource-poor areas. Global health initiatives (GHI) have already begun to explore ways to leverage medical AI technologies to detect and mitigate public health inequities. For example, AI tools can help optimize vaccine delivery and community healthcare worker routes, thus enabling limited resources to have a maximal impact. Other promising AI tools have demonstrated an ability to: predict burn healing time from smartphone photos; track regions of socioeconomic disparity combined with environmental trends to predict communicable disease outbreaks; and accurately predict pregnancy complications such as birth asphyxia in low-resource settings with limited patient clinical data. In this commentary, we discuss the current state of AI-driven GHI and explore relevant lessons from past technology-centered GHI. Additionally, we propose a conceptual framework to guide the development of sustainable strategies for AI-driven GHI, and we outline areas for future research.

Keywords: Artificial Intelligence; AI Framework; Global Health; Implementation; Sustainability; AI Strategy

Copyright © 2020 Hadley et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


2020 ◽  
Vol 78 (4) ◽  
pp. 1547-1574
Author(s):  
Sofia de la Fuente Garcia ◽  
Craig W. Ritchie ◽  
Saturnino Luz

Background: Language is a valuable source of clinical information in Alzheimer’s disease, as it declines concurrently with neurodegeneration. Consequently, speech and language data have been extensively studied in connection with its diagnosis. Objective: Firstly, to summarize the existing findings on the use of artificial intelligence, speech, and language processing to predict cognitive decline in the context of Alzheimer’s disease. Secondly, to detail current research procedures, highlight their limitations, and suggest strategies to address them. Methods: Systematic review of original research between 2000 and 2019, registered in PROSPERO (reference CRD42018116606). An interdisciplinary search covered six databases on engineering (ACM and IEEE), psychology (PsycINFO), medicine (PubMed and Embase), and Web of Science. Bibliographies of relevant papers were screened until December 2019. Results: From 3,654 search results, 51 articles were selected against the eligibility criteria. Four tables summarize their findings: study details (aim, population, interventions, comparisons, methods, and outcomes), data details (size, type, modalities, annotation, balance, availability, and language of study), methodology (pre-processing, feature generation, machine learning, evaluation, and results), and clinical applicability (research implications, clinical potential, risk of bias, and strengths/limitations). Conclusion: Promising results are reported across nearly all 51 studies, but very few have been implemented in clinical research or practice. The main limitations of the field are poor standardization, limited comparability of results, and a degree of disconnect between study aims and clinical applications. Active attempts to close these gaps will support translation of future research into clinical practice.
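The studies covered by this review typically follow a feature generation, machine learning, and evaluation pattern. The sketch below illustrates that pattern with scikit-learn using random placeholder features; real studies would derive features such as pause rate, speech rate, or lexical richness from audio and transcripts rather than sampling them at random.

```python
# Minimal sketch of the pipeline pattern the review describes (feature
# generation -> machine learning -> evaluation). Features and labels are
# random placeholders standing in for measures derived from speech/language.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Assumed feature matrix: one row per speech sample, columns such as pause
# rate, type-token ratio, speech rate, or MFCC statistics (here: random values).
X = rng.normal(size=(60, 12))
y = rng.integers(0, 2, size=60)   # 0 = control, 1 = cognitive decline (placeholder)

# Standardized features feeding an SVM, evaluated with cross-validated AUC,
# one of the common evaluation setups reported across the reviewed studies.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC:", scores.mean())
```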


Author(s):  
Thanh Thi Nguyen

Artificial intelligence (AI) has been applied widely in our daily lives in a variety of ways, with numerous success stories. AI has also contributed to dealing with the coronavirus disease (COVID-19) pandemic that has been unfolding around the globe. This paper presents a survey of AI methods being used in various applications in the fight against the COVID-19 outbreak and outlines the crucial roles of AI research in this unprecedented battle. We touch on a number of areas where AI serves as an essential component, from medical image processing, data analytics, text mining and natural language processing, and the Internet of Things, to computational biology and medicine. A summary of COVID-19-related data sources that are available for research purposes is also presented. Research directions on exploring the potential of AI and enhancing its capabilities and power in this battle are thoroughly discussed. We highlight 13 groups of problems related to the COVID-19 pandemic and point out promising AI methods and tools that can be used to solve those problems. It is envisaged that this study will provide AI researchers and the wider community with an overview of the current status of AI applications and motivate researchers to harness the potential of AI in the fight against COVID-19.


2020 ◽  
Vol 07 (01) ◽  
pp. 63-72 ◽  
Author(s):  
Gee Wah Ng ◽  
Wang Chi Leung

In the last 10 years, Artificial Intelligence (AI) has seen successes in fields such as natural language processing, computer vision, speech recognition, robotics and autonomous systems. However, these advances are still considered Narrow AI, i.e. AI built for very specific or constrained applications. Such applications are useful for improving the quality of human life, but they cannot perform the highly general tasks that humans can. The holy grail of AI research is to develop Strong AI or Artificial General Intelligence (AGI), which produces human-level intelligence, i.e. the ability to sense, understand, reason, learn and act in dynamic environments. Strong AI is more than just a composition of Narrow AI technologies. We propose that it has to take a holistic approach towards understanding and reacting to the operating environment and the decision-making process. Strong AI must be able to demonstrate sentience, emotional intelligence, imagination, effective command of other machines or robots, and self-referring and self-reflecting qualities. This paper gives an overview of current Narrow AI capabilities, presents the technical gaps, and highlights future research directions for Strong AI. Could Strong AI become conscious? We provide some discussion pointers.


2021 ◽  
Vol 3 (1) ◽  
pp. 13-26
Author(s):  
Mathias-Felipe de-Lima-Santos ◽  
Wilson Ceron

In recent years, news media has been greatly disrupted by the potential of technologically driven approaches in the creation, production, and distribution of news products and services. Artificial intelligence (AI) has emerged from the realm of science fiction and has become a very real tool that can aid society in addressing many issues, including the challenges faced by the news industry. The ubiquity of computing has become apparent and has demonstrated the different approaches that can be achieved using AI. We analyzed the news industry’s AI adoption based on the seven subfields of AI: (i) machine learning; (ii) computer vision (CV); (iii) speech recognition; (iv) natural language processing (NLP); (v) planning, scheduling, and optimization; (vi) expert systems; and (vii) robotics. Our findings suggest that three subfields are being developed more in the news media: machine learning, computer vision, and planning, scheduling, and optimization. Other areas have not been fully deployed in the journalistic field. Most AI news projects rely on funds from tech companies such as Google, which limits AI’s potential to a small number of players in the news industry. We conclude by providing examples of how these subfields are being developed in journalism and by presenting an agenda for future research.


10.2196/20701 ◽  
2020 ◽  
Vol 22 (9) ◽  
pp. e20701 ◽  
Author(s):  
Theresa Schachner ◽  
Roman Keller ◽  
Florian v Wangenheim

Background A rising number of conversational agents or chatbots are equipped with artificial intelligence (AI) architecture. They are increasingly prevalent in health care applications such as those providing education and support to patients with chronic diseases, one of the leading causes of death in the 21st century. AI-based chatbots enable more effective and frequent interactions with such patients. Objective The goal of this systematic literature review is to review the characteristics, health care conditions, and AI architectures of AI-based conversational agents designed specifically for chronic diseases. Methods We conducted a systematic literature review using PubMed MEDLINE, EMBASE, PsycInfo, CINAHL, ACM Digital Library, ScienceDirect, and Web of Science. We applied a predefined search strategy using the terms “conversational agent,” “healthcare,” “artificial intelligence,” and their synonyms. We updated the search results using Google alerts and screened reference lists for other relevant articles. We included primary research studies that involved the prevention, treatment, or rehabilitation of chronic diseases, involved a conversational agent, and included any kind of AI architecture. Two independent reviewers conducted screening and data extraction, and Cohen kappa was used to measure interrater agreement. A narrative approach was applied for data synthesis. Results The literature search found 2052 articles, of which 10 papers met the inclusion criteria. The small number of identified studies, together with the prevalence of quasi-experimental studies (n=7) and the prevailing prototype nature of the chatbots (n=7), revealed the immaturity of the field. The reported chatbots addressed a broad variety of chronic diseases (n=6), showcasing a tendency to develop specialized conversational agents for individual chronic conditions. However, comparisons of these chatbots within and between chronic diseases are lacking. In addition, the reported evaluation measures were not standardized, and the addressed health goals showed a large range. Together, these study characteristics complicate comparability and leave room for future research. While natural language processing represented the most used AI technique (n=7) and the majority of conversational agents allowed for multimodal interaction (n=6), the identified studies demonstrated broad heterogeneity, a lack of depth in the reported AI techniques and systems, and inconsistent usage of the taxonomy of the underlying AI software, further aggravating the comparability and generalizability of study results. Conclusions The literature on AI-based conversational agents for chronic conditions is scarce and mostly consists of quasi-experimental studies with chatbots in the prototype stage that use natural language processing and allow for multimodal user interaction. Future research could profit from evidence-based evaluation of the AI-based conversational agents and comparison thereof within and between different chronic health conditions. Besides increased comparability, the quality of chatbots developed for specific chronic conditions and their subsequent impact on the target patients could be enhanced by more structured development and standardized evaluation processes.
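As a brief illustration of the interrater-agreement statistic mentioned in the Methods, the snippet below computes Cohen kappa for two hypothetical reviewers' screening decisions; the decisions are invented for the example.

```python
# Cohen's kappa corrects raw agreement between two raters for the agreement
# expected by chance. The include/exclude decisions below are invented.
from sklearn.metrics import cohen_kappa_score

reviewer_a = ["include", "exclude", "exclude", "include", "exclude", "include", "exclude", "exclude"]
reviewer_b = ["include", "exclude", "include", "include", "exclude", "include", "exclude", "exclude"]

print(cohen_kappa_score(reviewer_a, reviewer_b))
```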


2020 ◽  
Vol 6 (3) ◽  
pp. 162-170 ◽  
Author(s):  
Zehong Cao

Advances in neuroscience and computer science promote the ability of the human brain to communicate and interact with the environment, making the brain-computer interface (BCI) a top interdisciplinary research area. Furthermore, with modern advances in artificial intelligence (AI), including machine learning (ML) and deep learning (DL) methods, there is rapidly growing interest in electroencephalogram (EEG)-based BCIs for AI-related visual, literal, and motion applications. In this review, the literature on the mainstream AI approaches for EEG-based BCI applications is investigated to fill gaps in this interdisciplinary field. Specifically, EEG signals and their main applications in BCI are first briefly introduced. Next, the latest AI technologies, including ML and DL models, are presented for monitoring and providing feedback on human cognitive states. Finally, some BCI-inspired AI applications, including computer vision, natural language processing, and robotic control, are presented. Future research directions for EEG-based BCI are highlighted in line with these AI technologies and applications.
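As a minimal illustration of the EEG-to-ML pattern this review surveys, the sketch below extracts band-power features from synthetic EEG epochs and fits a simple classifier; the sampling rate, frequency bands, epoch layout, and labels are assumptions made for illustration only.

```python
# Sketch of a common EEG -> machine-learning pattern in BCI work: band-power
# features per channel (Welch PSD) feeding a classifier. Data are synthetic;
# sampling rate, bands, and labels are illustrative assumptions.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250  # assumed sampling rate in Hz
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(epoch):
    """epoch: (n_channels, n_samples) -> flat vector of per-channel band powers."""
    freqs, psd = welch(epoch, fs=FS, nperseg=FS)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=1))
    return np.concatenate(feats)

rng = np.random.default_rng(0)
epochs = rng.normal(size=(40, 8, 2 * FS))   # 40 epochs, 8 channels, 2 s each
labels = rng.integers(0, 2, size=40)        # placeholder cognitive-state labels

X = np.stack([band_powers(e) for e in epochs])
clf = LinearDiscriminantAnalysis().fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```

Deep-learning variants of this pattern replace the hand-crafted band powers with convolutional or recurrent networks operating on the raw EEG epochs.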


2020 ◽  
Author(s):  
Madison Milne-Ives ◽  
Caroline de Cock ◽  
Ernest Lim ◽  
Melissa Harper Shehadeh ◽  
Nick de Pennington ◽  
...  

BACKGROUND The high demand for health care services and the growing capability of artificial intelligence have led to the development of conversational agents designed to support a variety of health-related activities, including behavior change, treatment support, health monitoring, training, triage, and screening support. Automation of these tasks could free clinicians to focus on more complex work and increase the accessibility of health care services to the public. An overarching assessment of the acceptability, usability, and effectiveness of these agents in health care is needed to collate the evidence so that future development can target areas for improvement and potential for sustainable adoption. OBJECTIVE This systematic review aims to assess the effectiveness and usability of conversational agents in health care and identify the elements that users like and dislike to inform future research and development of these agents. METHODS PubMed, Medline (Ovid), EMBASE (Excerpta Medica dataBASE), CINAHL (Cumulative Index to Nursing and Allied Health Literature), Web of Science, and the Association for Computing Machinery Digital Library were systematically searched for articles published since 2008 that evaluated unconstrained natural language processing conversational agents used in health care. EndNote (version X9, Clarivate Analytics) reference management software was used for initial screening, and full-text screening was conducted by one reviewer. Data were extracted, and the risk of bias was assessed by one reviewer and validated by another. RESULTS A total of 31 studies were selected and included a variety of conversational agents, including 14 chatbots (2 of which were voice chatbots), 6 embodied conversational agents (3 of which were interactive voice response calls, virtual patients, and speech recognition screening systems), 1 contextual question-answering agent, and 1 voice recognition triage system. Overall, the evidence reported was mostly positive or mixed. Usability and satisfaction performed well (27/30 and 26/31), and positive or mixed effectiveness was found in three-quarters of the studies (23/30). However, the studies highlighted several limitations of the agents in specific qualitative feedback. CONCLUSIONS The studies generally reported positive or mixed evidence for the effectiveness, usability, and satisfaction of the conversational agents investigated, but qualitative user perceptions were more mixed. The quality of many of the studies was limited, and improved study design and reporting are necessary to more accurately evaluate the usefulness of the agents in health care and identify key areas for improvement. Further research should also analyze the cost-effectiveness, privacy, and security of the agents. INTERNATIONAL REGISTERED REPORT RR2-10.2196/16934

