domain expert
Recently Published Documents


TOTAL DOCUMENTS

153
(FIVE YEARS 56)

H-INDEX

8
(FIVE YEARS 2)

2021 ◽  
Vol 8 ◽  
Author(s):  
Katie Winkle ◽  
Emmanuel Senft ◽  
Séverin Lemaignan

Participatory design (PD) has been used with good success in human-robot interaction (HRI) but typically remains limited to the early phases of development, with subsequent robot behaviours then being hardcoded by engineers or utilised in Wizard-of-Oz (WoZ) systems that rarely achieve autonomy. In this article, we present LEADOR (Led-by-Experts Automation and Design Of Robots), an end-to-end PD methodology for domain expert co-design, automation, and evaluation of social robot behaviour. This method starts with typical PD, working with the domain expert(s) to co-design the interaction specifications and the state and action space of the robot. It then replaces the traditional offline programming or WoZ phase with an in situ, online teaching phase where the domain expert can live-program or teach the robot how to behave whilst embedded in the interaction context. We point out that this live teaching phase is best achieved by adding a learning component to a WoZ setup, which captures the implicit knowledge of experts as they intuitively respond to the dynamics of the situation. The robot then progressively learns an appropriate, expert-approved policy, ultimately leading to full autonomy, even in sensitive and/or ill-defined environments. However, LEADOR is agnostic to the exact technical approach used to facilitate this learning process. The extensive inclusion of the domain expert(s) in robot design represents established responsible innovation practice, lending credibility to the system both during the teaching phase and when operating autonomously. The combination of this expert inclusion with the focus on in situ development also means that LEADOR supports a mutual shaping approach to social robotics. We draw on two previously published, foundational works from which this (generalisable) methodology has been derived to demonstrate the feasibility and worth of this approach, provide concrete examples of its application, and identify limitations and opportunities when applying this framework in new environments.
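The article does not reduce this to code, but the core of the online teaching phase — the expert supervises through a WoZ-style interface while a learner gradually takes over once its proposals are expert-approved — can be sketched roughly as below. All names and the tabular learner are illustrative assumptions; LEADOR itself is agnostic to the learning technique used.

```python
# Hypothetical sketch of an in situ, online teaching phase in the spirit of LEADOR:
# the domain expert supervises via a WoZ-style interface while a learner gradually
# takes over. All names here are illustrative; the methodology is learner-agnostic.

class Policy:
    """Toy tabular policy learned from expert demonstrations."""
    def __init__(self):
        self.table = {}                       # state -> expert-approved action

    def propose(self, state):
        return self.table.get(state)          # None while the state is still unknown

    def update(self, state, action):
        self.table[state] = action


class ScriptedExpert:
    """Stand-in for the domain expert operating the WoZ interface."""
    def choose_action(self, state):
        return "greet" if state == "child_arrives" else "wait"

    def approves(self, state, action):
        return action == self.choose_action(state)


def teaching_session(states, expert, policy):
    """One interaction: the robot proposes, the expert approves or overrides."""
    autonomous = 0
    for state in states:
        proposal = policy.propose(state)
        if proposal is not None and expert.approves(state, proposal):
            action = proposal                      # expert-approved autonomous action
            autonomous += 1
        else:
            action = expert.choose_action(state)   # expert teleoperates (WoZ)
            policy.update(state, action)           # learner captures the demonstration
        # ...the robot would execute `action` here...
    return autonomous


if __name__ == "__main__":
    policy, expert = Policy(), ScriptedExpert()
    session = ["child_arrives", "child_idle", "child_arrives"]
    for i in range(3):   # autonomy grows across repeated sessions
        print("session", i, "autonomous steps:", teaching_session(session, expert, policy))
```

Across repeated sessions the share of autonomous, expert-approved steps grows, mirroring the progressive hand-over to full autonomy described in the abstract.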


2021 ◽  
Author(s):  
◽  
Harith Al-Sahaf

Image classification is a core task in many applications of computer vision, including object detection and recognition. It aims at analysing the visual content and automatically categorising a set of images into different groups. The performance of image classification is largely affected by the features used to perform this task. Extracting features from images is challenging due to the large search space and practical requirements such as domain knowledge and human intervention. Human intervention is usually needed to identify a good set of keypoints (regions of interest), design a set of features to be extracted from those keypoints such as lines and corners, and develop a way to extract those features. Automating these tasks has great potential to dramatically decrease time and cost, and may also improve the performance of the classification task.

There are two well-recognised approaches in the literature to automating the identification of keypoints and the extraction of image features. The first is designing a set of domain-independent features, where the focus is on dividing the image into a number of predefined regions and extracting features from those regions. The second is synthesising a function or a set of functions to form an image descriptor that automatically detects a set of keypoints such as lines and corners and performs feature extraction. Although employing image descriptors is more effective and very popular in the literature, designing those descriptors is a difficult task that in most cases requires domain-expert intervention.

The overall goal of this thesis is to develop a new domain-independent Genetic Programming (GP) approach to image classification, utilising GP to evolve programs that automatically detect diverse and informative keypoints, design a set of features, and perform feature extraction using only a small number of training instances, while remaining robust to image changes such as illumination and rotation. The thesis focuses on incorporating a variety of simple arithmetic operators and first-order statistics (mid-level features) into the evolutionary process, and on GP representations that evolve programs robust to image changes.

This thesis proposes methods for domain-independent binary classification in images that use GP to automatically identify regions within an image with the potential to improve classification, while considering the limitation of having a small training set. Experimental results show that in over 67% of cases the new methods significantly outperform existing hand-crafted features and features automatically detected by other methods.

This thesis proposes the first GP approach for automatically evolving an illumination-invariant dense image descriptor that detects automatically designed keypoints and performs feature extraction using only a few instances of each class. The experimental results show an improvement of 86% on average compared to two GP-based methods, and the approach significantly outperforms domain-expert hand-crafted descriptors in more than 89% of cases.

This thesis also considers rotation variation of images and proposes a method for automatically evolving rotation-invariant image descriptors by integrating a set of first-order statistics as terminals. Compared to hand-crafted descriptors, the experimental results reveal that the proposed method performs significantly better in more than 83% of cases.

This thesis proposes a new GP representation that allows the system to automatically choose the length of the feature vector while evolving an image descriptor, which helps to reduce the number of parameters to be set. The results show that this method evolves descriptors with a very small feature vector that nevertheless significantly outperform the competitive methods in more than 91% of cases.

Finally, this thesis proposes a method for transfer learning by model in GP, where an image descriptor evolved on instances of a related problem (source domain) is applied directly to the problem being tackled (target domain). The results show that the new method evolves image descriptors with better generalisability than hand-crafted image descriptors; those automatically evolved descriptors show a positive influence on classifying the target-domain datasets in more than 56% of cases.
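The thesis evolves such descriptor programs with GP rather than writing them by hand; purely as an illustration of the ingredients described above (first-order statistics as terminals combined by simple arithmetic operators), a single hypothetical candidate descriptor could be evaluated as follows. The operator tree, window size, and feature pooling are invented for this sketch.

```python
# Illustrative sketch only: evaluating one candidate "descriptor program" whose
# terminals are first-order statistics of image windows. The concrete tree below
# is invented; in the thesis such trees are evolved by GP, not hand-written.
import numpy as np


def window_terminals(window):
    """First-order statistics used as GP terminals."""
    return {
        "mean": window.mean(),
        "std": window.std(),
        "min": window.min(),
        "max": window.max(),
    }


def candidate_descriptor(window):
    """One hypothetical evolved program combining terminals with arithmetic ops."""
    t = window_terminals(window)
    return np.array([
        (t["max"] - t["min"]) / (t["std"] + 1.0),
        t["mean"] - t["std"],
        t["mean"] / (t["max"] + 1.0),
    ])


def describe_image(image, win=8, stride=8):
    """Slide the descriptor over the image and average the per-window vectors."""
    feats = [
        candidate_descriptor(image[r:r + win, c:c + win])
        for r in range(0, image.shape[0] - win + 1, stride)
        for c in range(0, image.shape[1] - win + 1, stride)
    ]
    return np.mean(feats, axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img_a, img_b = rng.normal(0, 1, (32, 32)), rng.normal(0, 5, (32, 32))
    # With only a few labelled instances per class, such feature vectors would be
    # fed to a simple classifier (e.g. 1-nearest-neighbour) for classification.
    print(describe_image(img_a), describe_image(img_b))
```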


Minerals ◽  
2021 ◽  
Vol 11 (11) ◽  
pp. 1265
Author(s):  
Sebastian Iwaszenko ◽  
Leokadia Róg

The study of the petrographic structure of medium- and high-rank coals is important from both a cognitive and a utilitarian point of view. The petrographic constituents and their individual characteristics and features are responsible for the properties of coal and the way it behaves in various technological processes. This paper considers the application of convolutional neural networks to coal petrographic image segmentation. A U-Net-based model for segmentation was proposed. The network was trained to segment inertinite, liptinite, and vitrinite. Segmentations prepared manually by a domain expert were used as the ground truth. The results show that inertinite and vitrinite can be successfully segmented with minimal difference from the ground truth. Liptinite turned out to be much more difficult to segment; after applying transfer learning, moderate results were obtained. Nevertheless, the application of the U-Net-based network for petrographic image segmentation was successful, and the results are good enough to consider the method a supporting tool for domain experts in everyday work.
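The paper's exact network configuration is not reproduced here; a minimal U-Net-style sketch for four-class segmentation (background plus inertinite, liptinite, and vitrinite), with depth and channel widths chosen only for brevity, might look as follows.

```python
# Minimal U-Net-style sketch for 4-class segmentation (background, inertinite,
# liptinite, vitrinite). Depth, channel widths and input size are assumptions
# chosen for brevity, not the configuration used in the paper.
import torch
import torch.nn as nn


def double_conv(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )


class MiniUNet(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.enc1 = double_conv(3, 32)
        self.enc2 = double_conv(32, 64)
        self.bottleneck = double_conv(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = double_conv(128, 64)      # 64 (skip) + 64 (upsampled)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = double_conv(64, 32)       # 32 (skip) + 32 (upsampled)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                  # per-pixel class logits


if __name__ == "__main__":
    model = MiniUNet()
    logits = model(torch.randn(1, 3, 128, 128))   # one RGB micrograph tile
    print(logits.shape)                           # (1, 4, 128, 128)
    # Training would minimise nn.CrossEntropyLoss against expert ground-truth masks.
```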


2021 ◽  
Author(s):  
Thomas Schleider ◽  
Raphael Troncy ◽  
Thibault Ehrhart ◽  
Mareike Dorozynski ◽  
Franz Rottensteiner ◽  
...  

2021 ◽  
Author(s):  
Holger Regenbrecht ◽  
Noel Park ◽  
Stuart Duncan ◽  
Steven Mills ◽  
Rosa Lutz ◽  
...  

Developing, evaluating, and disseminating IT research prototypes for and with indigenous partners is both challenging and rewarding. In conjunction with our domain expert collaborators, Te Rau Aroha Marae (Bluff, Aotearoa/New Zealand) and our academic colleagues at the universities of Waikato and Canterbury, we are implementing a mixed reality telepresence system to connect a diasporic Māori community to their historical, cultural and geographic mātauranga (knowledge). In this article we describe our project, Ātea Presence, which is guided by the principles of partnership, participation, and protection. We describe the design and evaluation of the system developed, the collaborative process we undertook with Te Rau Aroha Marae and our Māori academic colleagues and report on lessons learned along the way.


2021 ◽  
Author(s):  
Ben Cardoen ◽  
Timothy Wong ◽  
Parsa Alan ◽  
Sieun Lee ◽  
Joanne Aiko Matsubara ◽  
...  

We introduce a novel method that is able to localize fluorescently labelled objects in multi-scale 2D microscopy and is robust to highly variable imaging conditions. Localized objects are then classified in a novel way using belief theory, requiring only the image-level label. Each object is assigned a 'belief' that describes how likely it is to appear in an image with a given set of labels. We apply our method successfully to identify amyloid-beta deposits, associated with Alzheimer's disease, and to discover caveolae and their modular components in super-resolution microscopy. We illustrate how our approach allows the fusion or combination of models learned across markedly different datasets. We also show how to compute the 'conflict', or disagreement, between the models, an insight that allows the domain expert to interpret the composite model.
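The 'belief' and 'conflict' terminology suggests Dempster-Shafer evidence theory; the sketch below shows the standard Dempster combination rule and the resulting conflict mass for two hypothetical per-object mass functions. It is a generic textbook illustration, not the authors' exact formulation, and the label set is invented.

```python
# Generic Dempster-Shafer sketch: combining two models' mass functions over an
# object's possible image-level labels and reporting their conflict. This is a
# textbook illustration of belief combination, not the paper's exact method.
from itertools import product

FRAME = frozenset({"amyloid_beta", "caveolae", "background"})


def combine(m1, m2):
    """Dempster's rule of combination: returns (combined masses, conflict K)."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                  # evidence on which the models disagree
    if conflict < 1.0:                           # normalise the non-conflicting mass
        combined = {k: v / (1.0 - conflict) for k, v in combined.items()}
    return combined, conflict


if __name__ == "__main__":
    # Hypothetical masses assigned by two models trained on markedly different datasets.
    model_a = {frozenset({"amyloid_beta"}): 0.7, FRAME: 0.3}
    model_b = {frozenset({"caveolae"}): 0.4, frozenset({"amyloid_beta", "caveolae"}): 0.6}
    fused, k = combine(model_a, model_b)
    print("conflict:", round(k, 3))              # how strongly the two models disagree
    for subset, mass in fused.items():
        print(set(subset), round(mass, 3))
```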


2021 ◽  
Author(s):  
Adrian Krenzer ◽  
Kevin Makowski ◽  
Amar Hekalo ◽  
Daniel Fitting ◽  
Joel Troya ◽  
...  

Background: Machine learning, especially deep learning, is becoming more and more relevant in research and development in the medical domain. For all supervised deep learning applications, data is the most critical factor in securing successful implementation and sustaining the progress of the machine learning model. Gastroenterological data in particular, which often involve endoscopic videos, are cumbersome to annotate, and domain experts are needed to interpret and annotate the videos. To support those domain experts, we generated a framework in which, instead of annotating every frame in the video sequence, experts perform only key annotations at the beginning and the end of sequences with pathologies, e.g. visible polyps. Non-expert annotators, supported by machine learning, then add the missing annotations for the frames in between.

Results: Using this framework we were able to reduce the workload of domain experts on average by a factor of 20. This is primarily due to the structure of the framework, which is designed to minimize the workload of the domain expert. Pairing this framework with a state-of-the-art semi-automated pre-annotation model enhances the annotation speed further. Through a study with 10 participants we show that semi-automated annotation using our tool doubles the annotation speed of non-expert annotators compared to a well-known state-of-the-art annotation tool.

Conclusion: In summary, we introduce a framework for fast expert annotation for gastroenterologists, which reduces the workload of the domain expert considerably while maintaining very high annotation quality. The framework incorporates a semi-automated annotation system utilizing trained object detection models. The software and framework are open source.
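As a rough illustration of the key-annotation idea (not the paper's implementation, which pairs key annotations with a trained object detection model), the sketch below pre-fills bounding boxes between two expert-annotated key frames by linear interpolation so that non-expert annotators only need to review and correct them.

```python
# Sketch of the key idea: domain experts annotate only the first and last frame
# of a sequence containing a pathology (e.g. a polyp); frames in between are
# pre-annotated automatically and then reviewed by non-experts. Linear box
# interpolation is an assumed stand-in for the framework's detector-assisted step.
from dataclasses import dataclass


@dataclass
class Box:
    x: float
    y: float
    w: float
    h: float


def interpolate_boxes(start: Box, end: Box, n_frames: int) -> list[Box]:
    """Pre-fill boxes for the frames between two expert key annotations."""
    boxes = []
    for i in range(n_frames):
        t = (i + 1) / (n_frames + 1)              # fraction of the way from start to end
        boxes.append(Box(
            x=start.x + t * (end.x - start.x),
            y=start.y + t * (end.y - start.y),
            w=start.w + t * (end.w - start.w),
            h=start.h + t * (end.h - start.h),
        ))
    return boxes


if __name__ == "__main__":
    expert_start = Box(100, 120, 40, 40)          # expert annotation on the first frame
    expert_end = Box(160, 140, 50, 45)            # expert annotation on the last frame
    for frame, box in enumerate(interpolate_boxes(expert_start, expert_end, 4), start=2):
        print(f"frame {frame}: {box}")            # proposals for non-expert review
```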


Author(s):  
Maurice Funk ◽  
Jean Christoph Jung ◽  
Carsten Lutz

We consider the problem of learning a concept or a query in the presence of an ontology formulated in the description logic ELr, in Angluin's framework of active learning, which allows the learning algorithm to interactively query an oracle (such as a domain expert). We show that the following can be learned in polynomial time: (1) EL-concepts, (2) symmetry-free ELI-concepts, and (3) conjunctive queries (CQs) that are chordal, symmetry-free, and of bounded arity. In all cases, the learner can pose membership queries based on ABoxes to the oracle, as well as equivalence queries that ask whether a given concept/query from the considered class is equivalent to the target. The restriction to bounded arity in (3) can be removed when we admit unrestricted CQs in equivalence queries. We also show that EL-concepts are not polynomial-query learnable in the presence of ELI-ontologies.
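The contribution is theoretical (polynomial-time learnability results), but the underlying protocol is Angluin-style exact learning with membership and equivalence queries. The toy sketch below illustrates only that query protocol on a deliberately trivial concept class; it does not implement the EL/ELI learning algorithms themselves.

```python
# Toy illustration of Angluin-style exact learning with membership and
# equivalence queries. The concept class here (subsets of a small finite domain)
# is a deliberately trivial stand-in; it only illustrates the oracle protocol,
# not the EL/ELI concept learning algorithms of the paper.

DOMAIN = {"a", "b", "c", "d", "e"}
TARGET = {"b", "d"}                    # known only to the oracle / domain expert


def membership_query(x):
    """Oracle answers whether a single example belongs to the target concept."""
    return x in TARGET


def equivalence_query(hypothesis):
    """Oracle says 'equivalent' or returns a counterexample where hypothesis and target differ."""
    diff = hypothesis ^ TARGET
    return (True, None) if not diff else (False, sorted(diff)[0])


def learn():
    hypothesis = set()                              # start with the empty concept
    while True:
        ok, counterexample = equivalence_query(hypothesis)
        if ok:
            return hypothesis
        if membership_query(counterexample):        # positive counterexample: add it
            hypothesis.add(counterexample)
        else:                                       # negative counterexample: remove it
            hypothesis.discard(counterexample)


if __name__ == "__main__":
    print("learned concept:", learn())              # matches TARGET after a few queries
```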

