data efficiency
Recently Published Documents

TOTAL DOCUMENTS: 95 (five years: 61)
H-INDEX: 11 (five years: 5)

2022 ◽  
Author(s):  
Maede Maftouni ◽  
Bo Shen ◽  
Andrew Chung Chee Law ◽  
Niloofar Ayoobi Yazdi ◽  
Zhenyu Kong

<p>The global extent of COVID-19 mutations and the consequent depletion of hospital resources highlighted the necessity of effective computer-assisted medical diagnosis. COVID-19 detection mediated by deep learning models can help diagnose this highly contagious disease and lower infectivity and mortality rates. Computed tomography (CT) is the preferred imaging modality for building automatic COVID-19 screening and diagnosis models. It is well known that the training set size significantly impacts the performance and generalization of deep learning models. However, accessing a large dataset of CT scan images for an emerging disease like COVID-19 is challenging. Therefore, data efficiency becomes a significant factor in choosing a learning model. To this end, we present a multi-task learning approach, namely a mask-guided attention (MGA) classifier, to improve the generalization and data efficiency of COVID-19 classification on lung CT scan images.</p><p>The novelty of this method lies in compensating for the scarcity of data by employing additional supervision with lesion masks, increasing the sensitivity of the model to COVID-19 manifestations and benefiting both generalization and classification performance. Our proposed model achieves better overall performance than the single-task baseline and state-of-the-art models, as measured by various popular metrics. In our experiments with different percentages of data from our curated dataset, the classification performance gain from this multi-task learning approach is more significant for smaller training sizes. Furthermore, experimental results demonstrate that our method enhances the focus on the lesions, as witnessed by both attention and attribution maps, resulting in a more interpretable model.</p>
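As a rough illustration of the mask-guided supervision idea, the sketch below (plain NumPy, not the authors' implementation) combines an image-level classification loss with a pixel-wise term that pulls a hypothetical attention map toward the lesion mask. The function names and the weighting parameter `lam` are assumptions for illustration only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mga_loss(cls_logit, label, attn_map, lesion_mask, lam=0.5):
    """Illustrative multi-task loss: image-level classification plus
    mask-guided attention supervision.

    cls_logit   : scalar logit for the COVID/non-COVID decision
    label       : ground-truth image label (0 or 1)
    attn_map    : (H, W) attention map in [0, 1] produced by the network
    lesion_mask : (H, W) binary lesion annotation used as extra supervision
    lam         : assumed weight trading off the auxiliary mask term
    """
    eps = 1e-7
    p = sigmoid(cls_logit)
    # Binary cross-entropy for the image-level label.
    cls_loss = -(label * np.log(p + eps) + (1 - label) * np.log(1 - p + eps))
    # Pixel-wise BCE pushing the attention map toward the lesion mask.
    mask_loss = -np.mean(
        lesion_mask * np.log(attn_map + eps)
        + (1 - lesion_mask) * np.log(1 - attn_map + eps)
    )
    return cls_loss + lam * mask_loss
```

An attention map that agrees with the lesion mask incurs a lower total loss than one that ignores it, which is the mechanism by which the extra supervision is meant to compensate for small training sets.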


2022 ◽  
Vol 11 (1) ◽  
pp. 4
Author(s):  
Mustafa Al Samara ◽  
Ismail Bennis ◽  
Abdelhafid Abouaissa ◽  
Pascal Lorenz

The Internet of Things (IoT) is now a reality in which large numbers of nodes serve a wide variety of applications. From small home networks to large-scale deployments, the aim is the same: transmitting data from the sensors to the base station. However, these data are susceptible to various factors that may degrade the quality of the collected data or the functioning of the network, and therefore the desired quality of service (QoS). In this context, one of the main issues requiring further research and adapted solutions is outlier detection. The challenge is to detect outliers and classify them either as errors to be ignored or as important events requiring action to prevent further service degradation. In this paper, we propose a comprehensive literature review of recent outlier detection techniques used in the IoT context. First, we present the fundamentals of outlier detection, discussing the different sources of outliers, the existing approaches, how an outlier detection technique can be evaluated, and the challenges faced in designing such techniques. Second, the most recent outlier detection techniques are compared and discussed, classified into seven main categories: statistical-based, clustering-based, nearest-neighbour-based, classification-based, artificial-intelligence-based, spectral-decomposition-based, and hybrid. For each category, the available techniques and related works are discussed, highlighting their respective advantages and disadvantages. Finally, a comparative study of these techniques is provided.
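As a concrete instance of the simplest category in such taxonomies, the sketch below shows a minimal statistical-based detector that flags sensor readings lying more than a chosen number of standard deviations from the mean. The function name and threshold are illustrative and not drawn from any specific surveyed technique.

```python
import numpy as np

def zscore_outliers(readings, threshold=3.0):
    """Statistical-based outlier detection: flag readings whose z-score
    exceeds `threshold` standard deviations from the sample mean."""
    x = np.asarray(readings, dtype=float)
    mu, sigma = x.mean(), x.std()
    if sigma == 0.0:
        # Constant signal: nothing can be flagged.
        return np.zeros(len(x), dtype=bool)
    return np.abs((x - mu) / sigma) > threshold
```

A detector like this is cheap enough to run on constrained IoT nodes, but, as surveys of the area point out, it assumes roughly unimodal data and a sensible choice of threshold.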


2021 ◽  
Vol 5 (4) ◽  
pp. 529
Author(s):  
Fajar Agung Nugroho ◽  
Fajar Septian ◽  
Dimas Abisono Pungkastyo ◽  
Joko Riyanto

Research and community service are obligations that every lecturer must fulfil as part of the Tri Dharma of Higher Education, alongside teaching. Research activities should carry a degree of innovation, in the form of the development or discovery of something new; with a large number of lecturers, however, many research and community service activities closely resemble previous ones. The Lembaga Penelitian dan Pengabdian kepada Masyarakat (LPPM) at Pamulang University faces several problems in managing research and community service activities: there is no system for managing these activities or the data on the track record of the research and community service already carried out by its 2,613 lecturers, which makes data hard to find, wastes storage space, and, most importantly, allows many similar proposals within the research itself. This study aims to develop an information system that can process research and community service activities and detect content similarity by applying the Cosine Similarity algorithm, thereby overcoming these problems. The system was developed using the waterfall method. The results show that the system is able to process lecturers' research and community service activities, support storage, and facilitate the screening of proposals for research and community service activities to be approved.
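The screening step rests on the standard Cosine Similarity measure. A minimal sketch using raw term-frequency vectors over word tokens looks like the following; the actual system may well use a different tokenization or TF-IDF weighting.

```python
import math
import re
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity between two documents represented as
    term-frequency vectors over their lowercase word tokens."""
    tf_a = Counter(re.findall(r"\w+", text_a.lower()))
    tf_b = Counter(re.findall(r"\w+", text_b.lower()))
    dot = sum(tf_a[t] * tf_b[t] for t in tf_a)
    norm_a = math.sqrt(sum(v * v for v in tf_a.values()))
    norm_b = math.sqrt(sum(v * v for v in tf_b.values()))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0
    return dot / (norm_a * norm_b)
```

A near-duplicate proposal scores close to 1.0 against its predecessor, so a simple threshold on this score can surface candidates for manual review.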


2021 ◽  
pp. 1-14
Author(s):  
Vencia D Herzog ◽  
Stefan Suwelack

Abstract Decisions in engineering design are closely tied to the 3D shape of the product. Limited availability of 3D shape data and expensive annotation present key challenges for using Artificial Intelligence in product design and development. In this work we explore transfer learning strategies to improve the data efficiency of geometric reasoning models based on deep neural networks, as used for tasks such as shape retrieval and design synthesis. We address the utilization of problem-related and un-annotated 3D data to compensate for small data volumes. Our experiments show promising results for knowledge transfer on mechanical component benchmarks.
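The general transfer-learning recipe behind such results can be sketched in a few lines: reuse a feature extractor pretrained on plentiful (possibly un-annotated) data, freeze it, and fit only a lightweight head on the small annotated set. The sketch below is not the authors' model; the random projection stands in for a pretrained geometric encoder, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pretrained geometric encoder: a fixed projection
# mapping raw shape descriptors to a compact feature space.
W_pretrained = rng.normal(size=(64, 8))

def encode(x):
    """Frozen feature extractor (weights are never updated)."""
    return np.tanh(x @ W_pretrained)

def fit_head(x_small, y_small):
    """Fit only a linear head on the small annotated dataset via least
    squares; the encoder stays frozen, so very few parameters are learned."""
    feats = encode(x_small)
    coef, *_ = np.linalg.lstsq(feats, y_small, rcond=None)
    return coef

def predict(x, coef):
    return encode(x) @ coef
```

Because only the small head is trained, far fewer annotated examples are needed than for training the full network end to end, which is the data-efficiency argument in a nutshell.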


Author(s):  
Sebastian Larsen ◽  
Paul A. Hooper

Abstract Highly complex data streams from in-situ additive manufacturing (AM) monitoring systems are becoming increasingly prevalent, yet finding physically actionable patterns remains a key challenge. Recent AM literature utilising machine learning methods tends to make predictions about flaws or porosity without considering the dynamical nature of the process. This leads to increases in false detections, as useful information about the signal is lost. This study takes a different approach and investigates learning a physical model of the laser powder bed fusion process dynamics. In addition, deep representation learning enables this to be achieved directly from high-speed videos. This representation is combined with a predictive state space model which is learned in a semi-supervised manner, requiring only the optimal laser parameter to be characterised. The model, referred to as FlawNet, was exploited to measure offsets between predicted and observed states, resulting in a highly robust metric known as the dynamic signature. This feature also correlated strongly with a global material quality metric, namely porosity. The model achieved state-of-the-art results with a receiver operating characteristic (ROC) area under curve (AUC) of 0.999 when differentiating between optimal and unstable laser parameters. Furthermore, there was a demonstrated potential to detect changes in ultra-dense, 0.1% porosity, materials with an ROC AUC of 0.944, suggesting an ability to detect anomalous events prior to the onset of significant material degradation. The method has merit for the purposes of detecting out-of-process distributions, while maintaining data efficiency. Subsequently, the generality of the methodology would suggest the solution is applicable to different laser processing systems and can potentially be adapted to a number of different sensing modalities.
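The core residual idea behind a "dynamic signature" can be illustrated with a toy linear stand-in for the learned state-space model (FlawNet itself is a deep model; everything below is an assumption for illustration): fit a transition model on states recorded under the optimal laser parameter, then score new sequences by the offset between predicted and observed next states.

```python
import numpy as np

def fit_dynamics(states):
    """Fit a linear state-transition model x_{t+1} ≈ x_t @ A by least
    squares over a sequence recorded under nominal process conditions."""
    X, Y = states[:-1], states[1:]
    A, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return A

def dynamic_signature(states, A):
    """Per-step offset between predicted and observed states; large
    values indicate the process has left the nominal dynamics."""
    residuals = states[1:] - states[:-1] @ A
    return np.linalg.norm(residuals, axis=1)
```

On nominal data the signature stays near zero, while a perturbed state produces a spike, which is how residuals against a learned dynamics model can flag anomalous events before gross material degradation.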


2021 ◽  
pp. 027836492110382
Author(s):  
Beomjoon Kim ◽  
Luke Shimanuki ◽  
Leslie Pack Kaelbling ◽  
Tomás Lozano-Pérez

We present a framework for learning to guide geometric task-and-motion planning (G-TAMP). G-TAMP is a subclass of task-and-motion planning in which the goal is to move multiple objects to target regions among movable obstacles. A standard graph search algorithm is not directly applicable, because G-TAMP problems involve hybrid search spaces and expensive action feasibility checks. To handle this, we introduce a novel planner that extends basic heuristic search with random sampling and a heuristic function that prioritizes feasibility checking on promising state–action pairs. The main drawback of such pure planners is that they lack the ability to learn from planning experience to improve their efficiency. We propose two learning algorithms to address this. The first is an algorithm for learning a rank function that guides the discrete task-level search, and the second is an algorithm for learning a sampler that guides the continuous motion-level search. We propose design principles for designing data-efficient algorithms for learning from planning experience and representations for effective generalization. We evaluate our framework in challenging G-TAMP problems, and show that we can improve both planning and data efficiency.
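The planner's key trick of prioritizing feasibility checks on promising state–action pairs can be sketched as a best-first search in which a cheap priority score orders candidates and the expensive geometric feasibility check runs lazily, only when a pair is actually popped. This is a generic sketch of the idea, not the authors' G-TAMP planner; all names and the toy domain in the test are assumptions.

```python
import heapq

def guided_search(start, is_goal, successors, priority, feasible):
    """Best-first search with lazy feasibility checking: candidates are
    ordered by a cheap `priority` score, and the expensive `feasible`
    check is deferred until a (state, action) pair is popped."""
    counter = 0  # tie-breaker so heapq never compares states directly
    frontier = [(priority(start, None), counter, start, None)]
    visited = set()
    checks = 0
    while frontier:
        _, _, state, action = heapq.heappop(frontier)
        if state in visited:
            continue
        if action is not None:
            checks += 1                      # stand-in for the costly check
            if not feasible(state, action):
                continue                     # infeasible: never expanded
        visited.add(state)
        if is_goal(state):
            return state, checks
        for nxt, act in successors(state):
            counter += 1
            heapq.heappush(frontier, (priority(nxt, act), counter, nxt, act))
    return None, checks
```

Because unpromising pairs sink in the queue and may never be popped, their feasibility checks are never paid for, which is precisely where a learned rank function or sampler can buy planning efficiency.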


2021 ◽  
Author(s):  
Ruiqi Tang ◽  
Ziyi Zhao ◽  
Kailun Wang ◽  
Xiaoli Gong ◽  
Jin Zhang ◽  
...  
