incremental approach
Recently Published Documents


TOTAL DOCUMENTS

408
(FIVE YEARS 80)

H-INDEX

28
(FIVE YEARS 4)

eLife ◽  
2022 ◽  
Vol 11 ◽  
Author(s):  
Ko Sugawara ◽  
Çağrı Çevrim ◽  
Michalis Averof

Deep learning is emerging as a powerful approach for bioimage analysis. Its use in cell tracking is limited by the scarcity of annotated data for the training of deep-learning models. Moreover, annotation, training, prediction, and proofreading currently lack a unified user interface. We present ELEPHANT, an interactive platform for 3D cell tracking that addresses these challenges by taking an incremental approach to deep learning. ELEPHANT provides an interface that seamlessly integrates cell track annotation, deep learning, prediction, and proofreading. This enables users to implement cycles of incremental learning starting from a few annotated nuclei. Successive prediction-validation cycles enrich the training data, leading to rapid improvements in tracking performance. We test the software’s performance against state-of-the-art methods and track lineages spanning the entire course of leg regeneration in a crustacean over 1 week (504 timepoints). ELEPHANT yields accurate, fully-validated cell lineages with a modest investment in time and effort.
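The annotate → train → predict → proofread cycle described above can be sketched generically. The following is a minimal illustration using a scikit-learn classifier on synthetic 2-D points in place of ELEPHANT's deep-learning model; all names and numbers are illustrative, not ELEPHANT's actual API.

```python
# Generic sketch of an incremental-learning cycle: train on a few annotations,
# predict on the rest, accept proofread predictions into the training set.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # ground truth, standing in for proofread labels

labeled = list(range(10))                 # start from a handful of annotations
unlabeled = list(range(10, 500))
model = LogisticRegression()

for cycle in range(5):
    model.fit(X[labeled], y[labeled])     # (re)train on the current annotations
    predictions = model.predict(X[unlabeled])   # predict the unannotated part
    # proofreading: a validated batch of predictions joins the training data
    batch, unlabeled = unlabeled[:50], unlabeled[50:]
    labeled.extend(batch)

print(len(labeled))  # 260: the training set grew from 10 annotated points
```

Each prediction-validation cycle enlarges the training set, which is the mechanism the abstract credits for the rapid improvement in tracking performance.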


2021 ◽  
Author(s):  
Dominique Salacz ◽  
Farid Allam ◽  
Imre Szilagyi ◽  
Yousof Al Mansoori

Abstract After the oil price crashes in 2014 and 2020, several M&A deals ended up in legal disputes because operators cancelled major projects or infill wells that were booked in the "probable" reserves only. This document challenges the compatibility between the deterministic incremental reserve assessment method (PRMS 2018, chapter 4.2.1.3) and the concept of split conditions (PRMS 2018, chapter 2.2.0.3), which are not allowed for reserves booking under PRMS. With a few examples, we explain why the incremental method may mislead investors if used wrongly. Policies, stock market requirements, or simply the understanding of reserves guidelines may differ from one company to another. Many filers and auditors are still keen on using the deterministic incremental approach. This method consists of "defining discrete parts or segments of the accumulation that reflect high, best, and low confidence regarding the estimates of recoverable quantities under the defined development plan". In principle, this should give results similar to the widely accepted scenario method (PRMS 2018, chapter 4.2.1.3), but in reality major discrepancies are observed. Some reserve evaluations may also become misleading for banks, investors, or even for good asset management. In many cases the estimation of recoverable volumes is reasonable, but it does not match the company's CAPEX requirements, affecting corporate cash flow as well as potential Reserves Based Lending (RBL) requirements. In other cases the 1P case will be robust, but the 2P may be grossly overestimated, affecting M&A or the share price. "Reserves guidelines are principle based" has recently become a very fashionable statement in the context of SEC bookings. Similar discussions will also occur when reviewing PRMS reports. However, different interpretations of keywords such as "Project", "Split condition", or "FID" should not prevent the evaluator from providing a reliable reserves estimate to investors or company management.
This document questions the threshold at which ethics disappears and a Madoff scheme may become legal.


2021 ◽  
Vol 212 ◽  
pp. 105067
Author(s):  
Yigezu A. Yigezu ◽  
Tamer El-Shater ◽  
Mohamed Boughlala ◽  
Mina Devkota ◽  
Rachid Mrabet ◽  
...  

Author(s):  
Sven Schneider ◽  
Leen Lambers ◽  
Fernando Orejas

Abstract We introduce a logic-based incremental approach to graph repair, generating a sound and complete (upon termination) overview of least-changing graph repairs from which a user may select a repair based on non-formalized further requirements. The incremental approach features delta preservation: it can restrict the generation of graph repairs to delta-preserving repairs, which do not revert the additions and deletions of the most recent consistency-violating graph update. We specify consistency of graphs using the logic of nested graph conditions, which is equivalent to first-order logic on graphs. Technically, the incremental approach encodes whether and how the graph under repair satisfies a graph condition using the novel data structure of satisfaction trees, which are adapted incrementally according to the graph updates applied. In addition to the incremental approach, we also present two state-based graph repair algorithms, which restore consistency of a graph independently of the most recent graph update and which generate additional graph repairs from a global perspective on the graph under repair. We evaluate the developed algorithms using our prototypical implementation in the tool AutoGraph and illustrate our incremental approach with a case study from the graph database domain.
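As a toy illustration of repair enumeration (far simpler than the satisfaction trees and nested conditions of the abstract), consider a single fixed condition, "every node labelled A has an outgoing edge to some node labelled B", on a small dict-based graph; all names and the repair menu below are invented for this sketch.

```python
# Toy repair enumeration for one fixed graph condition:
# "every node labelled A has an outgoing edge to some node labelled B".
def violations(nodes, edges):
    # nodes: {node_id: label}; edges: set of (src, dst) pairs
    b_nodes = {n for n, lbl in nodes.items() if lbl == "B"}
    return [n for n, lbl in nodes.items()
            if lbl == "A" and not any((n, b) in edges for b in b_nodes)]

def least_changing_repairs(nodes, edges):
    # each violating A-node is repaired by deleting it or adding one edge to a B-node
    repairs = []
    b_nodes = [n for n, lbl in nodes.items() if lbl == "B"]
    for v in violations(nodes, edges):
        repairs.append(("delete-node", v))
        repairs.extend(("add-edge", (v, b)) for b in b_nodes)
    return repairs

nodes = {1: "A", 2: "A", 3: "B"}
edges = {(1, 3)}
print(least_changing_repairs(nodes, edges))  # node 2 violates: delete it, or add edge (2, 3)
```

An incremental variant in the spirit of the paper would re-check only the nodes touched by the latest update rather than rescanning the whole graph.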


Author(s):  
A. Iodice D’Enza ◽  
A. Markos ◽  
F. Palumbo

Abstract Standard multivariate techniques like Principal Component Analysis (PCA) are based on the eigendecomposition of a matrix and therefore require complete data sets. Recent comparative reviews of PCA algorithms for missing data showed the regularised iterative PCA algorithm (RPCA) to be effective. This paper presents two chunk-wise implementations of RPCA suitable for the imputation of “tall” data sets, that is, data sets with many observations. A “chunk” is a subset of the whole set of available observations. One implementation is suitable for distributed computation, as it imputes each chunk independently. The other is suitable for incremental computation, where the imputation of each new chunk is based on all the chunks analysed thus far. The proposed procedures were compared to batch RPCA on different data sets and missing-data mechanisms. Experimental results showed that the distributed approach performs similarly to batch RPCA on data with entries missing completely at random, while the incremental approach shows appreciable performance when the data are missing not completely at random and the first analysed chunks contain sufficient information on the data structure.
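The iterative imputation core that RPCA builds on can be sketched in a few lines: fill the missing cells with column means, then alternate a low-rank SVD reconstruction with re-filling of only the missing cells. This is a minimal, unregularised sketch of that core idea; the paper's regularisation and chunk-wise logic are omitted.

```python
# Minimal sketch of iterative PCA imputation (the batch core behind RPCA).
import numpy as np

def iterative_pca_impute(X, rank=1, n_iter=200):
    X = X.copy()
    mask = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[mask] = np.take(col_means, np.nonzero(mask)[1])   # initial fill
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
        recon = mu + (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X[mask] = recon[mask]                           # re-fill only the missing cells
    return X

# rank-1 toy data: the second column is twice the first
X = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, np.nan]])
print(iterative_pca_impute(X)[2, 1])
```

In the distributed variant described in the abstract, each chunk of rows would be imputed independently by such a procedure; in the incremental variant, the PCA model would instead be updated as each new chunk arrives.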


2021 ◽  
Vol 15 ◽  
pp. 14-18
Author(s):  
Arun Pratap Singh Kushwah ◽  
Shailesh Jaloree ◽  
Ramjeevan Singh Thakur

Clustering is a data-mining approach that helps uncover the hidden underlying structure in a dataset. K-means is a clustering method that uses distance functions to find the similarities or dissimilarities between instances. DBSCAN is a clustering algorithm that discovers clusters of arbitrary shapes and sizes in large volumes of data using a spatial density method. These two classical approaches cluster efficiently but underperform when the data in the database is updated frequently, so incremental or gradual clustering approaches are preferred in such environments. In this paper, an incremental approach to clustering using K-means and DBSCAN is introduced to handle new data dynamically added to the database at intervals.
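The incremental update idea can be illustrated with scikit-learn's MiniBatchKMeans, whose partial_fit absorbs newly arriving batches without re-clustering the old data; this is a generic stand-in for the abstract's K-means + DBSCAN hybrid, not the paper's algorithm.

```python
# Incremental centroid-based clustering on data that arrives in batches,
# via MiniBatchKMeans.partial_fit (a stand-in for the K-means + DBSCAN hybrid).
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(1)
true_centers = np.array([[0.0, 0.0], [5.0, 5.0]])
model = MiniBatchKMeans(n_clusters=2, n_init=1, random_state=0)

for _ in range(20):  # each iteration: a new batch appended to the database
    labels = rng.integers(0, 2, size=100)
    batch = true_centers[labels] + rng.normal(scale=0.3, size=(100, 2))
    model.partial_fit(batch)         # update centroids incrementally

print(np.round(np.sort(model.cluster_centers_, axis=0), 1))  # centroids near (0,0) and (5,5)
```

A density-based counterpart would similarly restrict each update to the neighbourhood of the newly inserted points rather than re-running DBSCAN on the full database.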


2021 ◽  
Vol 18 (3) ◽  
pp. 172988142110121
Author(s):  
David Portugal ◽  
André G Araújo ◽  
Micael S Couceiro

To move out of the lab, service robots must demonstrate proven robustness so that they can be deployed in operational environments. This means they should function steadily for long periods of time in real-world areas under uncertainty, without any human intervention, and exhibit a mature technology readiness level. In this work, we describe an incremental methodology for the implementation of an innovative service robot, developed entirely from the outset, to monitor large indoor areas shared by humans and other obstacles. Focusing especially on the long-term reliability of the robot's fundamental localization system, we discuss all the incremental software and hardware features, design choices, and adjustments made, and show their impact on the robot's performance in the real world across three distinct 24-hour trials, with the ultimate goal of validating the proposed mobile robot solution for indoor monitoring.

