The Automatic Methods Group Newsletter

1998 ◽  
Vol 20 (4) ◽  
pp. 121-128
Author(s):  
J.P. Fallon ◽  
P.J. Gregory ◽  
C.J. Taylor

Quantitative image analysis systems have been used for several years in research and quality control applications in various fields, including metallurgy and medicine. The technique has been applied as an extension of subjective microscopy to problems that require quantitative results and are amenable to automatic methods of interpretation.

Feature extraction. In the most general sense, a feature can be defined as a portion of the image which differs in some consistent way from the background. A feature may be characterized by the density difference between itself and the background, by an edge gradient, or by the spatial frequency content (texture) within its boundaries. The task of feature extraction includes recognition of features and encoding of the associated information for quantitative analysis.

Quantitative analysis. Quantitative analysis is the determination of one or more physical measurements of each feature. These measurements may be straightforward ones such as area, length, or perimeter, or more complex stereological measurements such as convex perimeter or Feret's diameter.
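The per-feature measurements named above map directly onto what modern image-analysis libraries expose for each detected region. Below is a minimal Python sketch, assuming scikit-image is available; the Otsu threshold used to separate features from the background and the particular region properties reported are illustrative choices, not the authors' system.

```python
import numpy as np
from skimage import filters, measure

def measure_features(image: np.ndarray):
    """Segment features from the background by a grey-level threshold,
    then report simple quantitative measurements for each feature."""
    # Separate features from background (here: a global Otsu threshold).
    mask = image > filters.threshold_otsu(image)
    # Label connected regions so each feature can be measured individually.
    labels = measure.label(mask)
    results = []
    for region in measure.regionprops(labels):
        results.append({
            "area": region.area,                          # pixel count
            "perimeter": region.perimeter,                # boundary length
            "feret_diameter": region.feret_diameter_max,  # longest caliper diameter
        })
    return results
```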


2018 ◽  
Vol 15 (4) ◽  
pp. 45-60
Author(s):  
Negar Abbasi ◽  
Ali Moeini ◽  
Taghi Javdani Gandomani

Identification of web service candidates in legacy software is a crucial process in the reengineering of legacy systems to service-oriented architecture. Researchers have proposed various automatic and semi-automatic methods for this purpose, some of which have proved to be quite efficient, but there are still gaps that need to be addressed. This article examines the strengths and weaknesses of previous methods and develops a method with improved service candidate identification performance. In this article, service identification is treated as a search and optimization problem, and a firefly algorithm is developed accordingly to give high-quality solutions in reasonably short time. A filtering method is also developed to remove excess modules (false positives) from the algorithm outputs. A case study on a legacy flight reservation system demonstrates the high reliability of the outputs given by the proposed method.
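As a rough illustration of treating service identification as a search and optimization problem, the sketch below implements a generic firefly search over a continuous encoding. The fitness function, the decoding of positions into module-to-service assignments, and all parameter values are placeholders; the article's actual formulation and its filtering step are not reproduced here.

```python
import numpy as np

def firefly_optimize(fitness, dim, n_fireflies=25, n_iter=200,
                     alpha=0.2, beta0=1.0, gamma=1.0, rng=None):
    """Generic firefly search maximising `fitness` over a continuous encoding.

    For service identification, `fitness` would score a candidate grouping of
    legacy modules (e.g. high cohesion, low coupling) and each position vector
    would be decoded into module-to-service assignments; both are placeholders.
    """
    rng = rng or np.random.default_rng(0)
    x = rng.random((n_fireflies, dim))          # random initial population
    light = np.array([fitness(p) for p in x])   # brightness = fitness value

    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if light[j] > light[i]:          # move firefly i toward brighter j
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)   # attractiveness decays with distance
                    x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                    x[i] = np.clip(x[i], 0.0, 1.0)
                    light[i] = fitness(x[i])
    best = int(np.argmax(light))
    return x[best], light[best]
```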


2021 ◽  
Vol 35 (2) ◽  
pp. 209-222
Author(s):  
Dylan Serillon ◽  
Carles Bo ◽  
Xavier Barril

Abstract The design of new host–guest complexes represents a fundamental challenge in supramolecular chemistry. At the same time, it opens new opportunities in materials science and biotechnological applications. A computational tool capable of automatically predicting the binding free energy of any host–guest complex would be a great aid in the design of new host systems, or in identifying new guest molecules for a given host. We aim to build such a platform and have used the SAMPL7 challenge to test several methods and design a specific computational pipeline. Predictions will be based on machine learning (when previous knowledge is available) or on a physics-based method (otherwise). The former delivered predictions with an RMSE of 1.67 kcal/mol but will require further work to identify when a specific system is outside the scope of the model. The latter combines the semiempirical GFN2B functional with docking, molecular mechanics, and molecular dynamics. Correct predictions (RMSE of 1.45 kcal/mol) are contingent on the identification of the correct binding mode, which can be very challenging for host–guest systems with a large number of degrees of freedom. Participation in the blind SAMPL7 challenge provided fundamental direction to the project. More advanced versions of the pipeline will be tested against future SAMPL challenges.
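A hypothetical sketch of the routing logic such a pipeline implies is shown below: a complex is sent to a trained machine-learning model when it falls inside the model's domain of applicability, and to the physics-based workflow otherwise. The `in_domain` check, the model object, and the `physics_pipeline` callable are all assumed placeholders, not the authors' implementation.

```python
from typing import Callable

def predict_binding_free_energy(host_guest,
                                ml_model,
                                in_domain: Callable[[object], bool],
                                physics_pipeline: Callable[[object], float]) -> float:
    """Route a host-guest complex either to a trained ML model (when prior
    data cover that chemistry) or to a physics-based workflow
    (docking -> MM/MD refinement -> semiempirical scoring) otherwise.

    All three callables and the `features` attribute are placeholders for
    the components described in the abstract, not the authors' code."""
    if in_domain(host_guest):
        return float(ml_model.predict([host_guest.features])[0])  # kcal/mol
    return physics_pipeline(host_guest)                           # kcal/mol
```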


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Margherita Mottola ◽  
Stephan Ursprung ◽  
Leonardo Rundo ◽  
Lorena Escudero Sanchez ◽  
Tobias Klatte ◽  
...  

Abstract Computed Tomography (CT) is widely used in oncology for morphological evaluation and diagnosis, commonly through visual assessments, often supported by semi-automatic tools. Well-established automatic methods for quantitative imaging offer the opportunity to enrich the radiologist's interpretation with a large number of radiomic features, which need to be highly reproducible to be used reliably in clinical practice. This study investigates feature reproducibility against noise, varying resolutions, and segmentations (achieved by perturbing the regions of interest) in a CT dataset with heterogeneous voxel sizes comprising 98 renal cell carcinomas (RCCs) and 93 contralateral normal kidneys (CKs). In particular, first order (FO) features and second order texture features based on both 2D and 3D grey level co-occurrence matrices (GLCMs) were considered. Moreover, this study carries out a comparative analysis of three of the most commonly used interpolation methods, one of which must be selected before any resampling procedure. Results showed that Lanczos interpolation is the most effective at preserving original information during resampling, and that the median slice resolution coupled with the native slice spacing allows the best reproducibility, with 94.6% and 87.7% of features reproducible in RCC and CK, respectively. GLCMs show their maximum reproducibility when used at short distances.
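To make the GLCM result concrete, here is a short Python sketch of second-order texture features computed from a 2D grey level co-occurrence matrix at short pixel distances, using scikit-image. The grey-level quantisation, the distance and angle choices, and the selected properties are illustrative assumptions, not the study's exact radiomics configuration.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features_2d(slice_2d: np.ndarray, levels: int = 32,
                     distances=(1, 2), angles=(0, np.pi / 2)):
    """Second-order texture features from a 2D grey level co-occurrence
    matrix, using short pixel distances (1-2) in line with the finding
    that GLCM features are most reproducible at short distances."""
    # Quantise grey levels so the co-occurrence matrix stays small.
    bins = np.linspace(slice_2d.min(), slice_2d.max(), levels)
    q = (np.digitize(slice_2d, bins) - 1).astype(np.uint8)
    glcm = graycomatrix(q, distances=distances, angles=angles,
                        levels=levels, symmetric=True, normed=True)
    # Average each property over all distance/angle combinations.
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "correlation", "energy")}
```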


2020 ◽  
pp. 1-21 ◽  
Author(s):  
Clément Dalloux ◽  
Vincent Claveau ◽  
Natalia Grabar ◽  
Lucas Emanuel Silva Oliveira ◽  
Claudia Maria Cabral Moro ◽  
...  

Abstract Automatic detection of negated content is often a prerequisite in information extraction systems in various domains. This task is particularly important in the biomedical domain, where negation plays a central role. In this work, two main contributions are proposed. First, we work with languages which have been poorly addressed up to now: Brazilian Portuguese and French. Thus, we developed new corpora for these two languages which have been manually annotated to mark up the negation cues and their scope. Second, we propose automatic methods based on supervised machine learning approaches for the automatic detection of negation cues and of their scopes. The methods prove to be robust in both languages (Brazilian Portuguese and French) and in cross-domain (general and biomedical language) contexts. The approach is also validated on English data from the state of the art: it yields very good results and outperforms other existing approaches. In addition, the application is accessible and usable online. We expect that, through these contributions (new annotated corpora, an application accessible online, and cross-domain robustness), the reproducibility of the results and the robustness of NLP applications will be improved.
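Negation cue and scope detection is typically cast as token-level sequence labelling for supervised learners of the kind mentioned above. The sketch below shows one common encoding (BIO labels over tokens); the label set, span format, and the French example in the comment are hypothetical illustrations, not the authors' corpora or models.

```python
def bio_encode(tokens, cue_spans, scope_spans):
    """Encode negation cues and their scopes as token-level BIO labels,
    the format typically fed to a supervised sequence labeller.

    Spans are (start, end) token index pairs, end exclusive.
    Labels: B-CUE/I-CUE for cue tokens, B-SCOPE/I-SCOPE for in-scope
    tokens, O elsewhere."""
    labels = ["O"] * len(tokens)
    for start, end in scope_spans:
        labels[start] = "B-SCOPE"
        for i in range(start + 1, end):
            labels[i] = "I-SCOPE"
    for start, end in cue_spans:          # cue labels take precedence
        labels[start] = "B-CUE"
        for i in range(start + 1, end):
            labels[i] = "I-CUE"
    return list(zip(tokens, labels))

# Hypothetical French example ("the patient shows no sign of infection"):
# bio_encode("le patient ne présente aucun signe d'infection".split(),
#            cue_spans=[(2, 3), (4, 5)], scope_spans=[(3, 7)])
```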

