ACME: Automatic feature extraction for Cell Migration Examination through intravital microscopy imaging

2022 ◽  
pp. 102358
Author(s):  
Miguel Molina-Moreno ◽  
Iván González-Díaz ◽  
Jon Sicilia ◽  
Georgiana Crainiciuc ◽  
Miguel Palomino-Segura ◽  
...  
Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1429
Author(s):  
Gang Hu ◽  
Kejun Wang ◽  
Liangliang Liu

In the complex marine environment, underwater acoustic target feature extraction and recognition from ship-radiated noise are extremely challenging. In this paper, a new deep neural network model for underwater target recognition is first proposed that takes the raw one-dimensional time-domain ship signal as its input. Depthwise separable convolution and time-dilated convolution are used for passive underwater acoustic target recognition for the first time. The proposed model performs automatic feature extraction from raw ship-radiated noise and applies temporal attention during recognition. Secondly, the model is evaluated on measured data, and cluster and visualization analyses are performed on the features it extracts. The results show that the extracted features exhibit good intra-class aggregation and inter-class separation. Furthermore, cross-fold validation is used to verify that the model does not overfit, which improves its generalization ability. Finally, the model is compared with traditional underwater acoustic target recognition methods and improves accuracy significantly, by 6.8%.
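As a rough illustration of the building blocks named in this abstract, the sketch below combines depthwise separable 1-D convolutions, time-dilated convolutions and a simple temporal-attention pooling in TensorFlow/Keras. The input length, filter counts, class count and the exact attention mechanism are assumptions for illustration only, not the authors' architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(signal_len=16000, n_classes=5):
    # signal_len and n_classes are hypothetical; the paper does not state them here
    inp = layers.Input(shape=(signal_len, 1))          # raw 1-D time-domain waveform

    # Depthwise separable convolutions: per-channel filtering plus pointwise mixing
    x = layers.SeparableConv1D(32, 9, strides=4, padding="same", activation="relu")(inp)
    x = layers.SeparableConv1D(64, 9, strides=4, padding="same", activation="relu")(x)

    # Time-dilated convolutions widen the receptive field without extra downsampling
    for rate in (1, 2, 4, 8):
        x = layers.Conv1D(64, 3, padding="same", dilation_rate=rate, activation="relu")(x)

    # Simple temporal attention: learn one weight per time step and pool with it
    scores = layers.Dense(1)(x)                        # (batch, T, 1)
    weights = layers.Softmax(axis=1)(scores)
    x = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([x, weights])

    out = layers.Dense(n_classes, activation="softmax")(x)
    return models.Model(inp, out)

model = build_model()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Separable convolutions keep the parameter count low on long raw waveforms, while the increasing dilation rates widen the temporal receptive field without further downsampling.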


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Yue Liu ◽  
Zongjin Li

Abstract Acute kidney injury (AKI) is a common clinical condition, manifested mainly by elevated serum creatinine and blood urea nitrogen levels. When AKI is not repaired in time, the patient is prone to developing chronic kidney disease (CKD). The kidney is composed of more than 30 different cell types and its structure is complex, so it is extremely challenging to understand the lineage relationships and fate of these cells during kidney injury and regeneration. Since the 20th century, lineage tracing technology has provided an important means for studying organ development, tissue damage repair, and the differentiation and fate of single cells. However, traditional lineage tracing methods rely on sacrificing animals to prepare tissue sections and then taking snapshots with conventional imaging tools, so they cannot dynamically and continuously monitor cell behavior in living animals. As a kind of intravital microscopy (IVM), two-photon microscopy (TPM) has successfully solved these problems. Because TPM can penetrate deep tissue and image at the single-cell level, lineage tracing with TPM is gradually becoming popular. In this review, we present the key technical elements of lineage tracing and how intravital imaging can be used to visualize and quantify the fate of renal cells.


Mathematics ◽  
2021 ◽  
Vol 9 (6) ◽  
pp. 624
Author(s):  
Stefan Rohrmanstorfer ◽  
Mikhail Komarov ◽  
Felix Mödritscher

With the ever-increasing amount of image data, it has become necessary to automatically search for and process the information contained in these images. As fashion is captured in images, the fashion sector provides an ideal foundation for services and applications built on image classification models. In this article, the state of the art in image classification is analyzed and discussed. Based on this analysis, four different approaches are implemented to extract features from fashion data. For this purpose, a dataset of 2567 images of fashion worn by people was created and then significantly enlarged through image augmentation operations. The results show that convolutional neural networks are the undisputed standard for classifying images, and that TensorFlow is the best library to build them. Moreover, through the introduction of dropout layers, data augmentation and transfer learning, model overfitting was successfully prevented, and the validation accuracy on the created dataset was incrementally improved from an initial 69% to a final 84%. More distinctive apparel such as trousers, shoes and hats was classified better than other, upper-body clothing.
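A minimal sketch of the three countermeasures the article credits with preventing overfitting (dropout layers, data augmentation and transfer learning) could look as follows in TensorFlow/Keras; the MobileNetV2 backbone, input size and class count are placeholders, not the configuration used in the article.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 10  # placeholder; the article's dataset covers several apparel categories

# On-the-fly data augmentation
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

# Transfer learning: a frozen ImageNet backbone with a small trainable head
base = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3),
                                         include_top=False, weights="imagenet")
base.trainable = False

inputs = layers.Input(shape=(224, 224, 3))
x = augment(inputs)
x = layers.Rescaling(1.0 / 127.5, offset=-1.0)(x)   # scale pixels to [-1, 1] as MobileNetV2 expects
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.5)(x)                          # dropout layer to curb overfitting
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```

Freezing the backbone lets a small dataset train only the classification head first; unfreezing the top layers afterwards for fine-tuning is a common next step.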


2020 ◽  
Vol 20 (5) ◽  
pp. 60-67
Author(s):  
Dilara Gumusbas ◽  
Tulay Yildirim

Abstract Offline signature is a frequently used biometric trait in daily life, yet skilled forgeries pose a great challenge for offline signature verification. To differentiate forgeries, a variety of research has so far been conducted on hand-crafted feature extraction methods. Recently, however, these methods have been set aside in favour of automatic feature extraction methods such as Convolutional Neural Networks (CNN). Although these CNN-based algorithms often achieve satisfactory results, they require either many training samples or pre-trained network weights. Recently, the Capsule Network has been proposed to model with less data while still exploiting convolutional layers for automatic feature extraction. Moreover, its feature representations are vectors rather than the scalar activations of a CNN, which preserves orientation information. Since signature samples per user are limited and feature orientations in signatures are highly informative, this paper first aims to evaluate the capability of the Capsule Network for signature identification on three benchmark databases. The Capsule Network achieves 97 and 96%, 94 and 89%, and 95 and 91% accuracy on the CEDAR, GPDS-100 and MCYT databases for 64×64 and 32×32 resolutions, respectively, which are lower than the usual input resolutions. The second aim of the paper is to generalize the capability of the Capsule Network to the verification task, where it achieves average accuracies of 91, 86 and 89% on the CEDAR, GPDS-100 and MCYT databases at 64×64 resolution, respectively. Through this evaluation, the capability of the Capsule Network is demonstrated for offline signature verification and identification tasks.
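For orientation, the sketch below shows the two ingredients that distinguish a Capsule Network front end from a plain CNN: vector-valued primary capsules and the squash non-linearity. The layer sizes and the 64×64 grayscale input are assumptions matching the resolutions mentioned in the abstract, not the authors' exact design, and the routing layer that would map primary capsules to per-writer capsules is only indicated.

```python
import tensorflow as tf
from tensorflow.keras import layers

def squash(v, axis=-1, eps=1e-7):
    # Capsule non-linearity: short vectors shrink toward zero, long vectors toward unit length
    sq_norm = tf.reduce_sum(tf.square(v), axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * v / tf.sqrt(sq_norm + eps)

def primary_capsules(x, n_capsules=8, dim=8):
    # Convolutional features are regrouped into vectors (capsules) so that orientation
    # information is kept, instead of the scalar activations of a plain CNN.
    x = layers.Conv2D(n_capsules * dim, kernel_size=9, strides=2, activation="relu")(x)
    x = layers.Reshape((-1, dim))(x)
    return layers.Lambda(squash)(x)

inputs = layers.Input(shape=(64, 64, 1))                 # 64x64 grayscale signature crop
x = layers.Conv2D(64, kernel_size=9, activation="relu")(inputs)
caps = primary_capsules(x)
# A dynamic-routing capsule layer with one output capsule per writer would follow here,
# with the capsule length used as the class score.
```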


2020 ◽  
Author(s):  
Ying Bi ◽  
Bing Xue ◽  
Mengjie Zhang

Feature extraction is an essential process for image data dimensionality reduction and classification. However, feature extraction is very difficult and often requires human intervention. Genetic Programming (GP) can achieve automatic feature extraction and image classification, but the majority of existing methods extract low-level features from raw images without any image-related operations, and work on combining image-related operators/descriptors in GP for feature extraction and image classification is limited. This paper proposes a multi-layer GP approach (MLGP) to perform automatic high-level feature extraction and classification. A new program structure, a new function set including a number of image operators/descriptors and two region detectors, and a new terminal set are designed in this approach. The performance of the proposed method is examined on six data sets of varying difficulty and compared with five GP-based methods and 42 traditional image classification methods. Experimental results show that the proposed method achieves performance better than or comparable to these baselines. Further analysis of the example programs evolved by MLGP reveals its good interpretability and gives insight into how the method effectively extracts high-level features for image classification.
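As a toy illustration of GP-based feature construction, and not of MLGP's program structure, image operators or region detectors, the DEAP sketch below evolves arithmetic combinations of two hypothetical region statistics (mean_intensity, std_intensity) into a single feature value.

```python
import operator
from deap import base, creator, gp, tools

# Function set: arithmetic primitives over two hypothetical region statistics
pset = gp.PrimitiveSet("MAIN", arity=2)
pset.addPrimitive(operator.add, 2)
pset.addPrimitive(operator.sub, 2)
pset.addPrimitive(operator.mul, 2)
pset.addPrimitive(operator.neg, 1)
pset.renameArguments(ARG0="mean_intensity", ARG1="std_intensity")

creator.create("FitnessMax", base.Fitness, weights=(1.0,))
creator.create("Individual", gp.PrimitiveTree, fitness=creator.FitnessMax)

toolbox = base.Toolbox()
toolbox.register("expr", gp.genHalfAndHalf, pset=pset, min_=1, max_=3)
toolbox.register("individual", tools.initIterate, creator.Individual, toolbox.expr)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
toolbox.register("compile", gp.compile, pset=pset)

# Generate random programs and execute one on made-up region statistics;
# a real run would add a fitness function scoring classification accuracy
# plus selection, crossover and mutation operators.
pop = toolbox.population(n=20)
feature_fn = toolbox.compile(expr=pop[0])
print(pop[0])                      # the evolved expression tree
print(feature_fn(0.42, 0.17))      # its feature value on hypothetical inputs
```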


Author(s):  
Jian (John) Dong ◽  
Sreedharan Vijayan

Abstract Computers are being used increasingly in the process planning function. The starting point of this function is interpreting design data from a CAD model of the designed component, and feature-based technology is becoming an important tool for this. Automatic recognition of features and extraction of feature information from CAD data can be used to drive a process planning system. In this paper, a new approach to automatic feature extraction, called the Blank-Surface Concave-edge (BS-CE) approach, is illustrated. This approach attempts to remove as much of the blank material as possible with a given machine setup; hence, intuitively, the manufacturing cost of material removal may be minimized if this technique is employed. The feature extraction method is explained along with examples of its implementation. Alternative feature extraction results are analyzed and their manufacturing costs compared to demonstrate the near-optimal performance of the technique.


2021 ◽  
Author(s):  
M. Prakash ◽  
C. Saravanakumar ◽  
S. Kanaga Lakshmi ◽  
J Dafni Rose ◽  
B. Praba

Author(s):  
Lucia Ballerini ◽  
Marcello Calisti ◽  
Oscar Cordón ◽  
Sergio Damas ◽  
Jose Santamaría
