automatic batch
Recently Published Documents

TOTAL DOCUMENTS: 36 (FIVE YEARS: 9)
H-INDEX: 6 (FIVE YEARS: 1)

2021, Vol. 13 (19), pp. 3872
Author(s): Jianlai Chen, Hanwen Yu, Gang Xu, Junchao Zhang, Buge Liang, ...

Existing airborne SAR autofocus methods can be classified as parametric or non-parametric. Generally, non-parametric methods, such as the widely used phase gradient autofocus (PGA) algorithm, are only suitable for scenes with many dominant point targets, while parametric ones are, in theory, suitable for all types of scenes but are generally inefficient. In practice, whether many dominant point targets are present in a scene is usually unknown, so selecting the appropriate algorithm is not straightforward. To solve this issue, this article proposes an airborne SAR autofocus approach that incorporates blurry imagery classification to improve autofocus efficiency while preserving autofocus precision. In this approach, we embed blurry imagery classification, based on a typical VGGNet from the deep learning community, into the traditional autofocus framework as a preprocessing step that analyzes whether dominant point targets are present in the scene. If many dominant point targets are present, the non-parametric method is used for autofocus processing; otherwise, the parametric one is adopted. The advantage of the proposed approach is therefore the automatic batch processing of all kinds of airborne measured data.
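The classify-then-dispatch idea is simple to express in code. The Python sketch below is only an illustration under stated assumptions: the function names, the toy classifier, and the autofocus stand-ins are hypothetical placeholders, not the authors' implementation.

```python
# Minimal sketch of the classify-then-dispatch autofocus selection described above.
# The classifier and the two autofocus callables are placeholders; in the paper the
# classifier is a VGG-style CNN trained to detect dominant point targets.
import numpy as np

def autofocus_batch(images, classifier, pga_autofocus, parametric_autofocus,
                    threshold=0.5):
    """Send each blurry SAR image to PGA (non-parametric) or to a parametric
    autofocus method, depending on the classifier's point-target prediction."""
    results = []
    for img in images:
        p = classifier(img)  # estimated probability of dominant point targets
        method = pga_autofocus if p >= threshold else parametric_autofocus
        results.append(method(img))
    return results

if __name__ == "__main__":
    # Trivial stand-ins so the sketch runs end to end on random data.
    imgs = [np.random.rand(256, 256) for _ in range(3)]
    focused = autofocus_batch(
        imgs,
        classifier=lambda im: float(im.max() > 0.999),   # toy proxy for point-target presence
        pga_autofocus=lambda im: im,                     # placeholder: would run PGA
        parametric_autofocus=lambda im: im,              # placeholder: would run a parametric method
    )
```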


2021
Author(s): Jiaxing Huang, Chenguang Ding, Yanding Qin, Yaowei Liu, Xin Zhao, ...

Author(s): Douglas Mesadri GEWEHR, Allan Fernando GIOVANINI, Sofia Inez MUNHOZ, Seigo NAGASHIMA, Andressa de Souza BERTOLDI, ...

ABSTRACT Background: Heart dysfunction and liver disease often coexist because of systemic disorders. Any cause of right ventricular failure may precipitate hepatic congestion and fibrosis. Digital image technologies have been introduced into pathology diagnosis, allowing objective quantitative assessment. The quantification of fibrous tissue in liver biopsy sections is extremely important in the classification, diagnosis and grading of chronic liver disease. Aim: To create a semi-automatic computerized protocol to quantify any amount of centrilobular fibrosis and sinusoidal dilatation in Masson's trichrome-stained liver specimens. Method: Once fibrosis had been established, liver samples were collected, histologically processed and stained with Masson's trichrome, and whole-slide images were captured with an appropriate digital pathology slide scanner. Afterwards, regions of interest (ROIs) were randomly selected. The data were subjected to software-assisted image analysis (ImageJ®). Results: The analysis of 250 ROIs allowed the best application settings for identifying centrilobular fibrosis (CF) and sinusoidal lumen (SL) to be obtained empirically. After the colour-threshold application settings had been established, an in-house macro was recorded to set the measurements (area fraction and total area) and calculate the CF and SL ratios by automatic batch processing. Conclusion: It was possible to create a more detailed method that identifies and quantifies the area occupied by fibrous tissue and sinusoidal lumen in Masson's trichrome-stained liver specimens.
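For context, the kind of colour-threshold area measurement described can be sketched in a few lines of Python with scikit-image. The authors used an ImageJ® macro, not this code; the HSV threshold ranges below are arbitrary placeholders standing in for the empirically tuned settings.

```python
# Illustrative sketch (not the authors' ImageJ macro) of colour-threshold
# area-fraction measurement on a Masson's trichrome ROI.
import numpy as np
from skimage.color import rgb2hsv

def cf_sl_ratios(roi_rgb):
    """Return (centrilobular fibrosis ratio, sinusoidal lumen ratio) for one ROI."""
    hsv = rgb2hsv(roi_rgb)
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    fibrosis = (h > 0.5) & (h < 0.75) & (s > 0.2)   # blue-stained collagen (placeholder range)
    lumen = (s < 0.10) & (v > 0.85)                 # unstained sinusoidal lumen (placeholder range)
    n_pixels = roi_rgb.shape[0] * roi_rgb.shape[1]
    return fibrosis.sum() / n_pixels, lumen.sum() / n_pixels

# Automatic batch processing over all selected ROIs:
# ratios = [cf_sl_ratios(roi) for roi in rois]

if __name__ == "__main__":
    demo_roi = np.random.rand(64, 64, 3)            # stand-in for a scanned ROI
    print(cf_sl_ratios(demo_roi))
```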


2020, Vol. 223 (17), pp. jeb226720
Author(s): J.D. Laurence-Chasen, Armita R. Manafzadeh, Nicholas G. Hatsopoulos, Callum F. Ross, Fritzie I. Arce-McShane

ABSTRACT Marker tracking is a major bottleneck in studies involving X-ray Reconstruction of Moving Morphology (XROMM). Here, we tested whether DeepLabCut, a new deep learning package built for markerless tracking, could be applied to videoradiographic data to improve data processing throughput. Our novel workflow integrates XMALab, the existing XROMM marker tracking software, and DeepLabCut while retaining each program's utility. XMALab is used for generating training datasets, error correction and 3D reconstruction, whereas the majority of marker tracking is transferred to DeepLabCut for automatic batch processing. In the two case studies that involved an in vivo behavior, our workflow achieved a 6- to 13-fold increase in data throughput. In the third case study, which involved an acyclic, post-mortem manipulation, DeepLabCut struggled to generalize to the range of novel poses and did not surpass the throughput of XMALab alone. Deployed in the proper context, this new workflow facilitates large-scale XROMM studies that were previously precluded by software constraints.
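As a rough illustration of the DeepLabCut half of such a workflow, the Python sketch below uses DeepLabCut's standard project API for the automatic batch-tracking step. The project and video paths are hypothetical, and the XMALab steps (training-frame export, error correction, 3D reconstruction) happen outside this script.

```python
# Sketch of the DeepLabCut side of an XROMM workflow of this kind, assuming the
# training frames have already been labelled via XMALab and imported into the project.
import deeplabcut

config = "/data/xromm_dlc_project/config.yaml"       # hypothetical project config
videos = [
    "/data/trials/trial01_cam1.avi",                 # X-ray videos, one per camera view
    "/data/trials/trial01_cam2.avi",
]

deeplabcut.create_training_dataset(config)           # build training set from labelled frames
deeplabcut.train_network(config)                     # train the marker-detection network
deeplabcut.analyze_videos(config, videos, save_as_csv=True)  # automatic batch tracking

# The predicted 2D marker coordinates are then returned to XMALab for
# error correction and 3D reconstruction of the marker trajectories.
```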


Author(s): Katerina Zdravkova, Milica Mirkulovska

Abstract: The online bibliography of the Macedonian language (http://bmj.manu.edu.mk) is a system consisting of two interconnected applications: a desktop application for automatic batch uploading of the volumes and their representation as bibliographic records, and an ASP.NET client–server application with server code written in C#. It is based on recently published volumes covering the period between 1953 and 1985. This paper presents the implementation and functioning of the online bibliographic system. First, the steps leading to its creation are presented in detail. Afterwards, all the functionalities of the system are presented, together with illustrations of their operation. Then, the administrative part of the system is described. The paper ends with conclusions and directions for further development of the system.

