automatically tracking: Recently Published Documents

TOTAL DOCUMENTS: 38 (FIVE YEARS: 9)
H-INDEX: 7 (FIVE YEARS: 1)

Sensors ◽  
2021 ◽  
Vol 21 (18) ◽  
pp. 6187
Author(s):  
Yeonggul Jang ◽  
Byunghwan Jeon

Accurate identification of the coronary ostia from 3D coronary computed tomography angiography (CCTA) is an essential prerequisite for automatically tracking and segmenting the three main coronary arteries. In this paper, we propose a novel deep reinforcement learning (DRL) framework to localize the two coronary ostia in 3D CCTA. An optimal action policy is determined using a fully explicit spatial-sequential encoding policy network applied to 2.5D Markovian states with three past histories. The proposed network is trained with a dueling DRL framework on the CAT08 dataset. The experimental results show that our method is more efficient and accurate than the other methods. Floating-point operations (FLOPs) are calculated to measure computational efficiency: the proposed method requires 2.5M FLOPs, which is about 10 times fewer than 3D box-based methods. In terms of accuracy, the proposed method achieves errors of 2.22 ± 1.12 mm and 1.94 ± 0.83 mm on the left and right coronary ostia, respectively. The proposed method can be applied to identifying other target objects by changing the target locations in the ground-truth data, and it can also be used as a pre-processing step for coronary artery tracking methods.
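
A minimal PyTorch sketch of the dueling value decomposition that such a DRL localizer relies on, over a discrete six-move action space (±x, ±y, ±z); the layer sizes, the action set, and the `encoder` in the usage comment are assumptions made for the sketch, not details taken from the paper.

```python
# Illustrative dueling Q-head for 3D landmark localization.
# Assumes a discrete action space of 6 moves (+/-x, +/-y, +/-z) and a
# feature vector produced by some 2.5D state encoder; layer sizes are
# placeholders, not values from the paper.
import torch
import torch.nn as nn

class DuelingQHead(nn.Module):
    def __init__(self, feat_dim: int = 256, n_actions: int = 6):
        super().__init__()
        # State-value stream V(s)
        self.value = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 1))
        # Advantage stream A(s, a)
        self.advantage = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, n_actions))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        v = self.value(features)       # (B, 1)
        a = self.advantage(features)   # (B, n_actions)
        # Dueling combination: Q = V + (A - mean(A)) keeps the two streams identifiable.
        return v + a - a.mean(dim=1, keepdim=True)

# Greedy step toward the ostium under a trained head:
# features = encoder(state)                 # hypothetical 2.5D state encoder
# action = DuelingQHead()(features).argmax(dim=1)
```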


F1000Research ◽  
2020 ◽  
Vol 9 ◽  
pp. 1279
Author(s):  
Elnaz Fazeli ◽  
Nathan H. Roy ◽  
Gautier Follain ◽  
Romain F. Laine ◽  
Lucas von Chamier ◽  
...  

The ability of cells to migrate is a fundamental physiological process involved in embryonic development, tissue homeostasis, immune surveillance, and wound healing. Therefore, the mechanisms governing cellular locomotion have been under intense scrutiny over the last 50 years. One of the main tools of this scrutiny is quantitative live-cell imaging, where researchers image cells over time to study their migration and quantitatively analyze their dynamics by tracking them in the recorded images. Despite the availability of computational tools, manual tracking remains widely used among researchers due to the difficulty of setting up robust automated cell tracking and large-scale analysis. Here we provide a detailed analysis pipeline illustrating how the deep learning network StarDist can be combined with the popular tracking software TrackMate to perform 2D automated cell tracking and provide fully quantitative readouts. Our proposed protocol is compatible with both fluorescence and widefield images. It requires only freely available, open-source software (ZeroCostDL4Mic and Fiji) and no coding knowledge from the users, making it a versatile and powerful tool for the field. We demonstrate this pipeline's usability by automatically tracking cancer cells and T cells using fluorescence and brightfield images. Importantly, we provide, as supplementary information, a detailed step-by-step protocol to allow researchers to implement it with their own images.
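
For readers who want to see the segmentation step outside the notebook workflow, the fragment below sketches it in Python using the publicly documented StarDist API; the file names are placeholders, and in the published protocol this step runs in a ZeroCostDL4Mic notebook before the per-frame labels are linked into tracks with TrackMate in Fiji.

```python
# Standalone sketch of the StarDist segmentation step only; the tracking
# itself is done afterwards with TrackMate in Fiji. File names are placeholders.
import numpy as np
from csbdeep.utils import normalize
from stardist.models import StarDist2D
from tifffile import imread, imwrite

# Pretrained 2D model for fluorescent nuclei shipped with StarDist.
model = StarDist2D.from_pretrained("2D_versatile_fluo")

frames = imread("timelapse.tif")  # (T, Y, X) movie, placeholder path
labels = []
for frame in frames:
    # Percentile normalization is the usual StarDist preprocessing;
    # predict_instances returns (label_image, details).
    lbl, _ = model.predict_instances(normalize(frame, 1, 99.8))
    labels.append(lbl)

# Save the label movie; TrackMate's label-image detector can then link
# the per-frame labels into tracks inside Fiji.
imwrite("timelapse_labels.tif", np.stack(labels).astype("uint16"))
```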


Doklady BGUIR ◽  
2019 ◽  
pp. 121-128
Author(s):  
A. V. Khizhniak

The paper examines the performance of a correlation algorithm for automatically tracking objects of interest with a two-channel optical-electronic system in a complex background-target environment and in the presence of intentional interference. Conditions in which the contrast of the desired object is low but not zero in both channels are treated as difficult. Intentional interference refers to masking interference of natural or artificial origin that reduces the contrast of the object in both channels. A two-channel system here means an optical-electronic system comprising a visible-range channel and an infrared-range channel. The multispectral images from both channels are assumed to be aligned in time and scale, which allows them to be fused using various methods. The purpose of this paper is to show that fusing the source images of the visible and infrared ranges reduces the probability of losing automatic track when the contrast of the desired object in both channels is low. The study uses the mathematical apparatus of the theory of random functions together with simulation followed by statistical processing of the data. It is shown that the probability of tracking loss, characterized by the ratio of the correlation coefficients of two image fragments, one of which contains the desired object and the other does not, depends both on the correlation coefficients and on their mean square deviations. The simulation shows that tracking loss occurs both when the mean square deviations of these correlation coefficients are equal and when they differ, and that increasing their difference increases the probability of loss. The article shows that the probability of tracking loss in a two-channel optical-electronic system decreases when both channels are used, compared with working only in the visible or the infrared channel. The results obtained substantiate the promise of image fusion in multichannel systems for automatic tracking of objects in a complex background-target environment in the presence of deliberate interference.
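
A minimal sketch of the correlation criterion described above, assuming co-registered frames; the equal-weight averaging fusion rule and the breakdown threshold are illustrative assumptions, not taken from the paper.

```python
# Fuse aligned visible and IR frames, correlate a reference template against
# two candidate fragments (with and without the object), and flag a likely
# tracking loss when the ratio of the correlation coefficients is too small.
import numpy as np

def fuse(visible: np.ndarray, infrared: np.ndarray) -> np.ndarray:
    # Assumes both channels are already aligned in time and scale;
    # equal weights are an illustrative choice.
    return 0.5 * visible + 0.5 * infrared

def corr_coeff(a: np.ndarray, b: np.ndarray) -> float:
    # Normalized correlation coefficient of two equally sized fragments.
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def tracking_loss_likely(template, fragment_with_object, fragment_without,
                         ratio_threshold=1.5):
    r_obj = corr_coeff(template, fragment_with_object)
    r_bg = corr_coeff(template, fragment_without)
    # The paper characterizes loss via the ratio of these two correlation
    # coefficients; the numeric threshold here is purely illustrative.
    return abs(r_obj) / (abs(r_bg) + 1e-12) < ratio_threshold
```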


Automatically tracking living cells in microscopy images is an important and challenging problem. This paper therefore concentrates on a global tracking algorithm that links the cell outlines produced by a segmentation algorithm into tracks. This is achieved by finding the set of tracks that yields the largest value of a probabilistically motivated scoring function, built up using an additional structural computation. The paper also stresses the importance of digital image processing and the benefit of implementing it on FPGAs to achieve high performance: it addresses the implementation of image-processing operations, namely average filtering, morphological operations, convolution and smoothing, and edge detection, on an FPGA using the VHDL language.
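
As a plain software reference for two of the operations the paper maps onto the FPGA (the paper itself implements them in VHDL), the sketch below shows a 3x3 average filter and Sobel edge detection in Python; kernel sizes and border handling are common defaults, not values from the paper.

```python
# Reference implementations of an average (smoothing) filter and Sobel edge
# detection; an FPGA/VHDL port would pipeline the same sliding-window sums.
import numpy as np
from scipy.ndimage import convolve

def average_filter(img: np.ndarray) -> np.ndarray:
    # 3x3 box kernel: each output pixel is the mean of its neighborhood.
    kernel = np.full((3, 3), 1.0 / 9.0)
    return convolve(img.astype(float), kernel, mode="nearest")

def sobel_edges(img: np.ndarray) -> np.ndarray:
    # Horizontal and vertical Sobel gradients combined into an edge magnitude.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    gx = convolve(img.astype(float), kx, mode="nearest")
    gy = convolve(img.astype(float), kx.T, mode="nearest")
    return np.hypot(gx, gy)
```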


2019 ◽  
Vol 04 (03n04) ◽  
pp. 1942005
Author(s):  
Qipei Mei ◽  
Jonathan Chainey ◽  
David Asgar-Deen ◽  
Daniel Aalto

The importance of surgical simulation has increased over the last decade, and the majority of medical schools have incorporated simulation into their curriculum. An essential aspect of surgical education is to evaluate how the student performs compared to an expert surgeon. Another way to evaluate a student's skill is to track the position of the needle during the procedure, a factor that correlates with surgical skill. In this study, we developed deep learning algorithms for needle detection in videos of a surgical procedure. 78 videos of a person performing a running suture on synthetic skin were captured with an HD camera. A total of 3368 images were manually annotated with the VGG annotator tool. Two deep learning algorithms (YOLOv3 and Faster R-CNN) were pretrained on 2219 images extracted from the JIGSAWS dataset, then trained on the 804 images of the training set, and finally applied to the 345 images of the evaluation set. The performance of the algorithms was evaluated using intersection over union (IoU) as well as the Euclidean distance between bounding-box centroids. These values were compared against the inter-observer reliability among three authors. The best IoU achieved by the deep learning algorithms against the ground truth was 0.601, for Faster R-CNN, while the average inter-observer value was 0.663. The average Euclidean distances between bounding-box centroids were 21.9 pixels for the authors and 36.8 pixels for the Faster R-CNN algorithm. Through quantitative and qualitative assessment of the algorithm (visually observing its needle annotations), deep learning shows promise for automatically tracking the position of the needle during a suturing operation.
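
The two evaluation metrics used above are easy to state explicitly; the sketch below computes the IoU of two axis-aligned bounding boxes and the Euclidean distance between their centroids, assuming boxes are given as (x1, y1, x2, y2) pixel coordinates.

```python
# IoU and centroid distance for axis-aligned boxes given as (x1, y1, x2, y2).
import math

def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap width/height are clamped at zero when the boxes do not intersect.
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def centroid_distance(box_a, box_b):
    cax, cay = (box_a[0] + box_a[2]) / 2.0, (box_a[1] + box_a[3]) / 2.0
    cbx, cby = (box_b[0] + box_b[2]) / 2.0, (box_b[1] + box_b[3]) / 2.0
    return math.hypot(cax - cbx, cay - cby)

# Example: iou((10, 10, 50, 50), (30, 30, 70, 70)) -> ~0.143
```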


Animals ◽  
2018 ◽  
Vol 8 (7) ◽  
pp. 121 ◽  
Author(s):  
Camille Raoult ◽  
Lorenz Gygax

Stimuli are often presumed to be either negative or positive. However, how animals judge their negativity or positivity cannot generally be assumed. One possibility for assessing the emotional states that stimuli elicit in animals is to investigate the animals' preferences and their motivation to gain access to these stimuli. The aim of this study was to assess the valence of social stimuli in sheep. We used silent videos of dogs of varying intensity as presumably negative stimuli versus conspecifics as positive stimuli in three approaches: (1) an approach–avoidance paradigm; (2) operant conditioning using the video stimuli as reinforcers; and (3) an attention test. In the latter, we assessed the sheep's differential attention to simultaneous projections by automatically tracking head and ear postures and recording brain activity. With these approaches, it was difficult to demonstrate that the sheep's reactions varied according to the stimuli's presumed valence and intensity. The approach–avoidance paradigm and the attention test did not support the assumption that dog videos were more negative than sheep videos, though the sheep did react to the stimuli presented. Results from the operant conditioning indicated that sheep were more prone to avoid videos of moving dogs. Overall, we found that standard video images may not be ideal for conveying the valence characteristics of stimuli to sheep.

