Centromere Detection of Human Metaphase Chromosome Images using a Candidate Based Method

2015 ◽  
Author(s):  
Akila Subasinghe ◽  
Jagath Samarabandu ◽  
Yanxin Li ◽  
Ruth Wilkins ◽  
Farrah Flegal ◽  
...  

Accurate detection of the human metaphase chromosome centromere is a critical element of cytogenetic diagnostic techniques, including chromosome enumeration, karyotyping and radiation biodosimetry. Existing image processing methods can perform poorly in the presence of irregular boundaries, shape variations and premature sister chromatid separation, which can adversely affect centromere localization. We present a centromere detection algorithm that uses a novel profile thickness measurement technique on irregular chromosome structures defined by contour partitioning. Our algorithm generates a set of centromere candidates which are then evaluated based on a set of features derived from images of chromosomes. Our method also partitions the chromosome contour to isolate its telomere regions and then detects and corrects for sister chromatid separation. When tested with a chromosome database consisting of 1400 chromosomes collected from 40 metaphase cell images, the candidate-based centromere detection algorithm was able to accurately localize 1220 centromere locations, yielding a detection accuracy of 87%. We also introduce a Candidate Based Centromere Confidence (CBCC) metric which indicates an approximate confidence value for a given centromere detection and can be readily extended to other candidate-based detection problems.
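The abstract does not give the CBCC formula, so the Python sketch below only illustrates one plausible way a candidate-based confidence value could be computed, by comparing the best candidate's score against the runner-up; the function name and normalization are illustrative assumptions, not the authors' definition.

```python
# Hypothetical candidate-based confidence sketch: contrasts the top candidate
# score with the second-best one as a proxy for how unambiguous a detection is.
from typing import List

def candidate_confidence(scores: List[float]) -> float:
    """Return a value in [0, 1]; higher means the top candidate stands out more."""
    if not scores:
        return 0.0
    ranked = sorted(scores, reverse=True)
    if len(ranked) == 1:
        return 1.0 if ranked[0] > 0 else 0.0
    # Margin between the best and second-best candidate, normalized by the best.
    return (ranked[0] - ranked[1]) / ranked[0] if ranked[0] > 0 else 0.0

print(candidate_confidence([0.91, 0.40, 0.22]))  # well-separated candidates -> high confidence
print(candidate_confidence([0.55, 0.52, 0.50]))  # ambiguous candidates -> low confidence
```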


F1000Research ◽  
2016 ◽  
Vol 5 ◽  
pp. 1565 ◽  
Author(s):  
Akila Subasinghe ◽  
Jagath Samarabandu ◽  
YanXin Li ◽  
Ruth Wilkins ◽  
Farrah Flegal ◽  
...  

Accurate detection of the human metaphase chromosome centromere is a critical element of cytogenetic diagnostic techniques, including chromosome enumeration, karyotyping and radiation biodosimetry. Existing centromere detection methods tend to perform poorly in the presence of irregular boundaries, shape variations and premature sister chromatid separation. We present a centromere detection algorithm that uses a novel contour partitioning technique to generate centromere candidates, followed by a machine learning approach to select the best candidate, which enhances detection accuracy. The contour partitioning technique evaluates various combinations of salient points along the chromosome boundary using a novel feature set and is able to identify telomere regions as well as detect and correct for sister chromatid separation. This partitioning is used to generate a set of centromere candidates, which are then evaluated based on a second set of proposed features. The proposed algorithm outperforms previously published algorithms and is shown to do so on a larger set of chromosome images. A highlight of the proposed algorithm is its ability to rank this set of centromere candidates and produce a centromere confidence metric, which may be used in post-detection analysis. When tested with a larger metaphase chromosome database consisting of 1400 chromosomes collected from 40 metaphase cell images, the proposed algorithm was able to accurately localize 1220 centromere locations, yielding a detection accuracy of 87%.
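As a rough illustration of the candidate-selection step described above (not the authors' implementation), the sketch below scores hypothetical candidate feature vectors with a trained classifier and picks the highest-scoring one; the two-dimensional features, the toy labels and the LogisticRegression model stand in for the paper's unspecified feature set and learner.

```python
# Minimal candidate-ranking sketch: each candidate along the chromosome axis has
# a feature vector, a classifier scores it, and the best-scoring candidate wins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training data: rows are candidate feature vectors
# [normalized width at constriction, mean intensity]; label 1 = true centromere.
X_train = rng.random((200, 2))
y_train = (X_train[:, 0] < 0.4).astype(int)  # toy rule standing in for real labels

clf = LogisticRegression().fit(X_train, y_train)

def select_centromere(candidate_features: np.ndarray) -> int:
    """Return the index of the candidate most likely to be the centromere."""
    probs = clf.predict_proba(candidate_features)[:, 1]
    return int(np.argmax(probs))

candidates = np.array([[0.8, 0.5], [0.3, 0.6], [0.7, 0.2]])
print(select_centromere(candidates))  # the narrowest constriction (index 1) ranks highest
```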



Author(s):  
Dongxian Yu ◽  
Jiatao Kang ◽  
Zaihui Cao ◽  
Neha Jain

Current traffic sign detection technology struggles to detect traffic signs correctly under the interference of various complex factors and has weak robustness; to address this, a traffic sign detection algorithm based on region-of-interest extraction and a double filter is designed. First, to reduce environmental interference, the input image is preprocessed to enhance the dominant color of each sign. Second, to improve the region-of-interest extraction ability, a region of interest (ROI) detector based on Maximally Stable Extremal Regions (MSER) and the Wave Equation (WE) is defined, and candidate regions are selected through the ROI detector. Then, an effective HOG (Histogram of Oriented Gradients) descriptor is introduced as the detection feature of traffic signs, and an SVM (Support Vector Machine) is used to classify them as traffic signs or background. Finally, a context-aware filter and a traffic-light filter are used to further reject false traffic signs and improve detection accuracy. Three kinds of traffic signs in the GTSDB database, namely indicative, prohibitory and danger signs, are tested, and the results show that the proposed algorithm achieves higher detection accuracy and robustness than current traffic sign recognition technology.
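A hedged sketch of the MSER-proposal plus HOG plus SVM stage described above, using OpenCV; the wave-equation component, the color preprocessing and the context-aware and traffic-light filters are omitted, and svm_model is assumed to be a scikit-learn-style classifier trained elsewhere on sign versus background patches.

```python
# Sketch of ROI proposal (MSER) -> HOG features -> SVM classification.
import cv2
import numpy as np

# Standard HOG parameters for a 64x64 window; not necessarily the paper's settings.
hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)

def detect_signs(bgr_image, svm_model):
    """Return bounding boxes (x, y, w, h) of regions classified as traffic signs."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    mser = cv2.MSER_create()
    _, boxes = mser.detectRegions(gray)               # candidate regions of interest
    detections = []
    for (x, y, w, h) in boxes:
        patch = cv2.resize(gray[y:y + h, x:x + w], (64, 64))
        feature = hog.compute(patch).reshape(1, -1)   # HOG descriptor for the patch
        if svm_model.predict(feature)[0] == 1:        # assumed label 1 = traffic sign
            detections.append((x, y, w, h))
    return detections
```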



2021 ◽  
Vol 13 (10) ◽  
pp. 1909
Author(s):  
Jiahuan Jiang ◽  
Xiongjun Fu ◽  
Rui Qin ◽  
Xiaoyan Wang ◽  
Zhifeng Ma

Synthetic Aperture Radar (SAR) has become one of the important technical means of marine monitoring in the field of remote sensing due to its all-day, all-weather capability. Monitoring ships in national territorial waters supports national maritime law enforcement, the implementation of maritime traffic control, and the maintenance of national maritime security, so ship detection has been a hot spot and focus of research. Since the shift from traditional detection methods to methods combined with deep learning, much of the research has relied on ever-growing Graphics Processing Unit (GPU) computing power to propose increasingly complex and computationally intensive strategies, while transplanting optical image detection approaches without accounting for the low signal-to-noise ratio, low resolution, single-channel imagery and other characteristics imposed by the SAR imaging principle. By constantly pursuing detection accuracy while ignoring detection speed and the eventual deployment of the algorithm, almost all methods rely on powerful desktop GPU clusters and cannot be deployed on the front line of marine monitoring to cope with changing realities. To address these issues, this paper proposes a multi-channel fusion SAR image processing method that makes full use of image information and the network’s ability to extract features; the model architecture and training are based on the latest You Only Look Once version 4 (YOLO-V4) deep learning framework. The YOLO-V4-light network was tailored for real-time implementation, significantly reducing the model size, detection time, number of computational parameters, and memory consumption, and the network was refined for three-channel images to compensate for the loss of accuracy due to light-weighting. The test experiments were completed entirely on a portable computer and achieved an Average Precision (AP) of 90.37% on the SAR Ship Detection Dataset (SSDD), simplifying the model while maintaining a lead over most existing methods. The YOLO-V4-light ship detection algorithm proposed in this paper has great practical value in maritime safety monitoring and emergency rescue.
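The abstract does not specify which channels the multi-channel fusion uses, so the sketch below only shows one plausible way to expand a single-channel SAR image into a three-channel input for a YOLO-style detector; the smoothed and edge channels are illustrative assumptions, not the paper's fusion scheme.

```python
# Illustrative single-channel -> three-channel SAR preprocessing sketch.
import cv2
import numpy as np

def sar_to_three_channels(sar_gray: np.ndarray) -> np.ndarray:
    """Stack the raw image, a despeckled version, and an edge map into an HxWx3 array."""
    smoothed = cv2.GaussianBlur(sar_gray, (5, 5), 0)   # crude speckle suppression
    edges = cv2.Laplacian(sar_gray, cv2.CV_8U)         # structural / edge emphasis
    fused = cv2.merge([sar_gray, smoothed, edges])     # network-ready 3-channel input
    return fused
```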



2021 ◽  
Vol 11 (13) ◽  
pp. 6016
Author(s):  
Jinsoo Kim ◽  
Jeongho Cho

For autonomous vehicles, it is critical to be aware of the driving environment to avoid collisions and drive safely. The recent evolution of convolutional neural networks has contributed significantly to accelerating the development of object detection techniques that enable autonomous vehicles to handle rapid changes in various driving environments. However, collisions in an autonomous driving environment can still occur due to undetected obstacles and various perception problems, particularly occlusion. Thus, we propose a robust object detection algorithm for environments in which objects are truncated or occluded, by employing RGB images and light detection and ranging (LiDAR) bird’s eye view (BEV) representations. This structure combines independent detection results obtained in parallel through “you only look once” networks using an RGB image and a height map converted from the BEV representation of LiDAR’s point cloud data (PCD). The region proposal of an object is determined via non-maximum suppression, which suppresses the bounding boxes of adjacent regions. A performance evaluation of the proposed scheme was performed using the KITTI vision benchmark suite dataset. The results demonstrate that the detection accuracy obtained when the PCD BEV representations are integrated is superior to that obtained when only an RGB camera is used. In addition, robustness is improved: detection accuracy is significantly enhanced even when the target objects are partially occluded when viewed from the front, which demonstrates that the proposed algorithm outperforms the conventional RGB-based model.
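As a simplified view of the late-fusion step described above, the following sketch pools bounding boxes from a hypothetical RGB branch and a LiDAR-BEV branch and reduces them with IoU-based non-maximum suppression; the box format and threshold are assumptions, not the paper's exact configuration.

```python
# Late fusion of two detection branches via non-maximum suppression.
import numpy as np

def iou(a, b):
    """Intersection over union of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def fuse_detections(rgb_boxes, bev_boxes, rgb_scores, bev_scores, iou_thresh=0.5):
    """Pool both branches and keep the highest-scoring box in each overlapping group."""
    boxes = np.concatenate([rgb_boxes, bev_boxes])
    scores = np.concatenate([rgb_scores, bev_scores])
    keep = []
    for i in np.argsort(-scores):                     # highest score first
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return boxes[keep], scores[keep]
```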



Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1081
Author(s):  
Tamon Miyake ◽  
Shintaro Yamamoto ◽  
Satoshi Hosono ◽  
Satoshi Funabashi ◽  
Zhengxue Cheng ◽  
...  

Gait phase detection, which detects foot-contact and foot-off states during walking, is important for various applications, such as synchronous robotic assistance and health monitoring. Gait phase detection systems have been proposed with various wearable devices, sensing inertial, electromyography, or force myography information. In this paper, we present a novel gait phase detection system with static standing-based calibration using muscle deformation information. The gait phase detection algorithm can be calibrated within a short time using muscle deformation data by standing in several postures; it is not necessary to collect data while walking for calibration. A logistic regression algorithm is used as the machine learning algorithm, and the probability output is adjusted based on the angular velocity of the sensor. An experiment is performed with 10 subjects, and the detection accuracy of foot-contact and foot-off states is evaluated using video data for each subject. The median accuracy is approximately 90% during walking based on calibration for 60 s, which shows the feasibility of the static standing-based calibration method using muscle deformation information for foot-contact and foot-off state detection.
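A minimal sketch of the detection rule described above: a logistic-regression model maps muscle-deformation features to a foot-contact probability, which is then adjusted by the sensor's angular velocity before thresholding. The calibration data, the gain term and the threshold are illustrative assumptions rather than the authors' exact formulation.

```python
# Sketch: standing-posture calibration data trains a logistic-regression model,
# then the contact probability is nudged by angular velocity before thresholding.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical calibration data collected while standing in several postures:
# rows are muscle-deformation sensor readings; label 1 = foot contact.
X_calib = np.random.default_rng(1).random((120, 4))
y_calib = (X_calib[:, 0] + X_calib[:, 1] > 1.0).astype(int)  # toy stand-in labels
model = LogisticRegression().fit(X_calib, y_calib)

def detect_contact(deformation: np.ndarray, angular_velocity: float,
                   gain: float = 0.05, threshold: float = 0.5) -> bool:
    """Return True for foot contact; fast shank rotation lowers the contact probability."""
    p_contact = model.predict_proba(deformation.reshape(1, -1))[0, 1]
    p_adjusted = np.clip(p_contact - gain * abs(angular_velocity), 0.0, 1.0)
    return bool(p_adjusted >= threshold)
```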


