MAGNETOENCEPHALOGRAPHY–ELECTROENCEPHALOGRAPHY CO-REGISTRATION USING 3D GENERALIZED HOUGH TRANSFORM

2020 ◽  
Vol 32 (03) ◽  
pp. 2050024
Author(s):  
Sheng-Kai Lin ◽  
Rong-Chin Lo ◽  
Ren-Guey Lee

This study proposes an advanced co-registration method for integrating high-temporal-resolution electroencephalography (EEG) with magnetoencephalography (MEG) data. MEG offers higher spatial resolution and source-localization accuracy by sensing the magnetic fields generated by the brain with multichannel superconducting quantum interference devices, whereas EEG records electrical activity over a larger portion of the cortical surface and is widely used to detect epilepsy. Integrating the two modalities therefore allows epileptic activity to be localized more accurately than with other non-invasive modalities alone, but doing so is both challenging and important. This study proposes a new algorithm that extends the conventional two-dimensional generalized Hough transform (2D GHT), a well-known method for identifying or locating shapes in 2D images, to a three-dimensional GHT (3D GHT) that automatically co-registers the 3D data from the two modalities. The pre-processing steps require the locations of the EEG electrodes, the MEG sensors, the subjects' head-shape points, and the fiducial landmarks. Registration accuracy is evaluated by computing the root mean square of the Euclidean distances between corresponding points of the co-registered MEG–EEG data. Several experimental results show that the proposed method co-registers the two modalities accurately and efficiently, and that it is feasible, sufficiently automatic, and fast enough for investigating brain source activity.
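The accuracy evaluation described above is the root mean square of point-to-point Euclidean distances after co-registration. A minimal sketch of that metric follows; the function and array names are illustrative, not taken from the paper.

    import numpy as np

    def rms_euclidean_error(points_a, points_b):
        """RMS of point-to-point Euclidean distances between two
        corresponding (N, 3) coordinate arrays after co-registration."""
        points_a = np.asarray(points_a, dtype=float)
        points_b = np.asarray(points_b, dtype=float)
        distances = np.linalg.norm(points_a - points_b, axis=1)
        return np.sqrt(np.mean(distances ** 2))

    # Example: fiducial landmarks expressed in both coordinate systems (toy values)
    meg_fiducials = np.array([[0.0, 9.8, 0.1], [-7.2, 0.0, 0.2], [7.1, 0.1, 0.0]])
    eeg_fiducials = np.array([[0.1, 9.9, 0.0], [-7.0, 0.1, 0.1], [7.2, 0.0, 0.1]])
    print(rms_euclidean_error(meg_fiducials, eeg_fiducials))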

2020 ◽  
Vol 32 (04) ◽  
pp. 2050028
Author(s):  
Sheng-Kai Lin ◽  
Rong-Chin Lo ◽  
Ren-Guey Lee

In this paper, we propose a method that uses the three-dimensional (3D) generalized Hough transform (GHT) to co-register magnetoencephalography (MEG) and magnetic resonance imaging (MRI) data of the brain automatically, so that MRI images and MEG data can be aligned accurately and efficiently. Many medical devices have recently been developed to study neuronal activity in the human brain. MEG is a high-temporal-resolution tool for studying the physiological function of brain nerves noninvasively, whereas MRI of the scalp, skull, and cortex of the human brain is a high-spatial-resolution tool. The proposed method combines the two systems, relating the structure of the human brain to the weak magnetic fields it produces, for cognitive-neuroscience studies. An accurate, automatic registration method is needed to improve brain analyses that combine multimodal data. The conventional GHT is a well-known method for detecting or locating transformed planar shapes in two-dimensional (2D) image processing. We extend the 2D GHT to a 3D GHT that co-registers MEG and MRI data automatically and accurately. Experimental results are included to evaluate the error and demonstrate the applicability of the MEG–MRI co-registration.
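The full 3D GHT also quantizes rotations via an R-table; the abstract does not give those details, so the sketch below shows only the translational voting step, with illustrative names, to convey how accumulator voting recovers an alignment between two point sets.

    import numpy as np

    def hough_translation_vote(model_pts, scene_pts, bin_size=1.0):
        """Hough-style voting for the translation that best maps model points
        onto scene points: every (scene - model) pair casts a vote in a
        quantized 3D accumulator; the fullest bin gives the offset."""
        votes = {}
        for s in scene_pts:
            for m in model_pts:
                key = tuple(np.round((s - m) / bin_size).astype(int))
                votes[key] = votes.get(key, 0) + 1
        best = max(votes, key=votes.get)
        return np.array(best, dtype=float) * bin_size

    model = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
    scene = model + np.array([3.0, -2.0, 5.0])      # shifted copy of the model
    print(hough_translation_vote(model, scene))     # -> [ 3. -2.  5.]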


2020 ◽  
Vol 32 (03) ◽  
pp. 2050019
Author(s):  
Sheng-Kai Lin ◽  
Rong-Chin Lo ◽  
Ren-Guey Lee

In this study, we propose a new automatic method for co-registering the coordinate systems of magnetoencephalography (MEG) data and three-dimensional digitizer (3D DIG) data of the head using the 3D generalized Hough transform (GHT). The technique is important for research on brain function because it automatically and quickly combines data from functional brain mapping tools such as MEG and the digitizer. MEG is a measurement instrument that noninvasively analyzes the physiological activity of neurons with high temporal resolution, but it does not record the subject's head shape or the position of the head with respect to the MEG sensors. The 3D DIG records the head shape, facial features, and anatomical markers in a 3D coordinate system in real time. Combining the two modalities therefore helps correlate the recorded brain data with physiological activity. Much previous research shows that the GHT is useful for recognizing or locating shapes in 2D images; here, the algorithm is extended to a 3D GHT that automatically co-registers the 3D data. In this study, we use the 3D GHT to co-register MEG and 3D DIG data from three subjects and evaluate the average distance errors of the proposed method against those of the MEG160 system. The experimental results demonstrate that the proposed 3D GHT is both accurate and efficient.
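As background for the comparison against the fiducial-based MEG160 alignment, the sketch below shows one common way a head coordinate frame is built from the nasion and preauricular points digitized by a 3D DIG. The exact MEG160 convention may differ, and all names are illustrative.

    import numpy as np

    def head_frame_from_fiducials(nasion, lpa, rpa):
        """Rigid transform (R, t) mapping digitizer coordinates into a head
        frame: origin midway between the preauricular points, x toward the
        nasion, z roughly upward (one common convention, not necessarily
        the MEG160 one)."""
        nasion, lpa, rpa = map(np.asarray, (nasion, lpa, rpa))
        origin = (lpa + rpa) / 2.0
        x_axis = nasion - origin
        x_axis /= np.linalg.norm(x_axis)
        z_axis = np.cross(x_axis, lpa - origin)
        z_axis /= np.linalg.norm(z_axis)
        y_axis = np.cross(z_axis, x_axis)
        R = np.vstack([x_axis, y_axis, z_axis])   # rows are the new axes
        t = -R @ origin
        return R, t

    R, t = head_frame_from_fiducials([0, 10, 0], [-7, 0, 0], [7, 0, 0])
    digitizer_points = np.random.rand(5, 3) * 10.0
    head_points = digitizer_points @ R.T + t      # points in head coordinates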


2014 ◽  
Vol 2014 ◽  
pp. 1-8
Author(s):  
Liang Hua ◽  
Kean Yu ◽  
Lijun Ding ◽  
Juping Gu ◽  
Xinsong Zhang ◽  
...  

A three-dimensional multimodality medical image registration method based on geometric invariants from conformal geometric algebra (CGA) is put forward to address the many degrees of freedom and heavy computational burden of 3D medical image registration. The mathematical model and calculation method of a dual-vector projection invariant are established using the distribution characteristics of the point cloud data and a point-to-plane distance measure in CGA space. The translation and rotation operators used during registration are built in Clifford algebra (CA) space. Conformal geometric algebra is then used to register 3D CT/MR-PD medical image data based on the dual-vector geometric invariant. The registration experiments indicate that the proposed methodology is more general, requires less computation and time, and has an intuitive geometric meaning. Both subjective evaluation and objective indicators show that it achieves high registration accuracy and is well suited to 3D medical image registration.
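The paper's rotation operator is a CGA rotor. In ordinary 3D geometric algebra a rotor is isomorphic to a unit quaternion, which the following minimal sketch uses to apply the sandwich product R v R~ to a point cloud; the conformal translator is not shown, and all names are illustrative.

    import numpy as np

    def rotor(axis, angle):
        """Unit rotor for rotation by `angle` about `axis`, stored as a
        quaternion (w, x, y, z); 3D GA rotors are isomorphic to unit
        quaternions."""
        axis = np.asarray(axis, dtype=float) / np.linalg.norm(axis)
        return np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])

    def apply_rotor(q, pts):
        """Rotate an (N, 3) point cloud with the sandwich product R v R~,
        expanded here as the equivalent rotation matrix."""
        w, x, y, z = q
        R = np.array([
            [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
            [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
            [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
        ])
        return pts @ R.T

    pts = np.array([[1.0, 0.0, 0.0]])
    q = rotor([0, 0, 1], np.pi / 2)
    print(apply_rotor(q, pts))   # ~[[0, 1, 0]]: quarter turn about z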


2019 ◽  
Vol 21 (Supplement_6) ◽  
pp. vi172-vi172
Author(s):  
Tsukasa Koike ◽  
Taichi Kin ◽  
Yasuhiro Takeda ◽  
Hiroki Uchikawa ◽  
Taketo Shiode ◽  
...  

Abstract PURPOSE Diffusion tensor-based tractography (DTT) estimates the direction of white matter fibers, but verifying its spatial relationship to brain function with high accuracy is difficult. We developed a registration method that fuses real space (a brain surface photograph) with a preoperative fused 3D image (virtual space) using landmark-based alignment and the thin-plate spline method. In a previous study, this method achieved highly accurate alignment (registration error 0.7±0.1 mm, mean±SE) even after brain shift due to craniotomy. In this study, we propose a method for examining the spatial error between DTT and direct cortical stimulation (DCS) and verify its accuracy. METHODS We included 7 glioma patients who underwent awake surgery. We created the fused three-dimensional image before surgery, acquired a brain surface photograph immediately after craniotomy, and aligned them using the proposed method. Sites at which DCS produced speech arrest were plotted on the fused image. A circle with a radius of 15 mm centered on each site was taken as the range over which the stimulation current spreads. Each circle was scored as true if the arcuate fasciculus drawn with DTT lay within it, and false otherwise; the accuracy of DTT was verified in this way. RESULTS In the 7 cases, speech arrest was observed at 21 DCS plots. The probability that DTT was present within the current spread of DCS was 64.4%. CONCLUSION Verification using real and virtual space indicates that DTT does not necessarily match the DCS results. We present some illustrative cases.
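A minimal sketch of the 15 mm inclusion test follows. It assumes a simple 3D distance criterion rather than the surface-based circles used in the study, and the site and tract coordinates are placeholders.

    import numpy as np

    def dtt_within_radius(dcs_site, tract_points, radius_mm=15.0):
        """True if any tractography (DTT) point lies within `radius_mm`
        of the stimulation site, mirroring the 15 mm criterion."""
        d = np.linalg.norm(np.asarray(tract_points) - np.asarray(dcs_site), axis=1)
        return bool(np.any(d <= radius_mm))

    dcs_plots = [np.array([12.0, -30.0, 40.0]), np.array([15.0, -55.0, 38.0])]
    arcuate_fasciculus = np.random.rand(500, 3) * 100.0   # placeholder coordinates
    hits = [dtt_within_radius(p, arcuate_fasciculus) for p in dcs_plots]
    print(f"DTT present within 15 mm for {100 * np.mean(hits):.1f}% of DCS plots")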


Author(s):  
M. Wang ◽  
Y. Ye ◽  
M. Sun ◽  
X. Tan ◽  
L. Li

Abstract. Automatic registration of optical and synthetic aperture radar (SAR) images is challenging because of the significant geometric deformation and radiometric differences between the two image types. To address this issue, this paper proposes an automatic registration method for optical and SAR images based on spatial geometric constraints and structural features. First, the Harris detector with a block strategy is used to extract evenly distributed feature points, and a local geometric correction based on the Rational Function Model removes the rotation and scale differences between the optical and SAR images. Second, oriented gradient information is used to construct a geometric structural feature descriptor; the descriptor is transformed into the frequency domain, and three-dimensional (3-D) phase correlation serves as the similarity metric for obtaining correspondences with a template matching scheme. Finally, mismatches are eliminated using the spatial geometric constraint relationship between the images, followed by geometric correction to complete the registration. Experiments on multiple high-resolution optical and SAR images show that the proposed method achieves reliable registration accuracy and outperforms state-of-the-art methods.
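The template matching step rests on phase correlation. Below is a minimal 2D single-channel sketch of the idea; the paper applies it in 3-D to dense structural descriptors, and all names are illustrative.

    import numpy as np

    def phase_correlation_shift(ref, moved):
        """Estimate the integer (row, col) shift between two same-sized patches
        by phase correlation: normalized cross-power spectrum, inverse FFT,
        peak location."""
        F_ref = np.fft.fft2(ref)
        F_mov = np.fft.fft2(moved)
        cross_power = np.conj(F_ref) * F_mov
        cross_power /= np.abs(cross_power) + 1e-12
        corr = np.fft.ifft2(cross_power).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        shift = np.array(peak, dtype=float)
        for axis, size in enumerate(corr.shape):   # wrap large indices to negative shifts
            if shift[axis] > size / 2:
                shift[axis] -= size
        return shift

    img = np.random.rand(64, 64)
    moved = np.roll(img, (5, -9), axis=(0, 1))
    print(phase_correlation_shift(img, moved))     # ~[ 5. -9.]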


2011 ◽  
Vol 50-51 ◽  
pp. 790-793
Author(s):  
Shao Yan Sun ◽  
Lei Chen

The Function of Degree of Disagreement (FDOD), a measure of information discrepancy, quantifies the discrepancy among multiple sequences and has useful mathematical properties such as symmetry, boundedness, and monotonicity. In this contribution, we introduce the FDOD function for the first time to solve the three-dimensional (3-D) medical image registration problem. Numerical experiments illustrate that the new FDOD-based registration method attains subvoxel registration accuracy and is competitive with the mutual-information-based method.
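The abstract does not spell out the formula, so the sketch below assumes one common way the FDOD measure is expressed, as a sum of per-sequence Kullback–Leibler terms against the pooled average distribution; all names are illustrative.

    import numpy as np

    def fdod(distributions, eps=1e-12):
        """Function of Degree of Disagreement for several discrete
        distributions (rows sum to 1), written here as the sum of KL
        divergences of each distribution from the pooled average."""
        P = np.asarray(distributions, dtype=float)
        P = P / P.sum(axis=1, keepdims=True)
        mean = P.mean(axis=0)
        return float(np.sum(P * np.log((P + eps) / (mean + eps))))

    # identical distributions give zero disagreement; differing ones do not
    print(fdod([[0.5, 0.5], [0.5, 0.5]]), fdod([[0.9, 0.1], [0.1, 0.9]]))

For registration, intensity histograms of the overlapping voxels under a candidate transform would play the role of the sequences, and the transform minimizing the FDOD value would be kept.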


2012 ◽  
Vol 443-444 ◽  
pp. 537-541
Author(s):  
Xiao Peng Wang ◽  
Yuan Zhi Cheng ◽  
Ming Ming Zhao ◽  
Xiao Hua Ding ◽  
Jing Bai

We describe a technique for registering three-dimensional (3D) knee bone surface points extracted from MR image data sets. The technique is grounded in a mathematical theory, Lipschitz optimization: based on this theory, we propose a global search algorithm that simultaneously determines the transformation and the point correspondences. Compared with three other registration approaches (ICP, EM-ICP, and genetic algorithms), the proposed method achieved the highest registration accuracy on animal data.
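The abstract does not detail the search itself; as background, a one-dimensional Piyavskii–Shubert search illustrates the Lipschitz-bounding idea such global methods rely on. The cost function and Lipschitz constant here are toy assumptions, not the paper's.

    import numpy as np

    def shubert_minimize(f, lo, hi, lipschitz, iters=60):
        """Minimal Piyavskii-Shubert search: with Lipschitz constant L,
        f(x) >= f(x_i) - L|x - x_i| bounds f from below everywhere, so we
        repeatedly evaluate f where that piecewise-linear bound is lowest."""
        xs = [lo, hi]
        fs = [f(lo), f(hi)]
        for _ in range(iters):
            order = np.argsort(xs)
            xs_s = np.array(xs)[order]
            fs_s = np.array(fs)[order]
            best_x, best_bound = None, np.inf
            for a, b, fa, fb in zip(xs_s[:-1], xs_s[1:], fs_s[:-1], fs_s[1:]):
                x = 0.5 * (a + b) + (fa - fb) / (2 * lipschitz)       # bound minimizer in [a, b]
                bound = 0.5 * (fa + fb) - 0.5 * lipschitz * (b - a)   # bound value there
                if bound < best_bound:
                    best_x, best_bound = x, bound
            xs.append(best_x)
            fs.append(f(best_x))
        i = int(np.argmin(fs))
        return xs[i], fs[i]

    # toy example: find the rotation angle minimizing a 1D alignment cost
    cost = lambda t: np.cos(3 * t) + 0.2 * t
    print(shubert_minimize(cost, 0.0, 4.0, lipschitz=3.2))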


Author(s):  
Robert W. Mackin

This paper presents two advances towards the automated three-dimensional (3-D) analysis of thick and heavily overlapped regions in cytological preparations such as cervical/vaginal smears. First, a high-speed 3-D brightfield microscope has been developed, allowing the acquisition of image data at speeds approaching 30 optical slices per second. Second, algorithms have been developed to detect and segment nuclei despite the extremely high image variability and low contrast typical of such regions. The analysis of these regions is inherently a 3-D problem that cannot be solved reliably with conventional 2-D imaging and image analysis methods. High-speed 3-D imaging of the specimen is accomplished by moving the specimen axially relative to the objective lens of a standard microscope (Zeiss) at 30 steps per second, with the step size adjustable from 0.2 to 5 μm. The specimen is mounted on a computer-controlled piezoelectric microstage (Burleigh PZS-100, 68 μm displacement). At each step, an optical slice is acquired using a CCD camera (SONY XC-11/71 IP, Dalsa CA-D1-0256, and CA-D2-0512 have been used) connected to a 4-node array processor system based on the Intel i860 chip.
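A minimal sketch of the acquisition loop follows. The stage and camera classes are placeholder stand-ins, since the real control code is hardware- and driver-specific; only the step-and-grab structure reflects the description above.

    import numpy as np

    # Placeholder stand-ins for the piezo microstage and CCD camera described
    # in the text; real control code depends on the vendor drivers.
    class FakeStage:
        def move_to(self, z_um):        # would command the piezo stage to height z_um
            self.z_um = z_um

    class FakeCamera:
        def grab_frame(self):           # would read one optical slice from the CCD
            return np.random.randint(0, 256, (256, 256), dtype=np.uint8)

    def acquire_z_stack(stage, camera, n_slices=64, step_um=0.5):
        """Acquire a 3-D brightfield stack by stepping the specimen axially and
        grabbing one optical slice per step (the paper reports about 30 steps
        per second, with steps adjustable from 0.2 to 5 um)."""
        stack = []
        for i in range(n_slices):
            stage.move_to(i * step_um)
            stack.append(camera.grab_frame())
        return np.stack(stack)          # shape: (n_slices, height, width)

    volume = acquire_z_stack(FakeStage(), FakeCamera())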


Author(s):  
Jiayong Yu ◽  
Longchen Ma ◽  
Maoyi Tian ◽  
Xiushan Lu

The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used in geomatics owing to its efficient data acquisition and convenient operation. However, because of the limited carrying capacity of a UAV, the sensors integrated in a ULS must be small and lightweight, which reduces the density of the collected scanning points and complicates registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds that converts the problem of registering point clouds and images into one of matching feature points between two images. First, a point cloud is selected and rendered as an intensity image. The corresponding feature points of the intensity image and the optical image are then matched, and the exterior orientation parameters are solved from the collinearity equations using the image position and orientation. Finally, the sequence images are fused with the laser point cloud, using the Global Navigation Satellite System (GNSS) time index of each optical image, to generate a true-color point cloud. The experimental results show that the proposed method achieves high registration accuracy and fusion speed, demonstrating its accuracy and effectiveness.
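Once the exterior orientation is known, each LiDAR point can be projected into an image to pick up its color. The sketch below uses a simplified pinhole form of the collinearity equations rather than the paper's exact photogrammetric model, and all names are illustrative.

    import numpy as np

    def project_points(points, R, camera_center, focal_px, principal_point):
        """Project 3D points into pixel coordinates with a simplified pinhole
        (collinearity-equation) model: rotate into the camera frame, divide
        by depth, scale by the focal length in pixels."""
        cam = (np.asarray(points, dtype=float) - camera_center) @ R.T
        u = focal_px * cam[:, 0] / cam[:, 2] + principal_point[0]
        v = focal_px * cam[:, 1] / cam[:, 2] + principal_point[1]
        return np.stack([u, v], axis=1), cam[:, 2]

    def colorize_point_cloud(points, image, R, camera_center, focal_px, pp):
        """Assign an RGB color to every LiDAR point whose projection falls
        inside the image; points behind the camera or outside the frame
        keep a zero color."""
        pix, depth = project_points(points, R, camera_center, focal_px, pp)
        h, w = image.shape[:2]
        cols = np.round(pix[:, 0]).astype(int)
        rows = np.round(pix[:, 1]).astype(int)
        ok = (depth > 0) & (cols >= 0) & (cols < w) & (rows >= 0) & (rows < h)
        rgb = np.zeros((len(pix), 3), dtype=image.dtype)
        rgb[ok] = image[rows[ok], cols[ok]]
        return rgb, ok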

