mapping data
Recently Published Documents

TOTAL DOCUMENTS: 451 (FIVE YEARS: 115)
H-INDEX: 32 (FIVE YEARS: 4)

2022 ◽  
Vol 24 (1) ◽  
Author(s):  
Rui Guo ◽  
Hossam El-Rewaidy ◽  
Salah Assana ◽  
Xiaoying Cai ◽  
Amine Amyar ◽  
...  

Abstract
Purpose: To develop and evaluate MyoMapNet, a rapid myocardial T1 mapping approach that uses fully connected neural networks (FCNNs) to estimate T1 values from four T1-weighted images collected after a single inversion pulse in four heartbeats (Look-Locker, LL4).
Methods: We implemented an FCNN for MyoMapNet to estimate T1 values from a reduced number of T1-weighted images and their corresponding inversion-recovery times. We studied MyoMapNet performance when trained using native T1, post-contrast T1, or a combination of both. We also explored the effect of the number of T1-weighted images (four vs. five) for native T1. After rigorous training using in-vivo modified Look-Locker inversion recovery (MOLLI) T1 mapping data from 607 patients, MyoMapNet performance was evaluated using MOLLI T1 data from 61 patients by discarding the additional T1-weighted images. Subsequently, we implemented a prototype of MyoMapNet and LL4 on a 3 T scanner. LL4 was used to collect T1 mapping data in 27 subjects, with inline T1 map reconstruction by MyoMapNet. The resulting T1 values were compared to MOLLI.
Results: MyoMapNet trained using a combination of native and post-contrast T1-weighted images had excellent native and post-contrast T1 accuracy compared to MOLLI. The FCNN model using four T1-weighted images yielded performance similar to that using five, suggesting that four T1-weighted images may be sufficient. The inline implementation of LL4 and MyoMapNet enabled successful acquisition and reconstruction of T1 maps on the scanner. Native and post-contrast myocardial T1 by MOLLI vs. MyoMapNet was 1170 ± 55 ms vs. 1183 ± 57 ms (P = 0.03) and 645 ± 26 ms vs. 630 ± 30 ms (P = 0.60), and native and post-contrast blood T1 was 1820 ± 29 ms vs. 1854 ± 34 ms (P = 0.14) and 508 ± 9 ms vs. 514 ± 15 ms (P = 0.02), respectively.
Conclusion: An FCNN, trained using MOLLI data, can estimate T1 values from only four T1-weighted images. MyoMapNet enables myocardial T1 mapping in four heartbeats, with accuracy similar to MOLLI and with inline map reconstruction.
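The conventional per-pixel fitting that MyoMapNet's FCNN replaces can be sketched in a few lines. The three-parameter inversion-recovery signal model with Look-Locker correction below is the standard approach for this kind of data; the specific inversion times and signal amplitudes are illustrative assumptions, not the paper's protocol.

```python
import numpy as np
from scipy.optimize import curve_fit

def ir_signal(ti, a, b, t1_star):
    # Three-parameter inversion-recovery model: S(TI) = A - B * exp(-TI / T1*)
    return a - b * np.exp(-ti / t1_star)

def fit_t1(ti, s):
    # Per-pixel nonlinear fit of the apparent T1*, then Look-Locker correction
    (a, b, t1_star), _ = curve_fit(ir_signal, ti, s,
                                   p0=(s.max(), 2.0 * s.max(), 1000.0),
                                   maxfev=5000)
    return t1_star * (b / a - 1.0)

# Hypothetical inversion times (ms) of four T1-weighted images after one inversion
ti = np.array([100.0, 1000.0, 1900.0, 2800.0])
a, b, true_t1 = 1.0, 1.9, 1200.0
s = ir_signal(ti, a, b, true_t1 / (b / a - 1.0))   # noiseless synthetic signal
print(fit_t1(ti, s))                               # recovers ~1200 ms
```

An FCNN trained as in the paper would map the four signal samples and their inversion times directly to T1, avoiding this iterative fit at reconstruction time.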


2021 ◽  
Vol 12 ◽  
Author(s):  
Michela Masè ◽  
Alessandro Cristoforetti ◽  
Maurizio Del Greco ◽  
Flavia Ravelli

The expanding role of catheter ablation of atrial fibrillation (AF) has stimulated the development of novel mapping strategies to guide the procedure. We introduce a novel approach to characterize wave propagation and identify AF focal drivers from multipolar mapping data. The method reconstructs continuous activation patterns in the mapping area by a radial basis function (RBF) interpolation of multisite activation time series. Velocity vector fields are analytically determined, and the vector field divergence is used as a marker of focal drivers. The method was validated in a tissue-patch cellular automaton model and in an anatomically realistic left atrial (LA) model with Courtemanche–Ramirez–Nattel ionic dynamics. Divergence analysis was effective in identifying focal drivers in a complex simulated AF pattern. Localization was reliable even with a substantial reduction (47%) in the number of mapping points and in the presence of activation time misdetections (noise <10% of the cycle length). Proof-of-concept application of the method to human AF mapping data showed that divergence analysis consistently detected focal activation in the pulmonary veins and LA appendage area. These results suggest the potential of divergence analysis in combination with multipolar mapping to identify AF critical sites. Further studies on large clinical datasets may help to assess the clinical feasibility and benefit of divergence analysis for the optimization of ablation treatment.
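The interpolate-differentiate-diverge pipeline can be sketched under simplifying assumptions: a planar patch, a focal source with uniform conduction speed, scipy's RBFInterpolator as the RBF step, and the common estimate v = ∇T/|∇T|² for the velocity field from an activation-time map T. None of these stand-ins are the authors' exact implementation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(200, 2))          # simulated electrode positions
t_act = np.linalg.norm(pts, axis=1)                  # focal source at origin, unit speed

# RBF interpolation of the multisite activation times onto a regular grid
xs = np.linspace(-1.0, 1.0, 41)
gx, gy = np.meshgrid(xs, xs, indexing="ij")
grid = np.column_stack([gx.ravel(), gy.ravel()])
T = RBFInterpolator(pts, t_act, smoothing=1e-3)(grid).reshape(gx.shape)

# Velocity field v = grad(T) / |grad(T)|^2: propagation direction, magnitude = speed
dTdx, dTdy = np.gradient(T, xs, xs)
g2 = dTdx ** 2 + dTdy ** 2 + 1e-12
vx, vy = dTdx / g2, dTdy / g2

# Divergence of v: a strong positive peak marks a focal driver
div = np.gradient(vx, xs, axis=0) + np.gradient(vy, xs, axis=1)
inner = div[5:-5, 5:-5]                              # ignore boundary artifacts
i, j = np.unravel_index(np.argmax(inner), inner.shape)
print(xs[i + 5], xs[j + 5])                          # near (0, 0), the focal source
```

A target (focal) activation pattern yields outward-pointing velocity vectors, so the divergence peaks at the source, which is what the detection relies on.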


2021 ◽  
Vol 22 (24) ◽  
pp. 13598
Author(s):  
Guohua Meng ◽  
Andrea Lauria ◽  
Mara Maldotti ◽  
Francesca Anselmi ◽  
Isabelle Laurence Polignano ◽  
...  

Smad7 has been identified as a negative regulator of the transforming growth factor β (TGF-β) pathway through direct interaction with the TGF-β type I receptor (TβR-I). Although Smad7 has also been shown to play TGF-β-unrelated roles in the cytoplasm and in the nucleus, a comprehensive analysis of its nuclear function has not yet been performed. Here, we show that in embryonic stem cells (ESCs) Smad7 is mainly nuclear and acts as a general transcription factor regulating several genes unrelated to the TGF-β pathway. Loss of Smad7 results in the downregulation of several key stemness master regulators, including Pou5f1 and Zfp42, and in the upregulation of developmental genes, with consequent loss of the stem phenotype. Integrative analysis of genome-wide mapping data for Smad7 and for ESC self-renewal and pluripotency transcriptional regulators revealed that Smad7 co-occupies the promoters of highly expressed key stemness regulator genes by binding to a specific consensus response element, NCGGAAMM. Altogether, our data establish Smad7 as a new, integral component of the regulatory circuitry that controls ESC identity.


2021 ◽  
Vol 12 ◽  
Author(s):  
Jan Lebert ◽  
Namita Ravi ◽  
Flavio H. Fenton ◽  
Jan Christoph

The analysis of electrical impulse phenomena in cardiac muscle tissue is important for the diagnosis of heart rhythm disorders and other cardiac pathophysiology. Cardiac mapping techniques acquire local temporal measurements and combine them to visualize the spread of electrophysiological wave phenomena across the heart surface. However, low spatial resolution, sparse measurement locations, noise and other artifacts make it challenging to accurately visualize spatio-temporal activity. For instance, electro-anatomical catheter mapping is severely limited by the sparsity of the measurements, and optical mapping is prone to noise and motion artifacts. In the past, several approaches have been proposed to create more reliable maps from noisy or sparse mapping data. Here, we demonstrate that deep learning can be used to compute phase maps and detect phase singularities in optical mapping videos of ventricular fibrillation, as well as in very noisy, low-resolution and extremely sparse simulated data of reentrant wave chaos mimicking catheter mapping data. The self-supervised deep learning approach is fundamentally different from classical phase mapping techniques. Rather than encoding a phase signal from time-series data, a deep neural network instead learns to directly associate phase maps and the positions of phase singularities with short spatio-temporal sequences of electrical data. We tested several neural network architectures, based on a convolutional neural network (CNN) with an encoding and decoding structure, to predict phase maps or rotor core positions either directly or indirectly via the prediction of phase maps and a subsequent classical calculation of phase singularities. Predictions can be performed across different data, with models being trained on one species and then successfully applied to another, or being trained solely on simulated data and then applied to experimental data. 
Neural networks provide a promising alternative to conventional phase mapping and rotor core localization methods. Future uses may include the analysis of optical mapping studies in basic cardiovascular research, as well as the mapping of atrial fibrillation in the clinical setting.
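For contrast, the classical phase-singularity detection that such networks replace (or learn to reproduce) can be written as a winding-number computation on a phase map; the synthetic spiral phase field below is an illustrative assumption for demonstration.

```python
import numpy as np

def phase_singularities(phase):
    # Classical topological-charge detection: sum the wrapped phase differences
    # around each 2x2 plaquette; a winding of +/-2*pi marks a phase singularity
    # (rotor core).
    def wrap(d):
        return (d + np.pi) % (2.0 * np.pi) - np.pi
    d1 = wrap(phase[:-1, 1:] - phase[:-1, :-1])   # top edge, left to right
    d2 = wrap(phase[1:, 1:] - phase[:-1, 1:])     # right edge, top to bottom
    d3 = wrap(phase[1:, :-1] - phase[1:, 1:])     # bottom edge, right to left
    d4 = wrap(phase[:-1, :-1] - phase[1:, :-1])   # left edge, bottom to top
    winding = d1 + d2 + d3 + d4
    return np.argwhere(np.abs(winding) > np.pi)   # |winding| ~ 2*pi at a core

# Synthetic test pattern: near the core, a spiral wave's phase field is roughly
# the polar angle around the core (placed here between grid nodes 31 and 32)
y, x = np.mgrid[0:64, 0:64]
phase = np.arctan2(y - 31.5, x - 31.5)
print(phase_singularities(phase))                 # one core, at plaquette (31, 31)
```

The deep learning approach in the paper skips the intermediate phase encoding and maps short spatio-temporal sequences of electrical data directly to phase maps or core positions.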


2021 ◽  
Vol 4 ◽  
pp. 1-8
Author(s):  
Jonas Luft ◽  
Jochen Schiewe

Abstract. In recent years, libraries have made great progress in digitising troves of historical maps with high-resolution scanners. Providing user-friendly information access to this cultural heritage through spatial search and webGIS requires georeferencing hundreds of thousands of digitised maps.
Georeferencing is usually done manually by finding "ground control points": locations in the digital map image whose identity is unambiguous and can easily be found in modern-day reference geodata/mapping data. To decide whether two symbols from different maps describe the same object, their semantic and spatial relations need to be matched. Automating this process is the only feasible way to georeference the immense quantities of maps in conceivable time. However, automated solutions for spatial matching quickly fail when faced with incomplete data, which is the greatest challenge when comparing maps of different ages or scales.
These problems can be overcome by computing map similarity in the image domain. Treating maps as a special case of image processing allows efficient and robust matching, and thus identification of geographical regions, without the need to explicitly model semantics. We propose a method to encode worldwide reference VGI mapping data as image features, allowing the construction of an efficient lookup index. With this index, content-based image retrieval can be used to geolocate a given map for georeferencing with high accuracy. We demonstrate our approach on hundreds of map sheets from different historical topographical survey map series, successfully georeferencing most of them within mere seconds.
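The lookup-index idea can be caricatured in a few lines: encode every reference region as a feature vector, then retrieve the most similar one for a query scan. The binary-raster tile features, cosine similarity, and tile size below are stand-ins for whatever feature encoding the authors actually use.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for rasterized reference tiles (e.g. rendered from VGI
# mapping data): each tile is a small binary raster flattened to a feature vector.
tiles = (rng.random((500, 16 * 16)) > 0.5).astype(float)
index = tiles / np.linalg.norm(tiles, axis=1, keepdims=True)  # L2-normalized index

def locate(query):
    # Content-based retrieval: cosine similarity against every indexed tile
    q = query / np.linalg.norm(query)
    return int(np.argmax(index @ q))

# A "scanned historical map" of region 123: its reference tile with 5% of the
# pixels flipped, mimicking differences in age, style, and scan quality
query = tiles[123].copy()
flip = rng.choice(query.size, size=query.size // 20, replace=False)
query[flip] = 1.0 - query[flip]
print(locate(query))   # retrieves tile 123
```

In practice the index would cover the whole reference area and use features robust to style and scale differences, but the retrieval principle is the same.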


2021 ◽  
Vol 18 ◽  
pp. 100078
Author(s):  
Chanachon Paijitprapaporn ◽  
Thayathip Thongtan ◽  
Chalermchon Satirapod

2021 ◽  
Vol 13 (23) ◽  
pp. 4765
Author(s):  
Patrick Hübner ◽  
Martin Weinmann ◽  
Sven Wursthorn ◽  
Stefan Hinz

Due to their great potential for a variety of applications, digital building models are well established in all phases of building projects. Older stock buildings, however, frequently lack digital representations, and creating these manually is a tedious and time-consuming endeavor. For this reason, the automated reconstruction of building models from indoor mapping data has arisen as an active field of research. In this context, many approaches rely on simplifying suppositions about the structure of the buildings to be reconstructed, such as the well-known Manhattan World assumption. This, however, not only presupposes that a given building structure itself is compliant with this assumption, but also that the respective indoor mapping dataset is aligned with the coordinate axes. Indoor mapping systems, on the other hand, typically initialize the coordinate system arbitrarily by the sensor pose at the beginning of the mapping process. Thus, indoor mapping data need to be transformed from the local coordinate system resulting from the mapping process to a local coordinate system whose axes are aligned with the Manhattan World structure of the building. This necessary preprocessing step for many indoor reconstruction approaches is frequently known as pose normalization. In this paper, we present a novel pose-normalization method for indoor mapping point clouds and triangle meshes that is robust against large portions of the indoor mapping geometries deviating from an ideal Manhattan World structure. In the case of building structures that contain multiple Manhattan World systems, the dominant Manhattan World structure supported by the largest fraction of geometries was determined and used for alignment. In a first step, a vertical alignment orienting a chosen axis to be orthogonal to horizontal floor and ceiling surfaces was conducted. Subsequently, a rotation around the resulting vertical axis was determined that aligned the dataset horizontally with the axes of the local coordinate system. The performance of the proposed method was evaluated quantitatively on several publicly available indoor mapping datasets of different complexity. The achieved results clearly revealed that our method is able to consistently produce correct poses for the considered datasets for different input rotations with high accuracy. The implementation of our method along with the code for reproducing the evaluation is made available to the public.
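The horizontal-alignment step can be sketched as voting over wall-normal directions folded into a 90° interval, since the walls of a Manhattan World structure cluster at 90° spacings. The histogram voting, synthetic normals, and clutter fraction below are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def horizontal_alignment_angle(normals_xy, bins=360):
    # After vertical alignment, estimate the rotation that aligns the dominant
    # Manhattan World structure with the coordinate axes: fold the normal
    # directions into [0, 90) degrees and take the strongest histogram bin.
    ang = np.degrees(np.arctan2(normals_xy[:, 1], normals_xy[:, 0])) % 90.0
    hist, edges = np.histogram(ang, bins=bins, range=(0.0, 90.0))
    k = np.argmax(hist)
    return 0.5 * (edges[k] + edges[k + 1])   # estimated misalignment in degrees

# Synthetic wall normals of an axis-aligned building rotated by 17 degrees,
# plus 30% clutter deviating from the ideal Manhattan World structure
rng = np.random.default_rng(2)
base = rng.choice([0.0, 90.0, 180.0, 270.0], size=700)
clutter = rng.uniform(0.0, 360.0, size=300)
ang = np.radians(np.concatenate([base, clutter]) + 17.0)
normals = np.column_stack([np.cos(ang), np.sin(ang)])
print(horizontal_alignment_angle(normals))   # close to 17
```

Because the vote is a mode rather than a mean, the estimate is robust to the clutter, which mirrors the robustness the paper claims for geometries deviating from the Manhattan World assumption.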


2021 ◽  
Vol 2 (2) ◽  
pp. 43
Author(s):  
Reny Rian Marliana

Abstract — Statistics plays a very important role in research. By increasing their competence in statistics, researchers are able to minimize the errors that can occur and improve the quality of their research output. Based on observations, many lecturers and researchers still fixate on particular analysis methods when processing and testing research hypotheses from data collected through questionnaires. One cause is a lack of understanding of, and ability to apply, methods that suit the characteristics of the data, both in theory and in practice using software. The aim of this activity was to provide training on the PLS-SEM method using SmartPLS 3.0 for processing data obtained from questionnaires, addressed to lecturers of the Statistics courses at the Faculty of Social and Political Sciences, UIN Sunan Gunung Djati Bandung. In the training, participants were given an exposition of PLS-SEM theory, covering a review and mapping of data analysis methods, variable measurement scales, and the basic theory of PLS-SEM, together with hands-on practice using SmartPLS 3.0. The activity introduced the PLS-SEM method and improved the participants' understanding of it and their ability to apply it using SmartPLS 3.0, and thereby also encouraged the lecturers' motivation to carry out research and to transfer knowledge to their students.


2021 ◽  
Vol 21 (2) ◽  
pp. 97-107
Author(s):  
Ignasius Liliek Senaharjanta ◽  
Shella Fendista

Abstract - The development of information technology, which eases the production of and access to information, has led to the rapid circulation of information in the community. However, the information in circulation does not always contain the truth. Hoax information is intentionally produced and spread to the public through various application platforms, aimed at distorting the facts so that the information is believed to be the real truth. This condition can be seen in the ease with which people share the information they receive through their smart devices with the people closest to them or with the groups and communities in which they participate.
This research is descriptive qualitative research. The data in this study were obtained by analyzing hoax news mapping data collected by the COVID-19 handling task force through the covid19.go.id website and obtained through the Indonesian Anti-Defamation Society (Mafindo). The news and hoax information, in verbal and visual form, were analyzed with a qualitative content-analysis approach to examine how the hoax phenomenon during this pandemic was deliberately created and spread by the public. Furthermore, the results of the analysis are examined from the perspective of Jean Baudrillard's simulacra and hyperreality.
The results show that information, as the main product of the information society, is no longer dominated by information producers such as television and newspapers; now anyone who has a device and is connected to the internet can produce information. The impact of this is that humans are trapped in false reality and dwell in duplication and superficiality.
Keywords: information technology, COVID-19 hoax, simulacra, Jean Baudrillard

