accuracy indices
Recently Published Documents

TOTAL DOCUMENTS: 37 (five years: 14)
H-INDEX: 8 (five years: 3)

Author(s): Svitlana V. Fedkiv, Sergiy V. Potashev, Olha M. Unitska, Vasyl V. Lazoryshynets

Background. Left ventricular aneurysm (LVA) is a complication occurring in 5–10% of acute myocardial infarction (AMI) patients, significantly complicating the acute stage of AMI and leading to progression to advanced congestive heart failure (CHF). Non-invasive LVA visualization includes echocardiography, cardiac magnetic resonance imaging (MRI), radionuclide ventriculography, and multi-slice computed tomography (MSCT). LVA can also be detected during heart catheterization by coronary ventriculography (CVG). Each method has its advantages and drawbacks. The aim. To analyze multimodal non-invasive LVA visualization methods (echocardiography and MSCT) in order to establish the accuracy of these methods compared to CVG in the diagnosis of LVA and LVA thrombosis. Methods. We examined 60 patients after AMI with LVA admitted for surgical revascularization and left ventricular aneurysm resection (LVAR). The control group included 110 patients after AMI prior to revascularization without a history of LVA. All patients underwent CVG, heart MSCT and echocardiography prior to surgery. Results. Mean patient age was 60.9±11.4 years (46 [76.7%] men and 14 [23.3%] women), and mean LVEF was 42.7±11.1%. Significant CAD according to coronary angiography (CAG) before surgery was confirmed in 59 (98.3%) patients, and 1 (1.7%) patient had no significant coronary lesions. The majority of patients had anterior LVA localization after AMI in the LAD area (57 [95.0%] patients), 2 (3.3%) patients were diagnosed with inferior LVA after AMI in the RCA area, and 1 (1.7%) patient had posterior-lateral LVA in the Cx area. There was high correlation between LVEF obtained with echocardiography and that obtained with MSCT (r=0.955, p<0.0001), although mean LVEF obtained with echocardiography was significantly higher than the MSCT results (42.7±11.1% vs. 32.7±9.3%, p<0.0001). Comparison of the accuracy of the methods in LVA diagnosis showed that MSCT was the most precise method, with significantly higher sensitivity compared to CVG and echocardiography (94.9% vs. 75.0%, p=0.002, and 88.0%, p=0.023, respectively), and MSCT significantly exceeded CVG in all diagnostic accuracy indices. Echocardiography also significantly exceeded CVG in all diagnostic accuracy indices. Comparison of the accuracy of the methods in LVA thrombosis diagnosis showed similar results: echocardiography was much more precise in terms of sensitivity (79.4% vs. 58.8%, p<0.0001) and the remaining indices. MSCT was much more precise than CVG in terms of all indices, and also significantly exceeded echocardiography in terms of sensitivity (97.1% vs. 79.4%, p<0.0001), positive predictive value (PPV) (100.0% vs. 93.1%, p=0.0005), negative predictive value (NPV) (99.1% vs. 93.9%, p=0.0091), area under the curve (AUC) (0.99 vs. 0.89, p=0.0001) and odds ratio (OR) (3630 vs. 208, p<0.0001). Conclusions. The high correlation between LVEF according to echocardiography and MSCT allows CVG to be skipped as a method of global LV contractility evaluation, reducing the procedure time. The fact that CVG showed the lowest accuracy in the diagnosis of LVA and LVA thrombosis also allows the duration and volume of the invasive procedure to be reduced to selective CAG, decreasing radiation exposure for patients and operators in favor of non-invasive and more accurate methods (MSCT and echocardiography).
MSCT is the most accurate method for LVA thrombosis diagnosis, but it is fully comparable to echocardiography in LVA diagnosis per se, making echocardiography the method of choice for screening and stratification of patients after AMI for either myocardial revascularization alone or combined surgical revascularization with LVAR, due to its rapidity, low cost and absence of patient-related adverse effects.
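The accuracy indices compared above (sensitivity, specificity, PPV, NPV and the diagnostic odds ratio) all follow from a 2x2 table of imaging results against the reference standard. A minimal Python sketch with hypothetical counts, not the study's data:

```python
def diagnostic_indices(tp, fp, fn, tn):
    """Standard 2x2 diagnostic accuracy indices."""
    sensitivity = tp / (tp + fn)                 # true positive rate
    specificity = tn / (tn + fp)                 # true negative rate
    ppv = tp / (tp + fp)                         # positive predictive value
    npv = tn / (tn + fn)                         # negative predictive value
    accuracy = (tp + tn) / (tp + fp + fn + tn)   # overall accuracy
    odds_ratio = (tp * tn) / (fp * fn) if fp and fn else float("inf")
    return dict(sensitivity=sensitivity, specificity=specificity,
                ppv=ppv, npv=npv, accuracy=accuracy, odds_ratio=odds_ratio)

# Hypothetical counts for one imaging method against the reference standard.
print(diagnostic_indices(tp=56, fp=2, fn=3, tn=108))
```

The AUC reported in the abstract would additionally require per-patient scores or a ranking rather than the 2x2 counts alone.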


Author(s): Madson Tavares Silva, Eduardo da Silva Margalho, Edivaldo Afonso de Oliveira Serrão, Amanda Cartaxo de Souza, Caroline de Sá Soares, ...

Abstract. The type of land use and land cover plays a decisive role in land surface temperature (LST). As cities are composed of varied covers, including vegetation, built-up areas, buildings, roads and areas without vegetation, understanding LST patterns in complex urban spaces is becoming increasingly important. The present study investigated the relationship between LST and albedo, NDVI, NDWI, NDBI and NDBaI in the period between 1994 and 2017. Images from the Thematic Mapper (TM) and Thermal Infrared Sensor (TIRS) onboard the Landsat 5 and 8 satellites, respectively, were used in the study. The images were processed and resampled to a spatial resolution of 120 m in the QGIS 3.0 environment, and centroids were then extracted, resulting in a total of 1252 points. A classical regression (CR) model was applied to the variables, followed by spatial autoregressive (SARM) and spatial error (SEM) models, and the results were compared using accuracy indices. The results showed that the highest correlation coefficient was found between albedo and NDBaI (r = 0.88). The relationship between albedo and LST (r = 0.7) was also positive and significant at p < 0.05. The global Moran's I index showed spatial dependence and non-stationarity of the LST (I = 0.44). The SEM presented the best accuracy metrics (AIC = 3307.15 and R² = 0.65) for the metropolitan region of Belém, explaining considerably more of the variation in the relationship between explanatory factors and LST than the conventional CR model.
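For readers unfamiliar with the spatial-dependence test mentioned above, the global Moran's I used to justify the spatial models can be computed directly from its definition. A minimal NumPy sketch with a tiny hypothetical set of points and a binary contiguity weight matrix (illustrative only, not the study's data):

```python
import numpy as np

def morans_i(x, w):
    """Global Moran's I for values x and an n x n spatial weight matrix w."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    num = (w * np.outer(z, z)).sum()     # sum_ij w_ij * z_i * z_j
    den = (z ** 2).sum()
    return (len(x) / w.sum()) * num / den

# Hypothetical LST values at 4 centroids with symmetric binary contiguity weights.
x = [24.1, 25.3, 30.2, 31.0]
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(round(morans_i(x, w), 3))          # > 0 suggests positive spatial autocorrelation
```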


2021, Vol 11 (2), pp. 106-114
Author(s): Branko Popović

In modern Quality Engineering, the quality of process results, the quality of measuring instruments, and the quality of completed processes are considered according to their quality characteristics. However, in traditional engineering, only the quality of measuring instruments and the quality of completed processes (precision or accuracy indices PCI and PPI) can be successfully defined, while the quality of process results (semi-finished product, product, software, service) can only be described (has or does not have the required quality, good or bad quality, better or worse quality). In the modern treatment of quality, the quality of process results is now defined by a number of decibels [dB], following the discovery of the Japanese scientist Genichi Taguchi (1924-2012), with the methods of Robust Technology Development and the signal-to-noise (S/N) ratio. This paper discusses the definition of the quality of process results with one input variable and continuous characteristics, illustrated with three examples.
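Taguchi's signal-to-noise ratio expresses the quality of a process result in decibels. A minimal sketch of the three standard static S/N formulas (smaller-the-better, larger-the-better, nominal-the-best), with purely illustrative measurement values:

```python
import math

def sn_smaller_the_better(y):
    # S/N = -10 * log10( mean(y_i^2) )
    return -10 * math.log10(sum(v * v for v in y) / len(y))

def sn_larger_the_better(y):
    # S/N = -10 * log10( mean(1 / y_i^2) )
    return -10 * math.log10(sum(1 / (v * v) for v in y) / len(y))

def sn_nominal_the_best(y):
    # S/N = 10 * log10( mean^2 / variance )
    n = len(y)
    mean = sum(y) / n
    var = sum((v - mean) ** 2 for v in y) / (n - 1)
    return 10 * math.log10(mean * mean / var)

measurements = [9.8, 10.1, 10.0, 9.9]   # hypothetical repeated results, target value 10
print(round(sn_nominal_the_best(measurements), 2), "dB")
```

A higher S/N value in decibels indicates a process result that is less sensitive to noise, which is the sense in which the quality of the result is "defined by a number of decibels".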


2020, Vol 12 (21), pp. 3591
Author(s): Matheus Gabriel Acorsi, Leandro Maria Gimenez, Maurício Martello

The development of low-cost miniaturized thermal cameras has expanded the use of remotely sensed surface temperature and promoted advances in applications involving proximal and aerial data acquisition. However, deriving accurate temperature readings from these cameras is often challenging due to the sensitivity of the sensor, which changes according to its internal temperature. Moreover, the photogrammetry processing required to produce orthomosaics from aerial images can also be problematic and introduce errors into the temperature readings. In this study, we assessed the performance of the FLIR Lepton 3.5 camera in both proximal and aerial conditions based on precision and accuracy indices derived from reference temperature measurements. The aerial analysis was conducted using three flight altitudes replicated throughout the day, exploring the effect of the distance between the camera and the target, and of the blending mode configuration used to create orthomosaics. During the tests, the camera was able to deliver results within the accuracy reported by the manufacturer when using factory calibration, with a root mean square error (RMSE) of 1.08 °C for the proximal condition and ≤3.18 °C during aerial missions. Results among different flight altitudes revealed that the overall precision remained stable (R² = 0.94–0.96), in contrast to the accuracy, which decreased towards higher flight altitudes due to atmospheric attenuation, which is not accounted for by factory calibration (RMSE = 2.63–3.18 °C). The blending modes tested also influenced the final accuracy, with the best results obtained with the average (RMSE = 3.14 °C) and disabled mode (RMSE = 3.08 °C). Furthermore, empirical line calibration models using ground reference targets were tested, reducing the errors in temperature measurements by up to 1.83 °C, with a final accuracy better than 2 °C. Other important results include a simplified co-registering method developed to overcome alignment issues encountered during orthomosaic creation using non-geotagged thermal images, and a set of insights and recommendations to reduce errors when deriving temperature readings from aerial thermal imaging.
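The accuracy values quoted are RMSEs against reference temperatures, and an empirical line calibration is a linear fit between camera readings over ground reference targets and their known temperatures. A minimal NumPy sketch with invented readings (not the study's measurements):

```python
import numpy as np

camera = np.array([28.4, 31.9, 35.2, 40.8, 45.5])     # hypothetical thermal-camera readings (degC)
reference = np.array([26.9, 30.1, 33.8, 39.2, 44.0])  # hypothetical reference target temperatures (degC)

rmse_raw = np.sqrt(np.mean((camera - reference) ** 2))

# Empirical line calibration: fit reference = a * camera + b on the ground targets,
# then apply the same linear correction to the remaining pixels of the mosaic.
a, b = np.polyfit(camera, reference, 1)
corrected = a * camera + b
rmse_corrected = np.sqrt(np.mean((corrected - reference) ** 2))

print(f"RMSE before: {rmse_raw:.2f} degC, after calibration: {rmse_corrected:.2f} degC")
```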


2020, Vol 12 (12), pp. 2010
Author(s): Seyd Teymoor Seydi, Mahdi Hasanlou, Meisam Amani

The diversity of change detection (CD) methods and the limitations in generalizing these techniques across different types of remote sensing datasets and study areas have been a challenge for CD applications. Additionally, most CD methods have been implemented in two intensive and time-consuming steps: (a) predicting change areas, and (b) making decisions on the predicted areas. In this study, a novel CD framework based on the convolutional neural network (CNN) is proposed not only to address the aforementioned problems but also to considerably improve the level of accuracy. The proposed CNN-based CD network contains three parallel channels: the first and second channels, respectively, extract deep features from the original first- and second-time imagery, and the third channel focuses on the extraction of change deep features based on differencing and stacking deep features. Additionally, each channel includes three types of convolution kernels: 1D-, 2D-, and 3D-dilated-convolution. The effectiveness and reliability of the proposed CD method are evaluated using three different types of remote sensing benchmark datasets (i.e., multispectral, hyperspectral, and Polarimetric Synthetic Aperture Radar (PolSAR)). The resulting CD maps are evaluated both visually and statistically by calculating nine different accuracy indices. Moreover, the results of the CD using the proposed method are compared to those of several state-of-the-art CD algorithms. All the results show that the proposed method outperforms the other remote sensing CD techniques. For instance, considering different scenarios, the Overall Accuracies (OAs) and Kappa Coefficients (KCs) of the proposed CD method are better than 95.89% and 0.805, respectively, and the Miss Detection (MD) and False Alarm (FA) rates are lower than 12% and 3%, respectively.
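The nine accuracy indices are not enumerated in the abstract, but the headline figures (Overall Accuracy, Kappa Coefficient, Miss Detection and False Alarm rates) can all be derived from a binary change/no-change confusion matrix. A minimal sketch with hypothetical pixel counts:

```python
def cd_indices(tp, fp, fn, tn):
    """tp: changed pixels detected; fn: changed pixels missed;
    fp: unchanged pixels flagged as change; tn: unchanged pixels kept."""
    total = tp + fp + fn + tn
    oa = (tp + tn) / total                          # Overall Accuracy
    # Expected agreement by chance, for Cohen's kappa.
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / total ** 2
    kappa = (oa - pe) / (1 - pe)
    miss_detection = fn / (tp + fn)                 # share of changed pixels missed
    false_alarm = fp / (fp + tn)                    # share of unchanged pixels flagged
    return oa, kappa, miss_detection, false_alarm

# Hypothetical pixel counts for one CD scenario.
print(cd_indices(tp=9_000, fp=1_500, fn=1_000, tn=88_500))
```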


SAGE Open, 2020, Vol 10 (2), pp. 215824402093106
Author(s): Mona Tabatabaee-Yazdi

The Hierarchical Diagnostic Classification Model (HDCM) reflects the sequence in which essential materials and attributes must be presented in order to answer the items of a test correctly. In this study, a foreign-language reading comprehension test was analyzed using HDCM and the generalized deterministic-input, noisy-and-gate (G-DINA) model to determine and compare respondents' mastery profiles on the test's predefined skills, and to illustrate the relationships among the attributes involved in the test so as to capture the influence of sequential teaching of materials on the probability of answering an item correctly. Furthermore, Differential Item Functioning (DIF) analysis was applied to detect whether the test contributes to the gender gap in participants' achievement. Finally, classification consistency and accuracy indices were studied. The results showed that the G-DINA model and one of the HDCMs fit the data well. However, although the HDCM results indicated attribute dependencies in the reading comprehension test, the relative fit indices highlighted a significant difference between G-DINA and HDCM, favoring G-DINA. Moreover, the results indicate a significant difference between males and females on six items, in favor of females. In addition, the classification consistency and accuracy indices show that the Iranian University Entrance Examination has a 71% chance of categorizing a randomly selected test taker consistently across two distinct test settings and a 78% likelihood of accurately classifying a randomly selected student into the true latent class. As a result, the Iranian University Entrance Examination can be considered a valid and reliable test.
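The classification consistency and accuracy figures reported here can be estimated from each examinee's posterior probabilities over the latent attribute classes; a common estimator averages, respectively, the sum of squared posteriors and the maximum posterior. A minimal NumPy sketch with hypothetical posteriors (not the study's output):

```python
import numpy as np

# Hypothetical posterior probabilities: rows = examinees, columns = latent classes.
posterior = np.array([[0.80, 0.15, 0.05],
                      [0.10, 0.70, 0.20],
                      [0.05, 0.05, 0.90]])

# Accuracy: average posterior probability of the class each examinee is assigned to.
accuracy = posterior.max(axis=1).mean()

# Consistency: probability of the same classification on two parallel administrations,
# estimated as the average sum of squared posterior probabilities.
consistency = (posterior ** 2).sum(axis=1).mean()

print(f"classification accuracy ~ {accuracy:.2f}, consistency ~ {consistency:.2f}")
```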


2020, Vol 12 (7), pp. 2503
Author(s): Ana Paula Sena de Souza, Ivonice Sena de Souza, George Olavo, Jocimara Souza Britto Lobão, Rafael Vinícius de São José

Mapping and identification of vectors responsible for mangrove suppression in the Southern Bahia Lowlands, Brazil
Abstract. The mangrove ecosystem represents 8% of the entire coastline of the planet and occupies a total area of 181,077 km². Brazil is the second largest country in terms of mangrove area, second only to Indonesia. The aim of the present study was to map and identify the main vectors responsible for the suppression of mangrove cover in the Southern Lowlands of Bahia, Brazil, from Landsat satellite images available for the period 1994-2017. The mapping was based on supervised classification using the Maxver (maximum likelihood) method. The accuracy of the classification was verified through ground truth, Global Accuracy indices, and the kappa and Tau agreement coefficients. The classes with the largest coverage area in the analyzed period were dense ombrophilous vegetation, agriculture, exposed soil and mangrove. Two main vectors responsible for the suppression of mangrove forests were identified: the disorderly expansion of urban areas (especially in the municipality of Valença) and the advance of clandestine shrimp farming, due to the installation of shrimp ponds without the required environmental licensing process (mainly in the municipality of Nilo Peçanha). The use of geotechnologies, especially Remote Sensing and Geographic Information Systems, was fundamental in identifying these vectors responsible for the suppression of mangrove areas in the study area of the Southern Bahia Lowlands.
Keywords: environmental impacts, satellite image, shrimp farming.
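Global (overall) accuracy, the kappa coefficient and the Tau coefficient used to validate the classification are all computed from the confusion matrix between mapped classes and ground truth. A minimal sketch with an invented three-class matrix, assuming equal a priori class probabilities for Tau:

```python
import numpy as np

# Hypothetical confusion matrix: rows = mapped class, columns = reference (ground truth).
cm = np.array([[50,  3,  2],
               [ 4, 40,  6],
               [ 1,  5, 39]], dtype=float)

total = cm.sum()
po = np.trace(cm) / total                              # overall (global) accuracy

# Cohen's kappa: chance agreement estimated from the row/column marginals.
pe_kappa = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total ** 2
kappa = (po - pe_kappa) / (1 - pe_kappa)

# Tau coefficient with equal a priori probabilities (1 / number of classes).
pe_tau = 1 / cm.shape[0]
tau = (po - pe_tau) / (1 - pe_tau)

print(f"overall accuracy={po:.3f}, kappa={kappa:.3f}, tau={tau:.3f}")
```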


2020, Vol 127 (2), pp. 401-414
Author(s): Dustin B. Hammers, Sara Weisenbach

The debate over Hasher and Zacks’ effort hypothesis—that performance on effortful tasks by patients with depression will be disproportionately worse than their performance on automatic tasks—shows a need for additional research to settle whether or not this notion is “clinical lore.” In this study, we categorized 285 outpatient recipients of neuropsychological evaluations into three groups—No Depression, Mild-to-Moderate Depression, and Severe Depression—based on their Beck Depression Inventory-II self-reports. We then compared these groups’ performances on both “automatic” and “effortful” versions of the Ruff 2 & 7 Selective Attention Test Total Speed and Total Accuracy Indices, the Digit Span subtest from the Wechsler Adult Intelligence Scale–Fourth Edition, and Trail Making Test Parts A and B, using a two-way (3 × 2) mixed multivariate analysis of variance. Patients with Mild-to-Moderate Depression or Severe Depression performed disproportionately worse than patients with No Depression on the more effortful versions of only one of the four attention or executive functioning measures (the Trail Making Test). Thus, these data failed to fully support a hypothesis of disproportionately worse performance on more effortful tasks. While this study did not negate the effort hypothesis in some specific instances, particularly for the Trail Making Test, there is cause for caution in routinely applying the effort hypothesis when interpreting test findings in most clinical settings and for most measures.
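The disproportionality claim in the effort hypothesis corresponds to the group-by-task-type interaction in the 3 × 2 mixed design; a quick proxy for that interaction is to compare effortful-minus-automatic difference scores across the three depression groups. A minimal SciPy sketch with simulated scores (purely illustrative, not the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-patient scores on the "automatic" task version for each group.
automatic = {
    "no_depression":    rng.normal(50, 8, 40),
    "mild_to_moderate": rng.normal(48, 8, 40),
    "severe":           rng.normal(47, 8, 40),
}
# "Effortful" scores: subtract a group-specific drop plus noise (purely illustrative).
drop = {"no_depression": 2, "mild_to_moderate": 5, "severe": 8}
effortful = {name: auto - drop[name] + rng.normal(0, 4, auto.size)
             for name, auto in automatic.items()}

# Difference scores capture the disproportionality the effort hypothesis predicts;
# a one-way ANOVA on them is a proxy for the group x task-type interaction.
diffs = [automatic[name] - effortful[name] for name in automatic]
f_stat, p_value = stats.f_oneway(*diffs)
print(f"group x task-type check: F={f_stat:.2f}, p={p_value:.4f}")
```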


Author(s): Yang Yu, Yasushi Makihara, Yasushi Yagi

Abstract. We address a method of pedestrian segmentation in a video in a spatio-temporally consistent way. For this purpose, given a bounding box sequence for each pedestrian obtained by a conventional pedestrian detector and tracker, we construct a spatio-temporal graph on the video and segment each pedestrian within a well-established graph-cut segmentation framework. More specifically, we consider three terms in the energy function for the graph-cut segmentation: (1) a data term, (2) a spatial pairwise term, and (3) a temporal pairwise term. To maintain better temporal consistency of segmentation even under relatively large motions, we introduce a transportation minimization framework that provides temporal correspondence. Moreover, we introduce the edge-sticky superpixel to maintain the spatial consistency of object boundaries. In experiments, we demonstrate that the proposed method improves segmentation accuracy indices, such as the average and weighted intersection over union, on the TUD datasets and the PETS2009 dataset at both the instance level and the semantic level.
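Intersection over union (IoU), the segmentation accuracy index referenced above, compares a predicted pedestrian mask with its ground-truth mask. A minimal NumPy sketch on tiny hypothetical masks:

```python
import numpy as np

def iou(pred, gt):
    """Intersection over union of two boolean masks of equal shape."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return intersection / union if union else 1.0

# Tiny hypothetical masks (1 = pedestrian pixel).
pred = np.array([[0, 1, 1],
                 [0, 1, 1],
                 [0, 0, 1]])
gt   = np.array([[0, 1, 1],
                 [1, 1, 1],
                 [0, 0, 0]])
print(round(iou(pred, gt), 3))   # per-instance IoU; averaging over instances gives the mean IoU
```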

