human observer
Recently Published Documents


TOTAL DOCUMENTS

353
(FIVE YEARS 97)

H-INDEX

29
(FIVE YEARS 4)

2022 ◽  
Author(s):  
Chi Zhang ◽  
Arthur Porto ◽  
Sara Rolfe ◽  
Altan Kocatulum ◽  
A. Murat Maga

Geometric morphometrics based on landmark data has been increasingly used in biomedical and biological research to quantify complex phenotypes. However, manual landmarking is laborious and subject to intra- and interobserver error, which has motivated the development of automated landmarking methods. We recently introduced ALPACA (Automated Landmarking through Point cloud Alignment and Correspondence), a fast method within the SlicerMorph toolkit that automatically annotates landmarks using a landmark template. Yet a single template may not perform consistently well across large study samples, especially when the sample consists of specimens with highly variable morphology, as is common in evolutionary studies. In this study, we introduce a variation on our ALPACA pipeline that supports multiple specimen templates, which we call MALPACA. By testing on two different datasets, we show that MALPACA consistently outperforms ALPACA. We also introduce a method for choosing templates that can be used in conjunction with MALPACA when no prior information is available. This K-means method uses an approximation of the total morphological variation in the dataset to suggest specimens from the population to serve as landmark templates. While we advise investigators to pay careful attention to template selection in any template-based automated landmarking approach, our analyses show that the K-means-based method of template selection is better than choosing templates at random. In summary, MALPACA can accommodate the larger morphological disparity commonly found in evolutionary studies, with performance comparable to that of a human observer.
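The K-means template selection described above can be sketched as follows. This is a minimal illustration, not the authors' SlicerMorph implementation: the `select_templates` helper, the flattened-landmark representation, and the "pick the specimen nearest each centroid" rule are all assumptions for the sketch.

```python
# Sketch of K-means-based template selection: cluster specimens by shape
# and pick the specimen nearest each cluster centroid as a landmark template.
import numpy as np
from sklearn.cluster import KMeans

def select_templates(landmarks, n_templates, seed=0):
    """landmarks: (n_specimens, n_points, 3) array of 3D landmark sets.
    Returns sorted indices of specimens to use as templates."""
    X = landmarks.reshape(len(landmarks), -1)  # flatten each specimen
    km = KMeans(n_clusters=n_templates, n_init=10, random_state=seed).fit(X)
    templates = []
    for c in range(n_templates):
        members = np.where(km.labels_ == c)[0]
        # specimen closest to this cluster's centroid
        d = np.linalg.norm(X[members] - km.cluster_centers_[c], axis=1)
        templates.append(int(members[np.argmin(d)]))
    return sorted(templates)
```

With two well-separated morphotypes in the sample, this picks one template from each, which is the behavior the abstract's multi-template rationale relies on.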


2021 ◽  
Author(s):  
Isaac Shiri ◽  
Yazdan Salimi ◽  
Masoumeh Pakbin ◽  
Ghasem Hajianfar ◽  
Atlas Haddadi Avval ◽  
...  

Objective: In this large multi-institutional study, we aimed to analyze the prognostic power of computed tomography (CT)-based radiomics models in COVID-19 patients.

Methods: CT images of 14,339 COVID-19 patients with overall survival outcomes were collected from 19 medical centers. Whole-lung segmentations were performed automatically using a previously validated deep learning-based model, and regions of interest were further evaluated and modified by a human observer. All images were resampled to an isotropic voxel size, intensities were discretized into 64 bins, and 105 radiomics features, including shape, intensity, and texture features, were extracted from the lung mask. Radiomics features were normalized using Z-score normalization, and highly correlated features (Pearson R² > 0.99) were eliminated. To overcome class imbalance, we applied the Synthetic Minority Oversampling Technique (SMOTE) to the training set only. We used four feature selection algorithms, namely Analysis of Variance (ANOVA), Kruskal-Wallis (KW), Recursive Feature Elimination (RFE), and Relief. For the classification task, we used seven classifiers: Logistic Regression (LR), Least Absolute Shrinkage and Selection Operator (LASSO), Linear Discriminant Analysis (LDA), Random Forest (RF), AdaBoost (AB), Naïve Bayes (NB), and Multilayer Perceptron (MLP). The models were built and evaluated using training and testing sets, respectively. Specifically, we evaluated the models using 10 different splitting and cross-validation strategies, including different types of test datasets (e.g., non-harmonized vs. ComBat-harmonized). Sensitivity, specificity, and the area under the receiver operating characteristic (ROC) curve (AUC) were reported for model evaluation.

Results: In the test dataset (n = 4,301) consisting of CT- and/or RT-PCR-positive cases, the ANOVA feature selector + RF classifier achieved an AUC of 0.83±0.01 (95% CI: 0.81-0.85), with sensitivity of 0.81 and specificity of 0.72. In the RT-PCR-only positive test set (n = 3,644), similar results were achieved, with no statistically significant difference. In the ComBat-harmonized dataset, the Relief feature selector + RF classifier achieved the highest performance, with an AUC of 0.83±0.01 (95% CI: 0.81-0.85) and sensitivity and specificity of 0.77 and 0.74, respectively. At the same time, ComBat harmonization did not yield a statistically significant improvement over the non-harmonized dataset. In leave-one-center-out validation, the combination of the ANOVA feature selector and LR classifier achieved the highest performance, with an AUC of 0.80±0.084 and sensitivity and specificity of 0.77±0.11 and 0.76±0.075, respectively.

Conclusion: Lung CT radiomics features can be used for robust prognostic modeling of COVID-19 in large heterogeneous datasets gathered from multiple centers. As such, CT radiomics-based models have significant potential for use in prospective clinical settings toward improved management of COVID-19 patients.
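The preprocessing and modeling chain described in the Methods (Z-score normalization, Pearson correlation filtering, ANOVA feature selection, RF classification) can be sketched with scikit-learn. This is a dependency-light sketch, not the study's code: SMOTE oversampling (from the separate imbalanced-learn package) is omitted, and the correlation threshold and feature count are illustrative assumptions.

```python
# Sketch: correlation filter, then z-score -> ANOVA selection -> random forest.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline

def drop_correlated(X, r2_thresh=0.99):
    """Return column indices kept after removing one of each highly
    correlated feature pair (pairwise Pearson R^2 above the threshold)."""
    r2 = np.corrcoef(X, rowvar=False) ** 2
    keep = []
    for j in range(X.shape[1]):
        if all(r2[j, k] <= r2_thresh for k in keep):
            keep.append(j)
    return keep

def build_model(k_features=10):
    return Pipeline([
        ("zscore", StandardScaler()),
        ("anova", SelectKBest(f_classif, k=k_features)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ])
```

Wrapping the steps in a `Pipeline` keeps normalization and selection inside the training fold, which matters for the abstract's multiple splitting/cross-validation strategies.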


Biology ◽  
2021 ◽  
Vol 10 (12) ◽  
pp. 1269
Author(s):  
Nicholas Bacci ◽  
Joshua G. Davimes ◽  
Maryna Steyn ◽  
Nanette Briers

Global escalation of crime has necessitated the use of digital imagery to aid the identification of perpetrators. Forensic facial comparison (FFC) is increasingly employed, often relying on poor-quality images. In the absence of standardized criteria, especially in terms of video recordings, verification of the methodology is needed. This paper addresses aspects of FFC, discussing relevant terminology, investigating the validity and reliability of the FISWG morphological feature list using a new South African database, and advising on standards for CCTV equipment. Suboptimal conditions, including poor resolution, unfavorable angle of incidence, color, and lighting, affected the accuracy of FFC. Morphological analysis of photographs, standard CCTV, and eye-level CCTV showed improved performance in a strict iteration analysis, but not when using analogue CCTV images. Therefore, both strict and lenient iterations should be conducted, but FFC must be abandoned when a strict iteration performs worse than a lenient one. This threshold ought to be applied to the specific CCTV equipment to determine its utility. Chance-corrected accuracy was the most representative measure of accuracy, as opposed to the commonly used hit rate. While the use of automated systems is increasing, trained human observer-based morphological analysis, using the FISWG feature list and an Analysis, Comparison, Evaluation, and Verification (ACE-V) approach, should be the primary method of facial comparison.
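To illustrate why chance-corrected accuracy can be more representative than the raw hit rate, consider Cohen's kappa, one common chance-corrected measure (the paper's exact statistic may differ; this helper is an illustration, not its method):

```python
# Cohen's kappa: observed agreement corrected for the agreement expected
# by chance given each rater's label frequencies.
def cohens_kappa(truth, pred):
    labels = sorted(set(truth) | set(pred))
    n = len(truth)
    p_obs = sum(t == p for t, p in zip(truth, pred)) / n
    p_chance = sum(
        (truth.count(l) / n) * (pred.count(l) / n) for l in labels
    )
    return (p_obs - p_chance) / (1 - p_chance)
```

For example, an examiner who always answers "match" on a test set that is 90% matches scores a 0.9 hit rate but a kappa of 0, exposing the inflated hit rate.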


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Hanan S. Al-Saadi ◽  
Ahmed Elhadad ◽  
A. Ghareeb

Watermarking techniques across a wide range of digital media have been utilized to hide or embed a message in a host cover in such a way that it is invisible to a human observer. This study aims to develop an enhanced, fast, and blind method for producing a watermarked 3D object from QR code images with high imperceptibility and transparency. The proposed method operates in the spatial domain. It starts by converting the 3D object's triangles from the three-dimensional Cartesian coordinate system to the two-dimensional coordinate domain using the corresponding transformation matrix, then applies a direct modification to the third vertex of each triangle. Each triangle's coordinates in the 3D object can thus embed one pixel of the QR code image. In the extraction process, the QR code pixels can be successfully recovered without the need for the original image. The imperceptibility and transparency of the proposed watermarking algorithm were evaluated using Euclidean, Manhattan, cosine, and correlation distance values. The proposed method was tested under various filtering attacks, such as rotation, scaling, and translation, and improved the robustness and visibility of the extracted QR code image. The results reveal that the proposed watermarking method yields watermarked 3D objects with excellent execution time, imperceptibility, and robustness to common filtering attacks.
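The one-bit-per-triangle idea can be illustrated with a simplified quantization scheme. This is an assumed stand-in for illustration only: the paper's 2D coordinate transformation and exact third-vertex modification are different, and `STEP`, `embed_bit`, and `extract_bit` are names invented for the sketch.

```python
# Simplified per-triangle embedding: each triangle stores one bit by
# snapping a coordinate of its third vertex to an even/odd multiple of STEP.
import numpy as np

STEP = 1e-3  # quantization step; smaller steps perturb the mesh less

def embed_bit(vertex, bit):
    v = vertex.copy()
    q = int(round(v[2] / STEP))
    if q % 2 != bit:        # parity of the quantized z encodes the bit
        q += 1
    v[2] = q * STEP
    return v

def extract_bit(vertex):
    return int(round(vertex[2] / STEP)) % 2

def watermark_mesh(third_vertices, bits):
    """third_vertices: (n, 3) array holding each triangle's third vertex."""
    return np.array([embed_bit(v, b) for v, b in zip(third_vertices, bits)])
```

Extraction needs only the watermarked vertices, mirroring the blind property claimed in the abstract; each vertex moves by at most 1.5 quantization steps.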


2021 ◽  
Vol 4 ◽  
Author(s):  
Peta Masters ◽  
Wally Smith ◽  
Michael Kirley

The “science of magic” has lately emerged as a new field of study, providing valuable insights into the nature of human perception and cognition. While most of us think of magic as being all about deception and perceptual “tricks”, the craft—as documented by psychologists and professional magicians—provides a rare practical demonstration and understanding of goal recognition. For the purposes of human-aware planning, goal recognition involves predicting what a human observer is most likely to understand from a sequence of actions. Magicians perform sequences of actions with keen awareness of what an audience will understand from them and—in order to subvert it—the ability to predict precisely what an observer’s expectation is most likely to be. Magicians can do this without needing to know any personal details about their audience and without making any significant modification to their routine from one performance to the next. That is, the actions they perform are reliably interpreted by any human observer in such a way that particular (albeit erroneous) goals are predicted every time. This is achievable because people’s perception, cognition and sense-making are predictably fallible. Moreover, in the context of magic, the principles underlying human fallibility are not only well-articulated but empirically proven. In recent work we demonstrated how aspects of human cognition could be incorporated into a standard model of goal recognition, showing that—even though phenomena may be “fully observable” in that nothing prevents them from being observed—not all are noticed, not all are encoded or remembered, and few are remembered indefinitely. In the current article, we revisit those findings from a different angle. We first explore established principles from the science of magic, then recontextualise and build on our model of extended goal recognition in the context of those principles. 
While our extensions relate primarily to observations, this work extends and explains the definitions, showing how incidental (and apparently incidental) behaviours may significantly influence human memory and belief. We conclude by discussing additional ways in which magic can inform models of goal recognition and the light that this sheds on the persistence of conspiracy theories in the face of compelling contradictory evidence.


2021 ◽  
Vol 15 ◽  
Author(s):  
Iman Chatterjee ◽  
Maja Goršič ◽  
Joshua D. Clapp ◽  
Domen Novak

Physiological responses of two interacting individuals contain a wealth of information about the dyad: for example, the degree of engagement or trust. However, nearly all studies on dyadic physiological responses have targeted group-level analysis: e.g., correlating physiology and engagement in a large sample. Conversely, this paper presents a study where physiological measurements are combined with machine learning algorithms to dynamically estimate the engagement of individual dyads. Sixteen dyads completed 15-min naturalistic conversations and self-reported their engagement on a visual analog scale every 60 s. Four physiological signals (electrocardiography, skin conductance, respiration, skin temperature) were recorded, and both individual physiological features (e.g., each participant’s heart rate) and synchrony features (indicating the degree of physiological similarity between the two participants) were extracted. Several regression algorithms were used to estimate self-reported engagement based on physiological features using either leave-interval-out cross-validation (training on 14 60-s intervals from a dyad and testing on the 15th interval from the same dyad) or leave-dyad-out cross-validation (training on 15 dyads and testing on the 16th). In leave-interval-out cross-validation, the regression algorithms achieved accuracy similar to a ‘baseline’ estimator that simply took the median engagement of the other 14 intervals. In leave-dyad-out cross-validation, machine learning achieved slightly higher accuracy than the baseline estimator and higher accuracy than an independent human observer. Secondary analyses showed that removing synchrony features and personality characteristics from the input dataset negatively impacted estimation accuracy and that engagement estimation error was correlated with personality traits.
Results demonstrate the feasibility of dynamically estimating interpersonal engagement during naturalistic conversation using physiological measurements, which has potential applications in both conversation monitoring and conversation enhancement. However, as many of our estimation errors are difficult to contextualize, further work is needed to determine acceptable estimation accuracies.
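The leave-dyad-out scheme and the median baseline described above can be sketched as follows. This is illustrative only: `Ridge` stands in for the several regression algorithms the study compared, and the helper name and error metric are assumptions.

```python
# Leave-dyad-out cross-validation: hold out all intervals from one dyad,
# train on the rest, and compare a regressor against a median baseline.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneGroupOut

def leave_dyad_out_errors(X, y, dyad_ids):
    """Return mean absolute error of a regressor vs. a baseline that
    predicts the median engagement of the training intervals."""
    model_err, base_err = [], []
    for tr, te in LeaveOneGroupOut().split(X, y, groups=dyad_ids):
        pred = Ridge().fit(X[tr], y[tr]).predict(X[te])
        model_err.extend(np.abs(pred - y[te]))
        base_err.extend(np.abs(np.median(y[tr]) - y[te]))
    return float(np.mean(model_err)), float(np.mean(base_err))
```

Grouping the split by dyad (rather than by interval) is what keeps intervals from the test dyad out of training, which is the property that distinguishes the two evaluation schemes in the abstract.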


PLoS ONE ◽  
2021 ◽  
Vol 16 (10) ◽  
pp. e0258672
Author(s):  
Gabriel Carreira Lencioni ◽  
Rafael Vieira de Sousa ◽  
Edson José de Souza Sardinha ◽  
Rodrigo Romero Corrêa ◽  
Adroaldo José Zanella

The aim of this study was to develop and evaluate a machine vision algorithm to assess the pain level in horses, using an automatic computational classifier based on the Horse Grimace Scale (HGS) and trained by machine learning methods. Use of the Horse Grimace Scale depends on a human observer, who is usually not available to evaluate the animal for long periods and who must also be well trained to apply the evaluation system correctly. In addition, even with adequate training, the presence of an unknown person near an animal in pain can cause behavioral changes, making the evaluation more complex. As a possible solution, an automatic video-imaging system could monitor pain responses in horses more accurately and in real time, allowing earlier diagnosis and more efficient treatment for the affected animals. This study is based on the assessment of facial expressions of 7 horses that underwent castration, collected through a video system positioned on top of the feeder station, capturing images at 4 distinct timepoints daily for two days before and four days after surgical castration. A labeling process was applied to build a database of pain-related facial images, and machine learning methods were used to train the computational pain classifier. The machine vision algorithm was developed by training a Convolutional Neural Network (CNN), which achieved an overall accuracy of 75.8% when classifying pain on three levels: not present, moderately present, and obviously present. When classifying between two categories (pain not present and pain present), the overall accuracy reached 88.3%. Although some improvements are needed before the system can be used in a daily routine, the model appears promising and capable of automatically measuring pain from video images of horses' facial expressions.
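The relationship between the three-level (75.8%) and two-level (88.3%) accuracies can be made concrete with a small helper. This is an illustration of the metric, not the study's evaluation code; the label encoding (0 = pain not present) and function name are assumptions.

```python
# Collapsing the three pain levels into a binary present/absent decision:
# confusion between "moderately present" and "obviously present" no longer
# counts as an error, so binary accuracy is at least the three-class one.
import numpy as np

def accuracies(truth, pred):
    """truth/pred: labels 0 (not present), 1 (moderate), 2 (obvious)."""
    truth, pred = np.asarray(truth), np.asarray(pred)
    acc3 = np.mean(truth == pred)
    acc2 = np.mean((truth > 0) == (pred > 0))
    return float(acc3), float(acc2)
```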


Diagnostics ◽  
2021 ◽  
Vol 11 (10) ◽  
pp. 1932
Author(s):  
Ahmed Jibril Abdi ◽  
Bo Mussmann ◽  
Alistair Mackenzie ◽  
Oke Gerke ◽  
Gitte Maria Jørgensen ◽  
...  

The purpose of this study was to assess the image quality of the low dose 2D/3D slot scanner (LDSS) imaging system compared to conventional digital radiography (DR) imaging systems. Visual image quality was assessed using the visual grading analysis (VGA) method, a subjective approach in which a human observer evaluates and optimises radiographic images across different imaging technologies. Methods and materials: Ten posterior-anterior (PA) and ten lateral (LAT) images of an anthropomorphic chest phantom and a knee phantom were acquired with an LDSS imaging system and two conventional DR imaging systems. The images were shown in random order to three radiologists (chest) and three experienced radiographers (knee), who scored the images against a number of criteria. Inter- and intraobserver agreement was assessed using Fleiss' kappa and weighted kappa. Results: Comparison of agreement between the observers showed good interobserver agreement, with Fleiss' kappa coefficients of 0.27–0.63 and 0.23–0.45 for the chest and knee protocols, respectively. Comparison of intraobserver agreement also showed good agreement, with weighted kappa coefficients of 0.27–0.63 and 0.23–0.45 for the chest and knee protocols, respectively. The LDSS imaging system achieved significantly higher VGA image quality than the DR imaging systems in the AP and LAT chest protocols (p < 0.001). However, in the knee protocol, the LDSS imaging system achieved lower image quality than one DR system (p ≤ 0.016) and image quality equivalent to the other DR system (p ≤ 0.27). The LDSS imaging system achieved effective dose savings of 33–52% for the chest protocol and 30–35% for the knee protocol compared with the DR systems. Conclusions: This work has shown that the LDSS imaging system has the potential to acquire chest and knee images of diagnostic quality at a lower effective dose than DR systems.
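Fleiss' kappa, used above for interobserver agreement, can be computed from a table of rating counts. This is a minimal sketch under the standard definition; the `fleiss_kappa` helper name and input layout are assumptions, not the study's code.

```python
# Fleiss' kappa from a counts table: rows are rated items, columns are
# categories, entries are how many raters chose each category for that item.
import numpy as np

def fleiss_kappa(counts):
    counts = np.asarray(counts, dtype=float)
    n_raters = counts[0].sum()
    # per-item observed agreement among rater pairs
    p_i = (np.sum(counts ** 2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_obs = p_i.mean()
    # chance agreement from the overall category proportions
    p_j = counts.sum(axis=0) / counts.sum()
    p_chance = np.sum(p_j ** 2)
    return float((p_obs - p_chance) / (1 - p_chance))
```

Perfect agreement yields kappa = 1, while raters splitting evenly across categories yields a negative value, which is why kappa ranges like 0.27–0.63 are reported alongside raw scores.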


Neurology ◽  
2021 ◽  
pp. 10.1212/WNL.0000000000012884
Author(s):  
Hugo Vrenken ◽  
Mark Jenkinson ◽  
Dzung Pham ◽  
Charles R.G. Guttmann ◽  
Deborah Pareto ◽  
...  

Multiple sclerosis (MS) patients have heterogeneous clinical presentations, symptoms and progression over time, making MS difficult to assess and comprehend in vivo. The combination of large-scale data-sharing and artificial intelligence creates new opportunities for monitoring and understanding MS using magnetic resonance imaging (MRI). First, development of validated MS-specific image analysis methods can be boosted by verified reference, test and benchmark imaging data. Using detailed expert annotations, artificial intelligence algorithms can be trained on such MS-specific data. Second, understanding disease processes could be greatly advanced through shared data of large MS cohorts with clinical, demographic and treatment information. Relevant patterns in such data that may be imperceptible to a human observer could be detected through artificial intelligence techniques. This applies from image analysis (lesions, atrophy or functional network changes) to large multi-domain datasets (imaging, cognition, clinical disability, genetics, etc.). After reviewing data-sharing and artificial intelligence, this paper highlights three areas that offer strong opportunities for making advances in the next few years: crowdsourcing, personal data protection, and organized analysis challenges. Difficulties as well as specific recommendations to overcome them are discussed, in order to best leverage data sharing and artificial intelligence to improve image analysis, imaging and the understanding of MS.


2021 ◽  
Vol 118 (38) ◽  
pp. e2024966118
Author(s):  
Sarah Nicholas ◽  
Karin Nordström

For the human observer, it can be difficult to follow the motion of small objects, especially when they move against background clutter. In contrast, insects efficiently do this, as evidenced by their ability to capture prey, pursue conspecifics, or defend territories, even in highly textured surrounds. We here recorded from target selective descending neurons (TSDNs), which likely subserve these impressive behaviors. To simulate the type of optic flow that would be generated by the pursuer’s own movements through the world, we used the motion of a perspective corrected sparse dot field. We show that hoverfly TSDN responses to target motion are suppressed when such optic flow moves syn-directional to the target. Indeed, neural responses are strongly suppressed when targets move over either translational sideslip or rotational yaw. More strikingly, we show that TSDNs are facilitated by optic flow moving counterdirectional to the target, if the target moves horizontally. Furthermore, we show that a small, frontal spatial window of optic flow is enough to fully facilitate or suppress TSDN responses to target motion. We argue that such TSDN response facilitation could be beneficial in modulating corrective turns during target pursuit.

