TIBIAL NERVE PERINEURAL INJECTIONS AT THE POSTERIOR TARSAL TUNNEL

2013 ◽  
Vol 16 (03) ◽  
pp. 1350014
Author(s):  
Oliver C. Joseph ◽  
Oleg Uryasev ◽  
John P. McNamara ◽  
Apostolos P. Dallas

Introduction: Posterior tarsal tunnel syndrome (PostTTS) refers to compression of the tibial nerve (TN) within the posterior tarsal tunnel. PostTTS is most commonly secondary to entrapment with subsequent inflammation. As is true with other entrapment-type neuropathies, corticosteroid injections could provide therapeutic relief. To the authors' knowledge, the feasibility of such injections under ultrasound guidance has not been described in the literature. We hypothesize that the TN perineural space immediately proximal to the posterior tarsal tunnel can be injected under ultrasound (US) guidance. Methods: This was a pilot study using four cadaveric models. US was used to image the proximal posterior tarsal tunnel. Perineural injections of methylene blue were performed, followed by dissection. Injections were classified as accurate (nerve staining) and precise (dye localization). Results: One cadaver was excluded due to a pronounced musculoskeletal abnormality. Five of six (83%) injections were accurate and six of six (100%) were precise. Conclusion: The initial attempt was inaccurate but precise, while later injections were both accurate and precise. The most apparent source of error was one cadaver's pronounced musculoskeletal deformity, which precluded successful injections bilaterally. In the three cadavers unaffected by musculoskeletal deformity, accuracy was five of six (83%) and precision was six of six (100%). While surgery is the definitive treatment for refractory PostTTS, the therapeutic effect of corticosteroid injections has not been evaluated in this patient population. Such injections could provide symptomatic relief and postpone surgical intervention. The small sample size notwithstanding, the results suggest that TN perineural injections are feasible under US guidance. This study suggests that US guidance can increase accuracy and precision and is a potential adjunct to treatment. Future work will expand the initial data set and establish a consistent protocol. Subsequent translational research will then evaluate therapeutic efficacy in this patient population.

Author(s):  
Carlos Eduardo Thomaz ◽  
Vagner do Amaral ◽  
Gilson Antonio Giraldi ◽  
Edson Caoru Kitani ◽  
João Ricardo Sato ◽  
...  

This chapter describes a multi-linear discriminant method for constructing and quantifying statistically significant changes in human identity photographs. The approach is based on a general multivariate two-stage linear framework that addresses the small sample size problem in high-dimensional spaces. Starting with a 2D data set of frontal face images, the authors determine the most characteristic direction of change by organizing the data according to the patterns of interest. Experiments on publicly available face image sets show that the multi-linear approach produces visually plausible results for gender, facial expression, and aging facial changes in a simple and efficient way. The authors believe that such an approach could be widely applied for modeling and reconstruction in face recognition, and possibly for identifying subjects after a lapse of time.
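The two-stage framework itself is not detailed in this abstract; as a rough illustration of the general idea (dimensionality reduction followed by a discriminant direction of change), here is a minimal sketch assuming a generic PCA-plus-Fisher-discriminant pipeline on toy data, not the authors' exact multi-linear formulation:

```python
# Minimal sketch: two-stage linear "direction of change" on flattened face images.
# This is a generic PCA + LDA pipeline, NOT the authors' multi-linear method;
# the image size, labels, and data below are invented for illustration.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 64 * 64))   # 100 flattened 64x64 "face images" (toy data)
y = rng.integers(0, 2, size=100)      # hypothetical labels, e.g. 0 = neutral, 1 = smiling

# Stage 1: PCA copes with the small-sample-size problem (n samples << n pixels).
pca = PCA(n_components=50).fit(X)
Z = pca.transform(X)

# Stage 2: a Fisher discriminant direction separating the two groups in PCA space.
lda = LinearDiscriminantAnalysis(n_components=1).fit(Z, y)
w = lda.scalings_[:, 0]                # discriminant axis in PCA coordinates
direction = pca.components_.T @ w      # map the axis back to pixel space
direction /= np.linalg.norm(direction)

# "Move" one face along the direction of change and reshape it back to an image.
morphed = (X[0] + 3.0 * direction).reshape(64, 64)
print(morphed.shape)
```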


Author(s):  
Xiaoyu Lu ◽  
Szu-Wei Tu ◽  
Wennan Chang ◽  
Changlin Wan ◽  
Jiashi Wang ◽  
...  

Abstract Deconvolution of mouse transcriptomic data is challenged by the fact that mouse models carry various genetic and physiological perturbations, making it questionable to assume fixed cell types and cell type marker genes across different data set scenarios. We developed a Semi-Supervised Mouse data Deconvolution (SSMD) method to study the mouse tissue microenvironment. SSMD features (i) a novel nonparametric method to discover data set-specific cell type signature genes; (ii) a community detection approach for identifying cell types and their marker genes; and (iii) a constrained matrix decomposition method to solve for cell type relative proportions that is robust to diverse experimental platforms. In summary, SSMD addresses several key challenges in the deconvolution of mouse tissue data, including (i) varied cell types and marker genes caused by highly divergent genotypic and phenotypic conditions of mouse experiments; (ii) diverse experimental platforms of mouse transcriptomics data; and (iii) small sample sizes and limited training data. In addition, SSMD is capable of estimating the proportions of 35 cell types in blood, inflammatory, central nervous, or hematopoietic systems. In silico and experimental validation demonstrated SSMD's high sensitivity and accuracy in identifying (sub)cell types and predicting cell proportions compared with state-of-the-art methods. A user-friendly R package and a web server for SSMD are available via https://github.com/xiaoyulu95/SSMD.
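SSMD's actual algorithm is not reproduced here; as a hedged sketch of the generic deconvolution step it builds on (estimating cell-type proportions from a signature matrix under a non-negativity constraint), the following uses non-negative least squares on invented toy data:

```python
# Minimal sketch of the generic deconvolution step: given a signature matrix of
# marker-gene expression per cell type, estimate per-sample cell-type proportions
# with non-negative least squares. This only illustrates constrained matrix
# decomposition in general; it is not SSMD, and all data here are synthetic.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_genes, n_cell_types, n_samples = 200, 5, 12

S = rng.gamma(2.0, 1.0, size=(n_genes, n_cell_types))                 # signature matrix (toy)
true_P = rng.dirichlet(np.ones(n_cell_types), size=n_samples).T       # true proportions
B = S @ true_P + rng.normal(scale=0.05, size=(n_genes, n_samples))    # "bulk" expression

# Solve B ~= S P column by column with P >= 0, then renormalize to proportions.
P_hat = np.column_stack([nnls(S, B[:, j])[0] for j in range(n_samples)])
P_hat /= P_hat.sum(axis=0, keepdims=True)
print("max absolute error in estimated proportions:", np.abs(P_hat - true_P).max())
```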


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Stefan Lenz ◽  
Moritz Hess ◽  
Harald Binder

Abstract Background The best way to calculate statistics from medical data is to use the data of individual patients. In some settings, this data is difficult to obtain due to privacy restrictions. In Germany, for example, it is not possible to pool routine data from different hospitals for research purposes without the consent of the patients. Methods The DataSHIELD software provides an infrastructure and a set of statistical methods for joint, privacy-preserving analyses of distributed data. The algorithms it contains are reformulated to work with aggregated data from the participating sites instead of individual-level data. If a desired algorithm is not implemented in DataSHIELD or cannot be reformulated in such a way, using artificial data is an alternative. Generating artificial data is possible using so-called generative models, which are able to capture the distribution of given data. Here, we employ deep Boltzmann machines (DBMs) as generative models. For the implementation, we use the package “BoltzmannMachines” from the Julia programming language and wrap it for use with DataSHIELD, which is based on R. Results We present a methodology together with a software implementation that builds on DataSHIELD to create artificial data that preserve complex patterns from distributed individual patient data. Such data sets of artificial patients, which are not linked to real patients, can then be used for joint analyses. As an exemplary application, we conduct a distributed analysis with DBMs on a synthetic data set, which simulates genetic variant data. Patterns from the original data can be recovered in the artificial data using hierarchical clustering of the virtual patients, demonstrating the feasibility of the approach. Additionally, we compare DBMs, variational autoencoders, generative adversarial networks, and multivariate imputation as generative approaches by assessing the utility and disclosure risk of synthetic data generated from real genetic variant data in a distributed setting with a small sample size. Conclusions Our implementation adds to DataSHIELD the ability to generate artificial data that can be used for various analyses, e.g., for pattern recognition with deep learning. This also demonstrates more generally how DataSHIELD can be flexibly extended with advanced algorithms from languages other than R.
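The authors' implementation wraps deep Boltzmann machines from the Julia package “BoltzmannMachines” for DataSHIELD (R); purely as a language-neutral illustration of the generative-model idea, here is a minimal Python sketch in which a single restricted Boltzmann machine is trained on toy binary variant-like data and then sampled to produce artificial records:

```python
# Illustrative sketch only: a restricted Boltzmann machine stands in for the
# deep Boltzmann machines used in the paper. Trained on invented binary
# variant-like data, it is then sampled to create "artificial patients".
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(42)
# 300 "patients" with 50 binary variant indicators (toy data, no real patients).
X = (rng.random((300, 50)) < 0.2).astype(float)

rbm = BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=200, random_state=0)
rbm.fit(X)

# Gibbs sampling from the fitted model produces synthetic records; in a
# DataSHIELD-like setting, only such artificial data or aggregates would leave a site.
synthetic = (rng.random((100, 50)) < 0.5).astype(float)  # random starting states
for _ in range(500):                                      # burn-in Gibbs steps
    synthetic = rbm.gibbs(synthetic)
synthetic = synthetic.astype(int)
print(synthetic.shape, synthetic.mean())
```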


2014 ◽  
Vol 2014 ◽  
pp. 1-9 ◽  
Author(s):  
Zhihua Wang ◽  
Yongbo Zhang ◽  
Huimin Fu

Reasonable prediction is of significant practical value for the analysis of stochastic and unstable time series with small or limited sample sizes. Motivated by the rolling idea in grey theory and the practical relevance of very short-term forecasting, or one-step-ahead prediction, a novel autoregressive (AR) prediction approach with a rolling mechanism is proposed. In the modeling procedure, a newly developed AR equation, which can be used to model nonstationary time series, is constructed in each prediction step. The data window for the next one-step-ahead forecast then rolls forward by appending the most recent prediction and dropping the oldest value of the previously used sample. This rolling mechanism improves forecasting accuracy, remains applicable in limited and unstable data situations, and requires little computational effort. The general performance, the influence of sample size, nonlinear dynamic mechanisms, the significance of observed trends, and the innovation variance are illustrated and verified with Monte Carlo simulations. The proposed methodology is then applied to several practical data sets, including multiple building settlement sequences and two economic series.
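The exact AR equation of the proposed method is not given in this abstract; the sketch below illustrates only the rolling mechanism it describes (refit on a window, predict one step, append the prediction, drop the oldest value), with an assumed AR order, window length, and toy series:

```python
# Minimal sketch of rolling one-step-ahead AR forecasting. The AR order, window
# length, and example series are assumptions for illustration only.
import numpy as np

def fit_ar(window, p):
    """Least-squares AR(p) fit with intercept; returns the coefficient vector."""
    rows = [window[i:i + p] for i in range(len(window) - p)]
    X = np.column_stack([np.ones(len(rows)), np.array(rows)])
    y = window[p:]
    return np.linalg.lstsq(X, y, rcond=None)[0]

def rolling_forecast(series, p=2, window_len=12, horizon=5):
    window = list(series[-window_len:])
    preds = []
    for _ in range(horizon):
        coefs = fit_ar(np.array(window), p)
        next_val = coefs[0] + coefs[1:] @ np.array(window[-p:])
        preds.append(float(next_val))
        window = window[1:] + [next_val]   # roll: drop oldest value, append prediction
    return preds

series = np.array([10.2, 10.8, 11.1, 11.9, 12.4, 12.8, 13.5, 13.9, 14.6, 15.0, 15.7, 16.1])
print(rolling_forecast(series))
```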


2020 ◽  
Vol 41 (S1) ◽  
pp. s445-s446
Author(s):  
Megan DiGiorgio ◽  
Lori Moore ◽  
Greg Robbins ◽  
Albert Parker ◽  
James Arbogast

Background: Hand hygiene (HH) has long been a focus in the prevention of healthcare-associated infections. The limitations of direct observation, including small sample size (often 20–100 observations per month) and the Hawthorne effect, have cast doubt on the accuracy of reported compliance rates. As a result, hospitals are exploring the use of automated HH monitoring systems (AHHMS) to overcome the limitations of direct observation and to provide a more robust and realistic estimation of HH behaviors. Methods: Data analyzed in this study were captured using a group-based AHHMS installed in a number of North American hospitals. Emergency departments, overflow units, and units with <1 year of data were excluded from the study. The final analysis included data from 58 inpatient units in 10 hospitals. Alcohol-based hand rub and soap dispenses (HH events, HHEs) and room entries and exits (HH opportunities, HHOs) were used to calculate unit-level compliance rates. Statistical analysis was performed on the annual number of dispenses and opportunities using a mixed-effects Poisson regression with random effects for facility, unit, and year, and fixed effects for unit type. Interactions were not included in the model based on interaction plots and significance tests. Poisson assumptions were verified with Pearson residual plots. Results: Over the study period, 222.7 million HHOs and 99 million HHEs were captured in the data set. There was an average of 18.7 beds per unit. The average number of HHOs per unit per day was 3,528, and the average number of HHEs per unit per day was 1,572. The overall median compliance rate was 35.2% (95% CI, 31.5%–39.3%). Unit-to-unit comparisons revealed some significant differences: compliance rates for medical-surgical units were 12.6% higher than for intensive care units (P < .0001). Conclusions: This is the largest HH data set ever reported. The results illustrate the magnitude of HHOs captured by an AHHMS (3,528 per unit per day) compared with what is possible through direct observation. It has been previously suggested that direct observation samples between 0.5% and 1.7% of all HHOs. In healthcare, it is unprecedented for a patient safety activity that occurs as frequently as HH not to be accurately monitored and reported, especially with HH compliance as low as it is in this multiyear, multicenter study. Furthermore, hospitals relying on direct observation alone are likely allocating and deploying valuable improvement resources inadequately, based on the scant information obtained. AHHMSs have the potential to introduce a new era in HH improvement. Funding: GOJO Industries, Inc., provided support for this study. Disclosures: Lori D. Moore and James W. Arbogast report salary from GOJO.
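As an illustration of the modeling idea (HH events as Poisson counts with the log of HH opportunities as an offset), here is a minimal sketch on invented data; it keeps only a fixed effect for unit type and omits the random effects for facility, unit, and year used in the abstract:

```python
# Minimal sketch: compliance is HH events per HH opportunity, modeled as Poisson
# counts with log(opportunities) as an offset. Data and unit counts are invented;
# the mixed-effects structure from the abstract is intentionally left out.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "unit_type": rng.choice(["med_surg", "icu"], size=58),
    "hho": rng.integers(800_000, 1_500_000, size=58),   # annual HH opportunities per unit
})
base_rate = np.where(df["unit_type"] == "med_surg", 0.38, 0.33)
df["hhe"] = rng.poisson(base_rate * df["hho"])           # annual HH events per unit

model = smf.glm("hhe ~ unit_type", data=df,
                family=sm.families.Poisson(),
                offset=np.log(df["hho"])).fit()
print(model.summary())
rate_ratio = np.exp(model.params["unit_type[T.med_surg]"])
print("estimated compliance rate ratio, med-surg vs ICU:", round(rate_ratio, 3))
```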


2019 ◽  
Vol 6 (Supplement_2) ◽  
pp. S111-S111
Author(s):  
Sydney Agnello ◽  
Shandra R Day ◽  
Lynn Wardlow ◽  
Erica E Reed ◽  
Jessica M Smith ◽  
...  

Abstract Background The preferred management of patients with MSSA bacteremia includes definitive therapy with intravenous anti-staphylococcal β-lactam antibiotics. In β-lactam allergic or intolerant patients, daptomycin has been targeted as a viable alternative. The objective of this study was to assess clinical outcomes of daptomycin compared with nafcillin or cefazolin for the treatment of MSSA bacteremia. Methods This was a retrospective cohort study of patients hospitalized from November 1, 2011 to October 31, 2018 at The Ohio State University Wexner Medical Center with MSSA bacteremia. Patients treated with nafcillin, cefazolin or daptomycin were included with 1:1:1 random selection. The primary outcome was a composite of clinical failure, defined as a change in therapy due to persistent/worsening signs and symptoms, bacteremia recurrence or persistence, or inpatient infection-related mortality. Secondary endpoints included 30-day infection-related mortality, duration of bacteremia, 30-day all-cause mortality and adverse events (AEs) necessitating a change in therapy. Results Among patients with MSSA bacteremia, 162 received at least one dose of daptomycin. Of those, 29 received at least 14 days of daptomycin and/or received daptomycin as definitive therapy and thus were included in the analysis. There was no difference in the primary outcome of composite clinical failure comparing daptomycin vs. nafcillin/cefazolin (P = 0.71). In addition, no difference was observed in 30-day infection-related mortality (P = 0.51), duration of MSSA bacteremia (P = 0.9), or 30-day all-cause mortality (P = 0.64). A higher number of AEs necessitating change in therapy were seen in the daptomycin group (P = 0.0002), reflecting initial β-lactam intolerance. Conclusion No difference in clinical failure was identified in patients treated with daptomycin vs. nafcillin/cefazolin suggesting that daptomycin may serve as a non-inferior alternative for treatment of MSSA bacteremia. A higher number of AEs occurred in the daptomycin group indicating β-lactam intolerance as a primary indication for daptomycin therapy. Given the small sample size, subsequent studies are needed to further evaluate the use of daptomycin in the treatment of MSSA bacteremia. Disclosures All authors: No reported disclosures.


Water ◽  
2018 ◽  
Vol 10 (8) ◽  
pp. 1049 ◽  
Author(s):  
Zakia Sultana ◽  
Tobias Sieg ◽  
Patric Kellermann ◽  
Meike Müller ◽  
Heidi Kreibich

Losses due to floods have dramatically increased over the past decades, and the losses of companies, comprising direct and indirect losses, account for a large share of the total economic losses. Thus, there is an urgent need for more quantitative knowledge about flood losses, particularly losses caused by business interruption, in order to mitigate the economic losses of companies. However, business interruption caused by floods is rarely assessed because of a lack of sufficiently detailed data. A survey collecting information on 557 companies affected by the severe flood of June 2013 in Germany was undertaken to explore the processes influencing business interruption. Based on this data set, the study assesses the business interruption of directly affected companies by means of a Random Forests model. Variables that influence the duration and costs of business interruption were identified with the variable importance measures of Random Forests. Additionally, Random Forest-based models were developed and tested for their capacity to estimate business interruption duration and associated costs. The water level was found to be the most important variable influencing the duration of business interruption. Other important variables for the estimation of business interruption duration are the warning time, the perceived danger of flood recurrence, and the inundation duration. In contrast, the amount of business interruption costs is strongly influenced by the size of the company, as measured by the number of employees, the emergency measures undertaken by the company, and the fraction of customers within a 50 km radius. These results provide useful information and methods for companies to mitigate their losses from business interruption. However, the heterogeneity of companies is relatively high, and sector-specific analyses were not possible due to the small sample size. Therefore, further sector-specific analyses on the basis of more flood loss data from companies are recommended.
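As a hedged sketch of the modeling approach described above (a Random Forest with variable-importance ranking for business-interruption duration), the following uses invented placeholder variables rather than the survey data:

```python
# Minimal sketch: Random Forest regression of business-interruption duration with
# variable-importance ranking. Column names and data are placeholders, not the
# actual survey variables or responses.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
n = 300
df = pd.DataFrame({
    "water_level_cm": rng.uniform(0, 200, n),
    "warning_time_h": rng.uniform(0, 72, n),
    "inundation_duration_h": rng.uniform(1, 240, n),
    "num_employees": rng.integers(1, 500, n),
})
# Toy target loosely driven by water level, mirroring the study's finding that
# water level is the most important variable for interruption duration.
df["interruption_days"] = (0.2 * df["water_level_cm"]
                           + 0.05 * df["inundation_duration_h"]
                           + rng.normal(0, 5, n))

X, y = df.drop(columns="interruption_days"), df["interruption_days"]
rf = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=0).fit(X, y)
importances = pd.Series(rf.feature_importances_, index=X.columns).sort_values(ascending=False)
print(importances)
print("OOB R^2:", round(rf.oob_score_, 3))
```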


2013 ◽  
Vol 25 (6) ◽  
pp. 1548-1584 ◽  
Author(s):  
Sascha Klement ◽  
Silke Anders ◽  
Thomas Martinetz

By minimizing the zero-norm of the separating hyperplane, the support feature machine (SFM) finds the smallest subspace (the least number of features) of a data set such that within this subspace, two classes are linearly separable without error. This way, the dimensionality of the data is reduced more efficiently than with support vector–based feature selection, which can be shown both theoretically and empirically. In this letter, we first provide a new formulation of the previously introduced concept of the SFM. With this new formulation, classification of unbalanced and nonseparable data is straightforward, which allows the SFM to be used for feature selection and classification in a large variety of different scenarios. To illustrate how the SFM can be used to identify both the smallest subset of discriminative features and the total number of informative features in biological data sets, we apply repetitive feature selection based on the SFM to a functional magnetic resonance imaging data set. We suggest that these capabilities qualify the SFM as a universal method for feature selection, especially for high-dimensional, small-sample-size data sets that often occur in biological and medical applications.
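The SFM's zero-norm minimization itself is combinatorial and is not reproduced here; as a rough stand-in for the feature-selection idea, the sketch below uses an L1-penalized linear SVM, whose zero weights mark discarded features, on synthetic data with five informative features:

```python
# Illustration only: an L1-penalized linear SVM as a tractable surrogate for
# zero-norm minimization; this is NOT the support feature machine itself.
# Synthetic data: 60 samples, 200 features, 5 of them informative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=60, n_features=200, n_informative=5,
                           n_redundant=0, random_state=0)

clf = LinearSVC(penalty="l1", dual=False, C=0.1, max_iter=10000).fit(X, y)
selected = np.flatnonzero(np.abs(clf.coef_[0]) > 1e-8)   # features with nonzero weight
print("features kept:", selected.size, "of", X.shape[1])
print("training accuracy in the selected subspace:", clf.score(X, y))
```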


2008 ◽  
Vol 08 (04) ◽  
pp. 495-512 ◽  
Author(s):  
PIETRO COLI ◽  
GIAN LUCA MARCIALIS ◽  
FABIO ROLI

The automatic vitality detection of a fingerprint has become an important issue in personal verification systems based on this biometric. It has been shown that fake fingerprints made using materials like gelatine or silicone can deceive commonly used sensors. Recently, the extraction of vitality features from fingerprint images has been proposed to address this problem. Static and dynamic features, among others, have so far been studied separately, so their respective merits are not yet clear, especially because reported results were often obtained with different sensors and small data sets, which could have obscured relative merits due to potential small-sample-size issues. In this paper, we compare several static and dynamic features in experiments on a larger data set, using the same optical sensor for the extraction of both feature sets. We dealt with fingerprint stamps made using liquid silicone rubber. The reported results show the relative merits of static and dynamic features and the performance improvement achievable by using such features together.
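As a generic illustration of the comparison set up in this paper (classifiers trained on static features, dynamic features, and their combination), here is a minimal sketch with placeholder feature vectors rather than real fingerprint features:

```python
# Generic illustration of comparing static, dynamic, and combined feature sets
# for live/fake classification. The feature vectors here are random placeholders,
# not actual fingerprint vitality features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(11)
n = 400                                    # live + fake fingerprint samples (toy)
y = rng.integers(0, 2, n)                  # 1 = live, 0 = fake
static_feats = rng.normal(size=(n, 8)) + 0.5 * y[:, None]
dynamic_feats = rng.normal(size=(n, 6)) + 0.7 * y[:, None]

for name, X in [("static", static_feats),
                ("dynamic", dynamic_feats),
                ("combined", np.hstack([static_feats, dynamic_feats]))]:
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    print(f"{name:9s} mean CV accuracy: {acc:.3f}")
```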


Author(s):  
Lianbo Yu ◽  
Parul Gulati ◽  
Soledad Fernandez ◽  
Michael Pennell ◽  
Lawrence Kirschner ◽  
...  

Gene expression microarray experiments with few replications lead to great variability in estimates of gene variances. Several Bayesian methods have been developed to reduce this variability and to increase power. Thus far, moderated t methods have assumed a constant coefficient of variation (CV) for the gene variances. We provide evidence against this assumption and extend the method by allowing the CV to vary with gene expression. Our CV-varying method, which we refer to as the fully moderated t-statistic, was compared with three other methods (the ordinary t-statistic and two moderated-t predecessors). A simulation study and a familiar spike-in data set were used to assess the performance of the testing methods. The results showed that our CV-varying method had higher power than the other three methods, identified a greater number of true positives in the spike-in data, fit simulated data under varying assumptions very well, and, in a real data set, better identified highly expressed genes consistent with the functional pathways associated with the experiments.
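As a hedged sketch of the underlying moderated t-statistic (per-gene variances shrunk toward a prior before testing), the following uses fixed, assumed prior parameters; the fully moderated method of this abstract additionally lets the prior vary with expression level, which is not reproduced here:

```python
# Minimal sketch of a moderated t-statistic: per-gene variances are shrunk toward
# a prior variance, stabilizing estimates when replicates are few. The prior
# parameters (s0_sq, d0) are fixed, assumed constants; the fully moderated method
# would estimate them and let them vary with gene expression.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_genes, n1, n2 = 1000, 3, 3                      # few replicates per group (toy data)
group1 = rng.normal(0.0, 1.0, size=(n_genes, n1))
group2 = rng.normal(0.0, 1.0, size=(n_genes, n2))
group2[:50] += 2.0                                # 50 truly differential genes

diff = group1.mean(axis=1) - group2.mean(axis=1)
d = n1 + n2 - 2                                   # residual degrees of freedom
s_sq = ((group1.var(axis=1, ddof=1) * (n1 - 1)
         + group2.var(axis=1, ddof=1) * (n2 - 1)) / d)

s0_sq, d0 = 1.0, 4.0                              # assumed prior variance and prior df
s_tilde_sq = (d0 * s0_sq + d * s_sq) / (d0 + d)   # shrunken (posterior) variance

t_mod = diff / np.sqrt(s_tilde_sq * (1 / n1 + 1 / n2))
p_mod = 2 * stats.t.sf(np.abs(t_mod), df=d + d0)  # moderated t has d + d0 df
print("genes with p < 0.01:", int((p_mod < 0.01).sum()))
```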

