Síndrome Antifosfolípido Obstétrico (Obstetric Antiphospholipid Syndrome)

Author(s):  
Gabriel Calderón Valverde ◽  
Mauricio Cordero Alfaro

Antiphospholipid syndrome is a systemic autoimmune disease mediated by diverse groups of antibodies directed against phospholipid-binding proteins; the syndrome is characterized by varied thrombotic and obstetric manifestations. Obstetric management of the disease rests on preventive pharmacological treatment; however, a substantial group of patients presents manifestations refractory to the pharmacological measures employed. The present case concerns a 24-year-old patient with a recent diagnosis of antiphospholipid syndrome who, despite full treatment with aspirin, therapeutic-dose heparin, and hydroxychloroquine, developed severe preeclampsia requiring subsequent termination of her pregnancy by emergency cesarean section.

2020 ◽  
Vol 4 (1) ◽  
pp. 51-63
Author(s):  
Peter Neuhaus ◽  
Chris Jumonville ◽  
Rachel A. Perry ◽  
Roman Edwards ◽  
Jake L. Martin ◽  
...  

To assess the comparative similarity of squat data collected while wearing a robotic exoskeleton, female athletes (n=14) performed two exercise bouts spaced 14 days apart. Data from their exoskeleton workout were compared to a session performed with free weights. Each squat workout entailed a four-set, four-repetition paradigm with 60-second rest periods. Sets within each workout involved progressively heavier loads (22.5, 34, 45.5, 57 kg). The same physiological, perceptual, and exercise performance dependent variables were measured in both workouts. For each dependent variable, Pearson correlation coefficients, t-tests, and Cohen's d effect sizes compared the degree of similarity between values obtained from the exoskeleton and free-weight workouts. Results show that peak VO2, heart rate, and peak force data produced the least variability. In contrast, far more inter-workout variability was noted for peak velocity, peak power, and electromyography (EMG) values. Overall, an insufficient amount of comparative similarity exists for data collected from the two workouts; given this limited similarity, the exoskeleton does not exhibit an acceptable degree of validity. The limited similarity was likely due to the brief familiarization subjects had with the exoskeleton prior to data collection. A familiarization session accustoming subjects to squats done with the exoskeleton before actual data collection may have considerably improved the validity of data obtained from that device.
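A minimal sketch (in Python, not the authors' code) of the per-variable comparison the abstract describes: a Pearson correlation, a paired t-test, and Cohen's d for one dependent variable measured under both conditions. The example arrays are hypothetical placeholders, not the study's measurements.

```python
# Per-variable similarity comparison between two workout conditions.
# The data below are hypothetical, not the study's measurements.
import numpy as np
from scipy import stats

exoskeleton = np.array([1480., 1522., 1456., 1510., 1495., 1533.])  # e.g., peak force (N)
free_weight = np.array([1465., 1540., 1470., 1502., 1488., 1550.])

r, r_p = stats.pearsonr(exoskeleton, free_weight)   # inter-workout agreement
t, t_p = stats.ttest_rel(exoskeleton, free_weight)  # paired-samples t-test

diff = exoskeleton - free_weight
cohens_d = diff.mean() / diff.std(ddof=1)           # effect size for paired data

print(f"r = {r:.3f} (p = {r_p:.3f}), t = {t:.3f} (p = {t_p:.3f}), d = {cohens_d:.3f}")
```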


2020 ◽  
Vol 2 (2) ◽  
pp. 454
Author(s):  
Julkifli Purnama ◽  
Ahmad Juliana

Every investment manager in the capital market needs analysis to make decisions, so that the right targets are hit and profits are generated as expected. This requires a way to predict the decisions to be taken in the future. The research objective is to find the best model for forecasting the composite stock price index (CSPI). The data analysis technique is the ARIMA time-series model, in which historical data form the basis for forecasting. The secondary data are the closing prices of the JCI from July 16, 2018 to July 16, 2019, used to assess how accurately the forecasts match the actual data over that period. The results show that the best ARIMA model is ARIMA(2,1,2), with an R-squared value of 0.014500, a Schwarz criterion of 10.83497, and an Akaike info criterion of 10.77973. The actual value is 6394.609; the dynamic forecast is 6387.551, a difference of -7.05799, and the static forecast is 6400.653, a difference of 6.043909. Investors and the public can use the ARIMA method to forecast the capital market in the next period.
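A brief sketch of this kind of ARIMA(2,1,2) fit using statsmodels, with both static (one-step-ahead) and dynamic forecasts as reported in the abstract. The random-walk series below is a stand-in for the JCI closing prices, so the data and outputs are illustrative assumptions, not the study's results.

```python
# Fit ARIMA(2,1,2) to a daily closing-price series and compare forecast modes.
# The series here is a synthetic placeholder for the JCI data.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
prices = pd.Series(6300 + np.cumsum(rng.normal(0, 20, 250)))  # placeholder series

model = ARIMA(prices, order=(2, 1, 2)).fit()
print(model.aic, model.bic)  # Akaike / Schwarz criteria used for model selection

static = model.predict(start=200, end=249, dynamic=False)   # uses actual lagged values
dynamic = model.predict(start=200, end=249, dynamic=True)   # recursively uses own forecasts
print(static.iloc[-1], dynamic.iloc[-1])
```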


Author(s):  
Reinhold Steinacker

Time series with a significant trend, as is now the case for temperature in the course of climate change, need a careful approach to statistical evaluation. Climatological means and moments are usually taken from past data, which means the statistics no longer fit the actual data. Therefore, we need to determine the long-term trend before comparing actual data with the actual climate. This is not an easy task, because the determination of the signal (a climatic trend) is influenced by the random scatter of observed data. Different filter methods are tested for their ability to produce realistic smoothed trends of observed time series. A new method is proposed, based on a variational principle. It outperforms conventional smoothing methods, especially when periodic time series are processed. This new methodology is used to test how extreme the temperature of 2018 in Vienna actually was. It is shown that the new annual temperature record of 2018 is not too extreme once the positive trend of the last decades is considered. Likewise, the daily mean temperatures of 2018 are not found to be truly extreme under the present climate. The real extreme in the temperature record of Vienna, and of many other places around the world, is the strongly increased positive temperature trend over recent years.
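The abstract does not give the paper's exact variational functional; as an illustration, the sketch below implements a standard penalized least-squares (Whittaker-type) smoother, which minimizes a data-fidelity term plus a roughness penalty and is one common way to realize such a variational principle. The example series is hypothetical.

```python
# Variational (penalized least-squares) smoothing sketch:
# minimize sum (y - z)^2 + lam * sum (second differences of z)^2,
# trading fidelity to the data against roughness of the trend.
import numpy as np

def variational_smooth(y, lam=100.0):
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)   # second-difference operator (n-2 x n)
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

# Hypothetical example: a warming trend plus an annual cycle and noise.
t = np.arange(120)
y = 0.02 * t + np.sin(2 * np.pi * t / 12) + np.random.default_rng(1).normal(0, 0.3, 120)
trend = variational_smooth(y, lam=500.0)  # smoothed long-term signal
```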


1978 ◽  
Vol 21 (2) ◽  
pp. 98-104
Author(s):  
W. L. Honig ◽  
C. R. Carlson
Keyword(s):  

2001 ◽  
Vol 15 (4) ◽  
pp. 11-28 ◽  
Author(s):  
John DiNardo ◽  
Justin L Tobias

We provide a nontechnical review of recent nonparametric methods for estimating density and regression functions. The methods we describe make it possible for a researcher to estimate a regression function or density without having to specify in advance a particular, and hence potentially misspecified, functional form. We compare these methods to more popular parametric alternatives (such as OLS), illustrate their use in several applications, and demonstrate their flexibility with actual data and generated-data experiments. We show that these methods are intuitive and easily implemented, and in the appropriate context may provide an attractive alternative to “simpler” parametric methods.
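As a concrete illustration of the two estimators the review covers, the sketch below computes a kernel density estimate and a Nadaraya-Watson kernel regression on generated data. It is a minimal example under assumed data, not code from the article.

```python
# Kernel density estimation and Nadaraya-Watson kernel regression
# on generated data; no functional form is specified in advance.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(42)
x = rng.uniform(-3, 3, 300)
y = np.sin(x) + rng.normal(0, 0.3, 300)   # true regression function is sin(x)

density = gaussian_kde(x)                 # kernel density estimate of f(x)

def nw_regression(x0, x, y, h=0.4):
    """Nadaraya-Watson estimate of E[y | x = x0] with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x0 - x) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

grid = np.linspace(-3, 3, 7)
print([round(nw_regression(g, x, y), 2) for g in grid])  # tracks sin(x) without a parametric model
```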


1993 ◽  
Vol 30 (03) ◽  
pp. 153-171
Author(s):  
Ludwig H. Seidl ◽  
William F. Clifford ◽  
James P. Cummings

A presentation is attempted linking the historical development and general design considerations for Small Waterplane Area, Twin-Hull (SWATH) hull shapes, the design of a particular SWATH, the Navatek I, and her operational experience. The "carrier vessel" concept on which the Navatek I is based is introduced. Principal dimensions and general arrangements are shown. A parametric study of twin-strut SWATH hull forms for a hull of constant displacement is presented in some detail. Stability and ship motion are discussed, and actual data for the Navatek I are presented. The overall structural analysis is briefly presented, including the method of analysis for the Navatek I. The SWATH captain's operational experience with the Navatek I during her extensive journeys is related at some length.


2017 ◽  
Vol 33 (4) ◽  
pp. 1005-1019 ◽  
Author(s):  
Bronwyn Loong ◽  
Donald B. Rubin

Several statistical agencies have started to use multiply-imputed synthetic microdata to create public-use data in major surveys. The purpose is to protect the confidentiality of respondents' identities and sensitive attributes while allowing standard complete-data analyses of microdata. A key challenge, faced by advocates of synthetic data, is demonstrating that valid statistical inferences can be obtained from such synthetic data for non-confidential questions. Large discrepancies between observed-data and synthetic-data analytic results for such questions may arise because of uncongeniality; that is, differences in the types of inputs available to the imputer, who has access to the actual data, and to the analyst, who has access only to the synthetic data. Here, we discuss a simple, but possibly canonical, example of uncongeniality when using multiple imputation to create synthetic data, which specifically addresses the choices made by the imputer. An initial, unanticipated but not surprising, conclusion is that non-confidential design information used to impute synthetic data should be released with the synthetic data, to allow users of synthetic data to avoid possibly grossly conservative inferences.
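A toy sketch of the multiply-imputed synthetic-data workflow under discussion: the imputer fits a model to the confidential data, releases m fully synthetic copies drawn from it, and the analyst combines per-copy estimates. The data and the simple normal imputation model are assumptions for illustration; the combining rule shown is the standard one for fully synthetic data, not necessarily the formulation used in the article.

```python
# Multiply-imputed fully synthetic data: generate m synthetic copies,
# then combine the analyst's per-copy estimates. Data are hypothetical.
import numpy as np

rng = np.random.default_rng(7)
confidential = rng.normal(50, 10, 1000)        # imputer's actual (confidential) data

# Imputer: draw m fully synthetic datasets from a fitted normal model.
m = 5
mu_hat, sd_hat = confidential.mean(), confidential.std(ddof=1)
synthetic = [rng.normal(mu_hat, sd_hat, confidential.size) for _ in range(m)]

# Analyst: estimate the mean from each synthetic copy, then combine.
q = np.array([s.mean() for s in synthetic])                 # per-copy point estimates
u = np.array([s.var(ddof=1) / s.size for s in synthetic])   # per-copy sampling variances
q_bar, b, u_bar = q.mean(), q.var(ddof=1), u.mean()
T = (1 + 1 / m) * b - u_bar     # combining-rule variance for fully synthetic data
print(q_bar, max(T, 0.0))       # T can be negative for small m; truncate at zero
```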


2021 ◽  
Vol 8 (5) ◽  
pp. 987
Author(s):  
Novi Koesoemaningroem ◽  
Endroyono Endroyono ◽  
Supeno Mardi Susiki Nugroho

Accurate air pollution forecasting is needed to reduce the impact of air pollution. Inaccurate forecasting results in less effective actions taken to anticipate the impact of air pollution, so an approach is needed that can determine the accuracy of the forecast data plot. This research forecasts air pollution based on the PM10, NO2, CO, SO2, and O3 parameters using the DSARIMA method. The study uses 8,760 data points from the Surabaya City Environmental Service. Based on the 168-hour forecasts, the levels of the PM10, NO2, SO2, and O3 parameters tend to decrease. The 168-hour DSARIMA forecasts are close to the actual data, as evidenced by forecast plots whose patterns match the plots of the actual series. Under the PEB approach, the difference between the actual and forecast data is small and the PEB plot follows the actual-data plot, so the model can be considered appropriate. The best accuracy is achieved by the DSARIMA model with the smallest RMSE of 0.59, obtained for the CO parameter, namely ARIMA(0,1,[1,2,3])(0,1,1)^24(0,1,1)^168.
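statsmodels has no native double-seasonal ARIMA, so the sketch below approximates the paper's ARIMA(0,1,[1,2,3])(0,1,1)^24(0,1,1)^168 by taking the weekly (lag-168) seasonal difference by hand and then fitting a SARIMA with the remaining daily (period-24) season. The hourly series is a synthetic placeholder, not the Surabaya data, and the manual-differencing workaround is an assumption, not necessarily the authors' procedure.

```python
# Double-seasonal ARIMA approximation: difference at lag 168 manually,
# then fit SARIMA with MA lags [1, 2, 3] and a period-24 season.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(3)
hours = np.arange(24 * 7 * 8)                      # 8 weeks of hourly data
co = (2 + np.sin(2 * np.pi * hours / 24)           # daily cycle
        + 0.5 * np.sin(2 * np.pi * hours / 168)    # weekly cycle
        + rng.normal(0, 0.2, hours.size))          # noise (placeholder CO series)

y = pd.Series(co).diff(168).dropna().reset_index(drop=True)  # manual weekly differencing

fit = SARIMAX(y, order=(0, 1, [1, 2, 3]), seasonal_order=(0, 1, 1, 24)).fit(disp=False)
forecast = fit.forecast(steps=168)                 # 168-hour-ahead forecast
rmse = np.sqrt(np.mean(fit.resid ** 2))            # in-sample RMSE check
```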

