Principle of bioethical responsibility in the moral values continuum of modern medicine

2006 ◽  
Vol 5 (5) ◽  
pp. 147-150
Author(s):  
V. M. Sokolov

Determining the responsible professional position of modern medicine presupposes an investigation of the foundations of medical activity and is connected with reflections that call for new ethical reference points in practical and theoretical medicine. At present, the progress of modern medicine and new forms of professional medical care are at variance with settled moral principles and values, raising medical and philosophical problems that could not be considered objectively within Hippocratic ethics or the traditional medical norms of ethics and deontology. The need to elaborate a bioethical imperative of responsibility is keenly revealed as a consequence of the breach that has emerged between the level of development of biomedical theory and practice on the one hand and the lag of the moral components of theoretical and practical medical care on the other. The article analyzes the state of the problem of bioethical responsibility with respect to the professional legitimation of medical activity. It also deals with the conditions for forming bioethically responsible students in medical professional schools.

1967 ◽  
Vol 18 (01/02) ◽  
pp. 198-210 ◽  
Author(s):  
Ronald S Reno ◽  
Walter H Seegers

Summary: A two-stage assay procedure was developed for determining the autoprothrombin C titre that can be developed from prothrombin- or autoprothrombin III-containing solutions. The proenzyme is activated by Russell's viper venom, and the autoprothrombin C activity that appears is measured by its ability to shorten the partial thromboplastin time of bovine plasma.
Using the assay, the autoprothrombin C titre was determined in the plasma of several species, as well as the percentage of it remaining in the serum from blood clotted in glass test tubes. Much autoprothrombin III remains in human serum; with sufficient thromboplastin it was completely utilized. Plasma from selected patients with coagulation disorders was assayed, and only Stuart plasma was abnormal. In so-called factor VII, IX, and P.T.A. deficiency, the autoprothrombin C titre and thrombin titre that could be developed were normal. In one case (prethrombin irregularity) practically no thrombin titre developed, but the amount of autoprothrombin C generated was in the normal range.
Dogs were treated with Dicumarol, and the autoprothrombin C titre that could be developed from their plasmas decreased until only traces could be detected. This coincided with a lowering of the thrombin titre that could be developed and a prolongation of the one-stage prothrombin time. While the Dicumarol was acting, the dogs were given an infusion of purified bovine prothrombin, and the levels of autoprothrombin C, thrombin and the one-stage prothrombin time were followed for several hours. The tests became normal immediately after the infusion and then returned to preinfusion levels over a period of 24 hrs.
In other dogs the effect of Dicumarol was reversed by giving vitamin K1 intravenously; the effect of the vitamin was noticed as early as 20 min after administration. In response to vitamin K the most pronounced increase was in the portion of the prothrombin molecule which yields thrombin. The proportion of that protein with respect to the precursor of autoprothrombin C increased during the first hour, then started to decline, and after 3 hrs was equal to the proportion normally found in plasma.


1969 ◽  
Vol 61 (2) ◽  
pp. 219-231 ◽  
Author(s):  
V. H. Asfeldt

ABSTRACT This is an investigation of the practical clinical value of the one mg dexamethasone suppression test of Nugent et al. (1963). The results, evaluated from the decrease in fluorimetrically determined plasma corticosteroids in normal subjects, as well as in cases of exogenous obesity, hirsutism and in Cushing's syndrome, confirm the findings reported in previous studies. Plasma corticosteroid reduction after one mg of dexamethasone in cases of stable diabetes was not significantly different from that observed in control subjects, but in one third of the insulin-treated diabetics only a partial response was observed, indicating a slight hypercorticism in these patients. An insufficient decrease in plasma corticosteroids was observed in certain other conditions (anorexia nervosa, pituitary adenoma, patients receiving contraceptive or anticonvulsive treatment) with no hypercorticism. The physiological significance of these findings is discussed. It is concluded that the test, together with a determination of the basal urinary 17-ketogenic steroid excretion, is suitable as the first diagnostic test in patients in whom Cushing's syndrome is suspected. In cases of insufficient suppression of plasma corticosteroids, further studies, including the suppression test of Liddle (1960), must be carried out.


1989 ◽  
Vol 21 (8-9) ◽  
pp. 1057-1064 ◽  
Author(s):  
Vijay Joshi ◽  
Prasad Modak

Waste load allocation for rivers has been a topic of growing interest. Dynamic programming based algorithms are particularly attractive in this context and are widely reported in the literature. Codes developed for dynamic programming are, however, complex, require substantial computer resources and, importantly, do not allow user interaction. Further, there is often resistance to using mathematical programming based algorithms in practical applications, so a gap has always existed between theory and practice in systems analysis for water quality management. This paper presents various heuristic algorithms to bridge this gap, with supporting comparisons against dynamic programming based algorithms. These heuristics make good use of the insight gained into the system's behaviour through experience, a process akin to the one adopted by field personnel, and can therefore be readily understood by a user familiar with the system. They also allow user preferences in decision making via on-line interaction. Experience has shown that these heuristics are indeed well founded and compare very favourably with the sophisticated dynamic programming algorithms. Two examples are included which demonstrate the success of the heuristic algorithms.
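The contrast the abstract draws can be sketched with a toy allocation problem (illustrative only; the treatment levels, costs, and load cap below are invented, not the paper's data): a dynamic-programming pass finds the minimum-cost combination of treatment levels meeting a residual-load cap, alongside a simple "cheapest upgrade first" heuristic of the kind field personnel might apply.

```python
# Toy waste-load allocation: each discharger picks one treatment level.
# options[i] lists (residual load units, cost) per level for discharger i.
options = [
    [(8, 0), (5, 3), (2, 7)],   # discharger 1
    [(6, 0), (3, 2), (1, 6)],   # discharger 2
    [(9, 0), (4, 4), (2, 9)],   # discharger 3
]
CAP = 10  # allowable total residual load at the control point

def dp_allocate(options, cap):
    """Exact DP: state = total residual load so far -> (min cost, levels)."""
    states = {0: (0, [])}
    for opts in options:
        nxt = {}
        for load, (cost, picks) in states.items():
            for i, (res, c) in enumerate(opts):
                nl, nc = load + res, cost + c
                if nl > cap:
                    continue
                if nl not in nxt or nc < nxt[nl][0]:
                    nxt[nl] = (nc, picks + [i])
        states = nxt
    return min(states.values()) if states else None

def greedy_allocate(options, cap):
    """Heuristic: repeatedly upgrade the discharger with the cheapest
    cost per unit of load removed until the cap is satisfied."""
    levels = [0] * len(options)
    total_load = lambda: sum(options[i][levels[i]][0] for i in range(len(options)))
    total_cost = lambda: sum(options[i][levels[i]][1] for i in range(len(options)))
    while total_load() > cap:
        best = None
        for i, opts in enumerate(options):
            if levels[i] + 1 < len(opts):
                dres = opts[levels[i]][0] - opts[levels[i] + 1][0]
                dcost = opts[levels[i] + 1][1] - opts[levels[i]][1]
                if best is None or dcost / dres < best[0]:
                    best = (dcost / dres, i)
        if best is None:
            return None  # infeasible
        levels[best[1]] += 1
    return total_cost(), levels

dp_cost, dp_levels = dp_allocate(options, CAP)
h_cost, h_levels = greedy_allocate(options, CAP)
```

On this small instance the heuristic happens to match the DP optimum, which mirrors the paper's finding that well-founded heuristics compare favourably while remaining transparent to the user.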


2020 ◽  
Vol 17 ◽  
Author(s):  
Houli Li ◽  
Di Zhang ◽  
Xiaoliang Cheng ◽  
Qiaowei Zheng ◽  
Kai Cheng ◽  
...  

Background: The trough concentration (Cmin) of imatinib (IM) is closely related to the treatment outcomes and adverse reactions of patients with gastrointestinal stromal tumors (GIST). However, the drug's plasma level shows great inter- and intra-individual variability, and therapeutic drug monitoring (TDM) is highly recommended. Objective: To develop a novel, simple, and economical two-dimensional liquid chromatography method with ultraviolet detection (2D-LC-UV) for the simultaneous determination of IM and its major active metabolite, N-demethyl imatinib (NDIM), in human plasma, and then apply the method to TDM of the drug. Method: Samples were processed by simple protein precipitation. The two target analytes were separated on the first-dimension column, captured on the middle column, and then transferred to the second-dimension column for further analysis. Detection was performed at 264 nm, the column temperature was maintained at 40 °C, and the injection volume was 500 μL. In total, 32 plasma samples were obtained from patients with GIST who were receiving IM. Conclusion: The novel 2D-LC-UV method is simple, stable, highly automated and independent of specialized technicians, which greatly increases the real-time capability of routine TDM for IM in hospital.


2015 ◽  
Vol 72 (2) ◽  
pp. 123-131 ◽  
Author(s):  
Marko Igic ◽  
Nebojsa Krunic ◽  
Ljiljana Aleksov ◽  
Milena Kostic ◽  
Aleksandra Igic ◽  
...  

Background/Aim. The vertical dimension of occlusion is a very important parameter for proper reconstruction of the relationship between the jaws. The literature describes many methods for determining it, from the simple and easily applicable clinically to the quite complicated, involving one or more devices. The aim of this study was to examine the possibility of determining the vertical dimension of occlusion using the vowels "O" and "E", with the obtained values checked by applying cognitive functions. Methods. This investigation was performed on two groups of patients. The first group consisted of 50 females and 50 males, aged 18 to 30 years. In this group the distance between the reference points (on the tip of the nose and on the chin) was measured with the mandible at the vertical dimension of occlusion, at the vertical dimension at rest, and during pronunciation of the words "OLO" and "ELE". The correctness of the value obtained for the word "OLO" was also checked by the phonetic method with the application of cognitive exercises, in which the patients counted down from 89 to 80. The difference in the average values obtained when determining the vertical dimension of occlusion and during "OLO" and "ELE" in the first group was used as the reference for determining the vertical dimension of occlusion in the second group of patients. The second group comprised 31 edentulous persons (14 females and 17 males), aged from 54 to 85 years, for whom complete dentures had been made. Results. The average value obtained for the entire sample was 2.16 mm for the vertical dimension at rest, 5.51 mm for the word "OLO" and 7.47 mm for the word "ELE". There was no statistically significant difference between the genders in the values of the vertical dimension at rest, "ELE" and "OLO". There was a statistically significant difference between the values for the vertical dimension at rest, "OLO" and "ELE" for both genders, and a statistically significant correlation between the values for the vertical dimension at rest, "OLO" and "ELE" for both groups of subjects. Conclusion. Determining the vertical dimension of occlusion requires subtracting 5.5 mm from the position of the mandible during pronunciation of the word "OLO", or 7.5 mm during pronunciation of the word "ELE".
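The conclusion's rule of thumb reduces to simple arithmetic; a minimal sketch of it (the offsets come from the abstract, while the example measurement of 70.0 mm is invented for illustration):

```python
# Offsets reported in the abstract (mm), subtracted from the nose-chin
# distance measured during pronunciation of the test word.
OLO_OFFSET = 5.5
ELE_OFFSET = 7.5

def vdo_from_olo(olo_distance_mm: float) -> float:
    """Vertical dimension of occlusion estimated from the 'OLO' measurement."""
    return olo_distance_mm - OLO_OFFSET

def vdo_from_ele(ele_distance_mm: float) -> float:
    """Vertical dimension of occlusion estimated from the 'ELE' measurement."""
    return ele_distance_mm - ELE_OFFSET
```

For a hypothetical patient measuring 70.0 mm during "OLO", the estimate would be 64.5 mm; a consistent "ELE" reading should land within measurement error of the same value.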


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4842
Author(s):  
Waldemar Kamiński

Nowadays, hydrostatic levelling is a widely used method for determining the vertical displacements of objects such as bridges, viaducts, wharfs, tunnels, tall buildings, historical buildings, special engineering objects (e.g., a synchrotron), and sports and entertainment halls. The sensors implemented in a hydrostatic levelling system (HLS) comprise the reference sensor (RS) and the sensors located at the controlled points (CPs). The reference sensor is placed at a point that, in theoretical terms, is not subject to vertical displacements, and the displacements of the controlled points are determined relative to its height. The hydrostatic levelling principle follows from Bernoulli's law. When applying Bernoulli's principle to hydrostatic levelling, the following components have to be taken into account: atmospheric pressure, the force of gravity, and the density of the liquid used in the sensors placed at the CPs. These parameters are determined with certain mean errors that influence the accuracy assessment of the vertical displacements. The literature contains works describing individual accuracy analyses of these components. In this paper, the author proposes a concept for the comprehensive determination of the mean error of the vertical displacement of each CP, calculated from the mean error values of the components of a specific HLS. Formulas for the covariance matrix are derived, enabling an accuracy assessment of the calculation results. The author also addresses the modelling of the obtained vertical displacement values. Relations enabling statistical tests of the resulting model parameters are implemented; these tests make it possible to verify the correctness of the theoretical models of the examined object treated as a rigid body. Practical analyses were conducted for two simulated variants of sensor connections in the HLS. Variant I is a serial connection of the sensors; variant II connects each CP directly with the reference sensor. The calculation results show that more precise estimates of the vertical displacements can be obtained using variant II.
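The accuracy contrast between the two variants can be illustrated with elementary propagation of independent errors (a simplified sketch, not the paper's covariance formulas; the 0.3 mm sensor error is an invented figure): a CP read directly against the RS (variant II) accumulates only two error terms, while a CP at the end of a serial chain (variant I) accumulates one term per link.

```python
import math

def sigma_direct(sigma_rs: float, sigma_cp: float) -> float:
    """Variant II: CP height read directly against the reference sensor."""
    return math.sqrt(sigma_rs**2 + sigma_cp**2)

def sigma_serial(link_sigmas: list) -> float:
    """Variant I: CP at the end of a serial chain; link errors add in quadrature."""
    return math.sqrt(sum(s**2 for s in link_sigmas))

# Hypothetical mean errors (mm): one RS plus four CPs, each 0.3 mm.
errors = [0.3] * 5

# Last CP, variant II: only the RS and that CP contribute.
direct = sigma_direct(errors[0], errors[-1])

# Last CP, variant I: four chained sensor-to-sensor links contribute.
serial = sigma_serial([sigma_direct(errors[i], errors[i + 1]) for i in range(4)])
```

With equal sensor errors the serial chain of four links doubles the displacement error of the farthest CP relative to the direct connection, which is consistent with the paper's finding that variant II yields the more precise estimates.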


Electronics ◽  
2021 ◽  
Vol 10 (9) ◽  
pp. 1117
Author(s):  
Bin Li ◽  
Zhikang Jiang ◽  
Jie Chen

Computing the sparse fast Fourier transform (sFFT) has long been a critical topic because of its high efficiency and wide practicability. To date, more than twenty different sFFT algorithms have been proposed, each computing the discrete Fourier transform (DFT) by its own method. In order to use them properly, a pressing concern is how to analyze and evaluate the performance of these algorithms in theory and in practice. This paper mainly discusses the technology and performance of sFFT algorithms that use the aliasing filter. The first part introduces three frameworks: the one-shot framework based on a compressed sensing (CS) solver, the peeling framework based on a bipartite graph, and the iterative framework based on binary tree search. We then draw conclusions about the theoretical performance of the six corresponding algorithms: sFFT-DT1.0, sFFT-DT2.0, sFFT-DT3.0, FFAST, R-FFAST, and DSFFT. In the second part, we run two categories of experiments, computing signals of different SNRs, lengths, and sparsities on a standard testing platform, and record the run time, the percentage of the signal sampled, and the L0, L1, and L2 errors in both the exactly sparse case and the general sparse case. The results of these performance analyses are our guide to optimizing these algorithms and using them selectively.
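The aliasing filter at the heart of these algorithms rests on a simple DFT identity: subsampling a length-n signal by a factor p folds its n frequency bins into B = n/p buckets, each bucket holding 1/p times the sum of all bins congruent to it modulo B. A minimal pure-Python check of that identity (toy two-tone signal; this illustrates the filter itself, not any of the six listed algorithms):

```python
import cmath

def dft(x):
    """Naive O(n^2) DFT, adequate for a length-16 demonstration."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

n, p = 16, 4      # signal length and subsampling factor
B = n // p        # number of aliased buckets

# Exactly sparse toy signal: tones at frequency 3 (amplitude 1) and 9 (amplitude 2).
x = [cmath.exp(2j * cmath.pi * 3 * t / n) + 2 * cmath.exp(2j * cmath.pi * 9 * t / n)
     for t in range(n)]

X = dft(x)        # full spectrum: X[3] = 16, X[9] = 32, other bins ~ 0
Y = dft(x[::p])   # spectrum of the subsampled signal: B buckets

# Bucket j collects 1/p of every full-size bin k with k % B == j,
# so frequency 9 folds into bucket 9 % 4 == 1 and frequency 3 into bucket 3.
for j in range(B):
    folded = sum(X[k] for k in range(n) if k % B == j) / p
    assert abs(Y[j] - folded) < 1e-9
```

The peeling and iterative frameworks exploit exactly this folding: buckets with a single surviving tone can be solved directly, subtracted out, and the process repeated with different subsampling factors.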

