SEPARATION OF DIFFRACTION SPECTRA BY PRINCIPAL COMPONENT METHOD BY THE EXAMPLE OF ARIFON DRUG

Author(s):  
R. V. Chekhova ◽  
V. M. Pyshniy ◽  
L. A. Pyankova ◽  
V. A. Elokhin

The article presents the results of experimental studies on the statistical processing of diffraction spectra of solid drugs for the purpose of their separation and identification. Diffractograms of original and falsified samples of the drug Arifon were used; they were recorded on a Difray 401 desktop diffractometer produced by Scientific Instruments Inc. (Saint Petersburg, Russia). The research was conducted in the Scilab environment, which is distributed under a free license. The captured diffraction spectra were processed with a smoothing procedure that eliminated the influence of the random component in the original data. Analysis of the moving-average smoothing results showed that a window of 41 points is the most suitable. The results of statistical processing of the diffractograms by principal component analysis (PCA) are presented in graphical and numerical form and show good convergence and the effectiveness of the method for separating diffraction spectra. These studies make it possible to develop a technique for identifying solid drugs by X-ray diffraction.
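
As a rough illustration of the processing chain described above (the original work was done in Scilab; this sketch uses Python with NumPy and scikit-learn, and the diffractogram array is a random placeholder rather than data from the article), moving-average smoothing with a 41-point window can be followed by PCA on the smoothed spectra:

```python
import numpy as np
from sklearn.decomposition import PCA

def moving_average(spectrum, window=41):
    """Smooth a 1-D diffractogram with a centered moving-average window."""
    kernel = np.ones(window) / window
    return np.convolve(spectrum, kernel, mode="same")

rng = np.random.default_rng(0)
diffractograms = rng.random((20, 2000))            # placeholder intensities, not data from the article
smoothed = np.array([moving_average(s) for s in diffractograms])

pca = PCA(n_components=2)
scores = pca.fit_transform(smoothed)               # PC scores are expected to separate original vs. falsified samples
print(pca.explained_variance_ratio_)
```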

2021 ◽  
Vol 12 (1) ◽  
pp. 5
Author(s):  
Su Zhou ◽  
Jie Jin ◽  
Yuehua Wei

The purpose of this research is to develop a representative driving cycle for fuel cell logistics vehicles running on the roads of Guangdong Province for subsequent energy management research and control system optimization. First, we collected and preliminarily screened 42 days of driving data from a logistics vehicle through a remote monitoring platform and determined the vehicle characteristic signal vector for analysis. Second, principal component analysis is used to reduce the dimensionality of these characteristic parameters, removing linear correlation between them and increasing the comprehensiveness of the subsequent clustering. Next, the dimensionality-reduced data are fed to a clustering algorithm: the K-means method groups the segmented road sections into highway, urban road, national highway, and other categories. Finally, several segments are chosen according to the occurrence probability of the four types of road conditions, minimizing the deviation from the original data. By joining the segments and applying a moving-average filtering window, a typical driving cycle for this fuel cell logistics vehicle on a fixed route is constructed. Statistical methods are then used to validate the driving cycle: the effectiveness analysis shows that the constructed cycle has a high degree of overlap with the original data. This positive result provides a solid foundation for our follow-up research, and the method can also be applied to develop other urban driving cycles for fuel cell logistics vehicles.
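
A minimal sketch of the dimensionality-reduction and clustering steps (Python/scikit-learn; the feature matrix and the number of principal components are placeholders, not values from the study):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
features = rng.random((300, 8))                 # one row per road segment: mean speed, idle ratio, etc. (placeholder)

X = StandardScaler().fit_transform(features)    # standardize the characteristic parameters
X_pca = PCA(n_components=3).fit_transform(X)    # decorrelate and reduce dimensionality
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_pca)
# the four clusters would correspond to highway, urban road, national highway and other sections
```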


2021 ◽  
Vol 11 (3) ◽  
pp. 359
Author(s):  
Katharina Hogrefe ◽  
Georg Goldenberg ◽  
Ralf Glindemann ◽  
Madleen Klonowski ◽  
Wolfram Ziegler

Assessment of semantic processing capacities often relies on verbal tasks which are, however, sensitive to impairments at several language processing levels. Especially for persons with aphasia there is a strong need for a tool that measures semantic processing skills independent of verbal abilities. Furthermore, in order to assess a patient’s potential for using alternative means of communication in cases of severe aphasia, semantic processing should be assessed in different nonverbal conditions. The Nonverbal Semantics Test (NVST) is a tool that captures semantic processing capacities through three tasks—Semantic Sorting, Drawing, and Pantomime. The main aim of the current study was to investigate the relationship between the NVST and measures of standard neurolinguistic assessment. Fifty-one persons with aphasia caused by left hemisphere brain damage were administered the NVST as well as the Aachen Aphasia Test (AAT). A principal component analysis (PCA) was conducted across all AAT and NVST subtests. The analysis resulted in a two-factor model that captured 69% of the variance of the original data, with all linguistic tasks loading high on one factor and the NVST subtests loading high on the other. These findings suggest that nonverbal tasks assessing semantic processing capacities should be administered alongside standard neurolinguistic aphasia tests.
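
As a schematic illustration only (Python/scikit-learn; the score matrix is a random placeholder, and the study's actual factor-analytic pipeline over AAT and NVST subtests is not reproduced here), a two-component PCA and its explained variance can be inspected as follows:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
subtest_scores = rng.random((51, 9))             # 51 patients x subtest scores (placeholder, not study data)

pca = PCA(n_components=2).fit(subtest_scores)
print(pca.explained_variance_ratio_.sum())       # share of variance captured by a two-factor model
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)  # loadings show which subtests go with which factor
```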


2021 ◽  
pp. 000370282098784
Author(s):  
James Renwick Beattie ◽  
Francis Esmonde-White

Spectroscopy rapidly captures a large amount of data that is not directly interpretable. Principal Components Analysis (PCA) is widely used to simplify complex spectral datasets into comprehensible information by identifying recurring patterns in the data with minimal loss of information. The linear algebra underpinning PCA is not well understood by many of the applied analytical scientists and spectroscopists who use it, and the meaning of the features identified through PCA is often unclear. This manuscript traces the journey of the spectra themselves through the operations behind PCA, with each step illustrated by simulated spectra. PCA relies solely on the information within the spectra; consequently, the mathematical model depends on the nature of the data itself. The direct links between model and spectra allow a concrete spectroscopic explanation of PCA, such as the scores representing 'concentration' or 'weights'. The principal components (loadings) are by definition hidden, repeated, and uncorrelated spectral shapes that linearly combine to generate the observed spectra. They can be visualized as subtraction spectra between extreme differences within the dataset. Each PC is shown to be a successive refinement of the estimated spectra, improving the fit between the PC-reconstructed data and the original data. Understanding the data-led development of a PCA model shows how to interpret the application-specific chemical meaning of the PCA loadings and how to analyze the scores. A critical benefit of PCA is its simplicity and the succinctness of its description of a dataset, making it powerful and flexible.
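
The idea that each successive PC refines the reconstructed spectra can be sketched as below (Python/scikit-learn; the spectra are simulated placeholders, not the manuscript's examples):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
spectra = rng.random((50, 500))                  # simulated spectra: rows are samples, columns are wavelengths

pca = PCA().fit(spectra)
scores = pca.transform(spectra)
for k in (1, 2, 3, 5):
    reconstructed = pca.mean_ + scores[:, :k] @ pca.components_[:k]
    residual = np.linalg.norm(spectra - reconstructed)
    print(k, residual)                           # the residual shrinks as each added PC refines the estimate
```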


Author(s):  
Lyudmila V. Trubitsyna ◽  
Elena Y. Johnson

The aim of this paper is to identify a set of subjective factors that affect a family's decision to have a second or subsequent child. For the broadest possible description of the phenomenon under study, semi-structured interviews were used. To assess the strength of the subjective factors identified from the qualitative interviews, a two-part questionnaire was developed and tested. The first part is intended for parents of any children; the second only for parents of children with special needs. The questionnaire was completed by 122 women with at least one child. Of the respondents, 75 had a child with special needs and completed both parts of the questionnaire; 92 respondents lived in Russia, and the rest lived in English-speaking countries. Based on the responses to each part of the questionnaire, an exploratory factor analysis was performed using the principal component method with Kaiser normalization. The first part of the questionnaire yielded 11 significant scale factors, and the second part yielded 3. Since the survey was conducted with both Russian-speaking and English-speaking respondents, the questionnaire is available in two language versions. The questionnaire will be useful for planning correctional and rehabilitation work with parents of children with disabilities, and the authors are ready to provide it to interested specialists free of charge.
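
A simplified sketch of principal-component factor extraction with the Kaiser eigenvalue-greater-than-one retention rule (Python/NumPy; the response matrix is a placeholder, and the rotation with Kaiser normalization used in the study is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(4)
responses = rng.random((122, 40))                # placeholder questionnaire responses (122 respondents x items)

corr = np.corrcoef(responses, rowvar=False)      # item correlation matrix
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

n_factors = int(np.sum(eigvals > 1.0))           # Kaiser criterion: keep components with eigenvalue > 1
loadings = eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])
print(n_factors)
```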


Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3730 ◽  
Author(s):  
Jhonatan Camacho ◽  
Andrés Quintero ◽  
Magda Ruiz ◽  
Rodolfo Villamizar ◽  
Luis Mujica

The implementation of damage-detection methods for continuously assessing structural integrity places demands on storage, memory capacity, computational complexity, and processing time. In this sense, embedded hardware platforms are a promising technology for developing integrated solutions in Structural Health Monitoring. In this paper, the design, testing, and specifications of a standalone inspection prototype are presented, which takes advantage of the piezo-diagnostics principle, statistical processing via Principal Component Analysis (PCA), and embedded systems. The equipment is a piezoelectric active system capable of detecting defects in structures using a PCA-based algorithm embedded in the Odroid-U3 ARM Linux platform. It operates by applying guided waves at one side of the structure with piezoelectric devices in actuation mode and recording the wave response at another side of the structure with the same kind of devices in sensor mode. Based on the nominal (no-damage) response of the guided wave, represented by a PCA statistical model, the system detects damage between the actuated/sensed points through the squared prediction error (Q-statistic index). The system performance was evaluated on a pipe test bench where two kinds of damage were studied: a mass added to the pipe surface and leaks made in the pipe wall with a drill. The experiments were conducted on two lab structures: (i) a one-meter carbon-steel pipe section and (ii) a pipe loop structure. The wave response was recorded between the instrumented points for two conditions: (i) the pipe in nominal condition, with several repetitions used to build the nominal statistical model, and (ii) the pipe with damage (added mass or leak). Damage conditions were graphically recognized through the Q-statistic chart. Thus, the feasibility of implementing an automated real-time diagnostic system with minimal processing resources and hardware flexibility is demonstrated.
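
A minimal sketch of the PCA/Q-statistic detection logic (Python/scikit-learn; the signal shapes, component count, and threshold rule are assumptions for illustration, not the prototype's actual settings):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
baseline = rng.random((100, 256))                # guided-wave responses of the undamaged pipe (placeholder)
new_signal = rng.random((1, 256))                # response recorded during an inspection (placeholder)

pca = PCA(n_components=5).fit(baseline)          # nominal (no-damage) statistical model
residual = new_signal - pca.inverse_transform(pca.transform(new_signal))
Q = float(np.sum(residual ** 2))                 # squared prediction error (Q-statistic)

base_res = baseline - pca.inverse_transform(pca.transform(baseline))
threshold = np.percentile(np.sum(base_res ** 2, axis=1), 99)   # simple empirical control limit
print("damage suspected" if Q > threshold else "nominal")
```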


2020 ◽  
Vol 19 (4) ◽  
pp. 243-253
Author(s):  
Ivan S. Laktionov ◽  
Oleksandr V. Vovna ◽  
Maryna M. Kabanets ◽  
Iryna A. Getman ◽  
Oksana V. Zolotarova

The purpose of the article is to improve computerized monitoring and control of greenhouse crop cultivation by substantiating methods for improving the accuracy of computer-integrated devices that measure the acidity of irrigation solutions. The article solves the topical scientific and applied problem of determining the conversion characteristics of computerized acidity monitoring systems with integral and differential assessment of their metrological parameters. Theoretical and experimental results were obtained using methods of structural-algorithmic synthesis of information-measuring systems, mathematical planning of experiments, regression analysis of experimental data, and the uncertainty concept. The computerized acidity meter was implemented on the basis of an ion-selective pH electrode, the Arduino microprocessor platform, and the ThingSpeak cloud computing service. The relative total boundary uncertainty of the acidity measurement does not exceed ±1.1 %. When implementing the measuring device, methods were introduced for compensating the random component of uncertainty, based on a median filtering algorithm, and the additional uncertainty caused by the destabilizing effect of temperature. Promising areas of priority research to improve the efficiency of the developed computerized acidity meter are justified. The developed device can be used in the complex automation of greenhouse cultivation processes and when planning agricultural operations in greenhouse conditions.
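
The random-component compensation by median filtering can be sketched as below (Python with NumPy/SciPy; the pH trace and the window length are placeholders, not the article's parameters):

```python
import numpy as np
from scipy.signal import medfilt

rng = np.random.default_rng(6)
raw_ph = 6.0 + 0.05 * rng.standard_normal(200)   # simulated pH readings with a random noise component
raw_ph[50] = 9.0                                 # an outlier spike that the median filter should reject

filtered = medfilt(raw_ph, kernel_size=5)        # sliding-window median suppresses spikes and random noise
print(raw_ph[50], filtered[50])
```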


Author(s):  
Zhasur Kulmukhamedov ◽  
Ravshan Khikmatov ◽  
Alisher Saidumarov ◽  
Yulduz Kulmukhamedova

The manuscript proposes analytical methods for calculating fuel economy and traction-speed properties when modeling the movement of cargo-carrying vehicles on real routes, based on theoretical and experimental studies in a hot, dry climate. These methods make it possible to objectively determine the efficiency of cargo-carrying vehicles in terms of traction-speed and fuel-economy indicators. Using statistical processing of the experimental and theoretical research data, the authors calculate the χ² (chi-square) criterion, which is used to evaluate the agreement between the mathematical model and the experimental data. As an example, the manuscript provides an assessment of fuel economy and traction-speed properties. The results are presented as graphs so that the effect of ambient temperature on the fuel consumption and average speed of a road train can be easily evaluated. The authors' methodology allows the efficiency of cargo-carrying vehicles in a hot, dry climate to be determined.
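
Assuming the "coefficient X2" refers to the χ² goodness-of-fit statistic, a sketch of the adequacy check might look like this (Python/NumPy; the fuel-consumption numbers are invented for illustration, not taken from the article):

```python
import numpy as np

measured = np.array([32.1, 30.4, 35.2, 33.8, 31.0])   # l/100 km on route sections (hypothetical values)
modeled  = np.array([31.5, 30.9, 34.6, 34.2, 31.4])   # values predicted by the mathematical model (hypothetical)

chi2 = np.sum((measured - modeled) ** 2 / modeled)    # Pearson-style chi-square statistic
print(chi2)   # compared with a chi-square critical value to judge the adequacy of the model
```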


2021 ◽  
Author(s):  
Francesco Carlomagno ◽  
Carlotta Pozza ◽  
Marta Tenuta ◽  
Riccardo Pofi ◽  
Luigi Tarani ◽  
...  

Context: Experimental studies on Klinefelter syndrome (KS) reported increased intratesticular testosterone (T) levels coexisting with reduced circulating levels. Abnormalities in testicular microcirculation have been claimed; however, no studies have investigated in vivo testicular blood flow dynamics in humans with KS. Objective: To analyze the testicular microcirculation in KS by contrast-enhanced ultrasonography (CEUS) and correlate vascular parameters with endocrine function. Design and Setting: Prospective study; university setting. Patients: 51 testicular scans: 17 testes from 10 T-naïve subjects with KS and 34 testes from age-matched eugonadal men (CNT) who underwent CEUS for incidental nonpalpable testicular lesions. Main Outcomes: CEUS kinetic parameters. Results: CEUS revealed slower testicular perfusion kinetics in subjects with KS than in age-matched CNT. Specifically, the wash-in time (Tin, p = 0.008), mean transit time (MTT, p = 0.008), time to peak (TTP, p < 0.001), and washout time (Tout 50%, p = 0.008) were all prolonged. Faster testicular blood flow was associated with higher total T levels. Principal component analysis and multiple linear regression analyses confirmed the findings and supported a role for reduced venous blood flow as an independent predictor of total T levels. Conclusions: Testicular venous blood flow is altered in KS and independently predicts peripheral T release.
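
As an illustration only of a multiple linear regression like the one mentioned above (Python/scikit-learn; the kinetic parameters and hormone levels are random placeholders, not the study's data):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
kinetics = rng.random((51, 4))                   # placeholder Tin, MTT, TTP, Tout 50% for each scan
total_T = rng.random(51) * 20.0                  # placeholder total testosterone levels

model = LinearRegression().fit(kinetics, total_T)
print(model.coef_)                               # coefficients indicate which kinetic parameter predicts total T
```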


2021 ◽  
Vol 64 (10) ◽  
pp. 728-735
Author(s):  
I. A. Rybenko ◽  
O. I. Nokhrina ◽  
I. D. Rozhikhina ◽  
M. A. Golodova ◽  
I. E. Khodosov

The article presents the results of theoretical and experimental studies of the solid-phase reduction of iron from an iron-containing concentrate, obtained by hydrometallurgical dressing of ferromanganese and polymetallic manganese-containing ores, with coals of grades D (long-flame) and 2B (brown). Thermodynamic modeling with the TERRA software package was used to study the reducing properties of the coals by calculating equilibrium compositions in the temperature range 373–1873 K. The authors obtained the temperature dependences of the composition and volume of the gas phase formed by the release of volatile components during heating of the coal grades under consideration. From the thermodynamic modeling, the optimal temperatures and reducing-agent consumption that ensure complete iron reduction from the iron-containing concentrate were determined. The experimental results were obtained by modern research methods using laboratory and analytical equipment, together with methods of statistical processing. Analysis of the coals with a Setaram LabSys Evo thermal analyzer showed that the thermal decomposition of the studied coal grades proceeds according to general laws, with long-flame coal decomposing less intensively than brown coal. Experimental study of the thermal decomposition of the reducing agents showed that the volumes of the gas phases formed when the coals are heated to 1173 K in an argon atmosphere practically coincide with the calculated values. Based on the thermodynamic modeling and the experimental study, the optimal consumption of grade D and grade 2B coals at a temperature of 1473 K was determined; the best reducing agent with the minimum specific consumption is long-flame grade D coal. In determining the optimal amount of reducing agent in the charge mixtures during the study of metallization processes, it was found that with an excess of reducing agent it is possible to achieve almost complete extraction (98–99 %) of iron from the concentrate.
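
A small sketch of the calculated-versus-measured comparison described above (Python/NumPy; the temperatures and gas volumes are hypothetical, not results from the article or from TERRA):

```python
import numpy as np

temperature_K = np.array([573, 773, 973, 1173])
v_calc = np.array([0.08, 0.15, 0.22, 0.27])      # calculated gas volume per kg of coal (hypothetical)
v_meas = np.array([0.07, 0.16, 0.21, 0.27])      # measured gas volume per kg of coal (hypothetical)

rel_dev = np.abs(v_meas - v_calc) / v_calc * 100
print(dict(zip(temperature_K.tolist(), rel_dev.round(1))))   # small deviations mean model and experiment agree
```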

