Assessment of the Development Level of the Moral Competences of Police Decision-Makers from the Perspective of the Assumptions of Police Partnership in the Sphere of Human Security

Author(s):  
Witold Mroziewski

The study examines the assumptions underlying the cultural transformation of the Polish Police towards the further socialization of its activities. Its point of reference is a partnership police culture, preferred in Western societies and based mainly on moral rules; this partnership culture is also described in relation to the classic police culture. On the basis of the empirical research presented, the gap in the moral competences of the assessed decision-makers is described in several variants: the difference between the identified level of development of moral competences and the level postulated by the assumptions of the police partnership culture, both across the entire research sample and with the sample divided into decision-makers from local units and from higher-level command units. The results reveal discrepancies between the identified and the postulated state that are socially unacceptable in the sphere of human security, given the requirements of a police partnership culture, as well as a higher level of development of moral competences among decision-makers from local Police units than among decision-makers from provincial Police headquarters and the Police Headquarters.

1991
Vol 2 (2)
pp. 242-251
Author(s):  
Michaelene P. Mirr

An acute injury in which a family member requires critical care creates a period of intense stress for families. During this time, family members are often faced with decisions about the patient or the family. The ability of families to make decisions during this stressful period is not addressed in the literature. The purpose of this study was to determine what decisions families made in the one-month period after a patient’s admission. Families of patients with severe head injury were chosen because these families are often forced to make decisions quickly and to act as proxy decision-makers for the injured person. Nurses can make important contributions by assisting family members in making decisions about the patient or the family. To intervene appropriately, nurses need to understand what decisions family members need to make and the circumstances surrounding the decision-making process.


2012
Vol 3 (1)
pp. 63-73
Author(s):  
I. Csáky
F. Kalmár

Abstract Nowadays the facades of newly built buildings have significant glazed surfaces. The solar gains in these buildings can produce discomfort, caused by direct solar radiation on the one hand and by higher indoor air temperatures on the other. The amplitude of the indoor air temperature variation depends on the glazed area, the orientation of the facade and the heat storage capacity of the building. This paper presents the results of simulations carried out in the Passol Laboratory of the University of Debrecen in order to determine the internal temperature variation. The simulations showed that the highest amplitudes of the internal temperature are obtained for an East-facing facade. The upper acceptable limit of the internal air temperature is exceeded for each analyzed orientation: North, South, East and West. Comparing different building structures, the results show that in the case of the heavy structure more cooling hours are obtained, but the energy consumption for cooling is lower.
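As an illustrative sketch (an assumption introduced for this summary, not the laboratory model used by the authors), a single-node heat balance makes these dependences explicit: solar gains enter through the glazed area and the orientation-dependent irradiance, while the effective thermal capacity damps the resulting temperature swing,

\[ C_{\mathrm{eff}}\,\frac{dT_i}{dt} = H_{\mathrm{tr}}\,(T_e - T_i) + g\,A_{\mathrm{gl}}\,I_{\mathrm{sol}}(t) + \Phi_{\mathrm{int}}, \]

where T_i and T_e are the indoor and outdoor air temperatures, C_eff the effective heat storage capacity, H_tr the combined transmission and ventilation heat loss coefficient, g the solar energy transmittance of the glazing, A_gl the glazed area, I_sol(t) the orientation-dependent solar irradiance and Φ_int the internal gains. For the same gains, a larger C_eff yields a smaller amplitude of T_i, which is consistent with the comparison of light and heavy structures reported above.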


2021
Vol 11 (11)
pp. 5070
Author(s):  
Xesús Prieto-Blanco
Carlos Montero-Orille

In the last few years, some advances have been made in the theoretical modelling of ion exchange processes in glass. On the one hand, the equations that describe the evolution of the cation concentration were rewritten in a more rigorous manner. This was done within two theoretical frameworks: in the first, the self-diffusion coefficients were assumed to be constant, whereas in the second a more realistic cation behaviour was considered by taking into account the so-called mixed ion effect. Along with these equations, the boundary conditions for the usual ion exchange processes from molten salts, silver and copper films, and metallic cathodes were established accordingly. On the other hand, the modelling of some ion exchange processes that have attracted a great deal of attention in recent years, including glass poling, electro-diffusion of multivalent metals and the formation/dissolution of silver nanoparticles, has been addressed. In such processes, the usual approximations made in ion exchange modelling are not always valid. An overview of the progress made and the remaining challenges in the modelling of these unique processes is provided at the end of this review.
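For orientation, a standard formulation of binary exchange found in the ion-exchange literature (given here as an illustrative sketch, not as the exact equations of the review) describes the normalized concentration c of the incoming cation A replacing the host cation B, in the absence of an external field, by

\[ \frac{\partial c}{\partial t} = \nabla \cdot \bigl( \tilde{D}(c)\, \nabla c \bigr), \qquad \tilde{D}(c) = \frac{D_A}{1 - \alpha c}, \qquad \alpha = 1 - \frac{D_A}{D_B}, \]

where D_A and D_B are the self-diffusion coefficients of the two cations. Even with constant D_A and D_B this yields a concentration-dependent interdiffusion coefficient; the mixed ion effect is incorporated by additionally letting D_A and D_B depend on the local composition, which is the more realistic behaviour referred to above.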


Author(s):  
Unai Zabala
Igor Rodriguez
José María Martínez-Otzeta
Elena Lazkano

Abstract Natural gestures are a desirable feature for a humanoid robot, as they are presumed to elicit a more comfortable interaction with people. With this aim in mind, we present in this paper a system to develop a natural talking gesture generation behavior. A Generative Adversarial Network (GAN) produces novel beat gestures from data captured from recordings of human talking. The data is obtained without the need for any kind of wearable, as a motion capture system properly estimates the position of the limbs/joints involved in human expressive talking behavior. After testing on a Pepper robot, it is shown that the system is able to generate natural gestures during long talking periods without becoming repetitive. This approach is computationally more demanding than previous work; therefore, a comparison is made in order to evaluate the improvements. The comparison is based on common measures of the end effectors’ trajectories (jerk and path lengths), complemented by the Fréchet Gesture Distance (FGD), which aims to measure the fidelity of the generated gestures with respect to the provided ones. Results show that the described system is able to learn natural gestures just by observation and improves on the one developed with a simpler motion capture system. The quantitative results are supported by a questionnaire-based human evaluation.
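The trajectory measures mentioned above are straightforward to reproduce. The following minimal sketch (function names, array layout and the 30 Hz sampling rate are assumptions, and the FGD is omitted because it requires a learned feature embedding) computes path length and mean squared jerk for one end effector:

```python
import numpy as np

def path_length(traj):
    """Total distance travelled by the end effector.
    traj: (T, 3) array of positions sampled at a fixed rate."""
    return float(np.linalg.norm(np.diff(traj, axis=0), axis=1).sum())

def mean_squared_jerk(traj, dt):
    """Mean squared jerk (third time derivative of position), estimated with
    finite differences; lower values indicate smoother motion."""
    jerk = np.diff(traj, n=3, axis=0) / dt**3
    return float((jerk ** 2).sum(axis=1).mean())

# Hypothetical usage: compare a generated gesture with a recorded one.
# generated, recorded = ...          # (T, 3) position arrays
# dt = 1.0 / 30.0                    # assumed sampling interval
# print(path_length(generated), mean_squared_jerk(generated, dt))
```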


Author(s):  
Alexander Boll
Florian Brokhausen
Tiago Amorim
Timo Kehrer
Andreas Vogelsang

Abstract Simulink is an example of a successful application of the paradigm of model-based development in industrial practice. Numerous companies create and maintain Simulink projects for modeling software-intensive embedded systems, aiming at early validation and automated code generation. However, Simulink projects are not as easily available as code-based ones, which benefit from large, publicly accessible open-source repositories; this scarcity curbs empirical research. In this paper, we investigate a set of 1734 freely available Simulink models from 194 projects and analyze their suitability for empirical research. We analyze the projects considering (1) their development context, (2) their complexity in terms of size and organization within projects, and (3) their evolution over time. Our results show that there are both limitations and potentials for empirical research. On the one hand, some application domains dominate the development context, and a large number of models can be considered toy examples of limited practical relevance. These often stem from an academic context, consist of only a few Simulink blocks, and are no longer (or have never been) under active development or maintenance. On the other hand, we found that a subset of the analyzed models is of considerable size and complexity. There are models comprising several thousands of blocks, some of them highly modularized by hierarchically organized Simulink subsystems. Likewise, some of the models show an active maintenance span of several years, which indicates that they are used as primary development artifacts throughout a project’s lifecycle. According to a discussion of our results with a domain expert, many models can be considered mature enough for quality analysis purposes, and they exhibit characteristics that can be considered representative of industry-scale models. Thus, we are confident that a subset of the models is suitable for empirical research. More generally, using a publicly available model corpus or a dedicated subset enables researchers to replicate findings, publish subsequent studies, and use them for validation purposes. We publish our dataset for the sake of replicating our results and fostering future empirical research.
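A minimal sketch of the kind of size metric mentioned above (block counts). This is not the authors' tooling; it assumes the common .slx layout (a zip archive whose block diagram is stored as XML under simulink/blockdiagram.xml), and the inner path may differ between MATLAB releases:

```python
import zipfile
import xml.etree.ElementTree as ET

def count_blocks(slx_path: str) -> int:
    """Rough size metric for a Simulink model: number of <Block> elements
    in the block-diagram XML packed inside the .slx archive."""
    with zipfile.ZipFile(slx_path) as archive:
        with archive.open("simulink/blockdiagram.xml") as f:
            root = ET.parse(f).getroot()
    return sum(1 for _ in root.iter("Block"))

# print(count_blocks("example_model.slx"))  # hypothetical model file
```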


2021
pp. 030157422098054
Author(s):  
Renu Datta

Introduction: The upper lateral incisor is the most commonly missing tooth in the anterior segment. Its absence leads to an esthetic and functional imbalance for the patient. The ideal solution is the one that is most conservative and that fulfills the functional and esthetic needs of the individual concerned. Canine substitution is evolving into the treatment of choice in most cases because of its various advantages. These are special cases that demand more time and effort from clinicians owing to the space discrepancy between the upper and lower arches, along with the presentation of the individual malocclusion. Aims and Objectives: Malocclusion resulting from missing lateral incisors is more complex, requiring more time and effort from clinicians because of space discrepancy, esthetic compromise, and the individual presentation of the malocclusion. An attempt has been made in this article to review, evaluate, and tabulate the important factors for the convenience of clinicians. Method: All articles related to canine substitution were retrieved from the electronic database PubMed, and the important factors influencing the decision were reviewed. After careful evaluation, a checklist was developed. Result: The malocclusions in which canine substitution is the treatment of choice are presented in tabular form for the convenience of clinicians. Specific treatment-planning considerations and biomechanics that can lead to an efficient and long-lasting result are also discussed. Conclusion: The need of the hour is an evidence-based approach, along with well-designed prospective randomized controlled trials, to establish the importance of each factor influencing these cases. Until then, presenting the available information in a simplified way is a practical approach to these cases.


AI & Society
2021
Author(s):  
Simona Chiodo

Abstract We continuously talk about autonomous technologies. But how can words qualifying technologies be the very same words chosen by Kant to define what is essentially human, i.e. being autonomous? The article focuses on a possible answer by reflecting upon both etymological and philosophical issues, as well as upon the case of autonomous vehicles. Most interestingly, on the one hand, we have the notion of (human) “autonomy”, meaning that there is a “law” that is “self-given”, and, on the other hand, we have the notion of (technological) “automation”, meaning that there is something “offhand” that is “self-given”. Yet, we are experiencing a kind of twofold shift: on the one hand, the shift from defining technologies in terms of automation to defining technologies in terms of autonomy and, on the other hand, the shift from defining humans in terms of autonomy to defining humans in terms of automation. From a philosophical perspective, the shift may mean that we are trying to escape precisely from what autonomy founds, i.e. the individual responsibility of humans who, in Western culture, have for millennia been defined as rational and moral decision-makers, even when their decisions have been the toughest. More precisely, the shift may mean that we are using technologies, and in particular emerging algorithmic technologies, as scapegoats that bear responsibility for us by making decisions for us. Moreover, if we consider the kind of emerging algorithmic technologies that increasingly surround us, starting from autonomous vehicles, then we may argue that we also seem to create a kind of technological divine that, by being always with us through its immanent omnipresence, omniscience, omnipotence and inscrutability, can always be our technological scapegoat, freeing us from the most unbearable burden of individual responsibility resulting from individual autonomy.


1969
Vol 63 (2)
pp. 427-441
Author(s):  
Kenneth Prewitt
Heinz Eulau

Scholars interested in theorizing about political representation in terms relevant to democratic governance in mid-twentieth century America find themselves in a quandary. We are surrounded by functioning representative institutions, or at least by institutions formally described as representative. Individuals who presumably “represent” other citizens govern some 90 thousand different political units; they sit on school and special district boards, on township and city councils, on county directorates, on state and national assemblies, and so forth. But the flourishing activity of representation has not yet been matched by a sustained effort to explain what makes the representational process tick. Despite the proliferation of representative governments over the past century, theory about representation has not moved much beyond the eighteenth-century formulation of Edmund Burke. Certainly most empirical research has been cast in the Burkean vocabulary. But in order to think in novel ways about representative government in the twentieth century, we may have to admit that present conceptions guiding empirical research are obsolete. This in turn means that the spell of Burke's vocabulary over scientific work on representation must be broken. To look afresh at representation, it is necessary to be sensitive to the unresolved tension between the two main currents of contemporary thinking about representational relationships. On the one hand, representation is treated as a relationship between any one individual, the represented, and another individual, the representative (an inter-individual relationship). On the other hand, representatives are treated as a group, brought together in the assembly, to represent the interest of the community as a whole (an inter-group relationship). Most theoretical formulations since Burke are cast in one or the other of these terms.


The freeze-etching technique must be improved if structures at the molecular size level are to be seen. The limitations of the technique are discussed here together with the progress made in alleviating them. The vitrification of living specimens is limited by the fact that very high freezing rates are needed. The critical freezing rate can be lowered on the one hand by the introduction of antifreeze agents and on the other hand by the application of high hydrostatic pressure. The fracture process may cause structural distortions in the fracture face of the frozen specimen. The ‘double-replica’ method allows one to evaluate such artefacts and provides an insight into the way that membranes split. During etching there exists the danger of contaminating the fracture faces with condensable gases. Because specimen temperatures are below −110 °C, special care has to be taken to eliminate water vapour from the high vacuum. An improvement in coating freeze-etched specimens has resulted from the application of electron guns for the evaporation of the highest melting-point metals. If heat transfer from gun to specimen is reduced to a minimum, Pt, Ir, Ta, W and C can be used for shadow casting. The best results are obtained with Pt-C and Ta-W. With the help of decoration effects, Pt-C shadow castings give the most information about the fine structural details of the specimen.


Author(s):  
O. Adamidis ◽  
G. S. P. Madabhushi

Loosely packed sand that is saturated with water can liquefy during an earthquake, potentially causing significant damage. Once the shaking is over, the excess pore water pressures that developed during the earthquake gradually dissipate, while the surface of the soil settles, in a process called post-liquefaction reconsolidation. When examining reconsolidation, the soil is typically divided into liquefied and solidified parts, which are modelled separately. The aim of this paper is to show that this fragmentation is not necessary. By assuming that the hydraulic conductivity and the one-dimensional stiffness of liquefied sand have real, positive values, the equation of consolidation can be solved numerically throughout a reconsolidating layer. Predictions made in this manner show good agreement with geotechnical centrifuge experiments. It is shown that the variation of one-dimensional stiffness with effective stress and void ratio is the most crucial parameter in accurately capturing reconsolidation.
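For reference, the classical one-dimensional consolidation equation being solved is (an illustrative sketch of the standard formulation, not a reproduction of the paper's own equations)

\[ \frac{\partial u_e}{\partial t} = c_v\,\frac{\partial^2 u_e}{\partial z^2}, \qquad c_v = \frac{k\,E_{\mathrm{oed}}}{\gamma_w}, \]

where u_e is the excess pore water pressure, z the depth, k the hydraulic conductivity, E_oed the one-dimensional (oedometric) stiffness and γ_w the unit weight of water. The argument summarized above is that assigning real, positive k and E_oed to the liquefied sand allows this equation to be integrated across the whole layer without splitting it into liquefied and solidified parts.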

