Gambaran Derajat Grit Pada Mahasiswa Akademi Keperawatan “X” di Kabupaten Kepulauan Aru (Description of the Degree of Grit Among Students of Nursing Academy “X” in the Aru Islands Regency)

2017 ◽  
Vol 1 (1) ◽  
Author(s):  
Roseilla Nora Izaach

This study aimed to describe the level of grit among students of Nursing Academy “X” in the Aru Islands. Grit, one of the newer constructs in Positive Psychology, comprises two aspects, perseverance of effort and consistency of interest, that determine an individual's success in achieving life goals. The goal of achieving future success through education is the reason this research was conducted. Respondents were the 2014 cohort of students: 51 people, all female. The measuring instrument was the 12-item Grit Scale, with a reliability of 0.85 and validity coefficients ranging from 0.44 to 0.82 (Duckworth et al., 2007). Descriptive analysis showed that the majority of respondents (86.3%) have a low level of grit. On the perseverance-of-effort aspect, the majority of respondents (90.2%) scored low, while on the consistency-of-interest aspect, the majority (66.7%) scored high. The students' socioeconomic status, proxied by the parents' type of work, showed no apparent relationship to the degree of grit. Further research could investigate more deeply the contributions of personality factors, cultural background, and demographics to grit. Keywords: grit, socioeconomic status, demographics
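The scoring of a Likert-type scale like the one used here can be sketched as follows. This is a minimal illustration, assuming 12 items rated 1-5 with some items reverse-scored; the item indices and reverse-scoring key below are hypothetical, not the published Grit Scale key.

```python
# Sketch of scoring a 12-item grit questionnaire. The reverse-scored item
# indices are illustrative assumptions, not the exact published key.

def score_grit(responses, reverse_items):
    """responses: list of 12 ratings on a 1-5 scale; returns the mean score."""
    assert len(responses) == 12
    scored = [6 - r if i in reverse_items else r
              for i, r in enumerate(responses)]
    return sum(scored) / len(scored)  # mean grit score, 1 (low) to 5 (high)

# Example with hypothetical responses and hypothetical reverse-scored items
responses = [4, 2, 5, 3, 4, 2, 5, 3, 4, 2, 5, 3]
reverse = {1, 5, 9}
print(round(score_grit(responses, reverse), 2))  # 4.0
```

A cut-off on the resulting mean (or on subscale means for perseverance of effort and consistency of interest) would then classify respondents into the high and low levels reported above.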

2001 ◽  
Vol 01 (01) ◽  
pp. 63-81 ◽  
Author(s):  
ALAN HANJALIC ◽  
REGINALD L. LAGENDIJK ◽  
JAN BIEMOND

This paper addresses the problem of automatically partitioning a video into semantic segments using visual low-level features only. Semantic segments may be understood as content building blocks of a video with a clear sequential content structure. Examples are reports in a news program, episodes in a movie, scenes of a situation comedy, or topic segments of a documentary. In some video genres, like news programs or documentaries, using different media (visual, audio, speech, text) may be beneficial or even unavoidable for reliably detecting the boundaries between semantic segments. In many other genres, however, the pay-off of using different media for high-level segmentation is not high. On the one hand, relating the audio, speech, or text to the semantic temporal structure of video content is generally very difficult; this is especially so in "acting" video genres like movies and situation comedies. On the other hand, the information contained in the visual stream of these genres often provides the major clue about the position of semantic segment boundaries. Partitioning a video into semantic segments can be performed by measuring the coherence of the content along neighboring video shots of a sequence. The segment boundaries are then found at places (e.g., shot boundaries) where the values of content coherence are sufficiently low. On the basis of two state-of-the-art techniques for content coherence modeling, we illustrate in this paper the current possibilities for detecting the boundaries of semantic segments using visual low-level features only.
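The core idea of declaring a boundary where content coherence drops can be sketched very simply. This is a toy version assuming each shot is summarized by a color histogram and coherence is plain pairwise histogram intersection; the paper's actual coherence models are considerably more elaborate.

```python
# Toy coherence-based segmentation: coherence between neighboring shots is
# histogram intersection, and a segment boundary is declared wherever the
# coherence falls below a threshold.

def histogram_intersection(h1, h2):
    """Normalized overlap between two histograms, in [0, 1]."""
    return sum(min(a, b) for a, b in zip(h1, h2)) / max(sum(h1), 1e-9)

def segment_boundaries(shot_histograms, threshold=0.5):
    """Return indices i such that a boundary lies between shot i and shot i+1."""
    return [i for i in range(len(shot_histograms) - 1)
            if histogram_intersection(shot_histograms[i],
                                      shot_histograms[i + 1]) < threshold]

# Three shots: two visually similar ones, then an abrupt content change
shots = [
    [0.5, 0.3, 0.2],
    [0.45, 0.35, 0.2],
    [0.05, 0.1, 0.85],
]
print(segment_boundaries(shots))  # [1]: boundary between shots 1 and 2
```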


2016 ◽  
Vol 6 (3) ◽  
pp. 137-154 ◽  
Author(s):  
Hui Wei

Abstract We have two motivations. First, the semantic gap is a hard problem that puzzles almost every sub-field of Artificial Intelligence; we regard it as the conflict between the abstractness of high-level symbolic definitions and the detail and diversity of low-level stimuli. Second, in object recognition a pre-defined object prototype is crucial and indispensable for bi-directional perception processing: on the one hand this prototype is learned from perceptual experience, and on the other hand it should be able to guide subsequent top-down processing. Humans do this very well, so the physiological mechanism is simulated here. We utilize the mechanism of classical and non-classical receptive fields (nCRF) to design a hierarchical model and form a multi-layer prototype of an object. This is also a realistic definition of a concept and a representation of its denoted semantics, and we regard this model as the most fundamental infrastructure for grounding semantics. An AND-OR tree is constructed to record the prototypes of a concept, in which either raw data at the low level or symbols at the high level are feasible, and explicit production rules are also available. For pixel processing, knowledge should be represented in a data form; for scene reasoning, knowledge should be represented in a symbolic form. The physiological mechanism happens to be the bridge that joins them together seamlessly. This provides a possible route to solving the semantic gap problem and prevents discontinuity in low-order structures.
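The AND-OR tree idea can be illustrated with a tiny sketch: AND nodes require all parts of a prototype, OR nodes accept any alternative, and leaves match observed features. The node names below are hypothetical, not the paper's model.

```python
# Illustrative AND-OR tree for a concept prototype. Nodes are tuples:
# ("leaf", feature, []), ("and", name, children), or ("or", name, children).

def matches(node, observed):
    """True if the set of observed features satisfies the prototype tree."""
    kind = node[0]
    if kind == "leaf":
        return node[1] in observed
    children = node[2]
    if kind == "and":
        return all(matches(c, observed) for c in children)
    if kind == "or":
        return any(matches(c, observed) for c in children)
    raise ValueError(f"unknown node kind: {kind}")

# A toy "chair" prototype: a seat AND (four legs OR a pedestal base)
chair = ("and", "chair", [
    ("leaf", "seat", []),
    ("or", "support", [("leaf", "four_legs", []), ("leaf", "pedestal", [])]),
])
print(matches(chair, {"seat", "pedestal"}))  # True
print(matches(chair, {"seat"}))              # False: no supporting structure
```

The same tree could hold raw feature templates at its leaves for pixel processing and symbolic names at its inner nodes for scene reasoning, which is the dual representation the abstract describes.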


Author(s):  
Max Hoffmann ◽  
Christof Paar

Hardware obfuscation is widely used in practice to counteract reverse engineering. In recent years, low-level obfuscation via camouflaged gates has been increasingly discussed in the scientific community and industry. In contrast to classical high-level obfuscation, such gates result in recovery of an erroneous netlist. This technology has so far been regarded as a purely defensive tool. We show that low-level obfuscation is in fact a double-edged sword that can also enable stealthy malicious functionalities. In this work, we present Doppelganger, the first generic design-level obfuscation technique that is based on low-level camouflaging. Doppelganger obstructs central control modules of digital designs, e.g., Finite State Machines (FSMs) or bus controllers, resulting in two different design functionalities: an apparent one that is recovered during reverse engineering and the actual one that is executed during operation. Notably, both functionalities are under the designer’s control. In two case studies, we apply Doppelganger to a universal cryptographic coprocessor. First, we show the defensive capabilities by presenting the reverse engineer with a different mode of operation than the one that is actually executed. Then, for the first time, we demonstrate the considerable threat potential of low-level obfuscation. We show how an invisible, remotely exploitable key-leakage Trojan can be injected into the same cryptographic coprocessor just through obfuscation. In both applications of Doppelganger, the resulting design size is indistinguishable from that of an unobfuscated design, depending on the choice of encodings.
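The apparent/actual split at the heart of camouflaging can be illustrated with a single toy gate: a cell that a reverse engineer reads as one Boolean function while the silicon computes another. The gate choices below are illustrative only, not Doppelganger's construction.

```python
# Toy camouflaged cell: the recovered netlist says NAND, the silicon does NOR.
# For inputs other than (1, 1) the two functions disagree, so any analysis
# based on the recovered netlist reasons about the wrong circuit.

def apparent(a, b):
    """Function the reverse engineer recovers from the cell layout (NAND)."""
    return 1 - (a & b)

def actual(a, b):
    """Function the cell really computes during operation (NOR)."""
    return 1 - (a | b)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "apparent:", apparent(a, b), "actual:", actual(a, b))
```

Scaling this up, a designer who controls which cells are camouflaged can make the recovered design implement one complete behavior (e.g., a benign FSM) while the fabricated design implements another, which is exactly the dual-functionality property the abstract describes.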


2020 ◽  
Author(s):  
Marcos S. Pereira ◽  
Bruno V. Adorno

This paper addresses the integration of task planning and motion control in robotic manipulation, where automatically generated feasible manipulation sequences are executed by a controller that explicitly accounts for the task's geometric constraints. To cope with the high dimensionality of the manipulation problem and the complexity of specifying tasks, we use a multi-layered framework for task and motion planning adapted from the literature. The adapted framework consists of a high-level planner, which generates task plans for linear temporal logic specifications, and a low-level motion controller based on constrained optimization, which allows defining regions of interest instead of exact locations while remaining reactive to changes in the workspace. Thus, no low-level motion planning time is added to the total planning time. Moreover, since there is no replanning phase due to motion planner failures, the robot actions are generated only once for each task, because the search for a plan occurs on a static graph. We evaluated this approach on two pick-and-place tasks of similar complexity to those in the original framework and showed that the number of plan nodes generated is smaller than in the original framework, which implies less total planning time.
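The "search on a static graph" idea can be sketched as a plain graph search over task states, performed once, with no motion planner in the loop. The graph and state names below are hypothetical, not the paper's task model.

```python
# Sketch of one-shot plan search on a static task graph: breadth-first
# search returns a shortest action sequence, computed once per task.

from collections import deque

def plan(graph, start, goal):
    """BFS for a shortest path on a static task graph; None if unreachable."""
    frontier, parents = deque([start]), {start: None}
    while frontier:
        state = frontier.popleft()
        if state == goal:
            path = []
            while state is not None:
                path.append(state)
                state = parents[state]
            return list(reversed(path))
        for nxt in graph.get(state, []):
            if nxt not in parents:
                parents[nxt] = state
                frontier.append(nxt)
    return None

# Toy pick-and-place task graph (hypothetical states)
graph = {
    "at_home": ["at_pick"],
    "at_pick": ["holding"],
    "holding": ["at_place"],
    "at_place": ["released"],
}
print(plan(graph, "at_home", "released"))
```

In the framework described above, each edge would then be executed by the constrained-optimization controller, which absorbs workspace changes reactively instead of triggering replanning.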


2019 ◽  
Vol 1 (1) ◽  
pp. 31-39
Author(s):  
Ilham Safitra Damanik ◽  
Sundari Retno Andani ◽  
Dedi Sehendro

Milk is an important source of nutrition, consumed by children and adults alike. Indonesia has many producers of fresh milk, but production does not meet national demand. Data mining is a branch of computer science widely used in research; one of its techniques is clustering, a method that groups data and performs better the more data is available. The data used here are provincial data for Indonesia from 2000 to 2017, obtained from the Central Statistics Agency. The result of this study is two clusters of milk-producing regions: high producers and low producers. From the 27 provinces with fresh-milk production data, two provinces fall into the high-level cluster, namely West Java and East Java; the remaining 25, together with 7 provinces not covered by the K-Means Clustering Algorithm calculation, fall into the low-level cluster.
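The clustering described above can be sketched with a minimal one-dimensional K-Means with k=2. The production figures below are made up for illustration; they are not the BPS data used in the study.

```python
# Minimal 1-D K-Means (k=2): alternate between assigning each value to its
# nearest centroid and recomputing centroids as cluster means.

def kmeans_1d(values, k=2, iters=100):
    centroids = [min(values), max(values)]  # simple deterministic init
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            i = min(range(k), key=lambda c: abs(v - centroids[c]))
            clusters[i].append(v)
        new = [sum(c) / len(c) if c else centroids[i]
               for i, c in enumerate(clusters)]
        if new == centroids:  # converged
            break
        centroids = new
    return centroids, clusters

# Hypothetical provincial production figures (thousand tonnes)
production = [5, 8, 3, 290, 310, 6, 4, 7]
centroids, clusters = kmeans_1d(production)
print(centroids)  # [5.5, 300.0]: low-producer and high-producer centroids
print(clusters)   # [[5, 8, 3, 6, 4, 7], [290, 310]]
```

With real data, the two provinces with the largest production would land alone in the high cluster, mirroring the West Java / East Java result reported above.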


Author(s):  
Margarita Khomyakova

The author analyzes definitions of the concept of determinants of crime given by various scholars and offers her own definition. In this study, determinants of crime are understood as the set of its causes, the circumstances that contribute to its commission, and the dynamics of crime. It is noted that the Russian legislator, in Article 244 of the Criminal Code, defines the object of this criminal assault as public morality. Despite the use of evaluative concepts both in the disposition of this norm and in determining the specific object of the given crime, the position of criminologists is unequivocal: crimes of this kind are immoral and stand in irreconcilable conflict with generally accepted moral and legal norms. The paper considers some views on making value judgments that could hardly apply to legal norms. According to the author, the reasons for abuse of the bodies of the dead include the economic problems of the perpetrator and a low level of culture and legal awareness; this list is not exhaustive. The main circumstances that contribute to abuse of the bodies of the dead and their burial places are the following: low income and unemployment, a low level of criminological prevention, and poor maintenance and protection of medical institutions and cemeteries due to the underperformance of state and municipal bodies. This list of circumstances is also open-ended. Owing to several factors, including a high level of latency, it is not possible to reflect the dynamics of such crimes objectively. At the same time, identifying the determinants of abuse of the bodies of the dead will help reduce the number of such crimes.


2000 ◽  
Vol 41 (4-5) ◽  
pp. 253-260 ◽  
Author(s):  
P. Buffière ◽  
R. Moletta

An anaerobic inverse turbulent bed reactor, in which the biogas alone ensures fluidisation of floating carrier particles, was investigated for carbon removal kinetics and for biofilm growth and detachment. The reactor was operated between 5 and 30 kgCOD·m−3·d−1, with hydraulic retention times between 0.28 and 1 day. The carbon removal efficiency remained between 70 and 85%. Biofilm sizes were rather small (between 5 and 30 μm) while biofilm density reached very high values (over 80 kgVS·m−3). Biofilm size and density varied with increasing carbon removal rate with opposite trends: as biofilm size increased, density decreased. On the one hand, biomass activity within the reactor remained high (between 0.23 and 0.75 kgTOC·kgVS−1·d−1, i.e. between 0.6 and 1.85 kgCOD·kgVS−1·d−1). This result indicates that high turbulence and shear may favour the growth of thin, dense, and active biofilms, making the reactor an interesting tool for biomass control. On the other hand, volatile solid detachment increased quasi-linearly with carbon removal rate, and the total amount of solids in the reactor levelled off at high organic loading rates. This means that detachment could limit the process at higher organic loading rates.
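The operating quantities quoted above relate through two standard definitions, sketched here with illustrative inlet and outlet values (not measurements from the paper).

```python
# Worked example of reactor operating quantities: removal efficiency from
# inlet/outlet COD, and organic loading rate (OLR) from inlet COD and HRT.

def removal_efficiency(cod_in, cod_out):
    """Fraction of COD removed across the reactor."""
    return (cod_in - cod_out) / cod_in

def organic_loading_rate(cod_in, hrt_days):
    """OLR in kgCOD·m−3·d−1, given inlet COD (kg·m−3) and HRT (days)."""
    return cod_in / hrt_days

cod_in, cod_out, hrt = 10.0, 2.0, 0.5       # illustrative values
print(removal_efficiency(cod_in, cod_out))  # 0.8, i.e. 80%, within 70-85%
print(organic_loading_rate(cod_in, hrt))    # 20.0 kgCOD·m−3·d−1, within 5-30
```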


2021 ◽  
pp. 002224372199837
Author(s):  
Walter Herzog ◽  
Johannes D. Hattula ◽  
Darren W. Dahl

This research explores how marketing managers can avoid the so-called false consensus effect—the egocentric tendency to project personal preferences onto consumers. Two pilot studies were conducted to provide evidence for the managerial importance of this research question and to explore how marketing managers attempt to avoid false consensus effects in practice. The results suggest that the debiasing tactic most frequently used by marketers is to suppress their personal preferences when predicting consumer preferences. Four subsequent studies show that, ironically, this debiasing tactic can backfire and increase managers’ susceptibility to the false consensus effect. Specifically, the results suggest that these backfire effects are most likely to occur for managers with a low level of preference certainty. In contrast, the results imply that preference suppression does not backfire but instead decreases false consensus effects for managers with a high level of preference certainty. Finally, the studies explore the mechanism behind these results and show how managers can ultimately avoid false consensus effects—regardless of their level of preference certainty and without risking backfire effects.


Author(s):  
Richard Stone ◽  
Minglu Wang ◽  
Thomas Schnieders ◽  
Esraa Abdelall

Human-robot interaction systems are increasingly being integrated into industrial, commercial, and emergency service agencies. It is critical that human operators understand and trust automation when these systems support and even make important decisions. This study focused on a human-in-the-loop telerobotic system performing a reconnaissance operation. Twenty-four subjects were divided into groups based on level of automation: Low-Level Automation (LLA) and High-Level Automation (HLA). Results indicated a significant difference in hit rate between the low and high levels of automation when a permanent error occurred. In the LLA group, the type of error had a significant effect on hit rate. In general, the high level of automation outperformed the low level, especially when it was more reliable, suggesting that subjects in the HLA group could rely on the automation to perform the task more effectively and more accurately.


2020 ◽  
Vol 4 (POPL) ◽  
pp. 1-32 ◽  
Author(s):  
Michael Sammler ◽  
Deepak Garg ◽  
Derek Dreyer ◽  
Tadeusz Litak
