Comparison of the Selection Sort and Insertion Sort Methods for Sorting Data Using the Java Programming Language (Perbandingan Metode Selection Sort dan Insertion Sort Dalam Pengurutan Data Menggunakan Bahasa Program Java)

Petir ◽  
2019 ◽  
Vol 12 (2) ◽  
pp. 172-178
Author(s):  
Endang Sunandar

There are various well-known data sorting methods, including the Bubble Sort, Selection Sort, Insertion Sort, Quick Sort, Shell Sort, and Heap Sort methods. Each has its own strengths and weaknesses, and the choice among them is determined by need. Each method uses a different algorithm, and this difference affects execution time. In this paper the authors compare two sorting methods, Selection Sort and Insertion Sort, chosen because both are concise algorithms with almost the same algorithmic pattern, and test them on the same amount and model of data. The purpose of the comparison is to give an overview of the two methods and to determine which one, Selection Sort or Insertion Sort, has the faster execution time.
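The paper's benchmark code is not included in the abstract; the following is only a minimal Java sketch of the kind of comparison it describes, running both algorithms on identical copies of the same random data and timing them with System.nanoTime(). The array size and data model are illustrative assumptions.

```java
import java.util.Arrays;
import java.util.Random;

public class SortComparison {
    // Selection sort: repeatedly select the minimum of the unsorted suffix.
    static void selectionSort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {
            int min = i;
            for (int j = i + 1; j < a.length; j++) {
                if (a[j] < a[min]) min = j;
            }
            int tmp = a[i]; a[i] = a[min]; a[min] = tmp;
        }
    }

    // Insertion sort: insert each element into the already-sorted prefix.
    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            int j = i - 1;
            while (j >= 0 && a[j] > key) {
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = key;
        }
    }

    public static void main(String[] args) {
        int[] data = new Random(42).ints(10_000, 0, 100_000).toArray(); // illustrative size
        int[] bySelection = Arrays.copyOf(data, data.length);
        int[] byInsertion = Arrays.copyOf(data, data.length);

        long t0 = System.nanoTime();
        selectionSort(bySelection);
        long t1 = System.nanoTime();
        insertionSort(byInsertion);
        long t2 = System.nanoTime();

        System.out.printf("Selection sort: %.2f ms%n", (t1 - t0) / 1e6);
        System.out.printf("Insertion sort: %.2f ms%n", (t2 - t1) / 1e6);
    }
}
```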

2019 ◽  
Vol 19 (4) ◽  
pp. 186-192
Author(s):  
A. P. Demichkovskyi

The purpose of the study was to define informative indicators of technical and tactical actions of qualified rifle shooting athletes. Materials and methods. The study involved MSU-level shooters (number of athletes n = 10) and CMSU-level shooters (number of athletes n = 9). To solve the tasks set, the following research methods were used: analysis and generalization of scientific and methodological literature, and pedagogical observation. Pedagogical observation was used to study the peculiarities of technical and tactical indicators of qualified athletes, as well as their motor abilities; methods of mathematical statistics were used to process the experimental data. Results. A detailed analysis of competitive activity made it possible to determine that the shot phases “Aiming”, “Shot execution – active shot”, and “Preparation for the shot” are informative indicators of technical and tactical actions of qualified rifle shooting athletes. The study determined the time parameters of these phases during competitive activity. The difference between the average indicators of athletes with different sports qualifications is within 2.55 seconds, which suggests that the duration of the restorative processes of the shooter’s body affects the performance of each shot. Conclusions. A detailed analysis of men's air rifle shooting during competitive activity made it possible to determine the difference in technical and tactical fitness between athletes of the MSU and CMSU qualification levels: “Aiming” – MSU 950.56 seconds, CMSU 1017.91 seconds; “Shot execution – active shot” – MSU 964.45 seconds, CMSU 952.36 seconds; “Preparation for the shot” – MSU 1678.66 seconds, CMSU 1855.19 seconds; “Total execution time” – MSU 3593.68 seconds, CMSU 3825.47 seconds.


Algorithms ◽  
2021 ◽  
Vol 14 (2) ◽  
pp. 67
Author(s):  
Jin Nakabe ◽  
Teruhiro Mizumoto ◽  
Hirohiko Suwa ◽  
Keiichi Yasumoto

As the number of users who cook their own food increases, there is growing demand for an optimal cooking procedure for multiple dishes, but the optimal procedure varies from user to user because of differences in each user's cooking skill and environment. In this paper, we propose a system that presents optimal cooking procedures and enables parallel cooking of multiple recipes. We formulate the problem of deciding optimal cooking procedures as a task scheduling problem by creating a task graph for each recipe. To reduce execution time, we propose two extensions to the preprocessing and bounding operations of PDF/IHS, a sequential optimization algorithm for the task scheduling problem, each taking the characteristics of cooking into account. We confirmed that the proposed algorithm reduces execution time by up to 44% compared to the base PDF/IHS, and that its execution time grows by only about 900 times even when the number of required searches increases 10,000-fold. In addition, an experiment with three recipes and 10 participants each confirmed that, by following the optimal cooking procedure for a given menu, actual cooking time was reduced by up to 13 min (14.8%) compared to when users cooked freely.
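PDF/IHS itself is not shown in the abstract; the Java sketch below illustrates only the underlying formulation it builds on, modelling recipe steps as a task graph (tasks with durations and precedence edges) and producing one feasible cooking order via topological sorting. The class, task names, and recipe are hypothetical, and the actual optimization over durations and parallel resources is what the paper's algorithm solves.

```java
import java.util.*;

// Hypothetical model: a recipe step with a duration and the steps it depends on.
record Task(String name, int minutes, List<String> dependsOn) {}

public class CookingSchedule {
    // Kahn's algorithm: returns one feasible (topological) cooking order.
    static List<String> feasibleOrder(List<Task> tasks) {
        Map<String, Integer> indegree = new HashMap<>();
        Map<String, List<String>> successors = new HashMap<>();
        for (Task t : tasks) indegree.put(t.name(), t.dependsOn().size());
        for (Task t : tasks)
            for (String dep : t.dependsOn())
                successors.computeIfAbsent(dep, k -> new ArrayList<>()).add(t.name());

        Deque<String> ready = new ArrayDeque<>();
        indegree.forEach((name, d) -> { if (d == 0) ready.add(name); });

        List<String> order = new ArrayList<>();
        while (!ready.isEmpty()) {
            String step = ready.poll();
            order.add(step);
            for (String next : successors.getOrDefault(step, List.of()))
                if (indegree.merge(next, -1, Integer::sum) == 0) ready.add(next);
        }
        return order;
    }

    public static void main(String[] args) {
        List<Task> miso = List.of(
            new Task("boil water", 5, List.of()),
            new Task("cut tofu", 3, List.of()),
            new Task("add miso and tofu", 2, List.of("boil water", "cut tofu")));
        System.out.println(feasibleOrder(miso)); // e.g. [boil water, cut tofu, add miso and tofu]
    }
}
```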


Author(s):  
Rogel Ladia Quilala ◽  
Ariel M Sison ◽  
Ruji P Medina

Hashes are used to check the integrity of data. This paper modifies SHA-1 by incorporating a mixing method in every round for better diffusion. The modification increases the hash output to 192 bits; increasing the output increases the strength because breaking the hash takes longer. Based on the different message types, the avalanche percentage of the modified SHA-1 showed better diffusion at 51.64%, higher than the 50% target, while SHA-1 achieved 46.61%. The average execution time noted for the modified SHA-1 is 0.33 seconds, while SHA-1's is 0.08 seconds. Time increases as the number of messages hashed increases; the difference is negligible for fewer messages. On character hits, that is, the same character appearing in the same position, the modified SHA-1 achieved a lower hit rate because of the added mixing method. The modification's effectiveness was also evaluated using a hash test program: after inputting 1000 hashes from random strings, the results show no duplicate hashes.
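The modified 192-bit SHA-1 is not publicly available, so the following is only an illustrative Java sketch of how an avalanche percentage can be measured, here using the standard SHA-1 from java.security.MessageDigest: flip one input bit and count the fraction of output bits that change.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class AvalancheTest {
    // Fraction of output bits that differ between hash(msg) and hash(msg with one input bit flipped).
    static double avalanchePercent(String message) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-1"); // standard SHA-1; the 192-bit variant is not in the JDK
        byte[] original = message.getBytes(StandardCharsets.UTF_8);
        byte[] flipped = original.clone();
        flipped[0] ^= 0x01; // flip the lowest bit of the first byte

        byte[] h1 = md.digest(original);
        byte[] h2 = md.digest(flipped);

        int differing = 0;
        for (int i = 0; i < h1.length; i++) differing += Integer.bitCount((h1[i] ^ h2[i]) & 0xFF);
        return 100.0 * differing / (h1.length * 8);
    }

    public static void main(String[] args) throws Exception {
        System.out.printf("Avalanche: %.2f%%%n", avalanchePercent("The quick brown fox"));
    }
}
```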


Author(s):  
Agung Yudha Berliantara

ETL scheduling is a challenging and interesting problem to solve. The ETL scheduling problem has many facets, one of which is the cost of time: if it is not handled correctly, execution on very large data may take a very long time and produce inconsistent data. This study uses a Round-robin scheduling algorithm, which proved able to produce efficient results consistent with conventional methods. The research shows that the main difference between the two approaches is execution time: in this experiment, the Round-robin scheduling method gives an execution time that is up to 61% more efficient, depending on the amount of data and the number of partitions used.
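The abstract does not include the implementation; below is a minimal Java sketch of the round-robin idea it refers to: ETL jobs (or data chunks) are dealt out cyclically across a fixed number of partitions so each partition receives roughly the same amount of work. The job names and partition count are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;

public class RoundRobinEtl {
    // Deal jobs out cyclically across partitions: item i goes to partition i % partitions.
    static <T> List<List<T>> roundRobin(List<T> jobs, int partitions) {
        List<List<T>> buckets = new ArrayList<>();
        for (int p = 0; p < partitions; p++) buckets.add(new ArrayList<>());
        for (int i = 0; i < jobs.size(); i++) buckets.get(i % partitions).add(jobs.get(i));
        return buckets;
    }

    public static void main(String[] args) {
        List<String> jobs = List.of("extract_orders", "extract_customers", "transform_orders",
                                    "transform_customers", "load_orders", "load_customers", "build_marts");
        List<List<String>> plan = roundRobin(jobs, 3); // 3 partitions, illustrative
        for (int p = 0; p < plan.size(); p++) System.out.println("partition " + p + ": " + plan.get(p));
    }
}
```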


2018 ◽  
Vol 13 (04) ◽  
Author(s):  
Weni Cikita Kuyotok ◽  
Harijanto Sabijono ◽  
Victorina Z. Tirayoh

Computer technology is very helpful in work, and one such tool is Microsoft Excel. This study aims to determine which Microsoft Excel functions are always used, considered important, and need to be mastered for a career as an auditor. The objects of research are BPK RI Representative of North Sulawesi Province and the Faculty of Economics and Business of Sam Ratulangi University. The method of analysis used in this study is descriptive analysis, in which the collected data are analyzed and conclusions drawn; the type of research used is descriptive quantitative. The results indicate that basic functions, format functions, and filtering and sorting data are the functions always used by auditors. Furthermore, basic functions, filtering and sorting data, format functions, and keyboard shortcuts are very important functions for the auditor. Finally, basic functions, filtering and sorting data, format functions, and keyboard shortcuts are the functions most mastered by auditors. Note that for the level of importance there is a difference between the perceptions of junior team auditors and senior team auditors: only senior team auditors consider keyboard shortcuts a very important function, while junior team auditors consider keyboard shortcuts merely important. For accounting students, only keyboard shortcuts are always used. Furthermore, six functions are considered very important by accounting students: basic functions, keyboard shortcuts, format functions, charts and graphs, filtering and sorting data, and financial functions. Finally, the functions most mastered by accounting students are basic functions, keyboard shortcuts, and format functions.
Keywords: Functions of Microsoft Excel, Auditor, Students


2010 ◽  
Vol 439-440 ◽  
pp. 1469-1474
Author(s):  
Le Cao ◽  
Biao Wang ◽  
Fei Liu

The measurement of worker differences is a fundamental consideration in personnel assignment, which is a key decision influencing the productivity and quality of assembly production. Because existing research on measuring worker differences is weak, the concept of station fitness, which takes the worker's skill level and accumulated execution time in a given period as parameters, is proposed to better describe worker competency and provide a way of measuring the differences among workers. A personnel assignment optimization model for assembly production based on station fitness is constructed; its objectives are to maximize the station fitness of each assembly station and to minimize the difference in station fitness among workers assigned to the same assembly line. A heuristic algorithm based on the fitness matrix is then presented to solve this model. The results of the example demonstrate the feasibility of the approach.
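The paper's heuristic is not detailed in the abstract; as a hedged illustration only, the Java sketch below assigns workers to stations greedily from a hypothetical station-fitness matrix (rows = workers, columns = stations), repeatedly picking the highest remaining fitness value so that each station gets exactly one worker. The matrix values are invented for the example.

```java
public class FitnessAssignment {
    // Greedy assignment from a worker-by-station fitness matrix:
    // repeatedly pick the largest remaining fitness and fix that worker to that station.
    static int[] assign(double[][] fitness) {
        int n = fitness.length;                 // square matrix assumed: n workers, n stations
        int[] stationOfWorker = new int[n];
        boolean[] workerUsed = new boolean[n], stationUsed = new boolean[n];
        for (int k = 0; k < n; k++) {
            int bestW = -1, bestS = -1;
            double best = Double.NEGATIVE_INFINITY;
            for (int w = 0; w < n; w++)
                for (int s = 0; s < n; s++)
                    if (!workerUsed[w] && !stationUsed[s] && fitness[w][s] > best) {
                        best = fitness[w][s]; bestW = w; bestS = s;
                    }
            workerUsed[bestW] = true;
            stationUsed[bestS] = true;
            stationOfWorker[bestW] = bestS;
        }
        return stationOfWorker;
    }

    public static void main(String[] args) {
        double[][] fitness = { {0.9, 0.4, 0.6}, {0.5, 0.8, 0.3}, {0.7, 0.6, 0.9} }; // illustrative values
        int[] assignment = assign(fitness);
        for (int w = 0; w < assignment.length; w++)
            System.out.println("worker " + w + " -> station " + assignment[w]);
    }
}
```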


With the increasing advent of parallel computing, it has become necessary to write OpenMP programs to achieve better speedup and to exploit parallel hardware efficiently. To achieve this, however, programmers are required to understand OpenMP directives and clauses, the dependencies in their code, and so on. A small mistake, such as wrongly analysing a dependency or wrongly scoping a variable, can result in an incorrect or inefficient program. In this paper, we propose a system which can automate the parallelization of serial C code. The system accepts a serial program as input and generates the corresponding parallel code in OpenMP without altering the core logic of the program. The system uses the different data scoping and work sharing constructs available in the OpenMP platform. It aims at parallelizing "for" loops, "while" loops, nested "for" loops, and recursive structures; it parallelizes "for" loops by considering the induction variable and converts "while" loops to "for" loops for parallelization. The system is tested on several programs such as matrix addition, quick sort, and linear search. The execution time of the programs before and after parallelization is measured, and a graph is plotted to help visualize the decrease in execution time.
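The system described targets C with OpenMP pragmas, which are not reproduced here; as an analogous illustration only, the Java sketch below shows the same kind of serial-to-parallel loop transformation over an independent-iteration loop, using parallel streams in place of an OpenMP directive.

```java
import java.util.stream.IntStream;

public class LoopParallelization {
    public static void main(String[] args) {
        int n = 1_000_000;
        double[] a = new double[n], b = new double[n], c = new double[n];
        for (int i = 0; i < n; i++) { a[i] = i; b[i] = 2.0 * i; }

        // Serial loop: iterations are independent, so it is a candidate for parallelization.
        for (int i = 0; i < n; i++) c[i] = a[i] + b[i];

        // Parallel version over the induction variable i
        // (analogous to what "#pragma omp parallel for" expresses in C).
        IntStream.range(0, n).parallel().forEach(i -> c[i] = a[i] + b[i]);

        System.out.println("c[n-1] = " + c[n - 1]);
    }
}
```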


Author(s):  
A. Sorokine ◽  
R. N Stewart

The ability to easily combine data from diverse sources in a single analytical workflow is one of the greatest promises of Big Data technologies. However, such integration is often challenging because datasets originate from different vendors, governments, and research communities, which results in multiple incompatibilities in data representations, formats, and semantics. Semantic differences are the hardest to handle: different communities often use different attribute definitions and associate records with different sets of evolving geographic entities. Analysis of global socioeconomic variables across multiple datasets over prolonged time is often complicated by differences in how the boundaries and histories of countries or other geographic entities are represented. Here we propose an event-based data model for depicting and tracking the histories of evolving geographic units (countries, provinces, etc.) and their representations in disparate data. The model addresses the semantic challenge of preserving the identity of geographic entities over time by defining criteria for an entity's existence, a set of events that may affect its existence, and rules for mapping between different representations (datasets). The proposed model is used for maintaining an evolving compound database of global socioeconomic and environmental data harvested from multiple sources. A practical implementation of the model is demonstrated using a PostgreSQL object-relational database with temporal, geospatial, and NoSQL database extensions.
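The paper's schema is implemented in PostgreSQL and is not reproduced in the abstract; below is only a small Java sketch of the event-based idea, representing a geographic unit together with the dated events (creation, split, merger, dissolution, rename) that affect its existence. The field and event names are hypothetical, and the history list is assumed to be in chronological order.

```java
import java.time.LocalDate;
import java.util.List;

// Hypothetical event types that can affect the existence of a geographic unit.
enum GeoEventType { CREATED, RENAMED, SPLIT, MERGED, DISSOLVED }

record GeoEvent(GeoEventType type, LocalDate date, String note) {}

// A geographic unit and its event history; the unit "exists" between creation and dissolution.
record GeoEntity(String id, String name, List<GeoEvent> history) {
    boolean existsOn(LocalDate date) {
        boolean exists = false;
        for (GeoEvent e : history) {          // history assumed sorted by date
            if (e.date().isAfter(date)) break;
            if (e.type() == GeoEventType.CREATED) exists = true;
            if (e.type() == GeoEventType.DISSOLVED) exists = false;
        }
        return exists;
    }
}

public class GeoHistoryDemo {
    public static void main(String[] args) {
        GeoEntity cs = new GeoEntity("CSK", "Czechoslovakia", List.of(
            new GeoEvent(GeoEventType.CREATED, LocalDate.of(1918, 10, 28), "independence"),
            new GeoEvent(GeoEventType.DISSOLVED, LocalDate.of(1993, 1, 1), "split into CZ and SK")));
        System.out.println(cs.existsOn(LocalDate.of(1980, 1, 1))); // true
        System.out.println(cs.existsOn(LocalDate.of(2000, 1, 1))); // false
    }
}
```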


CSR activities in India have a long history. However, numerous criticisms have been raised against the way CSR activities are carried out by many Indian companies. Several studies found that more than half of Indian firms failed to meet their mandatory CSR spending requirements and report unspent funds in their financial statements. These issues show that there is a need to evaluate the behavior pattern of firms' CSR spending activities. In this article, we describe the heterogeneity in the CSR-related spending activities of Indian companies. We follow the panel regression clustering approach developed by Sarafidis and Weber (2015). In this approach, individual companies are grouped into a number of clusters and, within each cluster, the slope parameters are assumed to be similar. The difference in slopes across clusters is due to the standard error-components structure. As the clusters are heterogeneous, they do not share common parameters.

