Indonesian-Aceh Application Translation Design Based on Android

Author(s):  
Rahmat Shaumi

Language is a means of communication. Many languages characterize the world's countries; Indonesia, for example, has numerous regional languages, one of which is Acehnese, the language predominantly used by residents of Aceh Province. The purpose of this research is to design and build an Android-based digital dictionary application that makes it easy to look up translated vocabulary in Indonesian or in Acehnese, so that it can be used by the general public. The method used in this thesis is prototyping. From the test results on the Android-based Indonesian-Acehnese application, several conclusions can be drawn: the search system displays words from a text file embedded in the program code quickly, because it does not require large data; the coding uses an auto-search model with an array matched against the string entered by the user; and the dictionary application is built with Xamarin in Microsoft Visual Studio 2017, which allows the single project to be exported to iOS and Windows Phone as well.
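The auto-search described above, matching the user's input string against an array of entries loaded from a bundled text file, can be sketched as follows (a minimal illustration in Python rather than the thesis's Xamarin/C# code; the entry format and sample words are assumptions):

```python
# Hypothetical dictionary lines in "indonesian=acehnese" format,
# standing in for the text file bundled with the application.
ENTRIES = [
    "makan=pajoh",
    "minum=jep",
    "rumah=rumoh",
]

def load_pairs(lines):
    """Parse 'source=target' lines into (source, target) tuples."""
    return [tuple(line.split("=", 1)) for line in lines]

def auto_search(query, pairs):
    """Return translations whose source word starts with the query,
    mimicking an as-you-type search over the entry array."""
    q = query.strip().lower()
    return [target for source, target in pairs if source.startswith(q)]

pairs = load_pairs(ENTRIES)
print(auto_search("ma", pairs))   # ['pajoh']
```

Because the whole entry array lives in memory, each keystroke can re-filter it without any database round trip, which is the speed advantage the abstract describes.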

Author(s):  
Agung Riyadi

One of the many ways to connect an Android application to a database is through Volley and a REST API. With a REST API, the Android application does not connect to the database directly; the API acts as an intermediary. In Android development, Android Volley has the disadvantage of handling requests for large amounts of data poorly, so an evaluation is needed to test its capabilities. This research tested Android Volley retrieving data through a REST API, in the form of an application that retrieves medicinal-plant data. The test results show that Volley produces an error when the back button is pressed, that is, when another process is started while a previous Volley request has not finished loading. This error occurred on several Android versions, such as Lollipop and Marshmallow, and on several device brands. Developers using Android Volley therefore need to check the request queue for processes started by the user: if a Volley data-retrieval process has not completed, it must be stopped before more data is downloaded with Volley, so that an Application Not Responding (ANR) error does not occur.
Keywords: Android, Volley, WP REST API, ANR Error
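The mitigation described above, cancelling an unfinished request before starting another, is a general pattern; a language-neutral sketch of it (in Python with a toy queue, not Volley's actual Java API, so all names here are illustrative):

```python
import threading

class RequestQueue:
    """Toy stand-in for a Volley-style request queue that allows at most
    one in-flight download and cancels the previous one before starting
    a new request (the 'back button pressed mid-load' case)."""

    def __init__(self):
        self._cancel = None   # cancellation flag of the current request

    def start_download(self, chunks, on_chunk):
        if self._cancel is not None:
            self._cancel.set()          # stop the unfinished request
        cancel = threading.Event()
        self._cancel = cancel

        def worker():
            for chunk in chunks:
                if cancel.is_set():
                    return              # exit cleanly instead of piling up work
                on_chunk(chunk)

        thread = threading.Thread(target=worker)
        thread.start()
        return thread

queue = RequestQueue()
received = []
queue.start_download([1, 2, 3], received.append).join()
print(received)   # [1, 2, 3]
```

The point of the sketch is the cancellation check at the top of `start_download`: without it, a second navigation event would let two downloads compete, which is how the abstract's ANR condition arises.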


2011 ◽  
pp. 877-891
Author(s):  
Katrin Weller ◽  
Isabella Peters ◽  
Wolfgang G. Stock

This chapter discusses folksonomies as a novel way of indexing documents and locating information based on user-generated keywords. Folksonomies are considered from the point of view of knowledge organization and representation in the context of user collaboration within Web 2.0 environments. Folksonomies provide multiple benefits which make them a useful indexing method in various contexts; however, they also have a number of shortcomings that may hamper precise or exhaustive document retrieval. The position maintained is that folksonomies are a valuable addition to the traditional spectrum of knowledge organization methods, since they facilitate user input, stimulate active language use and timeliness, create opportunities for processing large data sets, and allow new ways of social navigation within document collections. Applications of folksonomies as well as recommendations for effective information indexing and retrieval are discussed.
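Indexing with user-generated keywords, as described above, amounts to building an inverted index from tags to documents; a minimal sketch (document names, users, and tags are invented for illustration):

```python
from collections import defaultdict

# Each user tags documents freely; retrieval then looks a tag up in an
# inverted index built from all users' tag assignments.
taggings = [
    ("doc1", "alice", ["web2.0", "tagging"]),
    ("doc2", "bob",   ["tagging", "retrieval"]),
    ("doc1", "carol", ["folksonomy"]),
]

index = defaultdict(set)
for doc, _user, tags in taggings:
    for tag in tags:
        index[tag].add(doc)

print(sorted(index["tagging"]))   # ['doc1', 'doc2']
```

The same structure also exposes the shortcomings the chapter notes: because the vocabulary is uncontrolled, synonyms ("tagging" vs. "folksonomy") end up as separate index keys, hampering exhaustive retrieval.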


Geophysics ◽  
1995 ◽  
Vol 60 (5) ◽  
pp. 1354-1364 ◽  
Author(s):  
Glenn W. Bear ◽  
Haydar J. Al‐Shukri ◽  
Albert J. Rudman

We have developed an improved Levenberg-Marquardt technique to rapidly invert Bouguer gravity data for a 3-D density distribution as a source of the observed field. This technique is designed to replace tedious forward modeling with an automatic solver that determines density models constrained by geologic information supplied by the user. Where such information is not available, objective models are generated. The technique estimates the density distribution within the source volume using a least-squares inverse solution that is obtained iteratively by singular value decomposition using orthogonal decomposition of matrices with sequential Householder transformations. The source volume is subdivided into a series of right rectangular prisms of specified size but of unknown density. This discretization allows the construction of a system of linear equations relating the observed gravity field to the unknown density distribution. Convergence of the solution to the system is tightly controlled by a damping parameter which may be varied at each iteration. The associated algorithm generates statistical measures of solution quality not available with most forward methods. Along with the ability to handle large data sets within reasonable time constraints, the advantages of this approach are: (1) the ease with which pre-existing geological information can be included to constrain the solution, (2) its minimization of subjective user input, (3) the avoidance of difficulties encountered during wavenumber domain transformations, and (4) the objective nature of the solution. Application to a gravity data set from Hamilton County, Indiana, has yielded a geologically reasonable result that agrees with published models derived from interpretation of gravity, magnetic, seismic, and drilling data.
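The central inverse step, a damped least-squares solution of the linear system relating observed gravity to prism densities via singular value decomposition, can be sketched with NumPy (a minimal illustration: the kernel matrix here is random filler rather than a true prism gravity response, and the fixed damping stands in for the paper's iteration-varying damping parameter):

```python
import numpy as np

def damped_svd_solve(G, d, damping):
    """Solve G m ~= d in the least-squares sense, with small singular
    values damped as in a Levenberg-Marquardt style inversion."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    # Damped inverse singular values: s / (s^2 + damping^2)
    filt = s / (s**2 + damping**2)
    return Vt.T @ (filt * (U.T @ d))

# Synthetic check: recover a known density vector from noise-free data.
rng = np.random.default_rng(0)
G = rng.standard_normal((50, 10))    # stand-in for the prism kernel matrix
m_true = rng.standard_normal(10)     # "true" prism densities
d = G @ m_true                       # synthetic gravity observations
m_est = damped_svd_solve(G, d, damping=1e-8)
print(np.allclose(m_est, m_true, atol=1e-6))   # True
```

With noisy data, a larger damping value trades fidelity for stability, which is why the paper varies it at each iteration to control convergence.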


2021 ◽  
Vol 1 (1) ◽  
pp. 29-35
Author(s):  
Ismail Majid

A search system is an important application in any online information medium, but since the advent of search engines such as Google, people have preferred those tools for finding information, because their search methods are of proven reliability. Can we achieve the same? This research shows that by applying the Google Custom Search API, we can build a search system that behaves like Google's own search engine; the test results show that the search results returned are highly relevant and, on average, ranked first. A further advantage of this method is that it includes correction of misspelled queries, refining them into the intended keywords.
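A Google Custom Search API call is an HTTP GET against the `customsearch/v1` endpoint, parameterized by an API key, a search engine ID (`cx`), and the query; a minimal request builder (the key and `cx` values below are placeholders, and the exact response handling is left out):

```python
from urllib.parse import urlencode

CSE_ENDPOINT = "https://www.googleapis.com/customsearch/v1"

def build_search_url(api_key, cx, query, num=10):
    """Build the GET URL for a Google Custom Search API request."""
    params = {"key": api_key, "cx": cx, "q": query, "num": num}
    return CSE_ENDPOINT + "?" + urlencode(params)

url = build_search_url("YOUR_API_KEY", "YOUR_CX_ID", "sistem pencarian")
print(url)
```

Fetching this URL returns JSON whose `items` array carries the ranked results; the spelling correction the abstract mentions corresponds to the suggested corrected query the API returns for misspelled input.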


2020 ◽  
Vol 3 (3) ◽  
pp. 144
Author(s):  
Riro Bregas Trengginaz ◽  
Ade Yusup ◽  
Daniel Sovian Sunyoto ◽  
Muhammad Ruhul Jihad ◽  
Yulianti Yulianti

A train ticket booking application is important because it involves financial transactions; if something goes wrong, the parties involved can suffer losses. The developer of such an application must therefore be able to guarantee that it was built according to its requirements and is of good quality, and that guarantee is obtained by testing. Testing a product checks whether a program runs according to its functions or still contains errors that must be corrected in order to produce a good-quality program. Two popular testing strategies, widely used by testers to investigate whether a program works correctly, are Black Box Testing and White Box Testing. In this study, the product assessed using Black Box Testing is a train ticket booking system with two forms: the login form and the ticket booking form filled in by the administrator. Black Box Testing only checks whether the program behaves according to its required functions, without any knowledge of the program code. Of the several Black Box Testing techniques, the authors use Equivalence Partitioning: a test based on entering data into each form of the system, in which every input field is tested and its inputs are grouped into valid and invalid classes. Incorrect test results can thus be found and corrected immediately. After testing, all requirements were met, giving assurance that the train ticket booking application is free of the detected errors and satisfies the specified requirements.
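Equivalence Partitioning, as used above, tests one representative input per valid or invalid class of each field; a minimal sketch for a login-form validator (the field rules and messages are invented for illustration, not taken from the study's system):

```python
def validate_login(username, password):
    """Toy login-form rules: non-empty username, password of 8+ chars."""
    if not username:
        return "invalid: empty username"
    if len(password) < 8:
        return "invalid: password too short"
    return "valid"

# One representative input per equivalence class of the login form.
partitions = [
    ("admin", "secret123", "valid"),                        # valid class
    ("",      "secret123", "invalid: empty username"),      # invalid username
    ("admin", "short",     "invalid: password too short"),  # invalid password
]

for username, password, expected in partitions:
    assert validate_login(username, password) == expected
print("all partitions passed")
```

The value of the technique is economy: one input per class suffices, because by assumption every member of a class exercises the same branch of the program under test.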


English Today ◽  
2015 ◽  
Vol 32 (2) ◽  
pp. 24-30 ◽  
Author(s):  
Reinhard Heuberger

English lexicography is undergoing a transformation so profound that both dictionary makers and users need new strategies to cope with the challenges of today's technologies and to take full advantage of their potential. Rundell has rightly stated that dictionaries have finally found their ideal platform in the electronic medium (2012: 15), which provides quicker and more sophisticated access to large data collections that are no longer subject to space restrictions. But the innovations go far beyond storage space and ease of access - customization, hybridization and user-input are amongst the most promising trends in electronic lexicography. Customization means that dictionaries can be adaptable, i.e. manually customized by the user, or even adaptive, i.e. automatically adapted to users’ needs on the basis of their behaviour (Granger, 2012: 4). Paquot lists genre, domain as well as L1 as examples of fruitful areas for customization (2012: 185). In the electronic medium, the barriers between different language resources such as dictionaries, encyclopaedias, databases, writing aids and translation tools are disappearing, a development referred to as hybridization (Granger, 2012: 4). And the concept of user-input is exemplified by the well-known platforms Wiktionary and Urban Dictionary, both of which are online reference works based on contributions by users.


2020 ◽  
Vol 20 (7) ◽  
pp. 1941-1953 ◽  
Author(s):  
Frank Techel ◽  
Kurt Winkler ◽  
Matthias Walcher ◽  
Alec van Herwijnen ◽  
Jürg Schweizer

Abstract. Snow instability tests provide valuable information regarding the stability of the snowpack. Test results are key data used to prepare public avalanche forecasts. However, to include them into operational procedures, a quantitative interpretation scheme is needed. Whereas the interpretation of the rutschblock test (RB) is well established, a similar detailed classification for the extended column test (ECT) is lacking. Therefore, we develop a four-class stability interpretation scheme. Exploring a large data set of 1719 ECTs observed at 1226 sites, often performed together with a RB in the same snow pit, and corresponding slope stability information, we revisit the existing stability interpretations and suggest a more detailed classification. In addition, we consider the interpretation of cases when two ECTs were performed in the same snow pit. Our findings confirm previous research, namely that the crack propagation propensity is the most relevant ECT result and that the loading step required to initiate a crack is of secondary importance for stability assessment. The comparison with the RB showed that the ECT classifies slope stability less reliably than the RB. In some situations, performing a second ECT may be helpful when the first test did not indicate rather unstable or stable conditions. Finally, the data clearly show that false-unstable predictions of stability tests outnumber the correct-unstable predictions in an environment where overall unstable locations are rare.


Author(s):  
Mutia Fariha

This is qualitative research with descriptive analysis. It gives an overview of the errors made when solving problems on basic integer operations. The subjects were 30 participants in the 2018 Substantive Technical Training for Madrasah Ibtidaiyah Mathematics Teachers of Aceh Province at the Balai Diklat Keagamaan (BDK) Aceh. Data were collected through tests and analyzed to determine the mistakes participants made in solving the problems, and were validated by source triangulation, comparing test results and interview data for the same subject. The analysis showed that 13 participants (39%) still made mistakes in solving the problems. The type of error made is a procedural error, namely in the sequence of steps for completing an arithmetic operation.
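The procedural error described above, applying the steps of a mixed integer operation in the wrong order, can be illustrated with a single expression (the sample expression is ours, not taken from the test instrument):

```python
# Correct procedure: multiplication is performed before subtraction.
correct = 6 - 2 * 3          # 6 - 6 = 0

# Typical procedural error: working strictly left to right,
# which implicitly groups the subtraction first.
erroneous = (6 - 2) * 3      # 4 * 3 = 12

print(correct, erroneous)    # 0 12
```

Both computations use the same numbers; only the sequence of steps differs, which is exactly the kind of mistake the analysis identified.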


2020 ◽  
Vol 2 (3) ◽  
pp. 137-145
Author(s):  
Syariani Tambunan ◽  
Afkar Afkar ◽  
Nico Syahputra Sebayang

Soybean is an agricultural product with good nutritional value, especially its protein content. This study aims to find superior varieties with wide adaptation to acid soils, Ultisols in particular. The study was conducted in Gulo Village, Darul Hasanah Sub-District, Southeast Aceh Regency, Aceh Province, from May to September 2019. It used a non-factorial randomized block design (RBD) with four variety (V) treatment levels and four replications: V1, the Anjasmoro variety; V2, the Dena variety; V3, the Deja 1 variety; and V4, the Detaptive 1 variety. Analysis of variance showed that plant height at 1, 2, 3, 4, 5, and 6 weeks after planting (WAP) was not significantly affected. However, the highest value at 1 WAP was found in variety V4 (10.40) in the second test, and the lowest was V2 in the first test. The best number of segments and branches was produced by the V3 treatment, which also gave the best results for the total number of pods, number of pods, total empty pods, number of seeds per sample plant, and weight of seeds per sample plant.


2020 ◽  
Vol 3 (1) ◽  
pp. 9
Author(s):  
Herman Herman ◽  
Lukman Syafie ◽  
Tasmil Tasmil ◽  
Muhammad Resha

Plagiarism is the use of data, language, or writing without crediting the original author or source. The place where plagiarism most often occurs is the academic environment, and in the academic world the works most frequently plagiarized are scientific works, such as theses. To minimize plagiarism it is not enough simply to remind students; a system or application is needed that can measure the similarity between student thesis proposals. In computer science, the Rabin-Karp algorithm can be used to measure the similarity of texts. Rabin-Karp is a string-matching algorithm that uses a hash function to compare a search string (m) with substrings of a text (n), and it is a string-search algorithm that works well for large data sizes. The test results show that the value used for the k-gram affects the measured similarity level; in addition, using the value 5 for the k-gram executed faster than the values 4 and 6.
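The comparison such a system performs, hashing the k-grams of two texts with a Rabin-Karp rolling hash and measuring their overlap, can be sketched as follows (a simplified illustration with a Dice-style similarity, not the paper's exact implementation; base and modulus values are arbitrary choices):

```python
def kgram_hashes(text, k, base=256, mod=1_000_003):
    """Set of Rabin-Karp rolling hashes for every k-gram of text."""
    if len(text) < k:
        return set()
    high = pow(base, k - 1, mod)       # weight of the outgoing character
    h = 0
    for ch in text[:k]:                # hash of the first window
        h = (h * base + ord(ch)) % mod
    hashes = {h}
    for i in range(k, len(text)):      # roll the window one character
        h = ((h - ord(text[i - k]) * high) * base + ord(text[i])) % mod
        hashes.add(h)
    return hashes

def similarity(doc_a, doc_b, k=5):
    """Dice coefficient over the two documents' k-gram hash sets."""
    a, b = kgram_hashes(doc_a, k), kgram_hashes(doc_b, k)
    if not a or not b:
        return 0.0
    return 2 * len(a & b) / (len(a) + len(b))

print(similarity("metode rabin karp", "metode rabin karp"))   # 1.0
```

The rolling update makes each new k-gram hash a constant-time operation, which is what lets the approach scale to the large documents the abstract mentions; the choice of k shifts the trade-off between sensitivity and speed observed in the tests.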

