total processing time

2021 ◽  
Vol 18 (4(Suppl.)) ◽  
pp. 1413
Author(s):  
Lee Jia Bin ◽  
Nor Asilah Wati Abdul Hamid ◽  
Zurita Ismail ◽  
Mohamed Faris Laham

RNA Sequencing (RNA-Seq) is the sequencing and analysis of transcriptomes. The main purpose of RNA-Seq analysis is to determine the presence and quantity of RNA in an experimental sample under a specific condition. Raw RNA sequence data is massive; it can reach hundreds of gigabytes (GB). This volume of data makes processing times long, often stretching to several days. A multicore processor can speed up a program by separating tasks and running them concurrently, making it a suitable choice for overcoming this problem. Therefore, this study aims to use an Intel multicore processor to improve RNA-Seq speed and to analyze the performance of RNA-Seq analysis on a multiprocessor. The study processed RNA-Seq only from quality control analysis through sorting the BAM (Binary Alignment/Map) file content. Three different sizes of paired-end RNA data were used for comparison. The final experimental results showed that implementing RNA-Seq on an Intel multicore processor can achieve a higher speedup. The total processing time for the largest raw RNA sequence dataset (66.3 megabytes) decreased from 317.638 seconds to 211.916 seconds, a reduction of about 105 seconds, or nearly 2 minutes. For the smallest dataset, the total processing time decreased from 212.380 seconds to 163.961 seconds, a reduction of about 48 seconds.
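The speedup implied by the timings reported in the abstract can be checked directly; a minimal sketch, using only the numbers quoted above:

```python
# Speedup of the multicore RNA-Seq runs, computed from the
# processing times reported in the abstract.

def speedup(serial_s: float, parallel_s: float) -> float:
    """Classic speedup ratio: serial time divided by parallel time."""
    return serial_s / parallel_s

# Largest dataset (66.3 MB raw reads): 317.638 s -> 211.916 s
largest = speedup(317.638, 211.916)

# Smallest dataset: 212.380 s -> 163.961 s
smallest = speedup(212.380, 163.961)

print(f"largest: {largest:.2f}x, smallest: {smallest:.2f}x")
```

This works out to roughly a 1.50x speedup on the largest dataset and 1.30x on the smallest, consistent with the larger input benefiting more from the additional cores.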


2021 ◽  
pp. 252-257
Author(s):  
П.В. Жиляков ◽  
С.И. Фатеев

This paper describes the multithreaded, parallel organization of the algorithms that make up the software of an underwater robot's technical stereo-vision system. The data obtained by the stereo-vision system are subsequently passed to the control system, to perform robot control operations, and to the operator's monitor screen, for subsequent management decisions. In most cases the algorithms execute sequentially: the total processing time of one frame is the sum of the running times of all algorithms in the software part. Thus, in single-threaded mode even the fastest algorithms must wait their turn to execute, and in cases requiring large computing power the control system and the operator will not receive data quickly enough. To increase the speed of the entire software part, it is natural to run the constituent algorithms in parallel across multiple threads, but this way of organizing the work creates a number of problems that would not arise if the algorithms ran sequentially in a single thread. The article presents ways to solve these problems and compares the running times of the multithreaded and single-threaded implementations of the algorithms.
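The per-frame timing argument above can be sketched in a few lines. The stage names and times here are illustrative assumptions, not measurements from the paper: a single thread pays the sum of all stage times, while an ideal parallel pipeline pays roughly the slowest stage.

```python
# Illustrative per-frame stage times (ms) for a stereo-vision pipeline;
# these values are made up for the sketch, not taken from the paper.
stage_times_ms = [4.0, 12.0, 7.0, 30.0]  # e.g. rectify, match, filter, detect

# Single-threaded: every algorithm waits its turn, so the frame time
# is the sum of all stage times.
sequential_ms = sum(stage_times_ms)

# Ideal multithreaded pipeline: independent stages run concurrently,
# so the frame time is bounded by the slowest stage (ignoring overhead
# and the synchronization problems the paper addresses).
parallel_ms = max(stage_times_ms)

print(sequential_ms, parallel_ms)
```

With these assumed numbers the frame time drops from 53 ms to about 30 ms, which is the motivation for accepting the extra synchronization problems of the multithreaded design.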


2021 ◽  
Vol 9 (2) ◽  
pp. 65-72
Author(s):  
Hendriko Hendriko ◽  
Teddy Pradipta Kajo ◽  
Jajang Jaenudin ◽  
Nur Khamdi ◽  
Tianur Tianur

Indonesia is a country that produces a large variety and quantity of fruit, and one prominent fruit is pineapple. Small and medium-sized industries that process pineapples into derivative products have sprung up in various regions, but most of the processing is still done manually. One stage that still needs improvement is washing and drying. Therefore, this research developed an automatic washing and drying machine. The machine uses a one-tube system, which reduces the transfer step between stages. Testing of the developed machine showed that it can process as much as 6.5 kg of pineapple in one batch. A further test determined the most effective washing and drying durations: 180 seconds for washing and 90 seconds for drying. Besides the cleanliness and dryness of the pineapple, this test also observed damage to the pineapple caused by the washing and drying process. Simulations measuring the total operating time show that one cycle takes 303 seconds, from inserting the pineapple into the tube, through washing and draining, to removing it from the tube. Using the average total processing time and the optimum batch weight, the capacity of the developed machine was calculated to be around 77 kg per hour, or 8,008 kg per month. With this capacity, the machine can be used by SMEs with a large production capacity.
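The hourly capacity figure follows directly from the two measured quantities in the abstract, the batch weight and the cycle time; a quick check:

```python
# Capacity check from the figures reported in the abstract:
# 6.5 kg per batch, 303-second cycle (load, wash, drain, unload).
batch_kg = 6.5
cycle_s = 303

# Batches per hour times kilograms per batch.
kg_per_hour = batch_kg / cycle_s * 3600

print(round(kg_per_hour, 1))
```

This gives roughly 77.2 kg per hour, matching the reported capacity of around 77 kg per hour.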


2021 ◽  
Vol 8 (1) ◽  
pp. 13-17
Author(s):  
Udi Subagyo ◽  
Nain Dhaniarti Raharjo ◽  
R, Achendri M.Kurniawan ◽  
Moch. Khamim ◽  
Boby Asukmajaya Raharjo

Balai RW is a building used by local residents for neighborhood (RT/RW) activities; Balai RW XI was built in 1998 and has served as a venue for residents' activities since then. As of 2020, much of the building is no longer fit for use, particularly the walls, many of which are damp with peeling paint, so repairs in the form of repainting the outer and inner walls are needed. The community service project of painting Balai RW XI, Jl. Silikat, Kel. Purwantoro, Kec. Blimbing, Malang, was carried out as follows: 1. The budget plan for the painting work at Balai RW XI is Rp. 13,955,000 (thirteen million nine hundred fifty-five thousand rupiah), funded by PKM Polinema activities and the RW XI treasury. 2. The painting of Balai RW XI prioritized repairing the inner and outer wall paint. 3. The painting was carried out from 20 July 2020 to 8 August 2020, a total processing time of 3 weeks.


2021 ◽  
pp. 1-7
Author(s):  
Marsali Newman ◽  
Matthew Walsh ◽  
Rosemary Jeffrey ◽  
Richard Hiscock

<b><i>Objective:</i></b> The cell block (CB) is an important adjunct to cytological preparations in diagnostic cytopathology. Optimizing cellular material in the CB is essential to the success of ancillary studies such as immunohistochemistry (IHC) and molecular studies (MS). Our aim was to identify which CB method was most suitable in a variety of specimen types and levels of cellularity. <b><i>Study Design:</i></b> We assessed 4 different CB methods, thrombin clot method (TCM), MD Anderson method (MDAM), gelatin foam method (GFM), and agar method (AM), with descriptive observations and ranking of the methods based on quantity of cells and morphological features. <b><i>Results:</i></b> TCM performed best in ranking for both quantity of cells and morphological features, followed by MDAM, GFM, and AM. Lack of adjuvant in the MDAM resulted in some unique morphological advantages which, however, also resulted in inconsistent performance. In low cellularity cases insufficient cells were frequently identified on slides from MDAM and AM CBs. Technique touch time was similar for all methods, with total processing time being shortest for TCM followed by MDAM, GFM, and AM. <b><i>Conclusions:</i></b> TCM was the most robust CB technique, retaining high scores for ranking of quantity and morphology in a variety of specimen cellularities and specimen types.


Author(s):  
Yosua Halim ◽  
Cecilia Esti Nugraheni

Flow Shop Scheduling (FSS) involves n jobs and m machines with the same processing sequence, where each machine processes exactly one job in a given period. In FSS, while a machine is working on a job, no other machine can work on that same job simultaneously. The solution to this problem is the job sequence with the minimal total processing time. Many algorithms can determine the order in which the jobs are performed; in this paper, the flow shop scheduling problem is solved with the bee colony algorithm, a metaheuristic that performs optimization according to the workings of a bee colony. To measure the performance of this algorithm, we conducted experiments using Taillard's benchmark as problem instances. The experiments, carried out by varying the parameter values, show that the bee population size, the number of iterations, and the bee limit all affect the candidate solutions obtained. The limit is a control parameter governing when a bee abandons a food source and looks for a new one. The larger the number of bees, the more iterations, and the higher the limit used, the better the final total time of the job sequence. The bee colony algorithm reaches Taillard's upper bound in some of the cases with 5 machines and 20 jobs. The more machines and jobs there are to optimize, the worse the total processing time.
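The objective the bee colony searches over is the makespan of a job permutation: each job visits machines 1..m in order, and a job can start on a machine only when that machine is free and the job has finished on the previous machine. A minimal sketch of that evaluation, with made-up processing times rather than a Taillard instance:

```python
# Makespan (total processing time) of one job permutation in a flow shop.
# proc[j][i] = processing time of job j on machine i.

def makespan(perm, proc):
    m = len(proc[0])
    finish = [0.0] * m  # finish[i] = time machine i becomes free
    for j in perm:
        for i in range(m):
            # A job starts on machine i when machine i is free AND the job
            # has finished on machine i-1 (finish[i-1] was just updated
            # for this same job).
            start = max(finish[i], finish[i - 1] if i > 0 else 0.0)
            finish[i] = start + proc[j][i]
    return finish[-1]

proc = [[3, 2], [1, 4], [2, 2]]  # 3 jobs, 2 machines (illustrative times)
print(makespan([0, 1, 2], proc))  # 11.0
print(makespan([1, 0, 2], proc))  # 9.0
```

Even on this tiny instance the permutation matters (11 vs 9 time units), which is exactly the search space the bee colony explores while the limit parameter decides when a stagnant candidate sequence is abandoned.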


2020 ◽  
Vol 16 ◽  
pp. 309-316
Author(s):  
Mateusz Mikuła ◽  
Mariusz Dzieńkowski

The aim of the study was to compare the performance of two data-exchange styles commonly used in web applications: REST and GraphQL. For the purposes of the study, two test applications with the same functionality were developed, one using REST and the other GraphQL. They were used in performance tests conducted with the JMeter tool, during which the total processing time of requests and the volume of data downloaded and sent were measured. The experiment tested the basic operations found in most network services: displaying, adding, updating, and deleting data. Most attention was devoted to the display operation, for which load tests were performed. Based on the studies performed and the results obtained, no performance differences were found between the REST API and GraphQL applications for adding, editing, and deleting data. For the display operation under heavy load and when downloading small portions of data, the GraphQL service performed better; when downloading large portions of data, the REST-based service exhibited higher performance.


Foods ◽  
2020 ◽  
Vol 9 (10) ◽  
pp. 1342
Author(s):  
Aswathi Soni ◽  
Jeremy Smith ◽  
Richard Archer ◽  
Amanda Gardner ◽  
Kris Tong ◽  
...  

In this study, novel spore pouches were developed using mashed potato as a food model, inoculated with either Geobacillus stearothermophilus or Clostridium sporogenes spores. These spore pouches were used to evaluate the sterilization efficiency of coaxially induced microwave pasteurization and sterilization (CiMPAS) as a case study. CiMPAS technology combines microwave energy (915 MHz) with hot-water immersion to sterilize food in polymeric packages. The spore pouches were placed at predetermined locations, especially cold spots, in each food tray before processing under two regimes (R-121 and R-65), at 121 °C and 65 °C and 12 and 22 kW, respectively, followed by recovery and enumeration of the surviving spores. To identify the cold spots, i.e., the inoculation locations, mashed potato was spiked with Maillard precursors and processed through CiMPAS, followed by measurement of lightness values (L* values). Inactivation of 1–2 log CFU/g and >6 log CFU/g for G. stearothermophilus and C. sporogenes spores, respectively, was obtained at the cold spots using R-121, which had a total processing time of 64.2 min. Using R-65 (total processing time of 68.3 min), inactivation at the cold spots was <1 log CFU/g for G. stearothermophilus and 2–3 log CFU/g for C. sporogenes, while inactivation of 1–3 log CFU/g of C. sporogenes spores was obtained at the sides of the tray. The results were reproducible across three processing replicates for each regime, and inactivation at the specific locations was clearly distinguishable. The study indicates a strong potential for using spore pouches as a tool in validation studies of microwave-induced sterilization.


2020 ◽  
Vol 22 (4) ◽  
pp. 754-774 ◽  
Author(s):  
Itai Gurvich ◽  
Kevin J. O’Leary ◽  
Lu Wang ◽  
Jan A. Van Mieghem

Problem definition: Collaboration is important in services but may lead to interruptions. Professionals exercise discretion on when to preempt individual tasks to switch to collaborative tasks. Academic/practical relevance: Discretionary task switching can introduce changeover times when resuming the preempted task and, thus, can increase total processing time. Methodology: We analyze and quantify how collaboration, through interruptions and discretionary changeovers, affects total processing time. We introduce an episodal workflow model that captures the interruption and discretionary changeover dynamics—each switch and the episode of work it preempts—present in settings in which collaboration and multitasking are paramount. A simulation study provides evidence that changeover times are properly identified and estimated without bias. We then deploy the model in a field study of hospital medicine physicians: “hospitalists.” The hospitalist workflow includes visiting patients, consulting with other caregivers to guide patient diagnosis and treatment, and documenting in the patient’s medical chart. The empirical analysis uses a data set assembled from direct observation of hospitalist activity and pager-log data. Results: We estimate that a hospitalist incurs a total changeover time during documentation of five minutes per patient per day. Managerial implications: This estimate represents a significant 20% of the total processing time per patient: caring for 14 patients per day, our model estimates that a hospitalist spends more than one hour each day on changeovers. This provides evidence that task switching can causally lead to longer documentation time.
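The headline numbers in the abstract are consistent with each other; a back-of-the-envelope check using only the quantities quoted above:

```python
# Arithmetic check of the hospitalist estimates in the abstract:
# 5 minutes of changeover per patient per day, 14 patients per day,
# with changeover reported as 20% of total processing time per patient.

changeover_per_patient_min = 5
patients_per_day = 14

# Daily changeover burden: 5 * 14 = 70 minutes, i.e. "more than one hour".
total_min = changeover_per_patient_min * patients_per_day

# If 5 minutes is 20% of processing time, the implied total processing
# time per patient per day is 25 minutes.
implied_processing_min = changeover_per_patient_min / 0.20

print(total_min, implied_processing_min)
```

The 70-minute daily total matches the abstract's "more than one hour each day on changeovers," and the implied 25 minutes per patient is simply what the 20% figure entails, not an additional number reported by the paper.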

