Part Separation Methods for Assembly Based Design in Additive Manufacturing

Author(s):  
Yosep Oh ◽  
Sara Behdad ◽  
Chi Zhou

The goal of this study is to develop a heuristic part separation algorithm for assembly-based design in Additive Manufacturing (AM). The objective is to minimize the total processing time including both buildup time and assembly time. In the proposed algorithm, the part separation is recursively conducted until the number of assemblies reaches a threshold value. The proposed method helps designers determine the proper number of assemblies and their buildup orientations. A numerical example is provided to illustrate the application of the algorithm.
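The recursive loop described above can be sketched as a toy illustration under assumed cost models (build time proportional to the tallest sub-part, since sub-parts print in parallel, plus a fixed assembly cost per joint). The function names, the halving split rule, and all numeric constants are hypothetical assumptions for illustration, not the paper's algorithm.

```python
# Toy sketch of recursive part separation for assembly-based AM design.
# Cost model (assumed): buildup time = tallest sub-part * build_rate
# (parallel builds), assembly time = fixed cost per joint.

def total_time(parts, build_rate=2.0, joint_cost=5.0):
    """Total processing time = slowest buildup + assembly of all joints."""
    buildup = max(parts) * build_rate          # parallel builds: tallest part rules
    assembly = (len(parts) - 1) * joint_cost   # one joint per extra sub-part
    return buildup + assembly

def separate(parts, max_assemblies):
    """Recursively halve the tallest part while total time improves,
    stopping once the assembly count reaches the threshold."""
    while len(parts) < max_assemblies:
        tallest = max(parts)
        candidate = sorted(parts)
        candidate.remove(tallest)
        candidate += [tallest / 2, tallest / 2]
        if total_time(candidate) >= total_time(parts):
            break                              # separation no longer pays off
        parts = candidate
    return parts

parts = separate([40.0], max_assemblies=4)     # one 40-unit-tall part
```

Under these assumed constants, a single split wins (parallel buildup halves) but a second split does not (added assembly time outweighs the saving), so the recursion stops early.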

2021 ◽  
pp. 1-7
Author(s):  
Marsali Newman ◽  
Matthew Walsh ◽  
Rosemary Jeffrey ◽  
Richard Hiscock

<b><i>Objective:</i></b> The cell block (CB) is an important adjunct to cytological preparations in diagnostic cytopathology. Optimizing cellular material in the CB is essential to the success of ancillary studies such as immunohistochemistry (IHC) and molecular studies (MS). Our aim was to identify which CB method was most suitable across a variety of specimen types and levels of cellularity. <b><i>Study Design:</i></b> We assessed 4 different CB methods, the thrombin clot method (TCM), MD Anderson method (MDAM), gelatin foam method (GFM), and agar method (AM), with descriptive observations and ranking of the methods based on quantity of cells and morphological features. <b><i>Results:</i></b> TCM performed best in ranking for both quantity of cells and morphological features, followed by MDAM, GFM, and AM. The lack of adjuvant in the MDAM conferred some unique morphological advantages but also resulted in inconsistent performance. In low-cellularity cases, insufficient cells were frequently identified on slides from MDAM and AM CBs. Technique touch time was similar for all methods, with total processing time being shortest for TCM, followed by MDAM, GFM, and AM. <b><i>Conclusions:</i></b> TCM was the most robust CB technique, retaining high scores for ranking of quantity and morphology across a variety of specimen cellularities and specimen types.


2018 ◽  
Vol 35 (8) ◽  
pp. 1508-1518
Author(s):  
Rosembergue Pereira Souza ◽  
Luiz Fernando Rust da Costa Carmo ◽  
Luci Pirmez

<b><i>Purpose:</i></b> The purpose of this paper is to present a procedure for finding unusual patterns in accredited tests using a rapid processing method for analyzing video records. The procedure uses the temporal differencing technique for object tracking and considers only frames not identified as statistically redundant. <b><i>Design/methodology/approach:</i></b> An accreditation organization is responsible for accrediting facilities to undertake testing and calibration activities. Periodically, such organizations evaluate accredited testing facilities. These evaluations could use video records and photographs of the tests performed by the facility to judge their conformity to technical requirements. To validate the proposed procedure, a real-world data set with video records from accredited testing facilities in the field of vehicle safety in Brazil was used. The processing time of the proposed procedure was compared with the time needed to process the video records in a traditional fashion. <b><i>Findings:</i></b> With an appropriate threshold value, the proposed procedure could successfully identify video records of fraudulent services. Processing time was faster than when a traditional method was employed. <b><i>Originality/value:</i></b> Manually evaluating video records is time consuming and tedious. This paper proposes a procedure to rapidly find unusual patterns in videos of accredited tests with a minimum of manual effort.
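The frame-filtering idea, keeping only frames that differ significantly from the last kept frame and skipping statistically redundant ones, can be sketched as below. The threshold value, the mean-absolute-difference statistic, and the function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def significant_frames(frames, threshold=10.0):
    """Temporal differencing: keep only frames whose mean absolute
    difference from the previously kept frame exceeds the threshold;
    intermediate redundant frames are skipped."""
    kept = [0]                                  # always keep the first frame
    for i in range(1, len(frames)):
        diff = np.abs(frames[i].astype(float) - frames[kept[-1]].astype(float))
        if diff.mean() > threshold:
            kept.append(i)
    return kept

# Synthetic example: two identical dark frames, then two identical bright ones.
frames = [np.zeros((4, 4)), np.zeros((4, 4)),
          np.full((4, 4), 255.0), np.full((4, 4), 255.0)]
kept = significant_frames(frames)               # only indices 0 and 2 survive
```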


2014 ◽  
Vol 2014 ◽  
pp. 1-6 ◽  
Author(s):  
Taibo Luo ◽  
Yinfeng Xu

This paper investigates semi-online scheduling problems on two parallel machines under a grade of service (GoS) provision, with the objective of minimizing the makespan. We consider three semi-online versions, in which the total processing time of the jobs with the higher GoS level, the total processing time of the jobs with the lower GoS level, or both are known in advance. For the three versions, respectively, we develop algorithms with competitive ratios of 3/2, 20/13, and 4/3, which are shown to be optimal.
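The GoS machine model behind these results can be illustrated with a naive greedy scheduler (not the paper's optimal semi-online algorithms): in the standard GoS setting, higher-grade jobs must run on machine 1, while lower-grade jobs may run on either machine. The job encoding and greedy rule here are illustrative assumptions.

```python
def schedule(jobs):
    """jobs: list of (processing_time, gos_level).  Level-1 (higher GoS)
    jobs must run on machine 1; level-2 jobs may run on either machine
    and are placed greedily on the currently less-loaded machine.
    Returns the makespan (load of the busier machine)."""
    load = [0.0, 0.0]
    for p, gos in jobs:
        if gos == 1:
            load[0] += p                 # GoS constraint: machine 1 only
        else:
            m = 0 if load[0] <= load[1] else 1
            load[m] += p
    return max(load)

makespan = schedule([(3, 1), (2, 2), (2, 2), (1, 2)])
```

The semi-online algorithms in the paper improve on this greedy rule by exploiting the known total processing time per GoS level to reserve capacity on machine 1.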


1986 ◽  
Vol 49 (8) ◽  
pp. 639-642 ◽  
Author(s):  
JOSEPH C. CORDRAY ◽  
DALE L. HUFFMAN ◽  
WILLIAM R. JONES

A 2 × 2 factorial design was used to study the effect of tenderization and liquid smoke on sensory and physical attributes of a fully cooked restructured pork item. The lean and fat mass was removed intact within 30 min postmortem from sow carcasses and assigned to a tenderized or non-tenderized treatment, with and without liquid smoke. The four treatment groups were: non-tenderized, no liquid smoke (NTNS); non-tenderized with liquid smoke (NTS); tenderized, no liquid smoke (TNS); and tenderized with liquid smoke (TS). Mechanical tenderization was accomplished 1 h postmortem, and the two original portions were subdivided for a 1% acid-neutralized liquid smoke treatment. Total processing time from exsanguination to a fully cooked product was 8 h. There were no differences (P&gt;0.05) among any of the treatments for cohesiveness, juiciness, flavor or connective tissue scores, or cooking loss. The TNS treatment had higher (P&lt;0.06) tension values, as determined by Instron measurements, than the NTNS treatment. There were initially no practical differences between TBA values for fresh-frozen and cooked-frozen restructured pork. However, after 30 d of storage (−23°C), the cooked-frozen product had significantly higher TBA values than the fresh-frozen product.


2017 ◽  
Vol 8 (2) ◽  
pp. 530-533 ◽  
Author(s):  
B. Sams ◽  
C. Litchfield ◽  
L. Sanchez ◽  
N. Dokoozlian

Yield mapping techniques have only recently been adopted by the Californian wine grape industry, and their adoption has necessitated new processing methods for large vineyards. Mapping large blocks harvested with multiple machines is a recent development and requires that the machines' yield monitors be calibrated and corrected to the same scale. Here we discuss two methods for processing yield maps at the commercial level. Method 1 depends on many calibrations against the weight of fruit delivered to a winery. Method 2 normalizes raw files automatically and can reduce total processing time by up to 90%.
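One simple way to put raw files from several harvesters on a common relative scale without per-load scale weights, in the spirit of the automatic normalization of Method 2, is to divide each machine's readings by that machine's own mean. This is an illustrative sketch under that assumption, not the authors' procedure.

```python
import numpy as np

def normalize(readings_by_machine):
    """Scale each harvester's raw yield readings by that machine's mean,
    so all machines report on a common relative scale (mean = 1.0) and
    can be merged into one block-level yield map."""
    return {m: np.asarray(r, dtype=float) / np.mean(r)
            for m, r in readings_by_machine.items()}

# Two machines with different sensor scales covering the same block.
scaled = normalize({'harvester_A': [2.0, 4.0], 'harvester_B': [10.0, 30.0]})
```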


2018 ◽  
Vol 7 (3.33) ◽  
pp. 1
Author(s):  
Ho Chul Kang

In this paper, we propose an automatic method for segmenting the right ventricle from computed tomography angiography (CTA) using the Chan-Vese model and split-plane detection. First, we remove noise in the images by applying an anisotropic diffusion filter and extract the whole heart using Otsu thresholding. Second, the volume of interest (VOI) is detected by the Chan-Vese model and morphological operations. Third, we divide the heart into left and right regions using power watershed. Finally, we detect the split plane that divides the right heart into the right ventricle and right atrium. We tested our method on ten CT images, each obtained from a different patient. To evaluate the computational performance of the proposed method, we measured the total processing time. The average total processing time, from the first step to the third, was 13.92±1.28 s. We expect our method to be used by cardiologists in cardiac diagnosis.
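The Otsu-thresholding step used to extract the whole heart can be written from scratch in a few lines; the following is a generic textbook implementation of Otsu's method (maximizing between-class variance over an 8-bit histogram), not the authors' code.

```python
import numpy as np

def otsu_threshold(image):
    """Return the gray level that maximizes the between-class variance,
    separating foreground (e.g. heart) from background."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)     # sum of all gray levels
    best_t, best_var = 0, 0.0
    w0 = sum0 = 0.0
    for t in range(256):
        w0 += hist[t]                          # background pixel count
        if w0 == 0:
            continue
        w1 = total - w0                        # foreground pixel count
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0                        # background mean
        mu1 = (sum_all - sum0) / w1            # foreground mean
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal synthetic image: dark background (50) and bright tissue (200).
img = np.array([50] * 100 + [200] * 100)
t = otsu_threshold(img)
```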


Author(s):  
Hong Shen ◽  
Yutao Zheng ◽  
Han Wang ◽  
Zhenqiang Yao

The inverse problem in laser forming involves heating-position planning and the determination of heating parameters. In this study, the heating positions are optimized in laser forming of singly curved shapes based on processing efficiency. The algorithm uses a probability function to initialize the heating positions, which are taken to be the bending points. The optimization minimizes the total processing time by adjusting the heating positions while respecting boundary conditions on the offset distances, the minimum bending angle, and the minimum distance between two adjacent heating positions. The optimized results are compared with those obtained by the distance-based model as well as with experimental data.
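Two of the stated boundary conditions, the minimum distance between adjacent heating positions and the minimum bending angle, can be illustrated with a small constraint-enforcement sketch. The merge-and-filter strategy and all names here are hypothetical, intended only to show the kind of feasibility check such an optimization must repeat, not the authors' algorithm.

```python
def enforce_constraints(positions, angles, min_gap, min_angle):
    """Merge heating positions closer than min_gap (summing their bend
    angles into the earlier position), then drop positions whose total
    angle falls below min_angle.  positions/angles are paired lists."""
    merged = []
    for pos, ang in sorted(zip(positions, angles)):
        if merged and pos - merged[-1][0] < min_gap:
            # Too close to the previous heating line: fold angles together.
            merged[-1] = (merged[-1][0], merged[-1][1] + ang)
        else:
            merged.append((pos, ang))
    return [(p, a) for p, a in merged if a >= min_angle]

# Positions 0 and 1 violate the 2 mm gap and are merged; position 5's
# 1-degree bend is below the 2-degree minimum and is dropped.
feasible = enforce_constraints([0, 1, 5], [2, 2, 1], min_gap=2, min_angle=2)
```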


2021 ◽  
pp. 252-257
Author(s):  
П.В. Жиляков ◽  
С.И. Фатеев

This paper describes the multithreaded, parallel organization of the algorithms that make up the software of an underwater robot's technical stereo-vision system. The data obtained by the stereo-vision system is passed to the control system, which performs robot control operations, and to the operator's monitor screen for subsequent management decisions. In most cases the algorithms are executed sequentially, so the total processing time for one frame is the sum of the running times of all algorithms in the software. Thus, in single-threaded mode even the fastest algorithms must wait their turn to execute, and in computationally demanding cases the control system and the operator will not receive data quickly enough. To increase the speed of the whole software stack, the constituent algorithms therefore need to run in parallel, in multiple threads; however, this organization introduces a number of problems that do not arise in a sequential, single-threaded design. The article presents ways of solving these problems and compares the running times of the multithreaded and single-threaded implementations of the algorithms.
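The move from sequential to multithreaded per-frame execution can be sketched as below: independent algorithms run concurrently on the same frame instead of queuing behind one another. The algorithm names are hypothetical stand-ins for the stereo-vision stages, and this sketch ignores the synchronization problems the article addresses.

```python
import concurrent.futures

# Hypothetical per-frame algorithms standing in for the stereo-vision stages.
def detect_edges(frame):
    return ('edges', frame)

def match_stereo(frame):
    return ('disparity', frame)

def estimate_distance(frame):
    return ('distance', frame)

ALGORITHMS = [detect_edges, match_stereo, estimate_distance]

def process_frame_parallel(frame):
    """Run the independent algorithms on one frame concurrently, so fast
    algorithms no longer wait behind slow ones as in single-threaded mode;
    results are collected in submission order."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(alg, frame) for alg in ALGORITHMS]
        return [f.result() for f in futures]

results = process_frame_parallel(7)   # frame id 7 as a stand-in payload
```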

