Thermomechanical Characterisation of Copper Diamond and Benchmarking with the MultiMat Experiment

2021 ◽  
Vol 2021 ◽  
pp. 1-18
Author(s):  
Marcus Portelli ◽  
Michele Pasquali ◽  
Federico Carra ◽  
Alessandro Bertarelli ◽  
Pierluigi Mollicone ◽  
...  

The High-Luminosity Large Hadron Collider (HL-LHC) upgrade at CERN will increase the energy stored in the circulating particle beams, making it necessary to assess the thermomechanical performance of currently used and newly developed materials for beam-intercepting devices such as collimators and absorbers. This study describes the thermomechanical characterisation of a novel copper-diamond grade selected for use in tertiary collimators of the HL-LHC. The data obtained are used to build an elastoplastic material model, which is implemented in numerical simulations that benchmark experimental data from the recently completed MultiMat experiment at CERN’s HiRadMat facility, where various materials shaped as slender rods were tested under particle beam impact. The analyses focus on the dynamic longitudinal and flexural response of the material; the results show that the material model replicates the material behaviour to a satisfactory level in both the thermal and structural domains, accurately matching experimental measurements in terms of temperature, frequency content, and amplitude.
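For illustration, the following is a minimal sketch of the kind of bilinear elastoplastic constitutive update (elastic predictor, plastic corrector with linear isotropic hardening) that such a material model typically builds on. All material constants are placeholders, not the measured copper-diamond values from the study.

```python
import numpy as np

# Sketch only: 1D return-mapping stress update for a bilinear
# elastoplastic model with linear isotropic hardening. The constants
# below are illustrative placeholders, not values from the paper.
E = 200e9         # Young's modulus [Pa] (placeholder)
H = 5e9           # isotropic hardening modulus [Pa] (placeholder)
sigma_y0 = 300e6  # initial yield stress [Pa] (placeholder)

def stress_update(eps, eps_p, alpha):
    """Map total strain `eps` to stress, given plastic strain `eps_p`
    and accumulated plastic strain `alpha`. Returns updated state."""
    sigma_trial = E * (eps - eps_p)                 # elastic predictor
    f = abs(sigma_trial) - (sigma_y0 + H * alpha)   # yield function
    if f <= 0.0:
        return sigma_trial, eps_p, alpha            # purely elastic step
    dgamma = f / (E + H)                            # plastic corrector
    sign = np.sign(sigma_trial)
    return (sigma_trial - E * dgamma * sign,
            eps_p + dgamma * sign,
            alpha + dgamma)

# Drive the model through a monotonic strain ramp past yield.
eps_p, alpha = 0.0, 0.0
for eps in np.linspace(0.0, 0.005, 50):
    sigma, eps_p, alpha = stress_update(eps, eps_p, alpha)
print(f"stress at 0.5% strain: {sigma / 1e6:.1f} MPa")
```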


2018 ◽  
Vol 53 (5) ◽  
pp. 302-312 ◽  
Author(s):  
Gioacchino Alotta ◽  
Olga Barrera ◽  
Elise C Pegg

Wear debris from ultra-high-molecular-weight polyethylene components used in joint replacement prostheses can cause significant clinical complications, and it is essential to predict implant wear accurately in vitro to prevent unsafe implant designs from continuing to clinical trials. The established method of predicting wear is simulator testing, but the significant equipment costs, experimental time, and limited equipment availability can be prohibitive. It is possible to predict implant wear using finite element methods, though those reported in the literature simplify the material behaviour of polyethylene, typically using linear or elastoplastic material models. Such models cannot represent the creep or viscoelastic behaviour of the material and may introduce significant error; however, the magnitude of this error and the importance of this simplification have never been determined. This study compares the wear volume predicted by a standard elastoplastic model with that predicted by a fractional viscoelastic material model, both fitted to experimental data. Standard tensile tests in accordance with ISO 527-3 and tensile creep recovery tests were performed to characterise experimentally (a) the elastoplastic parameters and (b) the creep and relaxation behaviour of the ultra-high-molecular-weight polyethylene. A digital image correlation technique was used to measure the strain field. The wear predicted with the two material models was compared for a finite element model of a mobile-bearing unicompartmental knee replacement, with wear predictions made using Archard’s law. The fractional viscoelastic material model predicted almost ten times as much wear as the elastoplastic representation. This work quantifies, for the first time, the error introduced by using a simplified material model in polyethylene wear predictions, and shows the importance of representing the viscoelastic behaviour of polyethylene in wear prediction.
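As an illustration of the wear-prediction step, the sketch below applies Archard’s law (wear depth h = k·p·s) to placeholder nodal contact pressures and sliding distances of the kind a finite element contact solution would supply. The wear factor, node count, and cycle count are assumptions, not values from the paper.

```python
import numpy as np

# Minimal sketch of an Archard-law wear estimate as used in FE wear
# prediction: wear depth per node = k * p * s, accumulated over gait
# cycles. Pressures and sliding distances are random placeholders
# standing in for finite element contact output.
k_wear = 1.0e-9       # wear factor [mm^2/N] (placeholder)
n_cycles = 5_000_000  # simulated gait cycles (placeholder)

rng = np.random.default_rng(0)
pressure = rng.uniform(1.0, 10.0, size=200)  # nodal contact pressure [MPa]
sliding = rng.uniform(0.5, 5.0, size=200)    # sliding distance per cycle [mm]
node_area = 0.25                             # area per contact node [mm^2]

# Archard: depth h = k * p * s; wear volume = sum(h * area) over nodes.
depth_per_cycle = k_wear * pressure * sliding                 # [mm]
wear_volume = n_cycles * np.sum(depth_per_cycle * node_area)  # [mm^3]
print(f"predicted wear volume: {wear_volume:.2f} mm^3")
```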



2018 ◽  
Vol 170 ◽  
pp. 01001
Author(s):  
Ignacio Asensi Tortajada

A series of upgrades towards a High-Luminosity LHC (HL-LHC), delivering five times the nominal instantaneous luminosity of the Large Hadron Collider (LHC), is envisaged. The ATLAS Phase II upgrade, in 2024, will accommodate the upgrade of the detector and data acquisition system for the HL-LHC. The Tile Calorimeter (TileCal) will undergo a major replacement of its on- and off-detector electronics. In the new architecture, all signals will be digitized and transferred directly to the off-detector electronics, where they will be reconstructed, stored, and sent to the first level of trigger at a rate of 40 MHz. This will provide better precision for the calorimeter signals used by the trigger system and will allow the development of more complex trigger algorithms. The changes to the electronics will also improve the reliability and redundancy of the system. Three different front-end options are presently being investigated for the upgrade, two of them based on ASICs, and a final solution will be chosen after the extensive laboratory and test beam studies that are in progress. A hybrid demonstrator module is being developed that uses the new electronics while remaining compatible with the current system. The status of these developments is presented, including results from several tests with particle beams.



Author(s):  
S. A. Antipov ◽  
N. Biancacci ◽  
J. Komppula ◽  
E. Métral ◽  
B. Salvant ◽  
...  


2017 ◽  
Author(s):  
G. Apollinari ◽  
I. Béjar Alonso ◽  
O. Brüning ◽  
P. Fessia ◽  
M. Lamont ◽  
...  


2021 ◽  
Vol 5 (1) ◽  
Author(s):  
Georges Aad ◽  
Anne-Sophie Berthold ◽  
Thomas Calvet ◽  
Nemer Chiedde ◽  
Etienne Marie Fortin ◽  
...  

The ATLAS experiment at the Large Hadron Collider (LHC) is operated at CERN and measures proton–proton collisions at multi-TeV energies with a repetition frequency of 40 MHz. Within the Phase-II upgrade of the LHC, the readout electronics of the liquid-argon (LAr) calorimeters of ATLAS are being prepared for high-luminosity operation, expecting a pileup of up to 200 simultaneous proton–proton interactions. Moreover, the calorimeter signals of up to 25 subsequent collisions overlap, which increases the difficulty of energy reconstruction by the calorimeter detector. Real-time processing of digitized pulses sampled at 40 MHz is performed using field-programmable gate arrays (FPGAs). To cope with the signal pileup, new machine learning approaches are explored: convolutional and recurrent neural networks outperform the optimal signal filter currently used, both in assignment of the reconstructed energy to the correct proton bunch crossing and in energy resolution. The improvements concern in particular energies derived from overlapping pulses. Since the implementation of the neural networks targets an FPGA, the number of parameters and the mathematical operations need to be well controlled. The trained neural network structures are converted into FPGA firmware using automated implementations in hardware description language and high-level synthesis tools. Very good agreement between the neural network implementations in FPGA and software-based calculations is observed. The prototype implementations on an Intel Stratix-10 FPGA reach maximum operation frequencies of 344–640 MHz. Applying time-division multiplexing allows the processing of 390–576 calorimeter channels by one FPGA for the most resource-efficient networks. Moreover, the latency achieved is about 200 ns. These performance parameters show that a neural-network-based energy reconstruction can be considered for the processing of the ATLAS LAr calorimeter signals during the high-luminosity phase of the LHC.
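For illustration, a small 1D convolutional network of the general kind described, mapping a window of 40 MHz ADC samples to a per-bunch-crossing energy, might look as follows in PyTorch. The layer sizes, window length, and parameter budget are placeholder assumptions, not the published ATLAS network.

```python
import torch
import torch.nn as nn

# Illustrative sketch, not the ATLAS firmware model: a compact 1D CNN
# mapping a sliding window of digitized pulse samples to an energy
# estimate. Layer sizes are placeholders chosen to keep the parameter
# count small, mirroring the FPGA resource constraint in the abstract.
class PulseEnergyCNN(nn.Module):
    def __init__(self, window=24):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 4, kernel_size=5),  # local pulse-shape features
            nn.ReLU(),
            nn.Conv1d(4, 4, kernel_size=3),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(4 * (window - 6), 1),  # energy estimate
        )

    def forward(self, x):  # x: (batch, 1, window) ADC samples
        return self.net(x)

model = PulseEnergyCNN()
samples = torch.randn(8, 1, 24)  # dummy digitized pulses
energy = model(samples)          # (8, 1) reconstructed energies
print(sum(p.numel() for p in model.parameters()), "parameters")
```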



Author(s):  
Soner Camuz ◽  
Samuel Lorin ◽  
Kristina Wärmefjord ◽  
Rikard Söderberg

Current methodologies for variation simulation of compliant sheet metal assemblies and parts are simplified by assuming linear relationships. From observed physical experiments, it is evident that plastic strains are a source of error not captured by conventional variation simulation methods. This paper presents an adaptation of the method of influence coefficients (MIC) for variation simulation towards an elastoplastic material model with isotropic hardening. The results are presented in two case studies on a benchmark problem: a two-dimensional (2D) quarter-symmetric plate with a centered hole, subjected to both uniaxial and biaxial displacement. The adaptation shows a substantial reduction in central processing unit (CPU) time with limited effect on the accuracy of the results compared to direct Monte Carlo simulations.
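A conceptual sketch of the MIC idea follows: a sensitivity matrix computed once up front turns each Monte Carlo sample into a matrix-vector product rather than a full finite element solve. The matrix and deviation statistics below are random placeholders, and the paper’s elastoplastic adaptation itself is not reproduced here.

```python
import numpy as np

# Conceptual sketch of the method of influence coefficients (MIC):
# one up-front FE computation yields a sensitivity matrix S mapping
# part deviations to assembly spring-back deviations, after which
# Monte Carlo variation simulation reduces to matrix multiplication.
# S is random here, standing in for the FE-derived matrix; the
# elastoplastic adaptation would correct S where plastic strains
# invalidate the linear assumption.
rng = np.random.default_rng(1)
n_points, n_samples = 50, 100_000

S = rng.normal(size=(n_points, n_points)) * 0.1  # placeholder sensitivities
part_dev = rng.normal(scale=0.05,
                      size=(n_points, n_samples))  # input variation [mm]

assembly_dev = S @ part_dev        # all MC samples in one matrix product
sigma = assembly_dev.std(axis=1)   # per-point variation of the assembly
print(f"max predicted std: {sigma.max():.3f} mm")
```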


