XENONnT is a dark matter direct detection experiment exploiting the dual-phase xenon time projection chamber technique. Located at the INFN Laboratori Nazionali del Gran Sasso, and with 5.9 t of instrumented liquid xenon, it is the upgraded version of XENON1T, up to now the world-leading experiment in the search for WIMPs. In this work we predict the experimental background and project the WIMP sensitivity of XENONnT. The first part of this thesis summarizes the radioassay campaign carried out to select materials with low traces of radioactive contaminants for the detector construction, in order to achieve the low event rate regime required for WIMP searches. In the context of this effort, the GeMSE (Germanium Material and meteorite Screening Experiment) detector has been utilized. This gamma spectrometer features an ultra-low background rate of (164 ± 2) counts/day in the 100–2700 keV region, which makes it well suited for the XENONnT material selection campaign. The full characterization of this background is described, as well as potential ways to reduce it further. In addition, the improved remote operation and the simulations used to evaluate the efficiency of the measured samples are reported. The Monte Carlo simulation pipeline is described in the second part of this thesis. A detector model based on the Geant4 toolkit has been developed to evaluate the XENONnT performance, along with additional packages that convert the simulated energy deposits into data-like signals, based on detector design and early operation inputs. Lastly, this framework is employed together with the radiopurity information from the previous section to estimate the XENONnT background rate and projected sensitivity in the search for WIMPs. The expected average differential background rate in the energy regions of interest, (1, 13) keV for electronic recoils and (4, 50) keV for nuclear recoils, amounts to (12.3 ± 0.6) (keV t y)⁻¹ and (2.2 ± 0.5) × 10⁻³ (keV t y)⁻¹, respectively, in a 4 t fiducial mass.
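As a quick illustration of what these rates imply, the sketch below folds the quoted average differential rates with the ROI widths and the 4 t fiducial mass to get expected event counts. The one-year livetime is a hypothetical choice made here for illustration, and treating the rate as flat across each ROI is our simplification, not the thesis's.

```python
# Illustrative arithmetic only: fold the average differential background
# rates quoted in the abstract with the ROI widths, fiducial mass, and a
# hypothetical one-year livetime to get expected event counts. The flat
# spectrum across each ROI is an assumption made here for simplicity.

er_rate = 12.3        # electronic recoils, (keV t y)^-1, from the abstract
nr_rate = 2.2e-3      # nuclear recoils, (keV t y)^-1, from the abstract
er_roi = (1.0, 13.0)  # keV
nr_roi = (4.0, 50.0)  # keV

fiducial_mass = 4.0   # t, from the abstract
livetime = 1.0        # y, hypothetical exposure chosen for illustration

def expected_counts(rate, roi, mass, time):
    """Expected events = rate x ROI width x mass x livetime."""
    return rate * (roi[1] - roi[0]) * mass * time

print(f"ER: {expected_counts(er_rate, er_roi, fiducial_mass, livetime):.0f} events")
print(f"NR: {expected_counts(nr_rate, nr_roi, fiducial_mass, livetime):.2f} events")
# -> ER: 590 events, NR: 0.40 events for this hypothetical exposure
```

The contrast between the two numbers reflects the strength of the dual-phase TPC technique: electronic-recoil backgrounds are far more abundant, but they can be discriminated from the nuclear-recoil signature expected of WIMPs.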
Exploiting an advanced computing platform consisting of several clusters distributed across the second-largest country in the world is challenging. Each cluster may run a different operating system, use a different generation of CPU, GPU, or network fabric, or be managed by a different team of system administrators. Presenting a unified software environment can tremendously facilitate the task of supporting researchers, but is challenging to implement. This is nevertheless what Compute Canada set out to do in 2016, in the midst of deploying a new generation of large clusters. We had to find software solutions to the challenges involved in achieving this goal. Distribution, portability, and performance were three important technical criteria for us. We also had to consider the practicality of each approach for our users, and the reproducibility of software installations performed by staff located at various sites across Canada. In this paper, we present the solution that we created, which has allowed Compute Canada to serve the needs of over 10,000 researchers across the country. This solution is used on over 20 different clusters with heterogeneous configurations, on processor architectures ranging from AMD's 2010 Magny-Cours to Intel's 2017 Skylake SP, with or without GPUs, with InfiniBand, Ethernet, or OmniPath as the network fabric, and with Slurm or Torque/Moab as the scheduler. This stack provides users with a unified software environment of over 600 scientific applications, available in over 4,000 combinations of version, compiler, and CPU architecture.
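The abstract does not spell out how a single stack can span CPUs from the 2010 Magny-Cours to the 2017 Skylake SP, but a standard technique is to build each package at a few instruction-set levels and pick the best one the host supports at runtime. The following Python sketch illustrates that dispatch under invented names and paths; it is not Compute Canada's actual tooling.

```python
# Hypothetical sketch of per-microarchitecture dispatch for a multi-arch
# software stack. The idea (build each package at a few instruction-set
# levels, pick the best the host supports) is a standard technique; the
# paths and names below are invented and are not Compute Canada's tooling.

def cpu_flags():
    """Instruction-set flags of the host CPU (Linux: /proc/cpuinfo)."""
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

# Newest level first; each maps to a build tree compiled for that level.
# Intel's 2017 Skylake SP advertises avx512f; AMD's 2010 Magny-Cours
# predates AVX entirely, so it falls through to the baseline build.
LEVELS = [
    ("avx512f", "avx512"),
    ("avx2", "avx2"),
    ("avx", "avx"),
]

def best_build(root="/opt/software"):   # hypothetical stack root
    flags = cpu_flags()
    for flag, subdir in LEVELS:
        if flag in flags:
            return f"{root}/{subdir}"
    return f"{root}/generic"            # baseline that every node can run

print("Using stack at:", best_build())
```

A generic baseline build guarantees that every node can run something, while newer nodes transparently pick up AVX2 or AVX-512 builds of the same application versions, which is what makes "over 4,000 combinations" manageable behind one interface.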
CernVM is a Virtual Software Appliance capable of running physics applications from the LHC experiments at CERN. It aims to provide a complete and portable environment for developing and running LHC data analysis on any end-user computer (laptop, desktop) as well as on the Grid, independently of the operating system (Linux, Windows, MacOS). The experiment application software and its specific dependencies are built independently from CernVM and delivered to the appliance just in time by means of the CernVM File System (CVMFS), specifically designed for efficient software distribution. The procedures for building, installing, and validating software releases remain under the control and responsibility of each user community. We provide a mechanism to publish pre-built and configured experiment software releases to a central distribution point, from where they find their way to the running CernVM instances via a hierarchy of proxy servers or content delivery networks. In this paper, we present the current state of the CernVM project, compare the performance of CVMFS to that of traditional network file systems like AFS, and discuss possible scenarios that could further improve its performance and scalability.
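To make the distribution model concrete, here is a minimal toy sketch in Python of the idea behind CVMFS-style delivery; it is not the real CVMFS code or API, which works over HTTP with signed catalogs and much richer caching. Objects are published once at the origin under their content hash, and clients read them through a hierarchy of caches that fill on first access.

```python
# Toy sketch of CVMFS-style distribution, not the real CVMFS code or API:
# objects are published once at the origin under their content hash, and
# clients read them through a hierarchy of caches that fill on first access.
import hashlib

class Store:
    """Content-addressed store with an optional upstream tier."""
    def __init__(self, name, upstream=None):
        self.name = name
        self.upstream = upstream  # next tier up; None at the origin
        self.objects = {}

    def publish(self, data: bytes) -> str:
        digest = hashlib.sha1(data).hexdigest()
        self.objects[digest] = data
        return digest             # the key clients use to request the object

    def fetch(self, digest: str) -> bytes:
        if digest in self.objects:
            print(f"{self.name}: hit {digest[:8]}")
            return self.objects[digest]
        print(f"{self.name}: miss {digest[:8]}, asking upstream")
        data = self.upstream.fetch(digest)  # at the origin a miss would fail
        self.objects[digest] = data         # fill this cache on the way back
        return data

# Origin <- site proxy <- worker-node cache, mirroring the proxy hierarchy.
origin = Store("origin")
proxy = Store("site-proxy", upstream=origin)
node = Store("node-cache", upstream=proxy)

key = origin.publish(b"experiment software release 1.2.3")
node.fetch(key)  # misses at node and proxy, served by the origin
node.fetch(key)  # immediately a local hit
```

Because objects are immutable and named by their hash, every tier can cache them indefinitely, which is what lets a single central release point feed thousands of nodes; CVMFS is also the mechanism Compute Canada uses to distribute the unified stack described above.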