Jaimie Anderson, Wojciech Blonski, Joy Gaziano et al.
Results for "hep-ex"
Showing 20 of ~758714 results · from CrossRef, DOAJ, arXiv, Semantic Scholar
Shanshan Wang, Huaibin Mabel Ko, Dana J. Lukin et al.
Maciej Glowacki
Learning robust and generalisable abstractions from high-dimensional input data is a central challenge in machine learning and its applications to high-energy physics (HEP). Solutions of lower functional complexity are known to produce abstractions that generalise more effectively and are more robust to input perturbations. In complex hypothesis spaces, inductive biases make such solutions learnable by shaping the loss geometry during optimisation. In a HEP classification task, we show that a soft symmetry-respecting inductive bias creates approximate degeneracies in the loss, which we identify as pseudo-Goldstone modes. We quantify functional complexity using metrics derived from first-principles Hessian analysis and via compressibility. Our results demonstrate that solutions of lower complexity give rise to abstractions that are more generalisable, robust, and efficiently distillable.
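To make the Hessian-based complexity metrics more concrete, here is a minimal sketch, assuming a toy linear classifier and a finite-difference Hessian (none of which is taken from the paper itself): near-zero eigenvalues of the loss Hessian mark approximately flat, pseudo-Goldstone-like directions, and their abundance is a crude probe of low functional complexity.

```python
# Illustrative sketch only: probe loss-surface flatness via the Hessian spectrum.
# The toy model and data are assumptions for demonstration, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                     # toy "event" features
y = (X[:, 0] + 0.1 * X[:, 1] > 0).astype(float)   # toy binary labels

def loss(w):
    """Binary cross-entropy of a linear classifier with weights w."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

def hessian(f, w, eps=1e-4):
    """Central finite-difference Hessian of f at w."""
    n = w.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            di, dj = np.eye(n)[i] * eps, np.eye(n)[j] * eps
            H[i, j] = (f(w + di + dj) - f(w + di - dj)
                       - f(w - di + dj) + f(w - di - dj)) / (4 * eps**2)
    return H

w_star = np.zeros(4)  # stand-in for a trained minimum
eigvals = np.linalg.eigvalsh(hessian(loss, w_star))
# Near-zero eigenvalues signal flat (approximately degenerate) directions;
# their number and magnitude serve as a crude functional-complexity metric.
print("Hessian eigenvalues:", eigvals)
```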
Matt Pelton, Sarah Abdel-Meguid, Eshani Goradia et al.
Mark N. Costantini, Elie Hammou, Zahari Kassabov et al.
We present the open-source SIMUnet code, designed to fit Standard Model Effective Field Theory (SMEFT) Wilson coefficients alongside Parton Distribution Functions (PDFs) of the proton. SIMUnet can perform SMEFT global fits, as well as simultaneous fits of the PDFs and of an arbitrarily large number of SMEFT degrees of freedom, by including both PDF-dependent and PDF-independent observables. SIMUnet can also be used to determine whether the effects of any New Physics models can be fitted away in a global fit of PDFs. SIMUnet is built upon the open-source NNPDF code and is released together with documentation and tutorials. To illustrate the functionalities of the new tool, we present a new global analysis of the SMEFT Wilson coefficients accounting for their interplay with the PDFs. We extend our previous analysis of the LHC Run II top-quark data with both (i) the Higgs production and decay rates data from the LHC, and (ii) the precision electroweak and diboson measurements from LEP and the LHC.
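As a rough sketch of what fitting Wilson coefficients alongside PDF-like degrees of freedom involves (all numbers, names, and sensitivities below are invented for illustration and bear no relation to SIMUnet's actual interface), one can minimise a chi-squared in which the theory prediction depends linearly on the SMEFT coefficients and on a nuisance parameter standing in for a PDF direction:

```python
# Hedged illustration of a simultaneous SMEFT + PDF-like fit.
# Data, covariance, and sensitivities are toy values, not SIMUnet outputs.
import numpy as np
from scipy.optimize import minimize

data = np.array([1.02, 0.98, 1.10])         # toy measurements (SM-normalised)
cov = np.diag([0.05, 0.04, 0.06]) ** 2      # toy experimental covariance
K = np.array([[0.3, 0.0],                   # linear SMEFT sensitivities dT/dc_i
              [0.1, 0.2],
              [0.0, 0.4]])
P = np.array([0.5, -0.2, 0.1])              # sensitivity to a PDF-like parameter

def chi2(params):
    """chi^2 for two Wilson coefficients c and one PDF-like parameter theta."""
    c, theta = params[:2], params[2]
    theory = 1.0 + K @ c + P * theta        # SM (=1) plus linear corrections
    r = data - theory
    return r @ np.linalg.solve(cov, r)

best = minimize(chi2, x0=np.zeros(3))
print("best-fit (c1, c2, theta):", best.x)
```

Fitting the coefficients and the PDF-like parameter together, rather than sequentially, is what allows New Physics effects to be either exposed or "fitted away" into the PDFs, which is precisely the interplay the tool is designed to quantify.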
D. Ciangottini, A. Forti, L. Heinrich et al.
This white paper presents the current status of the R&D for Analysis Facilities (AFs) and attempts to summarize the views on the future direction of these facilities. These views have been collected through the High Energy Physics (HEP) Software Foundation's (HSF) Analysis Facilities forum, established in March 2022; the Analysis Ecosystems II workshop, which took place in May 2022; and the WLCG/HSF pre-CHEP workshop, which took place in May 2023. The paper attempts to cover all aspects of an analysis facility.
Thomas Bergauer
The latest update of the European Strategy for Particle Physics stimulated the preparation of the European Detector Roadmap document in 2021 by the European Committee for Future Accelerators (ECFA). This roadmap, defined in a bottom-up process by the community, outlines nine technology domains for HEP instrumentation and pinpoints urgent R&D topics, known as Detector R&D Themes (DRDTs). Task forces were set up for each domain, leading to Detector R&D Collaborations (DRDs), now hosted at CERN. After an intensive period of work over the last months, seven DRD collaborations have been established and are now setting up their collaboration structures and beginning their work; one is still in the preparation phase. In this publication, I give an overview of the set-up process and the current status of all DRD collaborations, covering detector developments in the fields of gaseous detectors, noble-liquid detectors for rare-event searches, semiconductor detectors, photodetectors and concepts for particle ID, quantum sensors, calorimetry, electronics for HEP instrumentation, and mechanical and integration aspects.
Amisha Ahuja, Matt Pelton, Sahil Raval et al.
Shilpa Sannapaneni, Sarah Philip, Amit Desai et al.
Kosei Takagi, Nanako Hata, Yuki Fujii
Juan Miguel Carceller, Frank Gaede, Gerardo Ganis et al.
A performant and easy-to-use event data model (EDM) is a key component of any HEP software stack. The podio EDM toolkit provides a user-friendly way of generating such a performant implementation in C++ from a high-level description in yaml format. Having finalized a few important developments, we are in the final stretch towards release v1.0 of podio, a stable release guaranteeing backward compatibility for data files written with podio from then on. We present an overview of the podio basics and go into slightly more technical detail on the most important topics and developments. These include schema evolution for generated EDMs, multithreading with podio-generated EDMs and its implementation, as well as the basics of I/O. Using EDM4hep, the common and shared EDM of the Key4hep project, we highlight a few of the smaller features in action, as well as some lessons learned during the development of EDM4hep and podio. Finally, we show how podio has been integrated into the Gaudi-based event processing framework used by Key4hep, before concluding with a brief outlook on potential developments after v1.0.
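To make the yaml-to-C++ workflow concrete, a podio datatype description looks roughly like the sketch below; the datatype name, members, and comments are invented for illustration, and the podio documentation remains the authoritative reference for the syntax. From such a description podio generates the C++ data classes, collections, and I/O support.

```yaml
# Illustrative podio-style datatype description (invented, not from EDM4hep).
# podio generates the performant C++ implementation from this high-level view.
datatypes:
  ExampleHit:
    Description: "An example calorimeter hit"
    Author: "Illustration only"
    Members:
      - uint64_t cellID // detector cell identifier
      - double x        // x position [mm]
      - double y        // y position [mm]
      - double z        // z position [mm]
      - double energy   // deposited energy [GeV]
```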
German F. R. Sborlini, Roger Hernández-Pinto, Salvador Ochoa-Oregon et al.
Fragmentation Functions (FFs) are universal non-perturbative objects that model hadronization in a broad class of processes. They are mainly extracted from experimental data, hence constraining the parameters of the corresponding fits is crucial for achieving reliable results. As expected, the production of lighter hadrons is favoured with respect to heavier ones, so we would like to exploit the precise knowledge of pion FFs to constrain the shape of kaon (or heavier-hadron) FFs. In this talk, we show how imposing specific cuts on photon-hadron production leads to relations between the $u$-started FFs. To do so, we exploit the reconstruction of momentum fractions in terms of experimentally accessible quantities and introduce NLO QCD + LO QED corrections to reduce the theoretical uncertainties.
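As a hedged illustration of what "momentum fractions in terms of experimentally accessible quantities" can look like: at leading order in photon-hadron production the photon balances the transverse momentum of the fragmenting parton, so the hadron's momentum fraction can be estimated from the two transverse-momentum vectors alone. The estimator below is a generic LO approximation assumed for this sketch, not a formula quoted from the talk.

```python
# Generic LO estimator for the hadron momentum fraction z_h in photon-hadron
# production: with the photon balancing the fragmenting parton,
# z_h ~ -(pT_h . pT_gamma) / |pT_gamma|^2. Illustration only.
import numpy as np

def z_hadron(pt_hadron, pt_photon):
    """LO momentum-fraction estimate from transverse-momentum 2-vectors."""
    pt_h = np.asarray(pt_hadron, dtype=float)
    pt_g = np.asarray(pt_photon, dtype=float)
    return -np.dot(pt_h, pt_g) / np.dot(pt_g, pt_g)

# Example: a hadron nearly back-to-back with a 20 GeV photon
print(z_hadron(pt_hadron=[-6.0, 0.5], pt_photon=[20.0, 0.0]))  # ~0.30
```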
Oliver Gutsche, Tulika Bose, Margaret Votava et al.
The HL-LHC run is anticipated to start at the end of this decade and will pose a significant challenge for the scale of the HEP software and computing infrastructure. The mission of the U.S. CMS Software & Computing Operations Program is to develop and operate the software and computing resources necessary to process CMS data expeditiously and to enable U.S. physicists to fully participate in the physics of CMS. We have developed a strategic plan to prioritize R&D efforts to reach this goal for the HL-LHC. This plan includes four grand challenges: modernizing physics software and improving algorithms, building infrastructure for exabyte-scale datasets, transforming the scientific data analysis process and transitioning from R&D to operations. We are involved in a variety of R&D projects that fall within these grand challenges. In this talk, we will introduce our four grand challenges and outline the R&D program of the U.S. CMS Software & Computing Operations Program.
Ulrich Schwickerath, Andrii Verbytskyi
We present a revived version of CERNLIB, the basis for the software ecosystems of most pre-LHC HEP experiments. The efforts to consolidate CERNLIB are part of the activities of the Data Preservation for High Energy Physics collaboration to preserve the data and software of past HEP experiments. The presented version is based on CERNLIB version 2006, with numerous patches made for compatibility with modern compilers and operating systems. The code is available in the CERN GitLab repository together with the full development history going back to the early 1990s. The updates also include a re-implementation of the build system in CMake, to bring CERNLIB in line with current best practices and to increase the chances of keeping the code compilable for the decades to come. The revived CERNLIB project also includes updated documentation, which we believe is a cornerstone for preserving any software that depends on it.
Alice Leroy, Henri Perrin, Raphael Porret et al.
Gregory Corwin, Conor H. O'Neill, Amer K. Abu Alfa
Moniyka Sachar, Jason J. Pan, James Park
Jim Hoff, Seda Memik
In this whitepaper, we argue that nurturing HEP Lab-Engineering cooperation through established collaboration and support mechanisms will significantly advance the scientific mission of the labs, while at the same time giving the laboratories a stronger position in shaping the next generation of engineers who will provide their services towards the unique computing and technology needs of the HEP community. The authors of this whitepaper are electronics and computer engineers, and so, naturally, the arguments herein are made from their perspective. However, these arguments are only strengthened by the simple fact that they could equally have been made from the perspective of mechanical engineers, civil engineers, or numerous other technologists. At the same time, this document serves as a summary of discussions that occurred during the Joint Instrumentation Frontier & Community Engagement Frontier Townhall meeting on November 10, 2020.
Mohamed Aly, Jackson Burzynski, Bryan Cardwell et al.
The second workshop on the HEP Analysis Ecosystem took place 23-25 May 2022 at IJCLab in Orsay, to look at progress and continuing challenges in scaling up HEP analysis to meet the needs of the HL-LHC and DUNE, as well as the very pressing needs of LHC Run 3 analysis. The workshop was themed around six topics felt to capture the key questions, opportunities, and challenges. Each topic opened with a plenary introduction, often with speakers summarising the state of the art and the next steps for analysis. This was followed by parallel sessions, which were much more discussion-focused and where attendees could grapple with the challenges and propose solutions to be tried. Where there was significant overlap between topics, a joint discussion was arranged. In the weeks following the workshop, the session conveners wrote this document, a summary of the main discussions, the key points raised, and the conclusions and outcomes. The document was circulated amongst the participants for comments before being finalised here.
Fulvio Martinelli, Chiara Magliocca, Roberto Cardella et al.
This paper presents a small-area monolithic pixel detector ASIC designed in 130 nm SiGe BiCMOS technology for the upgrade of the pre-shower detector of the FASER experiment at CERN. The purpose of this prototype is to study the integration of fast front-end electronics inside the sensitive area of the pixels and to identify the configuration that best satisfies the specifications of the experiment. Self-induced noise, instabilities, and cross-talk were minimised to cope with the several challenges associated with integrating pre-amplifiers and discriminators inside the pixels. The characterisation methodology and the design choices are also described. Two of the variants studied here will be implemented in the pre-production ASIC of the FASER experiment pre-shower for further tests.
Page 25 of 37936