Marcia Irene Canto, Elizabeth Abou-Diwan, Helena Saba et al.
Results for "hep-ex"
Showing 20 of ~757759 results · from CrossRef, DOAJ, arXiv, Semantic Scholar
Rama Tarakji, Sunny Sandhu, Eric F. Martin
Sachiyo Onishi, Masaya Kubota, Masahito Shimizu
Tian-Zi Song, Kai-Xuan Huang, Yu-Jie Zeng et al.
While visualization plays a crucial role in high-energy physics (HEP) experiments, the existing detector description formats, including Geant4, ROOT, GDML, and DD4hep, face compatibility limitations with modern visualization platforms. This paper presents a universal interface that automatically converts these four kinds of detector descriptions into FBX, an industry-standard 3D model format that can be seamlessly integrated into advanced visualization platforms such as Unity. This method bridges the gap between HEP detector display frameworks and industrial-grade visualization ecosystems, enabling HEP experiments to harness rapid technological advancements. Furthermore, it lays the groundwork for the future development of additional HEP visualization applications, such as event display, virtual reality, and augmented reality.
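A conversion layer of this kind typically dispatches on the input format before producing FBX output. The sketch below is a hypothetical illustration of that dispatch step, not the authors' actual interface: the converter functions, their signatures, and the extension-based format detection are all illustrative assumptions; only the format names come from the abstract.

```python
# Hypothetical dispatch layer for a detector-description-to-FBX converter.
# The converter bodies are stubs; a real implementation would traverse the
# geometry tree of each format and emit FBX nodes.
from pathlib import Path

def _convert_gdml(path):
    return f"fbx-from-gdml:{path}"

def _convert_root(path):
    return f"fbx-from-root:{path}"

# Map detector-description formats (detected here by file extension,
# purely for illustration) to converter stubs.
CONVERTERS = {
    ".gdml": _convert_gdml,
    ".root": _convert_root,
}

def to_fbx(path: str) -> str:
    """Dispatch a detector description file to the matching converter."""
    suffix = Path(path).suffix.lower()
    try:
        return CONVERTERS[suffix](path)
    except KeyError:
        raise ValueError(f"unsupported detector description format: {suffix}")
```

A single entry point like `to_fbx` is what lets downstream tools (e.g. a Unity import pipeline) stay agnostic about which of the four source formats produced a given geometry.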
Shiho Nakamura, Naonori Inoue, Koji Uno
Yasutaka Saito, Sumito Sato
Naoki Hayata, Norio Araki, Shin’ichi Miyamoto
Yaquan Fang, Christina Gao, Ying-Ying Li et al.
Numerous challenges persist in High Energy Physics (HEP); addressing them requires advancements in detection technology, computational methods, data analysis frameworks, and phenomenological designs. We provide a concise yet comprehensive overview of recent progress across these areas, in line with advances in quantum technology. We discuss the potential of quantum devices in detecting subtle effects indicative of new physics beyond the Standard Model, and the transformative role of quantum algorithms and large-scale quantum computers in studying real-time non-perturbative dynamics in the early universe and at colliders, as well as in analyzing complex HEP data. Additionally, we emphasize the importance of integrating quantum properties into HEP experiments to test quantum mechanics at unprecedented high-energy scales and to search for hints of new physics. Looking ahead, the continued integration of resources to fully harness these evolving technologies will enhance our efforts to deepen our understanding of the fundamental laws of nature.
Tobias Golling, Lukas Heinrich, Michael Kagan et al.
We propose masked particle modeling (MPM) as a self-supervised method for learning generic, transferable, and reusable representations on unordered sets of inputs for use in high energy physics (HEP) scientific data. This work provides a novel scheme to perform masked-modeling-based pre-training to learn permutation-invariant functions on sets. More generally, this work provides a step towards building large foundation models for HEP that can be generically pre-trained with self-supervised learning and later fine-tuned for a variety of downstream tasks. In MPM, particles in a set are masked and the training objective is to recover their identity, as defined by a discretized token representation of a pre-trained vector quantized variational autoencoder. We study the efficacy of the method in samples of high energy jets at collider physics experiments, including studies on the impact of discretization, permutation invariance, and ordering. We also study the fine-tuning capability of the model, showing that it can be adapted to tasks such as supervised and weakly supervised jet classification, and that the model can transfer efficiently with small fine-tuning data sets to new classes and new data domains.
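The pre-training setup the abstract describes can be sketched in a few lines: particles are represented by discrete token ids (in the paper, codes from a pre-trained VQ-VAE), a random subset of positions is masked, and the training target is to recover the original ids at those positions. The token values, mask fraction, and sentinel id below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the masked-particle-modeling (MPM) corruption step.
import random

MASK = -1  # sentinel id standing in for a learned [MASK] token

def mask_tokens(tokens, frac, rng):
    """Mask a fraction of token positions; return (corrupted, targets).

    `targets` maps each masked position to its original token id, which is
    exactly what the pre-training objective asks the model to predict.
    """
    n_mask = max(1, int(len(tokens) * frac))
    positions = rng.sample(range(len(tokens)), n_mask)
    corrupted = list(tokens)
    targets = {}
    for pos in positions:
        targets[pos] = corrupted[pos]
        corrupted[pos] = MASK
    return corrupted, targets

# Toy example: six particles, each already discretized to a token id.
toks = [7, 3, 3, 9, 1, 4]
rng = random.Random(0)
corrupted, targets = mask_tokens(toks, frac=0.5, rng=rng)
```

Because `rng.sample` draws positions rather than values, the scheme is indifferent to the ordering of the set, in keeping with the permutation-invariance goal stated in the abstract.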
J. Albrecht, A. A. Alves, G. Amadio et al.
Particle physics has an ambitious and broad experimental programme for the coming decades. This programme requires large investments in detector hardware, either to build new facilities and experiments, or to upgrade existing ones. Similarly, it requires commensurate investment in the R&D of software to acquire, manage, process, and analyse the sheer amounts of data to be recorded. In planning for the HL-LHC in particular, it is critical that all of the collaborating stakeholders agree on the software goals and priorities, and that the efforts complement each other. In this spirit, this white paper describes the R&D activities required to prepare for this software upgrade.
Shingo Ogiwara, Yusuke Nomoto, Taro Osada
Lisette Collins, Jamie O. Yang, Michael E. Lazarus
Jamie O. Yang, Soonwook Hong, Wendy Ho
Aryan Roy, Jim Pivarski, Chad Wells Freer
Analysis of HEP data is an iterative process in which the results of one step often inform the next. In an exploratory analysis, it is common to perform one computation on a collection of events, then view the results (often with histograms) to decide what to try next. Awkward Array is a Scikit-HEP Python package that enables data analysis with array-at-a-time operations to implement cuts as slices, combinatorics as composable functions, etc. However, most C++ HEP libraries, such as FastJet, have an imperative, one-particle-at-a-time interface, which would be inefficient in Python and goes against the grain of the array-at-a-time logic of scientific Python. Therefore, we developed fastjet, a pip-installable Python package that provides FastJet C++ binaries, the classic (particle-at-a-time) Python interface, and a new array-oriented interface for use with Awkward Array. The new interface streamlines interoperability with scientific Python software beyond HEP, such as machine learning. In one case, adopting this library along with other array-oriented tools accelerated HEP analysis code by a factor of 20. It was designed to be easily integrated with libraries in the Scikit-HEP ecosystem, including Uproot (file I/O), hist (histogramming), Vector (Lorentz vectors), and Coffea (high-level glue). We discuss the design of the fastjet Python library, integrating the classic interface with the array-oriented interface and with the Vector library for Lorentz vector operations. The new interface was developed as open source.
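The "cuts as slices" idiom the abstract mentions can be shown with plain NumPy (Awkward Array generalises the same idea to jagged per-event structures): a selection becomes a boolean mask applied to whole arrays, with no per-particle loop. The pt/eta values below are made-up toy data, not from any experiment.

```python
# Plain-NumPy illustration of the array-at-a-time style that Awkward Array
# and the fastjet bindings build on.
import numpy as np

pt  = np.array([12.0, 45.5, 3.2, 60.1, 25.0])   # toy transverse momenta (GeV)
eta = np.array([0.4, -1.8, 2.6, 0.1, -2.7])     # toy pseudorapidities

# "Cuts as slices": select central, high-pt particles in one vectorised step
# instead of an imperative one-particle-at-a-time loop.
selection = (pt > 20.0) & (np.abs(eta) < 2.5)
selected_pt = pt[selection]
```

The array-oriented fastjet interface exists so that jet clustering can slot into pipelines written in exactly this style, alongside the rest of scientific Python.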
G. Krnjaic, N. Toro, A. Berlin et al.
Dark matter particles can be observably produced at intensity-frontier experiments, and opportunities in the next decade will explore important parameter space motivated by thermal DM models, the dark sector paradigm, and anomalies in data. This whitepaper describes the motivations, detection strategies, prospects and challenges for such searches, as well as synergies and complementarity both within RF6 and across HEP.
Flaminia Bartolini
Silvia Costa is a journalist and centre-left politician. She was elected to the European Parliament in 2009 within the SD coalition and, since 2020, has been the Government Commissioner for the preservation and re-purposing of the former prison complex on the island of Santo Stefano. The Ventotene-Santo Stefano project, which has just been dedicated to former EU Parliament President David Sassoli, seeks to preserve the material and memorial legacies of two sites of confinement where the very idea of a European Union was conceived.
Guo-Li Wang, Wei Li, Tai-Fu Feng et al.
We use the Reduction Formula, PCAC, and the Low Energy Theorem to reduce the $S$ matrix of an OZI-allowed two-body strong decay involving a light pseudoscalar, and derive the covariant transition amplitude formula with relativistic wave functions as input. After confirming this method with the decay $D^*(2010)\to Dπ$, we study the newly observed $D_{s0}(2590)^{+}$, supposing it to be the state $D_s(2^1S_0)^+$. We find that its decay width $Γ$ is highly sensitive to the $D_{s0}(2590)^{+}$ mass, which makes comparisons of widths from different models with various input masses meaningless. Instead of the width, we study the overlap integral over the wave functions of the initial and final states, parameterized as a model-independent quantity $X$, and the ratio $Γ/{|{\vec P_f}|^3}$; both are almost mass independent and therefore give us useful information. The results show that all existing theoretical predictions, $X_{D_s(2S) \to D^*K}=0.25\sim 0.41$ and $Γ/{|{\vec P_f}|^3}=0.81\sim1.77$ MeV$^{-2}$, are smaller than the experimental data, $0.585^{+0.015}_{-0.035}$ and $4.54^{+0.25}_{-0.52}$ MeV$^{-2}$. Further, compared with $X^{ex}_{D^*(2010) \to Dπ}=0.540\pm0.009$, the current value $X^{ex}_{D_s(2S) \to D^*K}=0.585^{+0.015}_{-0.035}$ is too large to be reasonable, so it is too early to say that $D_{s0}(2590)^{+}$ is the conventional $D_s(2^1S_0)^+$ meson.
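A schematic way to see why $X$ and $Γ/{|{\vec P_f}|^3}$ are the quantities worth comparing (this gloss is an assumption about the structure of the result, not the authors' exact formula): for this two-body decay the width factorises, up to kinematic constants absorbed into $C$, as

```latex
\Gamma \;\simeq\; C\, X^{2}\, |\vec P_f|^{3},
\qquad\text{so}\qquad
\frac{\Gamma}{|\vec P_f|^{3}} \;\simeq\; C\, X^{2}.
```

Since the strong mass dependence enters through the final-state momentum $|\vec P_f|$, dividing it out leaves a ratio controlled by the wave-function overlap $X$, which is why both quantities are almost insensitive to the assumed $D_{s0}(2590)^{+}$ mass.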
E. Elsen
This white paper incorporates input from a meeting of the Machine Learning CWP working group, held as a parallel session of the Machine Learning (IML) workshop, an organisation formed in 2016 to facilitate communication regarding R&D on ML applications in the LHC experiments. Description: a one-day workshop organised to engage the experimental HEP community involved in computing and software for Intensity Frontier experiments at FNAL. Plans for the CWP were described, with discussion of commonalities between the HL-LHC challenges and the challenges of the FNAL neutrino and muon experiments.
Page 12 of 37888