B. Knowles, C. Howe, D. Aden
Results for "hep-ex"
Showing 20 of ~758735 results · from DOAJ, CrossRef, Semantic Scholar, arXiv
Qian Wang, T. Hisatomi, Yohichi Suzuki et al.
Michel-Pierre Coll, Hannah Hobson, G. Bird et al.
The Heartbeat Evoked Potential (HEP) has been proposed as a neurophysiological marker of interoceptive processing. Despite its use to validate interoceptive measures and to assess interoceptive functioning in clinical groups, the empirical evidence for a relationship between HEP amplitude and interoceptive processing, including measures of such processing, is scattered across several studies with varied designs. The aim of this systematic review and meta-analysis was to examine the body of HEP-interoception research, and to consider the associations the HEP shows with various direct and indirect measures of interoception, and how it is affected by manipulations of interoceptive processing. Specifically, we assessed the effect on HEP amplitude of manipulating attention to the heartbeat; the effect of manipulating participants' arousal; the association between the HEP and behavioural measures of cardiac interoception; and comparisons between healthy and clinical groups. Following database searches and screening, 45 studies were included in the systematic review and 42 in the meta-analyses. We noted variations in the ways individual studies have attempted to address key confounds, particularly the cardiac field artefact. Meta-analytic summaries indicated there were moderate to large effects of attention, arousal, and clinical status on the HEP, and a moderate association between HEP amplitude and behavioural measures of interoception. Problematically, the reliability of the meta-analytic effects documented here remains unknown, given the lack of standardised protocols for measuring the HEP. Thus, it is possible that effects are driven by confounds such as cardiac factors or somatosensory effects.
Jameel Alp, Alyssa M. Bren, Tyson Sievers et al.
Manuel Mendizabal, Juan Pablo Arab
Bharathi Selvan, Melissa M. Tran, Christine O’Connell et al.
Maram Alenzi, Dimo Dimitrov, Iyiad AlAbdul-Razzak et al.
Claire David
Artificial Intelligence (AI) and Machine Learning (ML) have been prevalent in particle physics for over three decades, shaping many aspects of High Energy Physics (HEP) analyses. As AI's influence grows, it is essential for physicists, as both researchers and informed citizens, to critically examine its foundations, misconceptions, and impact. This paper explores AI definitions, examines how ML differs from traditional programming, and provides a brief review of AI/ML applications in HEP, highlighting promising trends such as Simulation-Based Inference, uncertainty-aware machine learning, and Fast ML for anomaly detection. Beyond physics, it also addresses the broader societal harms of AI systems, underscoring the need for responsible engagement. Finally, it stresses the importance of adapting research practices to an evolving AI landscape, ensuring that physicists not only benefit from the latest tools but also remain at the forefront of innovation.
Sorina Popescu
We present a study of high-$β$* optics configurations at Interaction Point 2 (IP2) of the Large Hadron Collider (LHC), developed to enable forward and diffractive physics measurements with the ALICE experiment during Runs 3 and 4. Using MAD-X, we designed a $β$* = 30 m optics scheme that satisfies beam stability and aperture requirements, while offering improved sensitivity to small-angle scattering. The configuration follows the Achromatic Telescopic Squeeze (ATS) optics scheme, originally developed for IP1 and IP5, which provides enhanced control over phase advance and chromaticity. The resulting optics layout enables a forward physics program with continuous data-taking. We also outline possible extensions toward even higher $β$* values and discuss the implementation roadmap.
Mohammad Atif, Kriti Chopra, Ozgur Kilic et al.
Next-generation High Energy Physics (HEP) experiments will generate unprecedented data volumes, necessitating High Performance Computing (HPC) integration alongside traditional high-throughput computing. However, HPC adoption in HEP is hindered by the challenge of porting legacy software to heterogeneous architectures and the sparse documentation of these complex scientific codebases. We present CelloAI, a locally hosted coding assistant that leverages Large Language Models (LLMs) with retrieval-augmented generation (RAG) to support HEP code documentation and generation. This local deployment ensures data privacy, eliminates recurring costs and provides access to large context windows without external dependencies. CelloAI addresses two primary use cases, code documentation and code generation, through specialized components. For code documentation, the assistant provides: (a) Doxygen style comment generation for all functions and classes by retrieving relevant information from RAG sources (papers, posters, presentations), (b) file-level summary generation, and (c) an interactive chatbot for code comprehension queries. For code generation, CelloAI employs syntax-aware chunking strategies that preserve syntactic boundaries during embedding, improving retrieval accuracy in large codebases. The system integrates callgraph knowledge to maintain dependency awareness during code modifications and provides AI-generated suggestions for performance optimization and accurate refactoring. We evaluate CelloAI using real-world HEP applications from ATLAS, CMS, and DUNE experiments, comparing different embedding models for code retrieval effectiveness. Our results demonstrate the AI assistant's capability to enhance code understanding and support reliable code generation while maintaining the transparency and safety requirements essential for scientific computing environments.
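The syntax-aware chunking idea described in this abstract can be illustrated with a minimal sketch: instead of splitting source text at fixed character counts, split at syntactic boundaries so each embedded chunk is a complete function or class. This is a hypothetical illustration using Python's standard `ast` module, not CelloAI's actual implementation, and the example source and function names are invented.

```python
import ast

def chunk_by_syntax(source: str) -> list[str]:
    """Split Python source into chunks at top-level function/class
    boundaries, so each embedding unit is a syntactically complete node."""
    tree = ast.parse(source)
    lines = source.splitlines()
    chunks = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            # lineno/end_lineno are 1-indexed and inclusive (Python 3.8+)
            chunks.append("\n".join(lines[node.lineno - 1:node.end_lineno]))
    return chunks

example = '''
def energy(m, p):
    return (m**2 + p**2) ** 0.5

class Track:
    def __init__(self, hits):
        self.hits = hits
'''

for chunk in chunk_by_syntax(example):
    print(chunk.splitlines()[0])
```

Chunks produced this way never cut a function in half, which is what improves retrieval accuracy: the embedding of a chunk corresponds to one coherent unit of code rather than an arbitrary window.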
M. Atzori Corona, M. Cadeddu, N. Cargioli et al.
Direct detection dark matter experiments have proven to be compelling probes for studying low-energy neutrino interactions with both nuclei and atomic electrons, offering complementary information to accelerator and reactor-based neutrino experiments. Recently, the XENONnT and PandaX-4T collaborations reported the first evidence of coherent elastic neutrino-nucleus scattering from $^8\mathrm{B}$ solar neutrinos. Thanks to their excellent background rejection capabilities and distinctive signal signatures, dual-phase time projection chambers are also sensitive to $pp$ solar neutrinos via their elastic scattering off atomic electrons in the target material. Although this signal is subdominant within the Standard Model, it becomes significantly enhanced in many beyond the Standard Model scenarios, offering a unique opportunity to probe new physics in the low-energy regime. While the precision of current neutrino measurements from dark matter detectors remains lower than that achieved by dedicated neutrino experiments, their sensitivity to the tau neutrino component of solar neutrinos helps complete the overall picture, especially when investigating flavor-dependent new physics effects.
Johannes Albrecht, Leon Bozianu, Lukas Calefice et al.
In modern High Energy Physics (HEP) experiments, triggers perform the important task of selecting, in real time, the data to be recorded and saved for physics analyses. As a result, trigger strategies play a key role in extracting relevant information from the vast streams of data produced at facilities like the Large Hadron Collider (LHC). As the energy and luminosity of the collisions increase, these strategies must be upgraded and maintained to suit the experimental needs. This whitepaper compiled by the SMARTHEP Early Stage Researchers presents a high-level overview and reviews recent developments of triggering practices employed at the LHC. The general trigger principles applied at modern HEP experiments are highlighted, with specific reference to the current trigger state-of-the-art within the ALICE, ATLAS, CMS and LHCb collaborations. Furthermore, a brief synopsis of the new trigger paradigm required by the upcoming high-luminosity upgrade of the LHC is provided.
Kimitoshi Kubo, Hiroki Niwa, Kazuteru Komuro
Daniel Murnane, Savannah Thais, Ameya Thete
Graph neural networks (GNNs) have gained traction in high-energy physics (HEP) for their potential to improve accuracy and scalability. However, their resource-intensive nature and complex operations have motivated the development of symmetry-equivariant architectures. In this work, we introduce EuclidNet, a novel symmetry-equivariant GNN for charged particle tracking. EuclidNet leverages the graph representation of collision events and enforces rotational symmetry with respect to the detector's beamline axis, leading to a more efficient model. We benchmark EuclidNet against the state-of-the-art Interaction Network on the TrackML dataset, which simulates high-pileup conditions expected at the High-Luminosity Large Hadron Collider (HL-LHC). Our results show that EuclidNet achieves near-state-of-the-art performance at small model scales (<1000 parameters), outperforming the non-equivariant benchmarks. This study paves the way for future investigations into more resource-efficient GNN models for particle tracking in HEP experiments.
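The rotational symmetry that EuclidNet enforces can be sketched in a few lines: for rotations about the beamline (z) axis, per-hit radius and z are invariant, and the azimuthal *difference* between two hits is invariant even though each azimuth individually is not. The snippet below is a toy demonstration of such invariant edge features, not the paper's actual architecture; the hit coordinates and feature choice are illustrative assumptions.

```python
import numpy as np

def invariant_edge_features(h1, h2):
    """Features for a hit pair that are invariant under rotations about
    the z (beamline) axis: (r, z) per hit plus the azimuthal difference,
    instead of raw (x, y) coordinates."""
    def cyl(h):
        x, y, z = h
        return np.hypot(x, y), np.arctan2(y, x), z
    r1, phi1, z1 = cyl(h1)
    r2, phi2, z2 = cyl(h2)
    dphi = np.angle(np.exp(1j * (phi2 - phi1)))  # wrap into (-pi, pi]
    return np.array([r1, z1, r2, z2, dphi])

def rotate_z(h, alpha):
    """Rotate a 3D point by angle alpha about the z axis."""
    x, y, z = h
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([c * x - s * y, s * x + c * y, z])

h1 = np.array([1.0, 0.5, 2.0])
h2 = np.array([0.3, -1.2, 3.5])
f0 = invariant_edge_features(h1, h2)
f1 = invariant_edge_features(rotate_z(h1, 0.7), rotate_z(h2, 0.7))
print(np.allclose(f0, f1))  # features are unchanged by the rotation
```

Feeding a network only such invariant quantities is one way to bake the detector's symmetry into the model, which is why equivariant architectures can match larger non-equivariant ones at a fraction of the parameter count.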
Raghunath Sahoo
With the advent of unprecedented collision energy at the Large Hadron Collider, CERN, Geneva, a new domain of particle production and possible formation of Quark-Gluon Plasma (QGP) in high-multiplicity proton-proton collisions and the collisions of light nuclei has been a much-discussed topic recently. In this review, I discuss some of the recent observations leading to such a possibility, associated challenges, and some predictions for the upcoming light-nuclei collisions at the LHC.
Aaron Chou, Kent Irwin, Reina H. Maruyama et al.
Strong motivation for investing in quantum sensing arises from the need to investigate phenomena that are very weakly coupled to the matter and fields well described by the Standard Model. These can be related to the problems of dark matter, dark sectors not necessarily related to dark matter (for example sterile neutrinos), dark energy and gravity, fundamental constants, and problems with the Standard Model itself including the Strong CP problem in QCD. Resulting experimental needs typically involve the measurement of very low energy impulses or low power periodic signals that are normally buried under large backgrounds. This report documents the findings of the 2023 Quantum Sensors for High Energy Physics workshop which identified enabling quantum information science technologies that could be utilized in future particle physics experiments, targeting high energy physics science goals.
Audrey Bennett, Alexandra Bery, Patricia Esposito et al.
J. BouSaba, Y. Magnus, W. Sannaa et al.
Meghna Bhattacharya, Paolo Calafiura, Taylor Childers et al.
Today's world of scientific software for High Energy Physics (HEP) is powered by x86 code, while the future will be much more reliant on accelerators like GPUs and FPGAs. The portable parallelization strategies (PPS) project of the High Energy Physics Center for Computational Excellence (HEP/CCE) is investigating portability techniques that allow an algorithm to be coded once and executed on a variety of hardware products from many vendors, especially including accelerators. We believe that without such solutions, the scientific success of our experiments and endeavors is at risk, as software development would become expert-driven and costly in order to run on the available hardware infrastructure. We think the best solution for the community would be an extension to the C++ standard with a very low entry bar for users, supporting all hardware forms and vendors. We are very far from that ideal, though. We argue that in the future, as a community, we need to request and work on portability solutions and strive to reach this ideal.
B. Messerly, R. Fine, A. Olivier et al.
We introduce the MINERvA Analysis Toolkit (MAT), a utility for centralizing the handling of systematic uncertainties in HEP analyses. The fundamental utilities of the toolkit are the MnvHnD, a powerful histogram container class, and the systematic Universe classes, which provide a modular implementation of the many-universes error analysis approach. These products can be used stand-alone or as part of a complete error analysis prescription. They support the propagation of systematic uncertainty through all stages of analysis, and provide flexibility for an arbitrary level of user customization. This extensible solution to error analysis enables the standardization of systematic uncertainty definitions across an experiment and a transparent user interface that lowers the barrier to entry for new analyzers.
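The many-universes approach mentioned in this abstract can be illustrated with a toy example: the same histogram is filled once at the central value and once per "universe", where each universe repeats the analysis with a systematic parameter drawn from its prior; the per-bin spread of the universes around the central value is the systematic uncertainty. This is a hypothetical NumPy sketch (assumed 5% energy-scale uncertainty, invented toy data), not MAT's actual MnvHnD/Universe API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: reconstructed energies subject to an uncertain calibration scale.
energies = rng.exponential(2.0, size=10_000)
bins = np.linspace(0, 10, 21)
n_universes = 100

# Central value histogram.
cv, _ = np.histogram(energies, bins=bins)

# One universe = the analysis repeated with the systematic parameter
# (here, an energy-scale factor) drawn from its assumed prior.
universe_hists = np.empty((n_universes, len(bins) - 1))
for u in range(n_universes):
    scale = rng.normal(1.0, 0.05)  # assumed 5% energy-scale uncertainty
    universe_hists[u], _ = np.histogram(energies * scale, bins=bins)

# Per-bin systematic uncertainty: RMS spread of the universes around the CV.
syst = np.sqrt(np.mean((universe_hists - cv) ** 2, axis=0))
print(np.round(syst[:5], 1))
```

Because every universe reruns the same filling code, correlations between bins (and, in a real analysis, between histograms) are propagated automatically, which is the main appeal of the technique over per-bin error formulas.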
Page 28 of 37937