S. Plymate, L. Matej, Robert E. Jones et al.
Results for "hep-ex"
Showing 20 of ~642,843 results · from arXiv, DOAJ, Semantic Scholar
Alberto Zucchetta
The discovery of the Higgs boson ten years ago and the successful measurement of its couplings to third-generation fermions at the LHC mark great milestones for HEP. The much weaker couplings to second-generation quarks predicted by the SM make the measurement of the light-quark Yukawa couplings, such as the Higgs-charm coupling, much more challenging. With the capabilities of the latest tagging algorithms, substantial progress has been made in constraining these couplings. In this talk, the latest direct and indirect measurements by the CMS experiment are presented. Prospects for future improvements are also given.
Caroline L. Peixoto, Vitória Karoline F. Monteiro, José Osmar S. Júnior et al.
Thrombosis has emerged as a significant concern during the Coronavirus Disease 2019 (COVID-19) pandemic, with patients experiencing increased venous thromboembolism due to prolonged immobilization and inflammation. In Brazil, studies show a higher thrombosis risk among COVID-19 patients, emphasizing the need for effective thromboprophylaxis. Heparin (HEP), commonly used in hospitals, enhances antithrombin III (ATIII) activity to inhibit thrombin and factor Xa, thus reducing thrombosis risk. However, it can cause adverse effects like bleeding and HEP-induced thrombocytopenia, complicating its use and prompting the search for safer anticoagulant alternatives. This study aimed to evaluate the anticoagulant properties of sulfated polysaccharides (SP) derived from the red seaweed Hypnea musciformis, particularly their hydrolysates with different molecular weights. Additionally, computational analyses were conducted to investigate their interaction with ATIII, compared to HEP, to determine if the mechanism of action is similar. In vitro assays assessed antithrombotic activity using activated partial thromboplastin time (APTT) and prothrombin time (PT) tests, with the low-molecular-weight heparin CLEXANE (LMWH) as a positive control. Results showed that the intact polysaccharide and one hydrolysate (EX 5) prolonged APTT, while no samples affected PT. The in vivo bleeding time test revealed that these samples had a significantly lower hemorrhagic tendency than the positive control. Computational simulations indicated a stronger interaction between ATIII and the intact polysaccharide compared to its hydrolysate. These findings suggest that SP from H. musciformis could offer a promising anticoagulant therapy with reduced bleeding risk for clinical application in thrombotic conditions.
Yu-Jie Zeng, Tian-Zi Song, Xue-Sen Wang et al.
In high-energy physics~(HEP) experiments, visualization software plays a pivotal role in detector design, offline software development, and event data analysis. Visualization tools integrate detailed detector geometry with complex event data models, providing researchers with invaluable insights into experimental results. Phoenix is an emerging general-purpose visualization platform for current and next-generation HEP experiments. In this study, we develop an event display software based on Phoenix for the CEPC experiment. It offers the functionality needed for visualizing detector geometries and displaying event data, allowing researchers to optimize detector design, test simulation and reconstruction algorithms, and analyze event data in a visualized way. Additionally, we discuss future applications of the event display software, including its usage in online monitoring and the potential to build virtual reality projects for enhanced data visualization.
Lorenzo Cotrozzi, Anna Driutti, Fedor Ignatov et al.
PrecisionSM is an annotated database that compiles the available data on low-energy cross sections of electron-positron collisions into hadronic channels. This database organizes and collects data samples from $e^+e^-$ experiments, which are used as input for the data-driven theoretical evaluation of the muon anomalous magnetic moment, $a_μ$, serving as a precise test of the Standard Model when compared to the experimental measurements of $a_μ$. The database is accessible through a custom website (https://precision-sm.github.io) which contains details about the data samples, such as the treatment of radiative corrections, as well as links to papers on INSPIRE-HEP and to tables on HEPData. The PrecisionSM database was developed within a Joint Research Initiative of the European hadron physics community project STRONG2020, and is now incorporated into the RadioMonteCarLow2 Working Group (RMCL2 WG) activities, which have the more general goal of improving the theoretical description of scattering processes at $e^+e^-$ colliders. The results of Phase I of the new RMCL2 WG have been published in Aliberti et al., arXiv:2410.22882 [hep-ph]. In these proceedings, we report on the status of the PrecisionSM database, which currently contains data for the dominant $2π$ channel as well as the $3π$ and $π^0γ$ channels, and on the ongoing work for the other channels and for responsive plots.
Shilin Liu, Clark McGrew
Data analysis in HEP experiments often uses a binned likelihood built from data and a finite Monte Carlo sample. The statistical uncertainty of a finite Monte Carlo sample has been treated in the frequentist literature, but those treatments are not suitable for Bayesian inference. This technical note introduces a binned likelihood with Monte Carlo statistical uncertainty for Bayesian inference and derives it. The results turn out to be similar to those in [1], but this note gives an alternative and more intuitive derivation.
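The construction the note describes can be sketched generically: give each bin's unknown true MC rate a conjugate Gamma prior fixed by the observed MC count, and integrate it out, which turns the per-bin Poisson into a negative binomial. This is a minimal illustration of the general Bayesian treatment of finite-MC uncertainty, not the note's actual derivation; the function names and the unit-rate Gamma prior are assumptions made for the sketch.

```python
import math

def bin_log_marginal(n_obs, m_mc, scale):
    """Log marginal likelihood for one bin.

    The data count n_obs is Poisson with mean scale * lam, where lam is
    the unknown true rate of the MC process; integrating lam against a
    Gamma(m_mc, 1) prior (conjugate to the Poisson MC count m_mc) gives
    a negative binomial with r = m_mc and p = scale / (1 + scale).
    """
    p = scale / (1.0 + scale)
    return (math.lgamma(n_obs + m_mc) - math.lgamma(m_mc)
            - math.lgamma(n_obs + 1)
            + m_mc * math.log(1.0 - p) + n_obs * math.log(p))

def binned_log_likelihood(data, mc, scale):
    """Sum the per-bin marginal log likelihoods over all bins."""
    return sum(bin_log_marginal(n, m, scale) for n, m in zip(data, mc))
```

For one MC event in a bin (`m_mc = 1`) at unit scale, the marginal probability of observing zero data events is 0.5, reflecting how a small MC sample widens the predictive distribution relative to a plain Poisson.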
Mohammad Atif, Meghna Bhattacharya, Paolo Calafiura et al.
High-energy physics (HEP) experiments have developed millions of lines of code over decades that are optimized to run on traditional x86 CPU systems. However, we are seeing a rapidly increasing fraction of floating point computing power in leadership-class computing facilities and traditional data centers coming from new accelerator architectures, such as GPUs. HEP experiments are now faced with the untenable prospect of rewriting millions of lines of x86 CPU code, for the increasingly dominant architectures found in these computational accelerators. This task is made more challenging by the architecture-specific languages and APIs promoted by manufacturers such as NVIDIA, Intel and AMD. Producing multiple, architecture-specific implementations is not a viable scenario, given the available person power and code maintenance issues. The Portable Parallelization Strategies team of the HEP Center for Computational Excellence is investigating the use of Kokkos, SYCL, OpenMP, std::execution::parallel and alpaka as potential portability solutions that promise to execute on multiple architectures from the same source code, using representative use cases from major HEP experiments, including the DUNE experiment of the Long Baseline Neutrino Facility, and the ATLAS and CMS experiments of the Large Hadron Collider. This cross-cutting evaluation of portability solutions using real applications will help inform and guide the HEP community when choosing their software and hardware suites for the next generation of experimental frameworks. We present the outcomes of our studies, including performance metrics, porting challenges, API evaluations, and build system integration.
Julie Munch Torndal, Jenny List, Dimitrios Ntounis et al.
The Higgs mechanism is a central part of the Standard Model which has not yet been fully established experimentally without the measurement of the Higgs self-coupling. Future linear $e^+e^-$ colliders are able to access centre-of-mass energies of 500 GeV and beyond and can therefore probe the Higgs self-coupling directly through the measurement of double Higgs production. A new analysis of the capability to measure the double Higgs-strahlung, $e^+e^-\to ZHH$, at a centre-of-mass energy of 500 GeV is ongoing based on the detailed, Geant4-based simulation of the ILD detector concept. This study has identified several aspects concerning the reconstruction techniques to fully exploit the detector potential, which are expected to improve precision reach and will be presented in this contribution. Additionally, the requirements that the Higgs self-coupling measurement puts on the choice of centre-of-mass energy will be evaluated as this is important for shaping the landscape of future colliders such as ILC or $C^3$.
Nick Smith, Bo Jayatilaka, David Mason et al.
We present a novel data format design that obviates the need for data tiers by storing individual event data products in column objects. The objects are stored and retrieved through Ceph S3 technology, with a layout designed to minimize metadata volume and maximize data processing parallelism. Performance benchmarks of data storage and retrieval are presented.
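The core idea, one stored object per event data product addressed by a key rather than files bundled into data tiers, can be illustrated with a toy in-memory object store. The key layout and all names below are invented for illustration and are not the authors' actual Ceph S3 schema.

```python
class ToyObjectStore:
    """Minimal stand-in for an S3-like object store: flat key -> bytes."""

    def __init__(self):
        self._objects = {}

    def put(self, key, blob):
        self._objects[key] = blob

    def get(self, key):
        return self._objects[key]

def product_key(dataset, column, chunk):
    # One object per (column, chunk-of-events): any subset of columns
    # can be written and fetched independently, so no fixed data tier
    # (AOD, MiniAOD, ...) has to be materialized.
    return f"{dataset}/{column}/chunk-{chunk:06d}"

store = ToyObjectStore()
# Write two independent column objects covering the same event chunk.
store.put(product_key("run2025", "Muon.pt", 0), b"\x00\x01")
store.put(product_key("run2025", "Jet.pt", 0), b"\x02\x03")
# A slim analysis retrieves only the column it needs.
muon_blob = store.get(product_key("run2025", "Muon.pt", 0))
```

Because chunks of different columns are separate objects, workers can read them in parallel, which is the parallelism the layout is designed to maximize.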
Fei Ma, Feiyi Liu, Wei Li
Recently, graph neural network (GNN) methods have been applied to problems in high energy physics (HEP) and have shown great potential for quark-gluon tagging with graph representations of jet events. In this paper, we introduce an approach that combines GNNs with a HaarPooling operation to analyze the events, called the HaarPooling Message Passing neural network (HMPNet). In HMPNet, HaarPooling not only extracts graph features but also embeds additional information obtained by k-means clustering of different particle features. We construct HaarPooling from five different feature sets: absolute energy $\log E$, transverse momentum $\log p_T$, relative coordinates $(Δη,Δφ)$, and the mixed sets $(\log E, \log p_T)$ and $(\log E, \log p_T, Δη,Δφ)$. The results show that an appropriate selection of information for HaarPooling enhances the accuracy of quark-gluon tagging: adding the extra $\log p_T$ information to the HMPNet outperforms all the others, whereas adding the relative coordinates $(Δη,Δφ)$ is not very effective. This implies that adding effective particle features through HaarPooling achieves much better results than a pure message passing neural network (MPNN) alone, demonstrating a significant improvement in feature extraction via the pooling process. Finally, we compare the HMPNet study, ordered by $p_T$, with other studies and show that HMPNet is also a good choice of GNN algorithm for jet tagging.
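The clustering-then-pooling step the abstract describes can be sketched in isolation: run k-means on per-particle features (e.g. $\log E$, $\log p_T$), then aggregate node features within each cluster, which is the coarse half of a Haar-style pooling. This is a self-contained toy, not the HMPNet implementation; both function names are invented here.

```python
def kmeans(points, k, iters=20):
    """Tiny deterministic k-means: seed centroids with the first k
    points, then alternate assignment and centroid-update steps."""
    centroids = [list(p) for p in points[:k]]
    assign = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):
            assign[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return assign

def haar_style_pool(features, assign, k):
    """Pool per-particle features by summing within each cluster --
    the coarse 'average' branch of a Haar-like transform."""
    pooled = [[0.0] * len(features[0]) for _ in range(k)]
    for f, c in zip(features, assign):
        for j, v in enumerate(f):
            pooled[c][j] += v
    return pooled
```

Clustering on physically meaningful features (here stand-ins for $\log E$, $\log p_T$) is what injects the extra information into the pooled graph representation.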
Vincent Dumont, Xiangyang Ju, Juliane Mueller
The Generative Adversarial Network (GAN) is a powerful and flexible tool that can generate high-fidelity synthesized data by learning. It has seen many applications in simulating events in High Energy Physics (HEP), including simulating detector responses and physics events. However, training GANs is notoriously hard, and optimizing their hyperparameters even more so. It normally requires many trial-and-error training attempts to achieve stable training and reach reasonable fidelity. Significant tuning work has to be done to achieve the accuracy required by physics analyses. This work uses the physics-agnostic and high-performance-computer-friendly hyperparameter optimization tool HYPPO to optimize and examine the sensitivities of the hyperparameters of a GAN for two independent HEP datasets. This work provides the first insights into efficiently tuning GANs for Large Hadron Collider data. We show that given proper hyperparameter tuning, we can find GANs that provide high-quality approximations of the desired quantities. We also provide guidelines for how to go about GAN architecture tuning using the analysis tools in HYPPO.
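The outer loop such a tool automates can be sketched as a generic search over a hyperparameter space scoring each candidate training run. Everything below (the space, the toy objective, the function name) is a placeholder for illustration; it is not HYPPO's API, which adds surrogate models, sensitivity analysis, and parallel asynchronous trials.

```python
import itertools

def grid_search(space, score_fn):
    """Exhaustive search over a hyperparameter space (name -> list of
    candidate values); returns the best-scoring configuration and its
    score.  A toy stand-in for a real hyperparameter optimizer."""
    names = sorted(space)
    best_cfg, best_score = None, float("inf")
    for values in itertools.product(*(space[n] for n in names)):
        cfg = dict(zip(names, values))
        s = score_fn(cfg)  # in practice: train briefly, score the generated sample
        if s < best_score:
            best_cfg, best_score = cfg, s
    return best_cfg, best_score

# Toy objective standing in for "train the GAN, measure a fidelity metric".
space = {"lr": [1e-2, 1e-3, 1e-4], "batch": [64, 128, 256]}
toy_score = lambda cfg: abs(cfg["lr"] - 1e-3) + abs(cfg["batch"] - 128) / 1000.0
best, _ = grid_search(space, toy_score)
```

Exhaustive grids blow up quickly, which is precisely why smarter strategies (random search, surrogate-model-based optimization as in HYPPO) matter for expensive GAN trainings.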
Michel H. Villanueva, Sudhir Malik, Meirin Oan Evans
Among the upgrades in current high energy physics (HEP) experiments and the new facilities coming online, solving software challenges has become integral to the success of the collaborations. The demand for human resources highly skilled in both HEP and software domains is increasing. With human resources spread across a highly distributed environment, the sustainability of the HEP ecosystem requires a continuous effort to equip physicists with the required abilities in software development. In this paper, the collective software training program in HEP and its activities, led by the HEP Software Foundation (HSF) and the Institute for Research and Innovation in Software in HEP (IRIS-HEP), are presented. Experiment-agnostic, open, and accessible training modules have been developed, focusing on common software material ranging from core software skills needed by everyone to advanced training required to produce high-quality sustainable software. A basic software curriculum was built, and an introductory software training event has been prepared to serve HEP entrants. This program equips individuals with transferable skills that are becoming increasingly important to careers in the realm of software and computing, whether inside or outside HEP.
H A Mariz, E. Sato, S. H. Barbosa et al.
Guillaume Taillepied
Measurements of electroweak bosons in heavy-ion collisions with ALICE are reported. They are measured at large rapidities, via their (di)muonic decay: Z $\rightarrow μ^+ μ^-$ and W$^\pm \rightarrow μ^\pm ν$. The Z production and nuclear modification factor in Pb--Pb collisions at $\sqrt{s_{_{\rm NN}}}$ = 5.02 TeV are presented and compared to theoretical predictions with or without nuclear modifications of the Parton Distribution Functions. Measurements of electroweak-boson production in p--Pb collisions at $\sqrt{s_{_{\rm NN}}}$ = 5.02 and 8.16 TeV are also presented and discussed.
Francesco Capozzi, Shirley Weishi Li, Guanying Zhu et al.
We show that the Deep Underground Neutrino Experiment (DUNE), with significant but feasible new efforts, has the potential to deliver world-leading results in solar neutrinos. With a 100 kton-year exposure, DUNE could detect $\gtrsim 10^5$ signal events above 5 MeV electron energy. Separate precision measurements of neutrino-mixing parameters and the $^8$B flux could be made using two detection channels ($ν_e + \, ^{40}$Ar and $ν_{e,μ,τ} + e^-$) and the day-night effect ($> 10 σ$). New particle physics may be revealed through the comparison of solar neutrinos (with matter effects) and reactor neutrinos (without), which is discrepant by $\sim 2 σ$ (and could become $5.6 σ$). New astrophysics may be revealed through the most precise measurement of the $^8$B flux (to 2.5\%) and the first detection of the {\it hep} flux (to 11\%). {\it DUNE is required:} No other experiment, even proposed, has been shown capable of fully realizing these discovery opportunities.
Mika Vesterinen
Tree-level $b$ decays play a critical role in characterising the quark flavour sector, and exposing possible effects of physics beyond the Standard Model. These proceedings cover recent results from the LHCb experiment on semileptonic $b$ baryon decays, $\mathcal{R}(D^{\ast-})$ using three-prong hadronic $τ$ decays, $CP$ observables in $B^- \to D^{(\ast)}h^-$ decays, and an updated combination on the CKM angle $γ$.
Jana Crkovská
The multiplicity dependence of charmed-particle production can unveil new information on processes taking place at the parton level and on the interplay of soft and hard production mechanisms in collisions of relativistic hadrons. In this contribution, we report on multiplicity-differential measurements of $\rm{J}/ψ$ in pp and p-Pb collisions studied by the ALICE Collaboration. Comparisons between measurements at different energies are drawn, as well as comparisons with $\rm{D}$ mesons. We also discuss the comparison with different theoretical predictions.
An-Sheng Cheng, Yu-Hsiang Cheng, Chiu-Hsia Chiou et al.
A. G. Kumar, agopal, S. Illanjiam et al.
A. Bethani, A. J. Bevan, J. Hays et al.
We review the concept of support vector machines (SVMs) and discuss examples of their use. One of the benefits of SVM algorithms, compared with neural networks and decision trees, is that they can be less susceptible to overtraining than those other algorithms are. This issue is related to the generalisation of a multivariate algorithm (MVA), a problem that has often been overlooked in particle physics. We discuss cross validation and how it can be used to improve the generalisation of an MVA in the context of High Energy Physics analyses. The examples presented use the Toolkit for Multivariate Analysis (TMVA) based on ROOT and describe our improvements to the SVM functionality and new tools introduced for cross validation within this framework.
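The cross-validation procedure the abstract advocates can be sketched generically: train on k-1 folds, evaluate on the held-out fold, and average, giving an out-of-sample estimate that exposes overtraining. This Python sketch is an assumed illustration, not TMVA's C++ API, and the trivial nearest-class-mean classifier stands in for an SVM.

```python
def kfold_indices(n, k):
    """Partition indices 0..n-1 into k contiguous folds."""
    folds, start = [], 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_val_accuracy(X, y, k, fit, predict):
    """Generic k-fold cross validation: train on k-1 folds, test on
    the held-out fold, average the accuracy over folds."""
    accs = []
    for held in kfold_indices(len(X), k):
        held_set = set(held)
        train = [i for i in range(len(X)) if i not in held_set]
        model = fit([X[i] for i in train], [y[i] for i in train])
        hits = sum(predict(model, X[i]) == y[i] for i in held)
        accs.append(hits / len(held))
    return sum(accs) / k

# A trivial 1-D nearest-class-mean "classifier" standing in for an SVM.
def fit_means(X, y):
    m0 = sum(x for x, t in zip(X, y) if t == 0) / y.count(0)
    m1 = sum(x for x, t in zip(X, y) if t == 1) / y.count(1)
    return m0, m1

def predict_means(model, x):
    m0, m1 = model
    return 0 if abs(x - m0) <= abs(x - m1) else 1
```

In practice one compares the cross-validated score against the training-set score; a large gap is the signature of an overtrained MVA.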
Page 16 of 32143