W. Buckley
Results for "Modern"
Showing 20 of ~4311212 results · from arXiv, DOAJ, CrossRef, Semantic Scholar
J. McMullan, P. Burke
Ronald G. Ehrenberg, R. Smith
A. Guttmann
M. Coles
A. Zvezdin, V. Kotov
J. Sakurai, S. F. Tuan, E. Commins
S. Castaldo, F. Capasso
M. Trouillot
T. Thiemann
This is an introduction to the now fifteen-year-old research field of canonical quantum general relativity, sometimes called "loop quantum gravity". The term "modern" in the title refers to the fact that the quantum theory is based on formulating classical general relativity as a theory of connections rather than metrics, in contrast to the original version due to Arnowitt, Deser and Misner. Canonical quantum general relativity is an attempt to define a mathematically rigorous, non-perturbative, background-independent theory of Lorentzian quantum gravity in four spacetime dimensions in the continuum. The approach is minimal in that one simply analyzes the logical consequences of combining the principles of general relativity with the principles of quantum mechanics. The requirement to preserve background independence has led to new, fascinating mathematical structures which one does not see in perturbative approaches; e.g., a fundamental discreteness of spacetime seems to be a prediction of the theory, providing first substantial evidence for a theory in which the gravitational field acts as a natural UV cut-off. An effort has been made to provide a self-contained exposition of a restricted amount of material at the appropriate level of rigour which at the same time is accessible to graduate students with only basic knowledge of general relativity and quantum field theory on Minkowski space.
M. Roco
Takashi Obase, Takanori Kodama, Takao Kawasaki et al.
It has been hypothesized that the Earth may have experienced snowball events in the past, during which its surface became completely covered with ice. Previous studies used general circulation models to investigate the onset and climate of such snowball events. Using the MIROC4m coupled atmosphere--ocean climate model, this study examined the changes in the oceanic circulation during the onset of a modern snowball Earth and elucidated their evolution to steady states under the snowball climate. Abruptly changing the solar constant to 94% of its present-day value caused the modern Earth climate to turn into a snowball state after ~1300 years and initiated a rapid increase in sea ice thickness. During the onset of the snowball, extensive sea ice formation and melting of sea ice in the mid-latitudes caused substantial freshening of surface waters and salinity stratification. By contrast, such salinity stratification was absent if the duration between the change in the solar flux and the snowball onset was short. After snowball onset, the global sea ice cover and the buildup of salinity stratification caused a drastic weakening of the deep ocean circulation. However, the meridional overturning circulation resumed within several hundred years after the snowball onset because the density flux from sea ice production weakens the salinity stratification. While the evolution of the oceanic circulation would depend on the continental distribution and the evolution of continental ice sheets, our results highlight that the gradual growth of sea ice and the associated brine rejection are essential factors in the transient evolution of the oceanic circulation during snowball events.
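The runaway described in this abstract can be caricatured with a zero-dimensional energy-balance model with ice-albedo feedback: dropping the solar constant to 94% of its present value pushes the warm state past the feedback threshold and the model falls into a frozen equilibrium. This is only an illustrative sketch; the emissivity, albedo ramp, and heat capacity below are assumed toy values, not parameters of MIROC4m.

```python
# Zero-dimensional energy-balance model with ice-albedo feedback.
# All parameter values are illustrative assumptions for this sketch.
SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
EMISSIVITY = 0.61      # effective emissivity (crude greenhouse proxy)
C_HEAT = 4.0e8         # ~100 m ocean mixed layer heat capacity, J m^-2 K^-1
DT = 86400.0 * 30      # one-month time step, s

def albedo(temp_k):
    """Planetary albedo: low when warm, high when ice-covered."""
    if temp_k >= 285.0:
        return 0.30
    if temp_k <= 265.0:
        return 0.62
    # linear ramp as sea ice expands equatorward
    return 0.30 + 0.32 * (285.0 - temp_k) / 20.0

def integrate(solar_constant, t0=288.0, n_steps=12000):
    """Euler-integrate the global mean temperature to (near) equilibrium."""
    temp = t0
    for _ in range(n_steps):
        absorbed = solar_constant / 4.0 * (1.0 - albedo(temp))
        emitted = EMISSIVITY * SIGMA * temp ** 4
        temp += DT * (absorbed - emitted) / C_HEAT
    return temp

S0 = 1361.0                        # present-day solar constant, W m^-2
t_modern = integrate(S0)           # stays in the warm state, near 288 K
t_snowball = integrate(0.94 * S0)  # runs away into a frozen state
```

In this toy model the 94% run crosses the albedo ramp, where the ice-albedo feedback exceeds the longwave damping, and cools monotonically into the snowball branch; the full-strength run stays warm.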
Daniel Soliman
The impact investment market has an estimated value of almost $1.6 trillion. Significant progress has been made in determining the financial returns of impact investing. Investors are still, however, in the early stages of determining impact return. In this study, the author proposes the use of impact internal rate of return (impact IRR) to evaluate and monitor impact investments. This approach, which utilizes components of modern portfolio theory, adapted financial tools, and existing datasets, is demonstrated herein through initial use cases and examples showing how it can be employed to optimize impact.
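The abstract does not spell out the impact-IRR computation itself, but the underlying internal-rate-of-return mechanics can be sketched: find the discount rate at which the net present value of a flow series is zero. The "impact flows" below are a hypothetical series of quantified impact values, assumed purely for illustration.

```python
def npv(rate, flows):
    """Net present value of period-indexed flows at the given discount rate."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(flows))

def irr(flows, lo=-0.99, hi=10.0, tol=1e-9):
    """Internal rate of return via bisection on NPV.

    Assumes exactly one sign change of NPV on [lo, hi], which holds for
    a conventional flow series (one outflow followed by inflows).
    """
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if npv(lo, flows) * npv(mid, flows) <= 0:
            hi = mid          # root lies in the lower half
        else:
            lo = mid          # root lies in the upper half
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

# Hypothetical example: 100 units of capital deployed, then impact
# valued at 30, 40, 50, 60 units over the next four periods.
impact_flows = [-100.0, 30.0, 40.0, 50.0, 60.0]
impact_irr = irr(impact_flows)
```

The same routine works for financial flows, which is what lets an impact IRR be compared side by side with a financial IRR for the same investment.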
Ming Du, Hanna Ruth, Steven Henke et al.
Ptychography has become an indispensable tool for high-resolution, non-destructive imaging using coherent light sources. The processing of ptychographic data critically depends on robust, efficient, and flexible computational reconstruction software. We introduce Pty-Chi, an open-source ptychographic reconstruction package built on PyTorch that unifies state-of-the-art analytical algorithms with automatic differentiation methods. Pty-Chi provides a comprehensive suite of reconstruction algorithms while supporting advanced experimental parameter corrections such as orthogonal probe relaxation and multislice modeling. Leveraging PyTorch as the computational backend ensures vendor-agnostic GPU acceleration, multi-device parallelization, and seamless access to modern optimizers. An object-oriented, modular design makes Pty-Chi highly extendable, enabling researchers to prototype new imaging models, integrate machine learning approaches, or build entirely new workflows on top of its core components. We demonstrate Pty-Chi's capabilities through challenging case studies that involve limited coherence, low overlap, and unstable illumination during scanning, which highlight its accuracy, versatility, and extensibility. With community-driven development and open contribution, Pty-Chi offers a modern, maintainable platform for advancing computational ptychography and for enabling innovative imaging algorithms at synchrotron facilities and beyond.
Nizar ALHafez, Ahmad Kurdi
This paper presents a comprehensive comparison of three dominant parallel programming models in High Performance Computing (HPC): Message Passing Interface (MPI), Open Multi-Processing (OpenMP), and Compute Unified Device Architecture (CUDA). Selecting optimal programming approaches for modern heterogeneous HPC architectures has become increasingly critical. We systematically analyze these models across multiple dimensions: architectural foundations, performance characteristics, domain-specific suitability, programming complexity, and recent advancements. We examine each model's strengths, weaknesses, and optimization techniques. Our investigation demonstrates that MPI excels in distributed memory environments with near-linear scalability for communication-intensive applications, but faces communication overhead challenges. OpenMP provides strong performance and usability in shared-memory systems and loop-centric tasks, though it is limited by shared memory contention. CUDA offers substantial performance gains for data-parallel GPU workloads, but is restricted to NVIDIA GPUs and requires specialized expertise. Performance evaluations across scientific simulations, machine learning, and data analytics reveal that hybrid approaches combining two or more models often yield optimal results in heterogeneous environments. The paper also discusses implementation challenges, optimization best practices, and emerging trends such as performance portability frameworks, task-based programming, and the convergence of HPC and Big Data. This research helps developers and researchers make informed decisions when selecting programming models for modern HPC applications, emphasizing that the best choice depends on application requirements, hardware, and development constraints.
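The core distinction the paper draws between OpenMP-style shared memory and MPI-style message passing can be shown in miniature. The sketch below is a conceptual analogue only, assuming Python threads stand in for OpenMP threads and a queue stands in for MPI point-to-point sends; it is not actual OpenMP or MPI code.

```python
import threading
import queue

N_WORKERS = 4
# Partition the work: each "rank" owns a disjoint slice of 0..999.
CHUNKS = [list(range(i, 1000, N_WORKERS)) for i in range(N_WORKERS)]

# Shared-memory style (OpenMP-like): workers update one shared result,
# guarded by a lock -- the analogue of a critical section.
shared_total = 0
lock = threading.Lock()

def shared_worker(chunk):
    global shared_total
    partial = sum(chunk)      # independent work
    with lock:                # critical section around the shared update
        shared_total += partial

# Message-passing style (MPI-like): workers own their data and send
# partial results to a "rank 0" reducer through an explicit channel.
mailbox = queue.Queue()

def mp_worker(chunk):
    mailbox.put(sum(chunk))   # explicit send; no shared mutable state

for target in (shared_worker, mp_worker):
    threads = [threading.Thread(target=target, args=(c,)) for c in CHUNKS]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

mp_total = sum(mailbox.get() for _ in range(N_WORKERS))
# Both strategies reduce the same partition, so both totals equal
# 0 + 1 + ... + 999.
```

The trade-off the paper quantifies shows up even here: the shared-memory version needs synchronization around contended state, while the message-passing version pays for explicit communication instead.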
Abhinav Jain, Cindy Grimm, Stefan Lee
Dormant tree pruning is labor-intensive but essential to maintaining modern, highly productive fruit orchards. In this work we present a closed-loop visuomotor controller for robotic pruning. The controller guides the cutter through a cluttered tree environment to reach a specified cut point and ensures the cutters are perpendicular to the branch. We train the controller using a novel orchard simulation that captures the geometric distribution of branches in a target apple orchard configuration. Unlike traditional methods requiring full 3D reconstruction, our controller uses just optical flow images from a wrist-mounted camera. We deploy our learned policy with zero-shot transfer in simulation and the real world on an example V-Trellis Envy tree, achieving a 30% success rate -- approximately half the performance of an oracle planner.
Ihor Pysmennyi, Roman Kyslyi, Kyrylo Kleshch
Traditional quality assurance (QA) methods face significant challenges in addressing the complexity, scale, and rapid iteration cycles of modern software systems and are strained by the limited resources available, leading to substantial costs associated with poor quality. The object of this research is the quality assurance processes for modern distributed software applications. The subject of the research is the assessment of the benefits, challenges, and prospects of integrating modern AI-oriented tools into quality assurance processes. We performed a comprehensive analysis of the implications for both verification and validation processes, covering exploratory test analyses, equivalence partitioning and boundary analyses, metamorphic testing, finding inconsistencies in acceptance criteria (AC), static analyses, test case generation, unit test generation, test suite optimization and assessment, and end-to-end scenario execution. An end-to-end regression of a sample enterprise application, using AI agents over generated test scenarios, was implemented as a proof of concept to demonstrate the practical use of the study. The results, with only 8.3% flaky executions of generated test cases, indicate significant potential for the proposed approaches. However, the study also identified substantial challenges for practical adoption: the generation of semantically identical coverage, the "black box" nature and lack of explainability of state-of-the-art Large Language Models (LLMs), and the tendency to correct mutated test cases to match expected results, underscoring the necessity of thorough verification of both generated artifacts and test execution results. The research demonstrates AI's transformative potential for QA but highlights the importance of a strategic approach to implementing these technologies, considering the identified limitations and the need to develop appropriate verification methodologies.
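Metamorphic testing, one of the techniques this abstract surveys, sidesteps the oracle problem by checking expected relations between outputs on transformed inputs rather than exact expected values. A minimal sketch, with a hypothetical `median` function standing in as the system under test:

```python
import random

def median(xs):
    """Hypothetical system under test for this sketch."""
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2.0

def check_metamorphic(xs):
    """Metamorphic relations: no exact oracle is needed, only expected
    relationships between outputs on transformed inputs."""
    base = median(xs)
    shuffled = xs[:]
    random.shuffle(shuffled)
    mr1 = median(shuffled) == base                 # permutation invariance
    mr2 = median([3 * x for x in xs]) == 3 * base  # homogeneity under scaling
    return mr1 and mr2

# Run the relations over many randomly generated source test cases.
ok = all(check_metamorphic([random.randint(-50, 50) for _ in range(9)])
         for _ in range(100))
```

The same pattern scales to LLM-generated test inputs: the generator only has to produce source cases, and the metamorphic relations supply the verdict, which is one way to mitigate the oracle and verification concerns the study raises.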
Luijim Jose
Background/purpose. The persistent risk of semantic anachronism challenges both literary interpretation and pedagogy, as modern readers frequently impose contemporary meanings onto historically charged vocabulary. This study introduces the Contextual Diachronic Semantic Framework (CDSF), a five-layered analytical model designed to trace the evolution of word meaning over time. The primary aim is to demonstrate how CDSF uncovers semantic complexity and prevents misreading in canonical literature, while offering practical applications in literature instruction and critical reading. Materials/methods. The study employs a qualitative, text-centered methodology, applying the CDSF to five lexical items in Shirley Jackson’s The Lottery: “lottery,” “village,” “tradition,” “black box,” and “stones.” The five analytic layers—Etymological Trajectory Analysis, Diachronic Semantic Mapping, Contextual Literary Function, Cultural-Hermeneutic Embedding, and Interpretive Reconstruction—draw from historical dictionaries, linguistic corpora, literary criticism, and classroom pedagogy. Educational implications were derived by aligning findings with strategies for teaching vocabulary and symbolic language. Results. Findings reveal that each term operates as a site of historical memory, cultural critique, and thematic irony. The CDSF allows for context-sensitive interpretation, helping both scholars and students decode deeper meanings. In pedagogical terms, the framework provides a replicable tool for guiding learners beyond surface-level readings toward historically grounded literary analysis. Conclusion. The CDSF is a rigorous, interdisciplinary model that enhances scholarly interpretation and supports literature instruction. It promotes critical reading, prevents semantic misinterpretation, and equips teachers with a research-informed strategy for fostering historical empathy and interpretive depth in the classroom.
Johnathan Roberts
P. Jeschke
Page 23 of 215561