R. Noyori
Results for "Modern"
Showing 20 of ~4,316,275 results · from CrossRef, DOAJ, arXiv, Semantic Scholar
Charles Taylor
In a discussion of ideas and ideologies from Nietzsche to Gail Sheehy, from Allan Bloom to Michel Foucault, the author sorts out the good from the harmful in the modern cultivation of an authentic self. He sets forth the entire network of thought and morals that link our quest for self-creation with our impulse toward self-fashioning, and shows how such efforts must be conducted against an existing set of rules, or a gridwork of moral measurement. Seen against this network, our modern preoccupations with expression, rights, and the subjectivity of human thought reveal themselves as assets, not liabilities.
J. Clark, Y. Beyene, G. WoldeGabriel et al.
Yuliia Bondar
The purpose of the research is to provide a comprehensive analysis of employment dynamics and to identify the main factors influencing change in the context of modern economic development. Methodology. The methodological basis is a comparative analysis of employment and total population statistics, which are the main determinants that reveal the level of employment at different times. This made it possible to trace changes in trends before and during the deployment of large-scale military operations. The study is based on official statistical data and methods of dynamic analysis, generalisation and economic and statistical approaches, providing a comprehensive assessment of employment transformation. Results. The analysis showed that employment is the main element of the labour market and reflects the extent to which a society utilises its labour potential, determining the state of a country's socio-economic development. Employment not only provides the population with a means of obtaining labour income, but it is also an important factor in social stability, the reproduction of human capital, and regional development. The statistical analysis revealed a significant reduction in the employed population and a change in the trajectory of its dynamics in conditions of military upheaval. The main determinants of these changes were found to be demographic losses, large-scale external migration, a decrease in economic activity and structural imbalances in labour supply and demand. At the same time, adaptation processes in the labour market are evident, as demonstrated by the gradual stabilisation of individual employment indicators. Practical significance. The results obtained can be used to inform state employment policy and develop measures to support the labour market in the event of military challenges. They can also be used to forecast future trends in its development during the economic recovery period. Value / Originality. 
The novelty of this study lies in its comprehensive approach to assessing employment dynamics among the Ukrainian population. By combining analysis of the pre-war and wartime periods, it provides a deeper understanding of the nature of labour market transformations and their systemic consequences.
Ilia V. Chugunov, Vladimir P. Reshetnikov, Alexander A. Marchuk
A significant fraction of galaxies show warps in their discs, usually noticeable at their peripheries. The exact origin of this phenomenon is not fully established, although multiple warp formation mechanisms have been proposed. In this study, we create a sample of more than 1000 distant ($z \lesssim 2.5$) edge-on galaxies imaged by HST and JWST. For these galaxies, we measured the characteristics of warps and analysed how their parameters and frequency change with time. Our main result is that galaxies with strong warps were more prevalent in the past compared to the modern epoch. We check how selection effects and varying image quality between objects in our sample could influence our results and conclude that the varying fraction of warped galaxies is not caused by observational effects but represents genuine evolution. Such a trend may be consistent with mergers and interactions between galaxies being the primary mechanism of warp formation, as the number density of galaxies decreases with time, implying a higher rate of mergers and interactions in the past.
Lara Kreis, Justus Henneberg, Valentin Henkys et al.
Range minimum queries are frequently used in string processing and database applications including biological sequence analysis, document retrieval, and web search. Hence, various data structures have been proposed for improving their efficiency on both CPUs and GPUs. Recent work has also shown that hardware-accelerated ray tracing on modern NVIDIA RTX graphics cards can be exploited to answer range minimum queries by expressing queries as rays, which are fired into a scene of triangles representing minima of ranges at different granularities. While these approaches are promising, they suffer from at least one of three issues: severe memory overhead, high index construction time, and low query throughput. This renders these methods practically unusable on larger arrays: for example, the state-of-the-art GPU-based approaches LCA and RTXRMQ exceed the memory capacity of an NVIDIA RTX 4090 GPU for input arrays of size >= 2^29. To tackle these problems, in this work, we present GPU-RMQ, a new hierarchical approach. GPU-RMQ first constructs a hierarchy of range minimum summaries on top of the original array in a highly parallel fashion. For query answering, only the relevant portions of the hierarchy are then processed in an optimized massively parallel scan operation. Additionally, GPU-RMQ is hybrid in design, enabling the use of both ray tracing cores and CUDA cores across different levels of the hierarchy to handle queries. Our experimental evaluation shows that GPU-RMQ outperforms the state-of-the-art approaches in terms of query throughput, especially for larger arrays, while offering a significantly lower memory footprint and up to two orders of magnitude faster index construction. In particular, it achieves up to ~8x higher throughput than LCA, ~17x higher throughput than RTXRMQ, and up to ~4800x higher throughput compared to an optimized CPU-based approach.
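The hierarchical idea behind the paper can be sketched in miniature: build block-minimum summaries bottom-up, then answer a query by touching only a few elements per level and ascending through complete blocks. The following is a minimal single-threaded Python sketch of a k-ary minimum hierarchy, not the paper's GPU implementation; the function names and the branching factor are illustrative.

```python
import math

def build_hierarchy(arr, branch=4):
    """Build a hierarchy of block minima over `arr` with branching factor `branch`."""
    levels = [arr[:]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([min(prev[i:i + branch]) for i in range(0, len(prev), branch)])
    return levels

def rmq(levels, lo, hi, branch=4):
    """Minimum of arr[lo..hi] (inclusive), scanning only block boundaries per level."""
    best = math.inf
    level = 0
    while lo <= hi:
        # Peel off elements until `lo` starts a block and `hi` ends one.
        while lo <= hi and lo % branch != 0:
            best = min(best, levels[level][lo]); lo += 1
        while lo <= hi and (hi + 1) % branch != 0:
            best = min(best, levels[level][hi]); hi -= 1
        if lo > hi:
            break
        # The remaining range covers whole blocks: ascend one level.
        lo //= branch; hi //= branch; level += 1
    return best
```

Each level shrinks the range by the branching factor, so a query inspects O(branch · log n) entries; the GPU version described in the abstract instead processes the relevant hierarchy portions with parallel scans and ray-tracing cores.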
I.S. Samolygo, K.A. Andrianova, M.A. Manina et al.
<p style="font-weight: bold;"> I.S. Samolygo, K.A. Andrianova, M.A. Manina, A.S. Pestova, S.I. Erdes </p> <p> I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russian Federation </p> <p> Focus on the health of children and adolescents is a key part of the socio-demographic policy of any state. Preventive vaccination remains one of the most efficient public healthcare tools and can significantly reduce infection dissemination and child mortality. The review highlights current trends, issues and challenges facing the healthcare system in the Russian Federation and worldwide. An alarming increase in the morbidity of vaccine-preventable infections, including measles, pertussis, and pneumococcal and rotavirus infections, associated with decreased vaccination coverage, growing anti-vaccination trends, global migration and the effects of the COVID-19 pandemic is reported. Analysis of current national vaccination schedules, vaccination coverage, reasons for immunization refusals, and specific features of immune response formation in children to various immunobiological agents is presented. Special attention should be given to the importance of comprehensive preventive work, including not only extended vaccination schedules but also enhanced educational and public awareness activities among parents and the medical community. A need for systemic measures to increase confidence in vaccination, ensure equal access to vaccines, and improve immunization coverage in children is emphasized. </p> <p> <span style="font-weight: bold;">Keywords:</span> pediatric health, preventive vaccination, vaccine-preventable infections, vaccination, measles, pertussis, pneumococcus, rotavirus. 
</p> <p style="font-style: italic;"> <b>For citation:</b><span style="font-style: italic;"> Samolygo I.S., Andrianova K.A., Manina M.A., Pestova A.S., Erdes S.I.</span> Modern challenges of preventive vaccination in children: from epidemiological trends to social barriers: literature review. Russian Journal of Woman and Child Health. 2025;8(4):366–371 (in Russ.). DOI: 10.32364/2618-8430-2025-8-4-13 </p>
Tersoo Abiem, Tertsea Igbawua, Jacob Adawa
This study assessed the connection between Direct Normal Irradiance (DNI) and Total Cloud Cover (TCC) throughout Nigeria, a region with considerable yet underexploited solar energy potential. Daily DNI data from the Modern-Era Retrospective Analysis for Research and Applications Version 2 (MERRA-2) and TCC data from the ERA5 reanalysis of the European Centre for Medium-Range Weather Forecasts (ECMWF) were aggregated into monthly means and analyzed across four climatological seasons. Seasonal variability was evaluated using standard deviation, while long-term trends were examined through simple linear regression, with statistical significance appraised at p < 0.05. Pearson’s correlation coefficient was used to estimate the DNI–TCC relationship. Results show that TCC peaked during JJA in the Af and Am climates, whereas DNI peaked in DJF and SON, particularly in the BSh and BWh regions. Variability in both DNI and TCC was highest in DJF and lowest in JJA. DNI variability was highest in the Aw zone and lowest in Am, while TCC variability peaked in the Aw and BWh zones and was lowest in Af and Am. Regression analysis revealed a strong inverse relationship between DNI and TCC in the Csb and BWh zones, while Af and Am exhibited complex interactions. Correlation analysis showed the strongest negative relationship during DJF and JJA (mean r = –0.73), and the weakest during MAM (r = –0.33). Trend analysis indicated a modest increase in DNI across all climate zones, with TCC decreasing except in Af. DNI values exceeding 400 W/m² were most likely in northern zones during DJF and SON but low in the south.
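The two statistics the abstract relies on, Pearson's correlation coefficient and a least-squares trend slope, can be computed directly from monthly-mean series. A small plain-Python sketch (the data below are illustrative, not MERRA-2 or ERA5 values):

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def ols_slope(t, y):
    """Simple linear-regression (least-squares) trend slope of y against time t."""
    n = len(t)
    mt, my = sum(t) / n, sum(y) / n
    return sum((a - mt) * (b - my) for a, b in zip(t, y)) / sum((a - mt) ** 2 for a in t)
```

A strongly negative `pearson_r` between monthly DNI and TCC corresponds to the inverse relationship reported for the Csb and BWh zones; `ols_slope` over yearly indices gives the long-term trend whose significance the study tests at p < 0.05.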
Josep Plana-Riu, F. Xavier Trias, Àdel Alsalti-Baldellou et al.
Computational Fluid Dynamics (CFD) simulations are often constrained by the memory-bound nature of sparse matrix-vector operations, which eventually limits performance on modern high-performance computing (HPC) systems. This work introduces a novel approach to increase arithmetic intensity in CFD by leveraging repeated matrix block structures. The method transforms the conventional sparse matrix-vector product (SpMV) into a sparse matrix-matrix product (SpMM), enabling simultaneous processing of multiple right-hand sides. This shifts the computation towards a more compute-bound regime by reusing matrix coefficients. Additionally, an inline mesh-refinement strategy is proposed: simulations initially run on a coarse mesh to establish a statistically steady flow, then refine to the target mesh. This reduces the wall-clock time to reach transition, leading to faster convergence with equivalent computational cost. The methodology is evaluated using theoretical performance bounds and validated through three test cases: a turbulent channel flow, Rayleigh-Bénard convection, and an industrial airfoil simulation. Results demonstrate substantial speed-ups, from modest improvements in basic configurations to over 50% in the mesh-refinement setup, highlighting the benefits of integrating SpMM across all CFD operators, including divergence, gradient, and Laplacian.
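The coefficient-reuse argument behind the SpMV-to-SpMM transformation can be seen in a toy CSR kernel: each stored matrix entry is loaded once and applied to every right-hand side, raising arithmetic intensity. This is a minimal Python sketch of the idea, not the paper's implementation.

```python
def spmm_csr(indptr, indices, data, B):
    """Multiply a CSR sparse matrix by a dense matrix B (list of rows).
    Each stored coefficient `a` is loaded once and reused for all k
    right-hand sides, unlike k independent SpMV passes."""
    k = len(B[0])
    out = []
    for row in range(len(indptr) - 1):
        acc = [0.0] * k
        for p in range(indptr[row], indptr[row + 1]):
            a, col = data[p], indices[p]
            brow = B[col]
            for j in range(k):          # one load of `a` serves k outputs
                acc[j] += a * brow[j]
        out.append(acc)
    return out
```

With k right-hand sides, the flop count grows k-fold while the matrix traffic stays constant, which is precisely the shift toward the compute-bound regime the abstract describes.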
Yueze Liu, Ajay Nagi Reddy Kumdam, Ronit Kanjilal et al.
Modern roleplaying models are increasingly sophisticated, yet they consistently struggle to capture the essence of believable, engaging characters. We argue this failure stems from training paradigms that overlook the dynamic interplay of a character's internal world. Current approaches, including Retrieval-Augmented Generation (RAG), fact-based priming, literature-based learning, and synthetic data generation, exhibit recurring limitations in modeling the deliberative, value-conflicted reasoning that defines human interaction. In this paper, we identify four core concepts essential for character authenticity: Values, Experiences, Judgments, and Abilities (VEJA). We propose the VEJA framework as a new paradigm for data curation that addresses these systemic limitations. To illustrate the qualitative ceiling enabled by our framework, we present a pilot study comparing a manually curated, VEJA-grounded dataset against a state-of-the-art synthetic baseline. Using an LLM-as-judge evaluation, our findings demonstrate a significant quality gap, suggesting that a shift toward conceptually grounded data curation, as embodied by VEJA, is necessary for creating roleplaying agents with genuine depth and narrative continuity. The full dataset is available at https://github.com/HyouinKyoumaIRL/Operation-Veja
DongHyun Choi, Lucas Spangher, Chris Hidey et al.
Transformer-based Large Language Models, which suffer from high computational costs, advance so quickly that techniques proposed to streamline earlier iterations are not guaranteed to benefit more modern models. Building upon the Funnel Transformer proposed by Dai and Le (2020), which progressively compresses intermediate representations, we investigate the impact of funneling in contemporary Gemma2 Transformer architectures. We systematically evaluate various funnel configurations and recovery methods, comparing: (1) standard pretraining to funnel-aware pretraining strategies, (2) the impact of funnel-aware fine-tuning, and (3) the type of sequence recovery operation. Our results demonstrate that funneling creates information bottlenecks that propagate through deeper network layers, particularly in larger models (e.g., Gemma 7B), at times leading to unmanageable performance loss. However, carefully selecting the funneling layer and employing effective recovery strategies can substantially mitigate performance losses, achieving up to a 44% reduction in latency. Our findings highlight key trade-offs between computational efficiency and model accuracy, providing practical guidance for deploying funnel-based approaches in large-scale natural language applications.
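The compress-then-recover pattern the abstract evaluates can be illustrated with a toy mean-pooling funnel and a naive repeat-upsampling recovery. This is only a shape-level sketch on plain lists of vectors, not the Gemma2 or Funnel Transformer internals; real models pool hidden states inside attention blocks and learn the recovery.

```python
def funnel_pool(seq, stride=2):
    """Compress a sequence of vectors by mean-pooling groups of `stride` tokens."""
    return [[sum(col) / len(col) for col in zip(*seq[i:i + stride])]
            for i in range(0, len(seq), stride)]

def recover(pooled, target_len, stride=2):
    """Naive recovery: repeat each pooled vector `stride` times, then truncate.
    Deeper layers now process len(seq)/stride tokens, the source of the latency win."""
    out = []
    for v in pooled:
        out.extend([list(v)] * stride)
    return out[:target_len]
```

The pooled sequence is where the information bottleneck arises: distinct token vectors collapse into one mean, which is why the choice of funneling layer and recovery operation matters so much in the study.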
Francesco Dalmonte, Emirhan Bayar, Emre Akbas et al.
Anomaly detection in medical images is an important yet challenging task due to the diversity of possible anomalies and the practical impossibility of collecting comprehensively annotated data sets. In this work, we tackle unsupervised medical anomaly detection by proposing a modernized autoencoder-based framework, the Q-Former Autoencoder, that leverages state-of-the-art pretrained vision foundation models, such as DINO, DINOv2 and Masked Autoencoder. Instead of training encoders from scratch, we directly utilize frozen vision foundation models as feature extractors, enabling rich, multi-stage, high-level representations without domain-specific fine-tuning. We propose using the Q-Former architecture as the bottleneck, which enables control of the length of the reconstruction sequence while efficiently aggregating multiscale features. Additionally, we incorporate a perceptual loss computed using features from a pretrained Masked Autoencoder, guiding the reconstruction towards semantically meaningful structures. Our framework is evaluated on four diverse medical anomaly detection benchmarks, achieving state-of-the-art results on BraTS2021, RESC, and RSNA. Our results highlight the potential of vision foundation model encoders, pretrained on natural images, to generalize effectively to medical image analysis tasks without further fine-tuning. We release the code and models at https://github.com/emirhanbayar/QFAE.
Peng Chen, Jiaji Zhang, Hailiang Zhao et al.
In modern GPU inference, cache efficiency remains a major bottleneck. In recommendation models, embedding hit rates largely determine throughput, while in large language models, KV-cache misses substantially increase time-to-first-token (TTFT). Heuristic policies such as LRU often struggle under structured access patterns. Learning-based approaches are promising, but in practice face two major limitations: they degrade sharply when predictions are inaccurate, or they gain little even with accurate predictions due to conservative designs. Some also incur high overhead, further limiting practicality. We present LCR, a practical framework for learning-based GPU caching that delivers performance gains while ensuring robustness and efficiency. Its core algorithm, LARU, enhances LRU with machine-learned predictions and dynamically adapts to prediction accuracy through online error estimation. When predictions are accurate, LARU achieves near-optimal performance. With inaccurate predictions, it degrades gracefully to near-LRU performance. With LCR, we bridge the gap between empirical progress and theoretical advances in learning-based caching. Experiments show that LCR delivers consistent gains under realistic conditions. In DLRM and LLM scenarios, it improves throughput by up to 24.2% and reduces P99 TTFT by up to 28.3%, outperforming widely used inference systems. Even under poor predictions, its performance remains stable, demonstrating practical robustness.
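The adapt-to-accuracy idea can be sketched as an LRU cache that consults a next-access predictor only while an online error estimate stays low. This toy sketch is not the paper's LARU algorithm; the `predict` callback, the EWMA error tracker, and the trust threshold are all illustrative assumptions.

```python
from collections import OrderedDict

class PredictiveCache:
    """Toy prediction-augmented LRU. While the running prediction-error
    estimate is low, evict the entry predicted to be reused furthest in the
    future; otherwise fall back to plain LRU (graceful degradation)."""

    def __init__(self, capacity, predict, trust_threshold=0.3):
        self.cap = capacity
        self.predict = predict            # hypothetical: key -> predicted next-access time
        self.trust_threshold = trust_threshold
        self.err_ewma = 0.0               # online prediction-error estimate
        self.store = OrderedDict()        # key -> (value, prediction made at insert)

    def get(self, key, now):
        if key not in self.store:
            return None
        value, predicted = self.store.pop(key)
        # Fold the observed prediction miss into the error estimate.
        self.err_ewma = 0.9 * self.err_ewma + 0.1 * min(1.0, abs(predicted - now) / max(now, 1))
        self.store[key] = (value, self.predict(key))   # refresh recency and prediction
        return value

    def put(self, key, value, now):
        if key in self.store:
            self.store.pop(key)
        elif len(self.store) >= self.cap:
            if self.err_ewma < self.trust_threshold:
                victim = max(self.store, key=lambda k: self.store[k][1])
            else:                          # predictions unreliable: plain LRU eviction
                victim = next(iter(self.store))
            self.store.pop(victim)
        self.store[key] = (value, self.predict(key))
```

The two branches in `put` mirror the abstract's claim: near-optimal behavior when predictions are trustworthy, near-LRU behavior when they are not.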
Patrick Diehl, Noujoud Nader, Deepti Gupta
Parallel programming remains one of the most challenging aspects of High-Performance Computing (HPC), requiring deep knowledge of synchronization, communication, and memory models. While modern C++ standards and frameworks like OpenMP and MPI have simplified parallelism, mastering these paradigms is still complex. Recently, Large Language Models (LLMs) have shown promise in automating code generation, but their effectiveness in producing correct and efficient HPC code is not well understood. In this work, we systematically evaluate leading LLMs including ChatGPT 4 and 5, Claude, and LLaMA on the task of generating C++ implementations of the Mandelbrot set using shared-memory, directive-based, and distributed-memory paradigms. Each generated program is compiled and executed with GCC 11.5.0 to assess its correctness, robustness, and scalability. Results show that ChatGPT-4 and ChatGPT-5 achieve strong syntactic precision and scalable performance.
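For context on the benchmark task itself, the core of any Mandelbrot implementation is an embarrassingly parallel escape-time kernel, which is what makes it a natural test for shared-memory, directive-based, and distributed-memory paradigms. A sketch of that kernel in Python for brevity (the study's generated programs are in C++):

```python
def mandelbrot_iters(cr, ci, max_iter=100):
    """Escape-time iteration count for the point c = cr + ci*i.
    Points that never escape |z| > 2 within max_iter are treated as members."""
    zr = zi = 0.0
    for n in range(max_iter):
        zr, zi = zr * zr - zi * zi + cr, 2 * zr * zi + ci
        if zr * zr + zi * zi > 4.0:
            return n
    return max_iter
```

Because every pixel's count is independent, parallelizing amounts to distributing the (cr, ci) grid across threads or ranks, which is exactly the part the evaluated LLMs must express correctly in OpenMP or MPI.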
Sebastian Kot
This paper examines Modern Mercantilism, characterized by rising economic nationalism, strategic technological decoupling, and geopolitical fragmentation, as a disruptive shift from the post-1945 globalization paradigm. It applies Principal Component Analysis (PCA) to 768-dimensional SBERT-generated semantic embeddings of curated news articles to extract orthogonal latent factors that discriminate binary event outcomes linked to protectionism, technological sovereignty, and bloc realignments. Analysis of principal component loadings identifies key semantic features driving classification performance, enhancing interpretability and predictive accuracy. This methodology provides a scalable, data-driven framework for quantitatively tracking emergent mercantilist dynamics through high-dimensional text analytics.
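The PCA step described above reduces to an SVD of the centered embedding matrix: the rows of Vt are the orthogonal loadings, and projecting onto them gives the latent factors. A minimal NumPy sketch, with a random matrix standing in for the SBERT embeddings:

```python
import numpy as np

def pca_components(X, k):
    """Project the rows of X (n_samples x n_features, e.g. 768-dim SBERT
    embeddings) onto the top-k principal components via SVD of the
    centered matrix. Returns (scores, loadings)."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]
```

Inspecting which embedding dimensions carry large weights in `loadings` is the analogue of the loading analysis the paper uses to interpret what drives classification.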
M. B. Smirnova
Although Francisco de Quevedo’s sonnets, unlike, for example, Shakespeare’s, have not become a fact of Russian culture and literature, the existing experience of translating them is of interest from the point of view of the very possibility of conveying to the modern reader the meaning and pragmatics of the Baroque text. The peculiarities of the Spanish author’s poetic language are associated with the specifics of lyrical subjectivity, which is far from romantic and post-romantic ideas about authorship as a confession based upon personal experience, but is instead a set of masks or roles that he constructed in accordance with one or another poetic canon. One of these canonical languages was Petrarchism, which largely determined the language of Quevedo’s love sonnets. The novelty of the Spanish Baroque author lies in the fact that, relying on Petrarchist conventions, he subjects them to witty reflection and turns them into the subject of a conceptual game. At the same time, the reflective nature of the sonnet as a Renaissance genre is reduced to the poetic word. The true hero of Quevedo’s sonnets is not himself or even the feeling of love as such, but a conceit (concepto); the pragmatic goal of this kind of poetry is to surprise the reader and involve him in an intellectual adventure. Analysis of two sonnets and their Russian versions (by A. Koss and A. Geleskul) allows us to draw conclusions about different translation strategies (literal and adapting), which nevertheless lead in different ways to the weakening and erosion of the conceit. The advantage of the first translation is its philological precision. The second impresses with its poetical qualities and naturalness. However, in the first case, the endeavor to follow the original step by step leads to overly heavy syntactic and grammatical constructions. This distracts the reader, preventing him from following the sophisticated paradigm of metaphors into which the basic conceit unfolds. In the second case, the intellectual and rhetorical basis of the sonnet and the supporting elements of the final conceit are sacrificed.
Debopriya Roy Dipta, Thore Tiemann, Berk Gulmezoglu et al.
The cloud computing landscape has evolved significantly in recent years, embracing various sandboxes to meet the diverse demands of modern cloud applications. These sandboxes encompass container-based technologies like Docker and gVisor, microVM-based solutions like Firecracker, and security-centric sandboxes relying on Trusted Execution Environments (TEEs) such as Intel SGX and AMD SEV. However, the practice of placing multiple tenants on shared physical hardware raises security and privacy concerns, most notably side-channel attacks. In this paper, we investigate the possibility of fingerprinting containers through CPU frequency reporting sensors in Intel and AMD CPUs. One key enabler of our attack is that the current CPU frequency information can be accessed by user-space attackers. We demonstrate that Docker images exhibit a unique frequency signature, enabling the distinction of different containers with up to 84.5% accuracy even when multiple containers are running simultaneously in different cores. Additionally, we assess the effectiveness of our attack when performed against several sandboxes deployed in cloud environments, including Google's gVisor, AWS' Firecracker, and TEE-based platforms like Gramine (utilizing Intel SGX) and AMD SEV. Our empirical results show that these attacks can also be carried out successfully against all of these sandboxes in less than 40 seconds, with an accuracy of over 70% in all cases. Finally, we propose a noise injection-based countermeasure to mitigate the proposed attack on cloud environments.
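The attack's enabler, user-space access to current CPU frequency, and its use, turning frequency samples into a per-container signature, can both be sketched briefly. The windowed-mean signature below is a crude stand-in for the classification features the paper builds; the sysfs path is the standard Linux cpufreq interface, though whether it is readable depends on the system.

```python
def read_cur_freq_khz(path="/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq"):
    """Read the current frequency of CPU 0 (in kHz) from sysfs on Linux.
    No special privileges are needed on typical configurations, which is
    the access the paper exploits."""
    with open(path) as f:
        return int(f.read().strip())

def frequency_signature(samples, window=5):
    """Summarize a time series of frequency samples into per-window means,
    a toy stand-in for the frequency traces the attack classifies."""
    return [sum(samples[i:i + window]) / window
            for i in range(0, len(samples) - window + 1, window)]
```

Sampling in a tight loop and comparing signatures against profiles of known Docker images is, at a high level, how the fingerprinting distinguishes containers; the proposed countermeasure injects noise into exactly these readings.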
Shuo Yuan, Le Yi Wang, George Yin et al.
This paper introduces a stochastic hybrid system (SHS) framework in state-space form to capture sensor, communication, and system contingencies in modern power systems (MPS). Within this new framework, the paper concentrates on the development of state estimation methods and algorithms that provide reliable state estimation under randomly intermittent and noisy sensor data. MPSs employ diversified measurement devices for monitoring system operations that are subject to random measurement errors, and rely on communication networks to transmit data whose channels encounter random packet loss and interruptions. The contingencies and noise form two distinct and interacting stochastic processes that have a significant impact on state estimation accuracy and reliability. This paper formulates stochastic hybrid system models for MPSs, introduces coordinated observer design algorithms for state estimation, and establishes their convergence and reliability properties. A further study reveals a fundamental design tradeoff between convergence rates and steady-state error variances. Simulation studies on the IEEE 5-bus and IEEE 33-bus systems are used to illustrate the modeling methods, observer design algorithms, convergence properties, performance evaluations, and the impact of sensor system selections.
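The core difficulty, estimating state when measurements arrive intermittently, can be illustrated with a scalar Kalman-style observer that runs only the time update when a packet is lost. This is a toy sketch of the generic idea, not the paper's coordinated SHS observer design; all parameter values below are illustrative.

```python
def intermittent_observer(a, c, q, r, x0, measurements):
    """Scalar observer for x[t+1] = a*x[t] + w (var q), y[t] = c*x[t] + v (var r).
    Each entry of `measurements` is a float or None (packet lost). On loss,
    only the prediction step runs and the error covariance p grows."""
    x, p = x0, 1.0
    estimates = []
    for y in measurements:
        x, p = a * x, a * a * p + q           # time update (always)
        if y is not None:                     # measurement update (when data arrives)
            k = p * c / (c * c * p + r)
            x = x + k * (y - c * x)
            p = (1 - k * c) * p
        estimates.append(x)
    return estimates
```

The covariance growth during dropouts is the mechanism behind the convergence-versus-variance tradeoff the abstract mentions: a faster-converging gain amplifies noise, while a cautious gain tolerates dropouts at the cost of slower convergence.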
Alessandro V. M. Oliveira, Thiago Caliari, Rodolfo R. Narcizo
The modernization of an airline's fleet can reduce its operating costs, improve the perceived quality of service offered to passengers, and mitigate emissions. The present paper investigates the market incentives that airlines have to adopt technological innovation from manufacturers by acquiring new generation aircraft. We develop an econometric model of fleet modernization in the Brazilian commercial aviation over two decades. We examine the hypothesis of an inverted-U relationship between market concentration and fleet modernization and find evidence that both the extremes of competition and concentration may inhibit innovation adoption by carriers. We find limited evidence associating either hubbing activity or low-cost carriers with the more intense introduction of new types of aircraft models and variants in the industry. Finally, our results suggest that energy cost rises may provoke boosts in fleet modernization in the long term, with carriers possibly targeting more eco-efficient operations up to two years after an upsurge in fuel price.