Results for "Modern"
Showing 20 of ~4,316,245 results · from CrossRef, DOAJ, arXiv, Semantic Scholar
Alexander A. Barkovich, Ekaterina S. Astapkina
The authors present a linguistic analysis of neural network modeling of the semantic field "Internet", based on available online Russian-language content. The relevance of the study rests on the quality and quantity of the linguistic material, available in "big data" format, and on an innovative methodological approach to its meta-description with neural network instruments. The study aims to give a linguistic characterization of neural network modeling of the semantic field "Internet" in Russian-language discourse. The material was Russian-language Internet content; its volume was not limited, in order to obtain statistically representative metadata. This approach corresponds to the mainly declarative limitations on the functionality of Internet discourse. A focus on "intelligent" algorithms for processing Internet content, such as the OpenAI project that is basic to this research, ensured high referentiality of the language data. The authors used a wide range of methods, from componential analysis to discourse analysis, together with modern neural network instruments. Two-dimensional neural network modeling was carried out, with cluster and stratum analysis of language units associated with the conceptual phenomenon "Internet". The research demonstrated the potential of neural network modeling techniques for studying the semantic field "Internet": the modeling identified and verified a wide range of language units whose functioning in speech is associated with the conceptual phenomenon "Internet" as the core of the corresponding semantic field. The results are promising; the neural network modeling patterns tested in this study can confidently be implemented in linguistic practice. This, in turn, will develop the paradigm of linguistics, modernize methodological approaches to language functioning, and help identify and qualify speech innovations.
G Abarajithan, Zhenghua Ma, Francesco Restuccia et al.
Hardware-firmware integration is becoming a productivity bottleneck due to the increasing complexity of accelerators, characterized by intricate memory hierarchies and firmware-intensive execution. While numerous verification techniques focus on early-stage, approximate modeling of such systems to speed up initial development, developers still rely heavily on FPGA emulation to integrate firmware with RTL/HLS hardware, resulting in significant delays in debug iterations and time-to-market. We present FIREBRIDGE, a fast, cycle-accurate co-verification framework that bridges production firmware and RTL/gate-level hardware. FIREBRIDGE enables firmware debugging, profiling, and verification in seconds using standard simulators such as VCS, Vivado Xsim, or Xcelium, by compiling the firmware for x86 and bridging it with simulated subsystems via randomized memory bridges. Our approach provides off-chip data movement profiling, memory congestion emulation, and register-level protocol testing, which are critical for modern accelerator verification. We demonstrate a speedup of up to 50x in debug iteration over the conventional FPGA-based flow for system integration between RTL/HLS and production firmware on various types of accelerators, such as systolic arrays and CGRAs, while ensuring functional equivalence. FIREBRIDGE accelerates system integration by supporting robust co-verification of hardware and firmware, and promotes a structured, parallel development workflow tailored for teams building heterogeneous computing platforms. Repository: https://github.com/abarajithan11/axis-systolic-array/tree/master/firebridge
Николаев А.А.
The article examines modern approaches to conducting laboratory classes in chemistry within the school education system, analyzes the role of practical activity in forming key competencies in students, and emphasizes the need to integrate traditional methods with modern digital technologies such as virtual laboratories, simulators, and augmented reality tools. Examples of practical laboratory work are presented, including the use of a digital microscope, pH meter, conductometer, and thermocouple, and the stages of carrying out the work and documenting the results are described. Particular attention is paid to the prospects for developing laboratory classes in light of digitalization and project-based teaching methods, which fosters research skills, critical thinking, and motivation in schoolchildren; the importance of creating a flexible, interactive learning system capable of preparing students for modern challenges and professional activity is emphasized.
Fernando R. de Moraes Barros
Abstract: The purpose of this text is to comment on the doctoral thesis A recepção do pensamento de Nietzsche na obra literária de Dalcídio Jurandir (The reception of Nietzsche's thought in the literary work of Dalcídio Jurandir), defended by Oclécio das Chagas Lacerda in 2022 at the Escola de Filosofia, Letras e Ciências Humanas of the Universidade Federal de São Paulo (Unifesp), in light of two central themes it develops: the "anthropophagic" interpretation of Nietzsche's critique of Christian nihilism and the Amazonian sense of "transvaluation" that follows from it.
Chen Wei
Focusing on the travel notes of the American Orientalist Owen Lattimore on the Mongolian-Xinjiang Camel Road from 1926 to 1927, this paper explores the practical skills and knowledge system of this branch of the Silk Road in early modern times. Through a detailed study of the camel caravans' choices of transportation, organization and division of labor, travel equipment and security maintenance, seasonality and route selection, supply and medical care, logistics management, market transactions, currency adaptation, and the collection and transmission of business travel information, this paper reveals the various everyday skills that supported the operation of the Silk Road, and shows how camel caravans used these skills to overcome environmental and social uncertainties and to promote trade and cultural exchange. The research concludes that it was these long-accumulated and constantly practiced skills that made the Silk Road a trade and cultural network spanning Eurasia. Lattimore's travel notes are of great historical and practical significance for understanding this process.
Sahil Kale
Modern large language models integrate web search to provide real-time answers, yet it remains unclear whether they are well calibrated to use search when it is actually needed. We introduce a benchmark evaluating both the necessity and effectiveness of web access across commercial models with no access to internal states or parameters. The dataset includes a static split of 783 temporally anchored questions answerable from pre-cutoff knowledge, aimed at testing whether models invoke search based on low internal confidence, and a dynamic split of 288 post-cutoff queries designed to test whether models recognise when search is required and retrieve updated information. Web access substantially improves static accuracy for GPT-5-mini and Claude Haiku 4.5, though confidence calibration worsens. On dynamic queries, both models frequently invoke search yet remain below 70 percent accuracy due to weak query formulation. Costs per accuracy-improving call remain low, but returns diminish once initial retrieval fails. Selective invocation helps, but models become overconfident and inconsistent after search. Overall, built-in web search meaningfully improves factual accuracy and can be invoked selectively, yet models remain overconfident, skip retrieval when it is essential, and falter once initial search queries underperform. Taken together, internal web search works better as a low-latency verification layer than as a reliable analytical tool, with clear room for improvement.
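The selective-invocation behaviour the benchmark probes can be sketched as a confidence threshold: answer from parametric knowledge when confident, otherwise retrieve. The interface and threshold below are illustrative assumptions, not the paper's actual evaluation harness.

```python
def answer_with_selective_search(question, model_answer, confidence,
                                 search_fn, threshold=0.7):
    """Answer from parametric knowledge when confident; otherwise search."""
    if confidence >= threshold:
        return model_answer, False    # no retrieval invoked
    return search_fn(question), True  # low confidence -> invoke web search

# Toy usage with a stubbed search backend.
ans, used_search = answer_with_selective_search(
    "post-cutoff query", "unsure", 0.3,
    search_fn=lambda q: "retrieved answer")
```

The paper's finding that models "skip retrieval when it is essential" corresponds, in this toy framing, to a confidence estimate that is too high on post-cutoff queries.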
Maciej Wysocki, Paweł Sakowski
This paper investigates the important problem of appropriate variance-covariance matrix estimation in Modern Portfolio Theory. We propose a novel framework for variance-covariance matrix estimation for portfolio optimization purposes, based on deep learning models. We employ long short-term memory (LSTM) recurrent neural networks (RNN) along with two probabilistic deep learning models, DeepVAR and GPVAR, for the task of one-day-ahead multivariate forecasting. We then use these forecasts to optimize portfolios of stocks and cryptocurrencies. Our analysis presents results across different combinations of observation windows and rebalancing periods to compare the performance of classical and deep learning variance-covariance estimation methods. The study concludes that although the performance of the strategies (portfolios) differed significantly between combinations of parameters, the best results in terms of the information ratio and annualized returns are generally obtained using the LSTM-RNN models. Moreover, longer observation windows translate into better performance of the deep learning models, indicating that these methods require longer windows to efficiently capture the long-term dependencies of the variance-covariance matrix structure. Strategies with less frequent rebalancing typically perform better than those with the shortest rebalancing windows across all considered methods.
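As a hedged sketch of the downstream optimization step, a one-day-ahead covariance forecast (here a hand-written stub standing in for the LSTM/DeepVAR/GPVAR output) can be turned into global minimum-variance portfolio weights via the standard closed form w = S⁻¹1 / (1ᵀS⁻¹1); the paper's actual objective may differ.

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum-variance weights w = S^-1 1 / (1' S^-1 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)   # solve avoids an explicit matrix inverse
    return w / w.sum()               # normalize so weights sum to one

# Stand-in for a model forecast of tomorrow's variance-covariance matrix.
cov_forecast = np.array([[0.04, 0.01],
                         [0.01, 0.09]])
w = min_variance_weights(cov_forecast)   # overweights the low-variance asset
```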
Md. Nayeem, Md Shamse Tabrej, Kabbojit Jit Deb et al.
Automatic Speech Recognition (ASR) has undergone a profound transformation over the past decade, driven by advances in deep learning. This survey provides a comprehensive overview of the modern era of ASR, charting its evolution from traditional hybrid systems, such as Gaussian Mixture Model-Hidden Markov Models (GMM-HMMs) and Deep Neural Network-HMMs (DNN-HMMs), to the now-dominant end-to-end neural architectures. We systematically review the foundational end-to-end paradigms: Connectionist Temporal Classification (CTC), attention-based encoder-decoder models, and the Recurrent Neural Network Transducer (RNN-T), which established the groundwork for fully integrated speech-to-text systems. We then detail the subsequent architectural shift towards Transformer and Conformer models, which leverage self-attention to capture long-range dependencies with high computational efficiency. A central theme of this survey is the parallel revolution in training paradigms. We examine the progression from fully supervised learning, augmented by techniques like SpecAugment, to the rise of self-supervised learning (SSL) with foundation models such as wav2vec 2.0, which drastically reduce the reliance on transcribed data. Furthermore, we analyze the impact of large-scale, weakly supervised models like Whisper, which achieve unprecedented robustness through massive data diversity. The paper also covers essential ecosystem components, including key datasets and benchmarks (e.g., LibriSpeech, Switchboard, CHiME), standard evaluation metrics (e.g., Word Error Rate), and critical considerations for real-world deployment, such as streaming inference, on-device efficiency, and the ethical imperatives of fairness and robustness. We conclude by outlining open challenges and future research directions.
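The CTC paradigm the survey reviews rests on a simple collapse rule at decode time: take the highest-scoring label per frame, merge consecutive repeats, then drop blanks. A minimal greedy-decoding sketch (label indices are illustrative):

```python
BLANK = 0   # index of the CTC blank label

def ctc_greedy_decode(frame_argmax):
    """Collapse per-frame argmax labels: merge repeats, then drop blanks."""
    out, prev = [], None
    for label in frame_argmax:
        if label != prev and label != BLANK:
            out.append(label)
        prev = label
    return out

# Frames [1, 1, 0, 1, 2, 2, 0] decode to [1, 1, 2]: the blank between the
# two 1s keeps them distinct, while the repeated 2 is merged.
decoded = ctc_greedy_decode([1, 1, 0, 1, 2, 2, 0])
```

Beam search over the full label posterior improves on this greedy rule, but the collapse convention is the same.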
Marco Cafiso, Paolo Paradisi
The Hopfield network (HN) is a classical model of associative memory whose dynamics are closely related to the Ising spin system with 2-body interactions. Stored patterns are encoded as minima of an energy function shaped by a Hebbian learning rule, and retrieval corresponds to convergence towards these minima. Modern Hopfield Networks (MHNs) introduce p-body interactions among neurons with p greater than 2 and, more recently, also exponential interaction functions, which significantly improve the network's storage and retrieval capacity. While the criticality of HNs and p-body MHNs has been extensively studied since the 1980s, the investigation of critical behavior in exponential MHNs is still in its early stages. Here, we study a stochastic exponential MHN (SMHN) with multiplicative salt-and-pepper noise. Taking the noise probability p as the control parameter, the average overlap parameter Q and a diffusion scaling H are taken as order parameters. In particular, H is related to the time-correlation features of the network, with H greater than 0.5 signaling the emergence of persistent time memory. We find the emergence of a critical transition in both Q and H, with the critical noise level weakly decreasing as the load N increases. Notably, for each load N, the diffusion scaling H highlights a transition between a sub- and a super-critical regime, both with short-range correlated dynamics. Conversely, the critical regime, found for p around 0.23-0.3, displays long-range correlated dynamics with highly persistent temporal memory, marked by a high value of H around 1.3.
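A deterministic retrieval step in an exponential MHN can be sketched with the well-known softmax update ξ ← softmax(βXξ)ᵀX (rows of X are stored patterns); the paper's stochastic dynamics and salt-and-pepper noise are omitted here, and β and the patterns are illustrative.

```python
import numpy as np

def mhn_retrieve(patterns, query, beta=4.0, steps=3):
    """Exponential-MHN retrieval: xi <- softmax(beta * X xi) applied to X."""
    X = np.asarray(patterns, dtype=float)   # rows are stored patterns
    xi = np.asarray(query, dtype=float)
    for _ in range(steps):
        a = beta * (X @ xi)
        a = np.exp(a - a.max())             # numerically stable softmax
        xi = (a / a.sum()) @ X              # convex mixture of stored patterns
    return xi

stored = [[1, 1, -1, -1],
          [-1, 1, 1, -1]]
noisy = [1, 1, -1, 1]                       # pattern 0 with one flipped bit
recovered = mhn_retrieve(stored, noisy)     # converges back to pattern 0
```

The exponential interaction is what gives these networks their large storage capacity: the softmax sharply concentrates on the best-matching pattern.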
Josep Plana-Riu, F. Xavier Trias, Àdel Alsalti-Baldellou et al.
Computational Fluid Dynamics (CFD) simulations are often constrained by the memory-bound nature of sparse matrix-vector operations, which eventually limits performance on modern high-performance computing (HPC) systems. This work introduces a novel approach to increase arithmetic intensity in CFD by leveraging repeated matrix block structures. The method transforms the conventional sparse matrix-vector product (SpMV) into a sparse matrix-matrix product (SpMM), enabling simultaneous processing of multiple right-hand sides. This shifts the computation towards a more compute-bound regime by reusing matrix coefficients. Additionally, an inline mesh-refinement strategy is proposed: simulations initially run on a coarse mesh to establish a statistically steady flow, then refine to the target mesh. This reduces the wall-clock time to reach transition, leading to faster convergence with equivalent computational cost. The methodology is evaluated using theoretical performance bounds and validated through three test cases: a turbulent channel flow, Rayleigh-Bénard convection, and an industrial airfoil simulation. Results demonstrate substantial speed-ups - from modest improvements in basic configurations to over 50% in the mesh-refinement setup - highlighting the benefits of integrating SpMM across all CFD operators, including divergence, gradient, and Laplacian.
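The SpMV-to-SpMM shift can be illustrated on a hand-rolled CSR matrix: in SpMM every stored coefficient is reused across all right-hand-side columns, which is exactly the coefficient reuse that raises arithmetic intensity. The loops below are a pedagogical sketch, not the paper's CFD operators.

```python
import numpy as np

def csr_spmv(indptr, indices, data, x):
    """y = A x for a CSR matrix; each coefficient is used for one vector."""
    y = np.zeros(len(indptr) - 1)
    for row in range(len(y)):
        for k in range(indptr[row], indptr[row + 1]):
            y[row] += data[k] * x[indices[k]]
    return y

def csr_spmm(indptr, indices, data, X):
    """Y = A X; each coefficient data[k] is reused across all columns of X."""
    Y = np.zeros((len(indptr) - 1, X.shape[1]))
    for row in range(Y.shape[0]):
        for k in range(indptr[row], indptr[row + 1]):
            Y[row] += data[k] * X[indices[k]]   # one load, many flops
    return Y

# 3x3 sparse matrix [[2, 0, 1], [0, 3, 0], [4, 0, 0]] in CSR form.
indptr = np.array([0, 2, 3, 4])
indices = np.array([0, 2, 1, 0])
data = np.array([2.0, 1.0, 3.0, 4.0])

X = np.eye(3)                            # 3 right-hand sides batched
Y = csr_spmm(indptr, indices, data, X)   # equals the dense matrix itself
```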
O. P. Makarchuk, D. M. Karvatskyi
In the present paper, we study a set that can be treated as a generalised set of subsums for a geometric series. This object was discovered independently in various mathematical contexts. For instance, it is closely related to various systems of representation of real numbers. The main object of this paper was notably studied by R. Kenyon, who raised the question of the Lebesgue measure of the set and conjectured that it is positive. Later, Z. Nitecki confirmed the conjecture using nontrivial topological techniques. However, this result is quite limited, as the particular case must satisfy a rigid homogeneity condition. Despite this progress, the problem remained understudied in a general framework. The study of topological, metric, and fractal properties of the set of subsums for a numerical series is a separate research direction in mathematics. On the other hand, the topic is related to another modern mathematical problem, namely, deepening the Jessen-Wintner theorem for infinite Bernoulli convolutions and their generalisations. The essence of the problem is to reveal the necessary and sufficient conditions for the probability distribution of a random subsum of a geometric series to be absolutely continuous or singular. The Jessen-Wintner theorem guarantees that the distribution is pure (purely discrete, purely singular, or purely absolutely continuous). Meanwhile, the Levy theorem gives the necessary and sufficient condition for the distribution to be discrete. Since the set of subsums for an absolutely convergent series coincides with the set of possible outcomes of the corresponding probability distribution, under certain conditions this allows us to apply various probability techniques to its further investigation. In particular, some techniques help us to prove that the above sets have positive Lebesgue measure and allow us to deepen the Jessen-Wintner theorem under certain conditions.
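As a concrete illustration of the object under study, a finite truncation of the set of subsums of the geometric series Σ qⁿ can be enumerated directly; the parameters below are illustrative, not the paper's generalised setting.

```python
def subsums(q, N):
    """All subset sums of the first N terms q, q^2, ..., q^N."""
    sums = {0.0}
    for n in range(1, N + 1):
        term = q ** n
        sums |= {s + term for s in sums}   # include or exclude the n-th term
    return sorted(sums)

# For q = 1/2 the first four terms already give all dyadic rationals k/16,
# hinting at why the full set of subsums is the whole interval [0, 1];
# for smaller q (e.g. q = 1/3) gaps appear and the set becomes Cantor-like.
S = subsums(0.5, 4)
```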
Minje Kim, Jan Skoglund
This paper explores the integration of model-based and data-driven approaches within the realm of neural speech and audio coding systems. It highlights the challenges posed by the subjective evaluation processes of speech and audio codecs and discusses the limitations of purely data-driven approaches, which often require inefficiently large architectures to match the performance of model-based methods. The study presents hybrid systems as a viable solution, offering significant improvements to the performance of conventional codecs through meticulously chosen design enhancements. Specifically, it introduces a neural network-based signal enhancer designed to post-process existing codecs' output, along with the autoencoder-based end-to-end models and LPCNet--hybrid systems that combine linear predictive coding (LPC) with neural networks. Furthermore, the paper delves into predictive models operating within custom feature spaces (TF-Codec) or predefined transform domains (MDCTNet) and examines the use of psychoacoustically calibrated loss functions to train end-to-end neural audio codecs. Through these investigations, the paper demonstrates the potential of hybrid systems to advance the field of speech and audio coding by bridging the gap between traditional model-based approaches and modern data-driven techniques.
Prashant Serai, Peidong Wang, Eric Fosler-Lussier
Modeling the errors of a speech recognizer can help simulate errorful recognized speech data from plain text, which has proven useful for tasks like discriminative language modeling and improving the robustness of NLP systems where limited or even no audio data is available at training time. Previous work typically considered replicating the behavior of GMM-HMM based systems, but the behavior of more modern posterior-based neural network acoustic models is not the same and requires adjustments to the error prediction model. In this work, we extend a prior phonetic-confusion-based model for predicting speech recognition errors in two ways: first, we introduce a sampling-based paradigm that better simulates the behavior of a posterior-based acoustic model; second, we investigate replacing the confusion matrix with a sequence-to-sequence model in order to introduce context dependency into the prediction. We evaluate the error predictors in two ways: first by predicting the errors made by a Switchboard ASR system on unseen data (Fisher), and then by using the same predictor to estimate the behavior of an unrelated cloud-based ASR system on a novel task. Sampling greatly improves predictive accuracy within a 100-guess paradigm, while the sequence model performs similarly to the confusion matrix.
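The sampling-based idea can be sketched as drawing each recognized phone from its row of a confusion matrix instead of always taking the most confusable one. The phone inventory and probabilities below are toy values, not estimated from any real ASR system.

```python
import numpy as np

rng = np.random.default_rng(0)

phones = ["ae", "eh", "ih"]
confusion = np.array([[0.80, 0.15, 0.05],   # row: reference phone
                      [0.10, 0.85, 0.05],   # col: recognized phone
                      [0.05, 0.10, 0.85]])

def sample_recognized(ref_seq):
    """Sample a recognized phone per reference phone from its confusion row."""
    idx = {p: i for i, p in enumerate(phones)}
    return [phones[rng.choice(len(phones), p=confusion[idx[p]])]
            for p in ref_seq]

hyp = sample_recognized(["ae", "eh", "ih"])   # one stochastic hypothesis
```

Repeating the draw yields many plausible errorful hypotheses per input, mimicking the variability of a posterior-based acoustic model.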
Boris I. Bednyi, Nikolay V. Rybakov, Nadezhda A. Khodeeva
Modern Russian postgraduate school is institutionally oriented towards the reproduction of the personnel potential for science and higher education. Since the career trajectories of a significant part of PhD graduates go beyond the academic labor market, the scientific and pedagogical community is discussing the prospects for the development of so-called professional postgraduate studies in Russia, which should provide targeted training of highly qualified personnel for knowledge-intensive sectors of the economy and the sphere of intellectual services. The discourse on professional postgraduate studies is focused on the possibility of adapting the effective practices of foreign universities and, unfortunately, is currently not supported by quantitative data on the demand for such a format of postgraduate training in Russia. The purpose of this study is an empirical analysis of the demand for professional postgraduate studies in the field of technical sciences. Using data on PhD graduates who successfully defended dissertations in technical sciences in 2019 as an example, for the first time a quantitative assessment was made of the prevalence of practice-oriented dissertations whose authors are employees of organizations in the knowledge-intensive sectors of the economy. The empirical basis of the study was the publicly available data on the defense of dissertations for the degree of candidate of technical sciences in Russia in 2019 (N=1663). For detailed analysis, dissertation materials were selected that contained information about postgraduate studies and the place of employment of the dissertation authors (N=715).
As a result of the study, parameters were determined that characterize the prevalence in Russia of practice-oriented dissertations across various disciplinary groups of technical sciences, including: the proportion of PhD graduates employed outside the academic sphere; the proportion of dissertations thematically related to their authors' professional activities; the prevalence of dissertations prepared at enterprises of the real sector of the economy; and differences in the socio-demographic characteristics and publication activity of PhD graduates working on dissertations at universities and at science-intensive business organizations. On the basis of the analysis, a conclusion is drawn about the expediency of developing professional postgraduate programs in engineering and technology aimed at staffing the innovation sphere, as well as legitimizing special requirements for these programs and for the practice-oriented dissertations prepared during their implementation.
Javier Mauricio García Mogollón, Ramiro Gamboa Suárez, Luis Alfredo Jiménez Rodríguez
Objective: To evaluate the use of the Global Reporting Initiative (GRI) standard and its ability to address social responsibility problems in the university context, determining the adaptation of internal management processes to changes in modern organizations and verifying corporate social responsibility. Methodology: A case study was carried out in a higher education institution through documentary review and qualitative research with an exploratory-descriptive design, using bibliometric consultation as the main technique. Findings and discussion: Some Colombian universities, both public and private, have adopted GRI standards to report on their performance in social, environmental and economic responsibility. However, only two universities in Colombia have fully implemented these standards. It was noted that instead of full GRI reports, some universities generate reports related to the Sustainable Development Goals (SDGs), which may confuse readers about true accountability. Conclusions: There is a commitment to transparency and accountability, but the methods and standards adopted vary. Although compliant sustainability reports have been developed, they do not all follow specific GRI guidelines. The lack of widespread adoption of GRI standards in most Colombian universities limits their potential as agents of social and environmental change, and contributes to less clear and effective accountability processes.
Reese Kuper, Ipoom Jeong, Yifan Yuan et al.
As semiconductor power density no longer remains constant as the process technology scales down, modern CPUs are integrating capable data accelerators on chip, aiming to improve performance and efficiency for a wide range of applications and usages. One such accelerator is the Intel Data Streaming Accelerator (DSA), introduced in Intel 4th Generation Xeon Scalable CPUs (Sapphire Rapids). DSA targets data movement operations in memory, which are common sources of overhead in datacenter workloads and infrastructure. In addition, it is made much more versatile by supporting a wider range of operations on streaming data, such as CRC32 calculations, delta record creation/merging, and data integrity field (DIF) operations. This paper sets out to introduce the latest features supported by DSA, deep-dive into its versatility, and analyze its throughput benefits through a comprehensive evaluation. Along with the analysis of its characteristics and the rich software ecosystem of DSA, we summarize several insights and guidelines for programmers to make the most of DSA, and use an in-depth case study of DPDK Vhost to demonstrate how these guidelines benefit a real application.
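For context, one of the streaming operations DSA supports, CRC32, has a simple CPU-side baseline; on a DSA-equipped system this work would be described in a descriptor and offloaded, whereas the sketch below is only the software computation being replaced (the buffer size is arbitrary).

```python
import zlib

buf = bytes(range(256)) * 1024            # 256 KiB test buffer
checksum = zlib.crc32(buf)                # whole-buffer CRC32 on the CPU

# Chunked computation with a running CRC gives the same result, mirroring
# how a streaming engine processes data piecewise.
running = 0
for off in range(0, len(buf), 4096):
    running = zlib.crc32(buf[off:off + 4096], running)
```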
Somya Swarnkar, Rittick Roy, Tejinder Kaur et al.
The dual nature of matter and radiation and the concept of the structure of an atom share a number of key conceptual elements from quantum mechanics. Despite the similarities, we find that the structure of an atom is well understood by students, in contrast to wave-particle duality. The study analyzes students' comprehension of these two concepts through a semi-structured focus-group interview and a questionnaire. From students' performance on the questionnaire and their descriptive responses, we find that the difficulties in their learning and understanding reflect the treatment of the respective topic in the curriculum: the structure of an atom is introduced early and revisited repeatedly, whereas the dual nature of matter and radiation is introduced late and abruptly. Based on our findings, we propose reforms to the present curriculum that are necessary for an improved way of introducing concepts of modern physics, such as wave-particle duality, to Indian students.
Moses Ike, Kandy Phan, Keaton Sadoski et al.
Modern Industrial Control Systems (ICS) attacks evade existing tools by using knowledge of ICS processes to blend their activities with benign Supervisory Control and Data Acquisition (SCADA) operation, causing physical-world damage. We present SCAPHY to detect ICS attacks in SCADA by leveraging the unique execution phases of SCADA to identify the limited set of legitimate behaviors for controlling the physical world in different phases, which differentiates them from attackers' activities. For example, it is typical for SCADA to set up ICS device objects during initialization, but anomalous during process control. To extract the unique behaviors of SCADA execution phases, SCAPHY first leverages open ICS conventions to generate a novel physical process dependency and impact graph (PDIG) to identify disruptive physical states. SCAPHY then uses PDIG to inform a physical process-aware dynamic analysis, whereby code paths of SCADA process-control execution are induced to reveal API call behaviors unique to legitimate process-control phases. Using this established behavior, SCAPHY selectively monitors attackers' physical-world-targeted activities that violate legitimate process-control behavior. We evaluated SCAPHY in a U.S. national lab ICS testbed environment. Using diverse ICS deployment scenarios and attacks across 4 ICS industries, SCAPHY achieved 95% accuracy and 3.5% false positives (FP), compared to 47.5% accuracy and 25% FP for existing work. We also analyze SCAPHY's resilience to futuristic attacks where the attacker knows our approach.
Moisés Silva-Muñoz, Alberto Franzin, Hugues Bersini
Database systems play a central role in modern data-centered applications. Their performance is thus a key factor in the efficiency of data processing pipelines. Modern database systems expose several parameters that users and database administrators can configure to tailor the database settings to the specific application considered. While this task has traditionally been performed manually, in recent years several methods have been proposed to automatically find the best parameter configuration for a database. Many of these methods, however, use statistical models that require large amounts of data and fail to represent all the factors that impact the performance of a database, or implement complex algorithmic solutions. In this work we study the potential of a simple model-free general-purpose configuration tool to automatically find the best parameter configuration of a database. We use the irace configurator to automatically find the best parameter configuration for the Cassandra NoSQL database using the YCSB benchmark under different scenarios. We establish a reliable experimental setup, obtain speedups of up to 30% over the default configuration in terms of throughput, and provide an analysis of the configurations obtained.
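The model-free tuning loop can be sketched as sample-evaluate-keep-best; note that irace itself is an R package with a more sophisticated racing procedure, so the random search below is only a hedged stand-in, the parameter names are not real Cassandra options, and the benchmark is a stub in place of a YCSB run.

```python
import random

SPACE = {
    "concurrent_writes": [16, 32, 64, 128],     # illustrative names/values
    "compaction_throughput": [16, 32, 64],
}

def benchmark(cfg):
    """Stub cost standing in for a YCSB run; lower is better."""
    return (abs(cfg["concurrent_writes"] - 64)
            + abs(cfg["compaction_throughput"] - 32))

def random_search(trials=200, seed=1):
    """Sample configurations, evaluate each, keep the best seen."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(trials):
        cfg = {k: rng.choice(v) for k, v in SPACE.items()}
        cost = benchmark(cfg)
        if cost < best_cost:
            best, best_cost = cfg, cost
    return best, best_cost

best_cfg, cost = random_search()
```

In the real setting each evaluation is a full database benchmark run, which is why sample-efficient configurators like irace are preferable to naive random search.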
Page 48 of 215,813