QoE-aware edge server placement in mobile edge computing using an enhanced genetic algorithm
Jinxiang Sha, Jintao Wu, Mingliang Wang
et al.
Mobile Edge Computing (MEC) enhances service quality by decentralizing computational resources to network edges, thereby reducing latency and improving Quality of Service (QoS). However, the spatial distribution of edge servers critically impacts network transmission efficiency, while heterogeneous user perceptions of QoS metrics frequently lead to suboptimal Quality of Experience (QoE). Current research on Edge Server Placement (ESP) predominantly focuses on localized optimization of QoS metrics, yet fails to adequately incorporate systematic QoE modeling and coordinated optimization frameworks, leaving a significant gap between how resources are allocated and how users actually experience the service. To address this gap, this study establishes a formalized QoE-aware Edge Server Placement (EESP) framework by rigorously characterizing the interdependence between QoE and QoS. We first prove the NP-completeness of the EESP problem through computational complexity analysis. Subsequently, we develop an Integer Linear Programming-based exact solver (EESP-O) for small-scale scenarios and propose an Enhanced Genetic Algorithm (EESP-EGA) for large-scale deployments. The EESP-EGA integrates adaptive crossover probability mechanisms and elite retention strategies to achieve near-optimal solutions for complex real-world configurations. Experimental evaluations conducted on a broad range of real-world datasets demonstrate that the proposed method outperforms several existing representative approaches in terms of QoE.
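As a concrete illustration of the two mechanisms the abstract names, the following is a minimal sketch of a genetic algorithm with adaptive crossover probability and elite retention. The chromosome encoding, fitness function, and all parameter values are hypothetical placeholders, not the paper's actual EESP-EGA.

```python
import random

# Sketch of adaptive crossover + elite retention. The fitness function and
# binary placement encoding are illustrative stand-ins, not the paper's model.
POP, GENES, ELITES, GENERATIONS = 40, 20, 2, 100

def fitness(chrom):
    # Placeholder objective: reward placing servers (1s) evenly.
    return -abs(sum(chrom) - GENES // 2)

def adaptive_pc(f, f_avg, f_max, pc_hi=0.9, pc_lo=0.5):
    # Fitter-than-average parents cross over less often, protecting good genes.
    if f_max == f_avg or f < f_avg:
        return pc_hi
    return pc_hi - (pc_hi - pc_lo) * (f - f_avg) / (f_max - f_avg)

def crossover(a, b):
    cut = random.randint(1, GENES - 1)
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    scored = sorted(pop, key=fitness, reverse=True)
    next_pop = [c[:] for c in scored[:ELITES]]          # elite retention
    fits = [fitness(c) for c in pop]
    f_avg, f_max = sum(fits) / POP, max(fits)
    while len(next_pop) < POP:
        a, b = random.sample(scored[:POP // 2], 2)      # truncation selection
        pc = adaptive_pc(max(fitness(a), fitness(b)), f_avg, f_max)
        child = crossover(a, b) if random.random() < pc else a[:]
        if random.random() < 0.05:                      # mutation
            i = random.randrange(GENES); child[i] ^= 1
        next_pop.append(child)
    pop = next_pop

print(max(pop, key=fitness))
```

The adaptive rule lowers the crossover probability for fitter-than-average parent pairs, a common heuristic for preserving good building blocks while keeping exploration high elsewhere in the population.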
Electronic computers. Computer science
A Synergistic Multi-Agent Framework for Resilient and Traceable Operational Scheduling from Unstructured Knowledge
Luca Cirillo, Marco Gotelli, Marina Massei
et al.
In capital-intensive industries, operational knowledge is often trapped in unstructured technical manuals, creating a barrier to efficient and reliable maintenance planning. This work addresses the need for an integrated system that can automate knowledge extraction and generate optimized, resilient operational plans. A synergistic multi-agent framework is introduced that transforms unstructured documents into a structured knowledge base using a self-validating pipeline. This validated knowledge feeds a scheduling engine that combines multi-objective optimization with discrete-event simulation to generate robust, capacity-aware plans. The framework was validated on a complex maritime case study. The system successfully constructed a high-fidelity knowledge base from unstructured manuals, and the scheduling engine produced a viable, capacity-aware operational plan for 118 interventions. The optimized plan respected all daily (6) and weekly (28) task limits, executing 64 tasks on their nominal date, bringing 8 forward, and deferring 46 by an average of only 2.0 days (95th percentile 4.8 days) to smooth the workload and avoid bottlenecks. An interactive user interface with a chatbot and planning calendar provides verifiable “plan-to-page” traceability, demonstrating a novel, end-to-end synthesis of document intelligence, agentic AI, and simulation to unlock strategic value from legacy documentation in high-stakes environments.
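To make the capacity constraints concrete, here is a minimal greedy smoother over the limits quoted above (6 tasks/day, 28 tasks/week). The task dates and the one-day deferral step are hypothetical; the actual framework couples multi-objective optimization with discrete-event simulation rather than this greedy rule.

```python
from collections import defaultdict
from datetime import date, timedelta

# Illustrative greedy smoother for daily/weekly task caps. Not the paper's
# scheduling engine; it only demonstrates the capacity constraints.
DAY_CAP, WEEK_CAP = 6, 28

def week_of(d):
    iso = d.isocalendar()
    return (iso[0], iso[1])  # (ISO year, ISO week)

def smooth(nominal_dates):
    day_load, week_load, plan = defaultdict(int), defaultdict(int), []
    for d in sorted(nominal_dates):
        slot = d
        # Defer one day at a time until both capacity limits are respected.
        while day_load[slot] >= DAY_CAP or week_load[week_of(slot)] >= WEEK_CAP:
            slot += timedelta(days=1)
        day_load[slot] += 1
        week_load[week_of(slot)] += 1
        plan.append((d, slot, (slot - d).days))
    return plan

tasks = [date(2024, 3, 4)] * 9 + [date(2024, 3, 5)] * 5  # hypothetical workload
for nominal, scheduled, deferral in smooth(tasks):
    print(nominal, "->", scheduled, f"(+{deferral}d)")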
Electronic computers. Computer science
Mediating and moderating roles of AI literacy: How it shapes the impacts of psychological resilience on work stress and job burnout among young university teachers in China
Weiwei Yin, Guofang Ren, Guowei Zhang
In the current context of China's higher education, work stress is a frequent challenge for young university teachers, often co-occurring with job burnout, a significant issue associated with both their teaching effectiveness and their career development. The purpose of this study is to explore the mediating roles of AI literacy and psychological resilience in the relationship between work stress and job burnout among young university teachers, as well as how AI literacy moderates the associations among work stress, psychological resilience, and job burnout. A nationwide survey was conducted, involving 411 university teachers. The main findings are as follows: (1) Both AI literacy and psychological resilience play mediating roles in the relationship between work stress and job burnout. When they act as multiple mediators, the model fits significantly better than when either serves as a single mediator. (2) AI literacy moderates the associations between work stress and job burnout, as well as between psychological resilience and job burnout; in essence, the intensity of these relationships varies with the level of AI literacy. (3) The mediating effect of psychological resilience is also associated with AI literacy, suggesting that AI literacy can moderate the way psychological resilience mediates the relationship between work stress and job burnout. These findings provide practical evidence and recommendations for alleviating job burnout by improving AI literacy and psychological resilience among young university teachers.
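For readers who want the statistical mechanics spelled out, below is a sketch of a moderated-mediation check on synthetic data, in the spirit of the model described (stress -> resilience -> burnout, with AI literacy moderating the second path). All variable constructions and effect sizes are invented; the study's actual measurement model and fit comparison are not reproduced here.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic moderated-mediation illustration; coefficients are invented.
rng = np.random.default_rng(0)
n = 411
X = rng.normal(size=n)                                   # work stress
W = rng.normal(size=n)                                   # AI literacy
M = 0.5 * X + rng.normal(size=n)                         # resilience (mediator)
Y = 0.3 * X + (0.4 - 0.2 * W) * M + rng.normal(size=n)   # job burnout

# Path a: X -> M
a = sm.OLS(M, sm.add_constant(X)).fit().params[1]
# Paths b and b*W: M -> Y with an M x W interaction (the moderation)
Z = sm.add_constant(np.column_stack([X, M, W, M * W]))
fit = sm.OLS(Y, Z).fit()
b, b_int = fit.params[2], fit.params[4]

# Index of moderated mediation: how the indirect effect a*b shifts with W.
print("indirect effect at mean W:", a * b)
print("index of moderated mediation:", a * b_int)
```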
Electronic computers. Computer science
You Can't Get There From Here: Redefining Information Science to address our sociotechnical futures
Scott Humr, Mustafa Canan
Current definitions of Information Science are inadequate both for comprehensively describing the nature of its field of study and for addressing the problems arising from intelligent technologies. The ubiquitous rise of artificial intelligence applications and their impact on society demands that the field of Information Science acknowledge the sociotechnical nature of these technologies. Definitions of Information Science over the last six decades have inadequately addressed the environmental, human, and social aspects of these technologies. This perspective piece advocates for an expanded definition of Information Science that fully includes the sociotechnical impacts information has on the conduct of research in this field. Such an expanded definition should stimulate conversation and widen the interdisciplinary lens needed to address how intelligent technologies may be incorporated more fairly into society and our lives.
Super Kawaii Vocalics: Amplifying the "Cute" Factor in Computer Voice
Yuto Mandai, Katie Seaborn, Tomoyasu Nakano
et al.
"Kawaii" is the Japanese concept of cute, which carries sociocultural connotations related to social identities and emotional responses. Yet, virtually all work to date has focused on the visual side of kawaii, including in studies of computer agents and social robots. In pursuit of formalizing the new science of kawaii vocalics, we explored what elements of voice relate to kawaii and how they might be manipulated, manually and automatically. We conducted a four-phase study (grand N = 512) with two varieties of computer voices: text-to-speech (TTS) and game character voices. We found kawaii "sweet spots" through manipulation of fundamental and formant frequencies, but only for certain voices and to a certain extent. Findings also suggest a ceiling effect for the kawaii vocalics of certain voices. We offer empirical validation of the preliminary kawaii vocalics model and an elementary method for manipulating kawaii perceptions of computer voice.
A Penny Synthesized is a Penny Earned? An Exploratory Analysis of Accuracy in the SIPP Synthetic Beta
Jordan Stanley, Evan Totty
Electronic computers. Computer science
LSQCA: Resource-Efficient Load/Store Architecture for Limited-Scale Fault-Tolerant Quantum Computing
Takumi Kobori, Yasunari Suzuki, Yosuke Ueno
et al.
Current fault-tolerant quantum computer (FTQC) architectures utilize several encoding techniques to enable reliable logical operations with restricted qubit connectivity. However, such logical operations demand additional memory overhead to ensure fault tolerance. Since the main obstacle to practical quantum computing is the limited qubit count, our primary mission is to design floorplans that can reduce memory overhead without compromising computational capability. Despite extensive efforts to explore FTQC architectures, even the current state-of-the-art floorplan strategy devotes 50% of memory space to this overhead, rather than to data storage, to ensure unit-time random access to all logical qubits. In this paper, we propose an FTQC architecture based on a novel floorplan strategy, Load/Store Quantum Computer Architecture (LSQCA), which can achieve almost 100% memory density. The idea behind our architecture is to separate all memory regions into a small computational space called Computational Registers (CR) and a space-efficient memory space called Scan-Access Memory (SAM). We define an instruction set for these abstract structures and provide concrete designs named point-SAM and line-SAM architectures. With this design, we can improve the memory density by allowing variable-latency memory access while concealing the latency behind other bottlenecks. We also propose optimization techniques that exploit properties of quantum programs observed in our static analysis, such as access locality in memory reference timestamps. Our numerical results indicate that LSQCA successfully leverages this idea. In a resource-restricted situation, a specific benchmark shows that we can achieve about 90% memory density with a 5% increase in execution time compared to a conventional floorplan, which achieves at most 50% memory density for unit-time random access. Our design thus supports a broad range of quantum applications.
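The density figures quoted above reduce to simple arithmetic, sketched below with schematic patch counts; the layouts are stand-ins, not the actual point-SAM or line-SAM designs.

```python
# Back-of-the-envelope comparison of floorplan memory density using the
# figures quoted in the abstract. Patch counts are schematic placeholders.

def memory_density(data_patches, total_patches):
    return data_patches / total_patches

# Conventional floorplan: ~half the memory space is reserved for routing
# overhead to guarantee unit-time random access to every logical qubit.
conventional = memory_density(data_patches=50, total_patches=100)

# LSQCA-style SAM region: dense storage, paying access latency instead of
# area (the abstract reports ~90% density at a ~5% runtime increase).
sam_region = memory_density(data_patches=90, total_patches=100)

print(f"conventional: {conventional:.0%}, SAM: {sam_region:.0%}")
```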
Accelerating Time-to-Science by Streaming Detector Data Directly into Perlmutter Compute Nodes
Samuel S. Welborn, Bjoern Enders, Chris Harris
et al.
Recent advancements in detector technology have significantly increased the size and complexity of experimental data, and high-performance computing (HPC) provides a path towards more efficient and timely data processing. However, movement of large data sets from acquisition systems to HPC centers introduces bottlenecks owing to storage I/O at both ends. This manuscript introduces a streaming workflow designed for a high-data-rate electron detector that streams data directly to compute node memory at the National Energy Research Scientific Computing Center (NERSC), thereby avoiding storage I/O. The new workflow deploys ZeroMQ-based services for data production, aggregation, and distribution for on-the-fly processing, all coordinated through a distributed key-value store. The system is integrated with the detector's science gateway and utilizes the NERSC Superfacility API to initiate streaming jobs through a web-based frontend. Our approach achieves up to a 14-fold increase in data throughput and enhances predictability and reliability compared to an I/O-heavy file-based transfer workflow. Our work highlights the transformative potential of streaming workflows to expedite data analysis for time-sensitive experiments.
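A minimal sketch of the ZeroMQ push/pull pattern such a workflow builds on is shown below; endpoints and frame contents are placeholders, and the real system layers aggregation services, a distributed key-value store, and the Superfacility API on top.

```python
import zmq

# Minimal push/pull pair: a detector-side producer streams frames straight
# to a consumer's memory, bypassing file I/O. Both ends run in one process
# here purely for illustration.
ctx = zmq.Context()

producer = ctx.socket(zmq.PUSH)        # detector-side data producer
producer.bind("tcp://*:5555")

consumer = ctx.socket(zmq.PULL)        # compute-node-side receiver
consumer.connect("tcp://localhost:5555")

for i in range(3):
    producer.send(b"frame-%d" % i)     # raw detector frame bytes

for _ in range(3):
    frame = consumer.recv()            # lands in node memory, no disk hop
    print("received", frame)
```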
Scalable Computation of Inter-Core Bounds Through Exact Abstractions
Mohammed Aristide Foughali, Marius Mikučionis, Maryline Zhang
Real-time systems (RTSs) are at the heart of numerous safety-critical applications. An RTS typically consists of a set of real-time tasks (the software) that execute on a multicore shared-memory platform (the hardware) following a scheduling policy. In an RTS, computing inter-core bounds, i.e., bounds separating events produced by tasks on different cores, is crucial. While efficient techniques to over-approximate such bounds exist, little has been proposed to compute their exact values. Given an RTS with a set of cores C and a set of tasks T, under partitioned fixed-priority scheduling with limited preemption, a recent work by Foughali, Hladik and Zuepke (FHZ) models tasks with affinity c (i.e., allocated to core c in C) as an Uppaal timed automata (TA) network Nc. For each core c in C, Nc integrates blocking (due to data sharing) using tight analytical formulae. Through compositional model checking, FHZ achieved a substantial gain in scalability for bounds local to a core. However, computing inter-core bounds for some events of interest E, produced by a subset of tasks TE with different affinities CE, requires model checking the parallel composition of all TA networks Nc for each c in CE, which produces a large, often intractable, state space. In this paper, we present a new scalable approach based on exact abstractions to compute exact inter-core bounds in a schedulable RTS, under the assumption that tasks in TE have distinct affinities. We develop a novel algorithm, leveraging a new query that we implement in Uppaal, that computes for each TA network Nc in NE an abstraction A(Nc) preserving the exact intervals within which events occur on c, thereby drastically reducing the state space. The scalability of our approach is demonstrated on the WATERS 2017 industrial challenge, for which we efficiently compute various types of inter-core bounds where FHZ fails to scale.
Introduction to Jupyter Notebook
Quinn Dombrowski, Tassie Gniady, David Kloster
Jupyter Notebook provides an environment where you can easily work with your code in the Python language. This lesson describes how to install the Jupyter Notebook software and how to run and create Jupyter Notebook files.
History of scholarship and learning. The humanities, Computer software
Retracted: Optimization of Teaching Evaluation System for Football Professional Teachers Based on Multievaluation Model
Complexity
Electronic computers. Computer science
BioIntertidal Mapper software: A satellite approach for NDVI-based intertidal habitat mapping
Sara Haro, Jonathan Jimenez-Reina, Ricardo Bermejo
et al.
BioIntertidal Mapper is a user-friendly tool with a graphical user interface that automates the selection and processing of Sentinel-2 imagery to generate intertidal habitat maps. The software uses the Google Earth Engine API and the WorldTides API to select imagery acquired at low tide within a specified timeframe. These images are then processed to calculate a Normalized Difference Vegetation Index, which is masked based on a shapefile defining the area of interest. Maps are exported to a Google Drive folder. The program offers a simple solution for scientists and environmental managers to map intertidal habitats using free and publicly available satellite imagery.
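The index at the heart of the tool is straightforward to sketch; the snippet below computes NDVI from synthetic stand-ins for Sentinel-2's near-infrared (B8) and red (B4) bands, whereas the software itself fetches the real bands through the Google Earth Engine API and applies the shapefile mask.

```python
import numpy as np

# NDVI = (NIR - Red) / (NIR + Red), per pixel. Arrays are synthetic
# placeholders for Sentinel-2 B8 (NIR) and B4 (red) reflectance.
nir = np.array([[0.45, 0.50], [0.30, 0.05]])   # B8 reflectance
red = np.array([[0.10, 0.08], [0.12, 0.04]])   # B4 reflectance

ndvi = (nir - red) / (nir + red + 1e-10)       # small epsilon avoids 0/0
print(ndvi)  # vegetated intertidal pixels trend toward +1
```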
A role‐entity based human activity recognition using inter‐body features and temporal sequence memory
Rahul Shrivastava, Vivek Tiwari, Swati Jain
et al.
Recognizing entities and their corresponding roles is important in human activity recognition. In light of recent advancements, the primary emphasis is on recognizing abstract activities involving person‐person interaction. The contribution of this work is an architecture that uses knowledge of human body part coordinates to detect the role of each individual. The network preprocesses the coordinates to build intra‐body and inter‐body features. The extracted features capture the relationship between the interacting bodies and learn the temporal relation corresponding to each role using the human-memory-inspired hierarchical temporal memory. The model is tested on ambiguous samples of mutual actions in the experimental work. It proves robust in action and role recognition tasks and performs as expected.
Photography, Computer software
Stream Iterative Distributed Coded Computing for Learning Applications in Heterogeneous Systems
Homa Esfahanizadeh, Alejandro Cohen, Muriel Medard
To improve the utility of learning applications and render machine learning solutions feasible for complex applications, a substantial amount of heavy computation is needed. Thus, it is essential to delegate the computations among several workers, which raises the major challenge of coping with delays and failures caused by the system's heterogeneity and uncertainties. In particular, minimizing the end-to-end in-order job execution delay, from arrival to delivery, is of great importance for real-world delay-sensitive applications. In this paper, for the computation of each job iteration in a stochastic heterogeneous distributed system where workers vary in their computing and communication power, we present a novel joint scheduling-coding framework that optimally splits the coded computational load among the workers. This closes the gap between the workers' response times and is critical to maximizing resource utilization. To further reduce the in-order execution delay, we also incorporate redundant computations in each iteration of a distributed computational job. Our simulation results demonstrate that the delay obtained with the proposed solution is dramatically lower than that of a uniform split, which is oblivious to the system's heterogeneity, and is in fact very close to an ideal lower bound, all by introducing only a small percentage of redundant computations.
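The core idea of a heterogeneity-aware split can be sketched in a few lines; the worker rates and job size below are illustrative, and the paper's framework additionally optimizes coding redundancy and scheduling across iterations.

```python
# Split a job's load proportionally to each worker's service rate so all
# workers finish at roughly the same time, unlike a uniform split where the
# slowest worker dominates. Rates and job size are made-up examples.

def split_load(total_units, rates):
    total_rate = sum(rates)
    return [total_units * r / total_rate for r in rates]

rates = [10.0, 5.0, 2.5]              # units/sec per worker (heterogeneous)
shares = split_load(1000, rates)

finish_times = [s / r for s, r in zip(shares, rates)]
print(shares)         # ~[571.4, 285.7, 142.9]
print(finish_times)   # all equal -> no straggler gap

uniform_finish = [1000 / len(rates) / r for r in rates]
print(max(uniform_finish))  # slowest worker dominates the uniform split
```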
Actor-Critic Traction Control Based on Reinforcement Learning with Open-Loop Training
M. Funk Drechsler, T. A. Fiorentin, H. Göllinger
The use of actor-critic algorithms can improve the controllers currently implemented in automotive applications. This method combines reinforcement learning (RL) and neural networks to enable the control of nonlinear systems with real-time capabilities. Actor-critic algorithms have already been applied successfully in different controllers, including autonomous driving, antilock braking systems (ABS), and electronic stability control (ESC). However, current research implements virtual environments for the training process instead of using real plants to obtain the datasets. This limitation stems from the trial-and-error methods used during training, which pose considerable risks if the controller acts directly on the real plant. The present research therefore proposes and evaluates an open-loop training process, which permits data acquisition without control interaction and open-loop training of the neural networks. The performance of the trained controllers is evaluated through a design of experiments (DOE) to understand how it is affected by the generated dataset. The results show a successful application of the open-loop training architecture. The controller maintains the slip ratio at adequate levels during maneuvers on different surfaces, including surfaces not encountered during training. The actor neural network is also able to identify the different surfaces and adapt the acceleration profile to the characteristics of each ground.
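Since the controller's objective is keeping the slip ratio at adequate levels, a sketch of the usual traction-slip definition may help; the wheel radius and sample signals below are made up, and the paper's actor network maps sensor inputs to actuation rather than applying this formula directly.

```python
# Slip ratio as commonly defined for traction (acceleration) control: how
# much faster the tire surface moves than the vehicle. Parameters invented.

def slip_ratio(wheel_speed_rad_s, vehicle_speed_m_s, wheel_radius_m=0.3):
    wheel_linear = wheel_speed_rad_s * wheel_radius_m
    if wheel_linear <= 0:
        return 0.0
    return (wheel_linear - vehicle_speed_m_s) / wheel_linear

print(slip_ratio(40.0, 10.8))  # 12.0 m/s tire speed vs 10.8 m/s -> 0.1 slip
```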
Electronic computers. Computer science
What Does TERRA-REF's High Resolution, Multi Sensor Plant Sensing Public Domain Data Offer the Computer Vision Community?
David LeBauer, Max Burnette, Noah Fahlgren
et al.
A core objective of the TERRA-REF project was to generate an open-access reference dataset for the evaluation of sensing technologies to study plants under field conditions. The TERRA-REF program deployed a suite of high-resolution, cutting-edge sensors on a gantry system with the aim of scanning 1 hectare (10$^4$ m$^2$) at around 1 mm$^2$ spatial resolution multiple times per week. The system contains co-located sensors including a stereo-pair RGB camera, a thermal imager, a laser scanner to capture 3D structure, and two hyperspectral cameras covering wavelengths of 300-2500 nm. This sensor data is provided alongside over sixty types of traditional plant phenotype measurements that can be used to train new machine learning models. Associated weather and environmental measurements, information about agronomic management and experimental design, and the genomic sequences of hundreds of plant varieties have been collected and are available alongside the sensor and plant phenotype data. Over the course of four years and ten growing seasons, the TERRA-REF system generated over 1 PB of sensor data and almost 45 million files. The subset that has been released to the public domain accounts for two seasons and about half of the total data volume. This provides an unprecedented opportunity for investigations far beyond the core biological scope of the project. The focus of this paper is to provide the Computer Vision and Machine Learning communities an overview of the available data and some potential applications of this one-of-a-kind dataset.
Scientific Computing in the Cavendish Laboratory and the pioneering women Computors
Verity Allan, Caitriona Leedham
The use of computers and the role of women in radio astronomy and X-ray crystallography research at the Cavendish Laboratory between 1949 and 1975 have been investigated. We recorded examples of when computers were used, what they were used for and who used them from hundreds of papers published during these years. The use of the EDSAC, EDSAC 2 and TITAN computers was found to increase considerably over this time-scale and they were used for a diverse range of applications. The majority of references to computer operators and programmers referred to women, 57% for astronomy and 62% for crystallography, in contrast to a very small proportion, 4% and 13% respectively, of female authors of papers.
Job Scheduling in High Performance Computing
Yuping Fan
The ever-growing processing power of supercomputers in recent decades enables us to explore increasingly complex scientific problems. Scheduling these jobs effectively is crucial for individual job performance and system efficiency. Traditional job schedulers in high performance computing (HPC) are simple and concentrate on improving CPU utilization. The emergence of new hardware resources and novel hardware structures imposes severe challenges on traditional schedulers. Increasingly diverse workloads, including compute-intensive and data-intensive applications, require more efficient schedulers. Worse still, these two factors interact, making the scheduling problem even more challenging. In recent years, much research has discussed new scheduling methods to combat the problems brought by rapid system changes. In this study, we investigate the challenges faced by HPC scheduling and the state-of-the-art scheduling methods that address them. Furthermore, we propose an intelligent scheduling framework to alleviate the problems encountered in modern job scheduling.
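As a point of reference for the traditional schedulers discussed, below is a toy FCFS scheduler with EASY backfilling, the classic HPC baseline; job sizes and runtimes are illustrative placeholders.

```python
import heapq

# Toy FCFS-with-EASY-backfilling scheduler: later small jobs may start early
# only if they cannot delay the reservation made for the first queued job.

def easy_backfill(jobs, total_nodes):
    # jobs: list of (name, nodes, runtime), in arrival order
    free, now, running, schedule = total_nodes, 0, [], []  # running: (end, nodes)
    queue = list(jobs)
    while queue:
        while queue and queue[0][1] <= free:       # FCFS while the head fits
            name, nodes, rt = queue.pop(0)
            heapq.heappush(running, (now + rt, nodes)); free -= nodes
            schedule.append((name, now))
        if not queue:
            break
        # Reservation for the blocked head job: when do enough nodes free up?
        end, avail = 0, free
        for e, n in sorted(running):
            avail += n
            if avail >= queue[0][1]:
                end = e; break
        # Backfill any later job that fits now and finishes by the reservation.
        for j in list(queue[1:]):
            if j[1] <= free and now + j[2] <= end:
                queue.remove(j)
                heapq.heappush(running, (now + j[2], j[1])); free -= j[1]
                schedule.append((j[0], now))
        now, n_freed = heapq.heappop(running); free += n_freed  # advance time
    return schedule

jobs = [("A", 4, 10), ("B", 6, 5), ("C", 2, 4), ("D", 1, 3)]
print(easy_backfill(jobs, total_nodes=8))  # C and D backfill ahead of B
```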
Consumer Behavior in Crisis Situations. Research on the Effects of COVID-19 in Romania
Silvius STANCIU, Riana Iren RADU, Violeta SAPIRA
et al.
Critical incidents such as economic or biological crises, armed conflicts, and natural cataclysms can significantly affect human society. This article analyzes the behavior of Romanian consumers in the context of the COVID-19 outbreak. The research highlights the particular ways this health crisis unfolded at the level of the local economy. Although the infection rate among the Romanian population was lower than in Western states, the strict prevention measures imposed by the authorities produced a pattern of consumer behavior close to that of other states affected by the new coronavirus, SARS-CoV-2. Market studies by specialized companies show that the home-isolation requirements of the state of emergency significantly reduced Romanian consumers' social activities, orienting them mainly towards covering basic necessities. Consumer health (purchasing medicines or visiting a physician), procuring food, and banking errands were the main reasons for leaving home; by comparison, sports activities and visits to support family members had the lowest weight. A segment of consumers loyal to traditional commerce was forced to turn to modern, online-based shopping, and specialists estimate that this trading behavior will persist. Companies will have to focus on understanding consumers' needs and adapt their product offer and distribution systems to the new consumption constraints and facilitate sales. The crisis-driven orientation towards local products can represent an opportunity for Romanian companies, but government support measures for Romanian producers are needed.
Electronic computers. Computer science, Economic theory. Demography
Operational Safety Risk Assessment for the Water Channels of the South-to-North Water Diversion Project Based on TODIM-FMEA
Huimin Li, Li Ji, Feng Li
et al.
The South-to-North Water Diversion Project comprises long-distance water delivery channels that cross a complicated geological environment. To address the operational safety of the water conveyance channels in the middle route of the project, this study analyzes six failure modes: structural cracks, poor water delivery during ice periods, instability of canal slopes, material aging, abnormal leakage, and foundation defects. Based on FMEA, a multigranularity language evaluation method that can be converted into interval intuitionistic fuzzy numbers is used to evaluate the severity (S), occurrence (O), and detection difficulty (D) of the six failure modes. Interval intuitionistic fuzzy entropy is used to calculate the weights of the risk factors. Finally, a ranking model of the failure modes is built on the TODIM method. The final ranking shows that the risk of abnormal leakage is the largest and the risk of poor water delivery during ice periods is the smallest. The feasibility and validity of the results are verified by comparison with the rankings of the traditional RPN and TOPSIS methods. The TODIM-FMEA risk assessment model offers a new solution to the problem of risk assessment for water transfer projects.
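For comparison, the traditional RPN baseline the paper benchmarks against is easy to sketch; the S/O/D scores below are invented placeholders, not the study's interval intuitionistic fuzzy evaluations.

```python
# Traditional FMEA risk priority number: RPN = S x O x D per failure mode.
# Scores are invented placeholders for the six modes named in the abstract;
# the paper's model instead uses fuzzy evaluations with entropy-derived
# weights and TODIM ranking.

failure_modes = {
    "structural cracks":        (6, 4, 5),
    "poor ice-period delivery": (5, 3, 3),
    "canal slope instability":  (7, 3, 6),
    "material aging":           (5, 5, 4),
    "abnormal leakage":         (8, 6, 6),
    "foundation defects":       (7, 2, 7),
}

rpn = {m: s * o * d for m, (s, o, d) in failure_modes.items()}
for mode, score in sorted(rpn.items(), key=lambda kv: -kv[1]):
    print(f"{mode:26s} RPN = {score}")
```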
Electronic computers. Computer science